This assignment consists of designing and implementing a program that analyzes the use of a paged memory system with 32 Kbytes of physical memory. Your program will read the data file available for download and identify the number of page faults and the simulated overhead time, as specified below.
The size of a page will be selected at execution time so the user can test two different page sizes. The data file contains 1 million addresses. Your program will accept several records of input data in the following format:
Code  Address (in hexadecimal)
where the code is identified by a single digit with the following meaning:
0 – address for data read
1 – address for data write
2 – address for instruction fetch
An example of the data file contents could be:
The address size is 32 bits; an address such as AB means 000000AB. The program must be written in C or C++ and should be the smallest possible code that solves the problem. DO NOT CODE ANY SOLUTION THAT CAN BE APPLIED TO OTHER PROBLEMS. (Hint: you can read hexadecimal numbers in C++ into unsigned integers with `file >> std::hex >> number;`.)
During the processing of this file, pages will be stored in physical memory and later replaced following a First In, First Out (FIFO) algorithm. Pages that are accessed should be marked as referenced, and those accessed with a write code should be marked as modified. During page replacement, the overhead time is increased by 100 cycles for the disk load operation; if the evicted page has been modified, another 500 cycles are added to the overhead to account for writing it back to disk. The experiment should be repeated with a Least Recently Used (LRU) algorithm (hint: a linked list or a queue may help you track the LRU information).
Report your times for page sizes 4096 and 2048, running under FIFO and LRU page replacement.
You must turn in the source code of your program and a short report. This report must contain the comparison of the two possible page sizes.
refer to the attached document
(Beverly) Preserving data integrity is particularly important in a database where multiple users access data simultaneously. Interleaving one user's query with another's optimizes the time in which data is retrieved: while one user's query is still in progress, another user can begin theirs. Concurrency controls are set in place to coordinate the simultaneous execution of transactions within a multiprocessing database system and to ensure the integrity of the data. The scheduler is the DBMS component that establishes the order in which concurrent transaction operations are executed (Coronel & Morris, 2020, p. 494). It does this by interleaving the execution of the database operations in a specific sequence that ensures serializability. Serializability, the scheduler's main job, guarantees that interleaved queries yield the same results as if they had been executed in serial order, one after another. This ensures the integrity of the data.
During any transaction, data is in motion and passes through unavoidable states of inconsistency, which is even more of a threat in multiprocessing database systems. If the system used a serial type of schedule, each transaction would be executed one at a time, with no interference and no threat of inconsistency. If the system uses a non-serial type of schedule, the data would be accessed simultaneously by multiple users, which could introduce inconsistencies as the data moves. The scheduler therefore uses a serializable schedule to ensure that the interleaved execution of two or more transactions maintains database consistency.
Coronel, C., & Morris, S. (2020). Database Systems. Boston: Cengage Learning, Inc.
Gaurav, S. (2022, June 20). Serializability in DBMS. Scaler Topics. https://www.scaler.com/topics/dbms/serializability-in-dbms/