Operating Systems - Chapter 8: Virtual Memory


Chapter 8: Virtual Memory

Real Memory
- Main memory, the actual RAM, where a process executes.

Virtual Memory
- A storage allocation scheme in which secondary memory can be addressed as though it were part of main memory.
- Its size is limited by the amount of secondary memory available.
- A virtual address is the address assigned to a location in virtual memory.

Keys to Virtual Memory
1) Memory references are logical addresses that are dynamically translated into physical addresses at run time. A process may be swapped in and out of main memory, occupying different regions at different times during execution.
2) A process may be broken up into pieces (pages or segments) that do not need to be located contiguously in main memory.

Breakthrough in Memory Management
- If both of these characteristics are present, it is not necessary that all of the pages or all of the segments of a process be in main memory during execution.
- If the next instruction and the next data location are in memory, execution can proceed.

Execution of a Process
- The OS brings only a few pieces of the program into main memory.
- Resident set: the portion of the process that is in main memory.
- Execution proceeds smoothly as long as all memory references are to locations in the resident set.
- An interrupt (memory access fault) is generated when an address is needed that is not in main memory.
- The OS then places the process in a blocked state and issues a disk I/O read request to bring the piece of the process containing the referenced logical address into main memory.
- Another process is dispatched to run while the disk I/O takes place.
- An interrupt is issued when the disk I/O completes, causing the OS to place the affected process in the Ready state.

Implications of This New Strategy
- More efficient processor utilization: more processes can be maintained in main memory because only some of the pieces of each process are loaded, so it is more likely that some process will be in the Ready state at any particular time.
- A process may be larger than main memory: this restriction on programming is lifted, because the OS automatically loads pieces of a process into main memory as required.

Thrashing
- A condition in which the system spends most of its time swapping pieces rather than executing instructions.
- It happens when the OS frequently throws out a piece just before it is used.
- To avoid this, the OS tries to guess, based on recent history, which pieces are least likely to be used in the near future.

Principle of Locality
- Program and data references within a process tend to cluster, so only a few pieces of a process are needed over a short period of time.
- It is therefore possible to make intelligent guesses about which pieces will be needed in the future, which avoids thrashing.
- This suggests that virtual memory can work efficiently.

Performance of Processes in a VM Environment
- During the lifetime of a process, references are confined to a subset of its pages.

Support Needed for Virtual Memory
- Hardware must support paging and/or segmentation.
- The OS must be able to manage the movement of pages and/or segments between secondary memory and main memory.

Paging
- Each process has its own page table.
- Each page table entry contains the frame number of the corresponding page in main memory.
- Two extra bits are needed: P(resent) indicates whether the page is in main memory; M(odified) indicates whether the contents of the page have been altered since it was last loaded.
- It is not necessary to write an unmodified page out when the time comes to replace the page in the frame it currently occupies.

Address Translation
- The page number is used to index the page table and look up the frame number.
- The frame number is combined with the offset to produce the real address.

Page Tables
- Page tables can be very large. Consider a system that supports 2^31 = 2 Gbytes of virtual memory with 2^9 = 512-byte pages.
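As a minimal sketch (not from the slides), the paging translation just described can be written in C. The 512-byte page size matches the example above; the table size and all names are illustrative assumptions.

```c
#include <assert.h>
#include <stdint.h>

/* Hypothetical parameters: 512-byte pages (9-bit offset), as in the
   text's example. NPAGES and the struct layout are made up. */
#define PAGE_SHIFT 9
#define PAGE_SIZE  (1u << PAGE_SHIFT)
#define NPAGES     8

struct pte {
    uint32_t frame;          /* frame number in main memory */
    unsigned present  : 1;   /* P bit: page is in main memory */
    unsigned modified : 1;   /* M bit: page altered since it was loaded */
};

/* Translate a virtual address; returns -1 to signal a page fault
   (the OS would then block the process and issue a disk read). */
long translate(const struct pte table[], uint32_t vaddr)
{
    uint32_t page   = vaddr >> PAGE_SHIFT;        /* index into table */
    uint32_t offset = vaddr & (PAGE_SIZE - 1);
    if (page >= NPAGES || !table[page].present)
        return -1;                                /* memory access fault */
    return ((long)table[page].frame << PAGE_SHIFT) | offset;
}
```

For example, if page 1 is resident in frame 3, virtual address (1 << 9) | 5 translates to real address (3 << 9) | 5 = 1541, while a reference to a non-present page returns the fault indicator.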
- The number of entries in a page table can then be as many as 2^22.
- Most virtual memory schemes therefore store page tables in virtual memory, so page tables are themselves subject to paging.

Two-Level Hierarchical Page Table
- The virtual address space consists of 2^20 pages of 4 Kbytes (2^12 bytes) each.
- The user page table consists of 2^20 4-byte page table entries, occupying 2^10 pages.
- The root page table consists of 2^10 4-byte page table entries.

Address Translation for a Hierarchical Page Table
- The root page table always remains in main memory.

Translation Lookaside Buffer
- Each virtual memory reference can cause two physical memory accesses: one to fetch the page table entry and one to fetch the data.
- To overcome this problem, a high-speed cache for page table entries, called a Translation Lookaside Buffer (TLB), is used.
- It contains the page table entries that have been most recently used.

TLB Operation
- A lookup results in either a TLB hit (the entry is found in the cache) or a TLB miss (the page table must be consulted).
- By the principle of locality, most virtual memory references will be to locations in recently used pages; therefore most references will involve page table entries in the cache.

Page Size
- Page size is an important hardware design decision.
- A smaller page size means less internal fragmentation, but more pages are required per process, which means larger page tables; some portion of the page tables must then reside in virtual memory, so a single memory reference can cause a double page fault (the first to bring in the needed portion of the page table, the second to bring in the process page itself).
- A large page size is better in one respect: secondary memory is designed to efficiently transfer large blocks of data.

Further Complications of Page Size
- With a small page size, a large number of pages is available in main memory for a process; as execution proceeds, the pages in memory all contain portions of the process near recent references, giving a low page fault rate.
- An increased page size causes pages to contain locations further from any recent reference; the effect of the principle of locality is weakened, and the page fault rate rises.
- The design issue of page size is also related to the size of physical main memory and to program size: at the same time that main memory is getting larger, the address space used by applications is also growing. Some architectures therefore support multiple page sizes.

Segmentation
- Segmentation allows the programmer to view memory as consisting of multiple address spaces, or segments.
- Segments may be of unequal size.
- Each process has its own segment table.

Segment Table
- A bit is needed to determine whether the segment is already in main memory. If present, the segment base gives the starting address of the corresponding segment in main memory, and the length field gives the length of the segment.
- Another bit indicates whether the segment has been modified since it was loaded into main memory.

Address Translation in Segmentation
- The segment number is used to index into the segment table and look up the segment base.
- The segment base is added to the offset to produce the real address.

Combined Paging and Segmentation
- A user's address space is broken up into a number of segments, and each segment is broken into fixed-size pages.
- From the programmer's point of view, a logical address still consists of a segment number and a segment offset.
- From the system's point of view, the segment offset is viewed as a page number and page offset.
- The segment base now refers to a page table.

Address Translation in the Combined Scheme
- The segment number is used to index into the segment table to find the page table for that segment.
- The page number is used to index the page table and look up the frame number.
- The frame number is combined with the offset to produce the real address.

Key Design Elements
- Fetch policy, placement policy, replacement policy, cleaning policy.
- Key aim: minimize page faults. There is no definitive best policy.

Fetch Policy
- Determines when a page should be brought into memory.
- Demand paging brings a page into main memory only when a reference is made to a location on that page; many page faults occur when a process is first started.
- Prepaging brings in pages other than the one demanded by a page fault.
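The three translation steps of the combined scheme can be sketched in C. The field widths (6-bit page number, 10-bit offset) and every name here are illustrative assumptions, not taken from any real architecture.

```c
#include <assert.h>
#include <stdint.h>

/* Illustrative address layout: segment no. | page no. | offset,
   with a 1-Kbyte page (10-bit offset) and a 6-bit page number. */
#define OFF_BITS  10
#define PAGE_BITS 6
#define PAGE_MASK ((1u << PAGE_BITS) - 1)
#define OFF_MASK  ((1u << OFF_BITS) - 1)

/* Each segment table entry points at that segment's page table. */
struct seg_entry {
    const uint32_t *page_table;   /* page no. -> frame no. */
    unsigned present : 1;
};

/* seg no. -> segment table -> page table; page no. -> frame no.;
   frame no. + offset -> real address (the three steps in the text). */
long translate_sp(const struct seg_entry segs[], uint32_t vaddr)
{
    uint32_t seg  = vaddr >> (PAGE_BITS + OFF_BITS);
    uint32_t page = (vaddr >> OFF_BITS) & PAGE_MASK;
    uint32_t off  = vaddr & OFF_MASK;
    if (!segs[seg].present)
        return -1;                       /* segment not in main memory */
    uint32_t frame = segs[seg].page_table[page];
    return ((long)frame << OFF_BITS) | off;
}
```

With segment 0's page table mapping page 1 to frame 9, address (1 << 10) | 3 yields real address (9 << 10) | 3 = 9219.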
- It is more efficient to bring in pages that reside contiguously on the disk.

Placement Policy
- Determines where in real memory a process piece is to reside.
- Important in a pure segmentation system: best-fit, first-fit, and so on are possible alternatives.
- Irrelevant for pure paging or for paging combined with segmentation.

Replacement Policy
- When all of the frames in main memory are occupied and it is necessary to bring in a new page, the replacement policy determines which page currently in memory is to be replaced.
- The page removed should be the page least likely to be referenced in the near future. How is that determined? Most policies predict future behavior on the basis of past behavior.
- Tradeoff: the more sophisticated the replacement policy, the greater the overhead to implement it.

Replacement Policy: Frame Locking
- If a frame is locked, it may not be replaced. Examples: the kernel of the operating system, key control structures, and I/O buffers.
- A lock bit is associated with each frame; it may be kept in a frame table as well as being included in the current page table.

Optimal Policy
- Selects for replacement the page for which the time to the next reference is the longest.
- Results in the fewest page faults, but it is impossible to have perfect knowledge of future events.
- Serves as a standard against which to judge real-world algorithms.
- In the example, the optimal policy produces three page faults after the frame allocation has been filled.

Least Recently Used (LRU)
- Replaces the page that has not been referenced for the longest time.
- By the principle of locality, this should be the page least likely to be referenced in the near future.
- Difficult to implement: one approach is to tag each page with the time of its last reference.
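The time-tagging approach to LRU can be sketched as follows; the frame count, the linear victim search, and all names are illustrative simplifications, not a production design.

```c
#include <assert.h>

#define NFRAMES 3   /* made-up frame allocation for the sketch */

struct frame {
    int page;                 /* resident page number, -1 if empty */
    unsigned long last_used;  /* time tag of the last reference */
};

/* Reference `page` at time `now`; returns the page evicted to make
   room, or -1 if the reference was a hit or a free frame was used. */
int lru_reference(struct frame f[], int page, unsigned long now)
{
    int victim = 0, free_slot = -1;
    for (int i = 0; i < NFRAMES; i++) {
        if (f[i].page == page) {          /* hit: refresh the tag */
            f[i].last_used = now;
            return -1;
        }
        if (f[i].page < 0)
            free_slot = i;
        if (f[i].last_used < f[victim].last_used)
            victim = i;                   /* oldest tag so far */
    }
    if (free_slot >= 0) {                 /* empty frame: no eviction */
        f[free_slot].page = page;
        f[free_slot].last_used = now;
        return -1;
    }
    int evicted = f[victim].page;         /* least recently used page */
    f[victim].page = page;
    f[victim].last_used = now;
    return evicted;
}
```

Note the overhead the slides warn about: every reference, hit or miss, must update a time tag, and every eviction scans all frames for the oldest one.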
- This requires a great deal of overhead.

LRU Example
- The LRU policy does nearly as well as the optimal policy; in this example, there are four page faults.

First-In, First-Out (FIFO)
- Treats the page frames allocated to a process as a circular buffer; pages are removed in round-robin style.
- The simplest replacement policy to implement.
- The page that has been in memory the longest is replaced, but that page may be needed again very soon if it has not truly fallen out of use.

FIFO Example
- The FIFO policy results in six page faults.
- Note that LRU recognizes that pages 2 and 5 are referenced more frequently than other pages, whereas FIFO does not.

Clock Policy
- Uses an additional bit called a "use bit".
- When a page is first loaded into memory or referenced, its use bit is set to 1.
- When it is time to replace a page, the OS scans the frames, flipping each use bit from 1 to 0; the first frame encountered with the use bit already set to 0 is replaced.
- Similar to FIFO, except that any frame with a use bit of 1 is passed over by the algorithm.

Clock Policy Example
- In the example, the incoming page is 727.
- Note that the clock policy is adept at protecting frames 2 and 5 from replacement.
- An asterisk indicates that the corresponding use bit is equal to 1; the arrow indicates the current position of the pointer.

Replacement Policy Comparison
- Two conflicting constraints: (1) we would like a small page fault rate in order to run efficiently, and (2) we would like to keep a small frame allocation.

Page Buffering
- Replacing a modified page is more costly than replacing an unmodified page, because the former must be written to secondary memory.
- Solution: page buffering combined with FIFO.
- The replaced page remains in memory (only its entry in the page table is removed) and is added to the tail of one of two lists: the free page list if the page has not been modified, or the modified page list if it has.
- The free page list is a list of page frames available for reading in pages.
- When a page is to be read in, the page frame at the head of the list is used, destroying the page that was there.
- The important aspect is that the page to be replaced remains in memory: if the process references that page, it is returned to the resident set of that process at little cost.
- In effect, the free and modified page lists act as a cache of pages.

Cleaning Policy
- A cleaning policy is concerned with determining when a modified page should be written out to secondary memory.
- Demand cleaning: a page is written out only when it has been selected for replacement. This minimizes page writes, but a process that suffers a page fault may have to wait for two page transfers before it can be unblocked, which decreases processor utilization.
- Precleaning: pages are written out in batches, before they are needed. This reduces the number of I/O operations, but pages written out may be modified again before they are replaced, wasting I/O on unnecessary cleaning operations.
- A better approach incorporates page buffering: clean only pages that are replaceable, decoupling the cleaning and replacement operations. Pages on the modified list are periodically written out in batches and moved to the free list, which significantly reduces the number of I/O operations and therefore the amount of disk access time. Pages on the free list are either reclaimed if referenced again or lost when their frames are assigned to other pages.
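The two-list bookkeeping behind page buffering can be sketched as follows; the list capacity, the array representation, and all names are made-up simplifications of the mechanism the slides describe.

```c
#include <assert.h>
#include <string.h>

#define CAP 8   /* illustrative maximum list length */

/* A simple ordered list of buffered page numbers (head at index 0). */
struct plist {
    int pages[CAP];
    int n;
};

/* A replaced page goes to the tail of the free list if unmodified
   (M bit clear), or to the tail of the modified list otherwise. */
void buffer_replaced(struct plist *free_l, struct plist *mod_l,
                     int page, int modified)
{
    struct plist *l = modified ? mod_l : free_l;
    l->pages[l->n++] = page;
}

/* If the process references a buffered page before its frame is
   reused, it is reclaimed at little cost; returns 1 if found. */
int reclaim(struct plist *l, int page)
{
    for (int i = 0; i < l->n; i++)
        if (l->pages[i] == page) {
            memmove(&l->pages[i], &l->pages[i + 1],
                    (size_t)(l->n - i - 1) * sizeof l->pages[0]);
            l->n--;
            return 1;
        }
    return 0;
}
```

Reading in a new page would consume the frame at the head of the free list; batch-cleaning would walk the modified list, write its pages out, and move them to the free list, matching the improved cleaning policy above.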
