Operating Systems - Chapter 11: I/O Management and Disk Scheduling
Roadmap:
- Operating System Design Issues
- I/O Buffering
- Disk Scheduling
- Disk Cache

Operating System Design Issues

Goal: Generality
- For simplicity and freedom from error, it is better to handle all I/O devices in a uniform manner.
- Due to the diversity of device characteristics, true generality is difficult to achieve in practice.
- Solution: use a hierarchical, modular design of the I/O functions.
- Hide the details of device I/O in lower-level routines; user processes and the upper levels of the OS see devices in terms of general functions such as read, write, open, close, lock, and unlock.

A Model of I/O Organization
- Logical I/O: deals with the device as a logical resource and is not concerned with the details of actually controlling it. Allows user processes to deal with the device through a device identifier and simple commands such as open, close, read, and write.
- Device I/O: converts requested operations into sequences of I/O instructions, and uses buffering techniques to improve utilization.
- Scheduling and control: performs the actual queueing, scheduling, and control operations; handles interrupts; collects and reports I/O status; interacts with the I/O module and hence the device hardware.

Goal: Efficiency
- Most I/O devices are extremely slow compared with main memory, so I/O operations often form a bottleneck in a computing system.
- Multiprogramming allows some processes to wait on I/O while another process executes.
- Swapping brings in ready processes, but swapping is itself an I/O operation.
- A major effort in I/O design has therefore gone into schemes for improving I/O efficiency: I/O buffering, disk scheduling, and the disk cache.

I/O Buffering

No Buffering
- Without a buffer, the OS accesses the device directly as and when it needs to; a data area within the address space of the user process is used for the I/O.
- The process must wait for the I/O to complete before proceeding, either by busy waiting (as with programmed I/O) or by being suspended on an interrupt (as with interrupt-driven I/O or DMA).
- Problems: the program hangs while waiting for the relatively slow I/O to complete, and the scheme interferes with the OS's swapping decisions.
- It is impossible to swap the process out completely, because its data area must be locked in main memory for the duration of the I/O. Otherwise data may be lost, or a single-process deadlock may occur: the suspended process is blocked waiting on the I/O event, while the I/O operation is blocked waiting for the process to be swapped in.

I/O Buffering
- It may be more efficient to perform input transfers in advance of requests being made, and to perform output transfers some time after the request is made.

Block-Oriented Buffering
- For block-oriented I/O devices such as disks and USB drives.
- Information is stored in fixed-size blocks, transfers are made one block at a time, and data can be referenced by block number.

Stream-Oriented Buffering
- For stream-oriented I/O devices such as terminals, printers, communication ports, mice and other pointing devices, and most other devices that are not secondary storage.
- Information is transferred as a stream of bytes.

Single Buffer
- The OS assigns a buffer in the system portion of main memory for an I/O request.
- Block-oriented case:
- Input transfers are made to the system buffer, and the block is moved to user space when needed.
- The next block is immediately requested, in the expectation that it will eventually be needed ("read ahead" or "anticipated input"). This is a reasonable assumption, since data is usually accessed sequentially.
- Provides a speedup: the user process can be processing one block of data while the next block is being read in, and the OS is able to swap the process out because the I/O operation takes place in system memory.
- Complicates the OS logic: the OS must keep track of the assignment of system buffers to user processes, and the swapping logic is affected. If the I/O operation and the swapping involve the same disk, does it make sense to swap the process out just as the I/O operation finishes?
- Stream-oriented case: line-at-a-time or byte-at-a-time operation.
- Terminals often deal with one line at a time, with a carriage return signaling the end of the line; line printers work similarly.
- Byte-at-a-time operation suits devices where a single keystroke may be significant, as well as sensors and controllers.
- The interaction between the OS and the user process follows the producer/consumer model.

Double Buffer
- Use two system buffers instead of one: a process can transfer data to or from one buffer while the OS empties or fills the other.

Circular Buffer
- More than two buffers are used; each individual buffer is one unit in a circular buffer.
- Used when the I/O operation must keep up with the process; follows the bounded-buffer producer/consumer model.

Buffer Limitations
- Buffering smooths out peaks in I/O demand, but with enough demand all buffers eventually fill and their advantage is lost.
- Still, in a multiprogramming environment, buffering increases the efficiency of the OS and the performance of individual processes.

Disk Scheduling

Disk Performance Parameters
- Disks are currently at least four orders of magnitude slower than main memory, so the performance of the disk storage subsystem is of vital concern. (A general timing diagram of a disk I/O transfer accompanies this slide; figure not reproduced here.)
- When the disk drive is operating, the disk rotates at constant speed. To read or write, the head must be positioned at the desired track and at the beginning of the desired sector on that track; track selection involves moving the head in a movable-head system.
- Access time is the sum of:
- Seek time: the time it takes to position the head at the desired track.
- Rotational delay (rotational latency): the time it takes for the beginning of the sector to reach the head.
- Transfer time is the time taken to transfer the data as the sector moves under the head.
- Total average access time:
  Ta = Ts + 1/(2r) + b/(rN)
  where Ts = average seek time, b = number of bytes to be transferred, N = number of bytes on a track, and r = rotation speed in revolutions per second.
- Because of the seek time, the order in which sectors are read from the disk has a tremendous effect on I/O performance.

Disk Scheduling Policies
- To compare the various schemes, consider a disk head initially located at track 100. Assume a disk with 200 tracks and a disk request queue containing random requests. The requested tracks, in the order received by the disk scheduler, are 55, 58, 39, 18, 90, 160, 150, 38, 184.

First-In, First-Out (FIFO)
- Processes requests sequentially; fair to all processes.
- May perform well if most requests are to clustered file sectors, but approaches random scheduling in performance (and in disk-arm movement) when many processes are competing.

Priority
- Control of the scheduling is outside the control of the disk-management software; the goal is not to optimize disk use but to meet other objectives.
- Often, short batch jobs and interactive jobs are given higher priority than longer jobs, providing high throughput and good interactive response time; longer jobs may have to wait excessively long.

Last-In, First-Out (LIFO)
- Good for transaction-processing systems: the device is given to the most recent user, so there should be little arm movement when moving through a sequential file.
- Starvation is possible, since a job may never regain the head of the line.

Shortest Service Time First (SSTF)
- Select the disk I/O request that requires the least movement of the disk arm from its current position, i.e., always choose the minimum seek time.

SCAN
- The arm moves in one direction only, satisfying all outstanding requests until it reaches the last track in that direction; then the direction is reversed.
- LOOK policy: reverse direction as soon as there are no more requests pending in the current direction.
- SCAN is biased against the area most recently traversed, and so does not exploit locality.
- SCAN favors jobs whose requests are for tracks nearest the innermost and outermost tracks, and favors the latest-arriving jobs.

C-SCAN (Circular SCAN)
- Restricts scanning to one direction only: when the last track has been visited in one direction, the arm is returned to the opposite end of the disk and the scan begins again.
- Reduces the maximum delay experienced by new requests.

N-step-SCAN
- With SSTF, SCAN, and C-SCAN, the arm may not move if some process monopolizes the device with repeated requests to one track ("arm stickiness").
- N-step-SCAN segments the disk request queue into subqueues of length N; the subqueues are processed one at a time using SCAN, and new requests are added to a different queue while a queue is being processed.

FSCAN
- Uses two subqueues. When a scan begins, all of the requests are in one of the queues, with the other empty; during the scan, all new requests are put into the other queue.
- Service of new requests is thus deferred until all of the old requests have been processed.

Performance Compared
- (Table comparing the disk scheduling algorithms on the example request queue; figure not reproduced here.)

Disk Cache

Disk Cache
- A buffer in main memory for disk sectors, containing a copy of some of the sectors on the disk.
- When an I/O request is made for a particular sector, a check is made to determine whether the sector is in the disk cache. If so, the request is satisfied via the cache; if not, the requested sector is read into the disk cache from the disk.
- Locality of reference: when a block of data is fetched into the cache to satisfy a single I/O request, it is likely that there will be future references to that same block.
- One design issue is the replacement strategy: when a new sector is brought into the disk cache, one of the existing blocks must be replaced.

Least Recently Used (LRU)
- The block that has been in the cache longest with no reference to it is replaced.
- A stack of pointers references the cache: the most recently referenced block is on top of the stack; when a block is referenced or brought into the cache, it is placed on top; the block on the bottom of the stack is the one replaced.
- LRU disk cache performance: the miss ratio is, principally, a function of the size of the disk cache.

Least Frequently Used (LFU)
- The block that has experienced the fewest references is replaced: a counter is associated with each block and incremented each time the block is accessed; when replacement is required, the block with the smallest count is selected.
- Problem: because of locality, certain blocks are referenced repeatedly over a short interval but relatively infrequently overall; such blocks accumulate high counts and stay in the cache even though they are unlikely to be referenced again.

Frequency-Based Replacement
- Blocks are logically organized in a stack, similar to LRU, with the top part of the stack designated the "new section".
- On a cache hit, the referenced block is moved to the top of the stack; its count is incremented only if it was not already in the new section, so that repeated short-interval references do not inflate the count.
- On a miss, the block with the smallest count that is not in the new section is chosen for replacement.

Refined Frequency-Based Replacement
- The stack is further divided so that only blocks in the "old section" are eligible for replacement; a middle section gives relatively frequently referenced blocks a chance to build up their reference counts before becoming eligible for replacement.
- Simulation studies indicate that this refined policy outperforms both LRU and LFU.
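The access-time formula Ta = Ts + 1/(2r) + b/(rN) can be checked with a small worked example. The drive parameters below (4 ms average seek, 7,500 rpm, 500 sectors of 512 bytes per track) are illustrative assumptions, not values taken from the slides:

```python
def avg_access_time(Ts, r, b, N):
    """Average time to read b bytes: seek + half a rotation + transfer.

    Ts in seconds, r in revolutions per second, b and N in bytes.
    """
    return Ts + 1 / (2 * r) + b / (r * N)

# Illustrative (assumed) drive parameters:
Ts = 0.004        # 4 ms average seek time
r = 7500 / 60     # 7,500 rpm -> 125 revolutions per second
N = 500 * 512     # 500 sectors of 512 bytes on each track
b = 512           # read a single sector

t = avg_access_time(Ts, r, b, N)
print(f"{t * 1000:.3f} ms")   # 8.016 ms: seek and rotation dominate a one-sector read
```

Note that for this single-sector read the transfer term contributes only 0.016 ms; the seek and the half-rotation account for almost all of the access time, which is why request ordering matters so much.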
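The example request queue above (head initially at track 100; requests 55, 58, 39, 18, 90, 160, 150, 38, 184) can be replayed to compare the policies. A minimal sketch, assuming all requests are pending from the start and that SCAN begins by moving toward higher-numbered tracks (stopping at the last pending request, i.e. the LOOK variant):

```python
def fifo(start, reqs):
    # Serve requests in arrival order.
    return list(reqs)

def sstf(start, reqs):
    # Always serve the pending request closest to the current head position.
    pending, order, pos = list(reqs), [], start
    while pending:
        nxt = min(pending, key=lambda t: abs(t - pos))
        pending.remove(nxt)
        order.append(nxt)
        pos = nxt
    return order

def scan(start, reqs):
    # Sweep upward through the pending requests, then back down (LOOK variant).
    up = sorted(t for t in reqs if t >= start)
    down = sorted((t for t in reqs if t < start), reverse=True)
    return up + down

def arm_movement(start, order):
    # Total number of tracks the arm traverses while serving `order`.
    total, pos = 0, start
    for t in order:
        total += abs(t - pos)
        pos = t
    return total

reqs = [55, 58, 39, 18, 90, 160, 150, 38, 184]
for policy in (fifo, sstf, scan):
    order = policy(100, reqs)
    print(f"{policy.__name__:4s} {order} total movement = {arm_movement(100, order)}")
```

On this queue, FIFO moves the arm 498 tracks, SSTF 248, and the upward-first SCAN 250: the seek-aware policies cut arm movement roughly in half, while SCAN avoids SSTF's starvation risk.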
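The LRU stack of pointers described above can be sketched with Python's OrderedDict, which preserves insertion order. This is a toy model, not a real disk cache; the `read_sector` callback and the capacity of three blocks are illustrative assumptions:

```python
from collections import OrderedDict

class LRUDiskCache:
    """Toy LRU disk cache mapping sector number -> cached data.

    The most recently used sector sits at the end of the OrderedDict
    (the 'top of the stack'); the front entry is the replacement victim.
    """

    def __init__(self, capacity, read_sector):
        self.capacity = capacity
        self.read_sector = read_sector   # fallback: fetch the sector from "disk"
        self.cache = OrderedDict()
        self.hits = self.misses = 0

    def get(self, sector):
        if sector in self.cache:
            self.hits += 1
            self.cache.move_to_end(sector)      # re-reference: move to stack top
        else:
            self.misses += 1
            if len(self.cache) >= self.capacity:
                self.cache.popitem(last=False)  # evict the least recently used block
            self.cache[sector] = self.read_sector(sector)
        return self.cache[sector]

# Illustrative use with a fake "disk":
disk = LRUDiskCache(capacity=3, read_sector=lambda s: f"data-{s}")
for s in [1, 2, 3, 1, 4, 1, 2]:
    disk.get(s)
print(disk.hits, disk.misses)   # 2 hits, 5 misses
```

The reference string shows LFU's motivating problem in miniature: sector 1 is re-referenced in a short burst and stays cached under LRU, while a pure count-based policy would keep accumulating credit for it long after the burst ends.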
File attached to this document: chap11_3465.ppt