Chapter 5: Device Management


1. I/O System

1.1. Each I/O device consists of a device controller and the physical device itself.

1.2. Devices fall into two classes: - storage devices: provide persistent storage (e.g. disk, tape) - communication devices: transfer data between the computer and the outside world (e.g. a keyboard, a terminal display, or a serial port to a modem or a network).

1.3. Devices can be character-oriented (transfer one byte at a time, e.g. a terminal) or block-oriented (transfer fixed-size blocks, e.g. a disk)

2. I/O Strategies

2.1. • Direct I/O with polling – the CPU is responsible for checking the I/O status (a polling sketch follows this list)

2.2. • DMA I/O with polling – rarely found in practice

2.3. • Direct I/O with interrupts – data is transferred through the device (controller buffer); interrupts occur at the start and termination of the data transfer – involves the controller, and the data goes through the CPU

2.4. • DMA I/O with interrupts – involves the controller but the data does not go through the CPU – the fastest strategy.
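
A minimal C sketch of direct I/O with polling, assuming a hypothetical memory-mapped controller; the register addresses and the BUSY bit are illustrative, not taken from the chapter:

    /* Hypothetical memory-mapped controller registers; addresses and bit layout
       are illustrative. */
    #define DEV_STATUS  ((volatile unsigned char *)0xF0000000)
    #define DEV_DATA    ((volatile unsigned char *)0xF0000004)
    #define STATUS_BUSY 0x01            /* set while the device is working */

    /* Direct I/O with polling: the CPU itself checks the status register and
       copies every byte, so it is fully occupied for the whole transfer. */
    void polled_read(unsigned char *buf, int n) {
        for (int i = 0; i < n; i++) {
            while (*DEV_STATUS & STATUS_BUSY)
                ;                       /* busy-wait until the device is ready */
            buf[i] = *DEV_DATA;         /* CPU moves each byte by hand */
        }
    }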

3. Interrupt Driven I/O

3.1. • Instead of having the CPU continuously poll the status register of the I/O device(s), have the I/O device notify the CPU when it has completed an I/O operation.

3.1.1. – CPU initiates an I/O operation as described before

3.1.2. – while the I/O device performs the operation, the CPU is switched to another process (through the process scheduler).

3.1.3. – When the I/O device is done, it notifies the CPU by sending it an interrupt signal.

3.1.4. – The CPU switches control to an interrupt handler to service the interrupt.

3.1.5. – The interrupt handler completes the I/O operation and returns control to the interrupted process.

4. Interrupt Handler

4.1. a program that is part of the device manager.

4.1.1. Save the state of the interrupted process: save the contents of the CPU registers (all registers) and load the CPU registers with the handler's own values:

4.1.1.1. Context switch - Determine which I/O device caused the interrupt - Branch to the device driver associated with that device.
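
A minimal C sketch of these steps, assuming a hypothetical saved-context type and helpers; save_context, pending_device, and the interrupt_vector table are illustrative names, not from the chapter:

    /* Hypothetical types and helpers; real code is CPU- and OS-specific. */
    typedef struct cpu_context { unsigned long regs[16]; } cpu_context_t;

    extern void save_context(cpu_context_t *ctx);   /* save all CPU registers       */
    extern int  pending_device(void);               /* which device raised the IRQ  */
    extern void (*interrupt_vector[])(void);        /* one driver entry per device  */

    cpu_context_t interrupted;                      /* state of the interrupted process */

    void interrupt_handler(void) {
        save_context(&interrupted);          /* 1. save the interrupted process's state */
        int dev = pending_device();          /* 2. determine which device interrupted   */
        interrupt_vector[dev]();             /* 3. branch to that device's driver       */
    }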

5. Device Driver

5.1. a program that is part of the device manager.

5.1.1. When called, it does the following: – determine the cause of the interrupt – complete the I/O operation – clear the done flag of the device controller's status register – restore the state of the interrupted process (context switch) – return control to the interrupted process
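
Continuing the handler sketch above, a hypothetical completion routine for one driver; finish_transfer, restore_context, the register address, and the done bit are assumptions used only for illustration:

    #define DISK_STATUS ((volatile unsigned char *)0xF0000000)  /* assumed address  */
    #define STATUS_DONE 0x02                                    /* assumed done bit */

    extern void finish_transfer(void);                /* finish moving the data       */
    extern void restore_context(cpu_context_t *ctx);  /* reload the saved registers   */
    extern cpu_context_t interrupted;                 /* saved by interrupt_handler() */

    void disk_driver_on_interrupt(void) {
        finish_transfer();                   /* complete the I/O operation             */
        *DISK_STATUS &= ~STATUS_DONE;        /* clear the controller's done flag       */
        restore_context(&interrupted);       /* restore state (context switch back)    */
        /* control then returns to the interrupted process */
    }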

6. Interrupt Vector

6.1. • Replace the InterruptRequest flag with an interrupt vector, i.e. a collection of flags, one flag for each device. • Replace the OR gate with a vector of interrupt request lines, one for each device. • An interrupt vector table: a table of pointers to device drivers: entry i of the table stores the address of device driver i. • The interrupt vector table is generally stored at a fixed location in memory (e.g. the first 100 locations).
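
A sketch of the interrupt vector table as an array of driver entry points; the driver names and table size are invented for illustration, and in a real system the table sits at a fixed, architecture-defined address:

    #define NUM_DEVICES 3

    /* Hypothetical driver entry points. */
    extern void disk_driver(void);
    extern void keyboard_driver(void);
    extern void serial_driver(void);

    /* Entry i stores the address of device driver i. */
    void (*interrupt_vector_table[NUM_DEVICES])(void) = {
        disk_driver,        /* device 0 */
        keyboard_driver,    /* device 1 */
        serial_driver,      /* device 2 */
    };

    /* Dispatch on an interrupt: index the table with the device number. */
    void dispatch(int device) {
        interrupt_vector_table[device]();
    }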

7. Race Condition

7.1. 1. Disable all other interrupts while an interrupt is being processed

7.2. 2. Enable other interrupts while an interrupt is being processed – Must use a system stack to save the PC and the state of the interrupted process. – Must use a priority scheme. – Part of the interrupt handler routine should not be interrupted.

7.3. • Most CPUs have two types of interrupts: – maskable interrupts: can be disabled (masked) while another interrupt is being processed – non-maskable interrupts: cannot be disabled
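
A minimal sketch of option 1, assuming hypothetical disable_interrupts/enable_interrupts primitives; most CPUs expose equivalent instructions, but these names are illustrative:

    extern void disable_interrupts(void);   /* mask all maskable interrupts */
    extern void enable_interrupts(void);    /* allow interrupts again       */

    void interrupt_entry(void) {
        disable_interrupts();    /* no other interrupt may preempt the handler here */
        /* ... save state, determine the device, run its driver ... */
        enable_interrupts();     /* re-enable once the critical work is finished    */
    }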

8. Driver-Kernel Interface

8.1. • Drivers are distinct from the main part of the kernel

8.2. • Kernel makes calls on specific functions, drivers implement them

8.3. • Drivers use kernel functions for: – Device allocation – Resource (e.g., memory) allocation – Scheduling

9. Why Disk Scheduling Is Necessary

9.1. • First-come-first-served (FCFS) scheduling has major drawbacks – Seeking to randomly distributed locations results in long waiting times – Under heavy loads, system can become overwhelmed

9.2. • Requests must be serviced in logical order to minimize delays – Service requests with least mechanical motion

9.3. • The first disk scheduling algorithms concentrated on minimizing seek times, the component of disk access that had the highest latency

9.4. • Modern systems perform rotational optimization as well

10. First-Come-First-Served (FCFS) Disk Scheduling

10.1. Requests serviced in order of arrival

10.2. – Advantages • Fair • Prevents indefinite postponement • Low overhead

10.3. – Disadvantages • Potential for extremely low throughput – FCFS typically results in a random seek pattern because it does not reorder requests to reduce service delays

11. Shortest-Seek-Time-First

11.1. Service request closest to read-write head

11.2. – Advantages • Higher throughput and lower response times than FCFS • Reasonable solution for batch processing systems

11.3. – Disadvantages • Does not ensure fairness • Possibility of indefinite postponement • High variance of response times • Response time generally unacceptable for interactive systems
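
A small sketch comparing total head movement under FCFS and SSTF for one invented request queue; the cylinder numbers and starting head position are illustrative, not from the chapter:

    #include <stdio.h>
    #include <stdlib.h>

    #define NREQ 6

    /* Total cylinders traveled when requests are serviced in arrival order. */
    int fcfs_movement(int head, const int req[], int n) {
        int total = 0;
        for (int i = 0; i < n; i++) {
            total += abs(req[i] - head);
            head = req[i];
        }
        return total;
    }

    /* Total cylinders traveled when the closest pending request is always next. */
    int sstf_movement(int head, const int req[], int n) {
        int pending[NREQ], total = 0;
        for (int i = 0; i < n; i++) pending[i] = req[i];
        for (int served = 0; served < n; served++) {
            int best = 0;
            for (int i = 1; i < n; i++)      /* pick the request closest to the head */
                if (pending[i] >= 0 && (pending[best] < 0 ||
                    abs(pending[i] - head) < abs(pending[best] - head)))
                    best = i;
            total += abs(pending[best] - head);
            head = pending[best];
            pending[best] = -1;              /* mark as serviced */
        }
        return total;
    }

    int main(void) {
        int queue[NREQ] = {98, 183, 37, 122, 14, 124};   /* illustrative cylinders */
        int head = 53;                                   /* illustrative start     */
        printf("FCFS: %d cylinders\n", fcfs_movement(head, queue, NREQ));
        printf("SSTF: %d cylinders\n", sstf_movement(head, queue, NREQ));
        return 0;
    }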

12. Optimizing Seek Time

12.1. • Multiprogramming on I/O-bound programs => set of processes waiting for disk

12.2. • Seek time dominates access time => minimize seek time across the set

13. Performing a Read Operation

13.1. Read(device_i, "%d", x)

13.2. • The CPU is executing some process when the read request is issued (a sketch of the read path follows)
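
A hypothetical blocking-read path inside the device manager; every helper name here (device_status, start_io, enqueue_request, block_current_process) is illustrative, not from the chapter:

    enum dev_state { DEV_IDLE, DEV_BUSY };

    extern enum dev_state device_status(int dev);             /* consult the status table */
    extern void start_io(int dev, void *buf, int n);          /* program the controller   */
    extern void enqueue_request(int dev, void *buf, int n);   /* device busy: queue it    */
    extern void block_current_process(int dev);               /* scheduler runs another   */

    int read_from_device(int dev, void *buf, int n) {
        if (device_status(dev) == DEV_IDLE)
            start_io(dev, buf, n);           /* start the transfer immediately    */
        else
            enqueue_request(dev, buf, n);    /* wait behind earlier requests      */
        block_current_process(dev);          /* caller sleeps until the interrupt */
        return n;                            /* resumes here after I/O completes  */
    }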

14. I/O Devices

14.1. Device controller: hardware that connects the device to the computer’s address and data bus

14.2. – continuously monitors and controls the operation of the device. – provides an interface to the computer: a set of components that the CPU can manipulate to perform I/O operations. A standard interface is needed so that devices can be interchanged.

14.3. Device Manager: consists of a collection of device drivers

14.4. – hide the operation details of each device controller from the application programmer. – provide a “common” interface to all sorts of devices.
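
A hypothetical memory-mapped controller interface, i.e. the set of components the CPU can manipulate to perform I/O; the register names, sizes, and buffer length are illustrative assumptions:

    #include <stdint.h>

    struct device_controller {
        volatile uint32_t command;    /* CPU writes the operation to start     */
        volatile uint32_t status;     /* busy / done / error flags             */
        volatile uint8_t  data[512];  /* on-board buffer used for the transfer */
    };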

15. Device Manager Abstraction

15.1. • Devices have controllers that provide an interface to the software

15.2. A device driver is a collection of functions that abstract the operation of a particular device.

15.3. • The device manager infrastructure: the part of the OS that houses the collection of device drivers

16. System Call Interface

16.1. • Functions available to application programs

16.2. • Abstract all devices (and files) to a few interfaces

16.3. • Make interfaces as similar as possible – Block vs character – Sequential vs direct access

16.4. • Device driver implements functions (one entry point per API function)
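
A sketch of "one entry point per API function": each driver fills in a table of function pointers behind the common interface. The struct layout and the disk_* names are assumptions, not a particular OS's real interface:

    /* One driver-supplied function per system-call-level operation. */
    struct device_ops {
        int (*open)(int dev);
        int (*close)(int dev);
        int (*read)(int dev, void *buf, int n);
        int (*write)(int dev, const void *buf, int n);
    };

    /* A hypothetical disk driver implementing the common interface. */
    extern int disk_open(int dev);
    extern int disk_close(int dev);
    extern int disk_read(int dev, void *buf, int n);
    extern int disk_write(int dev, const void *buf, int n);

    struct device_ops disk_ops = { disk_open, disk_close, disk_read, disk_write };

The system call layer keeps one such table per device (or device class), so block and character devices look the same to application code.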

17. Device Status Table

17.1. a table containing information about each I/O device.

17.2. Contains an entry for each I/O device.

17.3. • Each entry contains such information as: – device type, address, state (idle, busy, not functioning, etc.) – if device is busy: • the type of operation being performed by that device • the process ID of the process that issued the operation • for some devices: a queue of waiting requests for that device
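
A sketch of one entry in the device status table; all field names and the request-queue representation are illustrative:

    #include <stdint.h>

    enum device_state { DEVICE_IDLE, DEVICE_BUSY, DEVICE_NOT_FUNCTIONING };

    struct io_request {                  /* a queued operation for a busy device */
        int                op;           /* e.g. read or write                   */
        int                pid;          /* process that issued it               */
        struct io_request *next;
    };

    struct device_status_entry {
        int                type;         /* device type                      */
        uint32_t           address;      /* controller address               */
        enum device_state  state;        /* idle, busy, not functioning, ... */
        int                current_op;   /* valid while state == DEVICE_BUSY */
        int                current_pid;  /* process that issued that operation */
        struct io_request *queue;        /* waiting requests for this device */
    };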

18. Buffering

18.1. • A technique that people use in everyday life to help them do two or more things at the same time.

18.1.1. • Input buffering: the technique of having the input device copy information into main memory before the process requests it.

18.1.2. • Output buffering: the technique of saving information in memory and then writing it to the device while the process continues execution.

18.2. • Employed by device manager to keep I/O devices busy during times when a process is not requiring I/O operations.

19. I/O Buffering

19.1. A buffer is a memory area used to store data while it is being transferred between two devices or between a device and an application.

19.2. Used to reduce the effects of speed mismatch between I/O device and CPU or among I/O devices.

19.3. • Generally used to allow more overlap between producer and consumer

19.4. ==> more overlap between the CPU and I/O devices.
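
A minimal sketch of double buffering for input, assuming hypothetical helpers that fill one buffer asynchronously while the process consumes the other; all names are illustrative:

    #define BUFSZ 512

    static char bufs[2][BUFSZ];
    extern void start_device_read(char *buf, int n);   /* asynchronous fill     */
    extern void wait_for_completion(void);             /* block until fill done */
    extern void process_data(const char *buf, int n);  /* consumer              */

    void double_buffered_input(int blocks) {
        int cur = 0;
        start_device_read(bufs[cur], BUFSZ);            /* prime the first buffer */
        for (int i = 0; i < blocks; i++) {
            wait_for_completion();                      /* buffer cur is now full */
            int next = 1 - cur;
            if (i + 1 < blocks)
                start_device_read(bufs[next], BUFSZ);   /* device fills next ...  */
            process_data(bufs[cur], BUFSZ);             /* ... while CPU uses cur */
            cur = next;
        }
    }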

20. Disk Optimizations

20.1. • Transfer Time: Time to copy bits from disk surface to memory

20.2. • Disk latency time: Rotational delay waiting for proper sector to rotate under R/W head

20.3. • Disk seek time: Delay while R/W head moves to the destination track/cylinder

20.4. • Access Time = seek + latency + transfer (a worked example follows the list below)

20.4.1. Characteristics of Moving-Head Disk Storage

20.4.1.1. • Physical layout of disk drives

20.4.1.2. • Rotate on spindle

20.4.1.3. • Made up of tracks, which in turn contain sectors

20.4.1.4. • Vertical sets of tracks form cylinders

20.4.1.5. – Rotational latency • Time for the desired sector to rotate under the read-write head

20.4.1.6. – Seek time • Time for read-write head to move to new cylinder

20.4.1.7. – Transmission time • Time for all desired data to spin past the read-write head
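
A small worked calculation of Access Time = seek + latency + transfer; the 9 ms seek, 7200 RPM spindle speed, 100 MB/s transfer rate, and 4 KB block size are invented, typical-looking assumptions:

    #include <stdio.h>

    int main(void) {
        double seek_ms     = 9.0;                    /* assumed average seek time       */
        double rpm         = 7200.0;                 /* assumed rotational speed        */
        double latency_ms  = 0.5 * (60000.0 / rpm);  /* half a rotation on average      */
        double rate_mb_s   = 100.0;                  /* assumed sustained transfer rate */
        double block_mb    = 4.0 / 1024.0;           /* one 4 KB block, in MB           */
        double transfer_ms = block_mb / rate_mb_s * 1000.0;

        /* Access time = seek + latency + transfer; seek and latency dominate here. */
        printf("access time = %.2f ms\n", seek_ms + latency_ms + transfer_ms);
        return 0;
    }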

21. Disk Scheduling Strategies

21.1. – Throughput • Number of requests serviced per unit of time

21.2. – Mean response time • Average time spent waiting for request to be serviced

21.3. – Variance of response times • Measure of the predictability of response times

21.4. • Overall goals : – Maximize throughput – Minimize response time and variance of response times

22. C-SCAN

22.1. • Similar to SCAN, but at the end of an inward sweep, the disk arm jumps (without servicing requests) to the outermost cylinder

22.2. Reduces the variance of response times at the expense of throughput and mean response time

23. LOOK & C-LOOK

23.1. LOOK: improves SCAN scheduling by reversing direction at the last pending request in the current direction instead of sweeping all the way to the edge of the disk

23.2. C-LOOK: improves C-SCAN scheduling in the same way; the return jump goes only to the lowest-numbered pending request rather than to the outermost cylinder
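
A minimal sketch of the C-LOOK service order for a pending request queue, under the assumption that the head sweeps toward higher cylinder numbers and then jumps back to the lowest pending request; the queue contents and starting position are invented:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    /* Print the C-LOOK service order: sort the requests, serve everything at or
       above the head position in increasing order, then jump back and serve the
       remaining (lower) requests, still moving in the same direction. */
    void clook_order(int head, int req[], int n) {
        qsort(req, n, sizeof(int), cmp_int);
        for (int i = 0; i < n; i++)
            if (req[i] >= head) printf("%d ", req[i]);
        for (int i = 0; i < n; i++)
            if (req[i] < head) printf("%d ", req[i]);
        printf("\n");
    }

    int main(void) {
        int queue[] = {98, 183, 37, 122, 14, 124};   /* illustrative cylinders */
        clook_order(53, queue, 6);                   /* head starts at 53      */
        return 0;
    }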