Chapter 5: Device Management

1. Performing a Read Operation

1.1. Read(device_i, "%d", x)

1.2. • When the call is issued, the CPU is executing some process; the device manager must start the device and return the result to that process
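A minimal user-space sketch of the conceptual Read(device_i, "%d", x) call, assuming a POSIX-style system call interface and a hypothetical device file name /dev/device_i (the Read() shown in the notes is abstract, not a real API):

    /* User-level view of a read operation, under the assumptions above. */
    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    int main(void) {
        char buf[32];
        int x;

        int fd = open("/dev/device_i", O_RDONLY);   /* device manager resolves this to a driver  */
        if (fd < 0) { perror("open"); return 1; }

        ssize_t n = read(fd, buf, sizeof buf - 1);  /* trap into the OS; driver performs the I/O */
        if (n < 0) { perror("read"); return 1; }
        buf[n] = '\0';

        sscanf(buf, "%d", &x);                      /* the "%d" formatting happens in user space */
        printf("read x = %d\n", x);

        close(fd);
        return 0;
    }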

2. I/O System

2.1. Each I/O device consists of a device controller and the physical device itself.

2.2. Devices: - storage devices: for permanent storage (e.g. disk, tape) - communication devices: transfer data between the computer and its external environment or another machine (e.g. a keyboard, a terminal display, or a serial port to a modem or a network).

2.3. Devices can be character-oriented or block-oriented

3. I/O Devices

3.1. Device controller: hardware that connects the device to the computer’s address and data bus

3.2. – continuously monitors and controls the operation of the device. – provides an interface to the computer: a set of components that the CPU can manipulate to perform I/O operations. Controllers need a standard interface so that devices can be interchanged.

3.3. Device Manager: consists of a collection of device drivers

3.4. – hide the operational details of each device controller from the application programmer. – provide a “common” interface to all sorts of devices.

4. Device Manager Abstraction

4.1. • Devices have controllers that provide an interface to the software

4.2. A device driver is a collection of functions that abstract the operation of a particular device.

4.3. • The device manager infrastructure: the part of the OS that houses the collection of device drivers

5. System Call Interface

5.1. • Functions available to application programs

5.2. • Abstract all devices (and files) to a few interfaces

5.3. • Make interfaces as similar as possible – Block vs character – Sequential vs direct access

5.4. • Device driver implements functions (one entry point per API function)
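A sketch of “one entry point per API function”, using hypothetical names: the kernel reaches every device through the same small table of function pointers (Linux’s struct file_operations plays a similar role):

    /* Hypothetical driver entry-point table: one function pointer per API
     * function, so the kernel can call any driver through one interface. */
    #include <stddef.h>
    #include <sys/types.h>

    struct device_driver {
        int     (*open)(int minor);
        int     (*close)(int minor);
        ssize_t (*read)(int minor, char *buf, size_t len);
        ssize_t (*write)(int minor, const char *buf, size_t len);
    };

    /* Kernel side: every device is reached through the same few interfaces. */
    #define MAX_DEVICES 16
    static struct device_driver *driver_table[MAX_DEVICES];

    ssize_t sys_read(int dev, int minor, char *buf, size_t len) {
        struct device_driver *drv = driver_table[dev];
        if (drv == NULL || drv->read == NULL)
            return -1;                  /* no driver registered for this device */
        return drv->read(minor, buf, len);
    }

Because the kernel only ever sees the table, a device can be swapped for another one simply by installing a different driver in its slot.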

6. I/O Strategies

6.1. • Direct I/O with polling – the CPU is responsible for checking I/O status

6.2. • DMA I/O with polling – rarely found in practice

6.3. • Direct I/O with interrupts – data is staged in the device (controller) buffer; interrupts occur on the start and termination of the data transfer – data passes through the controller unit (CU) and the CPU

6.4. • DMA I/O with interrupts – data passes through the controller unit (CU) but not through the CPU – fastest strategy.
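A sketch of the first strategy, direct I/O with polling, using hypothetical memory-mapped controller registers (the addresses, flag values, and register layout are illustrative, not taken from any real device):

    /* Direct I/O with polling: the CPU starts the operation, then busy-waits
     * on the controller's status register until the transfer is done. */
    #include <stdint.h>

    #define DEV_STATUS  ((volatile uint8_t *)0x40000000)  /* busy/done flags  */
    #define DEV_COMMAND ((volatile uint8_t *)0x40000001)  /* command register */
    #define DEV_DATA    ((volatile uint8_t *)0x40000002)  /* one byte of data */

    #define STATUS_BUSY 0x01
    #define CMD_READ    0x01

    uint8_t polled_read_byte(void) {
        *DEV_COMMAND = CMD_READ;               /* CPU starts the operation          */
        while (*DEV_STATUS & STATUS_BUSY)      /* CPU busy-waits, checking status   */
            ;                                  /* no useful work is done here       */
        return *DEV_DATA;                      /* copy the byte from the controller */
    }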

7. Interrupt Driven I/O

7.1. • Instead of having the CPU continuously poll the status register of I/O device(s), have the I/O device notify the CPU when it has completed an I/O operation.

7.1.1. – CPU initiates an I/O operation as described before

7.1.2. – as the I/O device performs the operation, the CPU is switched to another process (through the process scheduler).

7.1.3. – When the I/O device is done, it notifies the CPU by sending it an interrupt signal.

7.1.4. – The CPU switches control to an interrupt handler to service the interrupt.

7.1.5. – The interrupt handler completes I/O operation and returns control to interrupted process.

8. Interrupt Handler

8.1. a program that is part of the device manager.

8.1.1. Save the state of the interrupted process: save the contents of CPU registers (all registers) and load CPU registers with its own values:

8.1.1.1. Context switch – Determine which I/O device caused the interrupt – Branch to the device driver associated with that device.

9. Device Driver

9.1. a program that is part of the device manager.

9.1.1. When called, it does the following: – determine the cause of the interrupt – complete the I/O operation – clear the done flag of the device controller status register – restore the state of the interrupted process (context switch) – return control to the interrupted process
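The interrupt-handler and driver-completion steps above, sketched with hypothetical helper names (save_context, device_of_interrupt, driver_complete); real kernels differ in detail:

    /* Interrupt path sketch, under the assumptions stated above. */
    #include <stdint.h>

    struct context;                              /* saved CPU registers (opaque)   */
    extern struct context *save_context(void);   /* hypothetical: save all regs    */
    extern void restore_context(struct context *ctx);
    extern int  device_of_interrupt(void);       /* which device raised the IRQ?   */

    typedef void (*completion_fn)(void);
    extern completion_fn driver_complete[];      /* one completion routine/device  */

    void interrupt_handler(void) {
        struct context *ctx = save_context();    /* context switch: save state     */
        int dev = device_of_interrupt();         /* determine the source device    */
        driver_complete[dev]();                  /* branch to that device driver   */
        restore_context(ctx);                    /* resume the interrupted process */
    }

    /* Driver completion routine for one hypothetical device. */
    extern volatile uint8_t dev3_status;         /* controller status register     */
    #define STATUS_DONE 0x02

    void dev3_complete(void) {
        /* ... copy the result from the controller buffer to the process ... */
        dev3_status &= (uint8_t)~STATUS_DONE;    /* clear the done flag            */
    }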

10. Interrupt Vector

10.1. • Replace the InterruptRequest flag with an interrupt vector, i.e. a collection of flags, one flag for each device. • Replace the OR gate with a vector of interrupt request lines, one for each device. • An Interrupt Vector Table: a table of pointers to device drivers: entry i of the table stores the address of device driver i. • The interrupt vector table is generally stored at a fixed location in memory (e.g. first 100 locations).
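The interrupt vector idea sketched as an array of handler pointers plus a per-device pending bitmap (the names, table size, and dispatch loop are illustrative):

    /* Entry i holds the address of device driver i's handler. */
    #define NUM_DEVICES 8

    typedef void (*isr_t)(void);

    isr_t interrupt_vector[NUM_DEVICES];          /* kept at a fixed, known location */
    volatile unsigned int interrupt_request;      /* one pending flag per device     */

    void dispatch_pending_interrupts(void) {
        for (int dev = 0; dev < NUM_DEVICES; dev++) {
            if ((interrupt_request & (1u << dev)) && interrupt_vector[dev] != 0) {
                interrupt_request &= ~(1u << dev);   /* clear this device's flag      */
                interrupt_vector[dev]();             /* branch straight to driver dev */
            }
        }
    }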

11. Race Condition

11.1. 1. Disable all other interrupts while an interrupt is being processed

11.2. 2. Enable other interrupts while an interrupt is being processed – Must use system stack to save PC and state of the interrupted process. – Must use a priority scheme. – Part of the interrupt handler routine should not be interrupted.

11.3. • Most CPUs have two types of interrupts: – maskable interrupts: can be temporarily disabled (masked) – non-maskable interrupts: cannot be disabled
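A sketch of option 1 above (disable other interrupts while one is being processed), assuming hypothetical disable_interrupts()/enable_interrupts() primitives; on x86 these correspond to the cli/sti instructions:

    /* Keep the critical part of the handler from being interrupted. */
    extern void disable_interrupts(void);
    extern void enable_interrupts(void);

    void handle_interrupt(int device) {
        disable_interrupts();    /* block other maskable interrupts               */
        /* ... save state, identify the device, run the driver's completion ... */
        enable_interrupts();     /* re-enable before resuming the interrupted process */
    }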

12. Device Status Table

12.1. a table containing information about each I/O device.

12.2. Contains an entry for each I/O device.

12.3. • Each entry contains such information as: – device type, address, state (idle, busy, not functioning, etc.) – if device is busy: • the type of operation being performed by that device • the process ID of the process that issued the operation • for some devices: a queue of waiting requests for that device
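One possible layout for a device status table entry, following the fields listed above (the names and the fixed table size are illustrative):

    /* Device status table: one entry per I/O device. */
    enum dev_state { DEV_IDLE, DEV_BUSY, DEV_NOT_FUNCTIONING };

    struct io_request {
        int  pid;                    /* process that issued the request  */
        int  operation;              /* e.g. read or write               */
        struct io_request *next;     /* queue of waiting requests        */
    };

    struct device_status_entry {
        int   device_type;
        void *device_address;        /* controller address               */
        enum dev_state state;        /* idle, busy, not functioning, ... */
        int   current_operation;     /* valid only while DEV_BUSY        */
        int   current_pid;           /* process that issued it           */
        struct io_request *wait_queue;
    };

    #define NUM_DEVICES 8
    struct device_status_entry device_status_table[NUM_DEVICES];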

13. Driver-Kernel Interface

13.1. • Drivers are distinct from main part of kernel

13.2. • Kernel makes calls on specific functions, drivers implement them

13.3. • Drivers use kernel functions for: – Device allocation – Resource (e.g., memory) allocation – Scheduling
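A sketch of both directions of the driver-kernel interface, with hypothetical function names: the kernel calls the driver’s entry points, and the driver calls back into kernel services for allocation and scheduling:

    #include <stddef.h>
    #include <sys/types.h>

    /* Kernel services a driver may call (names hypothetical). */
    extern void *kernel_alloc(size_t n);      /* resource (memory) allocation */
    extern void  kernel_free(void *p);
    extern void  block_current_process(void); /* let the scheduler run others */
    extern void  wakeup_process(int pid);

    /* Driver entry point called by the kernel. */
    ssize_t mydev_read(int minor, char *buf, size_t len) {
        char *staging = kernel_alloc(len);    /* driver uses the kernel allocator */
        if (staging == NULL)
            return -1;
        /* ... start the device, then ... */
        block_current_process();              /* wait until the interrupt side
                                                 calls wakeup_process()           */
        /* ... copy from staging to buf ... */
        kernel_free(staging);
        return (ssize_t)len;
    }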

14. Buffering

14.1. • A technique that people use in everyday life to help them do two or more things at the same time.

14.1.1. • Input buffering: technique of having the input device copy information into main memory before the process requests it.

14.1.2. • Output buffering: technique of saving info in memory and then writing it to the device while the process continues execution.

14.2. • Employed by the device manager to keep I/O devices busy during times when a process is not requesting I/O operations.

15. I/O Buffering

15.1. A buffer is a memory area used to store data while it is being transferred between two devices or between a device and an application.

15.2. Used to reduce the effects of speed mismatch between I/O device and CPU or among I/O devices.

15.3. • Generally used to allow more overlap between producer and consumer

15.4. ==> more overlap between the CPU and I/O devices.
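A minimal circular-buffer sketch of I/O buffering: the device side (producer) fills the buffer while the process side (consumer) drains it, smoothing the speed mismatch; synchronization is omitted to keep the idea visible:

    #include <stdbool.h>
    #include <stdint.h>

    #define BUF_SIZE 64

    struct io_buffer {
        uint8_t data[BUF_SIZE];
        int head;                     /* next slot to fill (producer); 0 when zero-initialized  */
        int tail;                     /* next slot to drain (consumer); 0 when zero-initialized */
    };

    bool buffer_put(struct io_buffer *b, uint8_t byte) {   /* called by the driver  */
        int next = (b->head + 1) % BUF_SIZE;
        if (next == b->tail) return false;                 /* buffer full  */
        b->data[b->head] = byte;
        b->head = next;
        return true;
    }

    bool buffer_get(struct io_buffer *b, uint8_t *byte) {  /* called by the process */
        if (b->tail == b->head) return false;              /* buffer empty */
        *byte = b->data[b->tail];
        b->tail = (b->tail + 1) % BUF_SIZE;
        return true;
    }

This is exactly how input buffering lets the device run ahead of the process: the interrupt handler keeps calling buffer_put while the process has not yet asked for the data.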

16. Disk Optimizations

16.1. • Transfer Time: Time to copy bits from disk surface to memory

16.2. • Disk latency time: Rotational delay waiting for proper sector to rotate under R/W head

16.3. • Disk seek time: Delay while R/W head moves to the destination track/cylinder

16.4. • Access Time = seek + latency + transfer (worked example at the end of this section)

16.4.1. Characteristics of Moving-Head Disk Storage

16.4.1.1. • Physical layout of disk drives

16.4.1.2. • Rotate on spindle

16.4.1.3. • Made up of tracks, which in turn contain sectors

16.4.1.4. • Vertical sets of tracks form cylinders

16.4.1.5. – Rotational latency • Time for the desired sector to rotate under the read-write head

16.4.1.6. – Seek time • Time for read-write head to move to new cylinder

16.4.1.7. – Transmission time • Time for all desired data to spin by read-write head
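Worked example for the formula in 16.4, with illustrative numbers: an 8 ms average seek, 4.17 ms average rotational latency (half of one 8.33 ms revolution at 7200 RPM), and 0.1 ms to transfer one sector give an access time of about 8 + 4.17 + 0.1 ≈ 12.3 ms. Seek and rotation dominate, which is why disk scheduling concentrates on them.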

17. Why Disk Scheduling Is Necessary

17.1. • First-come-first-served (FCFS) scheduling has major drawbacks – Seeking to randomly distributed locations results in long waiting times – Under heavy loads, system can become overwhelmed

17.2. • Requests must be serviced in logical order to minimize delays – Service requests with least mechanical motion

17.3. • The first disk scheduling algorithms concentrated on minimizing seek times, the component of disk access that had the highest latency

17.4. • Modern systems perform rotational optimization as well

18. Disk Scheduling Strategies

18.1. – Throughput • Number of requests serviced per unit of time

18.2. – Mean response time • Average time spent waiting for request to be serviced

18.3. – Variance of response times • Measure of the predictability of response times

18.4. • Overall goals : – Maximize throughput – Minimize response time and variance of response times

19. First-Come-First-Served (FCFS) Disk Scheduling

19.1. Requests serviced in order of arrival

19.2. – Advantages • Fair • Prevents indefinite postponement • Low overhead

19.3. – Disadvantages • Potential for extremely low throughput – FCFS typically results in a random seek pattern because it does not reorder requests to reduce service delays

20. Shortest-Seek-Time-First

20.1. Service request closest to read-write head

20.2. – Advantages • Higher throughput and lower response times than FCFS • Reasonable solution for batch processing systems

20.3. – Disadvantages • Does not ensure fairness • Possibility of indefinite postponement • High variance of response times • Response time generally unacceptable for interactive systems
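A sketch of SSTF on an illustrative request queue: repeatedly pick the pending cylinder closest to the current head position and total the head movement:

    #include <stdio.h>
    #include <stdlib.h>

    /* Index of the pending request closest to `head`; -1 if none left. */
    static int closest(const int *req, const int *done, int n, int head) {
        int best = -1, best_dist = 0;
        for (int i = 0; i < n; i++) {
            if (done[i]) continue;
            int dist = abs(req[i] - head);
            if (best == -1 || dist < best_dist) { best = i; best_dist = dist; }
        }
        return best;
    }

    int main(void) {
        int req[] = { 98, 183, 37, 122, 14, 124, 65, 67 };   /* pending cylinders */
        int n = sizeof req / sizeof req[0];
        int done[8] = { 0 };
        int head = 53, total_movement = 0;

        for (int served = 0; served < n; served++) {
            int i = closest(req, done, n, head);
            total_movement += abs(req[i] - head);
            head = req[i];
            done[i] = 1;
            printf("service cylinder %d\n", head);
        }
        printf("total head movement: %d cylinders\n", total_movement);
        return 0;
    }

Note how the result depends only on proximity to the head, which is what makes indefinite postponement of far-away requests possible.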

21. C-SCAN

21.1. • Similar to SCAN, but at the end of an inward sweep, the disk arm jumps (without servicing requests) to the outermost cylinder

21.2. Reduces variance of response times at the expense of throughput and mean response time

22. LOOK & C-LOOK

22.1. LOOK: Improvement on SCAN scheduling – the arm reverses direction at the last pending request in each direction instead of travelling to the edge of the disk

22.2. C-LOOK: Improves C-SCAN scheduling – the return jump goes only as far as the outermost pending request rather than the outermost cylinder
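A sketch of C-LOOK on the same illustrative request queue, assuming the sweep moves toward higher cylinder numbers: service requests in increasing order from the head, then jump back to the lowest pending request without servicing anything on the way:

    #include <stdio.h>
    #include <stdlib.h>

    static int cmp_int(const void *a, const void *b) {
        return *(const int *)a - *(const int *)b;
    }

    int main(void) {
        int req[] = { 98, 183, 37, 122, 14, 124, 65, 67 };   /* pending cylinders */
        int n = sizeof req / sizeof req[0];
        int head = 53;

        qsort(req, n, sizeof req[0], cmp_int);

        /* Sweep: everything at or above the head, in increasing order. */
        for (int i = 0; i < n; i++)
            if (req[i] >= head) printf("service cylinder %d\n", req[i]);

        /* Return jump to the lowest pending request, then continue the sweep. */
        for (int i = 0; i < n; i++)
            if (req[i] < head) printf("service cylinder %d\n", req[i]);

        return 0;
    }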

23. Optimizing Seek Time

23.1. • Multiprogramming on I/O-bound programs => set of processes waiting for disk

23.2. • Seek time dominates access time => minimize seek time across the set