1. Semaphore
1.1. Variable that holds an integer value upon which 3 operations are defined
1.1.1. may be initialized to a non-negative integer value
1.1.2. semWait operation decrements the value; if the value becomes negative, the calling process blocks
1.1.3. semSignal operation increments the value; if the value is not positive, a process blocked by semWait is unblocked
1.2. Strong semaphore
1.2.1. the process that has been blocked the longest is released from the queue first (FIFO)
1.3. Weak semaphore
1.3.1. the order in which processes are removed from the queue is not specified
1.4. Implementation
1.4.1. imperative that the semWait & semSignal operations be implemented as atomic primitives
1.4.2. in hardware / firmware
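The three operations above can be sketched in Python. This is a minimal illustration of the semantics, not production code (real programs would use `threading.Semaphore`); the class and method names follow the outline, and the `Condition` object stands in for the atomicity that hardware/firmware would provide:

```python
import threading

class Semaphore:
    """Counting semaphore sketch using the semWait/semSignal
    names from the outline above."""

    def __init__(self, value=0):
        # may be initialized to a non-negative integer value
        if value < 0:
            raise ValueError("initial value must be non-negative")
        self._value = value
        self._cond = threading.Condition()  # makes each operation atomic

    def semWait(self):
        with self._cond:
            while self._value == 0:  # block instead of going negative
                self._cond.wait()
            self._value -= 1         # decrement the value

    def semSignal(self):
        with self._cond:
            self._value += 1         # increment the value
            self._cond.notify()      # release one blocked process, if any
```

Whether this behaves as a strong or a weak semaphore depends on the wakeup order of the underlying `Condition`; the definition above only guarantees the counting behaviour.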
2. Monitors
2.1. Programming language construct that provides functionality equivalent to that of semaphores & is easier to control
2.2. Characteristics
2.2.1. local data variables are accessible only by the monitor's procedures & not by any external procedure
2.2.2. a process enters the monitor by invoking one of its procedures
2.2.3. only 1 process may be executing in the monitor at a time
2.3. Implemented in a number of programming languages
2.3.1. Concurrent Pascal
2.3.2. Pascal-Plus
2.3.3. Java
2.3.4. Modula-2
2.3.5. Modula-3
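The three characteristics can be sketched as a Python class (the `Account` example and its method names are illustrative, not from the outline): local data is kept private, access goes through the monitor's procedures, and one lock ensures a single process executes inside the monitor at a time.

```python
import threading

class Account:
    """Monitor sketch: _balance is local data reachable only through
    the monitor's procedures; the single lock ensures that only one
    thread executes inside the monitor at a time."""

    def __init__(self):
        self._lock = threading.Lock()  # the monitor's mutual-exclusion lock
        self._balance = 0              # local data variable

    def deposit(self, amount):
        with self._lock:               # "enter" the monitor
            self._balance += amount

    def withdraw(self, amount):
        with self._lock:
            if amount > self._balance:
                raise ValueError("insufficient funds")
            self._balance -= amount

    def balance(self):
        with self._lock:
            return self._balance
```

Because every procedure acquires the same lock on entry, callers get mutual exclusion for free, which is why the construct is easier to control than raw semaphores scattered through the code.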
3. Synchronization
3.1. To enforce mutual exclusion
3.2. To exchange information
3.3. Achieved by the use of condition variables that are contained within the monitor & accessible only within the monitor
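Condition variables inside a monitor can be sketched with a classic bounded buffer (the `BoundedBuffer` name and the `not_full`/`not_empty` variable names are illustrative). Both condition variables are tied to the monitor's lock and are used only inside its procedures, matching 3.3:

```python
import threading
from collections import deque

class BoundedBuffer:
    """Monitor with two condition variables, accessible only
    within the monitor's own procedures."""

    def __init__(self, capacity):
        self._lock = threading.Lock()
        self._not_full = threading.Condition(self._lock)   # condition variable
        self._not_empty = threading.Condition(self._lock)  # condition variable
        self._items = deque()
        self._capacity = capacity

    def put(self, item):
        with self._lock:
            while len(self._items) == self._capacity:
                self._not_full.wait()     # wait until there is room
            self._items.append(item)
            self._not_empty.notify()      # signal a waiting consumer

    def get(self):
        with self._lock:
            while not self._items:
                self._not_empty.wait()    # wait until a message is present
            item = self._items.popleft()
            self._not_full.notify()       # signal a waiting producer
            return item
```

Waiting on a condition variable releases the monitor lock, which is what lets another process enter the monitor and exchange information with the waiter.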
4. Message Passing: Synchronization
4.1. Blocking send, blocking receive
4.1.1. both sender & receiver are blocked until the message is delivered
4.1.2. sometimes referred to as a rendezvous
4.2. Non-blocking send
4.2.1. non-blocking send, blocking receive
4.2.2. non-blocking send, non-blocking receive
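The rendezvous case can be sketched with a condition variable (the `Rendezvous` name and its one-slot design are illustrative assumptions): the sender does not return until a receiver has actually taken the message, and a receiver blocks until a message arrives.

```python
import threading

class Rendezvous:
    """Blocking send / blocking receive: both parties are blocked
    until the message is delivered."""

    def __init__(self):
        self._cond = threading.Condition()
        self._msg = None
        self._full = False

    def send(self, msg):
        with self._cond:
            while self._full:            # wait for the slot to be free
                self._cond.wait()
            self._msg, self._full = msg, True
            self._cond.notify_all()      # wake a waiting receiver
            while self._full:            # block until the message is taken
                self._cond.wait()

    def receive(self):
        with self._cond:
            while not self._full:        # block until a message arrives
                self._cond.wait()
            msg, self._full = self._msg, False
            self._cond.notify_all()      # release the blocked sender
            return msg
```

A non-blocking send would simply return after depositing the message (as a buffered queue does) instead of waiting for the second `while` loop.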
5. Addressing
5.1. Direct addressing
5.1.1. send primitive includes a specific identifier of the destination process
5.1.2. receive primitive can be handled in 1 of 2 ways
5.1.2.1. require that the process explicitly designate a sending process
5.1.2.2. implicit addressing (the receiver accepts messages from any process)
5.2. Indirect addressing
5.2.1. messages are sent to a shared data structure consisting of queues that can temporarily hold messages
5.2.2. queues are referred to as mailboxes
5.2.3. 1 process sends a message to the mailbox & the other process picks up the message from the mailbox
5.2.4. allows for greater flexibility in the use of messages
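Indirect addressing maps naturally onto a shared queue (the `mailbox`, `producer`, and `consumer` names are illustrative): messages go to the mailbox, not to a named process, and the receiver picks them up from the same mailbox.

```python
import queue
import threading

# Mailbox sketch for indirect addressing: a shared queue that can
# temporarily hold messages; sender and receiver never name each other.
mailbox = queue.Queue()

def producer():
    for i in range(3):
        mailbox.put(f"msg-{i}")        # send to the mailbox, not to a process

def consumer(results):
    for _ in range(3):
        results.append(mailbox.get())  # pick up from the mailbox

results = []
t1 = threading.Thread(target=producer)
t2 = threading.Thread(target=consumer, args=(results,))
t1.start(); t2.start()
t1.join(); t2.join()
```

The flexibility comes from the decoupling: several senders can share one mailbox, or one sender can post to several mailboxes, without any process knowing the identity of its peer.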
6. Multiple Processes
6.1. OS design is concerned with the management of processes & threads
6.1.1. multiprogramming
6.1.2. multiprocessing
6.1.3. distributed processing
7. Concurrency
7.1. Arises in 3 contexts
7.1.1. multiple applications
7.1.2. structured applications
7.1.3. OS structure
7.2. Shared data
7.2.1. may cause problems
7.2.2. processes may share data
7.2.3. threads in the same process can share a global address space
7.3. Principles
7.3.1. interleaving & overlapping of process execution
7.3.2. in multiprogramming, the relative speed of execution of processes cannot be predicted
7.4. Difficulties
7.4.1. sharing of global resources
7.4.2. managing the allocation of resources optimally
7.4.3. locating programming errors
8. Race Condition
8.1. Read & write shared data
8.1.1. by multiple processes
8.1.2. or by multiple threads
8.2. Final result depends on the order of execution
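A race condition and its fix can be sketched with a shared counter (the function names are illustrative). `counter += 1` is a read-modify-write: two threads can read the same value and one update is lost, so the final result depends on how the instructions interleave; holding a lock makes the update atomic and the result deterministic.

```python
import threading

counter = 0
lock = threading.Lock()

def unsafe_increment(n):
    # Racy: read-modify-write on shared data with no synchronization.
    # The final value depends on the execution order and may lose updates.
    global counter
    for _ in range(n):
        counter += 1

def safe_increment(n):
    # The lock makes each read-modify-write atomic, removing the race.
    global counter
    for _ in range(n):
        with lock:
            counter += 1

threads = [threading.Thread(target=safe_increment, args=(10_000,))
           for _ in range(4)]
for t in threads: t.start()
for t in threads: t.join()
```

With the locked version the total is always 40,000; with `unsafe_increment` the outcome is timing-dependent, which is exactly why such errors are hard to reproduce.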
9. Mutual Exclusion
9.1. Need
9.1.1. arises when there is no controlled access to shared data
9.1.2. result of concurrent execution will depend on the order in which instructions are interleaved
9.1.3. errors are timing-dependent & usually not reproducible
9.2. Enforced on the execution of critical sections
9.2.1. code in which a process manipulates shared data
9.3. Data coherence
9.3.1. ensured if mutual exclusion is properly used
9.4. Requirement
9.4.1. a process that halts in its non-critical section must do so without interfering with other processes
9.4.2. mutual exclusion must be enforced
9.4.3. no deadlock & no starvation
9.4.4. progress: a process must not be denied entry to a critical section when no other process is using it
9.4.5. no assumptions about
9.4.5.1. relative process speeds
9.4.5.2. number of processes
9.4.6. a process remains inside its critical section for a finite time only
9.5. Hardware support
9.5.1. interrupt disabling
9.5.1.1. works on a uni-processor system
9.5.1.2. guarantees mutual exclusion
9.5.1.3. does not work in a multiprocessor architecture
9.5.2. special machine instructions (e.g. test-and-set), which work on multiprocessors
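A test-and-set spinlock, the usual machine-instruction alternative to interrupt disabling, can be sketched as follows. Python has no real atomic hardware instruction, so the atomicity of `_test_and_set` is simulated here with an internal `Lock` (an assumption of this sketch); on real hardware it would be a single atomic instruction, which is why it works on a multiprocessor where interrupt disabling does not.

```python
import threading

class TestAndSetLock:
    """Spinlock sketch built on a test-and-set primitive."""

    def __init__(self):
        self._flag = False
        self._atomic = threading.Lock()  # stands in for hardware atomicity

    def _test_and_set(self):
        # Atomically: return the old flag value and set the flag to True.
        with self._atomic:
            old, self._flag = self._flag, True
            return old

    def acquire(self):
        while self._test_and_set():      # spin until the flag was False
            pass

    def release(self):
        self._flag = False               # open the lock for the next spinner
```

The busy-waiting in `acquire` is the characteristic cost of this approach: a waiting process consumes processor time instead of blocking.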