34 Cards in this Set

  • Front
  • Back
Cooperating Process
-Process that can affect or be affected by other processes executing in the system.
-Can either directly share a logical address space (both code and data), as threads do, or be allowed to share data only through files or messages.
Race Condition
-Where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place.
-Process Synchronization and coordination is used to avoid race conditions.
Producer-Consumer Problem (Bounded-Buffer Problem)
-2 processes (producer & consumer) share a common, fixed-size buffer used as a queue.
-Producer generates a piece of data and places it in the buffer, while simultaneously the consumer consumes the data (removes it from the buffer) one piece at a time.
-The problem is to make sure the producer doesn't add data to a full buffer and make sure the consumer doesn't remove data from an empty buffer.
Producer-Consumer Problem (Bounded-Buffer Problem) Solution #1 (Counter)
-One solution that allows all the buffers to be used is to keep an integer counter that tracks the # of full buffers.
-The counter is incremented by the producer each time it fills a buffer and decremented by the consumer each time it empties one.
-This solution causes race conditions! (Bad solution; see the sketch below.)
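-A minimal C sketch of this counter-based attempt (a study aid, not from the card set): the names BUFFER_SIZE, in, out, and counter follow the common textbook formulation, and produce_item()/consume_item() are hypothetical helpers.

    #define BUFFER_SIZE 10

    int produce_item(void);            /* hypothetical helpers */
    void consume_item(int item);

    int buffer[BUFFER_SIZE];
    int in = 0, out = 0;               /* next free slot / next full slot */
    int counter = 0;                   /* number of full buffer slots */

    void producer(void) {
        while (1) {
            int item = produce_item();
            while (counter == BUFFER_SIZE)
                ;                      /* spin while the buffer is full */
            buffer[in] = item;
            in = (in + 1) % BUFFER_SIZE;
            counter++;                 /* not atomic: load, add, store */
        }
    }

    void consumer(void) {
        while (1) {
            while (counter == 0)
                ;                      /* spin while the buffer is empty */
            int item = buffer[out];
            out = (out + 1) % BUFFER_SIZE;
            counter--;                 /* not atomic: may interleave with counter++ */
            consume_item(item);
        }
    }

-Because counter++ and counter-- each expand to a load/modify/store sequence, an unlucky interleaving of the two loops can leave counter with the wrong value, which is exactly the race condition described above.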
Critical Section
-Piece of code that accesses a shared resource that must not be concurrently accessed by more than one process.
-Process may be changing common variables, updating a table, writing a file, etc.
-No 2 processes may execute in their critical sections at the same time.
-2 approaches for handling critical sections in OS's are preemptive kernels and nonpreemptive kernels.
Critical Section Problem
-To design a protocol that the processes can use to cooperate.
-Each process must request permission to enter its critical section.
-Section of code implementing the permission request is the entry section.
-Critical section is followed by an exit section.
-Remaining code is the remainder section.
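-Sketched in C, the general structure of a participating process looks roughly like this (the four helper functions are placeholders standing for the sections named on this card):

    void entry_section(void);       /* request permission to enter */
    void critical_section(void);    /* manipulate shared data */
    void exit_section(void);        /* announce that the process has left */
    void remainder_section(void);   /* everything else the process does */

    void process(void) {
        do {
            entry_section();
            critical_section();
            exit_section();
            remainder_section();
        } while (1);
    }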
Critical Section Problem Solution Requirements
1) Mutual Exclusion: No 2 processes can execute in their critical sections at the same time.
2) Progress: If no process is executing in its critical section and some processes wish to enter theirs, then only those processes not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection can't be postponed indefinitely.
3) Bounded Waiting: There's a bound on the # of times other processes are allowed to enter their critical sections after a process has made a request to enter its critical section.
Preemptive Kernel
-Allows a process to be preempted while it is running in kernel mode.
-May be more responsive; there's less risk that a kernel-mode process will run for a long period before relinquishing the processor to a waiting process.
-More suitable for real-time programming.
Nonpreemptive Kernel
-Does not allow a process running in kernel mode to be preempted.
-A kernel-mode process will run until it exits kernel mode, blocks, or voluntarily yields control of the CPU.
-Essentially free of race conditions in kernel mode since only one process is active in the kernel at a time.
Locking
-Protecting critical regions through the use of locks.
Atomically
-Instructions done without interruption; if 2 of the same instructions are executed simultaneously, they will be executed sequentially in some arbitrary order.
-Modern machines provide special atomic hardware instructions that can be used to solve the critical-section problem.
-test_and_set() instruction is executed atomically; returns original value of passed parameter and sets the new value of passed parameter to TRUE.
-test_and_set() mutual exclusion is implemented by declaring a boolean variable "lock", initialized to false.
-compare_and_swap() instruction is executed atomically; returns original value of passed parameter *value and sets the variable *value to the value of the passed parameter new_value if *value==expected.
-compare_and_swap() mutual exclusion is implemented by declaring a global variable (lock) initialized to 0.
-These algorithms don't satisfy the bounded-waiting requirement.
-They're complicated and generally inaccessible to application programmers (see the sketch below).
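-A C sketch of the two instructions and of test_and_set()-based locking, assuming plain int flags rather than a boolean type; on real hardware each function body executes as one uninterruptible instruction.

    /* Atomic on real hardware; shown as ordinary C only for readability. */
    int test_and_set(int *target) {
        int rv = *target;          /* remember the original value...     */
        *target = 1;               /* ...and unconditionally set it TRUE */
        return rv;
    }

    int compare_and_swap(int *value, int expected, int new_value) {
        int temp = *value;
        if (*value == expected)    /* swap only if the current value matches */
            *value = new_value;
        return temp;               /* always return the original value */
    }

    /* Mutual exclusion with test_and_set(); lock is initialized to 0 (false). */
    int lock = 0;

    void worker(void) {
        while (1) {
            while (test_and_set(&lock))
                ;                  /* busy-wait until the lock was free */
            /* ...critical section... */
            lock = 0;              /* release the lock */
            /* ...remainder section... */
        }
    }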
Mutex Lock
-Simplest software tool for solving the critical-section problem; it protects critical regions and prevents race conditions.
-A process must acquire the lock before entering a critical section w/ the acquire() function; it releases the lock when it exits the section w/ the release() function.
-Boolean variable "available" indicates if the lock is available or not.
-Calls to acquire() and release() must be atomic; usually implemented using hardware atomic instructions.
-Disadvantage: It requires busy waiting.
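-A sketch of acquire() and release() over the "available" flag from this card; in a real implementation the test inside acquire() must itself be atomic (e.g., built from compare_and_swap()), which is glossed over here.

    int available = 1;             /* 1 = lock free, 0 = lock held */

    void acquire(void) {
        while (!available)
            ;                      /* busy wait (spinlock behavior) */
        available = 0;             /* take the lock */
    }

    void release(void) {
        available = 1;             /* make the lock available again */
    }

    /* Usage: acquire(); ...critical section...; release(); */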
Busy Waiting
-While a process is in its critical section, any other process that tries to enter its critical section must loop continuously in the call to acquire().
-This type of mutex lock is also called a spinlock; the process spins while waiting for the lock to become available.
-Wastes CPU cycles that other processes might be able to use productively when a single CPU is shared.
-Advantage to spinlocks: No context switch is required when a process must wait on a lock; they're useful when locks are expected to be held for short times.
-Spinlocks are employed on multiprocessor systems when one thread can spin on one processor while another thread performs its critical section on another.
Semaphore
-Semaphore S is an integer variable that apart from initialization, is accessed only through 2 standard atomic operations:
1) wait(), which busy-waits while S <= 0 and then decrements S.
2) signal() which increments S.
-When one process modifies the semaphore value, no other process can simultaneously modify that same value.
-wait(S) must be executed without interruption.
-Semaphores can be used to solve various synchronization problems.
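-The classical busy-waiting definitions, written as a C sketch over a plain int (both operations must execute atomically, which plain C does not guarantee):

    void wait(int *S) {
        while (*S <= 0)
            ;              /* busy-wait until the value becomes positive */
        (*S)--;
    }

    void signal(int *S) {
        (*S)++;
    }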
Counting Semaphore
-Integer value S can range over an unrestricted domain.
-Can be used to control access to a given resource consisting of a finite # of instances, with the semaphore initialized to the # of resources available. Each process that wants to use a resource performs wait(), decrementing the count; each process that releases a resource performs signal(), incrementing the count.
-A counting semaphore can be implemented using binary semaphores.
Binary Semaphore
-Integer value S can range only between 0 and 1.
-Behave similarly to mutex locks.
Semaphore Implementation w/ No Busy Waiting
-To avoid busy waiting, a process can block itself when it finds that the semaphore value isn't positive.
-Block operation places process into a waiting queue associated with the semaphore, and state of the process is switched to the waiting state. Control is transferred to the CPU scheduler, which selects another process to execute.
-A blocked process is restarted by a wakeup() operation, which changes the process to a ready state and places it in the ready queue. This is all done when some other process executes a signal() operation.
-Each semaphore has an integer value "value" and a list of processes "list".
-block() and wakeup() are provided by the OS as basic system calls.
-Semaphore values may be negative.
-Busy waiting isn't completely eliminated; it's moved from the entry section to the critical sections of application programs.
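-A C-style sketch of this blocking implementation; the struct layout, the list handling, and the block()/wakeup() prototypes are illustrative assumptions around the system calls named on this card.

    struct process;                    /* opaque PCB type (assumption) */
    void block(void);                  /* suspend the calling process */
    void wakeup(struct process *p);    /* move p to the ready queue */

    typedef struct {
        int value;                     /* may go negative: |value| = # of waiters */
        struct process *list;          /* queue of processes blocked here */
    } semaphore;

    void wait(semaphore *s) {
        s->value--;
        if (s->value < 0) {
            /* add the calling process to s->list (omitted) */
            block();                   /* caller enters the waiting state */
        }
    }

    void signal(semaphore *s) {
        s->value++;
        if (s->value <= 0) {
            struct process *p = 0;     /* remove one process from s->list (omitted) */
            wakeup(p);                 /* restart it: waiting -> ready */
        }
    }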
Deadlock
-2 or more processes are waiting indefinitely for an event that can be caused only by one of the waiting processes.
Starvation (Indefinite Blocking)
-A process may never be removed from the semaphore queue in which it is suspended.
-May occur if we remove processes from the list in LIFO order.
Priority Inversion
-Scheduling problem that occurs when a lower-priority process holds a lock needed by a higher-priority process.
-On a system with more than 2 priorities, a process with a middle priority affects how long a higher priority process must wait for a lower priority process to relinquish resources.
-Problem solved via a priority-inheritance protocol.
Priority-Inheritance Protocol
-All processes that are accessing resources needed by a higher-priority process inherit the higher priority until they are finished with the resources in question; once finished, their priorities revert to their original values.
-Prevents other processes from preempting a process' execution while a higher-priority process waits for resources of said process.
Classical Problems of Synchronization
1) Bounded-Buffer Problem
2) Readers-Writers Problem
3) Dining-Philosophers Problem
Bounded-Buffer Problem (Producer-Consumer Problem)
Semaphore Solution:
-int n; (Assume pool consists of n buffers and each buffer can hold one item.)
-semaphore mutex = 1; (Provides mutual exclusion for access to buffer pool.)
-semaphore empty = n; (Number of empty buffers.)
-semaphore full = 0; (Number of full buffers.)
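-Producer and consumer processes built on those three semaphores, as a C-style sketch reusing the semaphore type and wait()/signal() from the blocking implementation above (buffer manipulation is left as comments):

    semaphore mutex;    /* initialized to 1: mutual exclusion for the pool */
    semaphore empty;    /* initialized to n: counts empty buffer slots */
    semaphore full;     /* initialized to 0: counts full buffer slots */

    void producer(void) {
        while (1) {
            /* produce an item in next_produced */
            wait(&empty);          /* wait for an empty slot */
            wait(&mutex);          /* lock the buffer pool */
            /* add next_produced to the buffer */
            signal(&mutex);
            signal(&full);         /* one more full slot */
        }
    }

    void consumer(void) {
        while (1) {
            wait(&full);           /* wait for a full slot */
            wait(&mutex);
            /* remove an item from the buffer into next_consumed */
            signal(&mutex);
            signal(&empty);        /* one more empty slot */
            /* consume the item in next_consumed */
        }
    }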
Readers-Writers Problem
Problem:
-Database is to be shared among several concurrent processes.
-Readers only read database; don't perform updates.
-Writers update database; can read and write.
-Only one writer can access the database at the same time; multiple readers can read at the same time.
Variations (all involving priorities):
1) first readers-writers problem: Requires that no reader be kept waiting unless a writer has already obtained permission to use the shared object. Possible writer starvation.
2) second readers-writers problem: Requires that once a writer is ready, that writer perform its write as soon as possible. Possible reader starvation.
Solution to 1st readers-writers problem:
-semaphore rw_mutex = 1; (Common to both reader and writer processes that functions as a mutual exclusion semaphore for writers.)
-semaphore mutex = 1; (Ensures mutual exclusion when variable read_count is updated.)
-int read_count = 0; (Keeps track of how many processes are currently reading the object.)
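-A C-style sketch of the reader and writer processes over those three variables (wait()/signal() as above); the first/last-reader bookkeeping is what gives readers priority and makes writer starvation possible.

    semaphore rw_mutex;         /* initialized to 1: writers' mutual exclusion */
    semaphore mutex;            /* initialized to 1: protects read_count */
    int read_count = 0;

    void writer(void) {
        while (1) {
            wait(&rw_mutex);
            /* ...writing is performed... */
            signal(&rw_mutex);
        }
    }

    void reader(void) {
        while (1) {
            wait(&mutex);
            read_count++;
            if (read_count == 1)       /* first reader locks out writers */
                wait(&rw_mutex);
            signal(&mutex);
            /* ...reading is performed... */
            wait(&mutex);
            read_count--;
            if (read_count == 0)       /* last reader lets writers back in */
                signal(&rw_mutex);
            signal(&mutex);
        }
    }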
Reader-Writer Locks
-Requires specifying the mode of the lock: either read or write access.
-When a process wants to read shared data, it requests the reader-writer lock in read mode; when it wants to modify shared data, it requests the lock in write mode.
-Most useful in applications where it's easy to identify which processes only read or write shared data, and in applications that have more readers than writers.
Dining-Philosophers Problem
Problem:
-5 silent philosophers sit at a table w/ 5 chopsticks and a bowl of rice, alternately thinking and eating.
-A philosopher can eat only while holding both adjacent chopsticks and can't talk; when done eating, she puts the chopsticks down.
-Each chopstick can be held by 1 philosopher at a time.
-Illustrates need to allocate several resources among several processes in a deadlock-free and starvation-free manner.
Solutions:
1) Represent each chopstick with a semaphore; a philosopher tries to grab a chopstick by executing wait() and releases it by executing signal(). This could cause a deadlock!!!
Deadlock Handling:
2) Allow at most 4 philosophers to be sitting simultaneously at the table.
3) Allow a philosopher to pick up her chopsticks only if both chopsticks are available (picking up must be done in a critical section).
4) Use an asymmetric solution - an odd-numbered philosopher picks up first her left chopstick and then her right chopstick, while an even-numbered philosopher picks up first her right chopstick and then her left chopstick.
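-A C-style sketch of solution 1 (one semaphore per chopstick, each initialized to 1); if all five philosophers grab their left chopstick at the same time, every wait() on the right chopstick blocks forever, which is the deadlock noted above.

    semaphore chopstick[5];           /* each element initialized to 1 */

    void philosopher(int i) {         /* i ranges over 0..4 */
        while (1) {
            wait(&chopstick[i]);                /* pick up left chopstick */
            wait(&chopstick[(i + 1) % 5]);      /* pick up right chopstick */
            /* ...eat... */
            signal(&chopstick[i]);              /* put down left chopstick */
            signal(&chopstick[(i + 1) % 5]);    /* put down right chopstick */
            /* ...think... */
        }
    }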
Synchronization in Windows
-On single-processor systems, when kernel accesses global resources, it temporarily masks interrupts for all interrupt handlers. (Synchronization in kernel)
-On multiprocessor systems, Windows uses spinlocks to protect access to global resources (only for short code segments), and a thread will never be preempted while holding a spinlock. (Synchronization in kernel)
-For thread synchronization outside the kernel, Windows provides dispatcher objects.
Dispatcher Object (Windows)
-Threads synchronize according to several different mechanisms, including mutex locks, semaphores, events, and timers.
-System protects shared data by requiring a thread to gain ownership of a mutex to access data and to release ownership when it's finished.
-Events, like condition variables, notify a waiting thread when a desired condition occurs.
-Timers are used to notify one or more threads when time has expired.
-Dispatcher objects can be in one of 2 states:
1) signaled state: object is available; thread won't be blocked when acquiring object.
2) non-signaled state: object not available; thread will be blocked.
-When a thread blocks on a non-signaled dispatcher object, it's placed in a waiting queue of that object; when the state for the dispatcher object moves to signaled, the kernel checks for any threads waiting on the object.
-Kernel then selects one thread for mutexes, or all threads from the waiting queue for events, and moves them from the waiting state to the ready state to resume execution.
Critical-Section Object (Windows)
-A user-mode mutex that can often be acquired and released without kernel intervention.
-On a multiprocessor system, a thread trying to acquire it first uses a spinlock while waiting for the other thread to release the object.
-If it spins too long, the acquiring thread will then allocate a kernel mutex and yield its CPU.
Synchronization in Linux
-Prior to version 2.6, Linux was a nonpreemptive kernel; a process running in kernel mode couldn't be preempted.
-Simplest synchronization technique within kernel is an atomic integer atomic_t. All math operations using atomic integers are performed without interruption. Useful where an integer variable, like a counter, needs to be updated.
-Mutex locks are available for protecting critical sections within the kernel. A task must invoke the mutex_lock() function prior to entering a critical section and mutex_unlock() after exiting it. If the mutex lock is unavailable, the task is put into a sleep state and awakened when the lock's owner invokes mutex_unlock().
-Linux provides spinlocks, semaphores, and reader-writer versions of these 2 locks for the kernel.
-On single-processor systems, spinlocks are replaced by enabling and disabling kernel preemption; rather than holding a spinlock, the kernel disables preemption (preempt_disable()), and rather than releasing the spinlock, the kernel enables preemption (preempt_enable()).
-Spinlocks, along with enabling and disabling kernel preemption, are used when a lock is held for a short duration.
-Semaphores or mutex locks are used when locks must be held for a longer period.
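-A kernel-side sketch tying those primitives together; the variable names are illustrative, but atomic_t, DEFINE_MUTEX, mutex_lock()/mutex_unlock(), and spin_lock()/spin_unlock() are the Linux kernel interfaces referred to above.

    #include <linux/atomic.h>
    #include <linux/mutex.h>
    #include <linux/spinlock.h>

    static atomic_t counter = ATOMIC_INIT(0);   /* atomic integer */
    static DEFINE_MUTEX(my_mutex);              /* mutex lock for longer holds */
    static DEFINE_SPINLOCK(my_lock);            /* spinlock for short holds */

    static void example(void)
    {
        atomic_inc(&counter);       /* math on atomic_t runs without interruption */

        mutex_lock(&my_mutex);      /* sleep if another task owns the mutex */
        /* ...longer critical section... */
        mutex_unlock(&my_mutex);    /* owner wakes a sleeping waiter */

        spin_lock(&my_lock);        /* short critical section; on a single
                                       processor this amounts to preempt_disable() */
        /* ... */
        spin_unlock(&my_lock);      /* corresponds to preempt_enable() */
    }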
Synchronization in Solaris
-Solaris provides adaptive mutex locks, condition variables, semaphores, reader-writer locks, and turnstiles to control access to critical sections.
-Solaris uses the adaptive-mutex method to protect only data that is accessed by short code segments; it's used if a lock will be held for less than a few hundred instructions.
-For longer code segments, condition variables and semaphores are used.
-Reader-writer locks are used to protect data that are accessed frequently but are usually accessed in a read-only manner. Used only on long sections of code.
-Same types of locks are available inside and outside the kernel.
Adaptive Mutex (Solaris)
-Protects access to every critical data item.
-On a multiprocessor system, it starts as a standard semaphore implemented as a spinlock.
-If the data is locked and already in use, the adaptive mutex does one of 2 things:
1) If lock is held by a thread that is running on another CPU, thread spins.
2) If thread holding the lock is not currently in run state, thread blocks, going to sleep until awakened by release of lock.
-On a single-processor system, threads always sleep if they encounter a lock.
Turnstile (Solaris)
-A queue structure containing threads blocked on a lock.
-Solaris uses turnstiles to order the list of threads waiting to acquire either an adaptive mutex or a reader-writer lock.
-Solaris gives each kernel thread its own turnstile.
-The turnstile for the first thread to block on a synchronized object becomes the turnstile for the object itself and threads subsequently blocking on the lock will be added to this turnstile. When the initial thread releases the lock, it gains a new turnstile from a list of turnstiles maintained by the kernel.
-To prevent priority inversion, turnstiles are organized according to a priority-inheritance protocol: if a lower-priority thread currently holds a lock on which a higher-priority thread is blocked, the thread with the lower priority will temporarily inherit the priority of the higher-priority thread until it releases its lock.
PThread Synchronization
-Pthread API is available at the user level and not part of any particular kernel.
-Provides mutex locks, condition variables, and read-write locks.
-Mutex locks are the fundamental synchronization technique in Pthreads; a thread acquires the lock before entering a critical section with pthread_mutex_lock() and releases it upon exiting with pthread_mutex_unlock().
-Pthread uses the pthread_mutex_t data type and mutex is created with pthread_mutex_init().
-Calling thread is blocked if mutex lock is unavailable until owner invokes pthread_mutex_unlock().
-All mutex functions return 0 with correct operation.
-Semaphores are not part of the Pthreads standard; they belong to the POSIX SEM extension.
-POSIX specifies 2 types of semaphores:
1) named: has an actual name in the file system and can be shared by multiple unrelated processes.
2) unnamed: can be used only by threads belonging to the same process. sem_init() creates and initializes an unnamed semaphore.
-POSIX names the classical wait() and signal() operations sem_wait() and sem_post() (see the sketch below).
-All semaphore functions return 0 when successful.
-Non-portable extensions include spinlocks and read-write locks.
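-A small user-level sketch combining the Pthreads mutex calls and the POSIX unnamed-semaphore calls named on these cards; error checking is omitted, but each call returns 0 on success as noted above.

    #include <pthread.h>
    #include <semaphore.h>
    #include <stddef.h>

    pthread_mutex_t mutex;
    sem_t sem;

    int main(void)
    {
        pthread_mutex_init(&mutex, NULL);  /* create the mutex (default attributes) */
        sem_init(&sem, 0, 1);              /* unnamed semaphore: not shared across
                                              processes (0), initial value 1 */

        pthread_mutex_lock(&mutex);        /* blocks if another thread owns the lock */
        /* ...critical section... */
        pthread_mutex_unlock(&mutex);

        sem_wait(&sem);                    /* the classical wait() */
        /* ...critical section... */
        sem_post(&sem);                    /* the classical signal() */

        sem_destroy(&sem);
        pthread_mutex_destroy(&mutex);
        return 0;
    }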