Basic Concept
-In a single-processor system, only one process can run at a time.
-To avoid wasting CPU time, several processes are kept in memory at once; when the running process has to wait, the OS takes the CPU away from it and gives the CPU to another process.
CPU-I/O Burst Cycle
-Process execution consists of a cycle of CPU execution and I/O wait.
-Processes alternate between these two states: a CPU burst is followed by an I/O burst, which is followed by another CPU burst, and so on.
-An I/O-bound program typically has many short CPU bursts; a CPU-bound program typically has a few long CPU bursts.
Short-Term Scheduler (CPU Scheduler)
-Selects a process from the ready queue to be executed when the CPU becomes idle.
-The records in the queue are generally process control blocks (PCBs) of the processes.
Nonpreemptive Scheduling (Cooperative)
-Once the CPU has been allocated to a process, the process keeps the CPU until it releases the CPU either by terminating or by switching to the waiting state.
-Process can't be interrupted during execution.
-Doesn't require special hardware (like a timer).
-Used by Windows 3.x and Mac OS versions prior to OS X.
Preemptive Scheduling
-A process can be interrupted during execution and scheduling can be done based on priority.
-Scheduling takes place when a process switches from the running to the ready state (an interrupt occurs), or from the waiting to the ready state (I/O completion).
-Can result in race conditions when processes share data.
-Affects design of the OS kernel.
-Introduced with Windows 95 and Mac OS X.
Dispatcher
-The module that gives control of the CPU to the process selected by the short-term scheduler.
-This function involves:
1) Switching context
2) Switching to user mode
3) Jumping to the proper location in the user program to restart that program.
-Dispatch Latency is the time it takes for the dispatcher to stop one process and start another running.
Scheduling Criteria
1) CPU utilization: Keep CPU as busy as possible. Should range from 40% to 90% depending on load.
2) Throughput: # of processes completed per time unit.
3) Turnaround time: Amount of time from submission of a process to its completion. It's the sum of the periods spent waiting to get into memory, waiting in the ready queue, executing on the CPU, and doing I/O.
4) Waiting time: Sum of the periods spent waiting in the ready queue.
5) Response time: Time from the submission of a request until the first response is produced. It measures the time it takes to start responding, not the time to output the full response.
-It's desirable to maximize CPU utilization and throughput, and to minimize turnaround time, waiting time, and response time.
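A quick hypothetical example (not from the cards themselves): a process submitted at time 0 that waits 4 ms in the ready queue and then runs for 5 ms with no I/O has a waiting time of 4 ms, a turnaround time of 9 ms, and a response time of 4 ms if it starts producing output as soon as it gets the CPU.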
Gantt Chart
-A bar chart that illustrates a particular schedule, including the start and finish times of each of the participating processes.
First-Come, First-Served (FCFS) Scheduling Algorithm
-Simplest scheduling algorithm.
-Process that requests the CPU first is allocated the CPU first.
-Managed with a FIFO queue.
-When process enters the ready queue, its PCB is linked onto the tail of the queue; when CPU is free, it's allocated to the process at the head of the queue.
-Disadvantage: Average waiting time is often long.
-This algorithm is nonpreemptive; it's troublesome for time-sharing systems.
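A minimal sketch (not part of the card set) of how FCFS waiting times fall out of the Gantt chart, assuming all processes arrive at time 0 in the order listed; the burst times are hypothetical:

```c
#include <stdio.h>

int main(void) {
    /* Hypothetical CPU burst times in milliseconds, in arrival order. */
    int burst[] = {24, 3, 3};
    int n = sizeof burst / sizeof burst[0];
    int start = 0, total_wait = 0;

    for (int i = 0; i < n; i++) {
        total_wait += start;   /* a process waits until all earlier ones finish */
        start += burst[i];     /* the next Gantt-chart slot begins here */
    }
    printf("average waiting time = %.2f ms\n", (double)total_wait / n);
    return 0;
}
```

With the bursts ordered 24, 3, 3 this prints 17.00 ms; running the two short bursts first would drop the average to 3 ms, which is the convoy effect in miniature.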
Convoy Effect
-When shorter processes wait for one big process to get off the CPU.
-Results in lower CPU and device utilization than might be possible if shorter processes went first.
Shortest-Job-First (SJF) Scheduling Algorithm
-When CPU is available, it's assigned to the process with the smallest next CPU burst.
-If the next CPU bursts of two processes are the same, FCFS scheduling is used to break the tie.
-Optimal algorithm that gives the minimum average waiting time.
-Can't be implemented exactly at the level of short-term CPU scheduling, since there's no way to know the length of the next CPU burst.
-You can try to estimate the next burst's length as an exponential average of the measured lengths of previous CPU bursts.
-SJF algorithm can be preemptive or nonpreemptive.
-Preemptive SJF scheduling is sometimes called shortest-remaining-time-first scheduling.
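A sketch of the exponential-averaging estimate mentioned above; the alpha value, initial guess, and burst history are all made up for illustration. The recurrence is tau_next = alpha * t + (1 - alpha) * tau:

```c
#include <stdio.h>

int main(void) {
    double alpha = 0.5;    /* weight given to the most recent measured burst */
    double tau = 10.0;     /* initial guess for the first burst, in ms */
    double measured[] = {6, 4, 6, 4, 13, 13, 13};   /* made-up burst history */
    int n = (int)(sizeof measured / sizeof measured[0]);

    for (int i = 0; i < n; i++) {
        printf("predicted %.1f ms, actual %.0f ms\n", tau, measured[i]);
        /* tau_next = alpha * t_n + (1 - alpha) * tau_n */
        tau = alpha * measured[i] + (1.0 - alpha) * tau;
    }
    return 0;
}
```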
Priority-Scheduling Algorithm
-A priority is associated with each process, and the CPU is allocated to the process with the highest priority.
-Equal-priority processes are scheduled in FCFS order.
-An SJF algorithm is simply a priority algorithm in which the priority is the inverse of the (predicted) next CPU burst (the larger the CPU burst, the lower the priority, and vice versa).
-Priorities can be defined internally (using some measurable quantity to compute the priority) or defined externally (set by criteria outside the OS).
-Priority scheduling can be preemptive or nonpreemptive.
-Major problem with this algorithm is indefinite blocking, or starvation.
Indefinite Blocking (Starvation)
-When low-priority processes are kept waiting indefinitely by a steady stream of higher-priority processes.
-Solution: Aging involves gradually increasing the priority of processes that wait in the system for a long time.
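A minimal sketch of how a nonpreemptive priority scheduler with aging might pick the next process; the priority values, the aging rate of one level per 10 ticks of waiting, and the convention that a lower number means higher priority are all assumptions made for illustration:

```c
#include <stdio.h>

struct proc { const char *name; int priority; int wait_ticks; };

/* Pick the ready process with the best (lowest) effective priority;
   ties fall back to queue order, i.e. FCFS. */
static int pick(const struct proc *ready, int n) {
    int best = 0;
    for (int i = 1; i < n; i++) {
        /* Aging: every 10 ticks spent waiting improves priority by one level. */
        int pi = ready[i].priority - ready[i].wait_ticks / 10;
        int pb = ready[best].priority - ready[best].wait_ticks / 10;
        if (pi < pb) best = i;
    }
    return best;
}

int main(void) {
    struct proc ready[] = {{"P1", 3, 0}, {"P2", 1, 0}, {"P3", 5, 20}};
    printf("dispatch %s\n", ready[pick(ready, 3)].name);   /* P2: effective priority 1 */
    return 0;
}
```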
Round-Robin Scheduling Algorithm
-Used for time-sharing systems.
-Similar to FCFS scheduling, but preemption is added to let the system switch between processes after an allotted time slice expires.
-A small unit of time, called a time quantum or time slice, is defined (generally 10 to 100 milliseconds).
-Ready queue is treated as a circular queue.
-CPU scheduler goes around the ready queue, allocating the CPU to each process for a time interval of up to 1 time quantum.
-To implement RR scheduling, treat the ready queue as a FIFO queue of processes; the CPU scheduler picks the first process from the ready queue, sets a timer to interrupt after 1 time quantum, and dispatches the process.
-Either 1) the process has a CPU burst shorter than 1 time quantum and voluntarily releases the CPU, or 2) the CPU burst is longer than 1 time quantum, the timer goes off and causes an interrupt, a context switch is executed, and the process is put at the tail of the ready queue.
-No process is allocated the CPU for more than 1 time quantum in a row, unless it's the only runnable process.
-Average waiting time is often long.
-Performance of RR algorithm depends on size of time quantum; if it's extremely large, RR policy is the same as the FCFS policy; if it's extremely small, RR algorithm can result in many context switches.
-A common rule of thumb is that 80% of CPU bursts should be shorter than the time quantum.
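A sketch of a round-robin waiting-time calculation, assuming all processes arrive at time 0 and ignoring context-switch overhead; the 4 ms quantum and the burst times are hypothetical:

```c
#include <stdio.h>

int main(void) {
    int burst[] = {24, 3, 3};          /* hypothetical bursts, all arriving at time 0 */
    int n = sizeof burst / sizeof burst[0];
    int remaining[3], wait[3];
    const int quantum = 4;
    int time = 0, done = 0;

    for (int i = 0; i < n; i++) remaining[i] = burst[i];

    while (done < n) {                      /* keep cycling the circular ready queue */
        for (int i = 0; i < n; i++) {
            if (remaining[i] == 0) continue;
            int slice = remaining[i] < quantum ? remaining[i] : quantum;
            time += slice;                  /* this process runs for one slice */
            remaining[i] -= slice;
            if (remaining[i] == 0) {
                wait[i] = time - burst[i];  /* waiting = completion - burst (arrival is 0) */
                done++;
            }
        }
    }
    double total = 0;
    for (int i = 0; i < n; i++) total += wait[i];
    printf("average waiting time = %.2f ms\n", total / n);
    return 0;
}
```

For these numbers the average works out to about 5.67 ms, compared with 17 ms under FCFS for the same bursts in the earlier sketch.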
Multilevel Queue Scheduling Algorithm
-Partitions the ready queue into several separate queues.
-Allows for categorization of processes (e.g. foreground and background).
-Processes are permanently assigned to one queue, generally based on some property of the process.
-Each queue has its own scheduling algorithm.
-There must be scheduling among the queues, which is commonly implemented as fixed-priority preemptive scheduling.
-Another possibility is to time-slice among the queues, where each queue gets a certain portion of the CPU time, which it can then schedule among its various processes.
-Advantage: Low scheduling overhead.
Multilevel Feedback Queue Scheduling Algorithm
-Same as multilevel queue scheduling algorithm, but allows a process to move between queues.
-Processes are separated according to the characteristics of their CPU bursts.
-If a process uses too much CPU time, it will be moved to a lower-priority queue; a process that waits too long in a lower-priority queue may be moved to a higher-priority queue.
-The lowest-priority queue is typically run with the FCFS scheduling algorithm to serve the longer processes.
-A multilevel feedback queue scheduler is defined by:
1) # of queues
2) scheduling algorithm for each queue
3) method used to determine when to upgrade a process to a higher-priority queue
4) method used to determine when to demote a process to a lower-priority queue
5) method used to determine which queue a process will enter when that process needs service
-Most general and most complex CPU-scheduling algorithm.
Thread Scheduling
-On operating systems that support them, it's kernel-level threads, not processes, that are scheduled by the OS.
-To run on a CPU, user-level threads must be mapped to a kernel-level thread.
Process-Contention Scope (PCS)
-The thread library schedules user-level threads to run on an available LWP (lightweight process).
-Used on systems implementing many-to-one and many-to-many models.
-Competition for the CPU takes place among threads belonging to the same process.
-Typically done according to priority; scheduler selects thread with the highest priority to run. User-level thread priorities are set by the programmer and not adjusted by the thread library.
System-Contention Scope (SCS)
-Used by the kernel to decide which kernel-level thread to schedule onto a CPU.
-Systems using the one-to-one model, such as Windows, Linux, and Solaris, use only SCS.
-Competition for the CPU takes place among all threads in the system.
Pthread Scheduling
-PTHREAD_SCOPE_PROCESS schedules threads using PCS scheduling.
-PTHREAD_SCOPE_SYSTEM schedules threads using SCS scheduling.
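A minimal sketch of requesting a contention scope through the Pthreads API; note that Linux supports only PTHREAD_SCOPE_SYSTEM, so asking for process scope there returns an error:

```c
#include <pthread.h>
#include <stdio.h>

static void *work(void *arg) {
    (void)arg;               /* placeholder thread body */
    return NULL;
}

int main(void) {
    pthread_attr_t attr;
    pthread_t tid;
    int scope;

    pthread_attr_init(&attr);
    pthread_attr_getscope(&attr, &scope);                 /* query the default scope */
    printf("default scope: %s\n",
           scope == PTHREAD_SCOPE_SYSTEM ? "SYSTEM (SCS)" : "PROCESS (PCS)");

    pthread_attr_setscope(&attr, PTHREAD_SCOPE_SYSTEM);   /* request SCS scheduling */
    pthread_create(&tid, &attr, work, NULL);
    pthread_join(tid, NULL);
    pthread_attr_destroy(&attr);
    return 0;
}
```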
Turnaround Time
-Turnaround Time = Completion Time - Arrival Time
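Hypothetical example: a process that arrives at time 5 and completes at time 27 has a turnaround time of 27 - 5 = 22 time units.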