47 Cards in this Set
Process execution consists of
|
CPU execution and I/O wait
|
|
Maximum CPU utilization obtained with
|
multiprogramming
(CPU scheduler selects another process when the current one is in an I/O burst) |
|
What queue does CPU scheduler select processes from?
|
ready queue
|
|
CPU scheduling decisions take place when a process:
|
1) switches from running to waiting state
2) switches from running to ready state
3) switches from waiting to ready state
4) terminates |
|
Nonpreemptive
|
once the CPU has been allocated to a process, the process keeps it until it terminates or switches to the waiting state (e.g., for I/O)
(Also called cooperative scheduling) |
|
Preemptive
|
the CPU can be taken away from a running process, e.g., when a new process arrives; needs hardware support such as a timer
|
|
Scheduling conditions are preemptive or nonpreemptive?
Switches from running to waiting state
Terminates |
Nonpreemptive
|
|
Scheduling conditions are preemptive or nonpreemptive?
Switches from running to ready state
Switches from waiting to ready state |
Preemptive
|
|
2 solutions to preemption affecting OS kernel design
|
1) wait for the system call to complete or for the I/O block to take place
2) disable kernel preemption when updating shared kernel data |
|
**Dispatch latency
|
the time it takes for the dispatcher to stop one process and start another running
|
|
Dispatcher module gives control of the CPU to the process selected by:
|
the short-term scheduler
|
|
**CPU utilization
|
percentage of time the CPU is busy
|
|
**Throughput
|
number of processes that complete execution per time unit
|
|
** Turnaround time
|
the time to execute a particular process, from the time of submission to the time of completion
|
|
**Waiting time
|
the total time spent waiting in the ready queue
|
|
**Response time
|
the time from when a request is submitted until the first response is produced
(the time it takes to start responding, not to finish responding) |
|
Scheduling algorithms seek to maximize ____ and _____, and minimize ____, ______, and ______.
|
Maximize: CPU utilization and throughput
Minimize: turnaround time, waiting time, and response time |
|
First Come, First Served (FCFS) Scheduling -- preemptive or nonpreemptive?
|
Non-preemptive
Just schedule processes as they come in |
|
**Convoy effect
|
all other processes wait until the long-running CPU-bound process is done (occurs under FCFS)
|
|
Shortest Job First Scheduling -- preemptive or nonpreemptive?
|
Can be preemptive or nonpreemptive
The preemptive version is called shortest-remaining-time-first
Provably optimal: gives the minimum average waiting time |
|
How to predict length of next CPU burst
|
assuming it is related to the lengths of previous CPU bursts, it can be predicted with exponential averaging
|
|
Shortest remaining time first
|
Preemptive SJF
Reschedule when a process arrives |
|
Priority Scheduling
|
a priority number is associated with each process
the scheduler selects the ready process with the highest priority |
|
Shortest job first (SJF) is a special case of priority scheduling - how?
|
priority is the inverse of predicted next CPU burst time
|
|
Priority scheduling - preemptive or nonpreemptive?
|
Can be both, like SJF
|
|
**Starvation
|
low priority processes may never execute, because high priority processes are selected first
|
|
**Aging
|
gradually increase the priority of processes that wait in the system for a long time
|
|
Round Robin (RR)
|
selects processes in a round-robin fashion
each process gets a small unit of CPU time (time quantum q)
once a process uses up its time quantum, it is preempted and put at the tail of the ready queue |
|
Multilevel Queue
|
ready queue is partitioned into separate groups
processes are permanently assigned to a given queue
each queue has its own scheduling algorithm
scheduling must also be done among the queues |
|
Time slice
|
In Multilevel Queue Scheduling, each queue gets a certain amount of CPU time which it can schedule among its processes
|
|
Multilevel Feedback Queue
|
uses multilevel queues
a process can move between the various queues
tries to infer the type of each process
goal is to give interactive and I/O-intensive processes high priority
most general CPU-scheduling algorithm |
|
Multilevel Feedback Queues are defined by:
|
1) number of queues
2) scheduling algorithm for each queue
3) method used to determine when to upgrade a process to a higher-priority queue
4) method used to determine when to demote a process
5) method used to determine which queue a process will enter when it needs service |
|
Thread scheduling
|
OS kernel schedules kernel threads
|
|
**System-contention scope
|
competition for the CPU among all threads in the system
the kernel schedules kernel threads |
|
**Process-contention scope
|
scheduling competition within a process: the thread library schedules user threads onto available lightweight processes (LWPs)
usually based on priority set by the user |
|
Pthread Scheduling
|
API allows specifying either Process-Contention Scope or System-Contention Scope during thread creation
|
|
the Pthread scheduling API
|
pthread_attr_setscope / pthread_attr_getscope
|
|
schedules threads using PCS scheduling
|
PTHREAD_SCOPE_PROCESS
|
|
schedules threads using SCS scheduling
|
PTHREAD_SCOPE_SYSTEM
|
|
Multiple-Processor Scheduling
|
CPU scheduling when multiple CPUs are available
|
|
**Asymmetric multiprocessing
|
approach to multiple-processor scheduling
only one processor makes scheduling decisions and handles I/O processing and other system activity
the other processors execute only user code |
|
**Symmetric multiprocessing
|
approach to multiple-processor scheduling
each processor is self-scheduling
scheduling data structures are shared and must be synchronized
used by common operating systems |
|
**Processor affinity
|
migrating a process between processors is expensive: the cache must be invalidated on the old processor and repopulated on the new one
Solution: let a process have an affinity for the processor it is currently running on (soft or hard affinity) |
|
Multicore processors
|
has multiple processor cores on same chip
|
|
**Memory stall
|
when accessing memory, a processor spends a significant amount of time waiting for the data to become available
Complication of multicore processors |
|
Solution to memory stall
|
Multithreaded CPU core
hardware threads share the execution unit but duplicate the architectural state (registers) for each CPU thread
one thread can execute while another is in a memory stall |
|
Virtualization
|
may undo good scheduling efforts in the host or guests
host kernel schedules multiple guests onto the physical CPUs
each guest does its own internal scheduling
can result in poor response time and performance |