
Java Thread States

1) new: created with new Thread(...), not yet started


2) runnable: start() has been called and the thread is eligible to run


3) blocked/waiting: e.g. sleep(), I/O, or waiting for a lock


4) terminated (dead): the thread exits its run() method
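The JVM's state names differ slightly from the card: it reports RUNNABLE rather than "running", and sleep() puts a thread in TIMED_WAITING. A minimal sketch observing the lifecycle via Thread.getState() (class name ThreadStates is illustrative):

```java
public class ThreadStates {
    public static void main(String[] args) throws InterruptedException {
        Thread t = new Thread(() -> {
            try { Thread.sleep(200); } catch (InterruptedException e) { }
        });
        System.out.println(t.getState());  // NEW: created but not started
        t.start();
        Thread.sleep(50);                  // give t time to enter sleep()
        System.out.println(t.getState());  // TIMED_WAITING: blocked in sleep()
        t.join();                          // wait for run() to finish
        System.out.println(t.getState());  // TERMINATED: exited run()
    }
}
```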

Kernel Threads

The kernel has a thread table




Advantages


♦ A process with a large number of threads can be given more CPU time


♦ Good for applications that frequently block




Disadvantages


♦ Slow and inefficient


♦ Significant overhead and increased kernel complexity

User Threads

Advantages


♦ Can be implemented on OS that does not support threads


♦ Modification to OS not required


♦ Simple Representation


♦ Simple Management


♦ Fast and Efficient




Disadvantages


♦ Lack of coordination between threads and the operating system kernel


♦ User-level threads require non-blocking system calls; otherwise one thread's blocking system call blocks the entire process

Java threads may be created by:

♦ Extending the Thread class and overriding its run() method


♦ Implementing the Runnable interface
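Both creation styles can be sketched as follows (class names Worker and CreateThreads are illustrative):

```java
// Option 1: extend Thread and override run()
class Worker extends Thread {
    @Override
    public void run() {
        System.out.println("worker: running in " + getName());
    }
}

public class CreateThreads {
    public static void main(String[] args) throws InterruptedException {
        Thread a = new Worker();

        // Option 2: implement Runnable (here as a lambda) and pass it to Thread
        Thread b = new Thread(() -> System.out.println(
                "runnable: running in " + Thread.currentThread().getName()));

        a.start();  // start(), not run(), so each body executes in its own thread
        b.start();
        a.join();
        b.join();
    }
}
```

Implementing Runnable is usually preferred: the class stays free to extend something else, and the same Runnable can be handed to a thread pool.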

Thread libraries

Provides the programmer with an API for creating and managing threads


Three main thread libraries in use today


♦ POSIX Pthreads


♦ Win32


♦ Java

Thread

Is a basic unit of CPU utilization


♦ Has a thread ID, a program counter, a register set, and a stack




Shares with the other threads belonging to the same process


♦ Code section


♦ Data section,


♦ and other OS resources (such as open files and signals)

Thread Cancellation

Two general approaches:


Asynchronous cancellation terminates the target thread immediately


Deferred cancellation allows the target thread to periodically check if it should be cancelled
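Java's interrupt mechanism is one way to implement deferred cancellation: the target checks its interrupt flag at safe points rather than being killed outright. A minimal sketch (class name DeferredCancel is illustrative):

```java
public class DeferredCancel {
    public static void main(String[] args) throws InterruptedException {
        Thread worker = new Thread(() -> {
            long units = 0;
            // Deferred cancellation: the target periodically checks
            // whether it should be cancelled, here once per iteration.
            while (!Thread.currentThread().isInterrupted()) {
                units++;  // simulated unit of work
            }
            System.out.println("cancelled cleanly after " + units + " units");
        });
        worker.start();
        Thread.sleep(100);    // let the worker run for a while
        worker.interrupt();   // request cancellation
        worker.join();        // worker exits at its next check
    }
}
```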

Thread Pools

Create a number of threads in a pool where they await work


Advantages:


♦ Usually slightly faster to service a request with an existing thread than create a new thread


♦ Allows the number of threads in the application(s) to be bound to the size of the pool
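A minimal sketch using java.util.concurrent's built-in pools (class name PoolDemo is illustrative). The pool is bounded to two threads, so at most two of the five tasks run at once and the rest wait in the pool's queue:

```java
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.TimeUnit;

public class PoolDemo {
    public static void main(String[] args) throws InterruptedException {
        // Number of threads in the application is bound to the pool size (2)
        ExecutorService pool = Executors.newFixedThreadPool(2);
        for (int i = 0; i < 5; i++) {
            final int id = i;
            pool.submit(() -> System.out.println(
                    "task " + id + " on " + Thread.currentThread().getName()));
        }
        pool.shutdown();                          // stop accepting new tasks
        pool.awaitTermination(5, TimeUnit.SECONDS);
    }
}
```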

CPU Scheduling


Basic Concepts

CPU–I/O Burst Cycle – Process execution consists of a cycle of CPU execution and I/O wait


An I/O-bound program typically has many short CPU bursts


A CPU-bound program might have a few long CPU bursts

CPU Scheduler

CPU scheduling decisions may take place when a process:


1. Switches from running to waiting state


2. Switches from running to ready state


3. Switches from waiting to ready


4. Terminates




Scheduling under 1 and 4 is nonpreemptive (the scheduler has no choice)


Scheduling under 2 and 3 is preemptive

Dispatcher

Dispatcher module gives control of the CPU to the process selected by the short-term scheduler.

Scheduling Criteria

CPU utilization – keep the CPU as busy as possible


Throughput – # of processes that complete their execution per time unit


Turnaround time – amount of time to execute a particular process


Waiting time – amount of time a process has been waiting in the ready queue


Response time – amount of time it takes from when a request was submitted until the first response is produced, not output (for time-sharing environment)

First-Come, First-Served (FCFS) Scheduling

The process that requests the CPU first is served first, like a FIFO queue.

Process Scheduling


Convoy effect

Short process behind long process


♦ All the other processes wait for the long process to get off the CPU

Shortest-Job-First (SJF) Scheduling

Two schemes:


♦ Nonpreemptive – once the CPU is given to a process, it is not preempted until the process completes its CPU burst


♦ Preemptive – if a new process arrives with a shorter remaining time than the running process, the CPU switches to it (shortest-remaining-time-first, SRTF)




SJF is optimal – gives minimum average waiting time for a given set of processes




Study the examples given in the lectures
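As one worked example (the burst times and class name SjfWait are illustrative, not taken from the lectures): with bursts 6, 8, 7, 3 all arriving at time 0, nonpreemptive SJF runs them in order 3, 6, 7, 8, giving waits 0, 3, 9, 16 and an average waiting time of 7:

```java
import java.util.Arrays;

public class SjfWait {
    // Nonpreemptive SJF with all processes arriving at time 0:
    // run in ascending burst order; each process waits for the
    // total burst time of everything scheduled before it.
    static double avgWait(int[] bursts) {
        int[] order = bursts.clone();
        Arrays.sort(order);
        int elapsed = 0, totalWait = 0;
        for (int burst : order) {
            totalWait += elapsed;  // this process waited 'elapsed' units
            elapsed += burst;
        }
        return (double) totalWait / bursts.length;
    }

    public static void main(String[] args) {
        // Bursts 6, 8, 7, 3 -> SJF order 3, 6, 7, 8
        // waits 0, 3, 9, 16 -> average (0+3+9+16)/4 = 7.0
        System.out.println(avgWait(new int[] {6, 8, 7, 3}));
    }
}
```

For comparison, FCFS order 6, 8, 7, 3 would give waits 0, 6, 14, 21 and an average of 10.25, which is why SJF is optimal for average waiting time.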

Priority Scheduling

A priority number (integer) is associated with each process




The CPU is allocated to the process with the highest priority (smallest integer ≡ highest priority)


♦ Preemptive


♦ Nonpreemptive




SJF is a priority scheduling where priority is the predicted next CPU burst time




Problem ≡ Starvation – low priority processes may never execute


Solution ≡ Aging – as time progresses increase the priority of the process


Round Robin (RR)

Each process gets a small unit of CPU time (time quantum). When the quantum elapses, the process is preempted and added to the end of the ready queue, and the CPU switches to the next process in the queue
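A toy simulation of the requeue-after-quantum behavior (class name RoundRobin and the burst values are illustrative):

```java
import java.util.ArrayDeque;
import java.util.ArrayList;
import java.util.Deque;
import java.util.List;

public class RoundRobin {
    // Simulate RR: each process runs for at most 'quantum' units,
    // then goes to the back of the queue if it still has work left.
    // Returns the order in which processes finish.
    static List<Integer> finishOrder(int[] bursts, int quantum) {
        Deque<int[]> queue = new ArrayDeque<>();          // {id, remaining}
        for (int i = 0; i < bursts.length; i++) {
            queue.add(new int[] {i, bursts[i]});
        }
        List<Integer> finished = new ArrayList<>();
        while (!queue.isEmpty()) {
            int[] p = queue.poll();
            p[1] -= Math.min(quantum, p[1]);              // run one quantum
            if (p[1] == 0) finished.add(p[0]);            // done: record it
            else queue.add(p);                            // preempted: requeue
        }
        return finished;
    }

    public static void main(String[] args) {
        // Bursts 3, 5, 2 with quantum 2: process 2 finishes first,
        // then 0, then 1 -> [2, 0, 1]
        System.out.println(finishOrder(new int[] {3, 5, 2}, 2));
    }
}
```

With a quantum larger than every burst, RR degenerates into FCFS, which is why quantum size matters.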

Multilevel Queue

Ready queue is partitioned into separate queues, e.g. foreground (interactive) and background (batch)


Each queue has its own scheduling algorithm


Scheduling must be done between the queues

Multilevel Feedback Queue

A process can move between the various queues; aging can be implemented this way


Multilevel-feedback-queue scheduler defined by the following parameters:


♦ number of queues


♦ scheduling algorithms for each queue


♦ method used to determine when to upgrade a process


♦ method used to determine when to demote a process


♦ method used to determine which queue a process will enter when that process needs service

Multiple-Processor Scheduling


Asymmetric multiprocessing

Only one processor accesses the system data structures, reducing the need for data sharing.

Multiple-Processor Scheduling


Symmetric multiprocessing (SMP)

Each processor is self-scheduling. All processes may be in a common ready queue, or each processor may have its own private queue of ready processes.

Multiple-Processor Scheduling


Processor affinity

Because the cache is populated with data recently accessed by the running process, a process should be kept on the same processor


♦ Soft affinity – attempted but not guaranteed


♦ Hard affinity – guaranteed, e.g. supported by Linux

Multiple-Processor Scheduling


Load Balancing

Attempts to keep the workload evenly distributed across all processors in an SMP system

Load Balancing:


push migration and pull migration


Push: a specific task periodically checks the load on each processor and, if it finds an imbalance, evenly distributes the load by moving (pushing) processes from overloaded to idle or less-busy processors.


Pull: occurs when an idle processor pulls a waiting task from a busy processor

Process Synchronization:


Race Condition

Where several processes access and manipulate the same data concurrently and the outcome of the execution depends on the particular order in which the access takes place

Critical Section Solution must Satisfy

1. Mutual exclusion. If process Pi is executing in its critical section, then no other processes can be executing in their critical sections.




2. Progress. If no process is executing in its critical section and some processes wish to enter their critical sections, then only those processes that are not executing in their remainder sections can participate in deciding which will enter its critical section next, and this selection cannot be postponed indefinitely.




3. Bounded waiting. There exists a bound, or limit, on the number of times that other processes are allowed to enter their critical sections after a process has made a request to enter its critical section and before that request is granted.

Peterson’s Solution:

Requires the two processes to share two data items:


int turn; //for process i and j


boolean flag[2]; // flag[i] or flag[j]




Process i sets flag[i] = true and turn = j, then busy-waits on the condition (flag[j] && turn == j), which holds while process j wants the critical section and has priority. Once this condition is false, process i enters its critical section. When done, process i sets flag[i] = false. (Process j runs the symmetric algorithm.)
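A sketch of Peterson's algorithm in Java (class name Peterson is illustrative). One Java-specific pitfall: a volatile boolean[] does not make its elements volatile, so two separate volatile booleans stand in for flag[2]; production code would use java.util.concurrent locks rather than busy-waiting:

```java
public class Peterson {
    // Shared data items from the card: the two flags and turn.
    // Each must be volatile so the Java memory model gives these
    // accesses the ordering Peterson's algorithm assumes.
    private volatile boolean flag0 = false, flag1 = false;
    private volatile int turn = 0;
    int counter = 0;  // shared data protected by the critical section

    void enter(int i) {
        int j = 1 - i;
        if (i == 0) flag0 = true; else flag1 = true;    // flag[i] = true: "I want in"
        turn = j;                                       // defer to the other process
        // Busy-wait while process j wants in and it is j's turn.
        while ((j == 0 ? flag0 : flag1) && turn == j) { /* spin */ }
    }

    void exit(int i) {
        if (i == 0) flag0 = false; else flag1 = false;  // flag[i] = false: done
    }

    void work(int i, int rounds) {
        for (int k = 0; k < rounds; k++) {
            enter(i);
            counter++;  // critical section: only safe under mutual exclusion
            exit(i);
        }
    }

    public static void main(String[] args) throws InterruptedException {
        Peterson p = new Peterson();
        Thread t0 = new Thread(() -> p.work(0, 100_000));
        Thread t1 = new Thread(() -> p.work(1, 100_000));
        t0.start(); t1.start();
        t0.join(); t1.join();
        System.out.println(p.counter);  // 200000: no updates were lost
    }
}
```

Without enter()/exit(), the two unsynchronized counter++ operations would interleave and lose updates, which is exactly the race condition defined above.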