297 Cards in this Set

  • Front
  • Back
What is an OS
Software that makes the computer easier to use
Abstraction
The desired illusion
Mechanism
It is what creates the illusion
Policy
How to make the illusion work well to meet a goal
OS v. Kernel
OS: the system software on the machine (as opposed to the apps). Kernel: the core that makes it all work
All progs depend on Kernel
Loads & runs programs. Accessed via system calls
Kernel works with hardware
To access device drivers and handle interrupts
Kernel allocates resources
CPU time, Memory, controls I/O devices
Kernel provides abstract machine
User interface, resources and simplicity
Kernel is located
below the running progs and above the hardware
What is a resource
Anything that allows work to get done
Phys: CPU
Abs: thread
Phys: Memory
Abs: Segment
Phys: Disk
Abs: File
Phys: Network
Abs: Connection
Phys: Display
Abs: Window
Phys: Keyboard/Mouse
Abs: stream
Phys: Computer
Abs: process
What if no kernel
All you have is bare hardware. Minimal Kernel is needed
Virtual Processors
By Switching CPU use
Multiple Virtual Memories
By Memory partitioning and re addressing
Most basic kernel function
run a program
Basic resources for a processor
CPU, Memory & I/O
Machine state for CPU
PC, SP, general-purpose registers
Machine state for Memory
Code, Glob Vars, Stack of records
Process Mem Structure
Text then data then stack
Text holds
Code: program Instr
Data holds
Glob Vars and Heap (dynam allocation)
Stack Holds
Activation Records. Has automatic growth and shrinkage
Stack of Activation Records
One exists per process
Activation Record Stores
Where to return, link to previous record, local vars
Stack Pointer
points to top. Return instr relies on the SP
Yield(p)
Let process p run (give it CPU). Requires context switching
How to Context Switch
1st: save context of current proc. 2nd: load context of next proc
How to Load Context
Load gen regs, sp, etc. Then load PC (MUST BE LAST!)
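A minimal sketch of this ordering in C-style pseudocode (the context fields and the helpers save_registers, load_registers, and jump_to are illustrative assumptions; real kernels do this in assembly):
    /* Hypothetical saved-context layout; field names are illustrative. */
    struct context {
        long gp[8];   /* general-purpose registers */
        long sp;      /* stack pointer */
        long pc;      /* program counter -- restored last */
    };
    void context_switch(struct context *cur, struct context *next) {
        save_registers(cur);    /* 1st: save context of current proc */
        load_registers(next);   /* 2nd: load gen regs, sp of next proc */
        jump_to(next->pc);      /* loading the PC must be last: it transfers control */
    }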
State of Proc’s progress
Running, Ready, or Blocked
Ready State
dispatch to Running State
Running State
preempt to Ready State or sleep to Blocked State
Blocked State
wake up to Ready State
Dispatch
Allocate CPU to a process
Preempt
Take away CPU from process
Sleep
process gives up CPU to wait for event
Wakeup
event occurred, make process ready
When does the kernel run
System calls or hardware interrupt
Is the Kernel a process
No; it supports procs and devices and runs as an extension of the running proc
How does Kernel get control
Running procs can give up control voluntarily
Blocking System call example
read()
How to block
call yield() to give up cpu
How is cpu forcibly taken away
Preemption!
When timer expires
interrupt is generated and hardware forces control to kernel
What hardware does during context switch
user to kernel mode
What kernel does during context switch
save context of last running proc. select a ready proc. Restore context of selected proc. Return from interrupt
What is a thread
A single sequential execution of instructions
Is a thread a part of a process?
Yes, corresponds to execution of inst sequence
Where does a thread live
In the memory of a proc
To the user a thread is
unit of parallelism
To the kernel a thread is
unit of schedulability
Reconsidered defn of a proc
prog in execution. Has thread + other resources
Implementing threads at user level
the kernel is unaware of the threads
Threads at kernel level
Kernel can sched threads on diff CPUs
Thread library
part of the kernel
Best policy to schedule procs on CPU
No best policy; it depends on the goals
FCFS sched
non preemptive. Bad for short procs. Avoids starvation.
RR sched
preemptive. More overhead than FCFS. Avoids starvation
SPN
Assume service time known. best for nonpreemptive. Allows starvation
SRT
Assume service time known. Best for preemptive. Allows starvation
MLFQ
Preemptive. Complex, adaptive, highly responsive. Allows starvation
MLFQ alg
Select a proc from the lowest nonempty queue. Run it for 2^k quanta (k = queue number). If it uses its full time, move it to the next higher queue. If the proc blocks or is preempted early, return it to its previous queue or to queue zero
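A rough C sketch of this selection rule (the queue count, the struct proc fields, and the dequeue/enqueue/run_for helpers are assumptions for illustration):
    #define NQUEUES 8
    struct proc *mlfq_schedule(void) {
        for (int k = 0; k < NQUEUES; k++) {                   /* lowest nonempty queue first */
            struct proc *p = dequeue(k);
            if (p == NULL)
                continue;
            run_for(p, 1 << k);                               /* run for 2^k quanta */
            if (p->used_full_quantum)
                enqueue(k < NQUEUES - 1 ? k + 1 : k, p);      /* used its time: next higher queue */
            else
                enqueue(p->prev_queue, p);                    /* blocked/preempted: previous queue (or 0) */
            return p;
        }
        return NULL;                                          /* nothing runnable */
    }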
Priority Sched
select proc with highest priority. Sched based on criteria
Fair Share(Propor sched)
Each proc gets a predetermined share of CPU time; choose the proc that has used the least relative to its share
Utilization
Fraction of CPU time received
Compute Ratio
Utilization divided by the fraction of CPU promised
If compute ratio < 1
Proc getting less than its fair share
If compute ratio >1
Proc getting more than its fair share
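Worked example (numbers are illustrative): a proc promised 25% of the CPU that has so far received 20% has compute ratio 0.20 / 0.25 = 0.8 < 1, so it is behind its fair share and should be favored next.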
Types of Real Times systems
Hard v. Soft & Periodic v. aperiodic
Types of real time scheduling
Earliest Deadline & Rate Monotonic Scheduling
Each proc in a periodic process has
C (compute time), T(period) and U(util)
Sum of utilizations cannot
exceed 100%
Earliest Deadline
works for periodic and aperiodic. Requires sorting O(nlogn)
ED meets all deadlines when
sum of util is <= 100%
RMS
Prioritize based on rates. Rate = 1/T
RMS deadline guarantee to be met if
U1+U2+…+Un <= n(2^(1/n)-1)
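Worked example: for n = 2 the bound is 2(2^(1/2) - 1) ≈ 0.83, so RMS meets all deadlines whenever total utilization is at most about 83%; as n grows the bound approaches ln 2 ≈ 0.69.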
RMS is limited to
Periodic Processes
Process Synchronization
When one proc has to wait for another
Use of synchro
Prevent race condition. Wait for resources to be available
Critical Section
Code that must run atomically with respect to other procs
Enforce Mutual Exclusion by
having only one proc in a crit section
1st rule of mutex
No 2 procs can be in their crit sec at the same time
2nd rule of mutex
No proc outside its crit sec may stop other procs from entering their crit secs
3rd rule of mutex
No proc should wait forever to enter its crit sec
4th rule of mutex
No assumptions can be made about speeds or # of CPUs
How to achieve mutex
surround crit sec with entry/exit code
Mutex Entry acts like a
barrier
Mutex Exit should
release other entry barriers
Mutex Software lock
Breaks 1st rule of mutex
Mutex turn taking
Breaks rule 2 of mutex
Mutex state intention
Breaks rule 3 of mutex
Mutex hardware sol: TSET
Works; uses a hardware test-and-set instruction
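A minimal C sketch of a test-and-set entry/exit barrier, using the C11 atomic flag as a stand-in for the hardware TSET instruction:
    #include <stdatomic.h>
    atomic_flag lock = ATOMIC_FLAG_INIT;
    void mutex_entry(void) {
        while (atomic_flag_test_and_set(&lock))
            ;   /* busy-wait until the flag was previously 0 (gate open) */
    }
    void mutex_exit(void) {
        atomic_flag_clear(&lock);   /* release: lets another proc through the entry barrier */
    }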
Semaphore
Has an integer value and a list of waiting procs; works like a gate
Semaphore alg
if sem > 0 gate is open. Else gate is closed.
Negative sem
There are waiting procs
Semaphore Operations
sem s, init(sem s, int n), wait(sem s) & signal(sem s)
Can you test a sem’s value
NO
Mutex sem
sem initialized to 1; only 1 proc can enter the crit sec
Synch sem
sem init to 0. Procs wait for each other
Pure Synchro
no way for a proc to tell whether it blocked; no info is transferred
Wait(sem s) alg
decrement s. If s < 0, block the proc and associate it with s
Signal(sem s) alg
increment s. if blocked proc, unblock any of them
Sem data structure has
int value. List of waiting procs.
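A C-style sketch of these operations (the queue helpers, block/wakeup calls, and the current pointer are assumptions; the bodies themselves must run atomically, e.g. under a TSET lock):
    struct sem {
        int value;              /* negative value = number of waiting procs */
        struct queue waiting;   /* list of waiting procs */
    };
    void sem_wait(struct sem *s) {
        s->value--;                         /* decrement s */
        if (s->value < 0) {
            enqueue(&s->waiting, current);  /* associate the proc with s */
            block(current);                 /* give up the CPU until signaled */
        }
    }
    void sem_signal(struct sem *s) {
        s->value++;                         /* increment s */
        if (s->value <= 0)
            wakeup(dequeue(&s->waiting));   /* unblock one waiting proc */
    }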
Wait and signal must be
Atomic operations
The bodies of wait and signal are
critical sections
Mechanisms for sem
TSET lock or Peterson's solution
When is busy waiting ok
when it occurs at a low level and for brief, known durations
IPC requires
data transfer & synchronization. Need mechanism for both
Examples of cooperating procs
Pipeline, client/server, parent/child
Prod/Consum needs
Synchro and mutex for crit secs
Monitors
Prog language construct for IPC.
Monitors bring structure to IPC by
localizing crit secs and synch
Message passing
OpSys mechanism for IPC
MPI data transfer
into and out of kernel message buffers
MPI synch
can't receive a message until a message is sent
Who should messages be sent to
ports not procs
What if a proc wants to receive from anyone
pid = receive (*, msg)
Sync(blocking) v asynch(nonblocking)
send non-blocking, receive is blocking
Deadlock
set of procs is permanently blocked
Conditions for deadlock
mutex, hold and wait, no preemption, circular wait
Ddlck mutex
only one proc can use a resource at a time
Ddlck hold and wait
proc holds resource while waiting for another
Ddlck no preemption
cant take a resource from a proc
Ddlck circular wait
the waiting processes form a cycle
Ddlck prevention
remove any deadlock conditions
Ddlck avoidance
avoid ddlck situations
Ddlck detection
don't try to prevent deadlock; if it happens, resolve it
DL prevention mutex
relax when sharing is possible
DL prevention hold and wait
get all resources (wait till all are free)
DL prevention no preemption
allow resources to be taken away
DL prevention circular wait
order all resources, force ordered acquisition
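A small C illustration of ordered acquisition breaking circular wait (the two locks and their ranking are made up for the example):
    #include <pthread.h>
    pthread_mutex_t lock_a = PTHREAD_MUTEX_INITIALIZER;   /* rank 1 */
    pthread_mutex_t lock_b = PTHREAD_MUTEX_INITIALIZER;   /* rank 2 */
    /* Every thread acquires in rank order (a before b), so a wait cycle cannot form. */
    void do_work(void) {
        pthread_mutex_lock(&lock_a);
        pthread_mutex_lock(&lock_b);
        /* ... use both resources ... */
        pthread_mutex_unlock(&lock_b);
        pthread_mutex_unlock(&lock_a);
    }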
Text
code of the program
Data
static variables
Stack
automatic variables
Other Mem
shared mem regions
Compiler Generates
Memory addresses. Ranges for text, data, stack
What compiler doesn’t know
physical mem size, regions of phys mem
Supporting mult procs requires CPU and
Memory
Fragmentation
unused space between procs; can't be allocated
Mem allocation with no holes
compaction or break into pieces
Buddy System
Partition mem into powers of 2
Addressing Problem
compiler generates mem references, but the proc's location in phys mem is unknown
Protection problem
modifying another procs mem
Space problem
the more procs, the less mem each can have
Kernel occupies
lowest addresses
Logical address
starts at 0, compiler generated, indep of phys mem location
Converting logical to phys
SW: at load time, HW: at access time
Base registers filled with
start address
To translate logical address
add base
Is address < bound
YES: add to base, NO: invalid address TRAP
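A C sketch of base/bound translation (register values are passed in; trap_invalid_address is a hypothetical trap into the kernel):
    unsigned long translate(unsigned long logical,
                            unsigned long base, unsigned long bound) {
        if (logical >= bound)
            trap_invalid_address();   /* invalid address: TRAP */
        return base + logical;        /* valid: add base */
    }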
On every context switch
the kernel loads the base/bound registers; only the kernel does this, and the kernel is protected
Benefit of kernel loading registers
proc protect & each proc is separately allocated
To create a proc you must
load it into mem
What to load based on prog
TEXT, DATA, STACK
If there is enough fragmented space
mem allocation still may not succeed
Segmented address space
mem partitioned into segments, segments have diff size
Paged address space
partition into pages, pages are same size
Segment
linearly address mem, contains code, data, stack
Each segment has
identifier segment s and size Ns
Logical addresses are in the form (s,i)
offset i < Ns, segment s
How to translate logical to phys address
use segment table ST
Segment s into base physical address b
b = ST(s) then add b + i
Physical address =
ST(s) + i
How many segment tables per proc
one
Segment table entry has
V: valid bit, Base: segment loc, Bound:seg size, perm
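A C sketch of translating a logical address (s, i) through the segment table, assuming the entry layout on the previous card (trap_invalid_address is hypothetical):
    struct st_entry {
        int valid;             /* V: valid bit */
        unsigned long base;    /* segment location */
        unsigned long bound;   /* segment size Ns */
        int perm;              /* permissions */
    };
    unsigned long seg_translate(struct st_entry *ST, int s, unsigned long i) {
        if (!ST[s].valid || i >= ST[s].bound)
            trap_invalid_address();    /* bad segment or offset out of range */
        return ST[s].base + i;         /* physical address = ST(s) + i */
    }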
Where are the segment table base and size registers located
In memory
Pros of Segmentation
each segment is located and grows/shrinks independently, and is separately protected
Con of segmentation
Variable-size allocation causes external fragmentation
Frame
physical unit of information, page fits exactly into a frame
Form of logical paged address
(p,i): p is page num, 0 <= p <= NL-1; i is offset
Size of logical paged address space
NL*pagesize, NL=max num of pages
Segment v Page
segments are a good logical unit; pages are a good physical unit
Advantage of locality of reference
most refs are to a small num of pages
Keep translations of common refs in
High speed mem
TLB
Translation look aside buffer
Translation cost determined by
speed of mem, speed of TLB, hit ratio
Hit Ratio
fraction of refs satisfied by the TLB (typically around 95%)
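Worked example of translation cost (timings and hit ratio are illustrative): with a 1 ns TLB, 100 ns memory, and a 95% hit ratio, the effective access time is about 0.95 x (1 + 100) + 0.05 x (1 + 100 + 100) ≈ 106 ns, versus 200 ns if every reference needed a page-table lookup in memory.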
Larger the TLB
higher hit rate, slower response, greater expense
TLB has major effect on perf
must flush on context switch
If valid bit is off
page fault trap to kernel
Page fault
find the page on disk, read it into a free frame, record it in the page table, set the valid bit
What needs to be in mem
only parts being ref’d, other parts can be on disk, bring in when needed
Virtual mem
illusion of large mem, keep only part of logical mem in phys mem
Virt mem based on paging
VM->page table->phys mem
Cost of page fault
expensive
Principle of locality
not all pieces of mem are ref’d uniformly over time
Referencing a cluster in space/time
allows prediction based on past
Page replacement policy
remove a page not in the current locality of reference
Main idea of page replacement
which pages to move, when to move them
page replacement FIFO
select oldest page, simple, poor performance
page replacement OPT
select page to be used furthest in time, optimal but need future knowledge
page replacement LRU
select page least recently used, works given locality, costly
apprx LRU clock alg
select old & not recently used
apprx LRU clk alg ref bit
when a frame is filled with a page, the OS sets ref bit = 0; when the page is accessed, hardware sets ref bit = 1
how clk works
frames arranged in a circle; if ref bit == 0, select that frame, else set ref bit = 0 and advance the clock; stop when a frame is found
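A C sketch of this clock loop (the frame array, ref_bit field, and hand variable are assumptions for illustration):
    #define NFRAMES 1024
    struct frame { int ref_bit; /* ... page info ... */ };
    struct frame frames[NFRAMES];
    int hand = 0;                             /* frames arranged in a circle */
    int clock_select(void) {
        for (;;) {
            if (frames[hand].ref_bit == 0) {  /* old and not recently used */
                int victim = hand;
                hand = (hand + 1) % NFRAMES;
                return victim;
            }
            frames[hand].ref_bit = 0;         /* give it a second chance */
            hand = (hand + 1) % NFRAMES;      /* advance the clock */
        }
    }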
resident set
the proc's pages that are in phys mem
local resident set
limit frame selection to the requesting proc; inefficient: procs may hold unused frames
global resident set
select any frame from any proc; efficient: resident sets grow/shrink accordingly
isolation v no isolation in resident set
local isolates the effects of paging behavior; global lets one proc negatively affect others
multiprogramming level
number of procs in phys mem(non empty resident sets)
resident set contains
the working set
dennings working set model
working set W(t, delta): pages ref'd during the last delta units of process time
if working set doesn’t fit
swap the proc out
on pg flt there may be no free pages
page something out to make room, proc must wait in meantime
maintain pool of free pages
faulting proc must only wait for the page-in
paging daemon periodically
replenishes the pool; when the pool falls below a threshold, it is replenished
VM efficiency
OPT>=LRU>=CLOCK>=FIFO
If working set cannot be resident
swap out
File
logical unit of storage, container of data, accessed by <name, region within file>
File system
a structured collection of files, access control, name space, persistent storage
File sys requirement
longterm storage, persistent, large, sharing, security
File attributes
Type, Times, Sizes, Structure, access control
File Operations
Create, prep for access, access, search, attributes, mutex, name management
Read/Write model
fd = open(fname, usage); nr = read(fd, buf, size); nw = write(fd, buf, size); close(fd)
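The same model as a small, self-contained C program using the POSIX calls (the file name is arbitrary):
    #include <fcntl.h>
    #include <unistd.h>
    int main(void) {
        char buf[512];
        int fd = open("notes.txt", O_RDONLY);    /* prep for access */
        if (fd < 0)
            return 1;
        ssize_t nr = read(fd, buf, sizeof buf);  /* read up to 512 bytes into buf */
        close(fd);                               /* done with the file */
        return nr < 0;
    }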
Region1 file Sys metadata FSM
info about the file system
Region2 file metadata FM
file control blocks
Region3 Data Blocks DB
file contents
FSM
sizes (files in use, data blocks, free entries); free lists / bit maps (file control blocks, data blocks)
FM
info about the file; ref'd by number/index; contains attributes and refs to data blocks
Contiguous blocks
single sequence of blocks
Extents
groups of contiguous blocks
Non-contiguous blocks
blocks individually named
Unix block Map
13 pointers: 10 direct refs, 1 indirect ref (N data blocks), 1 doubly-indirect (N^2), 1 triply-indirect (N^3)
Unix block map N depends on
how many pointers fit in a block
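Worked example (block and pointer sizes are assumptions): with 4 KB blocks and 4-byte block pointers, N = 4096 / 4 = 1024 pointers per block, so the map covers 10 + 1024 + 1024^2 + 1024^3 blocks, roughly 4 TB of file data.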
Keeping track of free blocks
free block map, dbly linked list, bit map
Filename to file control block
Parse name, each branch corresponds to a folder
Why disks for Filesystem storage
persistent, random access, cheap, but slow(mechanical)
Reduce disk access by
reading mult blocks in 1 access, maintain a block cache
SSD
NAND-based flash mem, non volatile, unaffected by shock, mag field, no noise
SSD cons
limited number of writes
SSD pros
~10x faster (though also ~10x more expensive and ~10x less capacity)
FS buf cache
on read or write, check if the cache holds the block; if not, get it from disk and evict the LRU block
Critical blocks
FSM, directories, i-nodes, free block list
Clustering strategy
Place related blocks close to each other. Reduce disk head movement/seektime
Block size
larger blocks give better throughput; smaller blocks waste less space
Tech trends
disk density is growing faster than disk speed. Make disk blocks larger
What is I/O
input from attached device to CPU. Output from CPU to device
I/O probs
so many diff types of I/O devices. Wide range of speed, operation, data transfer
Device drivers
encapsulates device-dependent code. Contains device specific register read/writes
DD implements standard interface
open() close() read() write()
DD interrupt handlers
executes when i/o completes. Updates data structure, wakes up waiting procs
CPU and device controller communicate
with I/O instructions or memory instructions (memory-mapped I/O)
I/O SW structure layers
User, kernel, hardware
I/O user layer
user I/O stdio library
I/o Kernel Layer
device independent I/O, and the device drivers
I/O HW layer
Device controllers (used by the kernel's drivers) and the devices themselves
Kernel enforces protection
have kernel own resources, kernel can allow temp access
To access resource
a process must ask for it, then kernel can test whether access should be given
Once a proc has access to a resource
kernel can prevent others from gaining access
Mechanism to protect kernel
memory protection, protected mode of operation (kernel v. user), clock interrupt
All kernel protect mechs must be
supported by hardware
Protection
how to limit access to a resource
Resource
object that requires protection
Domain
set of (resource, permission) pairs
Process
access resources within a domain
Access Control list ACL
for each resource, a list of (domain, permission) pairs
Capability List CL
for each domain, a list of (resource, permission) pairs
ACL
Like a registry: if your name is on the list, access is OK. Inefficient (must look up on every access); easy to revoke
CL
Like a ticket: if you have it, you get access. Efficient (just present the capability on access); hard to revoke
Unix permissions
each file has set of permissions. r/w/x owner/group/world
SETUID
if setuid is set, the proc runs in the domain of the file's owner, i.e., with the owner's rights
Security
Confidentiality, integrity, authenticity, availability
Security threats
Interception, interruption, modification, fabrication
Sec Policy
specifics of security for a particular system. What each entity can do to each object
Sec Mechanisms
Encryption, authentication, authorization, auditing
User Authentication
Passwords, encrypted passwords
Challenge/response Protocol
user and system both know an algorithm; the system provides an input, the user sends back the output
Trojan Horse
prog that contains hidden malicious code, can harm environment under user’s name
Trap Door
secret access point in prog
Internet worm
prog that copies itself over the network
Virus
Code attached to legitimate prog. When prog runs the virus runs, causes damage, spreads to progs
Denial of Service
Prevent others from using the system by using up resources or bombarding it with requests
Intrusion detection
look for patterns of attack behavior, look for unusual behavior. Sol: use audit log
Pub Key Encrypt
A encrypts using B's pub key; B decrypts using its private key (e.g., RSA)
Topology networks
Ring, star, bus, graph
Geographic network
LAN, WAN
Circuit Switching
Establish a path, then send data. Resources are reserved, providing performance control
Packet Switching
Forward packets hop to hop. Fair sharing despite bursts (statistical multiplexing)
Protocol?
Get message from sender to receiver using an agreed message format and transfer rules
Data
what the sender wants the receiver to know
Header
info to support protocol. Source and dest addresses. State of protocol operation. Error control
OSI presentation
standardized formats. Eg: jpeg
OSI session
start/end transport, synchro
OSI transport
end-to-end reliable connections
OSI network
routing
OSI Link
framing, detecting errors
OSI Physical
0s and 1s over a wire
3 levels of address
Domain: cs.ucsd.edu, logical IP, physical Ethernet
Address resolution
mapping higher level name to lower level name
Size of address space
IPv4: 32-bit = ~4 billion addresses. IPv6: 128-bit = ~3.4*10^38 addresses
Error control
parity (odd/even), CRC (cyclic redundancy check), checksum, ARQ
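A small C sketch of two of these codes, even parity over one byte and a simple additive checksum (CRC and ARQ are omitted; real protocols such as IP use a ones'-complement sum):
    #include <stddef.h>
    #include <stdint.h>
    /* Returns the even-parity bit: 1 if the byte has an odd number of 1 bits. */
    int even_parity_bit(uint8_t b) {
        int ones = 0;
        for (int i = 0; i < 8; i++)
            ones += (b >> i) & 1;
        return ones & 1;
    }
    /* Simple additive checksum over a buffer. */
    uint16_t checksum(const uint8_t *data, size_t len) {
        uint32_t sum = 0;
        for (size_t i = 0; i < len; i++)
            sum += data[i];
        return (uint16_t)sum;
    }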