89 Cards in this Set

  • Front
  • Back
Multiprogramming
More than one program appears to run simultaneously on one processor

-several programs are kept in memory and the processor switches between them

-improves processor and I/O resource utilization
Multiprocessor
A computer with more than one processor

-used to improve processing speeds and to put less stress on any one processor
Why is it impractical to use a virtual machine for a hard real-time system?
Hard real-time systems require tasks to complete by strict deadlines. A virtual machine adds a layer of emulation between the software and the hardware, introducing overhead and unpredictable delays, so deadlines cannot be guaranteed.
What role did the development of graphical user interfaces play in the personal computer revolution?
It made computers more user-friendly by providing a visual aid for accessing computer services
Hardware devices (9)
-processor
-peripheral devices
-bus
-motherboard
-registers
-hard disk
-memory
-tertiary storage devices
-cache
What hardware device(s) executes program instructions?
Processor
What hardware device(s) is (are) not required for a computer to execute program instructions?
Peripheral devices, tertiary storage devices
What hardware device(s) is a (are) volatile storage medium?
Memory, cache and registers.
What device(s) is(are) the PCB that connects a system’s processors to memory, secondary storage and peripheral devices?
Motherboard
What device(s) is (are) a persistent storage medium?
Hard disk, tertiary storage devices (and some peripheral devices)
What device(s) is (are) a set of traces that transmit data between hardware devices?
Bus
What hardware device(s) is a (are) fast memory that improves application performance?
Cache
What hardware device(s) is (are) the lowest level of memory in the memory hierarchy that a processor can reference directly?
Main memory
Sort the following list from fastest/$$$ to slowest/$: secondary storage, registers, main memory, tertiary storage, L2 cache, L1 cache.
Registers, L1 cache, L2 cache, main memory, secondary storage, tertiary storage.
Why do systems contain several data stores of different size and speed?
Filling the computer with only registers and cache would cost too much money, while filling it with only secondary and tertiary storage would be cheap but far too slow, so a compromise of several levels is needed. Also, both volatile and nonvolatile memory are essential for a working computer.
What is the motivation behind caching?
Caching speeds memory access by keeping duplicate copies of frequently referenced data in fast memory. Caches let systems store a large volume of data cheaply while still achieving low memory access times for the data used most often.
Programming Languages (5)
-machine language
-assembly language
-high-level language
-OO programming language
-structured programming language
Machine Language Definition
-written using 1’s and 0’s
Assembly Language Definition
-specifies basic computer operations using English-like abbreviations (mnemonics) for instructions
-requires a translator program (an assembler) to convert the code into something a specific processor can understand
High-Level Language Definition
-enables programmers to write code using everyday English words and mathematical
notation
-requires a translator program (a compiler) to convert the code into something a specific processor can understand
-Java/C++/Fortran/Pascal
OO Programming Language Definition
-focuses on manipulating objects (data together with the operations on them)
-requires a translator program to convert the code into something a specific processor can understand
-Java and C++
-enables programmers to write code using everyday English words and mathematical notation
Structured Programming Language Definition
-organizes programs using disciplined control structures (sequence, selection, iteration)
-requires a translator program to convert the code into something a specific processor can understand
-Fortran and Pascal
-enables programmers to write code using everyday English words and mathematical notation
Why does it not make sense to maintain the blocked list in priority order?
Because processes do not unblock in priority order, rather they unblock unpredictably in the order in which their I/O completions or event completions occur
Process Definitions
- a program in execution
-an asynchronous activity
-a task
-an entity that represents a program in execution
Why is a process hard to describe?
Its definition is sensitive to the particular environment, and a process is itself an abstraction.
Why might it be useful to add a dead state to the state-transition diagram?
-a parent process might want to retrieve data from an exited process before the system removes it entirely

-a process may contain modifications to a file temporarily placed in memory; those modifications must be applied to the copy of the file on secondary storage before the process's memory can be freed
User-Level Threads
-perform threading operations in user space
-threads are created by runtime libraries that cannot invoke kernel primitives directly
-transparent to the operating system
-multithreaded process is dispatched as a unit
-many-to-one thread mapping
-do not require the OS to support threads
-more efficient (no kernel call per threading operation)
-the runtime library, not the OS, manages all threads
Kernel-Level Threads
-map each thread to its own execution context
-kernel can dispatch a process's threads to several processors at once
-one-to-one thread mapping
-threads are recognized and scheduled individually
-each threading operation invokes the kernel
-less portable
-consume more resources
Would an algorithm that performs several independent calculations concurrently be more efficient if it used threads, or if it did not use threads? Why is this a hard question to answer?
It depends on the number of processors available and the threading model used.

On a multiprocessor with kernel-level threads, the calculations can truly run in parallel, so threading helps.
On a single processor with user-level threads, the process performs extra work such as thread spawning and context switching without gaining any parallel speedup.
Give several reasons why the study of concurrency is appropriate and important for students of operating systems.
As a resource manager, the operating system must enable its resources to be accessed by several threads that execute asynchronously and concurrently. The study of concurrency is important because it enables students to recognize when access to shared resources by concurrently executing threads must be synchronized, and how to implement that synchronization.
Explain why false: When several threads access shared information in main memory, mutual exclusion must be enforced to prevent the production of indeterminate results.
If all of the threads ONLY read the shared information, then the threads may access the information concurrently without mutual exclusion.
Explain in detail how binary semaphores and binary semaphore operations can be implemented in the kernel of an operating system.
-the kernel allocates a bit to represent the value of the semaphore and a queue to organize the threads blocked on the semaphore
-the queue can be implemented as a linked list of PCBs
-when a thread calls the P operation, the kernel disables interrupts; if the semaphore value is 0, the thread is added to the end of the blocked queue and removed from the ready queue, then interrupts are reenabled
-if the semaphore value is 1, the thread sets the value to 0, then reenables interrupts
-when a thread calls the V operation, the kernel first disables interrupts, then checks whether the blocked queue is empty; if it is, the semaphore value is set to 1 and interrupts are reenabled
-if a thread is blocked on the semaphore, the kernel removes that thread from the blocked queue, places it in the ready queue, then reenables interrupts
Monitors
-Use of information hiding
- clients of a monitor cannot directly access its variables, nor can clients access the internal monitor procedures.
Semaphores
-contains a protected variable that is altered only by the P and V operations
-used for enforcing thread synchronization and mutual exclusion
When a resource is returned by a thread calling a monitor, the monitor gives priority to a waiting thread over a new requesting thread. Why?
To avoid indefinite postponement of waiting threads.
Why is it considerably more difficult to test, debug, and prove program correctness for
concurrent programs than for sequential programs?
Concurrent programs have far more possible execution sequences than sequential programs because their actions can be interleaved in many orders. Subtle timing dependencies are also a big issue: debugging is hard when the precise sequence of events that occurred cannot be reproduced.
What is deadlock?
A situation when multiple processes are waiting for an event that will never occur.
What is indefinite postponement?
- a state that occurs when a certain timing and ordering of events prevents a process or thread from making progress for an indeterminate amount of time

-often arises under priority schemes rather than FCFS: high-priority threads keep acquiring a resource while low-priority threads never get it
How does indefinite postponement differ from deadlock?
A process under indefinite postponement still has a chance to be executed. A process in deadlock will never finish executing unless the operator or the system takes action.
How is indefinite postponement similar to deadlock?
Both indefinite postponement and deadlock prevent the process from making progress. The system may not be able to determine whether a process is indefinitely postponed or deadlocked.
State the four necessary conditions for a deadlock to exist.
Mutual Exclusion Condition
Wait-For Condition
No-Preemption Condition
Circular-Wait Condition
Mutual Exclusion Condition
Only one process can access a resource at a time

-needed so information won't be lost (for example, two processes modifying the same data at once)
Wait-For Condition
A process holds resources already allocated to it while waiting for an additional resource.

-needed because if processes released their held resources while waiting, other processes could acquire them and the deadlock could not form
No-Preemption Condition
Resources cannot be forcibly removed from the processes holding them so that others can use them.

-needed because forcibly removing a resource from a process could cause loss of work or data
Circular-Wait Condition
-processes waiting for resources form a circular chain
-needed because each process contends for a resource held by the next process in the chain, while that process in turn waits for a resource held by an earlier one
Three Levels of Schedulers
-High Level Scheduler
-Intermediate Scheduler
-Dispatcher
High Level Scheduler
-admissions level scheduler
-determines whether to admit another job to system to compete for resources
-dictates degree of multiprogramming
Intermediate Scheduler
-decides whether to temporarily suspend processes or resume suspended ones
-acts as a buffer between admitted jobs and the assignment of processors
Dispatcher
-switches a process from the ready state to the running state (there may be one ready list or more)
-assigns a processor to processes
-may assign priority also
True or False:
A process scheduling discipline is preemptive if the processor cannot be forcibly removed from a process.
False. Preemption means the processor can be forcibly removed.
True or False:
Real-time systems generally use preemptive processor scheduling.
True. The key to the success of a real-time system is its ability to meet processes' deadlines.
True or False:
Timesharing systems generally use nonpreemptive processor scheduling.
False. They use preemptive scheduling to guarantee a fast response to new requests. The goal is to service trivial, I/O-bound, interactive requests immediately, ahead of lengthier requests that can receive lower levels of service.
True or False:
Turnaround times are more predictable in preemptive than in nonpreemptive systems.
False. In a nonpreemptive system, once a process gets a processor, it will run to completion; there is no uncertainty caused by the possibility of repeatedly being
preempted by other processes.
True or False:
One weakness of priority schemes is that the system will faithfully honor the priorities, but the priorities themselves may not be meaningful.
True. The point here is that the assignment of priorities in such a system is an important task. If, indeed, we create a priority-honoring system, then we should put substantial effort into ensuring that priorities are assigned meaningfully. If not, then we have a mechanism that works properly yet delivers questionable results.
Why can FIFO not be an appropriate processor scheduling scheme for interactive users?
With FIFO, once a process is initiated it runs to completion, so a short interactive request can be stuck waiting behind a long-running job. Interactive users expect response times far too fast for that to be acceptable.
How memory fragmentation occurs
-Contiguous allocation
-Noncontiguous allocation
-Fixed Partition Multiprogramming with absolute translation and loading
-Fixed-partition multiprogramming with relocatable translation and loading
- Variable-partition multiprogramming
-Multiprogramming with memory swapping
Example of memory fragmentation: Noncontiguous allocation
Several chunks of a job do not exactly fill the available holes
Example of memory fragmentation: Fixed-partition multiprogramming with absolute translation and loading
A job does not fill its designated partition; a partition is empty and has no jobs waiting for it, while jobs that would fit in that partition have been designated for busy partitions.
Example of memory fragmentation: Fixed-partition multiprogramming with relocatable translation and loading
A job does not fill the partition it occupies; a small job in one partition could also fit in another open partition, but a second arriving job is too large for the open partition even though it would fit in the occupied one, so the arriving job must wait.
Example of memory fragmentation: Variable-partition multiprogramming
After contiguously loading jobs, there will normally be one memory hole remaining; as contiguously loaded jobs randomly terminate, holes will be dispersed throughout memory.
Example of memory fragmentation: Multiprogramming with memory swapping
The job currently swapped in does not fill available memory.
Discuss the motivations for multiprogramming.
It provides better device utilization: a particular job typically uses only a subset of the available I/O devices, and I/O operations take little processor time, so while one job waits for I/O the processor can execute other jobs.
What characteristics of programs and machines make multiprogramming desirable?
Ideally:
-a large portion of the jobs to be run should be I/O bound
-the jobs should use sharable devices or different dedicated devices
-the processor should be fast enough to support the overhead implicit in a multiprogramming operating system (such as interrupt processing and context switching)
-sufficient main memory should be available to hold the active jobs, making the multiprogramming efficient and thus worthwhile
-the system should have a sufficiently rich collection of devices to support the needs of the jobs being multiprogrammed at any time
In what circumstances is multiprogramming undesirable?
A system that processes only lengthy processor-bound jobs might produce better throughput when the jobs are executed one at a time in sequence on a dedicated processor than when they are multiprogrammed; the context-switching overhead in a multiprogrammed processor-bound system reduces throughput. The context-switching overhead also exists in an I/O-bound system, but there the I/O devices can function in parallel with the processor(s).
Advantages of noncontiguous memory allocation
-Programs can run even if memory requirement is larger than the largest available memory area.
Disadvantages of noncontiguous allocation
-this type of allocation is more costly and is harder for the hardware and OS to implement
What are the techniques for mapping virtual addresses to physical addresses under paging?
-Direct mapping
-Associative Mapping
-Direct/Associative Mapping
Direct mapping
-the page table contains one entry for every page in the process's virtual memory space
Associative mapping
-the entire page table is held in a content-addressed associative memory; when the process references main memory, every entry of the map is searched simultaneously for the page's frame
Direct/Associative Mapping
The TLB holds only a small number of the most recently referenced page entries; the mapping mechanism searches the TLB first, then falls back to the full page table in memory.
Memory Management Strategies
-Fetch
-Placement
-Replacement
Fetch strategy
-bring a page from secondary storage into primary memory
-done either on demand or by anticipation (prefetching)
Placement strategy
Place a page in a page frame in primary memory
Replacement strategy
If primary memory is full, choose a page to replace. Do this in such a manner that the page being replaced is unlikely to be needed in the near future.
List several reasons why it is necessary to prevent certain pages from being paged out of main memory.
-the page may be referenced often
-the page may have just been brought in
-the page may be part of a real-time process
-the page may currently be being modified
Large Page Size vs. Small Page Size
LPS
-increases the range of memory the TLB can reference -> more TLB hits
-reduces time-consuming I/O operations
-reduces wasted memory from table fragmentation

SPS
-helps a process establish a smaller/tighter working set -> more memory available for other processes
-reduces internal fragmentation
What are the essential goals of disk scheduling? Why is each important? (2)
-maximize throughput, so that as many requests as possible are completed and the disk is used efficiently
-minimize the variance of response times, to provide predictability and avoid indefinite postponement
What makes a given disk scheduling discipline fair?
A fair discipline does not allow later requests to cut ahead of requests already waiting in the queue, so no request is indefinitely postponed.
Just how important is fairness compared to other goals of disk scheduling disciplines?
Fairness is important, but disk scheduling disciplines are designed primarily to maximize throughput, so fairness is often traded against performance.
Latency optimization usually has little effect on system performance except under extremely heavy loads. Why?
Latency optimization is worthwhile only when there is a high probability of having multiple requests to the same cylinder. Under light to moderate loads, the request queue
typically contains one or zero requests per cylinder.
Give several reasons why it may not be useful to store logically contiguous pages from a process’s virtual memory space in physically contiguous areas on secondary storage.
-secondary storage itself becomes fragmented, so physically contiguous regions of the required size may not exist
-virtual address spaces are dynamic, so compaction would have to be performed constantly
-there is no significant gain from doing this, because pages are transferred individually anyway
What are the motivations for structuring file systems hierarchically?
- a file can be found in logarithmic rather than linear time, and organization becomes easier and more structured by grouping related files and content
Queued Access Methods
-used when the sequence in which records are to be processed can be anticipated, such as in sequential and indexed sequential processing
-perform anticipatory buffering and scheduling of I/O operations to try to have the next record ready for processing when the previous record has been processed
Basic Access Methods
used in situations where the sequence of records to be processed cannot be anticipated, such as in direct access applications.
Why is a precise statement of security requirements critical to the determination of whether a given system is secure?
There are many different views of security. This issue is extremely sensitive to the nature of the applications and to the consequences of a security breach.
Capabilities List vs. Access Control List
An access control list is attached to an object. A capabilities list is attached to a subject.
Why are denial-of-service attacks of such great concern to operating systems designers?
Operating systems designers want their machines to keep operating and servicing users at a reasonable pace. Denial-of-service attacks can bring a system to its knees, possibly preventing service in life-threatening situations.
List a few types of denial-of-service attacks.
-a malicious user may flood a server with request packets so that other users cannot receive information from the server
-a denial-of-service attack can involve modifying routing tables to redirect traffic.
Why is it difficult to detect a distributed denial-of-service attack on a server?
It is difficult to determine whether many legitimate users wish to access data from a
server, or a malicious user is causing a distributed denial-of-service attack.