50 Cards in this Set

  • Front
  • Back
Using the page-fault-frequency scheme for controlling thrashing, suppose we have an upper limit of 20 page faults per 100 references and a lower limit of 4 page faults per 100 references. A process is currently experiencing 22 page faults per 100 references. To prevent the process from thrashing we should:
Increase its page frame allocation.

Generally, increasing the number of page frames allocated to a process reduces its page-fault frequency. Since 22/100 exceeds the upper limit of 20/100, the fault rate must be brought down, so the allocation is increased.
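
A minimal sketch of the page-fault-frequency control idea, assuming a hypothetical adjust_allocation helper and a fault rate measured per 100 references:

    def adjust_allocation(frames_allocated, faults_per_100, upper=20, lower=4):
        # Page-fault-frequency control: keep the measured fault rate between
        # the lower and upper limits by resizing the frame allocation.
        if faults_per_100 > upper:
            return frames_allocated + 1          # too many faults: grant another frame
        if faults_per_100 < lower:
            return max(1, frames_allocated - 1)  # too few faults: reclaim a frame
        return frames_allocated                  # within limits: leave it alone

    print(adjust_allocation(frames_allocated=10, faults_per_100=22))  # 11 (allocation grows)
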
Assume a main memory of 1024KB is partitioned into 32 fixed-size partitions, each 32KB in size. Suppose that every process that will run on this system is between 1KB and 20KB. What is the minimum and maximum size of internal fragmentation that can occur?
Min = 12KB
Max = 31KB

Internal fragmentation is the unused space inside an allocated partition. A 1KB process wastes 32 − 1 = 31KB (the maximum); a 20KB process wastes 32 − 20 = 12KB (the minimum).
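
The same arithmetic as a quick check (the helper name is made up for illustration):

    PARTITION_KB = 32

    def internal_fragmentation_kb(process_kb):
        # Unused space inside the fixed-size partition holding the process.
        return PARTITION_KB - process_kb

    print(internal_fragmentation_kb(20))  # 12 KB: minimum (largest allowed process)
    print(internal_fragmentation_kb(1))   # 31 KB: maximum (smallest allowed process)
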
Address binding can be done at:
Compile/Assembly, Load, and Execution time.

Not during program development time.
The primary advantage of a virtual memory system is:
User Programs can be larger than the physical memory of the system.
Assume that a process whose size is 500 bytes [addresses 0-499] has generated a logical address of 60 and the MMU currently has a value of 1400 in the relocation register for that process. When the MMU calculates the physical address that currently corresponds to the generated logical address, it will calculate:
1460
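
A tiny sketch of the MMU's dynamic-relocation calculation with this card's numbers; the size check mirrors the stated 0-499 address range:

    def translate(logical_address, relocation=1400, process_size=500):
        # The MMU adds the relocation register to every logical address;
        # addresses at or beyond the process size are trapped as illegal.
        if logical_address >= process_size:
            raise MemoryError("logical address outside the process address space")
        return logical_address + relocation

    print(translate(60))  # 1460
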
In a pure demand-paged virtual memory system, the max number of page faults that a single process can incur is equal to the number of different pages that it will reference during its execution. (T/F)
False

(A page can fault more than once if it is replaced and later re-referenced, so the maximum is the total number of references, not the number of distinct pages.)
A relocation register allows the system loader to place relocatable programs at any location in the memory it so desires. The MMU then resets the relocation register value after loading the program. (T/F)
True
The backing store is:
a high speed disk used to temporarily hold memory images of processes in execution that have been context-switched out to make room for other processes.
A race condition
results when several threads try to access and modify the same data concurrently.
If address binding is to occur at load time, then the compiler or assembler must generate
relative code
Given a paged memory system where each page contains 16 bytes (2^4), the physical memory contains a total of 2048 bytes (2^11) and the logical memory consists of 256 bytes (2^8). How many bits are required to uniquely represent the set of different pages and displacements?
Page Number = 4 bits
Displacement = 4 bits

Each page holds 2^4 bytes, so 4 bits are needed for the displacement.

256 bytes of logical memory divided into 16-byte pages gives 16 pages (2^4), so 4 bits are needed to uniquely identify each page.
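
The bit counts fall straight out of base-2 logarithms; a quick check:

    from math import log2

    PAGE_SIZE = 16      # bytes per page (2^4)
    LOGICAL_SIZE = 256  # bytes of logical address space (2^8)

    offset_bits = int(log2(PAGE_SIZE))                # 4 bits for the displacement
    page_bits = int(log2(LOGICAL_SIZE // PAGE_SIZE))  # 256/16 = 16 pages -> 4 bits

    print(page_bits, offset_bits)  # 4 4
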
The working set model for page frame allocation in a virtual memory environment was designed specifically to
eliminate thrashing and maintain an optimal level of multiprogramming.
For any of the algorithms in the class of page replacement algorithms known as "stack algorithms", the number of page faults for a given reference string will be the same as the number of page faults for the reverse of the reference string. (T/F)
True
Given the logical address 0xAEF9 (Hex) with a page size of 256 bytes, what is the page number?
0xAE

256 = 2^8, so 8 bits are needed for the displacement. The address is 16 bits long (4 hex digits), so the displacement is 0xF9 and the page number is 0xAE.
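
The same split done with a shift and a mask, using the 8-bit displacement implied by the 256-byte page:

    OFFSET_BITS = 8      # 256-byte pages -> 8-bit displacement
    address = 0xAEF9

    page_number = address >> OFFSET_BITS                # drop the low 8 bits
    displacement = address & ((1 << OFFSET_BITS) - 1)   # keep only the low 8 bits

    print(hex(page_number), hex(displacement))  # 0xae 0xf9
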
Programs that employ dynamically linked addressing are loaded using dynamic run time loading. (T/F)
False
____ can be used to prevent busy waiting when implementing a semaphore.
Waiting Queues
____ is the dynamic storage allocation algorithm which results in the smallest leftover hole in memory
Best Fit
In the enhanced second chance algorithm, which of the following ordered pairs represents a page that would be the best choice for replacement?
(0,0)

(reference bit, modify bit)

(0,0) = neither recently used nor modified, so it is the cheapest page to replace.
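
A small sketch of how the four (reference bit, modify bit) classes rank; it only shows the class ordering and skips the circular clock scan of the full algorithm (the frame list is made up):

    # Replacement preference, best victim first:
    #   (0,0) not recently used, clean
    #   (0,1) not recently used, dirty (needs write-back)
    #   (1,0) recently used, clean
    #   (1,1) recently used, dirty
    def pick_victim(frames):
        # frames: list of (page, reference_bit, modify_bit) tuples
        return min(frames, key=lambda f: (f[1], f[2]))[0]

    frames = [("A", 1, 1), ("B", 0, 1), ("C", 0, 0), ("D", 1, 0)]
    print(pick_victim(frames))  # C -- class (0,0)
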
The ____ is an approximation of a program's locality.
Working Set
Optimal page replacement ____.
is used mostly for comparison with other page replacement schemes.

(It is not physically implementable because it requires looking into the future.)
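
It can still be simulated offline once the whole reference string is known, which is how it serves as a yardstick; a minimal sketch (the sample string is a common textbook one):

    def opt_faults(refs, num_frames):
        # Belady's optimal policy: on a fault with all frames full,
        # evict the page whose next use lies farthest in the future.
        frames, faults = [], 0
        for i, page in enumerate(refs):
            if page in frames:
                continue
            faults += 1
            if len(frames) < num_frames:
                frames.append(page)
                continue
            future = refs[i + 1:]
            def next_use(p):
                return future.index(p) if p in future else float("inf")
            frames.remove(max(frames, key=next_use))
            frames.append(page)
        return faults

    refs = [7, 0, 1, 2, 0, 3, 0, 4, 2, 3, 0, 3, 2, 1, 2, 0, 1, 7, 0, 1]
    print(opt_faults(refs, 3))  # 9 faults with 3 frames
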
Assume the value of the base and limit registers are 1200 and 350, respectively. What is the legal address space?
Between 1200 and 1549, inclusive (base through base + limit − 1).
Assume a system has a TLB hit ratio of 90%. It requires 15 ns to access the TLB, and 85 ns to access main memory. What is the effective memory access time in nanoseconds for this system?
108.5 ns

90% of the time it takes 100 ns (TLB lookup plus one memory access); 10% of the time it takes 185 ns (TLB lookup plus two memory accesses: the page-table lookup and the data access).
(100 × 0.9) + (185 × 0.1) = 108.5
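
In general, EAT = hit_ratio × (TLB + memory) + (1 − hit_ratio) × (TLB + 2 × memory); a quick check with these numbers:

    def effective_access_time(hit_ratio, tlb_ns, mem_ns):
        hit_time = tlb_ns + mem_ns        # TLB lookup + one memory access
        miss_time = tlb_ns + 2 * mem_ns   # TLB lookup + page-table access + data access
        return hit_ratio * hit_time + (1 - hit_ratio) * miss_time

    print(round(effective_access_time(0.90, 15, 85), 1))  # 108.5
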
What is true of the direct access method?
It allows programs to read and write records in no particular order.
A mount point is ____.
the location within the file structure where the file system is to be attached.
The difference between simple paging and simple segmentation is:
Paging has no external fragmentation and segmentation has no internal fragmentation
The primary goal of an operating system is:
to make the underlying hardware more convenient to use.
The architecture of a modern computer system consists of a common bus which connects the CPU and IO devices to the memory subsystem. We can describe this system as:
interrupt driven, in which the CPU and IO device controllers compete for memory cycles.
The long-term scheduler is responsible for placing processes into the job pool based upon some selection protocol. (T/F)
True
Which of the following is not a "state" that a given process can currently be in:
temporary

(a process can be waiting, running, ready, suspended, and so on)
A clustered system ____.
gathers together multiple CPUs to accomplish computational work.
The ____ refers to the number of processes in memory.
degree of multiprogramming
Which is least likely to be included in the PCB?
a - contents of registers belonging to process
b - number of interrupts generated thus far
c - state of the process
d - time at which process entered system
b - the number of interrupts generated by the process thus far.
Historically, the design of OS's has focused on:
efficient use of system resources.
An IO bound process is
a process that requires more time for IO service than for CPU service.
Which of the following is not a class of interrupts?
a - Hardware
b - User
c - Timer
d - Program
e - All above
b - User
Round Robin scheduling degenerates to first-come-first-served scheduling if the time quantum is too long. (T/F)
True
The short term scheduler is responsible for removing processes from the ready list and placing them in the correct IO waiting queue. (T/F)
False
As you move up the memory hierarchy in a modern computer, the following typically holds true:
Memory becomes faster
Amount of memory decreases
Memory becomes more expensive
Consider a system in which threads are supported only at the user level. Suppose we have two processes, P1 with 1 thread and P2 with 50 threads. Assuming each process receives the same-length time slice, how much slower does a thread in P2 execute than a thread in P1?
50 times slower

(With user-level threads the kernel schedules processes, not threads, so P2's single time slice is shared among its 50 threads while P1's thread gets the whole slice.)
The rate of a periodic task in a hard real-time system is ____, where p is a period and t is the processing time.
1/p
One of the main advantages to the layered approach in OS design is:
the system can be constructed in a modular fashion.
Which of the following is not a goal of uniprocessor scheduling?
a - increase processor efficiency
b - improve throughput
c - improve response time
d - increase memory utilization
d - increase memory utilization
____ is the number of processes that are completed per time unit.
Throughput
A(n) ____ refers to where a process is accessing/updating shared data.
critical section
A significant problem with priority scheduling algorithms is ____.
starvation
The Shortest Process Next scheduling protocol prioritizes processes based upon the length of their expected service time. Since this means you must predict the future, such a protocol cannot actually be implemented and is only useful as a theoretical tool. (T/F)
False

(Expected service time can be estimated from past behavior, for example by exponential averaging of previous CPU bursts, so the protocol is implementable in practice.)
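
A small sketch of that estimator; the burst values and alpha below are illustrative choices, not from this card:

    def predict_next_burst(previous_estimate, last_burst, alpha=0.5):
        # tau_{n+1} = alpha * t_n + (1 - alpha) * tau_n
        return alpha * last_burst + (1 - alpha) * previous_estimate

    tau = 10.0                      # initial guess for the next CPU burst
    for burst in [6, 4, 6, 4, 13]:  # observed CPU burst lengths
        tau = predict_next_burst(tau, burst)
        print(tau)                  # 8.0, 6.0, 6.0, 5.0, 9.0
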
General priority scheduling can lead to starvation. To alleviate this problem, implement:
aging (gradually increasing the priority of processes that have been waiting a long time).
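
A minimal sketch of that idea; the threshold, boost amount, and data layout are all made up for illustration:

    def age_priorities(ready_queue, threshold=100, boost=1, top_priority=0):
        # ready_queue: list of dicts with 'priority' (lower number = higher priority)
        # and 'waiting_time'; long waiters drift toward the top priority.
        for proc in ready_queue:
            if proc["waiting_time"] >= threshold:
                proc["priority"] = max(top_priority, proc["priority"] - boost)
        return ready_queue

    queue = [{"pid": 1, "priority": 9, "waiting_time": 250},
             {"pid": 2, "priority": 3, "waiting_time": 10}]
    print(age_priorities(queue))  # pid 1 is promoted to priority 8
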
Interrupts may be triggered by hardware or software. (T/F)
True
Processor scheduling techniques that use a feedback mechanism are basically penalizing processes that have been executing for long periods of time, thus tending to prefer processes which have not been executing for long periods of time. (T/F)
True
Convert 1024 (decimal) to binary.
100 0000 0000
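
Repeated division by two reproduces the digits; Python's built-in formatter is shown only as a cross-check:

    def to_binary(n):
        # Peel off the least significant bit until nothing is left.
        digits = ""
        while n > 0:
            digits = str(n % 2) + digits
            n //= 2
        return digits or "0"

    print(to_binary(1024))    # 10000000000
    print(format(1024, "b"))  # 10000000000 (standard-library check)
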