what is main memory
-also called 'real memory', 'physical memory', or 'primary memory'.
-volatile memory that stores instructions and data.
-can be directly accessed by the processor
why is it generally inefficient to allow only one process to be in memory at a time?
-if the single process blocks for I/O, no other processes can use the processor
what would happen if a system allowed many processes to be placed in main memory, but did not divide memory into partitions?
-the processes would share all their memory. Any malfunctioning or malicious process could damage any or all of the other processes
when is it important for a memory manager to minimize wasted memory space?
-when memory is more expensive than the processor-time overhead incurred by placing programs as tightly as possible into main memory.
-also when the system needs to keep the largest possible contiguous memory region available for large incoming programs and data
why should memory management organizations and strategies be as transparent as possible to processes?
-memory management transparency improves application portability and facilitates development because the programmer is not concerned with memory management strategies.
-it also allows memory management strategies to be changed without rewriting applications
what are the different types of memory management strategies
-fetch strategies: demand fetch and anticipatory fetch
-placement strategies: first fit, best fit, worst fit (see the sketch below)
-replacement strategies
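A minimal sketch (not from the text; the free-hole list and request size are made-up values) contrasting how the three placement strategies choose a hole:

```python
# Each free hole is (start_address, size); 'request' is the incoming job's size.
def place(holes, request, strategy):
    fits = [h for h in holes if h[1] >= request]   # holes large enough for the job
    if not fits:
        return None                                # no single hole is big enough
    if strategy == "first":
        return fits[0]                             # first fit: first large-enough hole
    if strategy == "best":
        return min(fits, key=lambda h: h[1])       # best fit: tightest hole, least leftover
    return max(fits, key=lambda h: h[1])           # worst fit: largest hole

holes = [(0, 100), (200, 50), (400, 300)]
print(place(holes, 40, "first"))   # (0, 100)
print(place(holes, 40, "best"))    # (200, 50)
print(place(holes, 40, "worst"))   # (400, 300)
```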
is high resource utilization or low overhead more important to a placement strategy?
-the answer depends on the system objectives and the relative costs of resources and overhead.
-in general, the OS designer must balance overhead with high memory utilization to meet the system's goal
name the two types of fetch strategies and describe when each one might be more appropriate than the other
-the two types are demand fetch and anticipatory fetch.
-if the system cannot predict future memory usage with accuracy, then the lower overhead of demand fetching results in higher performance and utilization
-if a program exhibits predictable behavior, anticipatory fetch strategies can improve performance by ensuring that pieces of programs or data are located in memory before processes reference them
when is noncontiguous preferable to contiguous memory allocation?
-when available memory contains no area large enough to hold the incoming program in one contiguous piece, but sufficient smaller pieces of memory are available that, in total, are large enough
what sort of overhead might be involved in a noncontiguous memory allocation scheme?
-there would be overhead in keeping track of available blocks and blocks that belong to separate processes, and where those blocks reside in memory.
how did the IOCS facilitate program development
-programmers were able to perform I/O without writing the low-level commands themselves; those commands were incorporated into the IOCS, which all programmers could use instead of having to 'reinvent the wheel'
describe the costs and benefits of overlays
-overlays enabled programmers to write programs larger than real memory
-however managing these overlays increased program complexity, which increased the size of the programs and the cost of software development
why is a single boundary register insufficient for protection in a multiuser system
-the single boundary would protect the OS from being corrupted by user processes, but not protect processes from corrupting each other
why are system calls necessary for an OS
-system calls enable processes to request services from the OS while ensuring that the OS is protected from its processes
(T/F) Batch-processing systems removed the need for a system operator
-false.
-a system operator was needed to set up and 'tear down' the jobs and control them as they executed
what was the key contribution of early-batch processing systems?
-they automated various aspects of job-to-job transition, considerably reducing the amount of time wasted between jobs and improving resource utilization
explain the need for relocating compilers, assemblers, and loaders
-before such tools, programmers manually specified the partition into which their program had to be loaded, which potentially wasted memory, reduced processor utilization, and reduced application portability
describe the benefits and drawbacks of large and small partition sizes
-larger partitions allow large programs to run, but result in internal fragmentation of small programs
-small partitions reduce the amount of internal fragmentation and increase the level of multiprogramming by allowing more programs to reside in memory at once, but limit program size
explain the difference between internal and external fragmentation
-internal fragmentation occurs in fixed-partition environments when a process is allocated more space than it needs, leading to wasted memory space inside each partition
-external fragmentation occurs in variable-partition environments when memory is wasted due to holes developing in memory between partitions
describe two techniques to reduce external fragmentation in variable-partition multiprogramming systems
-coalescing merges adjacent free memory blocks into one larger block
-memory compaction relocates partitions to be adjacent to one another to consolidate free memory into a single block
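A minimal sketch of coalescing, assuming (purely for illustration) that free blocks are tracked as (start, size) pairs; blocks that touch are merged into one larger block:

```python
def coalesce(free_blocks):
    merged = []
    for start, size in sorted(free_blocks):
        if merged and merged[-1][0] + merged[-1][1] == start:
            merged[-1][1] += size              # this block begins where the last one ends
        else:
            merged.append([start, size])       # not adjacent: start a new free block
    return [tuple(b) for b in merged]

print(coalesce([(0, 16), (16, 8), (40, 8)]))   # [(0, 24), (40, 8)]
```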
why is first-fit an appealing strategy?
-first-fit is intuitively appealing because it does not require that the free memory list be sorted, so it incurs little overhead.
-however, it may operate slowly if the holes that are too small to hold the incoming jobs are at the front of the free memory list
(T/F) none of the memory placement strategies (first fit, worst fit, best fit) result in internal fragmentation
-true
explain the overhead of swapping in terms of processor utilization. Assume that memory can hold only one process at a time
-enormous numbers of processor cycles are wasted when swapping a program between disk and memory
why were swapping systems in which only a single process at a time was in main memory insufficient for multiuser interactive systems
-this type of swapping system could not provide reasonable response times to a large number of users, which is required for interactive systems
give an example of when it might be inefficient to load an entire program into memory before running it
-many programs have error-processing functions that are rarely, if ever, used. loading these functions into memory reduces space available to processes
why is increasing the size of main memory an insufficient solution to the problem of limited memory space
-purchasing additional main memory is not always economically feasible.
-a better solution is to create the illusion that the system contains more memory than the process will ever need
explain the difference between a process's virtual address space and the system's physical address space
-a process's virtual address space refers to the set of addresses a process may reference to access memory when running on a virtual memory system
-the system's physical address space refers to the set of addresses corresponding to the installed main memory
explain the appeal of artificial contiguity
-artificial contiguity simplifies programming by enabling a process to reference its memory as if it were contiguous, even though its data and instructions may be scattered throughout main memory
why is the start address of a process's block map table, a, placed in a special high-speed register
-placing 'a' in a high-speed register facilitates fast address translation, which is crucial in making the virtual memory implementation feasible
compare and contrast the notions of a page and a page frame
-pages and page frames are identical in size
-a page is a fixed-size block of a process's virtual memory, while a page frame is a fixed-size block of main memory that can hold one page
-a virtual memory page must be loaded into a page frame in main memory before a processor can access its contents
10.4 pg. 427 #1
10.4 pg. 427 #1
10.3 pg 424 #1
10.3 pg 424 #1
why should the size of page table entries be fixed
-the key to virtual memory is that address translation must occur quickly
-if the size of page table entries is fixed, the calculation that locates an entry is simple, which facilitates fast page address translation
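A small worked example with assumed numbers (4 KiB pages, 8-byte PTEs, an arbitrary page table base); with fixed-size entries, locating a PTE is just a multiply and an add:

```python
PAGE_SIZE = 4096                 # assumed page size
PTE_SIZE = 8                     # assumed fixed size of one page table entry
page_table_base = 0x100000       # assumed location of the page table in memory

virtual_address = 0x3A10
page_number = virtual_address // PAGE_SIZE            # selects which PTE to read
offset = virtual_address % PAGE_SIZE                  # carried through unchanged
pte_address = page_table_base + page_number * PTE_SIZE

print(hex(page_number), hex(offset), hex(pte_address))   # 0x3 0xa10 0x100018
# read the PTE at 0x100018 to find the page frame, then add the offset
```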
what type of special-purpose hardware is required for page address translation by direct mapping
-a high-speed processor register is needed to store the base address of the page table
why is page address translation by pure associative mapping not used
-associative memory is even more expensive than direct-mapped cache memory
-therefore it would be prohibitively expensive to build a system that contained enough associative memory to store all of a process's PTEs
does page address translation by pure associative mapping require any special-purpose hardware
-this technique requires an associative memory
-however, it does not require a page table origin register to store the location of the start of the page table, because associative memory is content addressed rather than location addressed
(T/F) The majority of a process's PTEs must be stored in the TLB (translation lookaside buffer) to achieve high performance
-false
-processes often achieve 90% or more of the performance possible when only a small portion of a process's PTEs are stored in the TLB
why does the system need to invalidate a PTE in the TLB if a page is moved to secondary storage
-if the system does not invalidate the PTE, a reference to the nonresident page will cause the TLB to return a page frame that might contain invalid data or instructions
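A dict-based sketch (names and structures are illustrative, not from the text) of why the stale TLB entry must be invalidated when its page is paged out:

```python
tlb = {}                          # virtual page number -> page frame number (cached)
page_table = {3: 7}               # assumed resident mapping: page 3 -> frame 7

def translate(vpn):
    if vpn in tlb:                            # TLB hit: fast path
        return tlb[vpn]
    if vpn in page_table:                     # TLB miss: consult the page table
        tlb[vpn] = page_table[vpn]
        return tlb[vpn]
    raise LookupError("page fault")           # page is not resident

def page_out(vpn):
    page_table.pop(vpn, None)
    tlb.pop(vpn, None)            # without this line, translate(vpn) would keep
                                  # returning stale frame 7 after the page is gone

print(translate(3))               # 7 (miss, then cached in the TLB)
page_out(3)
try:
    translate(3)
except LookupError:
    print("page fault, as expected")
```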
discuss the benefits and drawbacks of using a multilevel paging system instead of a direct-mapped paging system
-multilevel paging systems require considerably less main memory space to hold mapping information than direct-mapped paging systems
-however, multilevel paging systems require more memory accesses each time a process references a page whose mapping is not in the TLB, and potentially can run more slowly
a designer suggests reducing the memory overhead of direct-mapped page tables by increasing the size of pages. evaluate the consequences of each decision
-assuming that the size of each virtual address remains fixed, page address translation by direct mapping using large pages reduces the number of entries the system must store for each process
-the solution also reduces memory access overhead compared to a multilevel page table
-however, as page size increases, so do the likelihood and magnitude of internal fragmentation
compare and contrast inverted page tables to direct mapped page tables in terms of memory efficiency and address translation efficiency
-inverted page tables incur less memory overhead than direct-mapped page tables because an inverted page table contains only one PTE for each physical page frame, whereas a direct-mapped page table contains one PTE for each virtual page
-however, address translation may be much slower using an inverted page table than a direct-mapped table because the system may have to access memory several times to follow a collision chain
why are PTE's larger in inverted page tables than in direct mapped page tables
-A PTE in an inverted page table must store a virtual page number and a pointer to the next PTE in the collision chain.
-A PTE in a direct-mapped page table need only store a page frame number and a resident bit
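A minimal sketch of the inverted-page-table lookup: one entry per page frame, hashed on (pid, virtual page number), with a per-entry pointer forming the collision chain (the structure and names are assumptions for illustration); each link followed costs another memory access:

```python
NUM_FRAMES = 8
ipt = [None] * NUM_FRAMES        # frame number -> (pid, vpn, next frame in chain)
buckets = {}                     # hash value -> first frame in the chain

def insert(pid, vpn, frame):
    h = hash((pid, vpn)) % NUM_FRAMES
    ipt[frame] = (pid, vpn, buckets.get(h))   # chain the old head behind this entry
    buckets[h] = frame

def lookup(pid, vpn):
    frame = buckets.get(hash((pid, vpn)) % NUM_FRAMES)
    while frame is not None:                  # follow the collision chain
        entry_pid, entry_vpn, nxt = ipt[frame]
        if (entry_pid, entry_vpn) == (pid, vpn):
            return frame
        frame = nxt                           # each hop is an extra memory access
    return None                               # not mapped: page fault

insert(1, 0x3, 5)
print(lookup(1, 0x3))   # 5
```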
why might it be difficult to implement page sharing when using inverted page tables
-an inverted page table maintains exactly one PTE in memory for each page frame, so it can record only one virtual-to-physical mapping per frame; the OS must therefore maintain sharing information in a data structure outside the inverted page table
how does page sharing affect performance when the OS moves a page from main memory to secondary storage
-when a shared page moves, the OS must update the corresponding PTE for every process sharing that page.
-if numerous processes share the page, this could incur significant overhead compared to that for an unshared page
(T/F) segmented virtual memory systems do not incur fragmentation
-false
-segmented virtual memory systems can incur external fragmentation
how does segmentation differ from variable-partition multiprogramming
-programs in a segmented virtual memory system can be larger than main memory and need only a portion of their data and instructions in memory to execute
10.5 #1 pg 446
10.5 #1 pg 446
10.5 #2 pg 446
10.5 #2 pg 446
how does segmentation reduce sharing overhead compared to sharing under pure paging
-segmentation enables an entire region of shared memory to fit inside one segment, so the OS maintains sharing information for only one segment.
-under paging this segment might consume several pages, so the OS would have to maintain sharing info for each page
can copy-on-write be implemented using segments and if so how
-yes.
-by giving the child a copy of the parent's segment map so both processes reference the same segments; a segment is copied only when one of them modifies it
which access rights are appropriate for a process's stack segment
-a process should be able to read and write data in its stack segment and append new stack frames to the segment
what special-purpose hardware is required to implement memory protection keys
-a high-speed register is required to store the current process's memory protection key
what special-purpose hardware is required for segmentation/paging systems
-a high-speed register to store the base address of the segment map table, a register to store the base address of the corresponding page table, and an associative memory
why are segmentation/paging systems appealing?
-they offer the architectural simplicity of paging and the access control capabilities of segmentation
what are the benefits and drawbacks of maintaining a linked list of PTEs that map to a shared page
-benefits: enables the system to update PTEs quickly when a shared page is replaced
-drawbacks: incurs memory overhead
explain the difference between demand fetch strategies and anticipatory fetch strategies in virtual memory systems. which one requires more overhead
-demand fetch strategies load pages or segments into main memory only when a process explicitly references them
-anticipatory fetch strategies attempt to predict which pages or segments a process will need and load them ahead of time
-anticipatory fetch requires more overhead because the system must spend time determining the likelihood that a page or segment will be referenced
why are placement strategies trivial in paging systems that use only one page size
-because any incoming page can be placed into any available page frame
does locality favor anticipatory paging or demand paging?
-locality favors anticipatory paging because it indicates that the OS should be able to predict, with reasonable probability, the pages that a process will use
explain how looping through an array exhibits both spatial and temporal locality
-spatial locality because consecutive array elements are contiguous in virtual memory, so nearby addresses (often on the same page) are referenced one after another
-temporal locality because the loop's instructions and index variable are referenced repeatedly within a short period
why is the space-time product of demand paging higher than that of anticipatory paging
-the process holds pages in memory that it is not using while it waits for its remaining pages to be demand-paged in one at a time
how could demand paging increase/decrease the degree of multiprogramming in a system?
-increase: the system brings into main memory only those pages that processes actually need, so more processes can reside in memory at once
-decrease: processes may require more execution time because they retrieve pages from secondary storage more often
in what scenarios is the Linux anticipatory paging strategy inappropriate
-if processes exhibit random page-reference behavior
why is anticipatory paging likely to yield better performance than demand paging? how might it yield poorer performance?
-better: it is more efficient to bring in several contiguous pages in one I/O transfer
-worse: the process might not actually use the pages that were pre-paged in
what other factor complicates replacement strategies on systems that use pure segmentation
-such systems must consider the size of the segment being replaced compared to the size of the incoming segment
is it possible to perform optimal page replacement for certain types of processes? if so, give example
-yes
-a process with one data page that is intensively referenced and whose data and instructions are referenced purely sequentially
how is RAND (random page replacement) fair? why is this fairness inappropriate for replacement strategies?
-fair in that all pages in memory are equally likely to be replaced
-inappropriate because a replacement strategy should try not to replace pages that will be referenced again soon
can RAND ever operate exactly as OPT?
-yes, it could accidentally make all the right page replacement decisions. but highly unlikely
why does FIFO page replacement lead to poor performance for many processes?
-FIFO replaces pages according to their age, which unlike locality, is not a good predictor of how pages will be used in the future
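A short sketch of FIFO replacement (made-up reference string and frame count) showing why age is a poor predictor: the heavily used page 1 is evicted as soon as it becomes the oldest resident page, causing an avoidable fault:

```python
from collections import deque

def fifo_faults(references, num_frames):
    frames, queue, faults = set(), deque(), 0
    for page in references:
        if page not in frames:
            faults += 1
            if len(frames) == num_frames:
                frames.discard(queue.popleft())   # evict the oldest resident page
            frames.add(page)
            queue.append(page)
    return faults

print(fifo_faults([1, 2, 1, 3, 1, 4, 1, 5], num_frames=3))   # 6 faults
```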
chpt 11.6 pg 288 #2
chpt 11.6 pg 288 #2
(T/F) when using the FIFO page-replacement strategy, the number of page faults a process generates always increases as the number of page frames allocated to that process increases
-false
-the normal behavior is that page faults will decrease because more of the process's pages can be available in memory, decreasing the chance that a referenced page will not be available
(T/F) LRU (least-recently used) is used to benefit the processes that exhibit spatial locality
-false
-LRU benefits processes that exhibit temporal locality
why is 'pure' LRU rarely implemented
-LRU incurs the overhead of maintaining an ordered list of pages and reordering the list
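A sketch of "pure" LRU using an OrderedDict (illustrative only); the move-to-end on every reference is exactly the reordering overhead the card describes:

```python
from collections import OrderedDict

def lru_faults(references, num_frames):
    frames, faults = OrderedDict(), 0
    for page in references:
        if page in frames:
            frames.move_to_end(page)          # reorder the list on every reference
        else:
            faults += 1
            if len(frames) == num_frames:
                frames.popitem(last=False)    # evict the least-recently used page
            frames[page] = True
    return faults

print(lru_faults([1, 2, 1, 3, 1, 4, 1, 5], num_frames=3))   # 5 faults; page 1 is never evicted
```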
why is frequency of page usage a poor heuristic for reducing the number of future page faults
-frequency measures the number of times a page is referenced, but not how many of those references generated page faults
how does the modified bit improve the performance in the NUR (not used recently) replacement strategy
-the bit enables the OS to determine which pages can be overwritten without first being flushed to disk
how could NUR replace the worst possible page
-the next page that is about to be referenced could have its referenced bit reset to zero just before a page-replacement decision is made
how can an NUR page be modified but not referenced
-NUR periodically resets the referenced bits, so a page that was written earlier can have its referenced bit cleared while its modified bit remains set
which strategy incurs the most overhead, second chance or clock?
-second chance requires the system to dequeue and requeue a page each time its referenced bit is turned off.
-clock generally incurs less overhead because it modifies only the value of a pointer each time a page's referenced bit is turned off
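A minimal sketch of one clock sweep (the list-of-[page, referenced-bit] representation is assumed): the hand clears referenced bits as it advances and stops at the first page whose bit is already 0:

```python
def clock_select(pages, hand):
    """pages: list of [page_id, referenced_bit]; returns (victim index, new hand position)."""
    n = len(pages)
    while True:
        if pages[hand][1] == 0:
            return hand, (hand + 1) % n       # found a victim to replace
        pages[hand][1] = 0                    # give the page a second chance
        hand = (hand + 1) % n                 # only the pointer moves on

frames = [["A", 1], ["B", 0], ["C", 1]]
victim, hand = clock_select(frames, hand=0)
print(frames[victim][0])   # "B" is replaced; A's referenced bit was cleared on the way
```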
why are second-chance and clock page replacement more efficient than LRU
-these algorithms reduce the number of times the system updates page usage information
despite providing near optimal performance, what hinders the far page-replacement strategy from being widely implemented
-far is complex to implement and it incurs substantial execution-time overhead
when might far strategy replace a page that will be referenced soon
-the process may subsequently 'walk' the access graph directly to the page that was replaced, which would cause a page fault
why is it difficult to determine the size of a process's working set
-under the formal definition it is trivial: the working set size is exactly the number of unique pages that have been referenced within the window
-determining a process's 'true' working set is difficult; one approach is to reduce the number of pages allocated to the process until, at some point, its page fault rate increases
what trade-offs are inherent in choosing a window size
-if w is too small, a process's true working set might not be in memory at all times, leading to thrashing
-if w is too large, memory might be wasted
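A tiny sketch of the working-set definition the trade-off refers to: the distinct pages touched in the last w references (the reference string is made up):

```python
def working_set(references, now, w):
    return set(references[max(0, now - w):now])   # pages referenced in the window

refs = [1, 2, 1, 3, 1, 2, 7, 7, 7, 8]
print(working_set(refs, now=6, w=4))    # {1, 2, 3}
print(working_set(refs, now=10, w=4))   # {7, 8} - the process's locality has shifted
```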
how does PFF (page fault frequency) approximate the working set model
-both adjust the size of a process's memory allocation dynamically to prevent thrashing
-the working set model readjusts after every memory reference
-PFF readjusts only after each page fault
what problems can arise if the PFF upper threshold is too small? too large?
-too small: the system will allocate more pages to a process than it needs
-too large: the system may release a process's working-set pages, leading to thrashing
why could a voluntary page release yield better performance than a pure working set page replacement strategy
-VPR can release the pages that are 'hanging around' sooner.
why then is voluntary page release not widely implemented in today's systems
-it is hard to choose the right pages to release because we cannot predict the path of execution a program will take
why are large page sizes more favorable in today's systems than they were decades ago?
-memory has become much cheaper, so the internal fragmentation that large pages cause wastes less valuable space
what are the negatives of having multiple page sizes
-both the OS and hardware must support multiple page sizes to provide efficient memory management
compare and contrast monolithic kernels and layered kernels on the issues of efficiency/performance, maintainability/reliability and the ability to extend or add features
-monolithic OS: every component of the OS is contained in the kernel. efficient because few calls cross from user space to kernel space. operates with unrestricted access to the computer's hardware and software.
-layered kernels: the implementation and interface are separate for each layer. allows each layer to be tested and debugged separately. enables designers to change each layer without modifying the others. less efficient because several calls may be required to communicate between the layers.
what is the difference between a purely layered architecture and a microkernel architecture
-layered: enables communication between OS components only in adjacent layers.
-microkernel: enables communication between all OS components via the microkernel
sort from fastest and most expensive to slowest and least expensive
-registers, L1, L2, main memory, secondary storage, tertiary storage
why do systems contain several data stores of different speed and size
-each level serves a different purpose: main memory holds the volatile data and instructions the processor is actively using, while secondary storage holds persistent data
-combining a small amount of fast, expensive storage with larger amounts of slower, cheaper storage gives good overall performance at a reasonable cost
what is the motivation behind caching
-caching places copies of frequently used data in faster storage, e.g., main-memory data in high-speed cache memories and disk data in main memory, so it can be accessed rapidly as a program runs
given processes P1 and P2, explain the steps involved in performing a context-switch between these two processes
-p1 executes on the processor
-kernel stores process p1's execution context to its PCB in memory
-after an interrupt the kernel dispatches a new process, p2 and initiates a context switch
-kernel loads p2's execution context from its PCB in memory
-p2 executes on the processor
what is contained in the PCB or process descriptors
-PID
-Process state
-Program counter
-System priorities
-Credentials
what does mutual exclusion mean
-the cooperative understanding that access to a shared resource will only be provided to one thread/process at a time
describe 'busy waiting' and explain why its bad for the performance in an OS
-when a loop condition is continually being tested.
-this is bad because the thread is not performing any useful work while it uses up CPU clock cycles; too much busy waiting can negatively affect a system's performance
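A small sketch contrasting a busy-waiting thread with one that blocks on an event (names and timing are illustrative; the plain flag is visible across threads here only because of CPython's GIL):

```python
import threading, time

ready_flag = False
ready_event = threading.Event()

def busy_wait_consumer():
    while not ready_flag:          # burns CPU cycles just re-testing the condition
        pass
    print("busy-wait consumer proceeds")

def blocking_consumer():
    ready_event.wait()             # thread sleeps; the scheduler can run other work
    print("blocking consumer proceeds")

threading.Thread(target=blocking_consumer).start()
spinner = threading.Thread(target=busy_wait_consumer)
spinner.start()
time.sleep(0.1)                    # the "producer" is busy doing something else
ready_flag = True
ready_event.set()
spinner.join()
```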
why would it be more efficient to have threads of the same process communicate with one another versus having threads of different processes communicate?
-threads of the same process share the same memory, so they can communicate through ordinary reads and writes instead of using pipes and messaging
consider a program where there are several threads and there is a shared list. would it be acceptable to have all threads read from the list simultaneously without updating its contents?
-yes, because mutual exclusion is only required when an update to the shared memory location is being performed
why is it important for a thread to execute a critical section as quickly as possible
-if a thread spends too much time in its critical section, then it will delay other threads and this will ultimately slow down the entire application
what would happen if a thread does not call enterMutualExclusion() before entering its critical section and accessing the shared memory locations
-it will cause indeterminate results
what would happen if a thread completes its critical section and then does not call exitMutualExclusion()
-will cause all threads to be permanently waiting to access their critical sections
-deadlock
In Peterson's algorithm, which variable prevents indefinite postponement?
-the favoredThread variable
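An illustrative two-thread sketch of Peterson's algorithm, with favoredThread rendered as favored_thread; it assumes sequentially consistent shared variables (roughly what CPython's GIL provides), which real hardware does not guarantee without atomics or memory fences:

```python
import threading

wants_to_enter = [False, False]   # each thread's intention to enter its critical section
favored_thread = 0                # the variable that prevents indefinite postponement
counter = 0                       # shared data protected by the critical section

def enter_mutual_exclusion(tid):
    global favored_thread
    other = 1 - tid
    wants_to_enter[tid] = True
    favored_thread = other                               # politely defer to the other thread
    while wants_to_enter[other] and favored_thread == other:
        pass                                             # busy wait

def exit_mutual_exclusion(tid):
    wants_to_enter[tid] = False

def worker(tid, iterations=10_000):
    global counter
    for _ in range(iterations):
        enter_mutual_exclusion(tid)
        counter += 1                                     # critical section
        exit_mutual_exclusion(tid)

threads = [threading.Thread(target=worker, args=(i,)) for i in (0, 1)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(counter)                                           # 20000 under the stated assumptions
```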
describe why an n-thread mutual exclusion approach like Lamport's would be difficult when the threads are not on the same machine, but are in a distributed system on different machines
-the algorithm would require messaging to update shared data such as the ticket values.
-the delay attributed to messaging to update shared variables would impede the progress of the threads
can a program that enters an infinite loop monopolize a preemptive scheduling system?
-it depends on the priority of the program that contains the infinite loop.
-if the program runs as a high-priority process, it will keep obtaining CPU time while a collection of lower-priority processes waits
-in general, this is not going to be the case
is scheduling overhead always wasteful
-no.
-scheduling overhead is the time the OS spends deciding which process should run; this time is well spent when it improves overall resource utilization
can indefinite postponement occur in a system that uses a FIFO scheduler
-no, because processes in FIFO are served based on arrival time; processes that arrive earlier will be serviced before later ones
in HRRN scheduling, a short process will always be scheduled before a long process. true or false
-false.
-HRRN ranks processes by response ratio, (waiting time + service time) / service time, so a long process that has waited long enough can be scheduled before a newly arrived short process
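A one-function sketch of that response-ratio calculation (process names and times are made up): a long job that has waited long enough outranks a freshly arrived short job:

```python
def next_process(ready):
    """ready: list of (name, waiting_time, service_time)."""
    return max(ready, key=lambda p: (p[1] + p[2]) / p[2])   # highest response ratio wins

# short job: (1 + 2) / 2 = 1.5; long job: (30 + 10) / 10 = 4.0
print(next_process([("short", 1, 2), ("long", 30, 10)]))     # ('long', 30, 10)
```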
how does Fair Share Scheduling differ from standard process scheduling algorithms
-FSS algorithms differ because scheduling decisions are based on groups of processes (or users), not on individual processes.
why is it difficult to meet a process's stated deadline
-the difficulty is that new processes may be created before the deadline is reached.
distinguish between multiprogramming and multiprocessing
-multiprogramming is the ability to store multiple programs in memory at once so that they can execute concurrently
-multiprocessing employs more than one processor at a time
what is a process
-an entity that represents a program in execution
name 3 services for managing processes
-context switching
-interrupts
-signals
-message passing
list 5 components of a PCB
-PID
-process state
-program counter
-scheduling priority
-credentials