Program
an inactive unit, such as a file stored on a disk.
Processor
(1) a synonym for the CPU; (2) any component in a computing system capable of performing a sequence of activities.
Thread
a portion of a program that can run independently of other portions. Multithreaded application programs can have several threads running at one time with the same or different priorities.
Process
an instance of execution of a program that is identifiable and controllable by the operating system.
Multiprogramming
A technique that allows a single processor to process several programs residing simultaneously in main memory, interleaving their execution by overlapping I/O requests with CPU requests.
Multiprocessing
When two or more processors share system resources that may include some or all of the following: the same main memory, I/O devices, and control program routines.
Job Scheduler
the high-level scheduler of the Processor Manager that selects jobs from a queue of incoming jobs based on each job’s characteristics.
Process Scheduler
The low-level scheduler of the Processor Manager that establishes the order in which processes in the READY queue will be served by the CPU.
Pre-Emptive Policy
Any process scheduling strategy that, based on predetermined policies, interrupts the processing of a job and transfers the CPU to another job. It is widely used in time-sharing environments.
Non-Pre-Emptive Scheduling Policy
A job scheduling strategy that functions without external interrupts, so that once a job captures the processor and begins execution, it remains in the running state uninterrupted until it issues an I/O request or is finished.
Criteria for a Good Process Scheduling Policy
• Maximize Throughput
o Run as many jobs as possible in a given amount of time.
o This could be accomplished easily by running only short jobs or by running jobs without interruptions.
• Minimize Response Time
o Quickly turn around interactive requests.
o This could be done by running only interactive jobs and letting the batch jobs wait until the interactive load ceases.
• Minimize Turnaround Time
o Move entire jobs in and out of the system quickly.
o This could be done by running all batch jobs first (because batch jobs can be grouped to run more efficiently than interactive jobs).
• Minimize Waiting Time
o Move jobs out of the READY queue as quickly as possible.
o This could only be done by reducing the number of users allowed on the system so that the CPU would be available immediately whenever a job entered the READY queue.
• Maximize CPU Efficiency
o Keep the CPU busy 100 percent of the time.
o This could be done by running only CPU-bound jobs (and not I/O-bound jobs).
• Ensure Fairness for All Jobs
o Give everyone an equal amount of CPU and I/O time.
o This could be done by not giving special treatment to any job, regardless of its processing characteristics or priority.
Operation of the major process scheduling policies
First Come First Serve
A nonpreemptive process scheduling policy (or algorithm) that handles jobs according to their arrival time.
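For illustration, a minimal C sketch (burst times and job names are hypothetical) that computes each job's waiting and turnaround time under FCFS, assuming all jobs arrive at time zero:

    #include <stdio.h>

    int main(void) {
        /* Hypothetical CPU burst times (ms) for jobs A, B, C in arrival order. */
        int burst[] = {15, 2, 1};
        int n = sizeof burst / sizeof burst[0];
        int clock = 0, total_wait = 0, total_turnaround = 0;

        for (int i = 0; i < n; i++) {
            int wait = clock;                 /* FCFS: wait for all earlier jobs */
            int turnaround = wait + burst[i];
            printf("Job %c: wait=%d turnaround=%d\n", 'A' + i, wait, turnaround);
            total_wait += wait;
            total_turnaround += turnaround;
            clock += burst[i];
        }
        printf("Average wait: %.2f, average turnaround: %.2f\n",
               (double)total_wait / n, (double)total_turnaround / n);
        return 0;
    }

With the long job first, the average turnaround time is 16.67 ms; serving the same three jobs shortest-first would drop it to 7.33 ms, which is the tradeoff SJN exploits.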
Operation of the major process scheduling policies
Shortest Job Next
A nonpreemptive process scheduling policy that selects the waiting job with the shortest CPU cycle time.
Operation of the major process scheduling policies
Shortest Remaining Time (SRT)
A preemptive process scheduling policy similar to the SJN algorithm that allocates the processor to the job closest to completion.
Operation of the major process scheduling policies
Round Robin
A preemptive process scheduling policy that allocates to each job one unit of processing time per turn to ensure that the CPU is equally shared among all active processes and isn’t monopolized by any one job.
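A simplified round robin sketch (quantum and burst values are hypothetical; a real scheduler rotates a READY queue rather than scanning a fixed array, but the interleaving is the same idea):

    #include <stdio.h>

    #define QUANTUM 4   /* hypothetical time quantum (ms) */

    int main(void) {
        int remaining[] = {8, 4, 9};   /* hypothetical remaining CPU time per job */
        int n = sizeof remaining / sizeof remaining[0];
        int clock = 0, done = 0;

        while (done < n) {             /* give each active job one turn per cycle */
            for (int i = 0; i < n; i++) {
                if (remaining[i] <= 0) continue;
                int slice = remaining[i] < QUANTUM ? remaining[i] : QUANTUM;
                clock += slice;        /* job i runs for one quantum (or less) */
                remaining[i] -= slice;
                if (remaining[i] == 0) {
                    done++;
                    printf("Job %c finishes at t=%d\n", 'A' + i, clock);
                }
            }
        }
        return 0;
    }

No job can monopolize the CPU: the 9 ms job is forced to share, finishing at t=21 while the 4 ms job is already done at t=8.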
Operation of the major process scheduling policies
Priority Scheduling
A nonpreemptive process scheduling policy that allows for the execution of high-priority jobs before low-priority jobs.
Deadlock
A problem occurring when the resources needed by some jobs to finish execution are held by other jobs, which, in turn, are waiting for other resources to become available. Also called deadly embrace.
Starvation
the result of conservative allocation of resources in which a single job is prevented from execution because it’s kept waiting for resources that never become available.
Conditions for Deadlock
Mutual Exclusion
One of the four conditions for deadlock in which only one process is allowed to have access to a resource.
Conditions for Deadlock
Resource Holding
One of the four conditions for deadlock in which each process refuses to relinquish the resources it holds until its execution is completed even though it isn’t using them because it’s waiting for other resources.
Conditions for Deadlock
No Preemption
One of the four conditions for deadlock in which a process is allowed to hold on to its resources while waiting for the other resources it needs; the resources it holds cannot be forcibly taken away before it finishes execution.
Conditions for Deadlock
Circular Wait
One of the four conditions for deadlock through which each process involved is waiting for a resource being held by another; each process is blocked and can’t continue, resulting in deadlock.
Advantages & Disadvantages of the strategies for handling deadlock
• Prevent one of the four conditions from occurring (prevention).
o A design strategy for an operating system in which resources are managed in such a way that at least one of the necessary conditions for deadlock cannot hold.
• Avoid the deadlock if it becomes probable (avoidance).
o The dynamic strategy of deadlock avoidance that attempts to ensure that resources are never allocated in such a way as to place a system in an unsafe state.
• Detect the deadlock when it occurs and recover from it gracefully (detection).
o The process of examining the state of an operating system to determine whether a deadlock exists.
Seven Cases of Deadlock
1. Deadlock on File Requests
• If jobs are allowed to request and hold files for the duration of their execution, a deadlock can occur, as the simplified directed graph in Figure 5.2 illustrates.
• For example, consider the case of a home construction company with two application programs, purchasing (P1) and sales (P2), which are active at the same time. Both need to access two files, inventory (F1) and suppliers (F2), to read and write transactions. One day the system deadlocks when the following sequence of events takes place:
1. Purchasing (P1) accesses the supplier file (F2) to place an order for more lumber.
2. Sales (P2) accesses the inventory file (F1) to reserve the parts that will be required to build the home ordered that day.
3. Purchasing (P1) doesn’t release the supplier file (F2) but requests the inventory file (F1) to verify the quantity of lumber on hand before placing its order for more, but P1 is blocked because F1 is being held by P2.
4. Meanwhile, sales (P2) doesn’t release the inventory file (F1) but requests the supplier file (F2) to check the schedule of a subcontractor. At this point, P2 is also blocked because F2 is being held by P1.
• Any other programs that require F1 or F2 will be put on hold as long as this situation continues. This deadlock will remain until one of the two programs is closed or forcibly removed and its file is released. Only then can the other program continue and the system return to normal. (A code sketch of this circular wait appears after this list.)
2. Deadlock in Databases
3. Deadlock in Dedicated Device Allocation
4. Deadlock in Multiple Device Allocation
5. Deadlock in Spooling
6. Deadlock in a Network
7. Deadlock in Disk Sharing
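The file-request case above maps directly onto code. Here is a hedged sketch with two POSIX mutexes standing in for the two files (thread and lock names are invented for illustration); run with two threads, it reliably reproduces the circular wait:

    #include <pthread.h>
    #include <stdio.h>
    #include <unistd.h>

    pthread_mutex_t f1 = PTHREAD_MUTEX_INITIALIZER;  /* stands in for the inventory file */
    pthread_mutex_t f2 = PTHREAD_MUTEX_INITIALIZER;  /* stands in for the supplier file */

    void *purchasing(void *arg) {          /* P1: locks F2, then wants F1 */
        (void)arg;
        pthread_mutex_lock(&f2);
        sleep(1);                          /* give P2 time to grab F1 */
        pthread_mutex_lock(&f1);           /* blocks: F1 is held by P2 */
        pthread_mutex_unlock(&f1);
        pthread_mutex_unlock(&f2);
        return NULL;
    }

    void *sales(void *arg) {               /* P2: locks F1, then wants F2 */
        (void)arg;
        pthread_mutex_lock(&f1);
        sleep(1);                          /* give P1 time to grab F2 */
        pthread_mutex_lock(&f2);           /* blocks: F2 is held by P1 */
        pthread_mutex_unlock(&f2);
        pthread_mutex_unlock(&f1);
        return NULL;
    }

    int main(void) {
        pthread_t p1, p2;
        pthread_create(&p1, NULL, purchasing, NULL);
        pthread_create(&p2, NULL, sales, NULL);
        pthread_join(p1, NULL);            /* never returns: the threads deadlock */
        pthread_join(p2, NULL);
        puts("unreachable");
        return 0;
    }

Acquiring the two locks in the same order in both threads (always f1 before f2) removes the circular-wait condition, which is the prevention strategy described earlier.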
Process Synchronization
(1) the need for algorithms to resolve conflicts between processors in a multiprocessing environment; or (2) the need to ensure that events occur in the proper order even if they are carried out by several processes.
Busy waiting
a method by which processes waiting for an event to occur continuously test whether the condition has changed, remaining in unproductive, resource-consuming wait loops.
Critical region
a part of a program that must complete execution before other processes can have access to the resources being used.
Mutex
short for mutual exclusion; a condition that specifies that only one process may update (modify) a shared resource at a time to ensure correct operations and results.
Producers & Consumers
a classic problem in which a process produces data that will be consumed, or used, by another process.
Readers & Writers
A problem that arises when two types of processes need to access a shared resource such as a file or a database.
Semaphores
A type of shared data item that may contain either binary or nonnegative integer values and is used to provide mutual exclusion.
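As a sketch tying the last few cards together (buffer size and item count are invented), here is the classic producers-and-consumers solution using POSIX semaphores: empty counts free slots, full counts available items, and mutex guards the critical region:

    #include <pthread.h>
    #include <semaphore.h>
    #include <stdio.h>

    #define SLOTS 4
    #define ITEMS 10

    int buffer[SLOTS];
    int in = 0, out = 0;                   /* next write and read positions */
    sem_t empty, full, mutex;              /* counting, counting, binary */

    void *producer(void *arg) {
        (void)arg;
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&empty);              /* wait for a free slot */
            sem_wait(&mutex);              /* enter the critical region */
            buffer[in] = i;
            in = (in + 1) % SLOTS;
            sem_post(&mutex);
            sem_post(&full);               /* signal: one more item available */
        }
        return NULL;
    }

    void *consumer(void *arg) {
        (void)arg;
        for (int i = 0; i < ITEMS; i++) {
            sem_wait(&full);               /* wait for an item */
            sem_wait(&mutex);
            printf("consumed %d\n", buffer[out]);
            out = (out + 1) % SLOTS;
            sem_post(&mutex);
            sem_post(&empty);              /* signal: one more free slot */
        }
        return NULL;
    }

    int main(void) {
        sem_init(&empty, 0, SLOTS);
        sem_init(&full, 0, 0);
        sem_init(&mutex, 0, 1);
        pthread_t p, c;
        pthread_create(&p, NULL, producer, NULL);
        pthread_create(&c, NULL, consumer, NULL);
        pthread_join(p, NULL);
        pthread_join(c, NULL);
        return 0;
    }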
Test-and-Set
An indivisible machine instruction, which is executed in a single machine cycle to determine whether the processor is available.
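C11's atomic_flag exposes a test-and-set primitive directly; this minimal sketch (the function and lock names are invented) builds a busy-waiting spin lock from it:

    #include <stdatomic.h>

    atomic_flag lock = ATOMIC_FLAG_INIT;

    void acquire(void) {
        /* Atomically set the flag and return its previous value in one
           indivisible step. If it was already set, another process holds
           the lock, so we busy-wait. */
        while (atomic_flag_test_and_set(&lock))
            ;                              /* spin (unproductive wait loop) */
    }

    void release(void) {
        atomic_flag_clear(&lock);          /* reset the flag so a waiter can enter */
    }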
Describe fork and exec
• The fork system call creates a new process that is essentially a clone of the existing one. The child is a complete copy of the parent. For example, the child gets a copy of the parent’s data space, heap and stack. Note that this is a copy. The parent and child do not share these portions of memory. The child also inherits all the open file handles (and streams) of the parent with the same current file offset.
• To actually load and execute a different program, the fork request is used first to generate the new process. The kernel system call exec(char *program_filename) is then used to load the new program image over the forked process:
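A minimal sketch of the fork-then-exec pattern (launching /bin/ls is just an example choice):

    #include <stdio.h>
    #include <stdlib.h>
    #include <sys/wait.h>
    #include <unistd.h>

    int main(void) {
        pid_t pid = fork();                /* clone the current process */
        if (pid < 0) {
            perror("fork");
            exit(1);
        }
        if (pid == 0) {                    /* child: replace its image with a new program */
            execl("/bin/ls", "ls", "-l", (char *)NULL);
            perror("execl");               /* reached only if exec fails */
            exit(1);
        }
        waitpid(pid, NULL, 0);             /* parent: wait for the child to finish */
        printf("child %d finished\n", (int)pid);
        return 0;
    }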
ls
lists the contents of the current directory.
ls -a
lists all entries in the directory, including hidden ones (names beginning with a dot).
ls -l
lists the directory contents in long format, showing each entry's properties (permissions, owner, size, modification time).
pwd
(print working directory) - prints the path of the current directory you're in.
cd
change the directory.
system
passes a command string to the operating system's command processor for execution.
getpid
returns the process ID of the calling process.
waitpid
suspends the caller until a specified child process changes state (typically, until it terminates).
sleep
suspends the calling process for a specified number of seconds, voluntarily giving up the processor.
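The last four entries are C library calls rather than shell commands; a tiny usage sketch (waitpid is demonstrated in the fork-and-exec sketch above):

    #include <stdio.h>
    #include <stdlib.h>
    #include <unistd.h>

    int main(void) {
        printf("my process id: %d\n", (int)getpid());
        sleep(1);                          /* suspend this process for one second */
        system("ls -l");                   /* hand a command line to the OS shell */
        return 0;
    }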
Criteria for measuring the system performance of an OS
Throughput
Response time
Turnaround time
Resource utilization
Availability
Throughput
a composite measure that indicates the productivity of the system as a whole; the term is often used by system managers.
Response Time
the interval required to process a user's request
Turnaround Time
the time from the submission of the job until its output is returned to the user.
Resource Utilization
a measure of how much each unit is contributing to the overall operation
Availability
indicates the likelihood that a resource will be ready when a user needs it.
Mean Time Between Failures
the average time a unit is operational before it breaks down
Mean Time To Repair
the average time needed to fix a failed unit and put it back in service
How to measure availability
Availability = MTBF / (MTBF + MTTR)
How to measure reliability
Reliability(t) = e^(-t/MTBF)
Reliability
the probability that a unit will not fail during a given time period t
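A quick numeric check of the two formulas above, using hypothetical values for MTBF, MTTR, and the time period t (compile with -lm):

    #include <math.h>
    #include <stdio.h>

    int main(void) {
        double mtbf = 4000.0;   /* hypothetical mean time between failures (hours) */
        double mttr = 2.0;      /* hypothetical mean time to repair (hours) */
        double t    = 100.0;    /* hypothetical time period of interest (hours) */

        double availability = mtbf / (mtbf + mttr);
        double reliability  = exp(-t / mtbf);   /* Reliability(t) = e^(-t/MTBF) */

        printf("Availability: %.4f\n", availability);                /* ~0.9995 */
        printf("Reliability over %g hours: %.4f\n", t, reliability); /* ~0.9753 */
        return 0;
    }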
Role of patches in system administration
Patches are files that provide updates to the system and programs. They are important for three reasons:

• the need for vigilant security precautions against constantly changing system threats;
• the need to assure system compliance with government regulations regarding privacy and financial accountability;
• the need to keep systems running at peak efficiency.
Operating system
the chief piece of software, the portion of the computing system that manages all of the hardware and all of the other software. To be specific, it controls every file, every device, every section of main memory, and every nanosecond of processing time. It controls who can use the system and how. In short, it’s the boss.
Types of Operating Systems
Batch Systems
Interactive Systems
Real-time Systems
Hybrid Systems
Embedded Systems
Batch Systems
date from the earliest computers, when they relied on stacks of punched cards or reels of magnetic tape for input. Jobs were entered by assembling the cards into a deck and running the entire deck of cards through a card reader as a group—a batch. The efficiency of a batch system is measured in throughput—the number of jobs completed in a given amount of time (for example, 550 jobs per hour).
Interactive Systems
give a faster turnaround than batch systems but are slower than the real-time systems we talk about next. They were introduced to satisfy the demands of users who needed fast turnaround when debugging their programs. The operating system required the development of time-sharing software, which would allow each user to interact directly with the computer system via commands entered from a typewriter-like terminal. The operating system provides immediate feedback to the user, and response time can be measured in fractions of a second.
Real-time systems
Real-time systems are used in time-critical environments where reliability is key and data must be processed within a strict time limit. The time limit need not be ultra-fast, but system response time must meet the deadline or risk significant consequences. These systems also need to provide contingencies to fail gracefully—that is, preserve as much of the system’s capabilities and data as possible to facilitate recovery. For example, real-time systems are used for space flights, airport traffic control, fly-by-wire aircraft, critical industrial processes, certain medical equipment, and telephone switching, to name a few.
Hybrid Systems
a combination of batch and interactive. They appear to be interactive because individual users can access the system and get fast responses, but such a system actually accepts and runs batch programs in the background when the interactive load is light. A hybrid system takes advantage of the free time between high-demand usage of the system and low-demand times. Many large computer systems are hybrids.
Embedded Systems
computers placed inside other products to add features and capabilities. For example, you find embedded computers in household appliances, automobiles, digital music players, elevators, and pacemakers. In the case of automobiles, embedded computers can help with engine performance, braking, and navigation. For example, several projects are under way to implement “smart roads,” which would alert drivers in cars equipped with embedded computers to choose alternate routes when traffic becomes congested.
Operating systems for embedded computers are very different from those for general computer systems. Each one is designed to perform a set of specific programs, which are not interchangeable among systems. This permits the designers to make the operating system more efficient and take advantage of the computer’s limited resources, such as memory, to their maximum.
The components (subsystems) of an operating system
File Management
Device Management
CPU Management
Memory Management
Network Management
Memory Manager
The Memory Manager is in charge of main memory, also known as RAM, short for Random Access Memory. The Memory Manager checks the validity of each request for memory space and, if it is a legal request, it allocates a portion of memory that isn’t already in use. In a multiuser environment, the Memory Manager sets up a table to keep track of who is using which section of memory. Finally, when the time comes to reclaim the memory, the Memory Manager deallocates memory.
A primary responsibility of the Memory Manager is to protect the space in main memory occupied by the operating system itself—it can’t allow any part of it to be accidentally or intentionally altered.
Processor Manager
The Processor Manager decides how to allocate the central processing unit (CPU). An important function of the Processor Manager is to keep track of the status of each process. A process is defined here as an instance of execution of a program.

The Processor Manager monitors whether the CPU is executing a process or waiting for a READ or WRITE command to finish execution. Because it handles the processes’ transitions from one state of execution to another, it can be compared to a traffic controller. Once the Processor Manager allocates the processor, it sets up the necessary registers and tables and, when the job is finished or the maximum amount of time has expired, it reclaims the processor.
Device Manager
The Device Manager monitors every device, channel, and control unit. Its job is to choose the most efficient way to allocate all of the system’s devices, printers, ports, disk drives, and so forth, based on a scheduling policy chosen by the system’s designers.
The Device Manager does this by allocating each resource, starting its operation, and, finally, deallocating the device, making it available to the next process or job.
File Manager
The File Manager (the subject of Chapter 8) keeps track of every file in the system, including data files, program files, compilers, and applications. By using predetermined access policies, it enforces restrictions on who has access to which files. The File Manager also controls what users are allowed to do with files once they access them. For example, a user might have read-only access, read-and-write access, or the authority to create and delete files. Managing access control is a key part of file management. Finally, the File Manager allocates the necessary resources and later deallocates them.
Network Manager
Operating systems with Internet or networking capability have a fifth essential manager called the Network Manager that provides a convenient way for users to share resources while controlling users’ access to them. These resources include hardware (such as CPUs, memory areas, printers, tape drives, modems, and disk drives) and software (such as compilers, application programs, and data files).
Von Neumann Model
• Main memory (random access memory, RAM) is where the data and instructions must reside to be processed.
• I/O devices, short for input/output devices, include every peripheral unit in the system such as printers, disk drives, CD/DVD drives, flash memory, keyboards, and so on.
• The central processing unit (CPU) is the brain of the system, with the circuitry (sometimes called the chip) that controls the interpretation and execution of instructions. In essence, it controls the operation of the entire computer system, as illustrated in Figure 1.5. All storage references, data manipulations, and I/O operations are initiated or performed by the CPU.
Main Memory
Refers to physical memory that is internal to the computer. The word main is used to distinguish it from external mass storage devices such as disk drives. Another term for main memory is RAM.

The computer can manipulate only data that is in main memory. Therefore, every program you execute and every file you access must be copied from a storage device into main memory. The amount of main memory on a computer is crucial because it determines how many programs can be executed at one time and how much data can be readily available to a program.
Partitions
a section of hard disk storage of arbitrary size. Partitions can be static or dynamic.
Fragmentation
a condition in main memory where wasted memory space exists within partitions, called internal fragmentation, or between partitions, called external fragmentation.
Compaction
the process of collecting fragments of available memory space into contiguous blocks by moving programs and data in a computer’s memory or secondary storage.
Address Resolution
the process of changing the address of an instruction or data item to the address in main memory at which it is to be loaded or relocated.
Fixed Partition
(also called static partitions) divide main memory into sections, one partition for each job. Because the size of each partition was designated when the system was powered on, each partition could be reconfigured only when the computer system was shut down, reconfigured, and restarted. Thus, once the system was in operation, the partition sizes remained static.

The fixed partition scheme works well if all of the jobs run on the system are of the same size or if the sizes are known ahead of time and don’t vary between reconfigurations.

There are significant consequences if the partition sizes are too small; larger jobs will be rejected if they’re too big to fit into the largest partitions or will wait if the large partitions are busy. As a result, large jobs may have a longer turnaround time as they wait for free partitions of sufficient size or may never run.

On the other hand, if the partition sizes are too big, memory is wasted. If a job does not occupy the entire partition, the unused memory in the partition will remain idle; it can’t be given to another job because each partition is allocated to only one job at a time. It’s an indivisible unit.
Dynamic Partitioning
available memory is still kept in contiguous blocks but jobs are given only as much memory as they request when they are loaded for processing. Although this is a significant improvement over fixed partitions because memory isn’t wasted within the partition, it doesn’t entirely eliminate the problem.

A dynamic partition scheme fully utilizes memory when the first jobs are loaded. But as new jobs enter the system that are not the same size as those that just vacated memory, they are fit into the available spaces on a priority basis.
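First-fit is one common way to place an incoming job in a dynamic partition scheme; this sketch (free-block sizes are hypothetical) scans the free blocks in memory order and takes the first one large enough:

    #include <stdio.h>

    /* Hypothetical free-partition sizes (KB), in memory order. */
    int free_block[] = {30, 150, 50, 600, 125};
    int nblocks = 5;

    /* First fit: return the index of the first block that can hold the job. */
    int first_fit(int job_size) {
        for (int i = 0; i < nblocks; i++)
            if (free_block[i] >= job_size)
                return i;
        return -1;                     /* no partition big enough: job must wait */
    }

    int main(void) {
        int job = 100;                 /* hypothetical job size (KB) */
        int i = first_fit(job);
        if (i >= 0) {
            printf("job (%d KB) placed in block %d (%d KB)\n", job, i, free_block[i]);
            free_block[i] -= job;      /* remainder becomes a smaller free fragment */
        }
        return 0;
    }

The 50 KB left over in the chosen block is a fragment between allocations: exactly the external fragmentation defined later in this set.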
Paging
Before a job is loaded into memory, it is divided into parts called pages that will be loaded into memory locations called page frames. Paged memory allocation is based on the concept of dividing each incoming job into pages of equal size. Some operating systems choose a page size that is the same as the memory block size and that is also the same size as the sections of the disk on which the job is stored.

The sections of a disk are called sectors (or sometimes blocks), and the sections of main memory are called page frames. The scheme works quite efficiently when the pages, sectors, and page frames are all the same size. The exact size (the number of bytes that can be stored in each of them) is usually determined by the disk’s sector size. Therefore, one sector will hold one page of job instructions and fit into one page frame of memory.
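Because pages, sectors, and page frames share one size, locating a byte of a job is simple integer arithmetic; a sketch assuming a hypothetical 1,024-byte page size:

    #include <stdio.h>

    #define PAGE_SIZE 1024                 /* hypothetical: bytes per page/frame */

    int main(void) {
        int address = 5000;                /* hypothetical logical byte address */
        int page    = address / PAGE_SIZE; /* which page of the job: 4 */
        int offset  = address % PAGE_SIZE; /* displacement within the page: 904 */
        printf("address %d -> page %d, offset %d\n", address, page, offset);
        return 0;
    }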
Demand Paging
With demand paging, jobs are still divided into equally sized pages that initially reside in secondary storage. When the job begins to run, its pages are brought into memory only as they are needed.

Demand paging takes advantage of the fact that programs are written sequentially so that while one section, or module, is processed all of the other modules are idle. Not all the pages are accessed at the same time, or even sequentially.

A page fault occurs when a referenced page is not already in a page frame and must be brought in from secondary storage.
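To make page faults concrete, here is a small FIFO simulation (the reference string and frame count are hypothetical): each reference to a page not currently in a frame counts as a fault and evicts the oldest resident page:

    #include <stdio.h>

    #define FRAMES 3

    int main(void) {
        int refs[] = {1, 2, 3, 1, 4, 2};   /* hypothetical page-reference string */
        int nrefs = sizeof refs / sizeof refs[0];
        int frame[FRAMES] = {-1, -1, -1};  /* -1 = empty frame */
        int next = 0, faults = 0;          /* next: FIFO victim index */

        for (int i = 0; i < nrefs; i++) {
            int hit = 0;
            for (int j = 0; j < FRAMES; j++)
                if (frame[j] == refs[i]) hit = 1;
            if (!hit) {                    /* page fault: load page, evict oldest */
                frame[next] = refs[i];
                next = (next + 1) % FRAMES;
                faults++;
            }
        }
        printf("%d faults in %d references\n", faults, nrefs);  /* 4 of 6 here */
        return 0;
    }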
Segmentation
With segmented memory allocation, each job is divided into several segments of different sizes, one for each module that contains pieces that perform related functions. Segmented memory allocation was designed to reduce page faults that resulted from having a segment’s loop split over two or more pages.

A subroutine is an example of one such logical group. This is fundamentally different from a paging scheme, which divides the job into several pages all of the same size, each of which often contains pieces from more than one program module.
Internal Fragmentation
The phenomenon of partial usage of fixed partitions and the coinciding creation of unused spaces within the partition is called internal fragmentation, and is a major drawback to the fixed partition memory allocation scheme.
External Fragmentation
In dynamic partitioning, the subsequent allocation of memory creates fragments of free memory between blocks of allocated memory.