57 Cards in this Set

  • Front
  • Back
computer system
A computer system consists of hardware and systems software that work together to run application programs.
source program (or source file)
A text file of ASCII characters that the programmer creates with an editor and saves under a name such as hello.c.
bytes
8-bit chunks called bytes
ASCII standard
ASCII standard that represents each character with a unique byte-sized integer value
\n
Notice that each text line is terminated by the invisible newline character ‘\n’, which is represented by the integer value 10 (ASCII).
C was developed from 1969 to 1973 by
Dennis Ritchie of Bell Laboratories.
So why the success [of C]?
C was closely tied with the Unix operating system.
C is a small, simple language
C was designed for a practical purpose.
[C] is not perfect for all programmers and all situations
C pointers are a common source of confusion and programming errors. C also lacks explicit support for useful abstractions such as classes, objects, and exceptions.
gcc compiler driver reads the source file... and translates it into
an executable object file... [in] ...four phases
(preprocessor, compiler, assembler, and linker)
Preprocessing phase
The preprocessor (cpp) modifies the original C program according to directives that begin with the # character. For example, the #include <stdio.h> command in line 1 of hello.c tells the preprocessor to read the contents of the system header file stdio.h and insert it directly into the program text. The result is another C program, typically with the .i suffix.
Compilation phase
The compiler (cc1) translates the text file hello.i into the text file hello.s, which contains an assembly-language program. Each statement in an assembly-language program exactly describes one low-level machine-language instruction in a standard text form. Assembly language is useful because it provides a common output language for different compilers for different high-level languages. For example, C compilers and Fortran compilers both generate output files in the same assembly language.
Assembly phase
Next, the assembler (as) translates hello.s into machine language instructions, packages them in a form known as a relocatable object program, and stores the result in the object file hello.o. The hello.o file is a binary file whose bytes encode machine language instructions rather than characters. If we were to view hello.o with a text editor, it would appear to be gibberish.
Linking phase
Notice that our hello program calls the printf function, which is part of the standard C library provided by every C compiler. The printf function resides in a separate precompiled object file called printf.o, which must somehow be merged with our hello.o program. The linker (ld) handles this merging. The result is the hello file, which is an executable object file (or simply executable) that is ready to be loaded into memory and executed by the system.
some important reasons why programmers need to understand how compilation systems work:
Optimizing program performance.

Understanding link-time errors.

Avoiding security holes.
To run the executable file on a Unix system, we type its name to an application program known as a [what]
shell
buses
Running throughout the system is a collection of electrical conduits called buses that carry bytes of information back and forth between the components. Buses are typically designed to transfer fixed-sized chunks of bytes known as words.
Input/output (I/O) devices
the system’s connection to the external world. Our example system has four I/O devices: a keyboard and mouse for user input, a display for user output, and a disk drive (or simply disk) for long-term storage of data and programs.
controller or an adapter
(What is the difference?)
Each I/O device is connected to the I/O bus by either a controller or an adapter. The distinction between the two is mainly one of packaging.

Controllers are chip sets in the device itself or on the system’s main printed circuit board (often called the motherboard).

An adapter is a card that plugs into a slot on the motherboard.

Regardless, the purpose of each is to transfer information back and forth between the I/O bus and an I/O device.
main memory
The main memory is a temporary storage device that holds both a program and the data it manipulates while the processor is executing the program.
central processing unit (CPU), or simply processor
central processing unit (CPU), or simply processor, is the engine that interprets (or executes) instructions stored in main memory
program counter (PC)
At its core is a word-sized storage device (or register) called the program counter (PC). At any point in time, the PC points at (contains the address of) some machine-language instruction in main memory.
instruction set architecture
The instruction set architecture defines a simple model of the processor: instructions execute in strict sequence, and executing a single instruction involves performing a series of steps.
arithmetic/logic unit (ALU)
The ALU computes new data and address values.
4 things a processor does
Load: Copy a byte or a word from main memory into a register, overwriting the previous contents of the register.
Store: Copy a byte or a word from a register to a location in main memory, overwriting the previous contents of that location.
Operate: Copy the contents of two registers to the ALU, perform an arithmetic operation on the two words, and store the result in a register, overwriting the previous contents of that register.
Jump: Extract a word from the instruction itself and copy that word into the
program counter (PC), overwriting the previous value of the PC.
words
Buses are typically designed to transfer fixed-sized chunks of bytes known as words. The number of bytes in a word (the word size) is a fundamental system parameter that varies across systems. Most machines today have word sizes of either 4 bytes (32 bits) or 8 bytes (64 bits).
direct memory access (DMA)
the data travels directly from disk to main memory, without passing through the processor.
processor-memory gap
processor can read data from the register file almost 100 times faster than from memory. Even more troublesome, as semiconductor technology progresses over the years, this processor-memory gap continues to increase. It is easier and cheaper to make processors run faster than it is to make main memory run faster.
main memory vs disk
Because of physical laws, larger storage devices are slower than smaller storage devices. And faster devices are more expensive to build than their slower
counterparts. For example, the disk drive on a typical system might be 1000 times larger than the main memory, but it might take the processor 10,000,000 times longer to read a word from disk than from memory.
cache memories (or simply caches)
To deal with the processor-memory gap, system designers include smaller faster storage devices called cache memories (or simply caches) that serve as temporary staging areas for information that the processor is likely to need in the near future.
static random access memory
(SRAM)
The L1 and L2 caches are implemented with a hardware technology known as static random access memory (SRAM).
L1 cache on the processor chip holds tens of thousands of bytes and can be accessed nearly as fast as the register file.
One of the most important lessons in this book is that application programmers who are aware of cache memories can exploit them to improve the performance of their programs by an order of magnitude. You will learn more about these important devices and how to exploit them in Chapter 6.
memory hierarchy
In fact, the storage devices in every computer system are
organized as a memory hierarchy similar to Figure 1.9. As we move from the top
of the hierarchy to the bottom, the devices become slower, larger, and less costly
per byte. The register file occupies the top level in the hierarchy, which is known
as level 0, or L0. We show three levels of caching L1 to L3, occupying memory
hierarchy levels 1 to 3. Main memory occupies level 4, and so on.
The operating system has two primary purposes:
(1) to protect the hardware from misuse by runaway applications, and (2) to provide applications with simple and uniform mechanisms for manipulating complicated and often wildly different low-level hardware devices.
The operating system achieves both goals via the fundamental abstractions shown in Figure 1.11: processes, virtual memory, and
files.
See diagram on http://ilab.rutgers.edu/~skane9/AbstractionOS.jpg.
Layered view of a computer system
http://ilab.rutgers.edu/~skane9/layerCS.jpg
process
operating system’s abstraction for a running program. Multiple processes can run concurrently on the same system, and each process appears to have exclusive use of the hardware.
By [WHAT?], we mean that the instructions of one process are interleaved with the instructions of another process.
concurrently
a
single CPU can appear to execute multiple processes concurrently by having the
processor switch among them. The operating system performs this interleaving
with a mechanism known as [WHAT?].
context switching
uniprocessor system
containing a single CPU.
The operating system keeps track of all the state information that the process needs in order to run. This state, which is known as the [WHAT?], includes information such as the current values of the PC, the register file, and the contents of [WHAT?].
context

main memory
context switch
When the operating system decides to transfer control from the current process to some new process, it performs a context switch by saving the context of the current process, restoring the context of the new process, and then passing control to the new process. The new process picks up exactly where it left off.
When we ask it to run ...[a] program, the shell carries out our request by invoking a special function known as a [WHAT?].
system call
threads
Although we normally think of a process as having a single control flow, in modern
systems a process can actually consist of multiple execution units, called threads,
each running in the context of the process and sharing the same code and global
data.
[BLANK] is an abstraction that provides each process with the illusion that it has exclusive use of the main memory.
Virtual memory
virtual address space
Each process has the same uniform view of
memory, which is known as its virtual address space.
Heap
Heap. The code and data areas are followed immediately by the run-time heap.
Unlike the code and data areas, which are fixed in size once the process begins running, the heap expands and contracts dynamically at run time as a result of calls to C standard library routines such as malloc and free.
Shared libraries
Near the middle of the address space is an area that holds the code and data for shared libraries such as the C standard library and the math library. The notion of a shared library is a powerful but somewhat difficult concept.
Stack
At the top of the user’s virtual address space is the user stack that the compiler uses to implement function calls. Like the heap, the user stack expands and contracts dynamically during the execution of the program. In particular, each time we call a function, the stack grows. Each time we return from a function, it contracts.
Kernel virtual memory
The kernel is the part of the operating system that is always resident in memory. The top region of the address space is reserved for the kernel. Application programs are not allowed to read or write the contents of this area or to directly call functions defined in the kernel code.
A [WHAT?] is a sequence of bytes, nothing more and nothing less.
file
From the point of view of an individual system, the network can be viewed as just another I/O device
When the system copies a sequence of bytes from main memory to the network adapter, the data flows across the network to another machine, instead of, say, to a local disk drive. Similarly, the system can read data sent from other machines and copy this data to its main memory.
An important idea to take away from this discussion is that a system is more than just hardware. It is:
a collection of intertwined hardware and systems software that must cooperate in order to achieve the ultimate goal of running application programs.
concurrency and parallelism (important theme)
We use the term concurrency to refer to the general concept of a system with multiple, simultaneous activities, and the term parallelism to refer to the use of concurrency to make a system run faster.
Thread-Level Concurrency
Building on the process abstraction, we are able to devise systems where multiple
programs execute at the same time, leading to concurrency. With threads, we
can even have multiple control flows executing within a single process.
Example of Thread-Level Concurrency
Traditionally, this concurrent execution was only simulated, by having a single computer rapidly switch among its executing processes, much as a juggler keeps multiple balls flying through the air.
When we construct a system consisting of multiple processors all under the control of a single operating system kernel, we have a [WHAT?].
multiprocessor system
Hyperthreading, sometimes called [blank blank blank], is a technique that allows a single CPU to execute multiple flows of control.
simultaneous multi-threading