• Two systems can have the exact same hardware, software components, and
applications, but provide different levels of protection because of the different
security policies and security models the two systems were built upon.
• A CPU contains a control unit, which controls the timing of the execution of
instructions and data, and an ALU, which performs mathematical functions
and logical operations.
• Most systems use protection rings. The more privileged processes run in the
lower-numbered rings and have access to all or most of the system resources.
Applications run in higher-numbered rings and have access to a smaller
amount of resources.
• Operating system processes are executed in privileged or supervisor mode, and
applications are executed in user mode, also known as “problem state.”
• Secondary storage is nonvolatile and can be a hard drive, CD-ROM drive,
floppy drive, tape backup, or a jump drive.
• Virtual storage combines RAM and secondary storage so the system seems to
have a larger bank of memory.
• A deadlock occurs when two processes each hold a resource the other needs
and each waits for the other to release it, so neither can proceed.
• Security mechanisms can focus on different issues, work at different layers,
and vary in complexity.
• The more complex a security mechanism is, the less assurance it can
usually provide.
• Not all system components fall under the trusted computing base (TCB),
which includes only those system components that enforce the security policy
directly and protect the system. These components are within the security
perimeter.
• Components that make up the TCB are hardware, software, and firmware that
provide some type of security protection.
• A security perimeter is an imaginary boundary that has trusted components
within it (those that make up the TCB) and untrusted components outside it.
• The reference monitor concept is an abstract machine that ensures all subjects
have the necessary access rights before accessing objects. Therefore, it mediates
all accesses to objects by subjects.
• The security kernel is the mechanism that actually enforces the rules of the
reference monitor concept.
• The security kernel must isolate processes carrying out the reference monitor
concept, must be tamperproof, must be invoked for each access attempt, and
must be small enough to be properly tested.
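The mediation and auditing duties of the reference monitor can be sketched in a few lines. This is a hypothetical illustration only; the subject, object, and ACL names are invented, and a real security kernel enforces this in hardware-protected kernel code, not application Python.

```python
# Toy reference monitor: every access attempt is mediated against an ACL
# and recorded, so no subject can reach an object unchecked.
AUDIT_LOG = []

def reference_monitor(subject, obj, right, acl):
    # Complete mediation: decide from the ACL, never bypassable.
    decision = right in acl.get((subject, obj), set())
    # Accountability: log every attempt, allowed or denied.
    AUDIT_LOG.append((subject, obj, right, decision))
    return decision
```

The key property is that the function is invoked for every access attempt and records each decision, mirroring the "always invoked" and auditing requirements above.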
• A security domain is all the objects available to a subject.
• Processes need to be isolated, which can be done through segmented memory
addressing, encapsulation of objects, time multiplexing of shared resources,
naming distinctions, and virtual mapping.
• The level of security a system provides depends upon how well it enforces the
security policy.
• A multilevel security system processes data at different classifications (security
levels), and users with different clearances (security levels) can use the system.
• Processes should be assigned least privilege so they have just enough system
privileges to fulfill their tasks and no more.
• Some systems provide security at different layers of their architectures, which
is called layering. This separates the processes and provides more protection
for them individually.
• Data hiding occurs when processes work at different layers and have layers of
access control between them. A process needs to know only how to
communicate with another process's interface.
• A security model maps the abstract goals of a security policy to computer
system terms and concepts. It gives the security policy structure and provides
a framework for the system.
• A closed system is often proprietary to the manufacturer or vendor, whereas
an open system is built on published standards and allows for more
interoperability.
• The Bell-LaPadula model deals only with confidentiality, while the Biba and
Clark-Wilson models deal only with integrity.
• A state machine model deals with the different states a system can enter. If
a system starts in a secure state, all state transitions take place securely, and
the system shuts down and fails securely, the system will never end up in an
insecure state.
• A lattice model provides an upper bound and a lower bound of authorized
access for subjects.
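The bounds idea can be sketched as a simple ordered check. The level names and ordering below are illustrative assumptions, not part of any real lattice model specification.

```python
# Hypothetical ordered set of security levels forming a simple lattice.
ORDER = {"public": 0, "internal": 1, "secret": 2, "top_secret": 3}

def within_bounds(level: str, lower: str, upper: str) -> bool:
    # A subject is authorized only between its lower and upper bounds.
    return ORDER[lower] <= ORDER[level] <= ORDER[upper]
```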
• An information flow security model does not permit data to flow to an object
in an insecure manner.
• The Bell-LaPadula model has a simple security rule, which states that a subject
cannot read data at a higher level (no read up). The *-property rule states that
a subject cannot write to an object at a lower level (no write down). The
strong star property rule dictates that a subject can read and write objects
only at its own security level.
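The three Bell-LaPadula rules reduce to ordered comparisons of security levels. This is a minimal sketch; the level names and numeric ordering are assumptions for illustration.

```python
# Hypothetical clearance/classification ordering for Bell-LaPadula checks.
LEVELS = {"unclassified": 0, "confidential": 1, "secret": 2, "top_secret": 3}

def can_read(subject_level: str, object_level: str) -> bool:
    # Simple security rule: no read up.
    return LEVELS[subject_level] >= LEVELS[object_level]

def can_write(subject_level: str, object_level: str) -> bool:
    # *-property rule: no write down.
    return LEVELS[subject_level] <= LEVELS[object_level]

def can_read_write(subject_level: str, object_level: str) -> bool:
    # Strong star property: read/write only at the subject's own level.
    return LEVELS[subject_level] == LEVELS[object_level]
```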
• The Biba model does not let subjects write to objects at a higher integrity level
(no write up), and it does not let subjects read data at a lower integrity level
(no read down). This is done to protect the integrity of the data.
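Biba inverts the Bell-LaPadula comparisons because it orders integrity, not confidentiality. The integrity level names below are illustrative assumptions.

```python
# Hypothetical integrity ordering for Biba model checks.
INTEGRITY = {"low": 0, "medium": 1, "high": 2}

def biba_can_read(subject_level: str, object_level: str) -> bool:
    # No read down: a subject may not read lower-integrity data.
    return INTEGRITY[subject_level] <= INTEGRITY[object_level]

def biba_can_write(subject_level: str, object_level: str) -> bool:
    # No write up: a subject may not write to higher-integrity objects.
    return INTEGRITY[subject_level] >= INTEGRITY[object_level]
```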
• The Bell-LaPadula model is used mainly in military systems. The Biba and
Clark-Wilson models are used in the commercial sector.
• The Clark-Wilson model dictates that subjects can only access objects through
applications. This model also illustrates how to provide functionality for
separation of duties and requires auditing tasks within software.
• If a system is working in a dedicated security mode, it only deals with one
level of data classification, and all users must have this level of clearance to
be able to use the system.
• Compartmented and multilevel security modes enable the system to process
data classified at different classification levels.
• Trust means that a system uses all of its protection mechanisms properly
to process sensitive data for many types of users. Assurance is the level of
confidence that the protection mechanisms will behave correctly and
predictably in all circumstances.
• The Orange Book, also called Trusted Computer System Evaluation Criteria
(TCSEC), was developed to evaluate systems built to be used mainly by the
military. Its use was expanded to evaluate other types of products.
• In the Orange Book, D classification means a system provides minimal
protection and is used for systems that were evaluated but failed to meet
the criteria of higher divisions.
• In the Orange Book, the C division deals with discretionary protection, and
the B division deals with mandatory protection (security labels).
• In the Orange Book, the A classification means the system’s design and level of
protection are verifiable and provide the highest level of assurance and trust.
• In the Orange Book, C2 requires object reuse protection and auditing.
• In the Orange Book, B1 is the first rating that requires security labels.
• In the Orange Book, B2 requires security labels for all subjects and devices, the
existence of a trusted path, routine covert channel analysis, and the provision
of separate administrator functionality.
• The Orange Book deals mainly with stand-alone systems, so a range of books
were written to cover many other topics in security. These books are called the
Rainbow Series.
• ITSEC evaluates the assurance and functionality of a system’s protection
mechanisms separately, whereas TCSEC combines the two into one rating.
• The Common Criteria was developed to provide globally recognized
evaluation criteria and is in use today. It combines sections of TCSEC,
ITSEC, CTCPEC, and the Federal Criteria.
• The Common Criteria uses protection profiles and ratings from EAL1 to EAL7.
• Certification is the technical evaluation of a system or product and its security
components. Accreditation is management’s formal approval and acceptance
of the security provided by a system.
• A covert channel is an unintended communication path that transfers data in
a way that violates the security policy. There are two types: timing and storage
covert channels.
• A covert timing channel enables a process to relay information to another
process by modulating its use of system resources.
• A covert storage channel enables a process to write data to a storage medium
so another process can read it.
• A maintenance hook is developed to let a programmer into the application
quickly for maintenance. It should be removed before the application goes
into production, or it poses a serious security risk.
• An execution domain is where instructions are executed by the CPU. The
operating system’s instructions are executed in a privileged mode, and
applications’ instructions are executed in user mode.
• Process isolation ensures that multiple processes can run concurrently and
the processes will not interfere with each other or affect each other’s memory
segments.
• The only processes that need complete system privileges are located in the
system’s kernel.
• TOC/TOU stands for time-of-check/time-of-use. This is a class of
asynchronous attacks.
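A classic TOC/TOU example is the gap between checking a file and opening it. The sketch below is hypothetical: the file paths are invented, and the "attacker window" comment marks where a race could occur on a real system.

```python
import os

def vulnerable_read(path):
    # Time-of-check: test permissions first.
    if os.access(path, os.R_OK):
        # Attacker window: the file at `path` could be swapped (e.g., for a
        # symlink to a sensitive file) between the check and the use.
        with open(path) as f:  # time-of-use
            return f.read()
    return None

def safer_read(path):
    # Safer pattern: open directly and handle failure, so the check and the
    # use are a single atomic step.
    try:
        with open(path) as f:
            return f.read()
    except OSError:
        return None
```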
• The Biba model addresses the first goal of integrity, which is to prevent
unauthorized users from making modifications.
• The Clark-Wilson model addresses all three integrity goals: prevent unauthorized
users from making modifications, prevent authorized users from making
improper modifications, and maintain internal and external consistency.
• In the Clark-Wilson model, users can access and manipulate objects only
through programs. It uses the access triple of subject-program-object.
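The access triple can be sketched as a set-membership check. The subject, program, and object names below are invented for illustration; a real system would derive them from its security policy.

```python
# Hypothetical Clark-Wilson access triples: (subject, program, object).
# A subject may touch an object only through an authorized program.
TRIPLES = {
    ("alice", "payroll_app", "salary_table"),
    ("bob", "audit_app", "salary_table"),
}

def allowed(subject: str, program: str, obj: str) -> bool:
    # Access is granted only when the exact triple is authorized.
    return (subject, program, obj) in TRIPLES
```

Because access is only ever granted through a named program, the model supports separation of duties (different subjects bound to different programs) and gives a natural point to insert the auditing the model requires.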