atomic value
An attribute's value that cannot be divided into multiple pieces of information.
base tables
The relations actually defined within a database
black-box testing
Testing an application without internal knowledge of the code.
cardinality
The number of rows in a relation.
column (attribute)
A property of a relation.
commit
The final step in the successful completion of a database transaction.
composite key
A primary or foreign key that is made up of multiple attributes.
concurrency control
The management of simultaneous transactions so that the transactions do not conflict with one another.
Data Control Language (DCL)
A language used to control access to data in a database.
Data Definition Language (DDL)
A descriptive language used to create and name entities and the relationships between them in a database.
data dictionary
A set of tables created as a result of DDL operations that hold metadata about a database.
Data Manipulation Language (DML)
A language that provides the interface necessary to perform data operations on data within a database.
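A minimal sketch of DDL and DML statements, using Python's built-in sqlite3 module; the employee table and its columns are hypothetical examples, and DCL statements such as GRANT and REVOKE are omitted because SQLite does not support them.

    import sqlite3

    conn = sqlite3.connect(":memory:")

    # DDL: define a hypothetical entity (table), its attributes and their types
    conn.execute("CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT NOT NULL)")

    # DML: perform data operations on the data within the table
    conn.execute("INSERT INTO employee (emp_id, name) VALUES (1, 'Ada')")
    for row in conn.execute("SELECT emp_id, name FROM employee"):
        print(row)  # (1, 'Ada')

    conn.close()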
data model
A formal method of describing data and data relationships.
database
A collection of data organized to provide efficient management of the data as well as easy access to and updating of the data.
database administrator
The person who directs or performs all activities related to maintaining a database environment.
database management system
Software that supports the creation and management of databases.
deadlock
A situation in which two or more transactions are simultaneously waiting for another transaction to release a lock on a data item.
decomposition
The process of creating new relations from existing relations based on functional dependencies within the original relation.
degree
The number of attributes in a relation.
deletion anomaly
A circumstance in which wanted data is unintentionally deleted from a relation.
denormalization
The rejoining of relations that were decomposed during normalization.
determinant
An attribute or group of attributes on which another attribute is functionally dependent.
domain
The allowable values for an attribute.
domain constraints
A set of restrictions on the values allowed for the attributes of relations.
entity
A uniquely identifiable element about which data is stored in a database.
entity integrity
A relational constraint stating that no part of a primary key may be null.
entity-relationship (ER) modeling
A conceptual database design method that describes relationships between entities.
fat client
A client that performs the bulk of the data-processing operations.
flat-file database
A file containing records that have no structured interrelationship.
foreign key
A column or concatenated columns whose values match the primary key of another relation in the same database.
functional dependency
A relationship between two attributes indicating that for each unique value of attribute A, there is only one associated value of attribute B.
insertion anomaly
A circumstance in which data may not be entered into a relation because a complete primary key is not available.
intermediate entity
An entity created as a result of the decomposition of a relation while resolving a many-to-many relationship.
locking
A method used to monitor concurrent access to data in which one transaction is denied access to data until another transaction is complete.
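A minimal sketch of locking, using two sqlite3 connections to one hypothetical database file. SQLite locks the whole database rather than individual rows, so this only approximates the finer-grained locking of larger systems.

    import sqlite3

    writer = sqlite3.connect("demo.db", timeout=0.5, isolation_level=None)
    other = sqlite3.connect("demo.db", timeout=0.5, isolation_level=None)
    writer.execute("CREATE TABLE IF NOT EXISTS t (x INTEGER)")  # hypothetical table

    # The first transaction acquires a write lock and holds it
    writer.execute("BEGIN IMMEDIATE")
    writer.execute("INSERT INTO t VALUES (1)")

    # A second transaction is denied access until the first completes
    try:
        other.execute("BEGIN IMMEDIATE")
    except sqlite3.OperationalError as exc:
        print(exc)  # database is locked

    writer.execute("COMMIT")  # releasing the lock lets other transactions proceed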
normal form
A number of special properties or constraints that a relation must possess to achieve certain desired goals, such as minimizing unwanted data redundancy.

Three normal forms were part of Codd's original proposal. They were named first (1NF), second (2NF) and third (3NF) normal forms. Later, a more rigorous definition of 3NF was proposed by R. Boyce and Codd and became known as Boyce-Codd normal form (BCNF). A fourth and fifth normal form (4NF and 5NF) were subsequently introduced. These two forms address situations that rarely occur, so this lesson will not include information about 4NF or 5NF.
normalization
The process of organizing and refining relations within a relational database. Normalization usually has the effect of reducing the duplication of data items within the database, at times reducing the amount of storage space needed for the base tables, while also addressing insertion, deletion and update anomalies. Normalization often results in the creation of additional tables to achieve these goals.

Dr. E.F. Codd first developed the normalization process.
null
A database value meaning "unknown."
parent key
The primary key referenced by a foreign key.
partial functional dependency
In a functional dependency, when B is functionally dependent on A, and an attribute can be removed from A and the dependency still exists, B is said to be partially dependent on A.
primary key
An attribute (or set of attributes) whose value uniquely identifies each row in a relation.
query language
The part of a nonprocedural DML that retrieves data from a database.
recursive relationship
An entity relationship in which an entity is involved in a relationship with itself.
referential integrity
A relational constraint stating that every foreign key that is not null must reference an existing primary key value.
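A minimal sketch of referential integrity enforcement, using Python's sqlite3 module with hypothetical owner and pet tables:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")  # SQLite enforces foreign keys only when enabled

    # hypothetical parent and child tables
    conn.execute("CREATE TABLE owner (owner_id INTEGER PRIMARY KEY)")
    conn.execute("CREATE TABLE pet (pet_id INTEGER PRIMARY KEY,"
                 " owner_id INTEGER NOT NULL REFERENCES owner(owner_id))")

    conn.execute("INSERT INTO owner VALUES (1)")
    conn.execute("INSERT INTO pet VALUES (10, 1)")       # parent key exists: accepted

    try:
        conn.execute("INSERT INTO pet VALUES (11, 99)")  # owner 99 does not exist
    except sqlite3.IntegrityError as exc:
        print(exc)  # FOREIGN KEY constraint failed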
relation
A two-dimensional table consisting of columns and rows in a relational database.
relational database
A collection of data organized as a set of formally described tables.
requirements document
A formal definition of the specifications for software, an application or a database.
rollback
The undoing of a partially completed database transaction that is determined to have failed.
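A minimal sketch of commit and rollback, using Python's sqlite3 module with a hypothetical account table: the successful transaction is committed, and the failed one is rolled back, leaving the data unchanged.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE account (id INTEGER PRIMARY KEY, balance INTEGER)")
    conn.execute("INSERT INTO account VALUES (1, 100)")
    conn.commit()  # the transaction succeeded: make its changes permanent

    try:
        conn.execute("UPDATE account SET balance = balance - 500 WHERE id = 1")
        raise RuntimeError("suppose a later step of the same transaction fails")
    except RuntimeError:
        conn.rollback()  # undo the partially completed transaction

    print(conn.execute("SELECT balance FROM account").fetchone())  # (100,)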
row (tuple)
A single set of attribute values in a relation.
schema
The entire organization or structure of a database.
serializability
The process used by a DBMS to determine whether a set of transactions will produce the same result as some serial execution of the same transactions.
strong entity
An entity whose data is meaningful without having to reference another entity in the data model.
Structured Query Language
The standard language used to create, query and maintain relational databases.
system catalog
Part of a database that describes the data in the database.
thin client
A client designed to be small so that the bulk of the data processing occurs on the server.
timestamp
A unique identifier assigned by a DBMS to a transaction.
transaction
A single event or sequence of actions such as a database update or data retrieval operation that is treated as a unit, both to satisfy a request and to ensure database integrity.
transitive dependency
In a functional dependency, if B is functionally dependent on A, and C is functionally dependent on B, C is then said to be transitively dependent on A.
update anomaly
A circumstance in which redundant data in a relation may not be properly updated.
view
A data operation given a name and stored in a database that creates virtual tables when invoked by a user.
virtual tables
Database relations created as a result of a data manipulation operation and not a permanent part of the database.
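A minimal sketch of a view producing a virtual table, using Python's sqlite3 module with hypothetical employee data:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE employee (emp_id INTEGER PRIMARY KEY, name TEXT, salary INTEGER);
        INSERT INTO employee VALUES (1, 'Ada', 90000), (2, 'Bob', 60000);

        -- a named, stored data operation over a hypothetical table
        CREATE VIEW high_earner AS
            SELECT name FROM employee WHERE salary > 75000;
    """)

    # invoking the view produces a virtual table; no rows are stored for it
    print(conn.execute("SELECT * FROM high_earner").fetchall())  # [('Ada',)]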
weak entity
An entity that must reference another entity for its data to be meaningful.
white-box testing
The use of internal knowledge of the code for an application when conducting tests.
Problems with file-based DB systems
repetition of data, separation of data, incompatibility of files, data dependence, and inflexibility of the application programs that access the data.
The main advantages a DBMS offers are
management of data redundancy, concurrency control, consistency of data, data sharing, and increased data integrity, security, and access.
The main disadvantages of a DBMS are
the cost of the software (and possibly hardware), and the complexity of the software.
Management of data redundancy:
As has been discussed, file-based systems tend to store duplicated data across multiple files. A DBMS integrates data into a central repository, thus eliminating unneeded repetition of data. It is important to remember, however, that the database does not completely eliminate data redundancy. In many cases, repetition of data is needed to improve performance, and, most important, redundant data is needed to model data relationships. A DBMS provides mechanisms for ensuring that updates to redundant data are properly synchronized.
Consistency of data:
This item is a result of the ability to control data redundancy. The risk of compromising the accuracy and consistency of the data is greatly reduced. If data appears in more than one location in the database, the DBMS can ensure that all instances of the data remain consistent with one another.
Data sharing:
In a file-based system, the files are usually owned and maintained by the particular department that uses them. Conversely, the database is owned by the entire organization. A DBMS allows authorized users to access the data more efficiently.
Increased data integrity:
Data integrity refers to the validity of the data in the database. A DBMS provides mechanisms for defining the type of data that can be held in a table or shared among tables. The DBMS is responsible for enforcing the constraints placed on the data in the database.
Increased data security:
A DBMS provides mechanisms to protect data from unauthorized use. Typically, the platform hosting the DBMS will have security measures in place, but most experts agree that you cannot have too much security. The DBA can define security measures, and the DBMS is responsible for enforcing those measures. These measures can be in the form of user names and passwords that identify authorized users of the database. The DBMS can further enforce restrictions on the type of operation a particular user can perform. And the DBMS can restrict access to particular areas of the database for certain users.
Adherence to standards:
The use of a DBMS frees application programmers from varying methods of data access and allows them to take advantage of standard data access methods. The use of SQL is an example of a standard afforded by a DBMS.
Increased data access:
As data is gathered into a central repository, data that may have been unavailable to certain users becomes available through the use of a DBMS. Also, many database management systems allow users to execute queries against the database without the use of a written application to access the database.
Increased productivity:
With a file-based system, the application programmer must provide low-level functionality within the data access programs. A DBMS provides many low-level data handling routines that would normally need to be written into an application. Application programmers are thus free to concentrate on the functionality needed by the users. Many database management systems also provide tools that aid in the development of database applications. These tools can increase the application programmer's productivity by helping in the rapid development of applications.
Managed concurrency:
As mentioned, file-based systems may not allow concurrent use of data resources. If the system does allow concurrent use of data, the simultaneous operations can result in the loss of data or the loss of data integrity. A DBMS provides mechanisms for handling concurrent data accesses. Concurrency will be discussed in a later lesson.
Increased data backup and recovery:
A DBMS provides mechanisms for both backing up a database and recovering a database in the event of hardware or software failure. In file-based systems, these responsibilities are placed on the user or application programmer.
Designing a database should include
creating a database strategy, defining the scope of the database application, creating a database requirements document, designing the database, selecting a DBMS, designing the database application, creating prototypes of the database and database application, pre-live and post-live testing of the database and database application, implementing the database and database application, converting legacy data and loading new data into the database, and maintaining the database and the database application.
A database requirements document includes
a well-defined mission statement; the data and data processing requirements for all participating departments and individuals; a set of objectives and requirements for the database system from the user's perspective; possible constraints placed on the database designer; a description of the graphical user interface; descriptions of operational scenarios; other issues such as security, reliability, maintainability, portability, and extensibility; and optionally a preliminary schedule and budget.
Define the scope of the database application
This step involves designating the boundaries of the database application and how it may interface with other parts of the organization's Information Technology (IT) infrastructure. This activity involves defining the users of the database and its corresponding application. This step should also take into account any possible future users of the database, as well as identify the major areas of the database application.
Database planning strategy
Define the scope of the database application
Create a database requirements document
Design the database
Select a DBMS
Design the database application
Create database and database application prototypes
Test the database and database application
Implement the database and database application
Convert legacy data and load new data into the database
Maintain the database and the database application
A requirements document can contain the following types of information:
• A well-defined mission statement for the organization in relation to the database. A mission statement declares the specific purpose of the database in broad terms without defining specific tasks pertinent to the implementation of the database.
• The data requirements for all participating departments, individuals, and so forth.
• The data processing requirements for all participating departments, individuals, and so forth.
• A set of objectives and requirements for the database system from the user's perspective. The document may include a wish list of desirable characteristics, along with more feasible solutions that are in accordance with the organization's business objectives.
• Possible constraints placed on the database designer, including speed requirements, hardware platforms, and so on.
• A description of the graphical user interface (GUI). If the GUI is menu-driven, a description of all menus and their components should be included.
• Descriptions of operational scenarios. A scenario is a proposed specific use of the database system. More specifically, a scenario is a description of one or more beginning-to-end transactions involving the database system and its environment.
• Other issues may be included, such as: security, reliability, maintainability, portability and extensibility.
• Optionally, a preliminary schedule and a preliminary budget.
A database design methodology can be categorized into three distinct phases
conceptual, logical, and physical
Conceptual Database Design consists of the following
• Identifying entities.
• Identifying attributes and attribute domains for entities.
• Identifying relationships.
• Identifying candidate and primary keys for entities.
• Creating an entity-relationship (ER) diagram.
• Review of the ER model by the users and the design team.
The building blocks of an ER model
entities, attributes and relationships between the entities
When you are using an IE diagram, the cardinality of a relationship is indicated at the ends of the lines that connect the entities. The characters used at the ends of the lines are as follows:
• || — One and only one
• 0| — Zero or one
• >| — One or more
• >0 — Zero, one or more
Difference between ER and IE when indicating relationships
The Chen ER model uses diamond figures within the diagram to indicate a relationship. The Information Engineering (IE) model uses specific characters at the end of the connecting lines to indicate the cardinality of the relationship.
Identifying candidate and primary keys for entities
• The candidate key least likely to have its value changed.
• The candidate key least likely to lose its uniqueness.
• The candidate key that contains the fewest characters.
• The candidate key that is easiest for the user to use.
Identifying attributes and attribute domains for entities
Within the document, the database designer will look for nouns or noun phrases that are properties, qualities or characteristics of the entities.
Identifying relationships
Relationships are usually expressed using verbal expressions within the requirements document.
First Normal Form 1NF
A relation is said to be in first normal form if the data is held in a two-dimensional table with each attribute represented by an atomic value. In other words, the intersection of a row and a column must contain an indivisible value.
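A minimal sketch of placing a relation into 1NF, using hypothetical student tables in Python's sqlite3 module; a multivalued phones column is decomposed so that every value is atomic.

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Not in 1NF (hypothetical): a phones value such as '555-0100, 555-0199'
        -- can be divided into multiple pieces of information

        -- In 1NF: every row/column intersection holds an atomic value
        CREATE TABLE student (student_id INTEGER PRIMARY KEY, name TEXT);
        CREATE TABLE student_phone (
            student_id INTEGER REFERENCES student(student_id),
            phone      TEXT,
            PRIMARY KEY (student_id, phone));
    """)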
First normal form anomalies
Relations that are in 1NF are often subject to insertion, deletion and update anomalies.
Second Normal Form
A relation is said to be in second normal form if the relation is in 1NF and all attributes that are not part of the primary key are completely functionally dependent on the primary key. Functional dependency is the fundamental concept in placing relations into 2NF. A functional dependency can be thought of as a one-way relationship between two attributes in a relation: attribute B is completely functionally dependent on attribute A if each unique value of A is associated with exactly one value of B.

Essentially, then, placing relations into 2NF involves identifying all functional dependencies within the relation. After the functional dependencies have been identified, new relations need to be created from the existing relation, removing the undesirable functional dependencies and placing them into new relations. The act of creating additional relations from existing relations is formally known as decomposition.
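A minimal sketch of 2NF decomposition, with a hypothetical order_line relation whose product_name attribute depends on only part of the composite key:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Before (hypothetical): order_line(order_id, product_id, qty, product_name)
        -- with primary key (order_id, product_id); product_name depends on
        -- product_id alone, so it is not fully dependent on the key

        -- After decomposition, both relations are in 2NF:
        CREATE TABLE product (
            product_id   INTEGER PRIMARY KEY,
            product_name TEXT);
        CREATE TABLE order_line (
            order_id   INTEGER,
            product_id INTEGER REFERENCES product(product_id),
            qty        INTEGER,
            PRIMARY KEY (order_id, product_id));
    """)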
Second normal form anomalies
Relations that are in 2NF can still be subject to certain anomalies. For example, what if the owner attribute needs to be updated in the Owner_Info relation? The update would need to occur in each row in which that owner's information occurs. If the update does not occur in each affected row, the information in the database will become inconsistent. Third normal form is designed to address this type of anomaly.
Third Normal Form
A relation is said to be in 3NF if it is in 2NF and no attributes that are not part of the primary key are transitively dependent on the primary key. The key, then, to moving 2NF relations into 3NF is removing any transitive dependencies that may exist in the relations.
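A minimal sketch of moving a relation into 3NF, with a hypothetical student relation containing a transitive dependency:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        -- Before (hypothetical): student(student_id, dept_id, dept_name);
        -- student_id -> dept_id and dept_id -> dept_name, so dept_name is
        -- transitively dependent on the primary key

        -- After removing the transitive dependency, both relations are in 3NF:
        CREATE TABLE dept (
            dept_id   INTEGER PRIMARY KEY,
            dept_name TEXT);
        CREATE TABLE student (
            student_id INTEGER PRIMARY KEY,
            dept_id    INTEGER REFERENCES dept(dept_id));
    """)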
Boyce-Codd Normal Form (BCNF)
To ascertain whether a relation is in BCNF, the functional dependencies of a relation must be determined. Then it must be determined whether all the determinants are candidate keys. Remember that a determinant is the attribute (or attributes) on which another attribute is fully functionally dependent. Thus a relation is in BCNF if and only if all determinants are candidate keys.
Violation of BCNF criteria usually occurs in relations that have two or more composite candidate keys, or in relations whose composite candidate keys share attributes.
Logical Database Design
The logical database design phase involves refining the conceptual model already created. The resulting logical data model is then used to implement the physical design phase. As with the conceptual design phase, the logical design phase is carried out without concern for any particular DBMS or any other physical design issues.
The first step in the logical design phase is creating the logical data models for the various user views of the data for the enterprise.
Logical data models
Logical data models are created using the conceptual data models created in the previous phase of the database design life cycle. During the logical design phase, the conceptual data model is refined, removing or adjusting relationships that may cause data manipulation anomalies.

The refinement process comprises the following activities:
• Identify one-to-one, one-to-many and many-to-many relationships.
• Resolve any many-to-many relationships.
• Identify and resolve any complex relationships.
• Identify and resolve any recursive relationships.
• Identify any relationships that have attributes.
• Create entities from relationships that have attributes.
• Identify and resolve multivalued attributes.
• Identify and remove any redundant relationships.
Resolve any many-to-many relationships
As m:n relationships are not allowed in the relational data model, they must be eliminated. Dealing with m:n relationships is similar to dealing with some functional dependencies during the normalization process. To resolve an m:n relationship, relations are decomposed, creating new entities. An entity created to resolve an m:n relationship is known as an intermediate entity.
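A minimal sketch of resolving an m:n relationship with an intermediate entity, using hypothetical student and course entities:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
        CREATE TABLE student (student_id INTEGER PRIMARY KEY);
        CREATE TABLE course  (course_id  INTEGER PRIMARY KEY);

        -- intermediate entity (hypothetical): replaces the m:n student/course
        -- relationship with two one-to-many relationships
        CREATE TABLE enrollment (
            student_id INTEGER REFERENCES student(student_id),
            course_id  INTEGER REFERENCES course(course_id),
            PRIMARY KEY (student_id, course_id));
    """)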
Identify and resolve any complex relationships
Complex relationships occur when more than two entities participate in a relationship. Often, complex relationships contain m:n relationships, and the relations involved should be decomposed to create intermediate entities.
Identify and resolve any recursive relationships
A recursive relationship occurs when an entity is involved in a relationship with itself.
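A minimal sketch of a recursive relationship, with a hypothetical employee entity that references itself:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("""
        CREATE TABLE employee (
            emp_id     INTEGER PRIMARY KEY,
            name       TEXT,
            -- recursive relationship (hypothetical): an employee is managed
            -- by another employee; NULL for the top manager
            manager_id INTEGER REFERENCES employee(emp_id))""")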
Identify any relationships that have attributes
If a relationship in the conceptual data model has identifiable attributes, an entity needs to be created to absorb the attributes.
Identify and resolve multivalued attributes
If an attribute for an entity contains more than one value, the attribute should be decomposed and a new entity created to hold the multiple values. This should be familiar, because it is actually the first step in the normalization process. In other words, the relation is being placed into 1NF.
Identify and remove any redundant relationships
If the same data can be derived from multiple relationships, a redundant relationship may be present. Be careful when identifying redundant relationships because often the same data may be present in different contexts within the enterprise. In those cases, the relationships might not be redundant. So, it is important to thoroughly examine the meaning of each relationship when deciding if a relationship is redundant.
Creating a Logical Data Model
Once the refinement process has been completed, you will be able to create a logical data model from the refined conceptual data model. At this point, all the following should be identified and documented:
• The base relations for the database, including strong and weak entities with all their attributes.
• The primary key for each relation.
• All foreign keys for the relations.
• The cardinality of all relations.
After this information is ascertained, the relations should undergo the normalization process described in a previous lesson. Typically, the normalization process has already been started when you resolve the undesirable relationships between the relations.
Validating the Logical Data Model
After the logical data model has been completed, the next step is to validate the model against the documented user operations and system requirements. This step involves reviewing the database requirements document to ensure that all the specified data has been included in the logical data model.
Typically, the database designer will use the ER diagram to manually perform all the required data operations to ensure that they are all possible. If any required data operations are unable to be performed, a problem with the data model is likely. Possibly an entity has been omitted, or important attributes have been omitted from an entity. In any case, the logical data model needs to be tested, and any problems that are found need to be resolved. The validation process can be performed by writing each required transaction to ensure that all the data is present to carry out that operation. Or the database designer can diagram each transaction using techniques similar to those used for a Chen ER diagram.
Defining Integrity Constraints
An important step in the logical design process is to verify that all necessary integrity constraints have been identified and documented. Recall from an earlier lesson that several types of integrity constraints can be applied to a relational database.
Types of integrity constraints
Necessary data
Domain constraints
Entity integrity
Referential integrity
Necessary data:
This refers to any attributes that cannot be left blank or null under any circumstances.
Domain constraints:
Every column in a relation must have a defined domain.
Entity integrity:
This step ensures that all primary key attributes are not null.
Referential integrity:
This step ensures that all foreign keys reference an existing parent key in a parent relation.
Enterprise constraints:
This step involves ensuring that all the business rules concerning the data have been enforced. Again, the database requirements document should contain this type of information, and should be checked to determine whether any additional constraints on the data need to be put in place.
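A minimal sketch showing these constraint types declared at table-creation time, with hypothetical dept and employee tables in Python's sqlite3 module:

    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("PRAGMA foreign_keys = ON")
    conn.executescript("""
        CREATE TABLE dept (dept_id INTEGER PRIMARY KEY);

        CREATE TABLE employee (
            emp_id  INTEGER PRIMARY KEY,               -- entity integrity: the key is never null
            name    TEXT NOT NULL,                     -- necessary data: cannot be left blank
            salary  INTEGER CHECK (salary >= 0),       -- domain constraint on allowable values
            dept_id INTEGER REFERENCES dept(dept_id),  -- referential integrity
            CHECK (salary <= 500000)                   -- a hypothetical enterprise (business-rule) constraint
        );
    """)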
Creating an Enterprise Data Model
The final step of the logical database design phase involves creating an enterprise data model from the various user views that may exist. In a large enterprise, this step is necessary to coordinate the various entities that are at the core of each user view. After combining the various user views, you may need to validate the enterprise data model by using the normalization process and by validating the data operations defined for the data model.
Creating an Enterprise Data Model Step 1
Combine various logical data models into an enterprise data model. To achieve this, the database designer must examine the names of all entities and their primary keys, as well as the names of all relationships. This is necessary to ascertain entities that (a) may be the same and (b) may have the same name but are really different entities. Entities that are found to be the same can be combined to form one entity. Entities that are similarly named, yet are different entities, need to be included after they are renamed.
Entities that are completely unique to any user view need to be included. All unique and similar entities should now be identified.
All relationships need to be examined to determine whether they are unique or are the same as relationships in other user views. Relationships with the same name and objective should be combined. Those with different names but the same objective should also be combined.
All foreign keys need to be checked, along with the referential integrity for each. All integrity constraints should then be checked to ensure that no conflicts occur with the originally documented integrity constraints.
The final enterprise data model should then be produced and the appropriate documentation created.
Creating an Enterprise Data Model Step 2
The enterprise data model should then be validated using the normalization process and by validating the data model against the required data operations for the enterprise.
Creating an Enterprise Data Model Step 3
Create the finalized ER diagram that will be used to represent the enterprise logical data model.
Creating an Enterprise Data Model Step 4
Allow the users to review and assess the finalized enterprise ER diagram. This step will help to ensure that the final logical data model is a complete and true representation of the enterprise data.