Acceptance Criteria

A key prerequisite for test planning is a clear understanding of what must be accomplished for the test project to be deemed successful.

Those things a user will be able to do with the product after a story is implemented (Agile).

Acceptance Testing

The objective of acceptance testing is to determine, throughout the development cycle, that all aspects of the development process meet the user's needs.

Act

If your checkup reveals that the work is not being performed according to plan or that results are not as anticipated, devise measures for appropriate action (Plan-Do-Check-Act).

Access Modeling

Used to verify that data requirements (represented in the form of an entity-relationship diagram) support the data demands of process requirements (represented in data flow diagrams and process specifications).

Active Risk

Risk that is deliberately taken on. For example, the choice to develop a new product that may not be successful in the marketplace.

Actors

Interfaces in a system boundary diagram (Use Cases).

Alternate Path

Additional testable conditions derived from the exceptions and alternative courses of the Use Case.

Affinity Diagram

A group process that takes large amounts of language data, such as a list developed by brainstorming, and divides it into categories.

Analogous

The analogy model is a nonalgorithmic costing model that estimates the size, effort, or cost of a project by relating it to another similar completed project. Analogous estimating takes the actual time and/or cost of a historical project as a basis for the current project.

Analogous Percentage Method

A common method for estimating test effort is to calculate the test estimate as a percentage of previous test efforts, using a predicted size factor (SF) (e.g., SLOC or FPA).
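
As a rough illustration (all numbers hypothetical), the calculation amounts to scaling a historical test effort by the ratio of the size factors:

```python
# Hypothetical illustration of the analogous percentage method.
# Historical project: 2,000 hours of development, 600 hours of test
# effort, at a size of 40,000 SLOC.
historical_dev_hours = 2000
historical_test_hours = 600
historical_size_sloc = 40000

# Percentage of development effort historically spent on testing.
test_percentage = historical_test_hours / historical_dev_hours  # 0.30

# Current project: predicted size factor of 55,000 SLOC.
current_size_sloc = 55000

# Scale the historical test hours by the ratio of the size factors.
scaled_test_hours = historical_test_hours * (current_size_sloc / historical_size_sloc)
print(f"Test effort estimate: {scaled_test_hours:.0f} hours "
      f"({test_percentage:.0%} of development effort, scaled by size)")
```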

Application

A single software product that may or may not fully support a business function.

Appraisal Costs

Resources spent to ensure a high level of quality in all development life cycle stages which includes conformance to quality standards and delivery of products that meet the user's requirements/needs. Appraisal costs include the cost of in-process reviews, dynamic testing, and final inspections.

Appreciative or Enjoyment Listening

One automatically switches to this type of listening when a situation is perceived as funny or when an explanatory example of a situation is about to be given. This listening type helps in understanding real-world situations.

Assumptions

A thing that is accepted as true.

Audit

This is an inspection/assessment activity that verifies compliance with plans, policies, and procedures, and ensures that resources are conserved. Audit is a staff function; it serves as the "eyes and ears" of management.

Backlog

Work waiting to be done; for IT this includes new systems to be developed and enhancements to existing systems. To be included in the development backlog, the work must have been cost-justified and approved for development. A product backlog in Scrum is a prioritized feature list containing short descriptions of all functionality desired in the product.

Baseline

A quantitative measure of the current level of performance.

Benchmarking

Comparing your company's products, services, or processes against best practices, or competitive practices, to help define superior performance of a product, service, or support process.

Benefits Realization Test

A test or analysis conducted after an application is moved into production to determine whether it is likely to meet the originating business case.

Black-Box Testing

A test technique that focuses on testing the functionality of the program, component, or application against its specification without knowledge of how the system is constructed; usually data or business process driven.

Bottom-Up

Begin testing from the bottom of the hierarchy and work up to the top. Modules are added in ascending hierarchical order. Bottom-up testing requires the development of driver modules, which provide the test input, call the module or program being tested, and display the test output.

Bottom-Up Estimation

In this technique, the cost of each single activity is determined with the greatest level of detail at the bottom level and then rolled up to calculate the total project cost.

Boundary Value Analysis

A data selection technique in which test data is chosen from the "boundaries" of the input or output domain classes, data structures, and procedure parameters. Choices often include the actual minimum and maximum, the maximum value ±1, and the minimum value ±1.
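
A minimal sketch of how boundary values might be derived for a numeric input range (the age limits are hypothetical):

```python
def boundary_values(minimum, maximum):
    """Return the classic boundary-value test points for a numeric range:
    the minimum and maximum themselves, plus the values just inside
    and just outside each boundary."""
    return sorted({minimum - 1, minimum, minimum + 1,
                   maximum - 1, maximum, maximum + 1})

# Hypothetical field that accepts ages from 18 through 65 inclusive.
print(boundary_values(18, 65))  # [17, 18, 19, 64, 65, 66]
```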

Brainstorming

A group process for generating creative and diverse ideas.

Branch Combination Coverage

Branch Condition Combination Coverage is a very thorough structural testing technique, requiring 2^n test cases to achieve 100% coverage of a condition containing n Boolean operands.
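
A small illustration of why the count is 2^n: enumerating every combination of three hypothetical Boolean operands yields 2^3 = 8 test cases:

```python
from itertools import product

# A condition with n Boolean operands needs 2**n test cases for branch
# condition combination coverage. Hypothetical condition with three
# operands: (logged_in and is_admin) or has_token.
operands = ["logged_in", "is_admin", "has_token"]

for combo in product([False, True], repeat=len(operands)):
    logged_in, is_admin, has_token = combo
    outcome = (logged_in and is_admin) or has_token
    print(dict(zip(operands, combo)), "->", outcome)

# 2**3 = 8 test cases cover every combination of the three operands.
```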

Branch/Decision Testing

A test method that requires that each possible branch on each decision point be executed at least once.

Bug

A general term for all software defects or errors.

Calibration

This indicates the movement of a measure so it becomes more valid, for example, changing a customer survey so it better reflects the true opinions of the customer.

Causal Analysis

The purpose of causal analysis is to prevent problems by determining the problem's root cause. This shows the relation between an effect and its possible causes to eventually find the root of the issue.

Cause and Effect Diagrams

A cause and effect diagram visualizes the results of brainstorming and affinity grouping, organized around the major causes of a significant process problem.

Cause-Effect Graphing

Cause-effect graphing is a technique which focuses on modeling the dependency relationships between a program's input conditions (causes) and output conditions (effects). CEG is considered a Requirements-Based test technique and is often referred to as Dependency modeling.

Change Management

Managing software change is a process. The process is the primary responsibility of the software development staff. They must assure that the change requests are documented, that they are tracked through approval or rejection, and then incorporated into the development process.

Check

Check to determine whether work is progressing according to the plan or whether the expected results are obtained. Check for performance of the set procedures, changes in conditions, or abnormalities that may appear. As often as possible, compare the results of the work with the objectives.

Check Sheets

A check sheet is a technique or tool to record the number of occurrences over a specified interval of time; a data sample to determine the frequency of an event.

Checklists

A series of probing questions about the completeness and attributes of an application system. Well-constructed checklists cause evaluation of areas that are prone to problems. A checklist both limits the scope of the test and directs the tester to the areas in which there is a high probability of a problem.

Checkpoint Review

Held at predefined points in the development process to evaluate whether certain quality factors (critical success factors) are being adequately addressed in the system being built. Independent experts conduct the reviews, for the purpose of identifying problems as early as possible.

Client

The customer that pays for the product received and receives the benefit from the use of the product.

CMMI-Dev

A process improvement model for software development. Specifically, CMMI for Development is designed to compare an organization's existing development processes to proven best practices developed by members of industry, government, and academia.

Coaching

Providing advice and encouragement to an individual or individuals to promote a desired behavior.

COCOMO II

The best recognized software development cost model is the Constructive Cost Model II (COCOMO II). COCOMO II is an enhancement of the original COCOMO model, extending it to include a wider collection of techniques and technologies. It provides support for OO software, business software, software created via spiral or evolutionary development models, and software using COTS application utilities.

Code Comparison

One version of source or object code is compared to a second version. The objective is to identify those portions of computer programs that have been changed. The technique is used to identify those segments of an application program that have been altered as a result of a program change.

Common Causes of Variation

Common causes of variation are typically due to a large number of small random sources of variation. The sum of these sources of variation determines the magnitude of the process's inherent variation due to common causes; the process's control limits and current process capability can then be determined.

Compiler-Based Analysis

Most compilers for programming languages include diagnostics that identify potential program structure flaws. Many of these diagnostics are warning messages requiring the programmer to conduct additional investigation to determine whether or not the problem is real. Problems may include syntax problems, command violations, or variable/data reference problems. These diagnostic messages are a useful means of detecting program problems, and should be used by the programmer.

Complete Test Set

A test set containing data that causes each element of a pre-specified set of Boolean conditions to be true. In addition, each element of the test set causes at least one condition to be true.

Completeness

The property that all necessary parts of an entity are included. Often, a product is said to be complete if it has met all requirements.

Complexity-Based Analysis

Based upon applying mathematical graph theory to programs and program design language specifications (PDLs) to determine a unit's complexity. This analysis can be used to measure and control complexity when maintainability is a desired attribute. It can also be used to estimate the test effort required and identify paths that must be tested.

Compliance Checkers

A program that parses source code looking for violations of company standards. Statements that contain violations are flagged. Company standards are rules that can be added, changed, and deleted as needed.

Comprehensive Listening

Designed to get a complete message with minimal distortion. This type of listening requires a lot of feedback and summarization to fully understand what the speaker is communicating.

Compromise

An intermediate approach - Partial satisfaction is sought for both parties through a "middle ground" position that reflects mutual sacrifice. Compromise evokes thoughts of giving up something, therefore earning the name "lose-lose".

Condition Coverage

A white-box testing technique that measures the number of, or percentage of, decision outcomes covered by the test cases designed. 100% condition coverage would indicate that every possible outcome of each decision had been executed at least once during testing.

Condition Testing

A structural test technique where each clause in every condition is forced to take on each of its possible values in combination with those of other clauses.

Configuration Management

Software Configuration Management (CM) is a process for tracking and controlling changes in the software. The ability to maintain control over changes made to all project artifacts is critical to the success of a project. The more complex an application is, the more important it is to control change to both the application and its supporting artifacts.

Configuration Management Tools

Tools that are used to keep track of changes made to systems and all related artifacts. These are also known as version control tools.

Configuration Testing

Testing of an application on all supported hardware and software platforms. This may include various combinations of hardware types, configuration settings, and software versions.

Consistency

The property of logical coherence among constituent parts. Consistency can also be expressed as adherence to a given set of rules.

Consistent Condition Set

A set of Boolean conditions such that complete test sets for the conditions uncover the same errors.

Constraints

A limitation or restriction. Constraints are those items that will likely force a dose of "reality" on a test project. The obvious constraints are test staff size, test schedule, and budget.

Constructive Criticism

A process of offering valid and well-reasoned opinions about the work of others, usually involving both positive and negative comments, in a friendly manner rather than an oppositional one.

Control

Control is anything that tends to cause the reduction of risk. Control can accomplish this by reducing harmful effects or by reducing the frequency of occurrence.

Control Charts

A statistical technique to assess, monitor and maintain the stability of a process. The objective is to monitor a continuous repeatable process and the process variation from specifications. The intent of a control chart is to monitor the variation of a statistically stable process where activities are repetitive.

Control Flow Analysis

Based upon graphical representation of the program process. In control flow analysis, the program graph has nodes, which represent a statement or segment possibly ending in an unresolved branch. The graph illustrates the flow of the program control from one segment to another as illustrated through branches. The objective of control flow analysis is to determine potential problems in logic branches that might result in a loop condition or improper processing.

Conversion Testing

Validates the effectiveness of data conversion processes, including field-to-field mapping, and data translation.

Corrective Controls

Corrective controls assist individuals in the investigation and correction of causes of risk exposures that have been detected.

Correctness

The extent to which software is free from design and coding defects (i.e., fault-free). It is also the extent to which software meets its specified requirements and user objectives.

Cost of Quality (COQ)

Money spent beyond expected production costs (labor, materials, equipment) to ensure that the product the customer receives is a quality (defect free) product. The Cost of Quality includes prevention, appraisal, and failure costs.

COTS

Commercial Off the Shelf (COTS) software products that are ready-made and available for sale in the marketplace.

Coverage

A measure used to describe the degree to which the application under test (AUT) is tested by a particular test suite.

Coverage-Based Analysis

A metric used to show the logic covered during a test session, providing insight to the extent of testing. The simplest metric for coverage would be the number of computer statements executed during the test compared to the total number of statements in the program. To completely test the program structure, the test data chosen should cause the execution of all paths. Since this is not generally possible outside of unit test, general metrics have been developed which give a measure of the quality of test data based on the proximity to this ideal coverage. The metrics should take into consideration the existence of infeasible paths, which are those paths in the program that have been designed so that no data will cause the execution of those paths.

Critical Listening

The listener is performing an analysis of what the speaker said. This is most important when it is felt that the speaker is not in complete control of the situation, or does not know the complete facts of a situation.

Critical Success Factors

Critical Success Factors (CSFs) are those criteria or factors that must be present in a software application for it to be successful.

Customer

The individual or organization, internal or external to the producing organization, that receives the product.

Customer's/User's of Software View of Quality

Fit for use.

Cyclomatic Complexity

The number of decision statements, plus one.
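
A simplified sketch of the count (decision statements plus one) for Python source; it counts only if/while/for nodes, whereas real tools also count constructs such as boolean operators and exception handlers:

```python
import ast

# Decision statements counted in this simplified sketch: if, while, for.
DECISION_NODES = (ast.If, ast.While, ast.For)

def cyclomatic_complexity(source: str) -> int:
    tree = ast.parse(source)
    decisions = sum(isinstance(node, DECISION_NODES)
                    for node in ast.walk(tree))
    return decisions + 1

code = """
def grade(score):
    if score >= 90:
        return "A"
    if score >= 80:
        return "B"
    return "C"
"""
print(cyclomatic_complexity(code))  # 2 decision statements + 1 = 3
```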

Damaging Event

Damaging Event is the materialization of a risk to an organization's assets.

Data Dictionary

Provides the capability to create test data to test validation for the defined data elements. The test data generated is based upon the attributes defined for each data element. The test data will check both the normal variables for each data element as well as abnormal or error conditions for each data element.

Data Flow Analysis

In data flow analysis, we are interested in tracing the behavior of program variables as they are initialized and modified while the program executes.

DD (Decision-to-Decision) Path

A path of logical code sequence that begins at a decision statement or an entry and ends at a decision statement or an exit.

Debugging

The process of analyzing and correcting syntactic, logic, or other errors identified during testing.

Decision Analysis

This technique is used to structure decisions and to represent real-world problems by models that can be analyzed to gain insight and understanding. The elements of a decision model are the decisions, uncertain events, and values of outcomes.

Decision Coverage

A white-box testing technique that measures the number of, or percentage of, decision directions executed by the test cases designed. 100% decision coverage would indicate that all decision directions had been executed at least once during testing. Alternatively, each logical path through the program can be tested. Often, paths through the program are grouped into a finite set of classes, and one path from each class is tested.

Decision Table

A tool for documenting the unique combinations of conditions and associated results in order to derive unique test cases for validation testing.
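
A minimal sketch of a decision table with hypothetical discount rules, where each unique combination of conditions becomes one test case:

```python
# A decision table as rows of (condition values, expected result).
# Hypothetical rules: conditions are "is_member" and "order_over_100";
# the associated action is the discount rate to apply.
decision_table = [
    # (is_member, order_over_100) -> discount
    ((True,  True),  0.15),
    ((True,  False), 0.10),
    ((False, True),  0.05),
    ((False, False), 0.00),
]

# Each unique combination of conditions yields one validation test case.
for (is_member, over_100), expected_discount in decision_table:
    print(f"member={is_member}, over_100={over_100} "
          f"=> expect discount {expected_discount:.0%}")
```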

Decision Trees

This provides a graphical representation of the elements of a decision model.

Defect

From the producer's viewpoint - A product requirement that has not been met, or a product attribute/function possessed by a product that is not in the statement of requirements.

From the customer's viewpoint - Anything that causes customer dissatisfaction, whether in the statement of requirements or not.

A defect is an undesirable state. There are two types of defects: process and product.

Defect Management

A process to identify and record defect information, whose primary goal is to prevent future defects.

Defect Tracking Tools

Tools for documenting defects as they are found during testing and for tracking their status through to resolution.

Deliverables

Any product or service produced by a process. Deliverables can be interim or external. Interim deliverables are produced within the process but never passed on to another process. External deliverables may be used by one or more processes. Deliverables serve as both inputs to and outputs from a process.

Design Level

The design decomposition of the software item (e.g., system, subsystem, program, or module).

Desk Checking

The most traditional means for analyzing a system or a program. Desk checking is conducted by the developer of a system or program. The process involves reviewing the complete product to ensure that it is structurally sound and that the standards and requirements have been met. This tool can also be used on artifacts created during analysis and design.

Detective Controls

Detective controls alert individuals involved in a process so that they are aware of a problem.

Discriminative Listening

Directed at selected specific pieces of information and not the entire communication.

Do

Create the conditions and perform the necessary teaching and training to ensure everyone understands the objectives and the plan (Plan-Do-Check-Act).

The procedures to be executed in a process (Process Engineering).

Driver

Code that sets up an environment and calls a module for test. A driver causes the component under test to exercise the interfaces. As you move up the hierarchy, drivers are replaced with the actual components.

Dynamic Analysis

Analysis performed by executing the program code. Dynamic analysis executes or simulates a development phase product, and it detects errors by analyzing the response of a product to sets of input data.

Dynamic Assertion

A dynamic analysis technique that inserts into the program code assertions about the relationship between program variables. The truth of the assertions is determined as the program executes.
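
A minimal sketch, using a hypothetical funds transfer, of an assertion about the relationship between program variables whose truth is checked as the program executes:

```python
def transfer(balance_a, balance_b, amount):
    total_before = balance_a + balance_b
    balance_a -= amount
    balance_b += amount
    # Dynamic assertion: the relationship between the program variables
    # (total funds are conserved) is checked at run time.
    assert balance_a + balance_b == total_before, "funds not conserved"
    return balance_a, balance_b

print(transfer(100, 50, 30))  # (70, 80); the assertion held at run time
```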

Ease of Use and Simplicity

These are functions of how easy it is to capture and use the measurement data.

Effectiveness

Effectiveness means that the testers completed their assigned responsibilities.

Efficiency

Efficiency is the amount of resources and time required to complete test responsibilities.

Empowerment

Giving people the knowledge, skills, and authority to act within their area of expertise to do the work and improve the process.

Entrance Criteria

Required conditions and standards for work product quality that must be present or met for entry into the next stage of the software development process.

Environmental Controls

Environmental controls are the means by which management manages the organization.

Equivalence Partitioning

The input domain of a system is partitioned into classes of representative values so that the number of test cases can be limited to one-per-class, which represents the minimum number of test cases that must be executed.
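
A minimal sketch with a hypothetical age field: three equivalence classes, one representative test value per class:

```python
# Hypothetical input domain: an age field valid from 18 through 65.
# The domain partitions into three equivalence classes; one
# representative value per class is the minimum set of test cases.
partitions = {
    "below valid range (invalid)": 10,   # any value < 18
    "within valid range (valid)":  40,   # any value in 18..65
    "above valid range (invalid)": 70,   # any value > 65
}

def is_valid_age(age: int) -> bool:
    return 18 <= age <= 65

for class_name, representative in partitions.items():
    print(f"{class_name}: age={representative} "
          f"-> valid={is_valid_age(representative)}")
```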

Error or Defect

A discrepancy between a computed, observed, or measured value or condition and the true, specified, or theoretically correct value or condition.

Error Guessing

Test data selection technique for picking values that seem likely to cause defects. This technique is based upon the theory that test cases and test data can be developed based on the intuition and experience of the tester.

Exhaustive Testing

Executing the program through all possible combinations of values and program variables.

Exit Criteria

Standards for work product quality which block the promotion of incomplete or defective work products to subsequent stages of the software development process.

Exploratory Testing

Coined by Dr. Cem Kaner in 1983, "a style of software testing that emphasizes the personal freedom and responsibility of the individual tester to continually optimize the quality of his/her work by treating test-related learning, test design, test execution, and test result interpretation as mutually supportive activities that run in parallel throughout the project."

Failure Costs

All costs associated with defective products that have been delivered to the user and/or moved into production. Failure costs can be classified as either "internal" or "external" failure costs.

File Comparison

Useful in identifying regression errors. A snapshot of the correct expected results must be saved so it can be used for later comparison.

Fitness for Use

Meets the needs of the customer/user.

Flowchart

Pictorial representations of data flow and computer logic. It is frequently easier to understand and assess the structure and logic of an application system by developing a flowchart than by attempting to understand narrative descriptions or verbal explanations. The flowcharts for systems are normally developed manually, while flowcharts of programs can be produced automatically.

Force Field Analysis

A group technique used to identify both driving and restraining forces that influence a current situation.

Formal Analysis

Technique that uses rigorous mathematical techniques to analyze the algorithms of a solution for numerical properties, efficiency, and correctness.

FPA

Function Point Analysis is a sizing method in which the program's functionality is measured by the number of ways it must interact with the users.

Functional System Testing

Ensures that the system requirements and specifications are achieved. The process involves creating test conditions for use in evaluating the correctness of the application.

Functional Testing

Application of test data derived from the specified functional requirements without regard to the final program structure.

Gap Analysis

Determines the difference between two variables. A gap analysis may show the difference between perceptions of importance and performance of risk management practices. The gap analysis may show discrepancies between what is and what needs to be done. Gap analysis shows how large the gap is and how far the leap is to cross it. It identifies the resources available to deal with the gap.

Happy Path

Generally used within the discussion of Use Cases, the happy path follows a single flow uninterrupted by errors or exceptions from beginning to end.

Heuristics

Experience-based techniques for problem solving, learning, and discovery.

Histogram

A graphical description of individually measured values in a data set that is organized according to the frequency or relative frequency of occurrence. A histogram illustrates the shape of the distribution of individual values in a data set along with information regarding the average and variation. Pareto charts are a special use of a histogram.

Incremental Model

Subdivides the requirements specifications into small buildable projects (or modules). Within each of those smaller requirements subsets, a development life cycle exists which includes the phases described in the Waterfall approach.

Incremental Testing

Disciplined method of testing the interfaces between unit-tested programs as well as between system components. It involves adding unit-tested programs to a given module or component one by one, and testing each resultant combination.

Infeasible Path

A sequence of program statements that can never be executed.

Influence Diagrams

Provides a graphical representation of the elements of a decision model.

Inherent Risk

The risk to an organization in the absence of any actions management might take to alter either the risk's likelihood or impact.

The risk if no control were taken (the gross risk).

Inputs

Materials, services, or information needed from suppliers to make a process work, or build a product.

Inspection

A formal assessment of a work product conducted by one or more qualified independent reviewers to detect defects, violations of development standards, and other problems. Inspections involve authors only when specific questions concerning deliverables exist. An inspection identifies defects, but does not attempt to correct them. Authors take corrective actions and arrange follow-up reviews as needed.

Instrumentation

The insertion of additional code into a program to collect information about program behavior during program execution.
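
A hand-instrumented sketch: the counter updates are the additional code inserted to collect behavioral information during execution (function and branch names are hypothetical):

```python
from collections import Counter

# The counter updates below are the "additional code" inserted into the
# program to collect information about its behavior at run time.
execution_counts = Counter()

def classify(n):
    execution_counts["classify:entry"] += 1
    if n < 0:
        execution_counts["classify:negative-branch"] += 1
        return "negative"
    execution_counts["classify:non-negative-branch"] += 1
    return "non-negative"

for value in [-2, 5, 7]:
    classify(value)

print(execution_counts)
# Counter({'classify:entry': 3, 'classify:non-negative-branch': 2,
#          'classify:negative-branch': 1})
```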

Integration Testing

This test begins after two or more programs or application components have been successfully unit tested. It is conducted by the development team to validate the technical quality or design of the application. It is the first level of testing which formally integrates a set of programs that communicate among themselves via messages or files (a client and its server(s), a string of batch programs, or a set of online modules within a dialogue or conversation).

Invalid Input

Test data that lies outside the domain of the function the program represents.

ISO 29119

Set of standards for software testing that can be used within any software development life cycle or organization.

Iterative Model

The project is divided into small parts allowing the development team to demonstrate results earlier on in the process and obtain valuable feedback from system users.

Judgment

A decision made by individuals based on three criteria: fact, standards, and experience.

Keyword-Driven Testing

Also known as table-driven testing or action word based testing. A testing methodology whereby tests are driven wholly by data. Keyword-driven testing uses a table format, usually a spreadsheet, to define keywords or action words for each function that will be executed.
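
A minimal sketch of a keyword table driving hypothetical action functions; the keywords and functions are illustrative, not any particular tool's vocabulary:

```python
# Hypothetical keyword table: each row is (keyword, arguments), as it
# might appear in a spreadsheet. Keywords map to functions that drive
# the application under test.
def open_page(url):          print(f"opening {url}")
def enter_text(field, text): print(f"typing '{text}' into {field}")
def click(button):           print(f"clicking {button}")

KEYWORDS = {"open": open_page, "type": enter_text, "click": click}

test_table = [
    ("open",  ["https://example.com/login"]),
    ("type",  ["username", "qa_user"]),
    ("type",  ["password", "secret"]),
    ("click", ["login-button"]),
]

# The test is driven wholly by the data in the table.
for keyword, args in test_table:
    KEYWORDS[keyword](*args)
```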

Leadership

The ability to lead, including inspiring others in a shared vision of what can be, taking risks, serving as a role model, reinforcing and rewarding the accomplishments of others, and helping others to act.

Life Cycle Testing

The process of verifying the consistency, completeness, and correctness of software at each stage of the development life cycle.

Management

A team or individuals who manage(s) resources at any level of the organization.

Mapping

Provides a picture of the use of instructions during the execution of a program. Specifically, it provides a frequency listing of source code statements showing both the number of times an instruction was executed and which instructions were not executed. Mapping can be used to optimize source code by identifying the frequently used instructions. It can also be used to identify unused code, which may indicate code that has not been tested, code that is infrequently used, or code that is non-entrant.

Mean

A value derived by adding several quantities and dividing the sum by the number of these quantities.

Measures

A unit to determine the dimensions, quantity, or capacity (e.g., lines of code are a measure of software size).

Mentoring

Helping or supporting an individual in a non-supervisory capacity. Mentors can be peers, subordinates, or superiors. What is important is that the mentor does not have a managerial relationship to the mentored individual when performing the task of mentoring.

Metric

A software metric is a mathematical number that shows a relationship between two measures.

Metric-Based Test Data Generation

The process of generating test sets for structural testing based on use of complexity or coverage metrics.

Mission

A customer-oriented statement of purpose for a unit or a team.

Model Animation

Verifies that early models can handle the various types of events found in production data. This is verified by "running" actual production transactions through the models as if they were operational systems.

Model Balancing

Relies on the complementary relationships between the various models used in structured analysis (event, process, data) to ensure that modeling rules/standards have been followed; this ensures that these complementary views are consistent and complete.

Model-Based Testing

Test cases are based on a simple model of the application. Generally, models are used to represent the desired behavior of the application being tested. The behavioral model of the application is derived from the application requirements and specification.

Moderator

Manages the inspection process, is accountable for the effectiveness of the inspection, and must be impartial.

Modified Condition Decision Coverage

A compromise which requires fewer test cases than Branch Condition Combination Coverage.

Motivation

Getting individuals to do work tasks they do not want to do or to perform those work tasks in a more efficient or effective manner.

Mutation Analysis

A method to determine test set thoroughness by measuring the extent to which a test set can discriminate the program from slight variants (i.e., mutants) of it.
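
A minimal sketch: a deliberately weak test set is run against two mutants of an addition function to see which mutants it can kill:

```python
# Run the test set against slight variants (mutants) of the function
# under test and see which mutants the test set can discriminate.
def add(a, b):
    return a + b

mutants = {
    "plus->minus": lambda a, b: a - b,
    "plus->times": lambda a, b: a * b,
}

# A weak test set: (2, 2) cannot tell addition from multiplication.
test_set = [((2, 2), 4)]
assert all(add(*args) == expected for args, expected in test_set)

for name, mutant in mutants.items():
    killed = any(mutant(*args) != expected for args, expected in test_set)
    print(f"mutant {name}: {'killed' if killed else 'survived'}")
# plus->minus is killed, plus->times survives -- a signal that the test
# set needs more discriminating data, such as ((2, 3), 5).
```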

Network Analyzers

A tool used to assist in detecting and diagnosing network problems.

Non-functional Testing

Validates that the system quality attributes and characteristics have been considered during the development process. Non-functional testing is the testing of a software application for its non-functional requirements.

Objective Measures

A measure that can be obtained by counting.

Open Source

Pertaining to or denoting software whose source code is available free of charge to the public to use, copy, modify, sublicense, or distribute.

Optimum Point of Testing

The point where the value received from testing no longer exceeds the cost of testing.

Oracle

A (typically automated) mechanism or principle by which a problem in the software can be recognized. For example, automated test oracles have value in load testing software (by signing on to an application with hundreds or thousands of instances simultaneously), or in checking for intermittent errors in software.

Outputs

Products, services, or information supplied to meet customer needs.

Pair-Wise

Pair-wise testing (also known as all-pairs testing) is a combinatorial method used to generate the least number of test cases necessary to test each pair of input parameters to a system.
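
A small illustration with three hypothetical two-valued parameters: exhaustive testing needs 8 cases, but 4 hand-picked cases suffice to cover every pair:

```python
from itertools import combinations, product

# Hypothetical parameters: browser, OS, and locale.
params = {
    "browser": ["chrome", "firefox"],
    "os":      ["windows", "linux"],
    "locale":  ["en", "de"],
}

# Exhaustive testing needs 2*2*2 = 8 cases; this suite of 4 cases
# still covers every pair of parameter values.
suite = [
    ("chrome",  "windows", "en"),
    ("chrome",  "linux",   "de"),
    ("firefox", "windows", "de"),
    ("firefox", "linux",   "en"),
]

names = list(params)
required = {
    ((n1, v1), (n2, v2))
    for n1, n2 in combinations(names, 2)
    for v1, v2 in product(params[n1], params[n2])
}
covered = {
    ((n1, case[names.index(n1)]), (n2, case[names.index(n2)]))
    for case in suite
    for n1, n2 in combinations(names, 2)
}
print("all pairs covered:", required <= covered)  # True
```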

Parametric Modeling

A mathematical model based on known parameters to predict cost/schedule of a test project. The parameters in the model can vary based on the type of project.

Pareto Analysis

The Pareto Principle states that only a "vital few" factors are responsible for producing most of the problems. This principle can be applied to risk analysis to the extent that a great majority of problems (80%) are produced by a few causes (20%). If we correct these few key causes, we have a greater probability of success.

Pareto Charts

A special type of bar chart to view the causes of a problem in order of severity: largest to smallest based on the 80/20 premise.

Pass/Fail Criteria

Decision rules used to determine whether a software item or feature passes or fails a test.

Passive Risk

A risk from inaction. For example, the choice not to update an existing product to compete with others in the marketplace.

Path Expressions

A sequence of edges from the program graph that represents a path through a program.

Path Testing

A test method satisfying the coverage criteria that each logical path through the program be tested. Often, paths through the program are grouped into a finite set of classes and one path from each class is tested.

Performance Test

Validates that both the online response time and batch run times meet the defined performance requirements.

Performance/Timing Analyzer

A tool to measure system performance.

Phase (or Stage) Containment

A method of control put in place within each stage of the development process to promote error identification and resolution so that defects are not propagated downstream to subsequent stages of the development process.

Plan

Define your objectives and determine the conditions and methods required to achieve your objective. Clearly describe the goals and policies needed to achieve the objective at this stage (Plan-Do-Check-Act).

Plan-Do-Check-Act Model

One of the best known process improvement models for continuous process improvement.

Planning Poker

In Agile Development, Planning Poker is a consensus-based technique designed to remove the cognitive bias of anchoring.

Policy

Managerial desires and intents concerning either process (intended objectives) or products (desired attributes).

Population Analysis

Analyzes production data to identify, independently of the specifications, the types and frequency of data that the system will have to process/produce. This verifies that the specifications can handle the types and frequency of actual data, and can be used to create validation tests.

Post Conditions

A list of conditions, if any, which will be true after the Use Case finishes successfully.

Pre-Conditions

A list of conditions, if any, which must be met before the Use Case can be properly executed.

Prevention Costs

Resources required to prevent defects and to do the job right the first time. These normally require up-front expenditures for benefits that will be derived later. This category includes money spent on establishing methods and procedures, training workers, acquiring tools, and planning for quality. Prevention resources are spent before the product is actually built.

Preventative Controls

Preventative controls will stop incorrect processing from occurring.

Problem-Solving

Cooperative mode - Attempts to satisfy the interests of both parties. In terms of process, this is generally accomplished through identification of "interests" and freeing the process from initial positions. Once interests are identified, the process moves into a phase of generating creative alternatives designed to satisfy identified interests (criteria).

Procedure

Describes how work must be done and how methods, tools, techniques, and people are applied to perform a process. There are Do procedures and Check procedures. Procedures indicate the "best way" to meet standards.

Process Improvement

To change a process to make the process produce a given product faster, more economically, or of higher quality.

Process Risk

Process risk is the risk associated with activities such as planning, resourcing, tracking, quality assurance, and configuration management.

Producer/Author

Gathers and distributes materials, provides product overview, is available for clarification, should contribute as an inspector, and must not be defensive.

Producer's View of Quality

Meeting requirements.

Product

The output of a process: the work product. There are three useful classes of products: Manufactured Products (standard and custom), Administrative/Information Products (invoices, letters, etc.), and Service Products (physical, intellectual, physiological, and psychological). A statement of requirements defines products; one or more people working in a process produce them.

Production Costs

The cost of producing a product. Production costs, as currently reported, consist of (at least) two parts: actual production or right-the-first-time (RFT) costs plus the Cost of Quality (COQ). RFT costs include labor, materials, and equipment needed to provide the product correctly the first time.

Productivity

The ratio of the output of a process to the input, usually measured in the same units. It is frequently useful to compare the value added to a product by a process, to the value of the input resources required (using fair market values for both input and output).

Process

The process or set of processes used by an organization or project to plan, manage, execute, monitor, control, and improve its software related activities. A set of activities and tasks. A statement of purpose and an essential set of practices (activities) that address that purpose.

Proof of Correctness

The use of mathematical logic techniques to show that a relationship between program variables assumed true at program entry implies that another relationship between program variables holds at program exit.

Quality

A product is a quality product if it is defect free. To the producer, a product is a quality product if it meets or conforms to the statement of requirements that defines the product. This statement is usually shortened to: quality means meets requirements. From a customer's perspective, quality means "fit for use".

Quality Assurance (QA)

The set of support activities (including facilitation, training, measurement, and analysis) needed to provide adequate confidence that processes are established and continuously improved to produce products that meet specifications and are fit for use.

Quality Control (QC)

The process by which product quality is compared with applicable standards, and the action taken when nonconformance is detected. Its focus is defect detection and removal. This is a line function; that is, the performance of these tasks is the responsibility of the people working within the process.

Quality Improvement

To change a production process so that the rate at which defective products (defects) are produced is reduced.

RAD Model

A variant of prototyping and another form of iterative development. The RAD model is designed to build and deliver application prototypes to the client while in the iterative process.

Reader (Inspections)

Must understand the material, paraphrases the material during the inspection, and sets the inspection pace.

Recorder (Inspections)

Must understand error classification, is not the meeting stenographer (captures enough detail for the project team to go forward to resolve errors), classifies errors as detected, and reviews the error list at the end of the meeting.

Recovery Test

Evaluates the contingency features built into the application for handling interruptions and for returning to specific points in the application processing cycle, including checkpoints, backups, restores, and restarts. This test also assures that disaster recovery is possible.

Regression Analysis

A means of showing the relationship between two variables. Regression analysis will provide two pieces of information. The first is a graphic showing the relationship between two variables. Second, it will show the correlation, or how closely related the two variables are.

Regression Testing

Testing of a previously verified program or application following program modification for extension or correction to ensure no new defects have been introduced.

Reliability

Consistency of measurement. Two different individuals take the same measurement and get the same result. This measure is reliable.

Requirement

An attribute to be possessed by the product or a function to be performed by the product, the performance standard for the attribute or function, and/or the measuring process to be used in verifying that the standard has been met.

Requirements-Based Testing

RBT focuses on the quality of the Requirements Specification and requires testing throughout the development life cycle. Specifically, RBT performs static tests with the purpose of verifying that the requirements meet acceptable standards: complete, correct, precise, unambiguous, clear, consistent, relevant, testable, and traceable.

Residual Risk

The risk that remains after management responds to the identified risks.

Reuse Model

Systems should be built using existing components, as opposed to custom-building new components. The Reuse Model is clearly suited to OO computing environments, which have become one of the premiere technologies in today's system development industry.

Risk

Measured by performing risk analysis.

Risk Acceptance

The amount of risk exposure that is acceptable to the project and the company and can either be active or passive.

Risk Analysis

An analysis of an organization's information resources, its existing controls, and its organizational and computer system or application system vulnerabilities. It combines the loss potential for each resource or combination of resources with an estimated rate of occurrence to establish a potential level of damage in dollars or other assets.

Risk Appetite

The amount of loss management is willing to accept for a given risk.

Risk Assessment

An examination of a project to identify areas of potential risk. The assessment can be broken down into analysis, identification, and prioritization.

Risk Avoidance

A strategy for risk resolution to eliminate the risk altogether. Avoidance is a strategy to use when a lose-lose situation is likely.

Risk Event

A future occurrence that may affect the project for better or worse. The positive aspect is that these events will help you identify opportunities for improvement while the negative aspect will be the realization of threats and losses.

Risk Exposure

The measure calculated as the probability (likelihood) of the event times the loss that could occur.
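
A small illustration with hypothetical probability and loss figures:

```python
# Risk exposure = probability of the event * loss if it occurs.
# The figures below are hypothetical.
risks = [
    {"name": "data conversion fails", "probability": 0.10, "loss": 200_000},
    {"name": "vendor API changes",    "probability": 0.40, "loss":  30_000},
]

for risk in risks:
    exposure = risk["probability"] * risk["loss"]
    print(f"{risk['name']}: exposure = ${exposure:,.0f}")
# data conversion fails: exposure = $20,000
# vendor API changes: exposure = $12,000
```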

Risk Identification

A method used to find risks before they become problems. The risk identification process transforms issues and concerns about a project into tangible risks, which can be described and measured.

Risk Leverage

A measure of the relative cost-benefit of performing various candidate risk resolution activities.

Risk Management

The process required to identify, quantify, respond to, and control project, process, and product risk.

Risk Mitigation

The action taken to reduce threats and/or vulnerabilities.

Risk Protection

A strategy to employ redundancy to mitigate (reduce the probability and/or consequence of) a risk.

Risk Reduction

A strategy to decrease risk through mitigation, prevention, or anticipation. Decreasing either the probability of the risk occurrence or the consequence when the risk is realized reduces risk. Reduction is a strategy to use when risk leverage exists.

Risk Reserves

A strategy to use contingency funds and build in schedule slack when uncertainty exists in cost or time.

Risk Transfer

A strategy to shift the risk to another person, group, or organization and is used when another group has control.

Risk-Based Testing

Prioritizes the features and functions to be tested based on the likelihood of failure and the impact of a failure should it occur.

Run Chart

A graph of data (observation) in chronological order displaying shifts or trends in the central tendency (average). The data represents measures, counts or percentages of outputs from a process (products or services).

Sad Path

A path through the application which does not arrive at the desired result.

Scatter Plot Diagram

A graph designed to show whether there is a relationship between two changing variables.

Scenario Testing

Testing based on a real-world scenario of how the system is supposed to act.

Scope of Testing

The extensiveness of the test process. A narrow scope may be limited to determining whether or not the software specifications were correctly implemented. The scope broadens as more responsibilities are assigned to the software testers.

Selective Regression Testing

The process of testing only those sections of a program where the tester's analysis indicates programming changes have taken place, along with the related components.

Self-Validating Code

Code that makes an explicit attempt to determine its own correctness and to proceed accordingly.

SLOC

Source Lines of Code

Smoothing

An unassertive approach - Both parties neglect the concerns involved by sidestepping the issue, postponing the conflict, or choosing not to deal with it.

Soft Skills

The personal attributes which enable an individual to interact effectively and harmoniously with other people.

Software Feature

A distinguishing characteristic of a software item (e.g., performance, portability, or functionality).

Software Item

Source code, object code, job control code, control data, or a collection of these.

Software Quality Criteria

An attribute of a quality factor that is related to software development.

Software Quality Factors

Attributes of the software that, if they are wanted and not present, pose a risk to the success of the software and thus constitute a business risk.

Software Quality Gaps

The first gap is the producer gap. It is the gap between what was specified to be delivered, meaning the documented requirements and internal IT standards, and what was actually delivered. The second gap is between what the producer actually delivered and what the customer expected.

Special Causes of Variation

Variation not typically present in the process. They occur because of special or unique circumstances.

Special Test Data

Test data based on input values that are likely to require special handling by the program.

Spiral Model

A model designed to include the best features of the Waterfall and Prototyping approaches, introducing a new component: risk assessment.

Standardize

Procedures that are implemented to ensure that the output of a process is maintained at a desired level.

Standardizer

A person who must know IT standards & procedures, ensures standards are met and procedures are followed, meets with project leader/manager, and ensures entrance criteria are met (product is ready for review).

Standards

The measure used to evaluate products and identify nonconformance. The basis upon which adherence to policies is measured.

Statement of Requirements

The exhaustive list of requirements that define a product. Note that the statement of requirements should document requirements proposed and rejected (including the reason for the rejection) during the requirement determination process.

Statement Testing

A test method that executes each statement in a program at least once during program testing.

Static Analysis

Analysis of a program that is performed without executing the program. It may be applied to the requirements, design, or code.

Statistical Process Control

The use of statistical techniques and tools to measure an ongoing process for change or stability.

Story Points

Measurement of a feature's size relative to other features. Story points are an analogous method in that the objective is to compare the sizes of features to other stories and reference stories.

Stress Testing

This test subjects a system, or components of a system, to varying environmental conditions that defy normal expectations. For example, high transaction volume, large database size or restart/recovery circumstances. The intention of stress testing is to identify constraints and to ensure that there are no performance problems.

Structural Analysis

A technique used by developers to define unit test cases. Structural analysis usually involves path and condition coverage.

Structural System Testing

Verifies that the developed system and programs work. The objective is to ensure that the product designed is structurally sound and will function correctly.

Structural Testing

A testing method in which the test data is derived solely from the program structure.

Stub

Special code segments that when invoked by a code segment under testing, simulate the behavior of designed and specified modules not yet constructed.

Subjective Measures

A person's perception of a product or activity.

Supplier

An individual or organization that supplies inputs needed to generate a product, service, or information to a customer.

System Boundary Diagram

Depicts the interfaces between the software under test and the individuals, systems, and other interfaces. These interfaces or external agents are referred to as "actors". The purpose of the system boundary diagram is to establish the scope of the system and to identify actors (i.e., the interfaces) that need to be developed (Use Cases).

System Test

The entire system is tested to verify that all functional, information, structural and quality requirements have been met. A predetermined combination of tests is designed that, when executed successfully, satisfy management that the system meets specifications. System testing verifies the functional quality of the system in addition to all external interfaces, manual procedures, restart and recovery, and human-computer interfaces. It also verifies that interfaces between the application and the open environment work correctly, that JCL functions correctly, and that the application functions appropriately with the Database Management System, Operations environment, and any communications systems.

Test

A set of one or more test cases (and procedures).

Test Case Generator

A software tool that creates test cases from requirements specifications. Cases generated this way ensure that 100% of the functionality specified is tested.

Test Case Specification

An individual test condition, executed as part of a larger test that contributes to the test's objectives. Test cases document the input, expected results, and execution conditions of a given test item. Test cases are broken down into one or more detailed test scripts and test data conditions for execution.

Test Cycle

Test cases are grouped into manageable (and schedulable) units called test cycles. Grouping is according to the relation of objectives to one another, timing requirements, and on the best way to expedite defect detection during the testing event. Often test cycles are linked with execution of a batch process.

Test Data

Data sets required to test most applications: one set of test data to confirm the expected results (data along the happy path), a second set to verify the software behaves correctly for invalid input data (alternate paths or sad path), and finally data intended to force incorrect processing (e.g., crash the application).

Test Data Management

A defined strategy for the development, use, maintenance, and ultimately destruction of test data.

Test Data Set

Set of input elements used in the testing process.

Test Design Specification

A document that specifies the details of the test approach for a software feature or a combination of features and identifies the associated tests.

Test Driver

A program that directs the execution of another program against a collection of test data sets. Usually, the test driver also records and organizes the output generated as the tests are run.

Test Environment

A collection of hardware and software components configured in such a way as to closely mirror the production environment. The Test Environment must replicate or simulate the actual production environment as closely as possible.

Test Harness

A collection of test drivers and stubs.

Test Incident Report

A document describing any event during the testing process that requires investigation.

Test Items

A software item that is an object of testing.

Test Item Transmittal Report

A document that identifies test items and includes status and location information.

Test Labs

Another manifestation of the test environment, more typically viewed as a brick-and-mortar environment (a designated, separate, physical location).

Test Log

A chronological record of relevant details about the execution of tests.

Test Plan

A document describing the intended scope, approach, resources, and schedule of testing activities. It identifies test items, the features to be tested, the testing tasks, the personnel performing each task, and any risks requiring contingency planning.

Test Point Analysis (TPA)

Calculates test effort based on size (derived from FPA), strategy (as defined by system components and quality characteristics to be tested and the coverage of testing), and productivity (the amount of time needed to perform a given volume of testing work).
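
An illustrative calculation only; the factor values below are hypothetical and not taken from the published TPA tables:

```python
# Illustrative TPA-style arithmetic: size (from FPA) is adjusted by a
# strategy factor, then converted to hours via a productivity factor.
size_function_points = 300            # hypothetical FPA size
strategy_factor = 1.2                 # hypothetical breadth of components
                                      # and quality characteristics tested
productivity_hours_per_point = 1.5    # hypothetical hours per test point

test_points = size_function_points * strategy_factor
effort_hours = test_points * productivity_hours_per_point
print(f"{test_points:.0f} test points -> {effort_hours:.0f} hours of test effort")
```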

Test Procedure Specification

A document specifying a sequence of actions for the execution of a test.

Test Scripts

A specific order of actions that should be performed during a test session. The script also contains expected results. Test scripts may be manually prepared using paper forms, or may be automated using capture/playback tools or other kinds of automated scripting tools.

Test Stubs

Simulates a called routine so that the calling routine's functions can be tested. A test harness (or driver) simulates a calling component or external environment, providing input to the called routine, initiating the routine, and evaluating or displaying output returned.
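
A minimal sketch of a stub standing in for an unbuilt tax module, with a driver exercising the routine under test; all names are hypothetical:

```python
# The stub stands in for a tax service that is not yet built.
def tax_service_stub(amount):
    """Simulates the designed-but-unbuilt tax module."""
    return round(amount * 0.10, 2)  # fixed 10% rate for the test

def calculate_total(amount, tax_fn):
    """The called routine under test."""
    return amount + tax_fn(amount)

def driver():
    """Test driver: supplies input, initiates the routine, checks output."""
    result = calculate_total(50.00, tax_service_stub)
    assert result == 55.00, f"unexpected total: {result}"
    print("calculate_total passed with the stubbed tax service")

driver()
```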

Test Suite Manager

A tool that allows testers to organize test scripts by function or other grouping.

Test Summary Report

A document that describes testing activities and results and evaluates the corresponding test items.

Testing

The process of operating a system or component under specified conditions, observing or recording the results, and making an evaluation of some aspect of the system or component.

The process of analyzing a software item to detect the difference between existing and required conditions, and to evaluate the features of software items.

Testing Process Assessment

Thoughtful analysis of the testing process results, and then taking corrective action on the identified weaknesses.

Testing Schools of Thought

A school of thought is simply defined as "a belief shared by a group". For example, the Agile School.

Therapeutic Listening

The listener is sympathetic to the speaker's point of view. During this type of listening, the listener will show a lot of empathy for the speaker's situation.

Thread Testing

Often used during early integration testing. Demonstrates key functional capabilities by testing a string of units that accomplish a specific function in the application.

Threat

Something capable of exploiting a vulnerability in the security of a computer system or application. Threats include both hazards (any source of potential damage or harm) and events that can trigger vulnerabilities.

Threshold Values

Define the inception of risk occurrence. Predefined thresholds act as a warning level to indicate the need to execute the risk action plan.

Timeliness

Whether the data was reported in sufficient time to impact the decisions needed to manage effectively.

TMMi

A process improvement model for software testing. The Test Maturity Model integration is a detailed model for test process improvement and is positioned as being complementary to the CMMI.

Tools

Any resources that are not consumed in converting the input into the deliverable.

Top-Down

Begin testing from the top of the module hierarchy and work down to the bottom using interim stubs to simulate lower interfacing modules or programs. Modules are added in descending hierarchical order.

Top-Down Estimation

Generate an overall estimate based on initial knowledge. It is used at the initial stages of the project and is based on similar projects. Past data plays an important role in this form of estimation.

Tracing

A process that follows the flow of computer logic at execution time. Tracing demonstrates the sequence of instructions or a path followed in accomplishing a given task. The two main types of trace are tracing instructions in computer programs as they are executed, or tracing the path through a database to locate predetermined pieces of information.

Triangulation

Story Triangulation is a form of estimation by analogy. After the first few estimates have been made, they are verified by relating them to each other (Agile methods).

Triggers

A device used to activate, deactivate, or suspend a risk action plan. Triggers can be set by the project tracking system.

Use Case Points

An estimation technique derived from the Use Cases method. Use Case Points are similar to Function Points and are used to estimate the size of a project.

Unit Test

Testing individual programs, modules, or components to demonstrate that the work package executes per specification, and validate the design and technical quality of the application. The focus is on ensuring that the detailed logic within the component is accurate and reliable according to pre-determined specifications. Testing stubs or drivers may be used to simulate behavior of interfacing modules.

Usability Test

Review of the application user interface and other human factors of the application with the people who will be using the application. This ensures that the design enables the business functions to be executed as easily and intuitively as possible, and that the user interface adheres to documented UI standards. It should be conducted early in the design phase of development. Ideally an application prototype is used, but paper copies can also serve.

Use Case

A technique for capturing the functional requirements of systems through the interaction between an Actor and the System.

User

The customer that actually uses the product received.

User Acceptance Testing

Conducted to ensure that the system meets the needs of the organization and the end user/customer. Validates that the system will work as intended by the user in the real world, and is based on real-world business scenarios, not system requirements. Essentially, this test validates that the right system was built, regardless of what the system requirements indicate.

User Story

A short description of something that a customer will do when they use an application. The User Story is focused on the value or result a customer would receive from doing whatever the application does.

Validation

Physically ensures that the system operates according to the desired specifications by executing the system functions through a series of tests that can be observed and evaluated.

Validity

Indicates the degree to which a measure actually measures what it was intended to measure.

Values (Sociology)

The ideals, customs, institutions, etc., of a society toward which the people have an affective regard. These values may be positive, such as cleanliness, freedom, or education, or negative, such as cruelty, crime, or blasphemy. Any object or quality desired as a means or as an end in itself.

Verification

The process of determining whether the products of a given phase of the software development cycle fulfill the requirements established during the previous phase. The act of reviewing, inspecting, testing, checking, auditing, or otherwise establishing and documenting whether items, processes, services, or documents conform to specified requirements.

Virtualization

Running multiple operating systems on a single machine.

Vision

A statement that describes the desired future state of a unit.

V-Model

An extension of the Waterfall Model. The purpose of the "V" shape is to demonstrate the relationships between each phase of specification development and its associated dynamic testing phase.

Vulnerability

A design, implementation, or operations flaw that may be exploited by a threat. The flaw causes the computer system or application to operate in a fashion different from its published specifications and results in destruction or misuse of equipment or data.

Walkthrough

An informal review (static) testing process in which the author "walks through" the deliverable with the review team looking for defects.

Waterfall

A development model in which progress is seen as flowing steadily downwards through the phases of conception, initiation, requirements, design, construction, dynamic testing, production/implementation, and maintenance.

WBS

A Work Breakdown Structure (WBS) groups project components into deliverable and accountable pieces.

White-Box Testing

A testing technique that assumes that the path of the logic in a program unit or component is known. Usually consists of testing paths, branch by branch, to produce predictable results. This technique is usually used during tests executed by the development team, such as Unit or Component testing.

Wideband Delphi

A method for the controlled exchange of information within a group. It provides a formal, structured procedure for the exchange of opinion, which means that it can be used for estimating.

Withdrawal

Conflict is resolved when one party attempts to satisfy the concerns of others by neglecting its own interests or goals. This is a lose-win approach.

Workbench

The objective of a workbench is to produce the defined output products (deliverables) in a defect-free manner. The procedures and standards established for each workbench are designed to assist in this objective.