
30 Cards in this Set

  • Front
  • Back

driver

A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
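
A minimal sketch of a driver, assuming a hypothetical component `apply_discount`: the driver stands in for the production code that would normally control and call the component.

```python
# Hypothetical component under test: a simple pricing function.
def apply_discount(price, percent):
    return round(price * (1 - percent / 100), 2)

# The driver replaces the real caller: it controls invocation of the
# component, feeds it inputs, and checks results, standing in for the
# production code that would normally call apply_discount.
def driver():
    cases = [((100.0, 10), 90.0), ((50.0, 0), 50.0), ((80.0, 25), 60.0)]
    for args, expected in cases:
        actual = apply_discount(*args)
        assert actual == expected, f"{args}: got {actual}, expected {expected}"
    return "all driver checks passed"
```

Note the driver owns the control flow; the component under test is purely passive, which is what distinguishes a driver from a stub.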

dynamic analysis tool

A tool that provides run-time information on the state of the software code. These tools are most commonly used to identify unassigned pointers, check pointer arithmetic and to monitor the allocation, use and de-allocation of memory and to flag memory leaks.

dynamic testing

Testing that involves the execution of the software of a component or system.

effectiveness

The capability of producing an intended result.

efficiency

(1) The capability of the software product to provide appropriate performance, relative to the amount of resources used, under stated conditions.


(2) The capability of a process to produce the intended outcome, relative to the amount of resources used.

entry criteria

The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g., test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.

equivalence class

A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.

equivalence partitioning

A black-box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.
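A sketch of both ideas together, using a hypothetical age-classification spec: each input range the specification treats the same is one equivalence partition, and one representative value per partition covers it.

```python
# Hypothetical spec: ages 0-17 are "minor", 18-64 "adult", 65 and up
# "senior"; negative ages are invalid. Each range is one equivalence
# partition, since the specified behavior is the same throughout it.
def classify_age(age):
    if age < 0:
        raise ValueError("age must be non-negative")
    if age <= 17:
        return "minor"
    if age <= 64:
        return "adult"
    return "senior"

# Equivalence partitioning: one representative value per partition,
# so every partition is covered at least once.
representatives = {"minor": 10, "adult": 40, "senior": 70}
for expected, age in representatives.items():
    assert classify_age(age) == expected
```

Any other value from the same partition (e.g., 5 instead of 10) is assumed to behave identically, which is exactly the assumption the equivalence-class definition states.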

error

A human action that produces an incorrect result.

error guessing

A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.

executable statement

A statement which, when compiled, is translated into object code, and which will be executed procedurally when the program is running and may perform an action on data.

exercised

A program element is said to be ___ by a test case when the input value causes the execution of that element, such as a statement, decision, or other structural element.

exhaustive testing

A test approach in which the test suite comprises all combinations of input values and preconditions.
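Exhaustive testing is usually infeasible, but it can be illustrated with a hypothetical component whose input domain is tiny, such as a three-input boolean majority vote:

```python
from itertools import product

# Hypothetical component: majority vote over three booleans.
def majority(a, b, c):
    return (a + b + c) >= 2

# Exhaustive testing is feasible here only because the domain is tiny:
# 2**3 = 8 input combinations in total.
all_inputs = list(product([False, True], repeat=3))
assert len(all_inputs) == 8
for a, b, c in all_inputs:
    expected = [a, b, c].count(True) >= 2
    assert majority(a, b, c) == expected
```

With even modest real-world domains (say, two 32-bit integers) the combination count explodes, which is why techniques such as equivalence partitioning exist.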

exit criteria

The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.

expected result

The behavior predicted by the specification, or another source, of the component or system under specified conditions.

experience-based test design technique

Procedure to derive and/or select test cases based on the tester's experience, knowledge and intuition.

experience-based testing

Testing based on the tester's experience, knowledge and intuition.

exploratory testing

An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.

factory acceptance testing

Acceptance testing conducted at the site at which the product is developed and performed by employees of the supplier organization, to determine whether or not a component or system satisfies the requirements, normally including hardware as well as software.

test fail

A test is deemed to fail if its actual result does not match its expected result.

failure

Deviation of the component or system from its expected delivery, service or result.

failure rate

The ratio of the number of failures of a given category to a given unit of measure, e.g., failures per unit of time, failures per number of transactions, failures per number of computer runs.
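The ratio can be computed directly; the numbers below are purely illustrative.

```python
import math

# Illustrative figures: 6 failures observed over 20,000 transactions.
failures = 6
transactions = 20_000

# Failure rate per transaction (failures / unit of measure).
rate_per_transaction = failures / transactions

# Commonly rescaled, e.g., failures per 1,000 transactions (here, 0.3).
rate_per_thousand = rate_per_transaction * 1_000
assert math.isclose(rate_per_thousand, 0.3)
```

The same ratio works for any unit of measure named in the definition: per unit of time, per transaction, or per computer run.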

attack

Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur.

feature

An attribute of a component or system specified or implied by requirements documentation (for example reliability, usability or design constraints).

formal review

A review characterized by documented procedures and requirements, e.g., inspection.

functional requirement

A requirement that specifies a function that a component or system must perform.

functional testing

Testing based on an analysis of the specification of the functionality of a component or system.

functionality

The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.

high-level test case / abstract test case

A test case without concrete (implementation-level) values for input data and expected results. Logical operators are used; instances of the actual values are not yet defined and/or available.

impact analysis

The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.