159 Cards in this Set
Formal testing with respect to user needs, requirements, and business processes conducted to determine whether or not a system satisfies the __________ criteria and to enable the user, customers or other authorized entity to determine whether or not to ______ the system. |
Acceptance testing
|
|
Simulated or actual operational testing by potential users/customers or an independent test team at the developer's site, but outside the development organization. _____ _______ is often employed for off-the-shelf software as a form of internal acceptance testing.
|
Alpha testing
|
|
Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes.
|
Beta testing
|
|
A flaw in a component or system that can cause the component or system to fail to perform its required function.
|
Bug - Defect - Fault
|
|
The testing of individual software c__________.
|
Component testing
|
|
The degree, expressed as a percentage, to which a specified ________ item has been exercised by a test suite.
|
Coverage - Test Coverage
|
|
The process of finding, analyzing and removing the causes of failures in software.
|
Debugging
|
|
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
|
Driver
|
|
A human action that produces an incorrect result.
|
Error - Mistake
|
|
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
|
Error Guessing
|
|
A test approach in which the test suite comprises all combinations of input values and preconditions.
|
Exhaustive Testing
|
|
The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed.
|
Exit Criteria
|
|
Deviation of the component or system from its expected delivery, service or result.
|
Failure
|
|
A ___________ that specifies a function that a component or system must perform.
|
Functional requirement
|
|
Any event occurring that requires investigation.
|
Incident
|
|
A development life cycle where a project is broken into a series of __________.
|
Incremental development model
|
|
Separation of responsibilities, which encourages the accomplishment of objective testing.
|
Independence
|
|
The process of combining components or systems into larger assemblies.
|
Integration
|
|
Testing performed to expose defects in the interfaces and in the interactions between components or systems.
|
Integration testing
|
|
A ___________ that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.
|
Non-functional requirement
|
|
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
|
Off-the-shelf software
|
|
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
|
Quality
|
|
Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed.
|
Regression Testing
|
|
A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
|
Requirement
|
|
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
|
Retesting - Confirmation Testing
|
|
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements.
|
Review
|
|
A factor that could result in future negative consequences; usually expressed as impact and likelihood.
|
Risk
|
|
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
|
Robustness
|
|
Testing to determine the ro________ of the software product.
|
Robustness testing
|
|
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
|
Stub
|
|
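The stub and driver cards above can be tied together with a minimal sketch (all names here are hypothetical): the stub stands in for a component the test object calls, while the driver controls and calls the test object itself.

```python
# Hypothetical example: testing a billing component in isolation.
# The tax service it normally calls is replaced by a stub; the
# driver function below controls and calls the component under test.

def tax_service_stub(amount):
    """Stub: replaces the called tax service with a canned answer."""
    return 0.10  # always report a 10% tax rate

def compute_invoice_total(amount, tax_service):
    """Component under test: depends on a tax service."""
    return amount + amount * tax_service(amount)

def driver():
    """Driver: takes care of the control and calling of the component."""
    return compute_invoice_total(100.0, tax_service_stub)

print(driver())  # 110.0
```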
The process of testing an integrated ______ to verify that it meets specified requirements.
|
System testing
|
|
The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.
|
Test Approach
|
|
All documents from which the requirements of a component or system can be inferred.
|
Test Basis
|
|
A set of input values, execution preconditions, expected results and execution postconditions developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
|
Test Case
|
|
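The parts of the test case definition above can be sketched as a data structure (the banking scenario and all names are hypothetical): inputs, execution preconditions, expected results and execution postconditions, developed for one objective.

```python
# Sketch of a test case matching the definition above; the withdrawal
# scenario is invented purely for illustration.
test_case = {
    "objective": "verify a withdrawal reduces the balance",
    "preconditions": {"balance": 100},
    "inputs": {"withdraw": 40},
    "expected_result": {"balance": 60},
    "postconditions": {"account_state": "active"},
}

def run(tc):
    """Execute the test case against a trivial stand-in implementation."""
    balance = tc["preconditions"]["balance"] - tc["inputs"]["withdraw"]
    return {"balance": balance}

assert run(test_case) == test_case["expected_result"]
print("test case passed")
```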
An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
|
Test Condition
|
|
A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.
|
Test Control
|
|
Information that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.
|
Test Data
|
|
A document specifying the test conditions (coverage items) for a test item, the detailed test approach and the associated high-level test cases.
|
Test Design Specification
|
|
A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
|
Test driven development
|
|
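The test-first cycle described above can be sketched as follows (the `slugify` function is hypothetical): the test is written before the code it exercises, and the implementation is then written to make it pass.

```python
# Test-first sketch: this test is written before the implementation
# exists; 'slugify' is a hypothetical function under development.

def test_slugify():
    assert slugify("Hello World") == "hello-world"

# Minimal implementation, written afterwards to make the test pass.
def slugify(text):
    return text.strip().lower().replace(" ", "-")

test_slugify()  # passes only once the implementation is in place
print("test passed")
```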
An ___________ containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test.
|
Test environment
|
|
The process of running a test on the component or system under test, producing actual results.
|
Test Execution
|
|
A group of test activities that are organized and managed together.
|
Test level
|
|
A chronological record of relevant details about the execution of tests.
|
Test Log
|
|
A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned.
|
Test Monitoring
|
|
A reason or purpose for designing and executing a test.
|
Test Objective
|
|
A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used.
|
Test Plan
|
|
A high-level document describing the principles, approach and major objectives of the organization regarding testing.
|
Test Policy
|
|
A document specifying a sequence of actions for the execution of a test.
|
Test Procedure Specification
|
|
A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects).
|
Test Strategy
|
|
A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
|
Test Suite
|
|
A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.
|
Test Summary Report
|
|
The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
|
Testing
|
|
Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
|
Testware
|
|
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
|
Validation
|
|
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled.
|
Verification
|
|
A framework to describe the software development life cycle activities from requirements specification to maintenance.
|
V-model
|
|
A group of test activities aimed at testing a component or system focused on a specific test objective, i.e. functional test, usability test, regression test etc.
|
Test type
|
|
Testing based on an analysis of the specification of the functionality of a component or system.
|
Functional testing
|
|
Testing, either functional or non-functional, without reference to the internal structure of the component or system.
|
Black-box testing
|
|
Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
|
Black-box test design technique
|
|
The capability of the software product to provide functions which meet stated and implied needs when the software is used under specified conditions.
|
Functionality
|
|
The process of testing to determine the f____________ of a software product.
|
Functionality testing
|
|
The capability of the software product to interact with one or more specified components or systems.
|
Interoperability
|
|
The process of testing to determine the i_______________ of a software product.
|
Interoperability testing
|
|
Attributes of software products that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data.
|
Security
|
|
Testing to determine the s_______ of the software product.
|
Security testing
|
|
A type of performance testing conducted to evaluate the behavior of a component or system with increasing ____, e.g. numbers of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
|
Load testing
|
|
The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.
|
Performance
|
|
The process of testing to determine the pe_________ of a software product. See also efficiency testing.
|
Performance testing
|
|
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
|
Stress testing
|
|
The process of testing to determine the re_________ of a software product.
|
Reliability testing
|
|
The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
|
Usability
|
|
Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.
|
Usability testing
|
|
The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions.
|
Efficiency
|
|
The process of testing to determine the e_________ of a software product.
|
Efficiency testing
|
|
The ease with which the software product can be transferred from one hardware or software environment to another.
|
Portability
|
|
The process of testing to determine the po_________ of a software product.
|
Portability testing
|
|
Testing based on an analysis of the internal structure of the component or system.
|
White-box testing (structural testing)
|
|
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed.
|
Code coverage
|
|
Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
|
White-box test design technique
|
|
Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.
|
Maintenance
|
|
Testing the changes to an operational system or the impact of a changed environment to an operational system.
|
Maintenance testing
|
|
The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
|
Maintainability
|
|
The process of testing to determine the m______________ of a software product.
|
Maintainability testing
|
|
The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.
|
Impact analysis
|
|
The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
|
Testing
|
|
Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews.
|
Static testing
|
|
Testing that involves the execution of the software of a component or system.
|
Dynamic Testing
|
|
A review not based on a documented procedure.
|
Informal review
|
|
A review characterized by documented procedures and requirements, e.g. inspection.
|
Formal review
|
|
The leader and main person responsible for an inspection or other review process.
|
Moderator
|
|
The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase.
|
Entry criteria
|
|
The person that identifies and describes anomalies in the product or project. Can be chosen to represent different viewpoints and roles.
|
Reviewer (inspector)
|
|
The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The _____ has to ensure that the logging form is readable and understandable.
|
Scribe
|
|
A ______ of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.
|
Peer review
|
|
A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content.
|
Walkthrough
|
|
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.
|
Technical review
|
|
A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher level documentation. The most formal review technique and therefore always based on a documented procedure.
|
Inspection
|
|
Analysis of software artifacts, e.g. requirements or code, carried out without execution of these software artifacts.
|
Static analysis
|
|
A software tool that translates programs expressed in a high order language into their machine language equivalents.
|
Compiler
|
|
The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.
|
Complexity
|
|
The number of independent paths through a program. Defined as: L – N + 2P, where:
- L = the number of edges/links in a graph
- N = the number of nodes in a graph
- P = the number of disconnected parts of the graph (e.g. a called graph and a subroutine)
|
Cyclomatic complexity
|
|
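The formula above can be checked on a small worked example: the control flow graph of a single if/else has 4 nodes (condition, then-branch, else-branch, exit), 4 edges and 1 connected part, giving 4 – 4 + 2·1 = 2 independent paths.

```python
# Worked example of cyclomatic complexity = L - N + 2P.

def cyclomatic_complexity(edges, nodes, parts):
    return edges - nodes + 2 * parts

# Control flow graph of "if x > 0: ... else: ...":
# condition -> then -> exit and condition -> else -> exit
# gives 4 edges, 4 nodes, 1 connected graph part.
print(cyclomatic_complexity(edges=4, nodes=4, parts=1))  # 2
```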
A sequence of events (paths) in the execution through a component or system.
|
Control flow
|
|
An abstract representation of the sequence and possible changes of the state of data objects, where the state of an object is any of: creation, usage, or destruction.
|
Data flow
|
|
A measurement scale and the method used for measurement.
|
Metric
|
|
Procedure used to derive and/or select test cases.
|
Test design technique
|
|
The ability to identify related items in documentation and software, such as requirements with associated tests.
|
Traceability
|
|
A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases.
|
Test design specification
|
|
A document specifying a set of test cases (objective, inputs, test actions, expected results, and execution preconditions) for a test item.
|
Test case specification
|
|
A document specifying a sequence of actions for the execution of a test. Also known as test script or manual test script.
|
Test procedure specification
|
|
Commonly used to refer to a test procedure specification, especially an automated one.
|
Test script
|
|
Procedure to derive and/or select test cases based on an analysis of the specification, either functional or non-functional, of a component or system without reference to its internal structure.
|
Black-box test design technique
|
|
Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system. Another name for white-box test design technique.
|
Structure-based test design technique
|
|
Procedure to derive and/or select test cases based on the tester’s experience, knowledge and intuition.
|
Experience-based test design technique
|
|
A black-box test design technique in which test cases are designed to execute representatives from equivalence partitions. In principle, test cases are designed to cover each partition at least once.
|
Equivalence partitioning
|
|
A portion of an input or output domain for which the behavior of a component or system is assumed to be the same, based on the specification.
|
Equivalence partition
|
|
An input value or output value which is on the edge of an equivalence partition or at the smallest incremental distance on either side of an edge, for example the minimum or maximum value of a range.
|
Boundary value
|
|
A black-box test design technique in which test cases are designed based on boundary values.
|
Boundary value analysis
|
|
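The two techniques above can be sketched together for a hypothetical input field that accepts ages 18 to 65 inclusive: equivalence partitioning picks one representative from each partition (below, inside, above the range), and boundary value analysis adds the edge values 17, 18, 65 and 66.

```python
# Sketch of equivalence partitioning and boundary value analysis
# for a hypothetical valid age range of 18..65 (inclusive).

def is_valid_age(age):
    return 18 <= age <= 65

# One representative per equivalence partition: below, inside, above.
partition_cases = {10: False, 40: True, 70: False}

# Boundary values: edges of the valid partition and their neighbors.
boundary_cases = {17: False, 18: True, 65: True, 66: False}

for age, expected in {**partition_cases, **boundary_cases}.items():
    assert is_valid_age(age) == expected
print("all cases pass")
```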
A table showing combinations of inputs and/or stimuli (causes) with their associated outputs and/or actions (effects), which can be used to design test cases.
|
Decision table
|
|
A black-box test design technique in which test cases are designed to execute the combinations of inputs and/or stimuli (causes) shown in a decision table.
|
Decision table testing
|
|
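A small sketch of the two cards above (the discount rule is invented for illustration): the dictionary encodes the decision table's causes and effects, and decision table testing derives one test case per rule.

```python
# Hypothetical decision table: causes (member?, order >= 100?)
# mapped to the effect (discount percentage).
decision_table = {
    (True,  True):  20,
    (True,  False): 10,
    (False, True):  5,
    (False, False): 0,
}

def discount(is_member, large_order):
    return decision_table[(is_member, large_order)]

# Decision table testing: one test case per rule (column).
for (member, large), expected in decision_table.items():
    assert discount(member, large) == expected
print("all 4 rules covered")
```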
A black-box test design technique in which test cases are designed to execute valid and invalid state transitions. See also N-switch testing.
|
State transition testing
|
|
A diagram that depicts the states that a component or system can assume, and shows the events or circumstances that cause and/or result from a change from one state to another.
|
State diagram
|
|
A grid showing the resulting transitions for each state combined with each possible event, showing both valid and invalid transitions.
|
State table
|
|
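The three cards above can be tied together with a sketch (the door states and events are hypothetical): the dictionary plays the role of a state table listing valid transitions, and state transition testing exercises both a valid and an invalid transition.

```python
# Hypothetical state table for a simple door; any (state, event)
# pair not listed is an invalid transition.
state_table = {
    ("closed", "open_cmd"):   "open",
    ("open",   "close_cmd"):  "closed",
    ("closed", "lock_cmd"):   "locked",
    ("locked", "unlock_cmd"): "closed",
}

def transition(state, event):
    if (state, event) not in state_table:
        raise ValueError("invalid transition")
    return state_table[(state, event)]

# Valid transition test case:
assert transition("closed", "open_cmd") == "open"

# Invalid transition test case (a locked door cannot open directly):
try:
    transition("locked", "open_cmd")
except ValueError:
    print("invalid transition rejected")
```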
A black box test design technique in which test cases are designed to execute user scenarios.
|
Use case testing
|
|
Procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
|
Structure-based test design techniques
|
|
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
|
Coverage (test coverage)
|
|
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed.
|
Code coverage
|
|
The percentage of executable statements that have been exercised by a test suite.
|
Statement coverage
|
|
The percentage of decision outcomes that have been exercised by a test suite.
|
Decision coverage (100% decision coverage implies both 100% branch coverage and 100% statement coverage)
|
|
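The difference between the two coverage measures above can be sketched with a hypothetical function containing one if without an else: a single test executes every statement, yet covers only one of the two decision outcomes.

```python
# Sketch: statement coverage vs. decision coverage on a one-branch
# function (names and numbers hypothetical).

def add_bonus(score, is_vip):
    if is_vip:
        score += 10   # the only conditional statement
    return score

# One test with is_vip=True executes every statement
# (100% statement coverage) but only the True outcome of the
# decision (50% decision coverage).
assert add_bonus(50, True) == 60

# Adding the False outcome brings decision coverage to 100%.
assert add_bonus(50, False) == 50
print("both decision outcomes exercised")
```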
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them.
|
Error guessing
|
|
An informal test design technique where the tester actively controls the design of the tests as those tests are performed and uses information gained while testing to design new and better tests.
|
Exploratory testing
|
|
Directed and focused attempt to evaluate the quality, especially reliability, of a test object by attempting to force specific failures to occur.
|
Attack (fault attack)
|
|
A skilled professional who is involved in the testing of a component or system.
|
Tester
|
|
The person responsible for project management of testing activities and resources, and evaluation of a test object. The individual who directs, controls, administers, plans and regulates the evaluation of a test object.
|
Test Manager (test leader)
|
|
A document describing the scope, approach, resources and schedule of intended test activities. It identifies amongst others test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used, and the rationale for their choice, and any risks requiring contingency planning.
|
Test plan
|
|
A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project.
|
Test level
|
|
The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase.
|
Entry criteria
|
|
The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed.
|
Exit criteria
|
|
A high-level description of the test levels to be performed and the testing within those levels for an organization or programme (one or more projects).
|
Test strategy
|
|
The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project’s goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.
|
Test approach
|
|
A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actuals to that which was planned.
|
Test monitoring
|
|
The planning, estimating, monitoring and control of test activities.
|
Test management
|
|
The ratio of the number of failures of a given category to a given unit of measure, e.g. failures per unit of time, failures per number of transactions, failures per number of computer runs.
|
Failure rate
|
|
The number of defects identified in a component or system divided by the size of the component or system (expressed in standard measurement terms, e.g. lines-of-code, number of classes or function points).
|
Defect density
|
|
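The defect density card above is a simple ratio, and a worked calculation makes it concrete (the numbers are hypothetical); here size is expressed per thousand lines of code (KLOC).

```python
# Worked example of defect density, expressed per 1000 lines of
# code (KLOC); all figures are hypothetical.

def defect_density(defects, lines_of_code):
    return defects / (lines_of_code / 1000)  # defects per KLOC

# 12 defects found in an 8000-line component:
print(defect_density(defects=12, lines_of_code=8000))  # 1.5 defects/KLOC
```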
A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.
|
Test summary report (test report)
|
|
A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.
|
Test control
|
|
A discipline applying technical and administrative direction and surveillance to: identify and document the functional and physical characteristics of a configuration item, control changes to those characteristics, record and report change processing and implementation status, and verify compliance with specified requirements.
|
Configuration management
|
|
An element of configuration management, consisting of the evaluation, co-ordination, approval or disapproval, and implementation of changes to configuration items after formal establishment of their configuration identification.
|
Configuration control (version control)
|
|
A factor that could result in future negative consequences; usually expressed as impact and likelihood.
|
Risk
|
|
A risk directly related to the test object.
|
Product risk
|
|
An approach to testing to reduce the level of product risks and inform stakeholders on their status, starting in the initial stages of a project. It involves the identification of product risks and their use in guiding the test process.
|
Risk-based testing
|
|
A risk related to management and control of the (test) project, e.g. lack of staffing, strict deadlines, changing requirements, etc.
|
Project risk
|
|
A source of a defect such that if it is removed, the occurrence of the defect type is decreased or removed.
|
Root cause
|
|
Recording the details of any incident that occurred, e.g. during testing.
|
Incident logging
|
|
A document reporting on any event that occurred, e.g. during the testing, which requires investigation.
|
Incident report
|
|
A document reporting on any flaw in a component or system that can cause the component or system to fail to perform its required function.
|
Defect report (bug report)
|
|
The number of defects found by a test phase, divided by the number found by that test phase and any other means afterwards.
|
Defect detection percentage (DDP)
|
|
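The DDP definition above is also a ratio, shown here as a worked calculation with hypothetical numbers: defects found by a test phase divided by that count plus the defects found by any means afterwards.

```python
# Worked example of defect detection percentage (DDP) for a test
# phase; the figures are hypothetical.

def ddp(found_in_phase, found_afterwards):
    return 100 * found_in_phase / (found_in_phase + found_afterwards)

# System testing found 90 defects; 10 more escaped and were found later.
print(ddp(90, 10))  # 90.0 (%)
```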
The level of (business) importance assigned to an item, e.g. defect.
|
Priority
|
|
The degree of impact that a defect has on the development or operation of a component or system.
|
Severity
|