67 Cards in this Set
- Front
- Back
Test Control |
Corrective procedures to get a test project back on track when we deviate from the test plan |
|
Test Analysis |
Identify what to test and which test conditions are needed. Each condition should be verified using one or more test cases |
|
Test Design |
Determine how we will test what we decided during test analysis. Transforming test objectives and conditions from the plan into tangible test cases. |
|
Test case |
A set of preconditions, inputs, and execution steps with an expected result, used to verify specific requirements. |
|
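For example, a minimal sketch in Python's unittest (the Account class is hypothetical, invented here to illustrate precondition, input/execution, and expected result):

    import unittest

    class Account:
        """Hypothetical component under test."""
        def __init__(self, balance=0):
            self.balance = balance

        def withdraw(self, amount):
            if amount > self.balance:
                raise ValueError("insufficient funds")
            self.balance -= amount
            return self.balance

    class WithdrawTestCase(unittest.TestCase):
        def test_withdraw_reduces_balance(self):
            account = Account(balance=100)  # precondition: balance is 100
            result = account.withdraw(40)   # input and execution
            self.assertEqual(result, 60)    # expected result

    if __name__ == "__main__":
        unittest.main()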
Test implementation |
Carrying out activities to prepare for execution such as developing procedures, test data, and test environments. |
|
Test execution |
Run the tests against the test object (also referred to as the system under test) |
|
Checking Results |
Part of execution: observing the actual results of the test case, its consequences and outcomes. |
|
Evaluating Exit Criteria |
A set of conditions that would allow some part of a process to complete. Usually defined during test planning. Help us report our results. |
|
Test Results Reporting |
Reporting our progress. Includes details related to the process, the status of the tests, and the quality of the system under test. |
|
Test Closure |
Collecting test process data related to the various completed test activities in order to consolidate our experience, reusable testware, important facts, and relevant metrics. |
|
Test Objective |
Reason or purpose for designing and executing a test |
|
Confirmation Testing (re test) |
Testing that runs test cases that failed the last time they were run. |
|
Debugging |
Testing != Debugging. Debugging fixes the defects found in testing. |
|
Test Strategy |
A high-level description of the test levels to be performed and the testing within those levels. |
|
Fundamental Test Process (5 parts) |
1. Planning and Control 2. Analysis and Design 3. Implementation and Execution 4. Evaluating Exit Criteria and Reporting 5. Test Closure Activities |
|
Test Approach |
The implementation of the test strategy for a specific project. E.g. choosing the way you will approach the test by following the test process. |
|
Test Plan |
A document that describes the scope, approach, resources and schedule of intended test activities. "A road map" |
|
Test Monitoring |
Test management task that continuously checks the status of a test project. |
|
Test Condition |
Item or event of a component or system that can be verified by one or more test cases. Ex. Test Condition = Shadow Bolt should hit the boss normally |
|
Test Basis |
All documents from which the requirements can be inferred. The documentation on which the test cases are based. Includes documentation, interface requirements, risk analysis reports |
|
Test Data |
Data that exists before the test is executed and that affects or is affected by the component or system under test. |
|
Test Procedure Specification |
Document specifying a sequence of actions for the execution of a test. |
|
Test Suite |
Set of several test cases for a component or system, where the postcondition of one test is often used as the precondition for the next one. |
|
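A minimal sketch of such chaining in Python (the Account class is hypothetical):

    class Account:
        """Hypothetical class for illustration."""
        def __init__(self):
            self.balance = 0

        def deposit(self, amount):
            self.balance += amount

        def withdraw(self, amount):
            self.balance -= amount

    account = Account()
    account.deposit(100)           # test 1: deposit
    assert account.balance == 100  # postcondition of test 1 ...
    account.withdraw(40)           # ... is the precondition of test 2
    assert account.balance == 60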
Testware |
Artifacts produced during the test process required to carry out the test. |
|
Regression Test |
Testing of (usually) already tested components or systems after modification, to find defects that were not previously present. |
|
Exit Criteria |
The set of generic and specific conditions for permitting a process to be officially completed. |
|
Test Log |
Chronological record of relevant details about the execution of tests |
|
Test Summary report |
Document summarizing testing activities and results. |
|
Dynamic vs Static Testing |
Dynamic: actually executing the software and checking its responses, etc. Static: examining the code and documentation without executing the software. |
|
Error guessing |
Designing tests based on previous experience with similar situations and expectations |
|
Independence |
The degree of separation (low to high, low being less effective) from the authors of the work under test, which drives the efficacy of and willingness to spot flaws |
|
Test Policy |
A high-level document describing the principles, approach, and major objectives of the organization regarding testing. |
|
Verification Testing |
Confirmation by examination and through provision of objective evidence that specified requirements have been fulfilled. |
|
Validation Testing |
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled. |
|
Waterfall Model |
NEVER USE. Testing happens only at the end, which can lead to huge problems and is very inefficient. |
|
V-Model |
Framework to describe the software development life cycle activities from requirements specification to maintenance. Illustrates how testing can be integrated into each phase of development. |
|
Test Levels (V-Model) |
A group of test activities that are organized and managed together. It is linked to the responsibilities in a project. 1. Component Testing 2. Integration Testing 3. System Testing 4. Acceptance Testing |
|
V-Model: Component Testing |
Searches for defects in, and verifies the functioning of, software components that are separately testable. |
|
V-Model: Integration Testing |
Tests interfaces between components and interactions with different parts of a system, like the OS or file system |
|
V-Model: System Testing |
Concerned with the behavior of the whole system/product. Verifies against specific requirements. |
|
V-Model: Acceptance Testing |
Validation testing with respect to user needs, requirements, and business processes, conducted to determine whether or not to accept the system. Typically the final test level. |
|
Incremental development models |
A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. Each sub-project follows a mini V-model. Examples: Prototyping, RAD, RUP, and Agile. |
|
Iterative development models |
Development life cycle where a project is broken into a large number of iterations. An iteration is a complete development loop resulting in an executable release: a subset of the final product that grows with each iteration. |
|
Rapid Application Development |
A model where components/functions are developed in parallel as if they were mini-projects, time-boxed, delivered, and then assembled into a working prototype. Pros: rapid change and development are possible |
|
Agile |
A group of software development methodologies based on iterative and incremental development. |
|
Component Testing: Stub |
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component and is called from the software component under test. |
|
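A minimal Python sketch, assuming a hypothetical payment gateway as the called component:

    class PaymentGatewayStub:
        """Skeletal stand-in for the real, called payment component."""
        def charge(self, amount):
            return "approved"  # canned answer; no real network call

    def checkout(cart_total, gateway):
        # Component under test: it calls (depends on) the gateway.
        return gateway.charge(cart_total) == "approved"

    assert checkout(25, PaymentGatewayStub()) is True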
Component Testing: Driver (test driver) |
A software component or test tool that replaces a component, taking care of the control and/or the calling of a component or system. It calls the software component under test. |
|
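A minimal Python sketch (apply_discount and driver are hypothetical names):

    def apply_discount(price, percent):
        """Hypothetical component under test; its real caller does not exist yet."""
        return round(price * (1 - percent / 100), 2)

    def driver():
        # The driver takes over the calling and control of the component.
        for price, percent, expected in [(100, 10, 90.0), (50, 0, 50.0)]:
            assert apply_discount(price, percent) == expected
        print("all driver checks passed")

    if __name__ == "__main__":
        driver()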
System Testing: Functional Requirement |
Req that specifies a function that a component or system must perform. |
|
System Testing: Non-Functional Requirement |
Req that does not relate to functionality, but to attributes such as reliability, efficiency etc. |
|
System Testing: Test Environment (Test Bed) |
Required. An environment containing hardware, instrumentation, simulators, software tools, and other support elements needed to conduct a test. |
|
Acceptance Testing: Alpha Testing |
Simulated or actual operational testing by potential users or an independent test team, but outside the development organization. A form of internal acceptance testing; the first stage. |
|
Acceptance Testing: Beta Testing (Field Test) |
Operational testing by potential and/or existing users at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies user needs. |
|
Test Type |
Group of test activities aimed at testing a component or system, focused on a specific test objective. A means of clearly defining the objective of a test level. |
|
Functional Testing |
Considers the specified behavior and is based on the analysis of the functionality of a component. |
|
Black Box Testing |
Testing, both functional and non-functional, without reference to the internal structure of the component. |
|
Structural Testing |
"White Box" testing: wanting to see what happens inside the component or system. |
|
Defects findable by static testing: |
Deviation from standards, missing requirements, design defects, bad code, etc. Static testing finds defects, not failures; dynamic testing does that |
|
Entry criteria |
Set of generic and specific conditions for permitting a process to go forward with a task/test phase. |
|
Traceability |
The ability to identify related items in documentation and software, such as requirements with associated tests. |
|
Horizontal Traceability |
The tracing of requirements for a given test level through its test documentation, specifications, etc. |
|
Vertical Traceability |
Tracing of requirements through several layers of development. |
|
Test Script |
Common term for a test procedure specification, especially an automated one |
|
Experience based test design technique |
Procedure to derive and/or select test cases based on the testers' experience. |
|
Equivalence Partitioning |
Black Box testing where you partition the set of test conditions into groups or sets that can be considered the same, then test one value from each. Example: for if (num < 13 || num > 30), the partitions are num < 13, 13-30, and num > 30 |
|
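A minimal Python sketch of the card's example condition (is_out_of_range is a hypothetical wrapper around it):

    def is_out_of_range(num):
        return num < 13 or num > 30

    # One representative value per partition is enough under this technique.
    assert is_out_of_range(5) is True    # partition: num < 13
    assert is_out_of_range(20) is False  # partition: 13 <= num <= 30
    assert is_out_of_range(40) is True   # partition: num > 30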
Boundary Value Analysis |
Black Box testing based on the boundaries between partitions: always use the boundary values plus one below and one above. Ex. valid input 1-99: invalid 0, valid 1 and 99, invalid 100 |
|
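A minimal Python sketch of the 1-99 example (is_valid is a hypothetical checker):

    def is_valid(num):
        return 1 <= num <= 99

    assert is_valid(0) is False    # just below the lower boundary
    assert is_valid(1) is True     # lower boundary value
    assert is_valid(99) is True    # upper boundary value
    assert is_valid(100) is False  # just above the upper boundary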
Decision Table Testing |
Black Box testing where test cases are designed to execute the combinations of inputs shown in a decision table. |
|
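A minimal Python sketch (the membership and shipping rules are hypothetical, chosen only to give the table some content):

    def quote(is_member, total):
        """Hypothetical rules: members get 10% off; orders over 100 ship free."""
        discount = 0.10 if is_member else 0.0
        free_shipping = total > 100
        return discount, free_shipping

    decision_table = [
        # (is_member, total, expected_discount, expected_free_shipping)
        (True,  150, 0.10, True),
        (True,   50, 0.10, False),
        (False, 150, 0.00, True),
        (False,  50, 0.00, False),
    ]
    for is_member, total, exp_discount, exp_shipping in decision_table:
        assert quote(is_member, total) == (exp_discount, exp_shipping)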
State Transition Testing |
Black Box testing where test cases are designed to execute valid and invalid state transitions. |
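A minimal Python sketch using a hypothetical turnstile state machine:

    TRANSITIONS = {
        ("locked", "coin"): "unlocked",
        ("unlocked", "push"): "locked",
    }

    def next_state(state, event):
        key = (state, event)
        if key not in TRANSITIONS:
            raise ValueError(f"invalid transition: {key}")
        return TRANSITIONS[key]

    assert next_state("locked", "coin") == "unlocked"  # valid transition
    assert next_state("unlocked", "push") == "locked"  # valid transition
    try:
        next_state("locked", "push")                   # invalid transition
        raise AssertionError("expected ValueError")
    except ValueError:
        pass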