Software
Computer programs, procedures, and possibly associated documentation and data pertaining to the operation of a computer system.
Risk
A factor that could result in future negative consequences; usually expressed as impact and likelihood.
Error (mistake)
A human action that produces an incorrect result.
Defect (bug, fault)
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system.
Failure
Deviation of the component or system from its expected delivery, service or result.
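To make the error/defect/failure chain concrete, here is a minimal illustrative sketch (hypothetical Python, not from the glossary): the programmer's mistake puts a defect into the code, and the defect only shows up as a failure when the code is executed with inputs that expose it.

    # Error (mistake): the programmer meant to average two numbers
    # but typed the wrong divisor.
    def average(a, b):
        return (a + b) / 3   # Defect: should divide by 2

    # Failure: executing the defective code deviates from the expected result.
    assert average(2, 4) == 3   # raises AssertionError -> an observed failure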
Quality
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations.
Exhaustive testing (complete testing)
A test approach in which the test suite comprises all combinations of input values and preconditions.
Testing
The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
Software development
All the activities carried out during the construction of a software product (the software development life cycle), for example, the definition of requirements, the design of a solution, the building of the software products that make up the system (code, documentation, etc.), the testing of those products and the implementation of the developed and tested products.
Code
Computer instructions and data definitions expressed in a programming language or in a form output by an assembler, compiler or other translator.
Test basis
All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.
Requirement
A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
Review
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.
Test case
A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
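As an illustration only (a small pytest-style Python sketch; the Account class and its methods are invented for the example), a test case packages preconditions, input values, an expected result and postconditions for one test condition:

    # Minimal sketch: a toy Account class and one test case for the
    # test condition "a withdrawal reduces the balance".
    class Account:
        def __init__(self, balance):
            self.balance = balance
        def withdraw(self, amount):
            self.balance -= amount

    def test_withdraw_reduces_balance():
        account = Account(balance=100)   # execution precondition: an account holding 100
        account.withdraw(30)             # input value
        assert account.balance == 70     # expected result / execution postcondition

    test_withdraw_reduces_balance()      # runs silently when the expected result is met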
Test objective
A reason or purpose for designing and executing a test.
Debugging
The process of finding, analyzing and removing the causes of failures in software
Testing shows presence of defects
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness.
Exhaustive testing is impossible
Testing everything (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, we use risks and priorities to focus testing efforts.
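A rough back-of-the-envelope sketch of why exhaustive testing is infeasible: even a tiny function with a handful of independent inputs has an astronomically large input space.

    # A function taking just three 32-bit integer inputs has 2**96 possible
    # input combinations; at a billion tests per second this would take far
    # longer than the age of the universe.
    combinations = (2 ** 32) ** 3
    seconds = combinations / 1_000_000_000
    years = seconds / (60 * 60 * 24 * 365)
    print(f"{combinations:.3e} combinations ~ {years:.3e} years at 10**9 tests/s")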
Early testing
Testing activities should start as early as possible in the software or system development life cycle and should be focused on defined objectives.
Defect clustering
A small number of modules contain most of the defects discovered during prerelease testing or show the most operational failures.
Pesticide paradox
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new bugs. To overcome this 'pesticide paradox', the test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to potentially find more defects.
Testing is context dependent
Testing is done differently in different contexts. For example, safety-critical software is tested differently from an e-commerce site.
Absence-of-errors fallacy
Finding and fixing defects does not help if the system built is unusable and does not fulfill the users' needs and expectations.
Fundamental Test Process
1. planning and control;

2. analysis and design;

3. implementation and execution;

4. evaluating exit criteria and reporting;

5. test closure activities.
Test plan
A document describing the scope, approach, resources and schedule of intended test activities. It identifies, amongst others, test items, the features to be tested, the testing tasks, who will do each task, degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process.
Test policy
A high-level document describing the principles, approach and major objectives of the organization regarding testing.
Test strategy
A high-level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects).
Test approach
The implementation of the test strategy for a specific project. It typically includes the decisions made based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed.
Coverage (test coverage)
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite.
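For instance (illustrative numbers only), if the coverage items are the decision outcomes in a component and the test suite exercises six of the eight outcomes, decision coverage is 75%:

    # Coverage as a percentage of exercised coverage items.
    items_total = 8        # e.g. decision outcomes in the component
    items_exercised = 6    # outcomes reached by the test suite
    coverage = 100 * items_exercised / items_total
    print(f"decision coverage = {coverage:.0f}%")   # 75%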
Exit criteria
The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used by testing to report against and to plan when to stop testing.
Test control
A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned.
Test monitoring
A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare the actual status to that which was planned.
Test basis
All documents from which the requirements of a component or system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a frozen test basis.
Test condition
An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element.
Test case
A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement.
Test design specification
A document specifying the test conditions (coverage items) for a test item, the detailed test approach and the associated high-level test cases.
Test procedure specification (test script, manual test script)
A document specifying a sequence of actions for the execution of a test.
Test data
Data that exists (for example, in a database) before a test is executed, and that affects or is affected by the component or system under test.
Test suite
A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
Test execution
The process of running a test on the component or system under test, producing actual results.
Test log
A chronological record of relevant details about the execution of tests.
Incident
Any event occurring that requires investigation.
Re-testing (confirmation testing)
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
Regression testing
Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made.
Test summary report
A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria.
Testware
Artifacts produced during the test process required to plan, design, and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing.
Independence
Separation of responsibilities, which encourages the accomplishment of objective testing.
Recall that the success of testing is influenced by psychological factors:
- clear objectives;

- a balance of self-testing and independent testing;

- recognition of courteous communication and feedback on defects.
Describe, with examples, the way in which a defect in software can cause harm to a person, to the environment or to a company. (K2)
A defect that causes a failure in operation can harm a person (e.g. an incorrect dose calculated by medical device software), the environment (e.g. a control system defect that lets a plant release pollutants) or a company (e.g. a billing defect that causes financial loss and damage to reputation).
Distinguish between the root cause of a defect and its effects. (K2)
The root cause is the earliest action or condition that led to the defect being introduced (e.g. a developer misunderstanding a requirement); the effects are the consequences of the resulting failure when the defective code is executed (e.g. wrong output, lost revenue, user complaints).
Give reasons why testing is necessary by giving examples. (K2)
People make mistakes, so software contains defects; failures in operation can cause financial loss, wasted time, loss of reputation, injury or death (e.g. a defect in flight control or banking software). Testing reduces the risk of such failures, measures the quality of the software, and may be required by contract, standards or law.
Describe why testing is part of quality assurance and give examples of how testing contributes to higher quality. (K2)
Testing measures the quality of the software in terms of defects found and gives confidence in the product; defects found and fixed before release raise the quality of the delivered system, and lessons learned from defect analysis feed back into the development process to prevent similar defects in future projects.
Recall the terms 'mistake', 'defect', 'fault', 'failure' and the corresponding terms 'error' and 'bug'. (K1)
An error (mistake) is a human action that produces an incorrect result; a defect (bug, fault) is the resulting flaw in the code or a document; a failure is the deviation of the component or system from its expected delivery, service or result when the defective code is executed.
Recall the common objectives of testing. (K1)
Finding defects, gaining confidence about the level of quality, providing information for decision-making, and preventing defects.
Provide examples for the objectives of testing in different phases of the software life cycle (K2)
In development testing (component, integration, system) the main objective is to find and fix defects; in acceptance testing it is to confirm that the system works as expected and to gain confidence; in operational testing it may be to assess reliability or availability; in maintenance testing it is to check that no new defects have been introduced by changes.
Differentiate testing from debugging (K2)
Testing executes and evaluates the software to find failures caused by defects; debugging is the development activity of finding, analyzing and removing the causes of those failures. Re-testing (confirmation testing) then verifies that the fix works.
Explain the seven principles of testing (K2)
1. Testing shows the presence of defects, not their absence; 2. Exhaustive testing is impossible; 3. Early testing; 4. Defect clustering; 5. Pesticide paradox; 6. Testing is context dependent; 7. Absence-of-errors fallacy.
Recall the 5 fundamental test activities from planning to test closure activities and the main tasks of each test activity. (K1)
1. Planning and control (define objectives and approach, monitor progress and take corrective action); 2. Analysis and design (review the test basis, derive test conditions and design test cases); 3. Implementation and execution (build test cases and procedures, set up the environment, run tests and log results); 4. Evaluating exit criteria and reporting (check results against exit criteria, write a test summary report); 5. Test closure activities (finalize and archive testware, record lessons learned).
Recall that the success of testing is influenced by psychological factors: (K1)
clear objectives;

a balance of self-testing and independent testing;

recognition of courteous communication and feedback on defects.
Contrast the mindset of a tester and that of a developer. (K2)
Developers build the product and naturally focus on making it work (a constructive mindset); testers look for failures and ask what could go wrong (curiosity, professional pessimism, a critical eye and attention to detail). Both mindsets are needed, and courteous communication keeps defect reports from being taken as personal criticism.
Verification
Confirmation by examination and through the provision of objective evidence that specified requirements have been fulfilled.
Validation
Confirmation by examination and through provision of objective evidence that the requirements for a specific intended use or application have been fulfilled.
V-model
A framework to describe the software development life cycle activities from requirements specification to maintenance. The V-model illustrates how testing activities can be integrated into each phase of the software development life cycle.
Test level
A group of test activities that are organized and managed together. A test level is linked to the responsibilities in a project.
Off-the-shelf software (commercial off-the-shelf software, COTS)
A software product that is developed for the general market, i.e. for a large number of customers, and that is delivered to many customers in identical format.
Incremental development model
A development life cycle where a project is broken into a series of increments, each of which delivers a portion of the functionality in the overall project requirements. The requirements are prioritized and delivered in priority order in the appropriate increment. In some (but not all) versions of this life cycle model, each subproject follows a 'mini V-model' with its own design, coding and testing phases.
Component testing
The testing of individual software components.
Stub
A skeletal or special-purpose implementation of a software component, used to develop or test a component that calls or is otherwise dependent on it. It replaces a called component.
Driver
A software component or test tool that replaces a component that takes care of the control and/or the calling of a component or system.
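A minimal sketch of both ideas in component testing (all names below are invented for illustration): the stub stands in for a component that the code under test calls, and the driver stands in for the code that would normally call it.

    # Component under test: formats a greeting using a lookup service it depends on.
    def greet(user_id, lookup_service):
        name = lookup_service(user_id)
        return f"Hello, {name}!"

    # Stub: replaces the real (perhaps unfinished) lookup component with a canned answer.
    def lookup_stub(user_id):
        return "Alice"

    # Driver: test code that takes care of calling the component under test.
    def driver():
        result = greet(42, lookup_stub)
        assert result == "Hello, Alice!"
        print("component test passed")

    driver()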
Robustness
The degree to which a component or system can function correctly in the presence of invalid inputs or stressful environmental conditions.
Robustness testing
Testing to determine the robustness of the software product.
Test-driven development
A way of developing software where the test cases are developed, and often automated, before the software is developed to run those test cases.
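A small sketch of the test-first rhythm (hypothetical example): the test below is written, and fails, before the production code exists; the simplest implementation is then written to make it pass, and the code is refactored while the test is kept green.

    # Step 1: write the test first -- it fails because fizzbuzz() does not exist yet.
    def test_fizzbuzz():
        assert fizzbuzz(3) == "Fizz"
        assert fizzbuzz(5) == "Buzz"
        assert fizzbuzz(15) == "FizzBuzz"
        assert fizzbuzz(7) == "7"

    # Step 2: write just enough production code to make the test pass.
    def fizzbuzz(n):
        if n % 15 == 0:
            return "FizzBuzz"
        if n % 3 == 0:
            return "Fizz"
        if n % 5 == 0:
            return "Buzz"
        return str(n)

    test_fizzbuzz()   # Step 3: run the test again; it now passes.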
Integration
The process of combining components or systems into larger assemblies.
Integration testing
Testing performed to expose defects in the interfaces and in the interactions between integrated components or systems.
System testing
The process of testing an integrated system to verify that it meets specified requirements.
Requirement
A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification, or other formally imposed document.
Functional requirement
A requirement that specifies a function that a component or system must perform.
Non-functional requirement
A requirement that does not relate to functionality, but to attributes such as reliability, efficiency, usability, maintainability and portability.
Test environment
An environment containing hardware, instrumentation, simulators, software tools and other support elements needed to conduct a test.
Acceptance testing
Formal testing with respect to user needs, requirements and business processes conducted to determine whether or not a system satisfies the acceptance criteria and to enable the users, customers or other authorized entity to determine whether or not to accept the system.
Operational testing
Testing conducted to evaluate a component or system in its operational environment.
Compliance
The capability of the software product to adhere to standards, conventions or regulations in laws and similar prescriptions.
Compliance testing
The process of testing to determine the compliance of a component or system.
Alpha testing
Simulated or actual operational testing by potential users/customers or an independent test team at the developers' site, but outside the development organization. Alpha testing is often employed for off-the-shelf software as a form of internal acceptance testing.
Beta testing
Operational testing by potential and/or existing users/customers at an external site not otherwise involved with the developers, to determine whether or not a component or system satisfies the user/customer needs and fits within the business processes. Beta testing is often employed as a form of external acceptance testing for off-the-shelf software in order to acquire feedback from the market.
Test type
A group of test activities aimed at testing a component or system focused on a specific test objective, i.e. functional test, usability test, regression test, etc. A test type may take place on one or more test levels or test phases.
Functional testing
Testing based on an analysis of the specification of the functionality of a component or system.
Black-box testing
Testing, either functional or non-functional, without reference to the internal structure of the component or system.
Black-box (specification-based) test design technique
Procedure to derive and/or select test cases based on an analysis of the specification, either functional or nonfunctional, of a component or system without reference to its internal structure.
Functionality
The capability of the software product to provide functions that meet stated and implied needs when the software is used under specified conditions.
Sub-characteristics: suitability, accuracy, security, interoperability and compliance.
Functionality testing
The process of testing to determine the functionality of a software product.
Interoperability
The capability of the software product to interact with one or more specified components or systems.
Interoperability testing
The process of testing to determine the interoperability of a software product.
Security
Attributes of a software product that bear on its ability to prevent unauthorized access, whether accidental or deliberate, to programs and data.
Security testing
Testing to determine the security of the software product.
Load testing
A test type concerned with measuring the behavior of a component or system with increasing load, e.g. the number of parallel users and/or numbers of transactions, to determine what load can be handled by the component or system.
Performance
The degree to which a system or component accomplishes its designated functions within given constraints regarding processing time and throughput rate.
Performance testing
The process of testing to determine the performance of a software product.
Stress testing
Testing conducted to evaluate a system or component at or beyond the limits of its specified requirements.
Reliability
The ability of the software product to perform its required functions under stated conditions for a specified period of time, or for a specified number of operations.
Sub-characteristics: maturity (robustness), fault-tolerance, recoverability and compliance.
Reliability testing
The process of testing to determine the reliability of a software product.
Usability
The capability of the software to be understood, learned, used and attractive to the user when used under specified conditions.
Sub-characteristics: understandability, learnability, operability, attractiveness and compliance.
Usability testing
Testing to determine the extent to which the software product is understood, easy to learn, easy to operate and attractive to the users under specified conditions.
Efficiency
The capability of the software product to provide appropriate performance, relative to the amount of resources used under stated conditions.
Sub-characteristics: time behavior (performance), resource utilization and compliance.
Efficiency testing
The process of testing to determine the efficiency of a software product.
Portability
The ease with which the software product can be transferred from one hardware or software environment to another.
Sub-characteristics: adaptability, installability, co-existence, replaceability and compliance.
Portability testing
The process of testing to determine the portability of a software product.
White-box testing (structural testing)
Testing based on an analysis of the internal structure of the component or system.
Code coverage
An analysis method that determines which parts of the software have been executed (covered) by the test suite and which parts have not been executed, e.g. statement coverage, decision coverage or condition coverage.
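A small sketch (hypothetical code, not from the glossary) showing why the coverage measures named above are not equivalent: one test can reach every statement yet still leave a decision outcome unexercised.

    def apply_discount(price, is_member):
        if is_member:            # decision with two outcomes (True / False)
            price = price - 10   # statement only reached on the True outcome
        return price

    # One test executes every statement (100% statement coverage) but only
    # the True outcome of the decision (50% decision coverage) ...
    assert apply_discount(100, True) == 90

    # ... a second test covering the False outcome is needed for 100% decision coverage.
    assert apply_discount(100, False) == 100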
Test suite
A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one.
White-box test design technique
A procedure to derive and/or select test cases based on an analysis of the internal structure of a component or system.
Re-testing (confirmation testing)
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions.
Regression testing
Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software as a result of the changes made. It is performed when the software or its environment is changed.
Test automation
The use of software to perform or support test activities, e.g. test management, test design, test execution and results checking.
Maintenance
Modification of a software product after delivery to correct defects, to improve performance or other attributes, or to adapt the product to a modified environment.
Maintenance testing
Testing the changes to an operational system or the impact of a changed environment to an operational system.
Maintainability
The ease with which a software product can be modified to correct defects, modified to meet new requirements, modified to make future maintenance easier, or adapted to a changed environment.
Sub-characteristics: analyzability, changeability, stability, testability and compliance.
Maintainability testing
The process of testing to determine the maintainability of a software product.
Test oracle
A source to determine expected results to compare with the actual result of the software under test. An oracle may be a requirements specification, the existing system (for a benchmark), a user manual, or an individual's specialized knowledge, but should not be the code.
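A sketch of one kind of oracle (an existing, trusted implementation used as a benchmark; the names are illustrative): the new code's actual results are compared with the results the oracle predicts, rather than with the new code itself.

    # System under test: a hand-rolled sort (hypothetical).
    def my_sort(values):
        result = list(values)
        for i in range(len(result)):
            for j in range(i + 1, len(result)):
                if result[j] < result[i]:
                    result[i], result[j] = result[j], result[i]
        return result

    # Oracle: the language's built-in sorted() serves as the trusted benchmark.
    for case in ([3, 1, 2], [], [5, 5, 1], [2, -1, 0, 2]):
        assert my_sort(case) == sorted(case), f"failure for input {case}"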
Impact analysis
The assessment of change to the layers of development documentation, test documentation and components, in order to implement a given change to specified requirements.
Operational environment
Hardware and software products installed at users' or customers' sites where the component or system under test will be used. The software may include operating systems, database management systems, and other applications.
Operational testing
Testing conducted to evaluate a component or system in its operational environment.
Characteristics of Good Testing
- for every development activity there is a corresponding testing activity;

- each test level has test objectives specific to that level;

- the analysis and design of tests for a given test level should begin during the corresponding development activity;

- testers should be involved in reviewing documents as soon as drafts are available in the development cycle.
Understand the relationship between development, test activities and work products in the development life cycle and give examples based on project and product characteristics and context. (K2)
For every development activity there is a corresponding testing activity, and each development work product (requirements, functional specification, design, code) serves as the test basis for a test level (acceptance, system, integration, component testing). In the V-model these pairs are explicit; in incremental models each increment repeats them on a smaller scale.
Recognize the fact that software development models must be adapted to the context of project and product characteristics. (K1)
The life cycle model and its test levels must be tailored to the project and product, e.g. its risk level, size, team structure and whether the software is bespoke or off-the-shelf; test levels may be combined or reorganized (for example, the purchaser of a COTS product may only perform integration and acceptance testing).
Recall characteristics of good testing that are applicable to any life cycle model (K1)
For every development activity there is a corresponding testing activity; each test level has test objectives specific to that level; the analysis and design of tests for a level begins during the corresponding development activity; testers are involved in reviewing documents as soon as drafts are available.
Compare the different levels of testing: major objectives, typical objects of testing, typical targets of testing (e.g. functional or structural) and related work products, people who test, types of defects and failures to be identified. (K2)
Component testing targets individual components against the detailed design and code, is usually done by developers and finds coding defects; integration testing targets interfaces and interactions between components or systems; system testing checks the behaviour of the whole system against requirements and specifications, usually by an independent test team in a representative environment; acceptance testing checks the system against user needs and business processes and is typically performed by users or customers to decide whether to accept the system.
Compare four software test types (functional, non-functional, structural and change-related) by example. (K2)
Functional testing checks what the system does (e.g. testing a transfer function against its specification); non-functional testing checks how well it does it (e.g. performance, load or usability testing); structural (white-box) testing exercises the internal structure (e.g. achieving statement or decision coverage); change-related testing comprises confirmation (re-testing) of fixed defects and regression testing of unchanged areas.
Recognize that functional and structural tests occur at any test level. (K1)
Both can be applied at any level: functional tests can be run against a single component or against the whole system, and structural coverage can be measured on code at component level, on the call hierarchy at integration level, or on menu or screen structure at system level.
Identify and describe non-functional test types based on non-functional requirements. (K2)
Non-functional test types target attributes such as reliability, efficiency, usability, maintainability and portability; examples are performance, load and stress testing, usability testing, reliability testing, maintainability testing, portability testing, and security and interoperability testing.
Identify and describe test types based on the analysis of a software system's structure or architecture. (K2)
Structural (white-box) testing derives tests from the internal structure or architecture, e.g. statement or decision coverage of code at component level, coverage of calls between modules at integration level, or coverage of menu structures and business process flows at system level; coverage tools measure which structural elements the test suite has exercised.
Describe the purpose of confirmation testing and regression testing. (K2)
Confirmation testing (re-testing) re-runs the test cases that previously failed in order to verify that the defect has been fixed; regression testing re-runs tests on a previously tested program after a change to ensure that no new defects have been introduced or uncovered in unchanged areas.
Compare maintenance testing (testing an operational system) to testing a new application with respect to test types, triggers for testing and amount of testing. (K2)
Maintenance testing is performed on a system that is already in operation and is triggered by modifications, migration or retirement, whereas testing a new application is driven by the development project itself; the amount of maintenance testing depends on the risk and size of the change and on the size of the existing system, and it usually combines testing of the change itself with regression testing of unchanged parts, often without complete or up-to-date specifications.
Recognize indicators for maintenance testing (modifications, migration and retirement). (K2)
Modifications include planned enhancements, corrective and emergency fixes, and changes of environment such as operating system or database upgrades; migration means moving the system (and possibly its data) to another platform; retirement may require testing of data migration or archiving of the retired system's data.
Describe the role of regression testing and impact analysis in maintenance. (K2)
Impact analysis assesses which parts of the documentation, tests and components are affected by a change and therefore determines how much regression testing is needed; regression testing then verifies that the rest of the operational system still works correctly after the change.
Testing
The process consisting of all life cycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects.
Static testing
Testing of a component or system at specification or implementation level without execution of that software, e.g. reviews or static code analysis.
Dynamic testing
Testing that involves the execution of the software of a component or system.
Review
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walkthrough.
Informal review
A review not based on a formal (documented) procedure.
Formal review
A review characterized by documented procedures and requirements, e.g. inspection.
Moderator (inspection leader)
The leader and main person responsible for an inspection or other review process.
Entry criteria
The set of generic and specific conditions for permitting a process to go forward with a defined task, e.g. test phase. The purpose of entry criteria is to prevent a task from starting which would entail more (wasted) effort compared to the effort needed to remove the failed entry criteria.
Reviewer (inspector)
The person involved in the review who identifies and describes defects in the product or project under review. Reviewers can be chosen to represent different viewpoints and roles in the review process.
Exit criteria
The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing.
Scribe
The person who records each defect mentioned and any suggestions for process improvement during a review meeting, on a logging form. The scribe has to ensure that the logging form is readable and understandable.
Peer review
A review of a software work product by colleagues of the producer of the product for the purpose of identifying defects and improvements. Examples are inspection, technical review and walkthrough.
Walkthrough
A step-by-step presentation by the author of a document in order to gather information and to establish a common understanding of its content.
Technical review
A peer group discussion activity that focuses on achieving consensus on the technical approach to be taken.
Inspection
A type of peer review that relies on visual examination of documents to detect defects, e.g. violations of development standards and non-conformance to higher-level documentation. The most formal review technique and therefore always based on a documented procedure.
Static analysis
Analysis of software artifacts, e.g. requirements or code, carried out without execution of those software artifacts.
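A sketch of the kind of code defects a static analysis tool can flag without ever executing the code (the snippet is deliberately defective and contrived for illustration):

    def settle_invoice(amount, discount):
        rate = 0.2                 # unused variable: assigned but never read
        if amount < 0:
            return None
            amount = 0             # unreachable (dead) code after the return
        total = amount - discout   # undefined name: typo for 'discount'
        return total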
Compiler
A software tool that translates programs expressed in a high-order language into their machine-language equivalents.
Complexity
The degree to which a component or system has a design and/or internal structure that is difficult to understand, maintain and verify.
Cyclomatic complexity
The number of independent paths through a program. Cyclomatic complexity is defined as L - N + 2P, where

L = the number of edges/links in a graph

N = the number of nodes in a graph

P = the number of disconnected parts of the graph (e.g. a calling graph and a subroutine).
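As a worked illustration (hypothetical function), cyclomatic complexity can also be counted as the number of decision points plus one, which agrees with the L - N + 2P formula for a single connected control flow graph:

    def classify(temperature):
        if temperature < 0:        # decision 1
            label = "freezing"
        elif temperature < 25:     # decision 2
            label = "mild"
        else:
            label = "hot"
        return label

    # Two binary decisions + 1 = cyclomatic complexity of 3, so there are
    # three independent paths and at least three test cases are needed:
    assert classify(-5) == "freezing"
    assert classify(10) == "mild"
    assert classify(30) == "hot"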
Control flow
A sequence of events (paths) in the execution through a component or system.
Data flow
An abstract representation of the sequence and possible changes of state of data objects, where the state of an object is any of: creation, usage, modification or destruction.
Recognize software work products that can be examined by different static techniques. (K1)
Any readable work product can be reviewed, e.g. requirements specifications, design documents, code, test plans, test cases, user guides and web pages; static analysis tools examine code and other formal models such as architecture or data models.
Describe the importance and value of considering static techniques for the assessment of software work products. (K2)
Static techniques find defects (rather than failures) early, before test execution, when they are cheapest to fix; they can reveal defects that dynamic testing is unlikely to find, such as omissions and ambiguities in requirements, and they improve communication and common understanding within the team.
Explain the difference between static and dynamic techniques, considering objectives, types of defects to be identified, and the role of these techniques within the software life cycle. (K2)
Static techniques (reviews and static analysis) examine work products without executing the software and find defects directly, including defects in documents, typically early in the life cycle; dynamic testing executes the software and finds failures caused by defects. The two are complementary: they tend to find different kinds of defects and are applied at different points in the life cycle.
Recall the activities, roles and responsibilities of a typical formal review. (K1)
Activities: planning, kick-off, individual preparation, the review meeting (examination, evaluation and recording of results), rework and follow-up. Roles: the manager decides on and schedules reviews; the moderator (inspection leader) leads the review; the author owns the work product; reviewers (inspectors) identify and describe defects; the scribe records the defects and issues raised.
Explain the differences between different types of review: informal review, technical review, walkthrough and inspection. (K2)
An informal review has no documented procedure; a walkthrough is led by the author, who presents the document step by step to gather information and build common understanding; a technical review is a peer group discussion aimed at consensus on the technical approach; an inspection is the most formal type, led by a trained moderator, based on a documented procedure with defect logging and follow-up.
Explain the factors for successful performance of reviews. (K2)
Clear predefined objectives, involvement of the right people for those objectives, a culture in which found defects are welcomed and expressed objectively, appropriate review types and checklists, training in review techniques, management support, and an emphasis on learning and process improvement.
Recall typical defects identified by static analysis and compare them to reviews and dynamic testing. (K1)
Static analysis typically finds defects in code such as references to variables with undefined values, unreachable (dead) code, unused variables, inconsistent interfaces, violations of coding standards and security vulnerabilities; reviews find defects in any readable document (including omissions in requirements), while dynamic testing reveals failures only when the defective code is executed.
Describe, using examples, the typical benefits of static analysis (K2)
Early detection of defects before test execution; early warning about suspicious aspects of the code or design through metrics (e.g. high cyclomatic complexity); identification of defects not easily found by dynamic testing, such as unreachable code; detection of dependencies and inconsistencies in models; improved maintainability of code and design; and prevention of similar defects when lessons are applied in development.
List typical code and design defects that may be identified by static analysis tools. (K1)
Referencing a variable with an undefined value, inconsistent interfaces between modules and components, variables that are never used, unreachable (dead) code, missing or wrong logic (e.g. potentially infinite loops), overly complicated constructs, violations of programming standards, security vulnerabilities, and syntax violations of code and models.