64 Cards in this Set
Development Teams consist of: |
- Business Analysts (BAs) - Testers - Technical Specialists - Technical Writers - Trainers - Project Managers - User Representatives (the audience who uses the product) |
|
Successful Software Development Projects include: |
- User involvement - Clear Requirements - Solid Planning - Realistic Expectations - Support from Management - Trained Professionals - People who strive for a job well done & quality work |
|
How Do We Sell Testing? |
- Cold Calling - Networking - Responding to RFPs (Requests for Proposals), typically from Crown Corporations |
|
Analysts |
Define the features to include in the scope of the project |
|
Developers |
- Create the application - Write the code |
|
Project Managers |
- Ensure dates and timelines are met - Adhere to the budget - Define the scope and stay within it |
|
Testers |
- Review the requirements - Perform tests to ensure the product runs as it was designed to |
|
PQA |
Founded in 1997 in Fredericton, NB. Privately owned. Canada's leading independent provider of quality assurance (QA) and testing solutions. 110 employees (testing professionals). 6 offices across Canada: Fredericton (head office, 40), Miramichi (1?), Moncton (10), Halifax (10), Vancouver (25), Calgary (15) |
|
PLATO |
Founded in 2015. Offices in Fredericton (20) and Miramichi (12) |
|
PQA/PLATO |
- QA and testing assessments - Training - Mentoring/coaching |
|
Phase Containment |
The extent to which defects are removed in the same phase in which they were introduced |
|
False-fail result |
A test result in which a defect is reported although no such defect actually exists |
|
False-positive result |
A test result in which a defect is reported although no such defect actually exists (synonym of false-fail result) |
|
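To make the two synonymous terms above concrete, a minimal sketch (the scenario is invented, not from the deck): the product behaves correctly, but the test's expectation is wrong, so a defect is reported that does not exist.

```python
# Python 3's round() is specified to use banker's rounding, so round(2.5) == 2.
# A tester who assumes "round half up" writes this expectation:
def test_round_half_up():
    assert round(2.5) == 3   # fails and reports a "defect", but the product
                             # behaves as specified: a false-fail (false-positive)
```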
Error/mistake |
A human action that produces an incorrect result |
|
Defect (bug, fault) |
A flaw in a component or system that can cause the component or system to fail to perform its required function, e.g. an incorrect statement or data definition. A defect, if encountered during execution, may cause a failure of the component or system |
|
Failure |
Deviation of the component or system from its expected delivery, service or result. |
|
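The three cards above form a chain: a human error introduces a defect into the code, and the defect produces a failure only when the code is executed. A minimal sketch (the `sum_to_n` function is hypothetical):

```python
def sum_to_n(n):
    # Error (mistake): the programmer intends the sum 1..n but writes
    # range(1, n), which stops at n - 1.
    return sum(range(1, n))   # Defect: incorrect statement; should be range(1, n + 1)

# Failure: the deviation from the expected result is only observed
# when the defective code actually runs.
expected, actual = 6, sum_to_n(3)
print("expected:", expected, "actual:", actual)   # expected: 6 actual: 3
```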
Quality |
The degree to which a component, system or process meets specified requirements and/or user/customer needs and expectations |
|
Test Design Specification |
A document specifying the test conditions (coverage items) for a test item, the detailed test approach and identifying the associated high level test cases |
|
Test Control |
A test management task that deals with developing and applying a set of corrective actions to get a test project on track when monitoring shows a deviation from what was planned. |
|
Test Case |
A set of input values, execution preconditions, expected results and execution postconditions, developed for a particular objective or test condition, such as to exercise a particular program path or to verify compliance with a specific requirement. |
|
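A minimal sketch of how the four parts of that definition map onto a test (the `withdraw` function and its field names are hypothetical):

```python
def withdraw(account, amount):
    """Hypothetical system under test."""
    if amount <= account["balance"]:
        account["balance"] -= amount

def test_withdraw_reduces_balance():
    account = {"balance": 100}         # execution precondition: a funded account
    withdraw(account, 30)              # input values
    assert account["balance"] == 70    # expected result
    assert account["balance"] >= 0     # execution postcondition: no overdraft

test_withdraw_reduces_balance()        # passes silently
```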
Test Objective |
A reason or purpose for designing and executing a test. |
|
Testing |
The process consisting of all lifecycle activities, both static and dynamic, concerned with planning, preparation and evaluation of software products and related work products to determine that they satisfy specified requirements, to demonstrate that they are fit for purpose and to detect defects. |
|
Requirement |
A condition or capability needed by a user to solve a problem or achieve an objective that must be met or possessed by a system or system component to satisfy a contract, standard, specification or other formally imposed document. |
|
Review |
An evaluation of a product or project status to ascertain discrepancies from planned results and to recommend improvements. Examples include management review, informal review, technical review, inspection, and walk through. |
|
Debugging |
The process of finding, analyzing and removing the causes of failure in software. |
|
Confirmation Testing (re-testing) |
Testing that runs test cases that failed the last time they were run, in order to verify the success of corrective actions. |
|
Test Strategy |
A high level description of the test levels to be performed and the testing within those levels for an organization or program (one or more projects) |
|
Seven Testing Principles |
1. Testing shows the presence of defects 2. Exhaustive testing is impossible 3. Early testing 4. Defect clustering 5. Pesticide Paradox 6. Testing is Context dependent 7. Absence of errors fallacy |
|
Complete Testing AKA Exhaustive Testing |
A test approach in which the test suite comprises all combinations of input values and preconditions |
|
Zeno's Paradox |
Testing reduces the probability of undiscovered defects remaining, but it cannot be proven that no defects remain. |
|
Principle 1 Testing shows presence of defects |
Testing can show that defects are present, but cannot prove that there are no defects. Testing reduces the probability of undiscovered defects remaining in the software but, even if no defects are found, it is not a proof of correctness. |
|
Principle 2 Exhaustive Testing is Impossible |
Testing everything, (all combinations of inputs and preconditions) is not feasible except for trivial cases. Instead of exhaustive testing, risk analysis and priorities should be used to focus testing efforts. |
|
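Back-of-the-envelope arithmetic makes this principle concrete. Assuming a form with 10 independent fields of 100 valid values each, and a generous automated rate of one million tests per second:

```python
combinations = 100 ** 10                  # 10^20 input combinations
tests_per_second = 1_000_000
years = combinations / tests_per_second / (60 * 60 * 24 * 365)
print(f"{combinations:.0e} combinations = {years:,.0f} years")   # about 3,170,979 years
```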
Principle 3 Early Testing |
To find defects early, testing activities shall be started as early as possible in the software or system development life cycle, and shall be focused on defined objectives. |
|
Principle 4 Defect Clustering |
Testing effort shall be focused proportionally to the expected and later observed defect density of modules. A small number of modules usually contains most of the defects discovered during pre-release testing, or is responsible for most of the operational failures. |
|
Principle 5 Pesticide Paradox |
If the same tests are repeated over and over again, eventually the same set of test cases will no longer find any new defects. To overcome this "pesticide paradox", test cases need to be regularly reviewed and revised, and new and different tests need to be written to exercise different parts of the software or system to find potentially more defects. |
|
Principle 6 Testing is context dependent |
Testing is done differently in different contexts, e.g. safety-critical software is tested differently from an e-commerce site. |
|
Principle 7 Absence-of-errors Fallacy |
Finding and fixing defects does not help if the system built is unusable and does not fulfil the users' needs and expectations. |
|
Test Execution |
The process of running a test on the component or system under test, producing actual results. |
|
Test Approach |
The implementation of the test strategy for a specific project. It typically includes the decisions made that follow based on the (test) project's goal and the risk assessment carried out, starting points regarding the test process, the test design techniques to be applied, exit criteria and test types to be performed. |
|
Test Plan |
A document describing the scope, approach, resources and schedule of intended test activities. It identifies, among others, test items, the features to be tested, the testing tasks, who will do each task, the degree of tester independence, the test environment, the test design techniques and entry and exit criteria to be used and the rationale for their choice, and any risks requiring contingency planning. It is a record of the test planning process. |
|
Test Monitoring |
A test management task that deals with the activities related to periodically checking the status of a test project. Reports are prepared that compare actuals against what was planned. |
|
Test Condition |
An item or event of a component or system that could be verified by one or more test cases, e.g. a function, transaction, feature, quality attribute, or structural element. |
|
Test Basis |
All documents from which the requirements of a system can be inferred. The documentation on which the test cases are based. If a document can be amended only by way of formal amendment procedure, then the test basis is called a FROZEN TEST BASIS. |
|
Test Data |
Data that exists (for example in a database) before a test is executed, and that affects or is affected by the component or system under test. |
|
Coverage (test coverage) |
The degree, expressed as a percentage, to which a specified coverage item has been exercised by a test suite. |
|
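As a sketch of the arithmetic (the branch counts are invented): if a test suite exercises 6 of a component's 8 decision branches, branch coverage is 75%.

```python
total_branches = 8
exercised = {1, 2, 3, 5, 6, 8}                    # branch IDs hit by the suite
coverage = len(exercised) / total_branches * 100
print(f"branch coverage: {coverage:.1f}%")        # branch coverage: 75.0%
```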
Test Procedure Specification (test procedure, test script, manual test script) |
A document specifying a sequence of actions for the execution of a test. |
|
Test Suite |
A set of several test cases for a component or system under test, where the postcondition of one test is often used as the precondition for the next one. |
|
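A minimal sketch of such chaining (the in-memory user store is hypothetical): each test leaves behind the state the next test requires, so the suite must run in order.

```python
users = {}

def test_create_user():
    users["alice"] = {"active": False}   # postcondition: alice exists
    assert "alice" in users

def test_activate_user():                # precondition: alice exists
    users["alice"]["active"] = True
    assert users["alice"]["active"]

def test_delete_user():                  # precondition: alice is active
    del users["alice"]
    assert "alice" not in users

for test in (test_create_user, test_activate_user, test_delete_user):
    test()                               # order matters
```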
Incident |
Any event occurring that requires investigation |
|
Testware |
Artifacts produced during the test process required to plan, design and execute tests, such as documentation, scripts, inputs, expected results, set-up and clear-up procedures, files, databases, environment, and any additional software or utilities used in testing. |
|
Regression Testing |
Testing of a previously tested program following modification to ensure that defects have not been introduced or uncovered in unchanged areas of the software, as a result of the changes made. It is performed when the software or its environment is changed. |
|
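A sketch of the idea (the `format_name` function is invented): after modifying the function, the whole previously passing suite is re-run, not only the tests for the change, to catch defects introduced or uncovered in unchanged areas.

```python
def format_name(first, last):
    return f"{last}, {first}".title()    # recently modified

def test_typical_name():                 # unchanged area: must still pass
    assert format_name("ada", "lovelace") == "Lovelace, Ada"

def test_missing_last_name():            # unchanged area: must still pass
    assert format_name("plato", "") == ", Plato"

test_typical_name()
test_missing_last_name()
```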
Exit Criteria |
The set of generic and specific conditions, agreed upon with the stakeholders, for permitting a process to be officially completed. The purpose of exit criteria is to prevent a task from being considered completed when there are still outstanding parts of the task which have not been finished. Exit criteria are used to report against and to plan when to stop testing. |
|
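A sketch of exit criteria evaluated mechanically (the thresholds and field names are illustrative, not from the deck):

```python
def met_exit_criteria(results):
    return (results["requirement_coverage"] >= 100      # every requirement tested
            and results["pass_rate"] >= 0.95            # at least 95% of cases pass
            and results["open_critical_defects"] == 0)  # no critical defects open

status = {"requirement_coverage": 100, "pass_rate": 0.97, "open_critical_defects": 1}
print(met_exit_criteria(status))   # False -> testing is not yet complete
```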
Test Log |
A chronological record of relevant details about the execution of tests. |
|
Test Summary Report |
A document summarizing testing activities and results. It also contains an evaluation of the corresponding test items against exit criteria. |
|
Error Guessing |
A test design technique where the experience of the tester is used to anticipate what defects might be present in the component or system under test as a result of errors made, and to design tests specifically to expose them. |
|
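A sketch of error guessing with pytest (the `parse_age` function is hypothetical): the tester anticipates classic mistakes (empty input, non-numeric text, boundary and negative values) and aims tests squarely at them.

```python
import pytest

def parse_age(text):
    """Hypothetical system under test."""
    age = int(text)
    if not 0 <= age <= 130:
        raise ValueError("age out of range")
    return age

@pytest.mark.parametrize("bad", ["", "abc", "-1", "131", "1e3"])
def test_guessed_error_inputs(bad):
    with pytest.raises(ValueError):
        parse_age(bad)
```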
Independence of Testing |
Separation of responsibilities, which encourages the accomplishment of objective testing. |
|
Test Policy |
A high-level document describing the principles, approach and major objectives of the organization regarding testing. |
|
Code of Ethics Public |
Certified software testers shall act consistently with the public interest. |
|
Code of Ethics Client and Employer |
Certified software testers shall act in a manner that is in the best interests of their client and employer, consistent with the public interest. |
|
Code of Ethics Product |
Certified software testers shall ensure that the deliverables they provide (on the products and systems they test) meet the highest professional standards possible. |
|
Code of Ethics Judgement |
Certified Software Testers shall maintain integrity and independence in their professional judgement. |
|
Code of Ethics Management |
Certified software testers and leaders shall subscribe to and promote an ethical approach to the management of software testing. |
|
Code of Ethics Profession |
Certified software testers shall advance the integrity and reputation of the profession consistent with the public interest. |
|
Code of Ethics Colleagues |
Certified software testers shall be fair to and supportive of their colleagues and promote cooperation with software developers. |
|
Code of Ethics Self |
Certified Software Testers shall participate in lifelong learning regarding the practice of their profession and shall promote an ethical approach to the practice of the profession. |