Testing

Executing software in a controlled manner to determine if it behaves as specified.

Systematic Testing

A specific procedure/system for testing:

- choosing/creating test cases
- executing the tests/documenting results
- evaluating the results
- deciding when we are done

Verification

Checking software for conformance with a given specification.

Are we doing the job right?

Validation

Checking if we're doing what the user wanted.

Are we doing the right job?

Levels of Specification

1) Functional Specifications
2) Design Specifications
3) Detailed Design Specifications

Functional Specifications (General)

Precise description of required behaviour of the system. What should the software do?

Design Specifications (General)

Describes the architecture designed to implement the functional specifications. Illustrates software components and their relationships.

Detailed Design Specifications (General)

How each component of the architecture will be implemented.

Levels of Testing

3) Unit Testing
2) Integration Testing
1) System Testing
0) Acceptance Testing

Unit Testing

Do individual components meet detailed design specifications?

Integration Testing

Do individual units work together as a whole?

System Testing

Does the product meet the functional requirements?

Acceptance Testing

Customer validates that the product meets their expectations.

-ility Testing

- Capability
- Reliability
- Usability
- Performance
- Security

Regression Testing (General)

Ensure existing behaviour does not break with new changes/features.

Failure Testing (General)

Ensure we actually fixed failures and that we don't cause them again.

Test Design Stages

- Strategy
- Planning
- Case Design
- Procedure
- Documentation

Black Box Testing

What did we forget?

Cannot see the software code; tests are based purely on requirements and specifications.

Functional Specifications

Formal (mathematical) or informal; has information about inputs, actions, and outputs.

Functionality Coverage

Attempts to partition the functional specifications into a set of small, separate requirements.

Black Box Functionality Coverage

Examines only portions of the requirements, and therefore cannot replace acceptance testing.

Black Box: Experimental Design

Black Box testing performs an experiment on the software: we form a hypothesis, develop a method to test it, observe the results, and draw a conclusion. Isolate variables.

Test Plan Design

Inputs should isolate different causes of failure.

Test cases should be ordered such that each test only assumes previously tested features work.

Vary only one input at a time in each test case.

Input Coverage Testing

Analyze all possible inputs allowed by the functional specifications.

Methods:
- exhaustive
- input partitioning
- shotgun
- (robustness) boundary

Exhaustive Input Testing

Test every possible input to the program. IMPRACTICAL!

Input Partition Testing

Make input equivalence classes which characterize sets of inputs. Test each input class. (Choose simplest input value for each class.)
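
For example, a hypothetical grading method whose specification distinguishes failing, passing, and out-of-range scores yields four equivalence classes. A minimal sketch in Java (all names and the 0-100/50 spec are illustrative assumptions; run with java -ea to enable assertions):

    // Hypothetical unit under test: spec says 0-49 fail, 50-100 pass, anything else invalid.
    class Grader {
        static String grade(int score) {
            if (score < 0 || score > 100) return "invalid";
            return score < 50 ? "fail" : "pass";
        }
    }

    public class PartitionTest {
        public static void main(String[] args) {
            // One simple representative value per equivalence class.
            assert Grader.grade(25).equals("fail");     // class: 0..49
            assert Grader.grade(75).equals("pass");     // class: 50..100
            assert Grader.grade(-5).equals("invalid");  // class: below range
            assert Grader.grade(200).equals("invalid"); // class: above range
            System.out.println("all partition tests passed");
        }
    }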

Shotgun Testing

Choose random values for the inputs and test these. This is not systematic and not very useful; we would have to choose a lot of random values to be thorough.

Input Partitioning + Shotgun Testing

Combine the shotgun method with input partitioning: choose random values within each equivalence class and test these.

Input Robustness Testing

Check that the program doesn't crash unexpectedly, no matter the input.

1) Shotgun (random garbage input)
2) Boundary value

Boundary Values

Values close to the boundaries allowed by the software specification.
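
Continuing the hypothetical Grader sketch from the partition example, boundary tests pick inputs at and adjacent to each specification boundary (the 0/100 range limits and the 50 cutoff are assumed, not from the original cards):

    public class BoundaryTest {
        public static void main(String[] args) {
            // Values at and around each boundary of the assumed 0..100 spec.
            int[] boundaryInputs = { -1, 0, 1, 49, 50, 51, 99, 100, 101 };
            for (int score : boundaryInputs) {
                // Robustness: the unit must classify every value without crashing.
                System.out.println(score + " -> " + Grader.grade(score));
            }
        }
    }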

Output Coverage Testing

Analyze all possible outputs specified in functional specifications, create tests to cause each output.

Exhaustive Output Testing

Slightly more practical than Exhaustive Input Testing, but still not very practical.

Output Partitioning

Analyze possible output classes, design inputs to cause outputs in each class. This is difficult!

Multiple Input/Output Streams

Separate concerns, create coverage test for each stream. Treat each stream as a pre-made partition within which we make a set of smaller partitions.

"Grey" Box Testing

Using Black Box Testing methods when we already have a design, at the architectural (class) level or the detailed (method) level.

Model Driven Engineering (MDE)

Uses formal state-model to visually model software process/requirements.

State Model

High-level abstraction of the program's intent (expressed at the problem domain level).

Applications of State Models

- verify the model is correct (formal verification)
- generate some of the software implementation
- test that the implementation is consistent with the model

Model Based Testing

Pros: automatic, tests against a formal specification, covers all essential behaviour, requires no code, very high confidence.

Cons: heavyweight testing (only used for safety-critical systems).

Black Box Unit Testing

Create tests for units (methods, statements, classes, etc.) that follow Black Box criteria. We look at the requirements, inputs, and outputs of each unit.

Method Testing

Requirements: the specifications of the method

Input: values of parameters and global vars

Output: values of returned vars, global vars, and any exceptions thrown (referred to as the outcome)

Test Harness

A program designed to exercise a specific method, class, or subsystem. Test cases are programmed as a sequence of calls to the unit with specific input values.
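
A minimal harness for a stack unit, sketched in Java (the test case and expected values are illustrative; java.util.ArrayDeque stands in for the unit under test):

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class StackHarness {
        public static void main(String[] args) {
            // A test case is a scripted sequence of calls with specific input values.
            Deque<Integer> s = new ArrayDeque<>();   // unit under test
            s.push(42);                              // input chosen by the test case
            int popped = s.pop();
            // Check the outcome against the expected output.
            if (popped == 42 && s.isEmpty()) {
                System.out.println("PASS");
            } else {
                System.out.println("FAIL: expected 42 and an empty stack");
            }
        }
    }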

Factoring Out Unit Dependencies

Test harnesses provide stubs for other units.

Stub: gives typical output for a given unit.
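
A sketch of a stub in Java (the PriceService interface, its return value, and the 10% tax are all hypothetical):

    // Dependency the unit under test normally calls.
    interface PriceService {
        double lookup(String item);
    }

    // Stub: returns a fixed, typical output so the unit can be tested in isolation.
    class PriceServiceStub implements PriceService {
        public double lookup(String item) { return 9.99; }
    }

    public class StubDemo {
        // Unit under test: depends on PriceService.
        static double totalWithTax(PriceService svc, String item) {
            return svc.lookup(item) * 1.10;
        }

        public static void main(String[] args) {
            // The stub makes the result deterministic, independent of any real service.
            System.out.println(totalWithTax(new PriceServiceStub(), "widget"));
        }
    }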

Assertions, please

Using pre- and post-conditions to help with input and output coverage analysis.
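
In Java, pre- and post-conditions can be written as assert statements (enabled with java -ea); a sketch with a hypothetical integer square root:

    public class SqrtDemo {
        static int isqrt(int n) {
            assert n >= 0 : "precondition: n must be non-negative";
            int r = (int) Math.sqrt(n);
            // The postcondition documents the intended meaning of the result.
            assert (long) r * r <= n && (long) (r + 1) * (r + 1) > n : "postcondition violated";
            return r;
        }

        public static void main(String[] args) {
            System.out.println(isqrt(17));   // prints 4
        }
    }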

Testing Tools

JTest and C++test use explicit pre/post conditions to implement naive Black Box testing. They automatically generate test harnesses, stubs, and naive input coverage test cases.

The outcome must be checked by hand, and stubs cannot be provided unless the code is complete, so this is not really Black Box testing!

Naive Class Testing (Black Box)

Done by testing tools; tests every method in the class.

Input is the current state of all class, object, and global variables, plus the method parameters.

Output is the new state of all class, object, and global variables, plus the results/exceptions of the method.

Class Traces

Sequence of method calls.

Trace Specifications

An explicit method for specifying behaviour of a sequence of method calls.

Trace Expressions

Used to specify legal sequences of method calls.

Ex// s.new(), s.push(x), s.pop(x), s.Empty() == true
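
That trace, checked directly in Java (java.util.ArrayDeque stands in for the stack; run with java -ea):

    import java.util.ArrayDeque;
    import java.util.Deque;

    public class TraceDemo {
        public static void main(String[] args) {
            Deque<Integer> s = new ArrayDeque<>(); // s.new()
            int x = 7;
            s.push(x);                             // s.push(x)
            assert s.pop() == x;                   // s.pop(x)
            assert s.isEmpty();                    // s.Empty() == true
            System.out.println("trace holds");
        }
    }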

Implementing Assertions

Checked assertions help with all kinds of systematic testing. Well-documented assertions help us to understand the programmer's assumptions and intentions.

Run Time Checking (Assertions)

Assertions are checked every time the method/class is used.

Black Box Integration Testing

Testing Subsystems: gradually replace stubs in Unit Tests with the real thing. Build up to testing the integrated system.

White Box Testing

What did we do wrong?

- Code Coverage
- Logic Path/Decision Coverage
- Mutation Testing
- Data Coverage

Code Coverage

Execute every method at least once.

Logic Path/Decision Coverage

Cover every path of execution.

Mutation Testing

Create many slightly different versions of the code. Checks the sufficiency of the test suite.

Code Injection

Modification of source code that does not affect functionality.

Implementing White Box Testing

Validation of coverage with code injection at the source, executable-code, and execution-sampling levels.

Static Analysis

Reduces White Box testing effort and cost using automatic proofs (of parts of program).

Code Coverage Methods

Statement Analysis:
- statement coverage
- basic block coverage

Decision Analysis:
- decision coverage
- loop coverage
- path coverage

Statement Coverage

Execute every statement at least once (where one line of code = one statement). Create a test case for each unique set of inputs.

Basic Block Coverage

Cause each basic block to execute at least once; what inputs are needed for the block to be entered?

Basic Block: an indivisible sequence of statements.

Decision Coverage

Every decision is executed every way (True/False).

Condition Coverage

Each condition expression within a decision is exercised.
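
The difference shows up with compound decisions; a hypothetical Java example:

    public class CoverageDemo {
        // Decision coverage needs (a && b) to evaluate both true and false overall;
        // condition coverage needs each of a and b exercised both ways individually.
        static String check(boolean a, boolean b) {
            if (a && b) return "both";
            return "not both";
        }

        public static void main(String[] args) {
            System.out.println(check(true, true));   // decision true
            System.out.println(check(false, true));  // decision false, a false
            System.out.println(check(true, false));  // decision false, b false
        }
    }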

Loop Coverage

Exercise each loop:

- once
- twice
- zero times
- many times
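
A sketch of those four cases against a simple summing loop (the sum method is illustrative; run with java -ea):

    public class LoopCoverageDemo {
        static int sum(int[] xs) {
            int total = 0;
            for (int x : xs) total += x;   // loop under test
            return total;
        }

        public static void main(String[] args) {
            assert sum(new int[] {5}) == 5;               // once
            assert sum(new int[] {5, 6}) == 11;           // twice
            assert sum(new int[] {}) == 0;                // zero times
            assert sum(new int[] {1, 2, 3, 4, 5}) == 15;  // many times
            System.out.println("loop coverage cases passed");
        }
    }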

Execution Paths

Sequence of executed statements starting at entry to a unit. Two paths are independent if there is at least one statement on one path which is not executed in the other.

Flow Graphs

Used to show program control flow between basic blocks.

Path Coverage Testing

Make one test for each independent path.

Pros: covers all basic blocks, covers all conditions, fewer tests, automatable.

Cons: does not take data complexity into account.
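
A hypothetical sketch: two sequential decisions give four execution paths, but a basis of three independent paths (decisions + 1), so three tests suffice:

    public class PathDemo {
        static int f(int x) {
            int y = 0;
            if (x > 0) y += 1;       // decision 1
            if (x % 2 == 0) y += 2;  // decision 2
            return y;
        }

        public static void main(String[] args) {
            // One test per independent path:
            System.out.println(f(3));   // true,  false -> 1
            System.out.println(f(-2));  // false, true  -> 2
            System.out.println(f(4));   // true,  true  -> 3
        }
    }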

Data Coverage Methods

Covers the data aspect of the program code rather than the control aspect.

Data Value Coverage

Identify critical variables, analyze code for parts that change the variables, partition, test.

Data Path Coverage

Identify output variables, analyze code for flow paths that affect values.

Data Interface Coverage

Identify unit's interface input variables, analyze code to find classes of values that cause different behaviour, partition, test.

Mutation Testing

Checks the adequacy of the test suite:

- mutate the source
- run the tests on each mutant, compare to the original
- results differ? the mutant has been detected/killed

Mutants

Syntactic changes to program source representing errors in the code. Each mutant has only one change.

Value Mutations

Change values of constants, subscripts, parameters, etc.

Decision Mutations

Invert the sense of each decision condition.

Ex// == becomes !=
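
For example (a hypothetical original/mutant pair in Java):

    public class MutantDemo {
        static boolean originalIsZero(int x) { return x == 0; } // original
        static boolean mutantIsZero(int x)   { return x != 0; } // mutant: == becomes !=

        public static void main(String[] args) {
            // Any test that evaluates x = 0 kills this mutant:
            // the original and the mutant produce different results.
            System.out.println("original: " + originalIsZero(0)
                             + ", mutant: " + mutantIsZero(0));
        }
    }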

Statement Mutations

Delete, exchange, or shuffle individual statements.

Mutation Adequacy Score

MAS = # dead mutants / total # mutants
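
For example, if running the suite on 50 mutants leaves 5 undetected (alive), MAS = 45/50 = 0.9; a score of 1.0 means every mutant was killed. (Numbers are illustrative.)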

Software Maintenance/Evolution

Done while the software is in production; the software evolves in response to run-time failures or new requirements.

Corrective Maintenance

Fix reported errors:

- coding errors
- design errors
- requirements errors

Adaptive Maintenance

Change the software to work in a new environment (e.g., a new operating system) without changing its functionality.

Perfective Maintenance

Adding functionality due to changes in requirements, driven by requests from customers or a new business environment.

Maintenance Testing

Always test! Ensure we don't break code.

Failure Testing/Suites

Suites of examples that have caused failures in the past. Characterize each failure with a failure test. Ensure failures do not reappear.

Operational Testing

"Real Thing" Suites of tests that are actual production runs.

Regression Testing

Automated testing to ensure product doesn't regress (become less effective). Don't break existing functionality with new code. Catch unintentional changes.

Regression Series/Set

Incrementally compare results of one regression test to the results of the previous test.

Establishing a Baseline Regression Set

Start from the original functionality suite, failure tests, and operational tests. Choose a set of observable artifacts to track. Run the tests and save the new artifacts in an easy-to-compare form.
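
A minimal sketch of the comparison step in Java, assuming the tracked artifacts are combined into plain text files (the file names are hypothetical):

    import java.nio.file.Files;
    import java.nio.file.Path;
    import java.util.List;

    public class RegressionCompare {
        public static void main(String[] args) throws Exception {
            // Compare the new run's artifacts against the saved baseline.
            List<String> baseline = Files.readAllLines(Path.of("baseline.txt"));
            List<String> current  = Files.readAllLines(Path.of("current.txt"));
            if (baseline.equals(current)) {
                System.out.println("PASS: no regression detected");
            } else {
                System.out.println("FAIL: output differs from baseline");
            }
        }
    }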

Adding/Retiring Tests from Regression Set

Add tests whenever we make a new change. Retire tests that have been incorporated into a new test.

Observable Artifact

Direct outputs of the software and indicators of its behaviour. Store as a text file.

We want to combine Observable Artifacts into one text file.