91 Cards in this Set
First Law of Software Development |
EARLIER IS CHEAPER
The later in the development cycle a fault is detected, the more expensive it is to fix |
|
Artifacts of Software Development |
What we produce when making software
Plans, procedures, source code, comments, test cases, test reports, user doc, technical doc |
|
Review |
usually refers to the management practice of meetings to informally consider the state of the project at certain stages |
|
Walkthroughs |
refer to an informal technical review, normally carried out by developers
used by dev teams to improve quality by involving the whole team |
|
Inspections |
refers to the completely formal process of review, aka formal technical reviews
involves formal written reports, defect data collection, and analysis |
|
Inspections in the Software Process |
Requirements definition (requirements review); system and software design (design inspection); implementation and unit testing (code inspection); integration (functional audit); operation and maintenance |
|
Kinds of Inspections |
A generic technique = inspections can assist at every stage, the earlier the better |
|
PDR Preliminary Design Review |
1 - EVALUATE the progress, technical adequacy, risk resolution 2 - DETERMINE its compatibility with performance and engineering requirements 3 - EVALUATE the degree of definition and assess the technical risk of manufacturing methods 4 - ESTABLISH the existence and compatibility of the physical and functional interfaces |
|
Example: PDR |
For CSCIs the review would focus on 1 - evaluation of the progress, consistency and technical adequacy of the selected top-level design 2 - compatibility between software requirements and preliminary design 3 - the preliminary version of the operation and support documents |
|
The Prevention Principle |
Prevention is better than cure. |
|
IEEE Definition of Inspection |
"... a formal evaluation technique in which software requirements, design, or code are examined in detail by a person or group other than the author to detect faults, violations of development standards, and other problems..." |
|
IEEE Objective of Inspection |
to detect and identify software element defects. This is a rigorous, formal peer examination. |
|
Inspection |
-verifies that the software elements satisfy the specifications -verifies that software elements conform to applicable standards -identifies deviations from standards and specifications -collects software engineering data -does not examine alternatives or stylistic issues |
|
Fagan Inspections |
150 lines/hour, paraphrased
Moderator chairs the meeting; 2-3 inspectors paraphrase line by line; author remains silent, clarifying when asked |
|
Choosing Inspectors (FAGAN) |
Good - review specialists, technical people from the same team as the author
Bad - managers, supervisors, anyone with a personality clash with the author, all management, anyone with a conflict of interest |
|
Side benefits of Inspection |
Cultural, promotes shared quality culture
Organizational, coding standards and practices are learned and enforced
Educational, quality improves over time, as authors become more aware of the kinds of faults they are prone to |
|
Inspection Process |
Inspections may be used at any stage of software development
Ideally inspections can be applied at every stage to catch problems as early as possible; no matter what the stage, the process stays the same |
|
Generic Inspection Process |
Planning - choose team, schedule, materials Orientation - introduce artifact, process Prep - individually check artifact, note issues Review meeting - meet to discuss Rework - correct defects noted Verify - verify artifact and process quality |
|
Planning |
Objectives - gather the review package (including the artifact being inspected), form the team, set up a schedule |
|
Orientation Meeting |
Author provides an overview of the artifact; inspectors obtain the review package; preparation goals are set; inspectors commit to participating |
|
Preparation |
Find the maximum number of non-minor defects in the artifact |
|
Defect Classification |
Critical - cause system to crash, incorrect results
Severe - incorrect results
Moderate - affect limited areas of functionality
Minor - defects that can be overlooked without loss of functionality |
|
Checklists and References |
Checklists - include lists of questions about completeness and style
References - company standards documents, online resources, textbooks |
|
Review Meeting |
Make a consolidated, comprehensive list of non-minor defects to be addressed -helps group synergy -helps provide shared knowledge of the artifact |
|
Rework |
Assess each defect listed in the review defect report, determine whether it really is a defect, and repair as necessary; write a report on the handling of each non-minor defect; resolve minor issues |
|
Verify |
Assess the reworked artifact's quality; assess the inspection process; pass or fail the artifact |
|
Code Inspections |
Even if we have a highly formalized inspection process such as the generic one we looked at last time, there is still a range of actual practices that can be applied to implement the actual review of the artifact |
|
Code Checklists |
give a concrete list of properties of the code to check for
may be general properties for any program, or specific to a particular program
both desired and undesired properties may appear |
|
Code Checklist example |
– 1. Variable and Constant Declaration Defects 1.1 Are descriptive variable and constant names used in accord with naming conventions? 1.2 Are there variables with confusingly similar names? 1.3 Is every variable properly initialized? 1.4 Can any non-local variables be made local? |
|
Code Paraphrasing |
Reading the code in English
is the original method of review described by Fagan for use in code inspections
the object is to ensure that the code really does implement what we want
mainly concepts and processes, not variables |
|
Structured Code Walkthroughs |
A Guided Tour
very effective in training
less effective as an inspection method |
|
The Lions Commentary |
1977 "Source Code and Commentary on UNIX Level 6" by John Lions |
|
Lightweight Code Inspection Practices |
Learning from success
formal inspections are very successful at finding defects, but many find the process too cumbersome
As a result, many practices have been developed that can gain some of the advantages |
|
Lightweight Code Inspection
Four Eyes Principle |
Programmers work in loose pairs, where each module is authored by one programmer and informally inspected by the other |
|
Lightweight Code Inspection
White Box Testing By Hand |
when applied manually, most white box testing methods force the test author to examine the code in detail to create the tests - in practice this is how most defects are found |
|
Heavyweight Inspection Practices |
Doing it right in the first place
formal verification |
|
Heavyweight IP
Cleanroom Software Development |
Essentially the ideal inspection process
formal specification, incremental development, structured programming, stepwise refinement, static verification, statistical testing |
|
Code Inspections Summary |
When the inspection process is applied to code = checklists, paraphrasing, walkthroughs
Lightweight practices gain the advantages of inspections without the formal process, as in XP
Heavyweight inspection goes all the way to formal verification |
|
Code Inspection in XP |
Lightweight, continuous approach
XP uses two lightweight software dev practices: pair programming and refactoring |
|
Pair Programming |
-PP is continuous and immediate code inspection -observed to increase quality because all code is inspected -increases productivity
Driver = author Partner = inspector |
|
Code Refactoring |
is improving the design of the existing code
improves the design without affecting external behaviour |
|
4 Constraints of Improving Design |
1. the system must communicate everything you want to communicate 2. system must contain no duplicate code 3. system should have the fewest possible classes 4. system should have the fewest possible methods ONCE AND ONLY ONCE RULE |
|
ONCE AND ONLY ONCE |
everything that must be in the program is in the program, and only in one place |
|
Code Smells |
In XP, when code needs to be refactored it "smells"
class too long, switch statements, struct-like classes, duplicate code, almost-duplicate code, too many primitive-type variables, useless comments |
|
Refactoring Process |
The Refactor Cycle
Identify some code that smells, apply refactoring, run the tests, repeat
Done when we pass the tests |
|
The Fowler Catalog |
Martin Fowler published a by-example catalog of refactorings that can be applied
This catalog is a rough guide for when and why certain refactorings should be used |
|
Extract Method |
One of the most common refactorings
A code fragment that can be grouped together; turn the fragment into a method whose name explains its purpose |
|
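A minimal sketch of Extract Method in Python. The billing example is hypothetical (loosely modeled on Fowler's `printOwing` example, not taken from this deck): each fragment of a long function becomes a method whose name explains its purpose.

```python
def calculate_outstanding(orders):
    # Extracted fragment: the summation, named for what it means.
    return sum(orders)

def print_banner():
    # Extracted fragment: the banner-printing block.
    print("*" * 30)
    print("***** Customer owes *****")
    print("*" * 30)

def print_owing(name, orders):
    # After the refactoring, the original function reads as named steps.
    print_banner()
    outstanding = calculate_outstanding(orders)
    print(f"name: {name}")
    print(f"amount: {outstanding}")
```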
Duplicated Code |
Most significant smell in source code
the same code structure appears in more than one place |
|
Long Method |
the longer the method or function the more difficult it is to understand |
|
Long parameter list |
parameters are better than globals, but long parameter lists are hard to understand
Replace Parameter with Method: invoke a method to get the parameter instead of passing it |
|
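A small sketch of Replace Parameter with Method, using a hypothetical `Order` class (the class, fields, and discount rule are illustrative, not from the deck): the receiver computes `discount_level` itself instead of taking it as a parameter.

```python
class Order:
    def __init__(self, quantity, item_price):
        self.quantity = quantity
        self.item_price = item_price

    def discount_level(self):
        # The receiver can derive this value itself, so it need not
        # be passed in by every caller.
        return 2 if self.quantity > 100 else 1

    def final_price(self):
        base = self.quantity * self.item_price
        # Before the refactoring, this method took discount_level as
        # a parameter; now it invokes a method to get it.
        return base * (0.90 if self.discount_level() == 2 else 0.95)
```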
Switch Statements |
Object oriented code should have comparatively fewer switch statements than imperative code
Adding a new conditional case to a switch may require changing other switch statements
-move each leg of the conditional to an overriding method in a subclass, make the original abstract |
|
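The steps above are the Replace Conditional with Polymorphism refactoring; here is a hedged Python sketch with a hypothetical payroll example (the employee kinds and pay rules are illustrative only):

```python
from abc import ABC, abstractmethod

# Before: a type-code switch that every new employee kind must extend.
def pay_amount_switch(kind, salary, commission=0):
    if kind == "engineer":
        return salary
    elif kind == "salesman":
        return salary + commission
    raise ValueError(kind)

# After: the original method is made abstract, and each leg of the
# conditional becomes an overriding method in a subclass.
class Employee(ABC):
    def __init__(self, salary):
        self.salary = salary

    @abstractmethod
    def pay_amount(self): ...

class Engineer(Employee):
    def pay_amount(self):
        return self.salary

class Salesman(Employee):
    def __init__(self, salary, commission):
        super().__init__(salary)
        self.commission = commission

    def pay_amount(self):
        return self.salary + self.commission
```

Adding a new kind now means adding a subclass rather than editing every switch.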
Identifier Length |
Excessively long identifiers
excessively short ones
names of variables, methods, and so on |
|
Speculative Generality |
We'll probably need this some day
include generality in a program in case it is required in the future |
|
Software Quality Metrics |
Software metrics are measurable properties of software systems, their development and use
includes a wide range of different measures of properties of the software itself, the process of producing and maintaining it, its source code, design, and tests |
|
What are Metrics good for? |
Reliability and Quality Control -metrics help us to predict and control the quality of our software
Cost estimation and productivity improvement -metrics help predict the effort to produce or maintain software
Quality Improvement -Metrics help us to improve code quality and maintainability |
|
Kinds of Metrics |
3 basic kinds: product metrics, process metrics, project metrics |
|
Product Metrics |
are those that describe the internal and external characteristics of the product itself
ex: size, complexity, features, performance, reliability, quality levels |
|
Process metrics |
measure the process of software development and maintenance to improve it
ex: effectiveness of defect removal during development |
|
Project Metrics |
are those that describe the project characteristics
ex: # of developers, development cost, schedule, productivity |
|
Definition of Measurement |
is the process of empirical, objective assignment of numbers to entities, to characterize an attribute
an entity is an object or event; an attribute is a feature of the entity, such as the size of a program; objective means the measurement must be based on a well-defined rule
each entity is given a number, which tells you about its attribute |
|
To avoid mistakes in software measurement |
1. must specify both an entity and an attribute, not just one or the other 2. you must define the entity precisely 3. you must have a good intuitive understanding of the attribute before you propose a measure for it |
|
Direct Measurement |
are numbers that can be derived directly from the entity without other info
ex: length of source code, measured by # of lines |
|
Indirect Measurement |
are numbers that are derived by combining two or more direct measures to characterize an attribute
ex: programmer productivity = lines of code/person-months of effort |
|
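The deck's own example of an indirect measure can be sketched in one line of Python (a trivial illustration, not a recommended productivity measure):

```python
def productivity(lines_of_code, person_months):
    # Indirect measure: combines two direct measures, length (LOC)
    # and effort (person-months), into LOC per person-month.
    return lines_of_code / person_months
```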
Prediction Systems |
measurement for prediction requires prediction system
1. A mathematical model 2. A procedure for determining the model parameters 3. A procedure for interpreting results |
|
Probability of failure on demand (POFOD) |
the probability that a demand for service from a system will result in a system failure |
|
Rate of occurrence of failures (ROCOF) |
the probable number of failures likely to be observed in a certain time period |
|
Availability |
the ability of the system to deliver services when requested |
|
External Product Metrics |
are those we can apply only by observing the software product in its environment
includes many measures but particularly: -failure rate, availability rate, defect rate |
|
Definition of Reliability |
Reliability is the probability that a system will execute without failure in a given environment over a given period of time |
|
Definition of failure |
formal view: any deviation from specified behaviour
engineering view: any deviation from required, specified, or expected behaviour |
|
Error |
is a mistake or oversight on the part of the designer or programmer which could cause a fault |
|
Fault |
is a mistake in the software which in turn could cause a failure |
|
Failure |
occurs when a fault manifests itself as the program is run in certain situations |
|
Defect |
is usually defined as a fault or a failure
defects = faults + failures |
|
Defect Density |
is a standard reliability metric DD = # of defects found / system size
Size is normally in KLOC, 1000s of lines of code |
|
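The defect density formula from the card, as a small Python sketch (size taken in raw lines and converted to KLOC):

```python
def defect_density(defects_found, lines_of_code):
    # DD = # of defects found / system size, with size in KLOC
    # (thousands of lines of code).
    return defects_found / (lines_of_code / 1000)
```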
Internal Product Metrics |
most metrics in practice are internal, measures of the software code
easy to make and easy to automate, but not always clear which attributes of the program they characterize |
|
Code Metrics |
Software Size -the simplest and most enduring product metric is the size of the product, measured in LOC (lines of code), most often KLOC |
|
Better Size Measures than LOC |
Length: the physical size of the software
Functionality: the capabilities provided to the user by the software
Complexity: how complex is this software |
|
Complexity: how complex is this software? |
problem complexity (of the underlying problem), algorithmic complexity, structural complexity, cognitive complexity |
|
Halstead's "Software Science" Metrics |
Operators: if, return, this, + Operands: int, bool, void, x, y
Source code is a sequence of tokens, each of which is either an operator or operand |
|
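A sketch of the core Halstead measures in Python. The formulas (vocabulary, length, volume, difficulty) are the standard "Software Science" definitions, which go beyond what the card itself states:

```python
import math

def halstead(n1, n2, N1, N2):
    # n1, n2: counts of distinct operators / operands.
    # N1, N2: total occurrences of operators / operands.
    vocabulary = n1 + n2                        # n = n1 + n2
    length = N1 + N2                            # N = N1 + N2
    volume = length * math.log2(vocabulary)     # V = N * log2(n)
    difficulty = (n1 / 2) * (N2 / n2)           # D = (n1/2) * (N2/n2)
    return vocabulary, length, volume, difficulty
```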
McCabe's "Cyclomatic Complexity" metric |
mainly equations, which he said not to worry about
think flow graphs |
|
Flowgraph complexity of software |
max path length, number/interaction of cycles, max number of alternative paths |
|
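The flow-graph idea behind McCabe's metric can be sketched with the standard formula V(G) = E - N + 2P (edges, nodes, connected components), which the card alludes to but does not spell out:

```python
def cyclomatic_complexity(edges, nodes, components=1):
    # McCabe: V(G) = E - N + 2P for a program's control-flow graph.
    return edges - nodes + 2 * components
```

Equivalently, for a single structured routine, V(G) is the number of decision points plus one.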
COCOMO |
COnstructive COst MOdel
method of modelling software development, to yield estimates of effort and cost before undertaking the project
Software cost estimation |
|
Simple COCOMO effort prediction |
simplest model uses the estimate effort = a (size)^b
effort is measured in person-months; size is the predicted size of the software in KDSI |
|
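The effort equation as a Python sketch. The card gives only the general form effort = a (size)^b; the mode coefficients below are the published Basic COCOMO constants (Boehm, 1981), added here as context:

```python
# Basic COCOMO (a, b) coefficients per development mode (Boehm 1981).
MODES = {
    "organic":       (2.4, 1.05),
    "semi-detached": (3.0, 1.12),
    "embedded":      (3.6, 1.20),
}

def cocomo_effort(kdsi, mode="organic"):
    # Effort in person-months from predicted size in KDSI.
    a, b = MODES[mode]
    return a * kdsi ** b
```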
The downside of COCOMO |
the simple COCOMO model is claimed to give good order-of-magnitude estimates of required effort
but depends on a size estimate - which some say is just too hard to estimate
could be off exponentially |
|
Estimating time using COCOMO |
uses a similar model for time given effort
time = a ( effort ) ^b
time is in months, effort in person-months
again though, being exponentially off is possible |
|
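The schedule equation in the same style. The constants c = 2.5, d = 0.38 are the published Basic COCOMO organic-mode values (an assumption beyond the card, which only gives the general form):

```python
def cocomo_time(effort_pm, c=2.5, d=0.38):
    # Basic COCOMO schedule equation, organic-mode constants assumed:
    # development time in months = c * (effort in person-months)**d
    return c * effort_pm ** d
```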
Where does COCOMO come from? |
is based on empirical measurements of the actual effort and cost of past software projects as a function of software size
and the derivation of regression equations to explain them
analysis of the data indicated a logarithmic model of effort |
|
How can we predict size independently of code? |
predictions of effort, cost and time depending on code size have 2 inherent difficulties -prediction based on KDSI or KLOC just replaces one difficult prediction problem with another -KDSI is actually a measure of length, not size |
|
Function Point Analysis |
function points is currently the most popular and widely used size metric
computed from a detailed system specification using the equation FP = UFC x TCF, where UFC = unadjusted function count and TCF = technical complexity factor |
|
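A sketch of the FP = UFC x TCF computation. The per-type weights and the TCF formula (0.65 + 0.01 times the sum of the 14 factor ratings) are the standard "average" FPA values, which the card does not list, so treat them as added context:

```python
# Standard average weights for the five function types.
WEIGHTS = {"inputs": 4, "outputs": 5, "inquiries": 4,
           "internal_files": 10, "interface_files": 7}

def function_points(counts, factor_ratings):
    # counts: dict of function-type counts from the specification.
    # factor_ratings: the 14 technical complexity ratings, each 0-5.
    ufc = sum(WEIGHTS[k] * counts.get(k, 0) for k in WEIGHTS)
    tcf = 0.65 + 0.01 * sum(factor_ratings)
    return ufc * tcf
```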
Using Function Points |
are used extensively as a size metric in preference to KLOC, for example in equations for productivity, defect density and cost/effort prediction
Advantages: language independent, can be computed early in a project, does not have to be predicted
Disadvantages: unnecessarily complex, difficult to compute |
|
Function Points Example
Spell Checker Specification |
accepts as input a document file, a dictionary file and an optional user dictionary file
the checker lists all words in the document file not contained in either of the dictionary files
user can query the number of words processed and the number of spelling errors found at any stage |