116 Cards in this Set

  • Front
  • Back
  • 3rd side (hint)

1.1.2 Defined process

A defined process is a documented sequence of steps required to do a specific job. Processes are usually defined for jobs that are done repeatedly and need to be done in the same way each time that they are performed.


1.1.3 Benefits of defining a process

A defined process provides


• a clearly delineated framework for planning, tracking, and managing work


• a guide for doing the work correctly and completely, with the steps in the proper order


• an objective basis for measuring the work and tracking progress against goals, and for refining the process in future iterations


• a tool for planning and managing the quality of products produced


• agreed-upon, mutually-understood procedures for team members to use in coordinating their work to produce a common product


• a mechanism that enables team members to support each other throughout the course of the project


1.1.5 Processes and plans

Whereas processes are defined sets of steps for doing a task or project, plans include both the process steps and other elements required for a specific instantiation of that process, such as resources needed, roles of various project members, schedules, budget, goals and objectives, commitments, and identified risks.

Plans are derived from a process.

1.1.7 Enactable and operational processes

An enactable process defines precisely how to do a process, and includes all of the elements required for using the process. An enactable process consists of a process definition, required process inputs, and assigned agents, resources (e.g., people, hardware, time, money), and exit criteria. An operational process defines precisely what to do by listing the required tasks in enough detail to guide a knowledgeable professional through doing that task. Operational processes provide sufficiently detailed guidance so that teams and individuals can make detailed plans for doing a project and then use the process to guide and track their work. The PSP is an example of an enactable operational process.

Enactable = How


Operational = What

1.2.1 Process elements

Process elements are components of a process. The PSP contains four basic elements: scripts, forms, measures, and standards.

scripts, forms, measures, and standards

1.3.4 Precise and accurate measures

A precise measure is one that specifies a value to a suitable level of precision, as with a specific number of digits after the decimal point. An accurate measure is one that correctly measures the property being measured. Measures can be precise and accurate, precise but inaccurate, imprecise but accurate, or both imprecise and inaccurate. For process management purposes, measures should be as precise and accurate as possible.


1.4.1 Distributions

A distribution is a set of numerical values that are generated by some common process (e.g., actual sizes of parts developed, or size estimates).

Set of numbers made by the same process.

1.4.2 Mean

The mean is the arithmetic average value of a distribution. In the PSP, the mean is typically an estimate of the mean of the distribution, not the actual mean.

Average.

1.4.3 Variance

Variance is a measure of the spread or tightness of a distribution around the mean. In the PSP, the variance is typically an estimate of the variance of the distribution, rather than the actual variance.

Spread (or tightness) around the Mean.

1.4.4 Standard deviation

Standard deviation is the square root of the variance. It is often used to characterize the expected range of deviation between an estimate and an actual value. For example, one method in PSP uses standard deviation to categorize software size into relative size tables. Standard deviation is also used as part of the calculation of prediction intervals.

SQRT(variance)
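
As an illustration (not part of the BOK itself), here is a minimal Python sketch of 1.4.2–1.4.4 using only the standard library; the size values are hypothetical:

```python
import statistics

# Hypothetical added-and-modified sizes (LOC) from past programs.
sizes = [186, 95, 358, 132, 241]

mean = statistics.mean(sizes)     # 1.4.2: arithmetic average
var = statistics.variance(sizes)  # 1.4.3: sample variance (spread about the mean)
sd = statistics.stdev(sizes)      # 1.4.4: standard deviation = sqrt(variance)

print(f"mean={mean:.1f}, variance={var:.1f}, stdev={sd:.1f}")
```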

1.4.5 Correlation

Correlation is a measure of the degree to which two sets of data are related. In the PSP, correlation is measured between estimated and actual size and between estimated size and actual effort.

Correlation is how much two sets of data are related.

1.4.6 Significance of a correlation

Significance measures the probability that two data sets have a high degree of correlation by chance. Estimates of size and effort in the PSP are more reliable when based on historical data that have a high degree of correlation that is significant.

Probability correlation occurred by chance.
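
A sketch of how r and its t statistic might be computed (hypothetical data; statistics.correlation requires Python 3.10+). The t value is compared against the t-distribution with n − 2 degrees of freedom (see 1.4.13) to judge significance:

```python
import math
import statistics

# Hypothetical estimated vs. actual sizes from past programs.
estimated = [120, 80, 300, 150, 210, 95]
actual = [131, 92, 340, 144, 225, 110]

r = statistics.correlation(estimated, actual)  # Pearson r
n = len(estimated)

# t statistic for testing whether the correlation arose by chance.
t = r * math.sqrt(n - 2) / math.sqrt(1 - r * r)
print(f"r={r:.3f}, t={t:.2f} with {n - 2} degrees of freedom")
```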

1.4.7 Linear regression

Linear regression determines the line through the data that minimizes the variance of the data about that line. For example, when size and effort are linearly related, linear regression can be used to obtain effort estimates from size estimates.
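
A minimal least-squares sketch (hypothetical data) that finds the β0 and β1 minimizing the variance about the line:

```python
def linear_regression(x, y):
    """Least-squares fit of y = b0 + b1*x."""
    n = len(x)
    x_mean = sum(x) / n
    y_mean = sum(y) / n
    b1 = (sum((xi - x_mean) * (yi - y_mean) for xi, yi in zip(x, y))
          / sum((xi - x_mean) ** 2 for xi in x))
    b0 = y_mean - b1 * x_mean
    return b0, b1

# Hypothetical (size, effort-in-hours) pairs.
sizes = [120, 80, 300, 150, 210]
hours = [14, 9, 36, 17, 26]
b0, b1 = linear_regression(sizes, hours)
print(f"effort ≈ {b0:.2f} + {b1:.4f} × size")
```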


1.4.8 Prediction interval

The prediction interval provides the range around an estimate made with linear regression within which the actual value will fall with a certain probability. For example, in PSP, the 70% prediction interval for an estimate of size or time implies a 0.7 probability that the actual value of size or time will be within the range defined by the prediction interval.

If you do something 100 times, 70 times you will do it within these ranges.
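
As a sketch of how the range might be computed (this is the standard regression prediction-range formula, with SciPy assumed for the t quantile; x and y are historical data, and b0, b1 come from a fit like the one in 1.4.7):

```python
import math
from scipy import stats  # assumed available for t-distribution quantiles

def prediction_range(x, y, b0, b1, xk, p=0.70):
    """Range around the regression estimate at xk for probability p."""
    n = len(x)
    x_mean = sum(x) / n
    # Standard deviation of the residuals about the regression line.
    sigma = math.sqrt(sum((yi - b0 - b1 * xi) ** 2
                          for xi, yi in zip(x, y)) / (n - 2))
    t = stats.t.ppf((1 + p) / 2, n - 2)  # two-sided 70% -> 0.85 quantile
    return t * sigma * math.sqrt(
        1 + 1 / n + (xk - x_mean) ** 2
        / sum((xi - x_mean) ** 2 for xi in x))

# UPI = estimate + range; LPI = estimate - range (see 3.6.2).
```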

1.4.9 Multiple regression

Multiple regression is used in the PSP when estimations of size or time depend upon more than one variable. For example, if modifications to programs require much more time than additions, then "added" and "modified" can be separated into two variables for the regression calculation.


1.4.10 Standard normal distribution

The standard normal distribution is a normal distribution translated to have a mean of zero and standard deviation of one. The standard normal distribution is used in the PSP when constructing a size estimating table.

Size Estimating Table

1.4.11 Log-normal distribution

Many statistical operations assume that data values are normally distributed, but some PSP measures do not meet this requirement. For example, size values cannot be negative but can have small values that are close to zero. These distributions also typically have higher probability at large values than a normal distribution. When a log transformation is applied to data sets of this type, the resulting distribution may be normally distributed and, therefore, suitable for statistical analyses that assume normally distributed data. Statistical parameters for the normal distribution may be calculated and then transformed back to the original distribution. Size data in the PSP are generally log-normally distributed, so they must be transformed into a normal distribution for construction of a size estimating table.

Size Data
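
A sketch of the log-transformation round trip on hypothetical size data:

```python
import math
import statistics

# Hypothetical size data: positive and skewed toward small values.
sizes = [40, 55, 70, 90, 120, 200, 480]

logs = [math.log(s) for s in sizes]  # transform; roughly normal in log space
log_mean = statistics.mean(logs)
log_sd = statistics.stdev(logs)

# Parameters computed in log space are transformed back with exp().
print(f"log-space mean={log_mean:.3f} -> back-transformed {math.exp(log_mean):.1f}")
```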

1.4.12 Degrees of freedom

Degrees of freedom (df) measures the number of data points (n), as compared to the number of parameters (p) that are used to represent them. In linear regression, two parameters (β0 and β1) describe the line used to approximate the data. Since at least two points are needed to determine a line, the number of degrees of freedom is n-2. In general, the number of degrees of freedom is n-p.

n-p


where:


n = # of data points


p = # of parameters (β0 and β1)

1.4.13 The t-distribution

The t-distribution enables estimation of the variance of a normal distribution when the true parameters are not known, thus enabling calculation of statistical parameters based upon estimates from sample data. Like the normal distribution, it is bell-shaped, but it varies depending upon the number of points in the sample. For fewer data points, the distribution is short with fat tails. As the number of data points increases, the distribution becomes taller with smaller tails and approaches the normal distribution. In PSP, the t-distribution is important because it helps to determine the significance of a correlation and the prediction interval for regression, each of which is dependent upon the number of points in the sample data set.

enables estimation of the variance when the parameters are not known.

2.1.1 Process fidelity

Process fidelity (sometimes called process discipline or process compliance) is the degree to which individuals follow their own defined personal process. The objective of process fidelity is to improve work performance and produce higher-quality products. Unless the process is followed faithfully, process improvement is not possible.

Fidelity - from the Latin Fides meaning faithful. (Where we get the name Fido)

2.1.3 Process fidelity and product quality

The quality of the product is governed by the quality of the process used to develop it. It is not enough to define a high-quality process; individuals also must follow that process when developing the product. Creating and consistently using a high-quality process will result in the production of high-quality products. Product quality, in turn, has a direct effect on an individual’s ability to meet the schedule and budgetary objectives for the product.

You have to be faithful to a process in order to get quality.

2.1.4 Process fidelity and planning

When a project is planned in accordance with effective and efficient processes and estimates are made based on solid data, the resultant delivery commitment date probably will be accurate. When projects are conducted according to the details contained in an accurate plan, they are delivered on schedule consistently, as long as the work is completed using the defined processes and adjustments are made to the plan to reflect changes in the project conditions. If the defined process is not followed, the plan no longer relates to what is being done, and it becomes impossible to track the progress against the plan accurately. Precise project tracking requires accurate data.

Follow the process and your plan will be less bad.

2.1.5 Process fidelity and performance improvement

A well-defined and measured process that is followed faithfully enables individuals to select the methods that best suit their particular abilities and support the tasks that they need to perform. Individuals must personally use well-defined and measured processes in order to consistently improve their performance.

Use the same (good) process faithfully and you will see improvement.

2.3.1 Basic PSP measures

The basic PSP measures are time, size, quality (defects), and schedule data.

4 measures.

2.3.2 Time measures

Time is measured in minutes and is tracked while doing the work because time recorded later is more likely to be inaccurate. Basic components are start date, start time, end date, end time, interrupt time, off-task time, and delta time. The time in phase is the planned or actual time spent in a particular phase of the process.


• Interrupt time is not included in the time measurement for a task or process phase. If there is an interruption during the work, that time is subtracted from the time measurement.


• Off-task time is the time spent doing things other than planned project tasks; generally, it is not measured or tracked, since it does not contribute to meeting the stated schedule goals. Off-task time includes time spent in management and administrative meetings, attending training classes, reading email, or any of the other essential activities that a team member must do. Off-task time for a given task or work period is calculated by subtracting the total delta time from the total elapsed time spent on a task.


• Delta time is the actual time that it took to complete a task or process phase. It is calculated as end time minus start time (less any interrupt time).




Time data are most accurate when collected using an automated tool; the tool should be able to record start and stop times and dates, calculate the elapsed time, and subtract interruption time from elapsed time to calculate the delta time. Each entry for time data should also include the names of the process phase/step, the product and element being worked on, the project task being performed, and the person doing the work.

7 things are required. (Assume you need to include Dates.)




Don't forget OTT.
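
A small sketch of the delta-time arithmetic with hypothetical timestamps:

```python
from datetime import datetime

# Hypothetical time-log entry for one task.
start = datetime(2024, 3, 5, 9, 0)
end = datetime(2024, 3, 5, 11, 45)
interrupt_minutes = 20  # e.g., a phone call; not counted toward the task

elapsed = (end - start).total_seconds() / 60
delta = elapsed - interrupt_minutes  # delta time = end - start - interrupts
print(f"elapsed={elapsed:.0f} min, delta={delta:.0f} min")
```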

2.3.3 Size measures

A size measure is used to measure how big a work product is. Size measures are selected so that they are appropriate to the work product, for example, using pages (vs. words or letters) as a measure for text pages, or taking programming tasks and language into account for software components (see Knowledge Areas 3.1 and 3.2). Size measure data should be collected in real time to the extent possible because data collected after the fact is more likely to be inaccurate. Size measures apply not only to the final deliverable products, but also to the component parts and interim versions of the product.




Size data are most accurate when collected using an automated tool that will record both the planned and actual sizes for the various product parts or components, using the size accounting measure categories described in 3.1.6. The tool must calculate totals for each category of size data or otherwise ensure the self-consistency of the data being collected.


2.3.4 Quality measures (defect data)

In PSP, product quality is measured in terms of defects. A defect is anything in the program or software product that must be changed for it to be properly designed, developed, maintained, enhanced, or used. Defects can be in the code, designs, requirements, specifications, or other documentation. Defects should be recorded as soon as they are discovered, preferably using an automated tool. The following data should be collected for every defect injected: defect identifier number, date when the defect was discovered, phase when the defect was injected, phase when the defect was removed, defect type, time to find and fix the defect, and a brief description of the defect. A new defect may be injected while fixing another defect. In this case, the second defect is recorded separately, with a reference (called the fix reference) back to the original defect. The time required to fix each defect includes the total time required to find and fix the problem, and validate the correction. Fix time is recorded separately for each defect.

7 things are required.
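
One way to sketch the record as a data structure (the field names are illustrative, not prescribed by the BOK):

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class DefectRecord:
    """The items recorded for every defect, per 2.3.4."""
    defect_id: int
    date_found: str          # date the defect was discovered
    phase_injected: str      # e.g., "design", "code"
    phase_removed: str       # e.g., "code review", "unit test"
    defect_type: str         # from the defect type standard (2.3.5)
    fix_time_minutes: float  # total time to find, fix, and validate
    description: str
    fix_ref: Optional[int] = None  # original defect, if injected during a fix

d = DefectRecord(12, "2024-03-05", "design", "code review",
                 "Function", 18.0, "off-by-one in range check")
```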

2.3.5 Defect type standard

The defect type standard defines categories into which similar defects can be placed. Consistent assignment of similar defects to the same defect type category is essential for process analysis.

Environment, Function, Syntax, etc.

2.3.6 Schedule measures

Schedule measures are used to plan when the project should be complete and to track progress against the plan. Schedule data are most accurate when collected using an automated tool that will record planned task names and descriptions, phases in which the work is to be done, product/element involved, applicable committed dates for completing tasks, and the dates on which tasks were completed. Schedule data should be collected in real time to the extent possible, particularly information regarding task completion dates, since this is the primary means of obtaining earned value (EV) credit that allows individuals to track their progress against the planned schedule (see 4.5).

Uses EV.

2.3.7 Derived measures

The PSP provides a set of performance and quality measures to help individuals implement and improve their personal processes. Specific derived measures are discussed in later knowledge areas.


2.5.8 Monitor performance results

To determine if implemented process improvements have been effective, PSP practitioners should periodically repeat the steps for baselining their work processes and compare their baseline performance to previously established improvement goals. When so doing, practitioners should be careful to avoid the complications of bolstering and clutching.


• Bolstering is the selective recall of only those results that reinforce an opinion or belief, usually manifested by forgetting failures and remembering only successes. Use of all PSP data from all projects should preclude bolstering.


• Clutching is the tendency to perform badly when under pressure or when a good outcome is especially critical, thereby negating successful performance on past projects when using the same processes. By following established processes and using data (rather than instinct) as a basis for instantiating process changes, clutching can be minimized or avoided.

Bolstering = Selective


Clutching = getting the Yips (like athletes)

3.1.2 Types of measures

Measures may be categorized as


• absolute or relative


• explicit or derived


• objective or subjective


• dynamic or static


• predictive or explanatory


3.1.3 Criteria for size measures

Useful size measures must be


• related to development effort


− Does the size of the product statistically correlate with development effort?


− Does time spent on development of the measured part of the product represent a significant part of the project’s work?


• precisely defined


• directly countable


• suitable for early planning

Related to Effort,


Defined,


Countable,


Early Planning

3.1.4 Counting standards

Counting standards provide guidance that is


• precise about what to count


• application/language specific


• invariant, providing the same outcome each time the standard is applied

What do you need to get the same count every time.

3.1.6 Size accounting I - PARTS/DEFINITIONS





PSP size accounting methods for planned, actual, and to-date size define the measures for


• base (B): the unmodified program to which subsequent enhancements are added


• added (A): code that is added to the base code


• modified (M): the part of the base code that is changed


• deleted (D): the part of the base code that is subsequently removed


• reused (R): an existing part or item that is copied unchanged from a source other than the base


• added and modified (A&M): all added and modified code


• new reusable (NR): a part or item that is developed with the intention of later reusing that part or item


• total (T): the size of the entire program


3.1.6 Size accounting II - FORMULAS

Base (B)

- Base Added (BA)

- Modified (M)

- Deleted (D)

Reused (R)

Parts Added (PA)

- New Reusable (NR)

Added & Modified (A&M) = A + M

Estimated Proxy Size (E) = BA + PA + M

Projected Size (P) = β0 + β1 × E

Total Size (T) = R + B + BA − D + PA
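
A sketch of the formulas as code (hypothetical LOC values; β0 and β1 default to 0 and 1 when no regression parameters are available):

```python
def size_accounting(base, base_added, modified, deleted,
                    parts_added, reused, b0=0.0, b1=1.0):
    """Size accounting formulas from 3.1.6 (all sizes in the same measure)."""
    e = base_added + parts_added + modified                 # E = BA + PA + M
    p = b0 + b1 * e                                         # P = b0 + b1 * E
    t = reused + base + base_added - deleted + parts_added  # T
    return e, p, t

E, P, T = size_accounting(base=1000, base_added=150, modified=60,
                          deleted=40, parts_added=220, reused=300)
print(E, P, T)  # 430 430.0 1630
```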




3.4.1 Using proxies instead of a size measure

Most size measures that meet the required criteria are not available during planning. A proxy is a stand-in measure that relates product size to planned function and provides a means in the planning phase for judging (and therefore, of estimating) a product’s likely size.


3.4.2 Criteria for choosing a proxy

The criteria for a good proxy are as follows.


• The proxy size measure should closely relate to the effort required to develop the product and correlate with development costs.


• The proxy content of a product should be directly countable.


• The proxy should be easy to visualize at the beginning of a project.


• The proxy should be customizable to the needs of each project and individual.


• The proxy should be sensitive to implementation variations that affect cost or effort.

Content must be countable


Easy to visualize


Customizable


Sensitive to variations

3.5.1 What is PROBE?

PROBE is a procedure for estimating size and effort. The overall procedure is as follows.


1. Develop the conceptual design (see 3.5.2).


2. Identify and size the proxies.


3. Estimate other elements.


4. Estimate program size. (Select the appropriate PROBE method, as described in 3.5.5.)


5. Calculate prediction intervals (for methods A and B only) (see 3.5.8).

5 steps

3.5.2 Conceptual design

The conceptual design is a high-level postulation of the product elements and their functions. The conceptual design subdivides a desired product into its major parts. The conceptual design is used solely as a basis for producing size and effort estimates (see 4.2.4) and may not necessarily reflect how the actual product is designed and built.

High-Level

3.5.5 Select the appropriate PROBE method

1. Check to see if method A can be used by ensuring that the data meet the criteria below, and assessing correlation, β0, and β1.


− There are three or more data points (estimated E and actual A&M) that correlate.


− The absolute value of β0 is less than 25% of the expected size of the new program.


− β1 is between 0.5 and 2.


If PROBE method A can be used, then calculate the projected size as y = β0 + β1(E), where


− y = projected added and modified size


− E = estimated proxy size


− β0 and β1 are calculated using estimated proxy size and actual added and modified size




2. If method A cannot be used, check to see if method B can be used.


− There are three or more data points (plan A&M and actual A&M) that correlate.


− The absolute value of β0 is less than 25% of the expected size of the new program.


− β1 is between 0.5 and 2.


If PROBE method B can be used, then calculate the projected size as y = β0 + β1(E), where


− y = projected added and modified size


− E = estimated proxy size


− β0 and β1 are calculated using plan added and modified size, and actual added and modified size




3. If methods A and B cannot be used and there are historical data, use method C. Calculate projected size as y = β0 + β1(E), where


− y = projected added and modified size


− E = estimated proxy size


− β0 = 0


− β1 = (actual total added & modified size to date) ÷ (plan total added & modified size to date)




4. If there are no historical data, use method D, which is to use engineering judgment to estimate added and modified size.

Methods A & B:


|β0| < 25% of expected size


0.5 < β1 < 2
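
A sketch of the selection logic; the criteria are from 3.5.5, while the function and parameter names are illustrative. The (n, β0, β1) tuples are assumed to come from regressions whose data also show an adequate, significant correlation:

```python
def meets_criteria(n_points, beta0, beta1, expected_size):
    """Data-quality criteria shared by PROBE methods A and B."""
    return (n_points >= 3
            and abs(beta0) < 0.25 * expected_size
            and 0.5 <= beta1 <= 2.0)

def select_method(a_data, b_data, expected_size, have_history):
    # a_data: (n, beta0, beta1) from estimated proxy size E vs. actual A&M
    # b_data: (n, beta0, beta1) from plan A&M vs. actual A&M; None if absent
    for name, d in (("A", a_data), ("B", b_data)):
        if d and meets_criteria(*d, expected_size):
            return name  # projected size y = beta0 + beta1 * E
    if have_history:
        return "C"  # beta0 = 0; beta1 = actual-to-date / plan-to-date A&M
    return "D"      # engineering judgment

print(select_method((5, 20.0, 1.1), None, expected_size=400, have_history=True))
```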

3.5.8 Prediction interval definition

The prediction interval is used in PROBE methods A and B. A prediction interval is


• the range within which the actual size is likely to fall 70% of the time


• not a forecast


• applicable only if the estimate behaves like historical data

If you do something 100 times, 70 times you will do it within these ranges.

3.6.1 Combine independent estimates

Use this method to combine independent estimates.


1. Make separate linear regression projections.


2. Add projected sizes.


3. Add the squares of the individual ranges and calculate square root to calculate prediction interval.
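
A sketch of steps 2 and 3 with hypothetical projections and 70% ranges:

```python
import math

# Hypothetical independent projections and their 70% prediction ranges.
projections = [430.0, 210.0, 95.0]
ranges = [120.0, 70.0, 40.0]

combined_size = sum(projections)
combined_range = math.sqrt(sum(r * r for r in ranges))  # root-sum-of-squares
print(f"{combined_size:.0f} ± {combined_range:.0f} (70%)")
```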


3.6.2 Use multiple proxies

Use multiple regression when there is (a) correlation between development time and each proxy, and (b) the proxies do not have separate size/hour data.


1. Identify and size each proxy.


2. Use multiple regression to project program size.


y = β0 + β1x1 + β2x2 + … + βmxm


3. Calculate prediction intervals.


UPI = projected size + range (70%)


LPI = projected size – range (70%)
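
A sketch of step 2 using NumPy's least-squares solver on hypothetical proxy data (two proxies, x1 and x2):

```python
import numpy as np

# Rows: past programs. Columns: sizes of proxy 1 and proxy 2 (hypothetical).
X = np.array([[12.0, 3.0],
              [20.0, 5.0],
              [ 7.0, 9.0],
              [15.0, 2.0],
              [25.0, 6.0]])
y = np.array([18.0, 31.0, 22.0, 21.0, 37.0])  # actual development hours

# Prepend a column of ones so beta0 is fitted along with beta1..betam.
A = np.column_stack([np.ones(len(X)), X])
betas, *_ = np.linalg.lstsq(A, y, rcond=None)

new_program = np.array([1.0, 10.0, 4.0])  # leading 1 for the intercept term
print(float(new_program @ betas))          # projected development time
```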


3.7.1 Clustered or grouped data

For data that are clustered or grouped, size estimates may not be very useful for estimating effort. However, the size estimate still may be useful in estimating average effort.


3.7.2 Extreme data points

Extreme data points can lead to erroneous β0 and β1 values, even with high correlation. Estimates made for points outside the range of the data used to calculate β0 and β1 are likely to be seriously in error.

Lead to erroneous β0 and β1 values.

3.7.3 Unprecedented products

Resist making an estimate until the completion of a feasibility study and development of prototypes. Do not confuse making an estimate with guessing.

Don't make estimates until you understand a project.

3.7.4 Data range

Estimates made for points outside the range of the data used to calculate β0 and β1 are likely to be seriously in error.

Do not estimate outside the range of the data used to calculate β0 and β1.

4.2.6 Select the appropriate PROBE method for resource estimation

1. Check to see if method A can be used.


− There are three or more data points (estimated E and actual development time) that correlate.


− The absolute value of β0 is near 0.


− β1 is within 50% of 1/(historical productivity).


2. If method A cannot be used, check to see if method B can be used.


− There are three or more data points (plan A&M and actual development time) that correlate.


− The absolute value of β0 is near 0.


− β1 is within 50% of 1/(historical productivity).


3. If method B cannot be used and there are historical data, use method C.


4. If no historical data, use method D.

A&B:


|β0| close to 0


β1 within 50% of 1/Historic Productivity

4.2.13 Cost performance index (CPI)

Cost Performance Index (CPI) =


planned total development time to date ÷


actual total development time to date



Planned Tot. Dev. Time to Date


------------------------------------------------


Actual Tot Dev. Time to Date

4.3.2 Productivity

Productivity is the ratio of a product’s size to the time expended to develop that product, generally measured as size measure per hour.

size measure/hour
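
Both measures reduce to one-line ratios; a sketch with hypothetical numbers:

```python
def cpi(planned_total_time, actual_total_time):
    """CPI = planned total development time to date / actual (4.2.13)."""
    return planned_total_time / actual_total_time

def productivity(size, hours):
    """Productivity = size per hour (4.3.2), e.g., LOC/hour."""
    return size / hours

print(f"CPI = {cpi(2400, 2650):.2f}")           # < 1.0: over the planned time
print(f"{productivity(430, 22):.1f} LOC/hour")  # hypothetical size and hours
```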

4.4.1 Project plan characteristics

A project plan must be:


• accessible: easy to locate and reference


• clear: straightforward and easy to read


• specific: responsibilities and costs identified


• precise: appropriate level of precision


• accurate: based on relevant data and an unbiased estimating process

Accessible,


Clear,


Specific,


Precise,


Accurate

4.4.2 Period plans and project plans

A period plan covers a specific unit of time, such as a week or month. A project plan describes all efforts and costs for developing a product.

Period Plan


Project Plan

4.4.3 Task hours and working hours

Task hours is a measure of the time spent working on defined project tasks. Working hours include task hours plus non-task activities such as time spent reading and answering e-mail, attending meetings, etc.

Working Hours = Task Hours + Non-Task Hours




Non-Task Hours are reading, emailing, etc.

4.4.5 Schedule plan requirements

Required elements for producing a schedule plan are:


• a calendar of available time


• the order in which the tasks are to be completed


• estimated effort for each task

Requirements:


Calendar of available time


Task order


Est. Effort for each task

4.5.1 Planned value (PV)

The planned value of a task is equal to its planned time expressed as a percentage of the total planned time for the project.


For example, a 5-hour task in a 50-hour project would have a PV of 10.

a %age

4.5.2 Earned value (EV)

Earned value is a method used for tracking the actual progress of completed work against the overall project plan. As each task is completed, its PV is added to the cumulative EV for the project. Partially completed tasks do not contribute to the EV total.

Cumulative EV is made up of PV
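
A sketch of 4.5.1 and 4.5.2 together on a hypothetical task list; note that unfinished tasks earn nothing:

```python
# Hypothetical plan: (task, planned hours, completed?)
tasks = [("design", 5.0, True), ("design review", 1.5, True),
         ("code", 6.0, True), ("code review", 1.5, False),
         ("unit test", 4.0, False)]

total_planned = sum(h for _, h, _ in tasks)

# PV of a task: its planned time as a percentage of the total plan (4.5.1).
pv = {name: 100 * h / total_planned for name, h, _ in tasks}

# EV accrues only when a task is fully complete; partial work earns nothing.
ev = sum(pv[name] for name, _, done in tasks if done)
print(f"cumulative EV = {ev:.1f}%")  # 69.4% of the planned work is done
```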

4.5.3 Using EV measures

When using EV, keep these limitations in mind:


• The EV method assumes that the rate of task completion in the future will be roughly the same as it was in the past. If this is not the case, the EV projections will not be accurate.


• The EV method measures progress relative to the plan. If the plan is inaccurate, the EV projections are also likely to be inaccurate.


• The EV method assumes that the project’s resources are uniform. If the staffing level increases, the EV projections will be pessimistic, and if the staffing is cut, the projections will be optimistic.


4.5.4 EV as a measure of actual progress relative to planned progress

At any time during a project, the sum of value earned for completed tasks represents the percentage of work that has been completed. A comparison of the cumulative EV to the cumulative PV at a given time indicates progress of the work against the planned schedule:


• PV is the same as EV: work is on schedule


• EV is larger than PV: work is ahead of schedule


• PV is larger than EV: work is behind schedule


4.5.10 Estimating the project completion date

The estimated project completion date can be calculated by computing the average EV per week to date and then using the average value for EV per week to compute the time necessary to complete the remaining planned value. This assumes that the project continues to earn the average EV rate as before.

Calculate using average EV per week.
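
The arithmetic, sketched with hypothetical tracking numbers:

```python
# Hypothetical status after 4 weeks of a plan totaling 100 PV.
cumulative_ev = 28.0
weeks_elapsed = 4

avg_ev_per_week = cumulative_ev / weeks_elapsed  # 7.0 PV/week
remaining_weeks = (100 - cumulative_ev) / avg_ev_per_week
print(f"≈ {remaining_weeks:.1f} more weeks at the current rate")  # ≈ 10.3
```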

4.6.2 When to adjust a plan

A plan should reflect the way that the individual is actually working. If it does not, the plan should be revised. When work methods or processes are revised, the entire plan should be re-examined.

When it doesn't reflect the individual's work.


Re-evaluate when the process is changed.

5.1.2 The economics of quality

• It costs less to find and fix defects earlier in a process, rather than later.


• The longer a defect remains in a product, the greater the cost to remove it.


• Testing is an inefficient and ineffective way to remove defects.


• It is more efficient to prevent defects than to find and fix them.


• The right way is always the fastest and cheapest way to produce a high-quality outcome.


• Reviews are fundamentally more efficient than testing for finding and fixing defects.


5.1.4 Process quality

A quality process must meet the needs of its users to produce quality products efficiently.


A quality process must:


• produce a quality product consistently


• be usable and efficient


• be easy to learn and adapt to new circumstances


5.2.4 Yield

Yield is the percentage of defects in the program that are removed in a particular phase or group of phases. A yield measure can be calculated for any individual phase or group of phases.


5.2.5 Phase yield

Phase yield is the percentage of defects removed during a phase.


5.2.6 Process yield

Process yield is the percentage of defects removed prior to entering the compile phase (or before entering unit test if there is no compile phase).


5.2.7 Review yield

Review yield is the percentage of defects in the program found during the review.


5.2.8 Percent appraisal cost of quality (COQ)

Percent appraisal COQ is the percentage of development time spent in design and code review.


5.2.9 Percent failure COQ

Percent failure COQ is the percentage of development time spent in compile and test.


5.2.10 Cost of quality (COQ)

Cost of quality is the percentage of time spent performing appraisal and failure tasks. COQ defines quality issues in management and business terms. The principal COQ measures are as follows:


Performance costs: the costs of doing the job in the first place


Appraisal costs: the costs of examining a product to determine its quality


Failure costs: the costs of fixing a defective product, including all the attendant costs of the product’s failure


Prevention costs: the costs of devising and implementing measures to prevent failures


5.2.11 COQ appraisal to failure ratio (COQ A/FR)

COQ A/FR is the ratio of time spent in appraisal tasks to time spent in failure tasks.


5.2.12 Defect density

Defect density is the number of defects found per size measure. It is normalized for product size to enable comparison of various products and the processes that produced them.


5.2.13 Process quality index (PQI)

The process quality index (PQI) is a derived measure that characterizes the quality of a software development process.


The PQI value is the product of five quality profile component values:


1. Design quality is expressed as the ratio of design time to coding time.


2. Design review quality is the ratio of design review time to design time.


3. Code review quality is the ratio of code review time to coding time.


4. Code quality is the ratio of compile defects to a size measure.


5. Program quality is the ratio of unit test defects to a size measure.




The PQI components are normalized to [0, 1] such that zero represents poor practice and one represents desired practice. The ratios are plotted on the axes of a pentagon with scale [0, 1]. The resulting polygon can be compared with the containing pentagon to determine the quality of the process. Recommended values for each PQI component are as follows:


• Design quality is the minimum of 1.0 or the time spent in detailed design divided by the time spent in coding.


• Design-review quality is the minimum of 1.0 or 2 times the time spent in detailed design review divided by the time spent in detailed design.


• Code-review quality is the minimum of 1.0 or 2 times the time spent in code review divided by the time spent in coding.


• Code quality is the minimum of 1.0 or 20/(10+Defects/KLOC in compile).


• Program quality is the minimum of 1.0 or 10/(5+Defects/KLOC in unit testing).


5.2.14 Calculating values for the PQI components

To calculate and interpret PQI values, do the following:


• Multiply the five PQI element measures together to give a number between 0.0 and 1.0.


• Values below 0.5 indicate that the product is likely to be of poor quality. The lower the value, the poorer the quality is likely to be.
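
A sketch of the calculation using the recommended component formulas above (hypothetical phase times in minutes and defect densities):

```python
def pqi(design_min, design_review_min, code_min, code_review_min,
        compile_defects_per_kloc, unit_test_defects_per_kloc):
    """PQI: product of the five normalized quality-profile components."""
    design_quality = min(1.0, design_min / code_min)
    design_review_quality = min(1.0, 2 * design_review_min / design_min)
    code_review_quality = min(1.0, 2 * code_review_min / code_min)
    code_quality = min(1.0, 20 / (10 + compile_defects_per_kloc))
    program_quality = min(1.0, 10 / (5 + unit_test_defects_per_kloc))
    return (design_quality * design_review_quality * code_review_quality
            * code_quality * program_quality)

# Hypothetical data: skimpy design and review time, buggy compile/test.
print(round(pqi(80, 30, 110, 40, 15.0, 8.0), 2))  # ~0.24 -> likely poor quality
```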


5.2.15 Composite PQI

A composite PQI measure represents the overall process quality for a project that produced multiple programs. This composite PQI can be calculated in three ways, each of which has advantages and disadvantages:


1. The PQI product measure is calculated by taking the product of all of the PQI for the component programs.


a. Advantage: This measure will quickly indicate that a product has components with low PQI values.


b. Disadvantage: For large systems, the values are likely to be too low to be useful in managing system quality.


2. The overall PQI measure is determined by using the overall values for all of the programs for calculating the quality profile component values. For example, review time would be the sum of the review times for all the program elements and the unit test defects would be the total defect density for all of the combined programs.


a. Advantage: This measure has the advantage of being easy to calculate and providing a general indicator of overall product quality.


b. Disadvantage: A few poor quality components will be masked by the larger number of high quality components.


3. The minimum PQI measure is calculated by using the PQI value for the program component that had the minimum PQI value.


a. Advantage: This measure has the advantage of rapidly pinpointing any poor-quality component.


b. Disadvantage: The measure does not indicate anything about the quality of the overall program.


Since no single composite measure is best for all purposes, composite PQI measures should be used with care and their meaning thoroughly explained.


5.2.16 Phase defect removal rate

For each phase of a process, the phase defect removal rate is the number of defects found per hour in that phase.


5.2.17 Review rate

Review rate refers to the size of product reviewed per hour. This rate is calculated for both review and inspection phases (see 5.3.3).


5.2.18 Defect-removal leverage (DRL)

Defect-removal leverage is a measure of the relative effectiveness of defect removal for any two process phases. For example, the DRL for design review relative to unit test would be defined as “DRL(DR/UT) = defects per hour in design review divided by defects per hour in unit test.”
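
A sketch combining 5.2.16 and 5.2.18 with hypothetical phase data:

```python
def removal_rate(defects_found, phase_hours):
    """Phase defect removal rate: defects found per hour (5.2.16)."""
    return defects_found / phase_hours

# DRL(DR/UT): design review rate relative to unit test rate.
dr_rate = removal_rate(8, 2.0)  # hypothetical: 4.0 defects/hour in review
ut_rate = removal_rate(3, 6.0)  # hypothetical: 0.5 defects/hour in unit test
print(f"DRL(DR/UT) = {dr_rate / ut_rate:.1f}")  # 8.0: review finds 8x faster
```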


5.3.1 Personal reviews

A personal review is conducted by the individual who examines his or her own product with the goal of finding and fixing as many defects as possible. Personal reviews should precede any other activity that uses the product (coding, compiling, testing, inspecting, etc.).


5.3.2 Personal review principles

The following principles should be followed when individuals examine their own products during personal reviews.


• Find and fix all defects in the work.


• Use a checklist derived from personal defect data.


• Follow a structured review process.


• Follow sound review practices.


• Measure the reviews.


• Use data to improve the reviews.


• Produce reviewable products.


• Use data to identify where and why defects were injected, and then to change the process to prevent similar defects in the future.


5.3.3 Inspections

An inspection is a structured team review of a component or product. The object of an inspection is to identify problems in the product. Inspections are conducted according to a defined procedure with attendees filling established roles. In a properly run inspection, the participants do not discuss the problems identified, nor do they attempt to solve those problems.


5.3.4 Walkthroughs

A walkthrough is less formal than an inspection. A product, such as a design or a code segment, is presented to an audience that raises issues and asks questions.


5.3.6 Conducting effective personal reviews

For effective and efficient reviews, these practices should be followed.


• Take a break between working and reviewing.


• Review products in hard copy form, rather than electronically.


• Check off each item as it is completed.


• Update review checklists periodically to respond to changes in personal data.


• Build and use a different checklist for each design method, programming language, or product type.


• Thoroughly analyze and verify every non-trivial design construct (see 6.6).


6.1.3 The role of design in the overall software development process

Software design links the requirements for a system to its implementation. By appropriate use of abstraction, it manages complexity and ensures that the system components work together to produce the desired results.


6.1.7 Design specification structure

The elements of a complete design can be specified using the following specification structure.


• External-static (inheritance, class structure)


• External-dynamic (services, messages)


• Internal-static (attributes, program structure, logic)


• Internal-dynamic (state machine)


6.3.1 Design precision

Designs should be concise and unambiguous. The design should contain sufficient detail for all intended uses of the design documentation.


6.3.2 Design completeness

• All relevant details should be included, without any unnecessary redundancy.


• The design documentation should not be limited to individual component designs, but should also document system-wide or emergent concerns.


• It is helpful to include the rationale for design decisions; it is often helpful to document alternatives that were not chosen.


6.3.3 Design usability

The design must be accessible to and understandable by all its users.


6.4.1 The need for software design documentation

Software designs must be documented, along with the related requirements, constraints, and rationale, because designs for all but the simplest programs are needed by people who will be involved with the eventual products. Examples include the following.


• The individual: to facilitate program implementation, verification, and test


• Team members: to enable design inspections and design coordination


• Testers: to enable test planning


• Maintainers: to facilitate product enhancement and repair


• Documenters and users: to enable others to understand what the product does and how it works


6.4.2 Overall design documentation concerns

To ensure that the design documentation continues to represent the product, the design documentation must be self-consistent, and changes must be managed and properly documented.


6.4.3 Common types of design documentation

The individual produces design documentation covering


• program context


• program structure


• related components


• external variables, calls, references


• detailed program logic description


6.4.4 Design visibility

The design documentation provides the visible representation of a design used for review and verification. The design is recorded using an appropriate design notation (see 6.5.1).


6.4.5 Design documentation practice

A useful practice when implementing a design is to start with the full program’s design and, as each design section is implemented, encapsulate that design segment in a comment immediately before the implementation.


6.5.2 PSP design templates

The PSP design templates represent the static structure and the dynamic behavior of a software system, capturing both the externally visible characteristics and the internal details (see 6.1.6). A complete PSP design should contain the following four categories of design elements.


• External-dynamic: Use the operational specification template (OST) and the functional specification template (FST) to record this information (see 6.5.3 and 6.5.4).


• External-static: Use the functional specification template (FST) to record this information (see 6.5.4).


• Internal-dynamic: Use the state specification template (SST) to record this information (see 6.5.5).


• Internal-static: Use the logic specification template (LST) to record this information (see 6.5.6).


6.5.3 Operational specification template (OST)

The OST documents the external-dynamic characteristics of a part of a software system. It describes one or more scenarios involving the part and the actors (e.g., users or other systems) that interact with it. Each OST has a unique ID, a user objective, and a scenario objective. For each step in a scenario, the OST lists the following elements.


• Source (system or specified actor)


• Step number


• Action


• Comments


6.5.4 Functional specification template (FST)

The FST documents a part (e.g., a class) of a software system, its external-static relationships, and its externally visible attributes. The FST also documents the external-dynamic characteristics of a part. It describes actions (e.g., class methods) that the part makes available for external use; this description includes the defined interface for each action, including arguments, constraints, and returned results.


6.5.5 State specification template (SST)

The SST documents the internal-dynamic behavior of a software system and its parts (e.g., classes) when that behavior is represented as a set of states, transitions between states, and actions associated with the transitions. The SST can be supplemented by a separate state diagram that graphically depicts the states, transition conditions, and actions. An SST contains:


• state names and descriptions


• functions and internal parameters used in transition conditions


• details of state transitions


− current state


− next state


− transition condition (predicate)


− action performed when transition occurs


6.5.6 Logic specification template (LST)

The LST documents the internal-static characteristics of a part of a software system. It describes the internal logic of the part, using pseudocode to clearly and concisely explain its operation. Note that the LST information may be embedded as comments in the program source code, rather than using a separate form, as long as it is clear and sufficiently detailed.


6.6.2 Verification methods

Software verification methods include:


• execution table verification


• trace-table verification


• state-machine verification


• loop verification


• other analytical verification methods


6.6.4 Using execution table verification

• Identify loops and complex routines for verification.


• Choose order of analysis (e.g., top down or bottom up).


• Construct an execution table with program steps and relevant variable values, using multiple copies for loop iterations.


• Verify execution results against the requirements specification.


6.6.5 Using trace-table verification

• Identify representative logical cases for analysis.




• For each logical case, verify using an execution table.


6.6.6 Execution table verification vs. trace-table verification

Differentiate between execution table and trace-table verification and know when to use each one.


6.6.8 Using loop verification

Verify loop initiation, incrementing, and termination, using the verification methods appropriate to the type of loop.


• For-loop verification


• While-loop verification


• Repeat-until verification


7.1 Defining a Customized Personal Process

A defined process should not be regarded as “one size fits all.” This knowledge area addresses situations in which processes must be tailored to meet changes in needed outputs or developed from the ground up to address new situations or environments.


7.2 Process Evolution

A process cannot be evolved to fit changing needs or situations until the current process accurately represents what is actually done when using that process. This knowledge area addresses the activities involved with incrementally evolving an initial process into one that is an accurate and complete description of the actual process.


7.3 Professional Responsibility

Exceptional work requires responsible behavior on the part of a professional. This knowledge area describes some of the practices of responsible professionals.


7.1.1 When to define a new or customized process

Different situations call for different methods: what works well in one environment may not be effective in another. For example, simple programming tasks may require little or no design time. However, larger systems or high-security systems (regardless of size) require a thorough design. A process without a design phase may require customization to include this activity when tailoring an existing process to fit a new situation, when the process scalability changes, or when security requirements change.


7.1.2 How to define a new or customized process

Defining a new or customized personal process follows the same principles as those for software development: start with user needs, and end with final test and release. There are eight general steps for tailoring or creating a personal process.


1. Determine personal needs and priorities.


2. Define process objectives, goals, and quality criteria.


3. Characterize the current process.


4. Characterize the target process.


5. Establish a process development strategy.


6. Define the initial process.


7. Validate the initial process.


8. Enhance the process.


7.1.3 Using information mapping for documenting a new or customized process

When tailoring an existing process (or developing scripts and forms from scratch), follow these principles of information mapping [Horn 90].


Chunking: Organize information into groups that are manageable to read and/or to accomplish.


Relevance: Group “like things” together and exclude unrelated items from each chunk.


Labeling: Provide the user with a label for each chunk of information.


Consistency: Use consistent terms within each chunk of information, between the chunk and the label, in organizing the information, and in formatting the document or instrument in which the information is recorded.


Integrate graphics: Use tables, illustrations, and diagrams as an integral part of writing.


Accessible detail: Write at the level of detail that makes the document usable for all readers.


Hierarchy of chunking and labeling: Group small chunks around a single relevant topic and provide each group with a label.


7.2.1 Initial process definition

Initial process descriptions are seldom accurate, due to a phenomenon analogous to the Heisenberg Uncertainty Principle: the act of defining a process changes that process. The initial description of the process usually contains omissions, idealizations, and other inaccuracies. The process of accurately describing what really happens often affects the process during the very act of defining it.


7.2.2 Refining a personal process

1. Start with a characterization of the process as currently used.


2. Define the target or ideal process.


3. Define the steps needed to move from the current process to the target process.


4. Develop the necessary scripts, forms, standards, and measures to use in the process.


5. Review the process as it is being implemented and correct any identified errors or omissions.


7.3.1 Use effective methods for producing good work

Good practices are straightforward, but few people consistently use them. The dedicated professional finds effective methods for consistently producing high-quality work and then uses those methods.


7.3.2 Use data to discover strengths and weaknesses

Use the postmortem analysis of personal data to build an understanding of what is done well and areas where improvement is called for. Focus on making small improvements regularly, and major changes will take care of themselves.


7.3.3 Practice

The key to improving the quality of work products is to practice skills on the job to the maximum extent possible.


7.3.4 Learn from others, and pass it on

Talk to colleagues and review the literature to learn about new techniques and to learn from the mistakes of others. Share what is learned with others. Take advantage of benefits gained and contribute what is learned.


7.3.5 Find and learn new methods

Watch for innovations that are pertinent to personal needs. Allocate time for skill building whenever possible. Keeping up to date makes an employee more attractive to a current employer (and to future employers) as a desirable and competent professional.