51 Cards in this Set
 Front
 Back
Refers to the consistency of scores obtained by the same persons when they are reexamined with the same test on different occasions. 
Reliability 

Variance from true differences. 
True variance 

Variance from irrelevant, random sources. 
Error variance 

Ratio of true-score variance to the total variance. 
Reliability coefficient 
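That ratio can be sketched numerically; the variance figures below are invented purely for illustration:

```python
# Reliability coefficient: r_xx = true variance / (true variance + error variance).
# The variance figures here are hypothetical.

def reliability_coefficient(true_variance: float, error_variance: float) -> float:
    """Proportion of total score variance attributable to true differences."""
    total_variance = true_variance + error_variance
    return true_variance / total_variance

# If 40 of 50 units of observed variance reflect true differences, r_xx = .80.
print(round(reliability_coefficient(40.0, 10.0), 2))  # 0.8
```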

The most common way of computing a correlation. 
Pearson product moment correlation coefficient 
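The coefficient is computed from deviations of paired scores around their means; a minimal sketch with hypothetical score lists:

```python
import math

def pearson_r(x: list[float], y: list[float]) -> float:
    """Pearson product-moment correlation between two paired score lists."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Perfectly linearly related scores correlate at 1.0.
print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0
```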

Expresses the degree of correspondence or a relationship between two sets of scores. 
Correlation coefficient 

Sources of error variance: 
Test construction; test administration; test scoring and interpretation 

Reliability estimates: 
Test-retest reliability; alternate-forms reliability estimate; split-half reliability estimate; Kuder-Richardson reliability and coefficient alpha; interscorer reliability 

Estimate of reliability obtained by correlating pairs of scores from the same people on two different administrations of the same test. 
Test-retest reliability 

Term used when the interval between test and retest administrations is more than 6 months. 
Coefficient of stability 

Estimate of the extent to which item sampling and other errors have affected scores on versions of the same test. 
Alternate forms reliability estimate 

Independently constructed tests designed to meet the same specifications. 
Alternate forms 

The degree of relationship between various forms of a test. 
Coefficient of equivalence 

Is obtained by correlating pairs of scores from equivalent halves of a single test administered once. 
Split-half reliability estimate 

Other term for split-half reliability 
Coefficient of internal consistency 

Estimates the effect of lengthening or shortening a test on its reliability. 
Spearman-Brown formula 
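The formula is r_nn = n·r / (1 + (n − 1)·r), where n is the factor by which the test is lengthened (n = 2 corrects a split-half coefficient to full length). A sketch with illustrative reliability values:

```python
def spearman_brown(r: float, n: float) -> float:
    """Predicted reliability when test length is multiplied by a factor of n."""
    return (n * r) / (1 + (n - 1) * r)

# Correcting a half-test correlation of .60 to full (double) length:
print(round(spearman_brown(0.60, 2), 2))  # 0.75

# Shortening works too: n = 0.5 estimates reliability of a half-length test.
print(round(spearman_brown(0.75, 0.5), 2))  # 0.6
```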

Two sources of error variance that influence inter-item consistency: 
Content sampling; heterogeneity of the behavior being sampled 

The most common formula for finding inter-item consistency. 
Kuder-Richardson formula 20 
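KR-20 is (k/(k − 1))·(1 − Σpq/σ²), where k is the number of items, p the proportion passing each item, q = 1 − p, and σ² the variance of total scores. A sketch using population variance (some texts divide by n − 1 instead) and invented 1/0 data:

```python
def kr20(item_scores: list[list[int]]) -> float:
    """Kuder-Richardson formula 20 for dichotomously (1/0) scored items."""
    k = len(item_scores[0])   # number of items
    n = len(item_scores)      # number of examinees
    totals = [sum(person) for person in item_scores]
    mean_total = sum(totals) / n
    total_var = sum((t - mean_total) ** 2 for t in totals) / n  # population variance
    pq = 0.0
    for i in range(k):
        p = sum(person[i] for person in item_scores) / n  # proportion passing item i
        pq += p * (1 - p)
    return (k / (k - 1)) * (1 - pq / total_var)

# Hypothetical six-item test taken by four examinees.
responses = [
    [1, 1, 1, 1, 1, 1],
    [1, 1, 1, 0, 1, 0],
    [1, 0, 0, 1, 0, 0],
    [0, 0, 0, 0, 0, 0],
]
print(round(kr20(responses), 2))  # 0.87
```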

Is the mean of all split-half coefficients resulting from different splittings of the test. 
Kuder-Richardson reliability coefficient 

Is applicable to tests whose items are scored as right or wrong, or according to some other all-or-none system. 
Kuder-Richardson formula 

A generalized formula for tests that have multiple-scored items. 
Coefficient alpha 
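Alpha is (k/(k − 1))·(1 − Σσ²ᵢ/σ²_total), replacing KR-20's Σpq with the sum of individual item variances so that multi-point items are handled. A sketch with invented 1-5 ratings (population variances, as above):

```python
def coefficient_alpha(item_scores: list[list[float]]) -> float:
    """Cronbach's coefficient alpha; generalizes KR-20 to multi-point items."""
    k = len(item_scores[0])  # number of items

    def pop_variance(values: list[float]) -> float:
        mean = sum(values) / len(values)
        return sum((v - mean) ** 2 for v in values) / len(values)

    item_vars = [pop_variance([p[i] for p in item_scores]) for i in range(k)]
    total_var = pop_variance([sum(p) for p in item_scores])
    return (k / (k - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 1-5 ratings from four examinees on three items.
ratings = [
    [3, 4, 3],
    [2, 3, 2],
    [4, 5, 3],
    [1, 2, 1],
]
print(round(coefficient_alpha(ratings), 2))  # 0.98
```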

Degree of agreement or consistency between two or more scorers with regard to a particular measure. 
Interscorer reliability 

Index of measurement for interscorer reliability 
Coefficient of interscorer reliability 

Errors due to examiner bias: 
Error of central tendency; leniency/generosity error; severity error; halo effect; horn effect; contrast error; recency bias 

Less-than-accurate rating or evaluation by a rater or judge due to that rater's tendency to make ratings near the midpoint of the scale. 
Error of central tendency 

Rater's tendency to be too forgiving or insufficiently critical. 
Leniency/generosity error 

Rater's tendency to be overly critical. 
Severity error 

Tendency of a rater to judge all aspects of an individual using a general impression formed from only one or a few of the individual's characteristics. 
Halo effect 

Refers to the tendency to let one poor rating influence all other ratings, resulting in a lower overall evaluation than deserved. 
Horn effect 

Occurs when a rater compares examinees with one another instead of against performance standards. 
Contrast error 

Occurs when a rater assigns ratings based only on the employee's most recent performance. 
Recency bias 

A measure suited to the interpretation of individual scores; it is independent of the variability of the group on which it is computed. 
Standard error of measurement 
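SEM = SD·√(1 − r_xx); it shrinks toward zero as reliability approaches 1. A sketch with an IQ-style scale (SD = 15) and an illustrative reliability of .91:

```python
import math

def standard_error_of_measurement(sd: float, reliability: float) -> float:
    """Estimated standard deviation of the error around an observed score."""
    return sd * math.sqrt(1 - reliability)

# IQ-style scale: SD = 15, reliability = .91 gives SEM = 4.5, i.e. a band
# of roughly +/- 4.5 points of error around an individual's observed score.
print(round(standard_error_of_measurement(15.0, 0.91), 1))  # 4.5
```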

Is an estimate of how well a test measures what it purports to measure. 
Validity 

3 categories of validity: 
Content validity Criterion related validity Construct validity 

Two major trends in validity: 
Strengthened theoretical orientation; close linkage between psychological theory and verification through empirical and experimental hypothesis testing 

Evaluation of subjects, topics, or contents in a test. 
Content validity 

Evaluation of the relationship of scores to scores on other tests or instruments. 
Criterion-related validity 

Comprehensive analysis of the theoretical framework together with scores on other tests. 
Construct validity 

Describes a judgement of how adequately a test samples behavior representative of the universe of behavior that the test is designed to sample. 
Content validity 

Judgment of how a test score can be used to infer an individual's most probable standing on some measure of interest. 
Criterion-related validity 

May be broadly defined as the standard against which a test or a test score is evaluated. 
Criterion 

Criterion has to be: 
Relevant; valid; uncontaminated 

Error in which ratings are influenced by the rater's knowledge of test scores. 
Criterion contamination 

2 types of criterion-related validity: 
Concurrent validity; predictive validity 

The extent to which scores on a new measure relate to scores on a criterion measure administered at the same time. 
Concurrent validity 

Uses the scores from the new measure to predict performance on a criterion measure administered at a later time. 
Predictive validity 

A correlation coefficient that provides a measure of the relationship between test scores and scores on the criterion measure. 
Validity coefficient 

Judgment about the appropriateness of inferences drawn from test scores regarding individuals' standings on a variable called a construct. 
Construct validity 

He pointed out that, to demonstrate construct validity, a test must correlate highly with variables with which it should theoretically correlate, and minimally with those from which it should theoretically differ. 
D. T. Campbell 

High relationship with measures the construct is supposed to be related to. 
Convergent evidence 

Low relationship with measures the construct is not supposed to be related to. 
Discriminant evidence 