19 Cards in this Set

  • Front
  • Back
Reliability
The degree of dependability, consistency, or stability of scores on a measure (either predictors or criteria) used in selection research.
Errors of Measurement
Factors that affect obtained scores but are not related to the characteristic, trait, or attribute being measured.
Obtained Score
X(obtained) = X(true) + X(error); the obtained score for a person on a measure.
True Score
X(obtained) = X(true) + X(error); the true score on the measure.
Error Score
X(obtained) = X(true) + X(error); the error score on the measure.
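A minimal sketch of the obtained-score model from the three cards above, assuming an illustrative true score and normally distributed chance errors (all numbers here are invented):

```python
# Classical test theory model from the cards: X(obtained) = X(true) + X(error).
# The true score and the spread of the error term are illustrative assumptions.
import random

random.seed(1)

true_score = 80                                      # the person's standing on the attribute
errors = [random.gauss(0, 5) for _ in range(6)]      # chance errors of measurement
obtained = [true_score + e for e in errors]          # obtained scores across administrations

print([round(x, 1) for x in obtained])
# Obtained scores scatter around the true score; over many administrations the
# errors tend to cancel, so the average obtained score approaches the true score.
print(round(sum(obtained) / len(obtained), 1))
```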
Reliability Coefficient
An index that summarizes the estimated relationship between two sets of measures.
Test-retest
The same measure is used to collect data from the same respondents at two different points in time.
Factors that affect test-retest reliability:
- Day-to-day changes in test takers
- Changes in the site of test administration
- Memory (overestimates reliability)
- Learning (underestimates reliability)
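A small sketch of a test-retest estimate, assuming the estimate is simply the correlation between scores from the two administrations (the scores below are invented):

```python
# Test-retest reliability: correlate the same respondents' scores at two points in time.
from statistics import correlation  # Python 3.10+

time1 = [72, 85, 90, 64, 78, 81, 69, 95]
time2 = [70, 88, 87, 66, 80, 79, 72, 93]   # same people, same measure, later date

r_test_retest = correlation(time1, time2)
print(round(r_test_retest, 2))
# Memory of earlier answers pushes this estimate up (overestimates reliability);
# learning between administrations pushes it down (underestimates reliability).
```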
Parallel or Equivalent Forms
Administering two equivalent versions of a measure to the same respondent group.
Internal Consistency
The various parts of a total measure should be so interrelated that they can be interpreted as measuring the same thing.
Interrater Reliability Estimates
The degree to which two or more raters agree in their evaluations of the same individuals or objects; estimated with indices such as interrater, interclass, and intraclass agreement (below).
Split-half Reliability
A single-administration (subdivided-test) reliability estimate: the measure is divided, or split, into two halves so that scores for each half can be obtained for each test taker.
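A sketch of the split-half procedure using an odd/even split, with the Spearman-Brown step-up applied to project the half-test correlation back to full test length (the item responses are invented, and the Spearman-Brown correction is an assumption the card does not mention):

```python
# Split-half reliability: split one administration's items into two halves,
# score each half per test taker, and correlate the half scores.
from statistics import correlation  # Python 3.10+

responses = [                     # rows = test takers, columns = items (1 = correct/yes)
    [1, 1, 0, 1, 1, 0, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 1],
    [1, 1, 1, 1, 1, 1, 0, 1],
    [0, 0, 0, 1, 0, 0, 1, 0],
    [1, 0, 1, 1, 1, 1, 1, 1],
]

odd_half  = [sum(row[0::2]) for row in responses]   # items 1, 3, 5, 7
even_half = [sum(row[1::2]) for row in responses]   # items 2, 4, 6, 8

r_halves = correlation(odd_half, even_half)
full_length = (2 * r_halves) / (1 + r_halves)       # Spearman-Brown step-up
print(round(r_halves, 2), round(full_length, 2))
```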
Kuder-Richardson reliability
Takes the average of the reliability coefficients that would result from all possible ways of subdividing a measure, which solves the problem of how best to divide it. Good for yes/no questions, or any question with only two possible answers.
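A sketch using the standard KR-20 formula for dichotomously scored items (the formula itself and the response matrix are assumed for illustration, not quoted from the card):

```python
# Kuder-Richardson (KR-20) estimate for yes/no (dichotomous) items.
from statistics import pvariance

responses = [                  # rows = test takers, columns = items (1 = yes/correct)
    [1, 1, 0, 1, 1, 0],
    [0, 1, 0, 0, 1, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 0, 0, 1, 0, 0],
    [1, 0, 1, 1, 1, 1],
]

k = len(responses[0])                                        # number of items
totals = [sum(row) for row in responses]                     # total score per person
p = [sum(col) / len(responses) for col in zip(*responses)]   # proportion answering "yes"
pq_sum = sum(pi * (1 - pi) for pi in p)                      # summed item variances

kr20 = (k / (k - 1)) * (1 - pq_sum / pvariance(totals))
print(round(kr20, 2))
```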
Cronbach's coefficient alpha
Correlates every possible split of the measure; better suited to Likert-type or multiple-choice items. Sensitive to the number of items on the scale: more is better, but do not cause fatigue. A 7-point scale is best (enough discrimination).
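A sketch of coefficient alpha for Likert-type items on a 7-point scale, using the usual item-variance form of the formula (the ratings are invented):

```python
# Cronbach's coefficient alpha for multi-point (e.g., Likert) items.
from statistics import pvariance

ratings = [                      # rows = respondents, columns = 7-point Likert items
    [6, 7, 5, 6],
    [3, 4, 2, 3],
    [5, 5, 6, 5],
    [2, 3, 3, 2],
    [7, 6, 6, 7],
]

k = len(ratings[0])
item_vars = [pvariance(col) for col in zip(*ratings)]        # variance of each item
total_var = pvariance([sum(row) for row in ratings])         # variance of total scores

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))
# Alpha rises as items are added (all else equal), which is why "more is better"
# only up to the point of respondent fatigue.
```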
Measurement Error
Those factors that affect obtained scores but are not related to the characteristic, trait, or attribute being measured.
Interrater agreement
Used with categorical data; two raters
Interclass agreement
Used with interval data; two raters
Intraclass agreement
Used with interval data; three or more raters
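For the two-rater categorical case, a sketch of simple percent agreement plus Cohen's kappa as one commonly used chance-corrected index (the choice of kappa and the ratings are assumptions; the cards only name the data type and the number of raters):

```python
# Two raters assigning categorical ratings to the same people.
from collections import Counter

rater_a = ["pass", "fail", "pass", "pass", "fail", "pass", "fail", "pass"]
rater_b = ["pass", "fail", "pass", "fail", "fail", "pass", "pass", "pass"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n     # percent agreement

# Agreement expected by chance, from each rater's marginal category frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum(freq_a[c] * freq_b[c] for c in set(rater_a) | set(rater_b)) / (n * n)

kappa = (observed - expected) / (1 - expected)                   # chance-corrected agreement
print(round(observed, 2), round(kappa, 2))
```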
Reliability coefficient
The extent (in percentage terms) to which individual differences in scores on a measure are due to "true" differences in the attribute measured and the extent to which they are due to chance errors. Higher is better. Depends on the time between administrations (longer test-retest intervals lower it), the number of items (more raises it), and the sample size (larger raises it).
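Stated as a formula (this is the standard classical-test-theory variance ratio, assumed here rather than quoted from the card):

```latex
% Reliability as the proportion of obtained-score variance that is true-score variance.
r_{XX} = \frac{\sigma^2_{\mathrm{true}}}{\sigma^2_{\mathrm{obtained}}}
       = 1 - \frac{\sigma^2_{\mathrm{error}}}{\sigma^2_{\mathrm{obtained}}}
```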
Standard Error of Measurement
The estimated error in a particular individual's score on the measure.
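A sketch using the conventional formula SEM = SD * sqrt(1 - r_xx), with illustrative numbers (the specific values are assumptions):

```python
# Standard error of measurement: how far obtained scores are likely to fall
# from a person's true score, given the measure's reliability.
import math

sd_obtained = 10.0        # standard deviation of obtained scores on the measure
reliability = 0.90        # reliability coefficient of the measure

sem = sd_obtained * math.sqrt(1 - reliability)
print(round(sem, 2))      # about 3.16 score points

# A rough band of likely true scores around an obtained score of 80,
# assuming approximately normal errors:
obtained = 80
print(round(obtained - 1.96 * sem, 1), round(obtained + 1.96 * sem, 1))
```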