Edouard Seguin (1812-1880)
Mental retardation (MR); devised the form board for discrimination & motor control.
Alfred Binet (1857-1911)
Developed scales to differentiate students for instruction (1896; 1905).
Sir Francis Galton (1883)
Individual differences research & measurement
James McKeen Cattell (1890)
Coined the term “mental test” (1890).
1900-20:
Group achievement tests; Spearman’s testing theory; Army Alpha and Beta.
1920s-50s:
Psychological Corporation founded; Mental Measurements Yearbooks; Wechsler-Bellevue; Gesell Maturity Scales; MMPI appeared; first testing standards.
Twentieth Century:
–Strong demand
–High optimism
–Century of extensive development
Today…Tomorrow?
•Computer-assisted testing
•Increased Design Complexity
•Easy Methods
–Administration
–Application
–Scoring
•Web-based assessment
Basic Psychometric Principles
•Tests translate observations of behavior into numbers.
–Behavior and/or attitudes into quantities.
•Measurement involves the establishment of rules for assigning numbers to objects.
•To make sense of measurement, scales are needed to place numbers in relation to one another.
Common Scale Types
–Nominal: name only.
–Ordinal: rank order, has magnitude, does not have a zero point or equal distance between points.
–Interval: has magnitude and equal intervals, but no absolute zero.
–Ratio: has magnitude, equal intervals and an absolute zero.
•Norm-referenced measurement
–Performance is compared to that of a specific group.
Descriptive statistics
•Summarize data.
•Allow comparisons.
•Central tendency: mean, median, mode.
•Dispersion
–Variance: measure of the variability of a group of scores.
–Standard deviation (SD): square root of the variance; measures the extent to which scores deviate from the mean (see the sketch below).
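As a minimal illustration of these measures, the following Python sketch (standard library only; the scores are made up) computes each one for a small set of test scores:

```python
import statistics

# Hypothetical test scores, for illustration only
scores = [12, 15, 15, 18, 20, 22, 25]

print("mean:", statistics.mean(scores))            # central tendency
print("median:", statistics.median(scores))        # central tendency
print("mode:", statistics.mode(scores))            # central tendency
print("variance:", statistics.pvariance(scores))   # variability of the group
print("SD:", statistics.pstdev(scores))            # square root of the variance
```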
Score Distributions
•The normal curve is most commonly used.
–Tells us what proportion of cases falls between any two points on the curve (see the sketch below).
–Allows us to use standard scores (transformed to a given mean and standard deviation).
–Using the normal distribution allows comparisons and establishes relative position.
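For example, the proportion of cases falling between any two points can be read from the curve’s cumulative distribution; a minimal sketch using Python’s standard library:

```python
from statistics import NormalDist

std_normal = NormalDist()  # mean 0, SD 1

# Proportion of cases between z = -1 and z = +1 (about 68%)
between = std_normal.cdf(1) - std_normal.cdf(-1)
print(f"{between:.1%} of cases fall within one SD of the mean")
```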
Standard Scores
•Locate a score in the normal distribution, defining the percentage of cases that fall below it.
•A basic z score has a mean of 0 and an SD of 1.
•Deviation IQ (mean 100, SD 15)
•McCall’s T scores (mean 50, SD 10)
•Stanine scores (Standard Nine; mean 5, SD 2)
Z Score Example
A z score is the obtained score minus the test mean, divided by the test’s standard deviation.
•The mean on a test was 100; the standard deviation 20; your score 125.
•The z score is (125 − 100)/20 = 25/20 = 1.25.
•Express this score as a T score.
•T = 10z + 50, so T = 10(1.25) + 50 = 62.5.
•What would the T score be if z were −1?
•T = 10(−1) + 50 = 40
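The arithmetic above can be captured in two small helper functions. This is an illustrative sketch; the deviation-IQ parameters (mean 100, SD 15) are the conventional values rather than anything stated in the card:

```python
def z_score(raw, mean, sd):
    """z = (obtained score - test mean) / test SD."""
    return (raw - mean) / sd

def standard_score(z, new_mean, new_sd):
    """Transform z onto a scale with a given mean and SD."""
    return new_sd * z + new_mean

z = z_score(125, mean=100, sd=20)                # 1.25
t = standard_score(z, new_mean=50, new_sd=10)    # T = 10z + 50 = 62.5
iq = standard_score(z, new_mean=100, new_sd=15)  # deviation IQ = 118.75
print(z, t, iq)
```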
Z Score Comparisons
•A class test has an SD of 5 and a mean of 20; your score was 15.
•What would your T score be?
•If this test were the GRE, what would your score be?
•Approximately what percentage of the students taking this exam performed better than you?
Steps:
–First, determine the z score.
–Second, fill in the formula with the fixed variables.
–Third, do the calculation to complete the transformation of the score.
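A sketch of those three steps for this exercise, assuming the conventional T-score parameters (mean 50, SD 10) and the older GRE scale (mean 500, SD 100); the percentage comes from the normal curve’s cumulative distribution:

```python
from statistics import NormalDist

# Step 1: determine the z score
z = (15 - 20) / 5                 # z = -1.0

# Steps 2-3: fill in each formula and calculate
t_score = 10 * z + 50             # T = 40
gre = 100 * z + 500               # 400, assuming mean 500 and SD 100

# Percentage who performed better: the area above z
better = 1 - NormalDist().cdf(z)  # about 0.84
print(t_score, gre, f"{better:.0%} scored higher")
```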
Variance
•Issue: the extent to which differences in scores are attributable to “true” differences in the characteristics under consideration, and the extent to which they are attributable to chance errors.
•Error (random) variance is any condition that is irrelevant to the purpose of the test.
•Chance fluctuations contribute to error.
•“True” variance reflects relevant differences.
•Standardized administration is intended to reduce error variance.
Establishing Reliability
•Correlations are used to look at the relationship between sets of scores (see the sketch below).
•Reliability coefficients usually fall in the .70s, .80s, or .90s.
•The more important the prediction or application, the more critical the reliability.
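As a sketch of this idea, a reliability coefficient is simply a correlation between two sets of scores from the same examinees; the scores below are made up, and statistics.correlation requires Python 3.10+:

```python
from statistics import correlation  # Python 3.10+

# Hypothetical scores from two administrations of the same test
first = [88, 72, 95, 60, 81, 77]
second = [85, 70, 98, 63, 79, 80]

r = correlation(first, second)  # Pearson r as the reliability estimate
print(f"reliability estimate: r = {r:.2f}")
```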
Standard Error of Measurement
•Psychologists assume some test “unreliability.”
•Observed score = true score + E (error).
•Measurement error is assumed to be random.
–The errors would form a normal distribution.
•The standard deviation of the errors is the SEM, or the standard error of a score.
•Provides information about how much confidence to place in the obtained test score.
•An obtained score is not assumed to be the “true” score.
•The larger the SEM, the less confidence there is that the obtained score is close to the true score (see the sketch below).
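The card does not give the formula, but the standard one computes the SEM from the test’s SD and its reliability coefficient: SEM = SD × √(1 − r). A minimal sketch:

```python
import math

def sem(sd, reliability):
    """Standard error of measurement: SEM = SD * sqrt(1 - r)."""
    return sd * math.sqrt(1 - reliability)

# Hypothetical test: SD = 15, reliability = .91
print(sem(15, 0.91))  # 4.5; a less reliable test would yield a larger SEM
```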
Confidence Intervals
•A confidence interval is a band of scores around the obtained score in which we assume the true score is located.
•Intervals are set depending on the degree of confidence required: the greater the need to capture the true score, the wider the confidence interval.
•Use z scores to create 68%, 95%, or 99% CIs; the convention is typically the 95% level.
•Most tests provide tables of confidence intervals.
•No confidence interval can be constructed that predicts with absolute certainty where a person’s true score will lie.
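Building on the SEM sketch above, a confidence interval is the obtained score plus or minus a z multiple of the SEM; a minimal example at the conventional 95% level, with made-up numbers:

```python
from statistics import NormalDist

def confidence_interval(obtained, sem, level=0.95):
    """Band around the obtained score assumed to contain the true score."""
    z = NormalDist().inv_cdf(0.5 + level / 2)  # 1.96 for the 95% level
    return obtained - z * sem, obtained + z * sem

low, high = confidence_interval(obtained=110, sem=4.5)
print(f"95% CI: {low:.1f} to {high:.1f}")  # about 101.2 to 118.8
```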
Reliability & Validity
•A test must first be dependable (Reliability).
•Once a test is dependable, the ongoing process of validation can begin (Validity).
Validity
The extent to which a test measures what it is supposed to measure and the appropriateness with which inferences can be made on the basis of the test results.
Reliability
Consistency of scores obtained by the same person when re-examined with the same test on different occasions, or with different sets of equivalent items, or under other variable examining conditions.
Establishing Reliability
•Test-retest method
–The error variance corresponds to random fluctuations of performance from one test session to another.
–The interval of time between administrations is always specified.
–The higher the retest reliability, the more consistent the test scores are from administration to administration.
Test-Retest Drawbacks
•Practice and carryover effects
–Carryover effects are associated with the content of the questions.
–Practice effects are associated with the type of questions.
•Because of these effects, test-retest reliability is not always an appropriate measure of reliability.
Alternate-Form Reliability
•Two equivalent forms of the test are created; scores on the two are correlated as an estimate of reliability.
•The error variance may reflect problems with content sampling.
–A test with strong reliability covers the domain fairly.
–This method does not account for errors due to inappropriate item sampling.
•Alternate-form reliability corrects somewhat for carryover effects, but not fully for practice effects.
Reliability Coefficients
•Split-half reliability (the test is cut in half)
–Consistency of content sampling is an issue.
–Longer tests are more reliable.
–The Spearman-Brown formula can provide a correction (see the sketch after this list).
•Kuder-Richardson & coefficient alpha
–Error variance reflects content sampling and the heterogeneity of the behavior sampled.
–KR-20 is the mean of all possible split-half coefficients; used for true/false or right/wrong items.
–Coefficient alpha is used for items with numerical scores.
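A sketch of the Spearman-Brown correction and of coefficient alpha mentioned above; the formulas are the standard ones, and the item scores are invented for illustration:

```python
from statistics import pvariance

def spearman_brown(r_half):
    """Correct a split-half correlation up to full test length."""
    return 2 * r_half / (1 + r_half)

def coefficient_alpha(items):
    """Coefficient alpha; items is a list of per-item score lists."""
    k = len(items)
    totals = [sum(person) for person in zip(*items)]
    item_variances = sum(pvariance(item) for item in items)
    return k / (k - 1) * (1 - item_variances / pvariance(totals))

print(round(spearman_brown(0.70), 2))  # 0.82: the longer test is more reliable

# Three hypothetical items scored for four examinees
items = [[3, 4, 2, 5], [2, 4, 3, 5], [3, 5, 2, 4]]
print(round(coefficient_alpha(items), 2))  # about 0.89
```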
Scorer Reliability
•This is error due to differences between raters.
•Tests can be complicated to score.
•Tables may be confusing to read.
•Tests designed for ease of administration and scoring increase scorer reliability.
•Computer scoring programs largely eliminate this source of error.
Recommendations for Practice
•Become familiar with the tests used in your area of practice; know about their reliability and validity.
•Read the manual for the tests in use in your practice area, even if you do not administer them directly.
•Understand the meaning of any scores that are reported to the clients in your practice area.