27 Cards in this Set

  • Front
  • Back

Variable

any characteristic that can take on more than one value

Measurement

a process by which we assign numbers to indicate the amount of some variable present

Four Levels of Measurement

1. Nominal
2. Ordinal
3. Interval
4. Ratio


Nominal Measurement

Categorical data

no mathematical properties are associated with this level

Ordinal Measurement

Ranked Data


Interval Measurement

Score data / continuous data

does not have a true 0

e.g., temperature


Ratio Measurement

Score data

HAS a true 0

e.g., reaction time
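
The four levels can be made concrete with toy data (the example variables below are my own illustrations, not from the cards):

```python
# Toy examples (my own, not from the cards) of the four levels of measurement.
levels = {
    # nominal: categories only; no ordering, no arithmetic
    "nominal": ["blue", "brown", "green"],   # e.g., eye color
    # ordinal: ranks; ordering is meaningful, differences are not
    "ordinal": [1, 2, 3],                    # e.g., finishing place
    # interval: equal intervals but no true zero (0 °C is not "no temperature")
    "interval": [-5.0, 0.0, 21.5],           # e.g., temperature in °C
    # ratio: equal intervals AND a true zero, so ratios are meaningful
    "ratio": [125.0, 250.0, 310.0],          # e.g., reaction time in ms
}

for level, values in levels.items():
    print(f"{level:>8}: {values}")
```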

Threat of Mismeasurement

measurement is the means by which we quantify our constructs of interest

the more precise our operational definition (OD), the more precise our means of measurement, and the more accurate our test of the hypothesis

Reliability

the extent to which observed scores are free from errors of measurement

consistent and reproducible

Validity

the extent to which the observed score reflects the intended construct

are you measuring what you intend to measure?

Psychometrics

sub-discipline of psychology which focuses on judging and improving the reliability and validity of psychological measures

Observed Score

1. True score (reflects the construct of interest)

2. Error (difference between the true and observed score)

Random Error

unpredictable

should cancel each other out

causes unreliability

Systematic Error

predictable

does not cancel out

error that is due to reliably measuring the wrong construct

not random error
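
A minimal simulation (my own sketch, not part of the cards) of why random error tends to cancel out across many observations while systematic error does not:

```python
# Each observation = true score + systematic error + random error.
import random

random.seed(1)
true_score = 100.0
systematic_error = 5.0  # predictable bias, e.g., a miscalibrated instrument

observed = [true_score + systematic_error + random.gauss(0, 10)
            for _ in range(10_000)]

mean_observed = sum(observed) / len(observed)
print(f"true score:    {true_score:.2f}")
print(f"mean observed: {mean_observed:.2f}")  # ~105: random error cancels, bias remains
```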

Bias

Can arise from differences between measurement devices and differences between testing situations

Solution: standardized measurement and procedures

Unreliability

X = T + Ew + Er
(T = true score, Er = random error, Ew = systematic error from measuring the wrong construct)

If X = Er, no construct is being measured (unreliable)

If Er = 0, then X = T + Ew (reliable)

Invalidity

To be valid, a measure must be reliable

reliability does not guarantee validity

If X = Ew, the measure is invalid (it reliably measures the wrong construct)

Practical Issues

all measures have imperfect reliability and validity

all will contain some form of random error

Estimating Reliability

the reliability coefficient is largest (near 1) when there is no random error

and smallest (near 0) when scores are entirely random error
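
As a hedged sketch of this idea (my own illustration), a reliability coefficient can be simulated as the correlation between two parallel forms X1 = T + E1 and X2 = T + E2; it approaches 1 with no random error and 0 when scores are entirely error. Requires Python 3.10+ for statistics.correlation:

```python
import random
from statistics import correlation  # Python 3.10+

random.seed(0)
true_scores = [random.gauss(100, 15) for _ in range(5_000)]

for noise_sd in (0.1, 15.0, 150.0):  # little, moderate, overwhelming random error
    x1 = [t + random.gauss(0, noise_sd) for t in true_scores]
    x2 = [t + random.gauss(0, noise_sd) for t in true_scores]
    print(f"noise sd {noise_sd:>5}: reliability ~ {correlation(x1, x2):.2f}")
```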



Three ways to Estimate Reliability

1. Internal consistency
2. Test-retest reliability
3. Inter-rater reliability

Internal consistency

the extent to which scores on items measuring the same construct correlate with each other



alpha = 0: the measure is entirely error
alpha = 1: the measure is error-free
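
The cards' "alpha" is presumably Cronbach's alpha; here is a minimal, self-contained sketch with made-up item data (not from the cards):

```python
def cronbach_alpha(items):
    """items: one list of scores per item, all the same length (one score per respondent)."""
    k = len(items)      # number of items
    n = len(items[0])   # number of respondents

    def var(xs):        # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    totals = [sum(item[i] for item in items) for i in range(n)]
    return (k / (k - 1)) * (1 - sum(var(item) for item in items) / var(totals))

# Three Likert-style items intended to tap the same construct (toy data):
items = [
    [4, 5, 3, 4, 2, 5],
    [4, 4, 3, 5, 2, 5],
    [5, 5, 2, 4, 1, 4],
]
print(f"alpha = {cronbach_alpha(items):.2f}")  # closer to 1 => more internally consistent
```

By common convention, an alpha around .70 or higher is often treated as acceptable internal consistency.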


Test-Retest Reliability

the extent to which the measure gives consistent results over time or across situations


Inter-Rater Reliability

the extent to which the results are consistent across raters/judges

useful for more subjective measures
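
Both test-retest and inter-rater reliability (for continuous scores) reduce to correlating two score vectors; a minimal sketch with toy data (my own). For categorical ratings, an agreement index such as Cohen's kappa would be more appropriate:

```python
from statistics import correlation  # Python 3.10+

time1 = [12, 18, 9, 22, 15, 20]   # first administration (or rater A)
time2 = [13, 17, 10, 21, 14, 21]  # second administration (or rater B)
print(f"r = {correlation(time1, time2):.2f}")  # high r => consistent measure
```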


Three Ways to Estimate Validity

1. Criterion validity
2. Content validity
3. Convergent vs. discriminant validity


Criterion Validity

comparing the observations from our measurement device with some agreed-upon gold standard

Content Validity/ Face Validity

putting forward a case that your measurement device taps into all relevant aspects of the construct of interest

Convergent vs. Discriminant Validity

scores should converge with the findings of measures of theoretically similar or related constructs

and diverge from (discriminate from) measures of unrelated constructs