20 Cards in this Set

  • Front
  • Back
3 Types of Measures
1) Observational
2) Physiological
3) Self-Report

OPS-R
3 Types of Self-Report Measures
1) Cognitive
2) Affective
3) Behavioral

CAB
Converging Operations (Triangulation)
Using multiple types of measures to investigate aspects of the same phenomenon
4 Scales of Measurement
1) Nominal
2) Ordinal
3) Interval
4) Ratio

NOIR
Nominal
Numbers that are assigned to behavior are essentially labels
Ordinal
Rank ordering of a set of behaviors or characteristics

ex) Audience response at talent show
Interval
Equal differences between the numbers reflect equal differences between participants

but there is no true zero point

ex) IQ test
Ratio
Involves real numbers and has a true zero point, so scores can be compared mathematically
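
A quick way to see the difference between the four scales is to look at what operations the numbers support. This is a sketch only; the variable names below are illustrative assumptions, not part of the card set:

    # One hypothetical variable at each NOIR scale of measurement
    participant = {
        "jersey_number": 23,      # Nominal: the number is only a label
        "talent_show_rank": 2,    # Ordinal: order is meaningful, spacing is not
        "iq_score": 115,          # Interval: equal spacing, but no true zero
        "reaction_time_ms": 420,  # Ratio: true zero, so ratios are meaningful
    }

    # Ratio comparison is valid: 840 ms really is twice as long as 420 ms.
    # The same claim about IQ ("130 is twice 65") is not meaningful.
    print(participant["reaction_time_ms"] * 2)
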
Observed Score
True score + Measurement Error
True Score
what the participant would have obtained if our measure were perfect
Measurement Error
result of factors that distort the score
- every measure has error
- measurement error undermines the reliability of the measure
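
The relationship described by the last three cards can be sketched with a toy simulation (the numbers are assumed, not from the card set): the same participant measured on several occasions gets a different observed score each time because the error term changes.

    import random

    random.seed(1)
    true_score = 100                    # what a perfect measure would return

    for occasion in range(3):
        error = random.gauss(0, 5)      # transient states, situational factors, mistakes...
        observed = true_score + error   # Observed Score = True Score + Measurement Error
        print(f"occasion {occasion + 1}: observed = {observed:.1f}")
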
5 Sources of Measurement Error
1) Transient States
2) Stable Attributes
3) Situational Factors
4) Characteristics of the Measure
5) Mistakes
Reliability
True-score variance / Total variance

A measure is generally considered reliable at .70 or above
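
As a rough sketch (an assumed simulation, not from the card set), reliability can be read as the proportion of total score variance that is true-score variance; the measurement error added below drags the ratio away from 1.0:

    import random
    from statistics import variance

    random.seed(2)
    true_scores = [random.gauss(100, 15) for _ in range(1000)]   # real differences between people
    observed = [t + random.gauss(0, 8) for t in true_scores]     # plus measurement error

    reliability = variance(true_scores) / variance(observed)     # true-score / total variance
    print(round(reliability, 2))   # roughly .78; .70 or above is the usual rule of thumb
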
3 Basic methods of estimating the reliability of a measure
1) Test-Retest Reliability
2) Interitem Reliability
3) Interrater Reliability
Interitem Reliability
Assesses the degree of consistency among items on a scale

tests the degree to which items are tapping into the same construct
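
One common statistic for interitem reliability is Cronbach's alpha; the card does not name it, so treat this as an assumed illustration with made-up responses. Items that tap the same construct co-vary, which pushes alpha toward 1.0:

    from statistics import variance

    responses = [          # each row = one participant's answers to 4 scale items
        [4, 5, 4, 5],
        [2, 2, 3, 2],
        [5, 4, 5, 4],
        [1, 2, 1, 2],
        [3, 3, 4, 3],
    ]

    k = len(responses[0])
    item_vars = [variance(item) for item in zip(*responses)]     # variance of each item
    total_var = variance([sum(row) for row in responses])        # variance of the total score
    alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
    print(round(alpha, 2))   # about .95 for these highly consistent items
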
Increasing Reliability of Measures (4 things)
1) Standardize Administration
2) Clarify instructions & questions
3) Train observers
4) Minimize errors in data coding
Validity
the extent to which a technique actually measures what it is intended to measure
3 basic ways Validity is assessed
1) Face Validity
2) Construct Validity
3) Criterion-related Validity
Construct Validity
Determining whether a particular measure relates as it should to other measures

convergent vs. discriminant
Criterion-Related Validity
The extent to which a measure allows us to distinguish among participants on the basis of a particular *behavioral criterion*

ex) SAT

Concurrent vs. Predictive