20 Cards in this Set

  • Front
  • Back
3 Types of Measures
1) Observational
2) Physiological
3) Self-Report

3 Types of Self-Report Measures
1) Cognitive
2) Affective
3) Behavioral

Converging Operations (Triangulation)
Using multiple types of measures to investigate aspects of the same phenomenon
4 Scales of Measurement
1) Nominal
2) Ordinal
3) Interval
4) Ratio

Nominal Scale
Numbers that are assigned to behavior are essentially labels
Ordinal Scale
Rank ordering of a set of behaviors or characteristics

ex) Audience response at talent show
Interval Scale
Equal differences between the numbers reflect equal differences between participants

but, no true Zero Point

ex) IQ test
Ratio Scale
Involves real numbers and has a true Zero Point, so scores can be compared mathematically
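The four scales above can be summarized in a small sketch. The example variables chosen for each scale are illustrative assumptions, not from the cards:

```python
# Each scale of measurement permits a different set of meaningful operations.
# (Example variables are hypothetical, chosen to fit each card's definition.)
scales = {
    "nominal":  {"example": "jersey numbers",     "valid_ops": "=, !="},
    "ordinal":  {"example": "talent-show ranks",  "valid_ops": "=, <, >"},
    "interval": {"example": "IQ scores",          "valid_ops": "+, -  (no true zero)"},
    "ratio":    {"example": "reaction time (ms)", "valid_ops": "+, -, *, /  (true zero)"},
}

for name, info in scales.items():
    print(f"{name:8s} e.g. {info['example']:18s} meaningful ops: {info['valid_ops']}")
```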
Observed Score
True score + Measurement Error
True Score
what the participant would have obtained if our measure were perfect
Measurement Error
result of factors that distort the score
- every measure has error
- measurement error undermines the reliability of the measure
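The Observed Score = True Score + Measurement Error model can be sketched with a quick simulation. The score value and error spread below are illustrative assumptions:

```python
import random

random.seed(1)

true_score = 80.0  # what the participant would get if our measure were perfect

# Measurement error: random distortion from transient states, situational
# factors, characteristics of the measure, mistakes, etc.
observations = [true_score + random.gauss(0, 5) for _ in range(1000)]

mean_observed = sum(observations) / len(observations)
print(round(mean_observed, 1))  # close to 80: random error averages out over many measurements
```

Any single observation may miss the true score badly; only the average across many measurements converges on it.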
5 Sources of Measurement Error
1) Transient States
2) Stable Attributes
3) Situational Factors
4) Characteristics of the Measure
5) Mistakes
Reliability
True-Score Variance / Total Variance

A measure is considered reliable at .70 or above
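Under the same observed = true + error model, the True-Score Variance / Total Variance ratio can be estimated from simulated data. The standard deviations below are illustrative assumptions:

```python
import random
import statistics

random.seed(0)

# Hypothetical true scores plus random measurement error.
true_scores = [random.gauss(100, 15) for _ in range(5000)]
observed = [t + random.gauss(0, 8) for t in true_scores]

# Reliability = true-score variance / total (observed) variance.
reliability = statistics.pvariance(true_scores) / statistics.pvariance(observed)
print(round(reliability, 2))  # roughly 15**2 / (15**2 + 8**2) ~= .78, above the .70 benchmark
```

Shrinking the error spread pushes the ratio toward 1.0; growing it drags the measure below the .70 benchmark.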
3 Basic methods of estimating the reliability of a measure
1) Test-Retest Reliability
2) Interitem Reliability
3) Interrater Reliability
Interitem Reliability
Assesses the degree of consistency among items on a scale

tests the degree to which items are tapping into the same construct
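One common index of interitem reliability is Cronbach's alpha (a choice of mine; the card does not name a specific statistic). A minimal sketch with hypothetical ratings, rows = participants and columns = scale items:

```python
import statistics

# Hypothetical responses: 5 participants x 4 items on the same scale.
responses = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [1, 2, 1, 2],
    [3, 3, 4, 3],
]

k = len(responses[0])  # number of items
item_variances = [statistics.pvariance(col) for col in zip(*responses)]
total_variance = statistics.pvariance([sum(row) for row in responses])

# Cronbach's alpha: high when items vary together, i.e. tap the same construct.
alpha = (k / (k - 1)) * (1 - sum(item_variances) / total_variance)
print(round(alpha, 2))
```

Here the items rise and fall together across participants, so alpha comes out high; unrelated items would drive it toward zero.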
Increasing Reliability of Measures (4 things)
1) Standardize Administration
2) Clarify instructions & questions
3) Train observers
4) Minimize errors in data coding
Validity
the extent to which a technique actually measures what it is intended to measure
3 basic ways Validity is assessed
1) Face Validity
2) Construct Validity
3) Criterion-related Validity
Construct Validity
Determining whether a particular measure relates as it should to other measures

convergent vs. discriminant
Criterion-Related Validity
The extent to which a measure allows us to distinguish among participants on the basis of a particular *behavioral criterion*

ex. SAT

Concurrent vs. Predictive