25 Cards in this Set

Two ways of measuring prejudice?
1. The Godfrey-Richman ISM scale (GRISM) - a pen-and-paper measure
2. Implicit Association Test (IAT) - a behavioral measure
Levels of measurement?
1. Nominal - Differences between categories.
2. Ordinal - Categories can be ordered.
3. Interval - Equal distances between categories are expressible in standardized units.
4. Ratio - A true zero point exists.
Validity
The strength of conclusions we can draw from our research results.
Construct Validity
1. Do measures capture your theoretical concepts?
2. Two components: divergent validity and convergent validity.
(Statistical) Conclusion Validity
Is there a relationship between the IV and DV?
Reliability
Consistency of measurement, or the degree to which your measurement instrument works the same way each time it is used under the same conditions with the same unit of analysis.

Repeatability of measurement over:
1. Time
2. Research subjects (your unit of analysis)
3. Measurement instruments
3 Types of Reliability
1. Inter-Rater - Two observers should record the same score.
2. Internal Consistency - Within a test, people respond to the items consistently (see the sketch below).
3. Test-Retest - People should score about the same each time.
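Internal consistency is often summarized with Cronbach's alpha; the card doesn't name a statistic, so treat this as one illustrative choice. A minimal sketch, assuming scores arrive as a respondents-by-items NumPy matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Alpha for a respondents-by-items score matrix."""
    k = items.shape[1]                         # number of items
    item_vars = items.var(axis=0, ddof=1)      # per-item variance
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summed scores
    return (k / (k - 1)) * (1.0 - item_vars.sum() / total_var)
```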
4 Ways to check reliability?
1. Test-Retest
2. Split Half (sketched below)
3. Use Established Measures
4. Check the Reliability of Research Workers
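A minimal sketch of the split-half check, again assuming a respondents-by-items score matrix; the Spearman-Brown correction applied at the end is the usual companion step, though the card doesn't name it:

```python
import numpy as np

def split_half_reliability(items: np.ndarray) -> float:
    """Correlate odd-item and even-item half scores, then apply the
    Spearman-Brown correction for the full test length."""
    odd_half = items[:, 0::2].sum(axis=1)
    even_half = items[:, 1::2].sum(axis=1)
    r = np.corrcoef(odd_half, even_half)[0, 1]  # correlation of halves
    return 2 * r / (1 + r)                      # Spearman-Brown formula
```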
Index
Indicators combined in an index are given equal weights.

Subjects are scored on the basis of their total number of positive responses.
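A minimal sketch of equal-weight index scoring as described above, using made-up responses:

```python
# Each row is one subject's answers to the index items (1 = positive).
responses = [
    [1, 0, 1, 1],
    [0, 0, 1, 0],
]
# Equal weights: the score is simply the count of positive responses.
index_scores = [sum(subject) for subject in responses]
print(index_scores)  # -> [3, 1]
```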
Scale
Indicators are combined in a way that exploits any intensity structure that may exist among them.

Scored on pattern of responses.
5 Types of Scales?
1. Bogardus Social-Distance Scales
2. Likert Scales - rate strength of agreement
3. Semantic Differential Scales - rate something in terms of two opposite adjectives
4. Guttman Scales - a positive response to an item likely means the person also responded positively to all preceding (weaker) items (see the pattern check below)
5. Thurstone Scales - judges score the indicators, indicators the judges disagree on are thrown out, and the judges' average scores weight the remaining items
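For the Guttman case, consistency can be checked by looking for the "staircase" response pattern; a minimal sketch, assuming items are ordered from weakest to strongest:

```python
def is_guttman_consistent(responses):
    """A valid Guttman pattern is a run of positive answers followed
    only by negatives, e.g. [1, 1, 1, 0]."""
    return responses == sorted(responses, reverse=True)

print(is_guttman_consistent([1, 1, 1, 0]))  # True  - scalable pattern
print(is_guttman_consistent([1, 0, 1, 0]))  # False - a scale error
```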
Conceptualization
Mental process by which fuzzy and imprecise notions are made more specific and precise.

Creation of empirical statements from your theoretical statements.
Indicator
Observation we choose to consider a reflection of a variable we wish to study.
Concept
Constructs derived by mutual agreement from mental images.
Dimension
A specifiable aspect of a concept.
3 Different definitions of a concept
1. Real Definition - would capture the "real" meaning of a concept; no such definition exists
2. Nominal Definition - a definition given without any claim that it represents a "real" entity
3. Operational Definition - specifies precisely how a concept will be measured.
Progression of Measurement
1. Conceptualization
2. Nominal Definition
3. Operational Definition
4. Measurements in the real world
Operationalization Choices?
1. Range of variation
2. Variation between extremes
3 Common pitfalls of indexes and scales
1. Referring to indexes as scales
2. Data items that form a scale with one set of observations may not form a scale with another set of observations
3. Use of scaling techniques does not ensure creation of a scale
4 Main steps of Index Construction
1. Selecting Items
2. Examining empirical relationships
3. Scoring the index
4. Validating the index
How do we select items for a scale?
1. Face validity
2. Unidimensionality
3. General or Specific
4. Variance - try to select items that divide people equally in terms of the variable, OR select items with different variation.
How do we examine empirical relationships of items on a scale?
1. Bivariate relationships among items: if an item doesn't relate to several other measures, drop it; if two items relate very strongly to each other, keep only one (see the correlation sketch below).
2. Multivariate relationships among items: cross-tabulations, typically done by computer.
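A minimal sketch of the bivariate step, using a hypothetical respondents-by-items matrix; an item whose correlations with the others are all near zero is a candidate to drop, and a pair correlating near 1.0 is a candidate for keeping only one:

```python
import numpy as np

# Hypothetical respondents-by-items matrix of yes/no answers.
items = np.array([
    [1, 1, 0, 1],
    [0, 1, 0, 0],
    [1, 1, 1, 1],
    [0, 0, 0, 1],
])
r = np.corrcoef(items, rowvar=False)  # item-by-item correlation matrix
print(np.round(r, 2))
```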
How do we score an index?
Should we weight items equally or unequally? Weight unequally only for compelling reasons.
How do we handle missing data?
1. If only a few cases, then exclude them.
2. May have grounds for treating missing data as one of the responses
3. Careful analysis of missing data may yield meaning
4. Can assign the middle value or the mean of the range. This is conservative because it reduces the likelihood the index will relate to other variables in the ways you have hypothesized.
5. Assign a proportional value (if a test taker answers 2 out of 4 questions positively, give a score of 2/4 for the missing indicator; sketched below)
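A minimal sketch of option 5, the proportional value, using hypothetical answers where None marks a missing indicator:

```python
def proportional_fill(answers):
    """Replace missing answers (None) with the respondent's own rate
    of positive answers on the items they did complete."""
    answered = [a for a in answers if a is not None]
    rate = sum(answered) / len(answered)  # e.g. 2 of 4 positive -> 0.5
    return [rate if a is None else a for a in answers]

print(proportional_fill([1, 0, None, 1, 0]))  # -> [1, 0, 0.5, 1, 0]
```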
2 ways of validating an index?
1. Item analysis - assessment of each item to see if it makes an independent contribution rather than duplicating another item (see the sketch below).
2. External validation - testing the validity of a measure by examining its relationship to other measures.
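A minimal sketch of item analysis via corrected item-total correlations, one common way to operationalize "independent contribution"; the card doesn't prescribe a statistic, so this choice is an assumption:

```python
import numpy as np

def item_total_correlations(items: np.ndarray) -> np.ndarray:
    """Correlate each item with the total of the OTHER items; values
    near zero suggest the item adds little beyond the rest."""
    totals = items.sum(axis=1)
    return np.array([
        np.corrcoef(items[:, j], totals - items[:, j])[0, 1]
        for j in range(items.shape[1])
    ])
```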