40 Cards in this Set

  • Front
  • Back
Traits
-generalized tendency to behave
-can be found in personality, interests, cognitive abilities, creativity, attitudes, values, and emotions
-construct
-stable
-show individual differences
-constructed from a series of similar behaviors
-validity evidence for trait measures is typically divided into 3 related categories (construct, content, and criterion validity)
Psychological Construct
putting all parts of creativity together creates a psychological construct
How do we measure traits?
-follow a person and observe their behavior
-could ask people how they behave and what their reactions are
problems with measuring traits
time consuming, not ethical, not appropriate for large numbers of people, people may lie, etc.
What variable are we measuring with self report measures or ability tests?
A response variable - the measure is a stimulus and we are directly assessing the response to that stimulus
Given the problems with self-report measures, why do we bother with them?
they are usually found to be related to some behavior of interest, for example...self esteem predicts school performance
External (Empirical) Approach
1. General pool of items
2. Items are selected on the basis of a correlation
Inductive (Internal) Approach
1. General pool of items
2. Items are selected on the basis of psychometric characteristics like reliability (take question out and see if correlation goes up or down)
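The "take a question out and see if the reliability goes up or down" step above can be sketched with coefficient alpha. A minimal sketch with made-up scores (the 6-person, 4-item test is hypothetical; item 4 is deliberately noise):

```python
import numpy as np

def cronbach_alpha(items):
    """Coefficient alpha for an (n_respondents x k_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)          # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)      # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical data: items 1-3 hang together, item 4 does not.
scores = np.array([
    [5, 4, 5, 1],
    [4, 4, 4, 5],
    [2, 1, 2, 3],
    [5, 5, 4, 2],
    [1, 2, 1, 4],
    [3, 3, 3, 1],
])

full = cronbach_alpha(scores)            # alpha with all 4 items
without_4 = cronbach_alpha(scores[:, :3])  # alpha after dropping item 4
```

Since alpha rises when item 4 is dropped, the inductive approach would remove that item from the pool.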
Deductive (Rational) Approach
1. Start with a logical definition of the trait you want to measure
2. Items are written on a rational basis by the psychologist
We create a test that we think measures creativity. How can we be sure?
-we have to establish the validity of the test
-ask the question, "Does the measure really measure what we think it measures?"
Construct Validity
-process of determining whether or not our measure measures what we think it measures
-this process never ends
-experimental process
-approach suggested by Cronbach and Meehl (1955)
Nomological Network
your test...has a positive correlation with...hours of sleep from the night before (example)
Criterion Validity
-a specialized form of construct validity
-Is a measure correlated with what it should be and not correlated with what it shouldn't be?
Convergent Validity
-type of criterion validity
-measure should be correlated with what we expect
-can be a positive correlation
Discriminant Validity
-test does not correlate with measures we think are unrelated
-expecting to find a zero correlation
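The convergent/discriminant logic above can be checked with plain correlations. A minimal sketch with simulated data (the measures and the noise levels are hypothetical, chosen only to illustrate the expected pattern):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500

# Hypothetical scores: a new self-esteem measure, an established self-esteem
# measure (should converge), and shoe size (should be unrelated).
true_esteem = rng.normal(size=n)
new_measure = true_esteem + rng.normal(scale=0.5, size=n)
established = true_esteem + rng.normal(scale=0.5, size=n)
shoe_size = rng.normal(size=n)

convergent = np.corrcoef(new_measure, established)[0, 1]   # expect high
discriminant = np.corrcoef(new_measure, shoe_size)[0, 1]   # expect near zero
```

A high `convergent` value and a near-zero `discriminant` value together are the criterion-validity pattern the cards describe.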
Content Validity Evidence
-specialized form of construct validity
-is the measure covering the content domain we want it to investigate?
-do we have enough questions covering every part of what we are measuring?
-typically involves subjective opinions by expert judges
Face Validity
-similar to content validity
-no data collected, so not really "validity"
-a subjective decision made by those taking a test
Benefit of Face Validity
establishes rapport with the person taking the test
Problem with Face Validity
-questions are easier to fake
-just because a test has high face validity does not mean it has high construct validity
Experimental Evidence
-2 groups of participants
-Example: Group 1 sleeps normally; Group 2 is sleep restricted (limited to 4 hours of sleep)
Reliability Evidence
-If I measure something several times, will I get the same answer each time? (e.g., measuring height in a span of minutes)
What Does a Reliability Coefficient Mean?
-ranges from 0 (no reliability) to 1 (perfect reliability)
-reliability formula: X = T + e
X = observed score
T = true score
e = error - stray away from what you actually know (guessing)
Observed Score
made up of true score plus error
Reliability Coefficient
-the higher this is, the less error there is
-less than .70 is bad
-.70 squared is .49, which means only 49% of the variability around the mean is due to true score
-reliability of .70 to .90 is ok
-reliability of .90 to 1.0 is excellent
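The X = T + e formula above can be made concrete with a simulation. A minimal sketch (the score distributions are made up; classical test theory says reliability equals the proportion of observed-score variance due to true score):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 10_000

# X = T + e: each observed score is a true score plus random error.
T = rng.normal(loc=100, scale=15, size=n)   # true scores
e = rng.normal(loc=0, scale=10, size=n)     # measurement error
X = T + e                                   # observed scores

# Reliability = true-score variance / observed-score variance.
reliability = T.var() / X.var()
# Theoretical value here: 15**2 / (15**2 + 10**2) = 225/325, about .69
```

With less error variance, `reliability` climbs toward 1; with more, it falls toward 0, matching the benchmarks on the card.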
Inter-Rater Evidence
-similarity of the ratings of 2 or more judges
-Example: Q. If we have low inter-rater reliability, what does this say about the construct validity?
-A. Judges are interpreting what they see differently
-problem for construct validity
Test-Retest Evidence
-measure is taken once and then again at a later time
-correlation between 2 times indicates reliability
Problem with test-retest evidence
-knowledge gained from the first assessment may bias the subjects the second time
Equivalent Forms Evidence
-2 similar measures of the same construct
-the higher the correlation between the 2 tests, the more reliable both of them are
Benefit of equivalent forms evidence
you do not have interference from subjects remembering the first test
Internal Consistency Evidence
the way people answer questions in one part of a test should be consistent with the way they answer in another part
Split-Half Reliability
-break the test into 2 parts (odd and even items)
-coefficient alpha: the average of all possible split-half reliabilities
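The odd/even split described above can be sketched in a few lines. This uses the standard Spearman-Brown formula to step the half-length correlation up to a full-length estimate; the simulated item scores are hypothetical:

```python
import numpy as np

def split_half_reliability(items):
    """Odd-even split-half correlation, stepped up with Spearman-Brown."""
    items = np.asarray(items, dtype=float)
    odd = items[:, 0::2].sum(axis=1)    # items 1, 3, 5, ...
    even = items[:, 1::2].sum(axis=1)   # items 2, 4, 6, ...
    r = np.corrcoef(odd, even)[0, 1]    # correlation between the halves
    return 2 * r / (1 + r)              # Spearman-Brown step-up

# Hypothetical 8-item test: every item reflects one latent trait plus noise.
rng = np.random.default_rng(2)
latent = rng.normal(size=(300, 1))
items = latent + rng.normal(scale=1.0, size=(300, 8))
rel = split_half_reliability(items)
```

Because all items tap the same latent trait here, the two halves correlate highly and `rel` lands in the "ok" to "excellent" range from the reliability-coefficient card.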
If your question has low internal consistency, what does this say about the construct validity of the measure?
there is a problem
Data Reduction Techniques
-designed to help you make sense of a large number of variables
-large number of variables are reduced to a smaller number of categories
Factor Analysis
-creating categories based on how things are related to each other
-categories are based on patterns of correlations
-Example: factor analysis of the correlations among 5 tests might reveal two basic clusters (tests 1, 2, 3 and tests 3, 4, 5)
Multidimensional Scaling
-correlations are treated as distances
-more positive correlations meaning closer distances
-more negative correlations meaning farther away
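The correlation-to-distance idea above can be sketched directly. One common transform (an assumption here; other transforms exist) is d = sqrt(2(1 - r)), which maps r = +1 to distance 0 and r = -1 to the maximum distance 2:

```python
import numpy as np

# Hypothetical correlation matrix for 3 measures: the first two correlate
# positively, and both correlate negatively with the third.
corr = np.array([
    [ 1.0,  0.8, -0.4],
    [ 0.8,  1.0, -0.5],
    [-0.4, -0.5,  1.0],
])

# Positive correlation -> small distance; negative -> large distance.
dist = np.sqrt(2 * (1 - corr))
```

An MDS routine would then place the measures in space so that these distances are approximately preserved; measures 1 and 2 end up close together and measure 3 far away.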
The Evidence for a Genetic Influence on Behavior - 3 Sources of Info?
1. Extent to which Monozygotic twins resemble each other more closely than dizygotic twins

2. Extent to which adopted children resemble their adoptive relatives and their biological relatives

3. Extent to which monozygotic twins separated at birth develop greater similarities than they would by chance
Twin Study: Finding and Problem
Finding: Monozygotic twins are more similar in their behaviors than dizygotic twins
Problem: Monozygotic twins are often treated more similarly than dizygotic twins
Adoptive Study: Finding and Problem
Finding: Having a biological parent with a given disorder increases the chances the adopted child will develop the disorder
Problem: Often difficult to track the biological parent
Twins Separated at Birth: Finding and Problem
Finding: Twins separated at birth have far more behavioral similarities than what would be expected by pairing 2 people by chance
Problem: The major problem with this method is that there are not many twins separated at birth
General Problems in Human Differences
1. You must use large samples
2. You often must use self-report measures
3. There are few "scientific" names for constructs or universally accepted means for measuring a construct