63 Cards in this Set
Types of Assessments
• Self-Report Questionnaires (S-Data)
• Observer-Report Data (O-Data)
• Life-Outcome Data (L-Data)
• Test Data (T-Data)
• Ratings
Types of S-Data
• Open-ended
• Forced choice
• Experience sampling
• Questions about experience (e.g., mood)
Open-ended
• fill in the blank
Forced choice
• true-false; Likert rating
Experience Sampling
• electronic paging of research subjects every day at random intervals
S-Data
• Favorite research method: correlation – how different behaviors are related
• If behaviors are related, the assumption is that an underlying trait influences them
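The correlational logic above can be sketched in Python. The two "behaviors" and all scores below are invented for illustration; a high Pearson r between them is what would suggest a common underlying trait.

```python
# Sketch of the correlational method with made-up self-report scores for
# two behaviors across 5 subjects. A high Pearson r between the behaviors
# is taken as evidence of an underlying trait (e.g., extraversion).

def pearson_r(xs, ys):
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

talks_to_strangers = [1, 2, 3, 4, 5]   # hypothetical Likert ratings
enjoys_parties     = [2, 2, 3, 5, 5]

r = pearson_r(talks_to_strangers, enjoys_parties)  # high r -> related behaviors
```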
What are the Problems with S-Data
• Response acquiescence
• Response deviation
• Social desirability
Response Acquiescence
• agreeing with an item regardless of what it asks
Response Deviation
• tendency to give an uncommon response
Social Desirability
• tendency to answer in the most socially desirable direction
Ratings
• getting data from others who know the person, e.g., letters of recommendation
• Formal rating scales that clinicians use
• Subject to bias
Problems with Ratings
• Error of leniency
• Error of central tendency
• Halo effect
Definition of Error of Leniency
• tendency to rate people higher than they deserve
Definition of Error of central tendency
• using only the middle range of ratings and avoiding extremes
Definition of the Halo Effect
• if one aspect is positive, you generalize to all other characteristics
O-Data
• Use of multiple observers allows evaluation of the degree of agreement – inter-rater reliability
• Two types of choices
What are the Two types of choices for O-Data
1. Use of professional or intimate assessors
2. Naturalistic or artificial setting
L-Data
• Data from events, activities & outcomes in a person’s life
• Matter of public record
• Caspi’s work
Caspi’s Work
• Interviewed mothers of young children & constructed a personality scale to measure ill-temperedness
• In adulthood he gathered life-outcome data
Caspi’s Results of life outcome data with men
• a significant correlation between tantrums and life outcomes for men (lower rank in the military, erratic work lives, 46% divorced at 40 vs. 22% in the low-tantrum group)
Caspi’s results of Life outcome data with women
• no difference in work lives
• High-tantrum women tended to “marry down” in job status (40% vs. 24% of low-tantrum women)
• Twice as many were divorced at 40 as low-tantrum women
T-Data
• Standardized tests
• To see if different people react differently to an identical situation
• Designed to elicit behaviors that serve as indicators of personality variables
Examples of T-Data
• Henry Murray’s bridge-building test
• Physiological measures
• Projectives
• MMPI
Henry Murray’s bridge-building test
• Person is asked to build a bridge with 2 assistants & tools, etc.
• Assistants play roles (one can’t follow the directions; one has his own ideas)
• Assesses tolerance of frustration and performance under adversity
Physiological measures
• sympathetic nervous system activity, sexual arousal
• Patrick’s work
Patrick’s work
• (1994, 2005)
• Showed psychopaths jailed for violent crimes fear-producing stimuli & measured their eyeblink startle response
• Found that they do not feel fear, anxiety, or guilt the way normal people do
Projectives
• Ambiguous stimulus
• Test taker is asked to impose structure on the stimulus by describing what he or she sees
Types of Projective Tests
• Rorschach Inkblot Test
• Thematic Apperception Test
• Sentence completion test
• Draw-a-picture
• Draw-a-person
What is the Major Criticism of Projective Tests
• low validity and reliability
Rorschach Inkblot Test
• (1921)
• 2nd most commonly used test in forensic assessment, after the MMPI
• 2nd most widely used test for personality assessment
Who were the Creators of the MMPI
• Hathaway & McKinley (1942)
How many Items are on the MMPI
• 566
MMPI
• Raw scores are converted to T scores: mean = 50, SD = 10
• 10 clinical scales
• 4 validity scales
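The T-score conversion is a linear rescaling of a z score. A minimal sketch, with a hypothetical clinical scale whose normative mean (12) and SD (4) are invented for illustration:

```python
# T-score conversion: standardize the raw score against the normative
# sample, then rescale so the distribution has mean 50 and SD 10.

def to_t_score(raw, norm_mean, norm_sd):
    z = (raw - norm_mean) / norm_sd   # z score relative to the norm group
    return 50 + 10 * z                # T scale: mean = 50, SD = 10

# Hypothetical scale with normative mean 12 and SD 4:
t_mean = to_t_score(12, 12, 4)   # at the norm mean -> T = 50.0
t_high = to_t_score(20, 12, 4)   # two SDs above the mean -> T = 70.0
```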
What are the Validity scales of the MMPI
• “can not say” scale
• L-scale
• F-scale
• K-scale
Can Not Say
• if more than 30 items are left unanswered, the profile is impaired – a sign of resistance, avoidance, or paranoia
L-scale
• 15 items which, if answered false, identify naïve, idealistic, or defensive people
F-scale
• (faking bad) – 64 items – indicates confusion, inability to read, antisocial tendencies, or serious pathology
K-scale
• 30 items – faking good in a sophisticated manner – subtle defensiveness
Definition of Validity
• the ability of the test to measure what it claims to measure
What are the Types of Validity?
1. Face validity
2. Predictive or criterion validity
3. Convergent validity
4. Discriminant validity
5. Construct validity
Definition of Face Validity
• whether a test, on the surface, appears to measure what it is supposed to measure
Manipulativeness scale
• Example of face validity
Definition of Predictive or criterion validity
• does the test predict criteria external to the test?
Examples of Predictive or criterion validity
• a sensation-seeking scale should predict which people actually take risks to obtain thrills
• one such study found sensation seeking predicts gambling of all kinds
Definition of Convergent validity
• whether a test correlates with other measures that it should (measures of the same thing)
Example of Convergent Validity
• a self-report measure of tolerance corresponds well with peer judgments of tolerance
Definition of Discriminant Validity
• refers to the measure’s ability to be specific – it should not correlate with measures of unrelated traits
Example of Discriminant Validity
• a life satisfaction scale should not be the same as a scale that measures social desirability
Construct validity
• a broad category that subsumes face, predictive, convergent, and discriminant validity
• Personality variables are theoretical constructs
• Construct validity is the ability of the test to measure what it claims
What are the main criteria to measure construct validity
1. Convergent validity
2. Discriminant validity
Convergent Validity (Criteria)
• do all the different ways of measuring the same disposition converge, i.e. correlate highly?
Discriminant Validity (Criteria)
• do measures of different dispositions diverge, i.e. show low correlations with one another?
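The two criteria can be illustrated with invented data: two measures of the same disposition (self- and peer report of anxiety) should correlate highly, while a measure of a different disposition (a vocabulary test) should correlate weakly with them. All scores are made up.

```python
# Convergent vs. discriminant criteria with made-up scores for six subjects.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

self_report_anxiety = [10, 14, 9, 16, 12, 18]
peer_report_anxiety = [11, 13, 10, 15, 13, 17]
vocabulary_score    = [37, 39, 33, 37, 33, 31]

# Same disposition, different methods -> should converge (high r)
convergent = pearson_r(self_report_anxiety, peer_report_anxiety)

# Different dispositions -> should diverge (r near 0)
discriminant = pearson_r(self_report_anxiety, vocabulary_score)
```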
Reliability
• the consistency of results
• The degree to which an obtained measure represents the true level of the trait being measured
• Repeated measurement is a way to estimate reliability
• Inter-rater reliability
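The repeated-measurement idea can be sketched as a test-retest correlation: give the same questionnaire to the same subjects twice and correlate the two sets of scores. The six subjects and their scores below are invented.

```python
# Test-retest reliability sketch: the same questionnaire administered to
# the same six subjects twice; the correlation between administrations
# estimates the measure's reliability.

def correlate(xs, ys):
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    num = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    den = (sum((x - mean_x) ** 2 for x in xs)
           * sum((y - mean_y) ** 2 for y in ys)) ** 0.5
    return num / den

time1 = [22, 30, 18, 27, 35, 25]   # first administration (made up)
time2 = [24, 29, 17, 28, 33, 26]   # second administration, weeks later

test_retest_r = correlate(time1, time2)  # near 1.0 -> consistent results
```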
What is Inter-rater Reliability?
• have different people do the assessment (O-Data)
How to Construct a Self-Report
• 3 ways:
1. Content validation/rational method
2. Factor analytic method
3. Empirical method/empirical keying
What is Content validation/rational method
• does the questionnaire have items that measure or identify all the habits or symptoms of the disposition being measured?
• Start with a theory or definition and, in a rational manner, construct items that relate to all parts of the theory
Example of content validation/rational method
• depression inventories have items that measure all symptoms of depression
Issues with Content Validation
• Don’t really know what the test will actually measure
• Need to establish external validity: when compared to other established sources, does the test measure what it claims?
• Transparency
Definition of Transparency
• do the items reveal what traits are being measured?
Factor Analytic
• based on statistics
• Factor analysis
• The property that makes the items similar is called a factor
What is Factor Analysis
• a statistical method for finding groups of related traits
• calculation of correlation coefficients between each item and every other item
Steps in Factor Analytic
• Start with a long list of objective items
• Administer them to a large number of people
• Do a factor analysis
• Most items will not correlate highly with many others – throw those out
• Items that correlate highly or co-occur will form groups, or factors
• Name the factors
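The steps above can be sketched as a toy. Note this is only a stand-in: real factor analysis extracts factors from the full correlation matrix, whereas this sketch simply groups items whose pairwise correlation exceeds a threshold. The four items and all responses are invented.

```python
# Toy stand-in for the factor-analytic steps: compute item-item
# correlations, then group items that correlate highly. Columns are six
# subjects; items 0-1 were written to track sociability, items 2-3 anxiety.

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

items = [
    [5, 4, 2, 5, 3, 1],   # "I am talkative"
    [5, 5, 1, 4, 3, 2],   # "I enjoy meeting people"
    [1, 2, 5, 2, 4, 5],   # "I worry a lot"
    [2, 1, 5, 1, 4, 4],   # "I am easily stressed"
]

# Greedily add each item to the first group whose lead item it correlates
# with; otherwise start a new group ("factor").
THRESHOLD = 0.7
groups = []
for i, item in enumerate(items):
    for group in groups:
        if pearson_r(items[group[0]], item) > THRESHOLD:
            group.append(i)
            break
    else:
        groups.append([i])
# groups -> [[0, 1], [2, 3]]: two candidate factors to be named
```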
Empirical Method/Empirical keying
• Constructing a test by using research data
• Gather a large number of items
• Use a large subject pool already divided into the groups you wish to detect with your test, e.g., diagnostic categories
• Need a comparison group, e.g., clinical populations and normals
• Administer the items
• Compare answers given by the 2 groups
• Keep items that differentiate between the 2 groups
• Items that correspond to different categories become the scales of the test