50 Cards in this Set
- Front
- Back
- 3rd side (hint)
Selection Assessment |
Standardized measure of a sample of a person's behavior, designed to measure predictor constructs (KSAOs or competencies). |
|
|
Standardization |
Refers to uniformity in procedures used in administering and scoring an assessment. (a behavior sample may or may not be representative of the population of behavior) |
GPA is not standardized. A 3.0 at Yale is not the same as a 3.0 at UWRF. |
|
Measurement |
The assignment of values to observations according to some defined system. |
|
|
Error |
Results when the values assigned do not adequately represent the person's true standing on the construct being measured. |
|
|
Freedom from error depends on..... |
Both the measure itself AND how it is used. |
|
|
Reliability |
Refers to the consistency or stability of a measurement technique. |
|
|
Test-retest reliability |
Estimated by comparing subjects' scores on two administrations of an assessment (temporal stability). |
|
|
Alternate/equivalent-forms reliability |
Estimated by looking at the correlation between two forms of the same test that are supposed to yield identical scores (form stability). |
|
|
Internal-consistency reliability |
Refers to the degree to which items within a test correlate with one another (item stability). |
|
|
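Internal consistency is commonly estimated with Cronbach's alpha, which combines the item intercorrelations into one coefficient. A minimal sketch with hypothetical item scores (all data invented for illustration):

```python
from statistics import pvariance

# Hypothetical item scores: rows = respondents, columns = 4 test items.
scores = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 2, 3, 2],
    [4, 4, 5, 4],
]

k = len(scores[0])                                 # number of items
item_vars = [pvariance(col) for col in zip(*scores)]
total_var = pvariance([sum(row) for row in scores])

# Cronbach's alpha: (k / (k - 1)) * (1 - sum of item variances / total variance)
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))  # → 0.93
```

Higher alpha means the items "hang together" (item stability); removing bad items, as a later card notes, is one way to raise it.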
Inter-rater (scorer) reliability |
Refers to the degree of agreement among assessments provided by two or more raters (rater stability). |
|
|
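One common index of rater agreement is Cohen's kappa, which corrects raw agreement for chance. A minimal sketch with made-up pass/fail ratings from two interviewers (all data hypothetical):

```python
from collections import Counter

# Hypothetical pass/fail ratings by two interviewers for ten applicants.
rater_a = ["pass", "pass", "fail", "pass", "fail",
           "pass", "fail", "pass", "pass", "fail"]
rater_b = ["pass", "fail", "fail", "pass", "fail",
           "pass", "pass", "pass", "pass", "fail"]

n = len(rater_a)
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement: product of each rater's marginal proportions, summed
# over the rating categories.
ca, cb = Counter(rater_a), Counter(rater_b)
expected = sum((ca[c] / n) * (cb[c] / n) for c in ca)

# Cohen's kappa: agreement beyond chance, scaled by maximum possible.
kappa = (observed - expected) / (1 - expected)
print(round(kappa, 2))  # → 0.58
```

Raw agreement here is .80, but kappa is lower because two raters marking mostly "pass" would agree often by chance alone.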
What is acceptable when interpreting reliability? |
- Research purposes: .7 to .8
- Decision making: .9 is ideal |
|
|
Ways to improve reliability |
- Standardization of procedures
- Increase assessment length
- Remove bad items/questions |
|
|
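The second improvement above (lengthening the assessment) can be quantified with the Spearman-Brown prophecy formula, which predicts reliability when a test is lengthened n-fold with comparable items. A minimal sketch (the .70 starting value is just an example):

```python
def spearman_brown(r_old: float, n: float) -> float:
    """Predicted reliability when a test is lengthened n-fold
    (Spearman-Brown prophecy formula)."""
    return (n * r_old) / (1 + (n - 1) * r_old)

# Doubling a test whose current reliability is .70:
print(round(spearman_brown(0.70, 2), 2))  # → 0.82
```

So a test acceptable for research purposes (.70) approaches the decision-making standard once doubled in length.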
Validity |
Refers to the ability of a technique to measure what it is designed to measure (job relatedness). |
measures what it's supposed to measure. |
|
Construct Validity |
The extent to which there is evidence that an assessment measures a particular hypothetical construct. |
|
|
Unitarian Validity |
The view that validity is a single, unified concept: all forms of validity evidence ultimately bear on construct validity. |
|
|
Trinitarian Validity |
The view that validity comprises three distinct types: content, criterion-related, and construct validity. |
|
|
Criterion-related Validity |
Refers to the degree to which assessment scores correlate with scores on some independent criterion. |
is assessed using predictive or concurrent designs |
|
Content Validity |
Refers to the degree to which assessments representatively sample the domain they are designed to cover. |
- Rational, non-empirical method
- Based on opinions of SMEs
- Part of the basis for work-sample assessments
- Subjective |
|
Face Validity |
Refers to the degree to which items in an assessment appear to be appropriate for the purpose of the assessment. |
- Based on the opinions of test-takers
- Importance of face validity
- Subjective |
|
Background Sources (measures past behavior) |
- Application
- Biodata
- References and letters
- Background checks |
|
|
Selection Testing (measures current behavior) |
- Ability testing
- Skills testing
- Personality testing (typical, clinical, compound)
- Drug testing |
|
|
Selection Assessments that measure past and present behavior |
Interviews |
|
|
Considerations for Selection Assessments |
- Validity
- Fairness (specifically, adverse impact)
- Cost |
|
|
Information on an application should be job related for what 2 reasons? |
1. Improve validity
2. Ensure fairness |
|
|
What can applications ask about under a BFOQ (bona fide occupational qualification)? |
Standards necessary to perform a job successfully |
Airline pilot's age |
|
Weighted Application Blank |
A scoring system where application responses that are most predictive of job performance are given greater consideration in the employment decision. |
|
|
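The weighted-application-blank idea can be sketched as a lookup of empirically derived weights. The items, response options, and point values below are all invented for illustration; in practice the weights come from a validation study:

```python
# Hypothetical scoring weights: each application response maps to points
# proportional to how strongly it predicted job performance in past samples.
WEIGHTS = {
    "years_experience": {"0-1": 0, "2-4": 2, "5+": 3},
    "referral_source":  {"walk-in": 0, "job board": 1, "employee referral": 2},
    "certifications":   {"none": 0, "one": 1, "two or more": 2},
}

def wab_score(application: dict) -> int:
    """Total the empirical weights for an applicant's responses."""
    return sum(WEIGHTS[item][answer] for item, answer in application.items())

applicant = {
    "years_experience": "5+",
    "referral_source": "employee referral",
    "certifications": "one",
}
print(wab_score(applicant))  # → 6; higher totals get greater consideration
```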
Biodata (Biography) |
Expanded application blanks that work like test items |
|
|
Biodata |
Items are empirically keyed (i.e., have established correlations with job performance). |
|
|
Biodata |
Generally small adverse impact, but depends on questions |
|
|
Biodata |
Impression management is a problem for biodata |
|
|
Suggestions for good biodata items |
1. Should be job related and verifiable
2. Should not be invasive or subject to legal challenge |
|
|
References & Letters of Recommendation (things to remember) |
- Leniency is the norm
- Tend to have low reliability and validity for predicting performance
- Validity is higher when confidentiality is assured |
Even Hitler could have found 3 positive references! |
|
Other Background Sources (GPA) |
- Mixed results on validity
- Lack of standardization
- Context matters
- Large adverse impact (White vs. Black GPA differences; high potential for discrimination) |
|
|
Other Background Sources (Criminal and Credit Checks) |
- Relatively low validity
- Large adverse impact |
|
|
Unstructured Interviews |
- Series of non-standardized questions
- Most commonly used method
- Very off script
- Interviewers' personal biases go unchecked
- Low validity and numerous problems |
|
|
Structured Interviews |
- Valid predictors of job performance; generally lower in adverse impact
- Higher validity and reliability
- Job-related questions that are scored
- Orally administered test
- Higher cost |
|
|
Structured Interviews |
- Based on job analysis (job-related)
- Standardization in questions asked, scoring strategies, and administration procedures (i.e., panel or single interviewer) |
More biases can arise on a panel (a dominant personality can persuade others to agree with them). |
|
Situational Questions |
Future-focused |
|
|
Behavioral Questions |
Past-focused |
|
|
Future-focused Questions |
Hypothetical or scenario-type questions in which the interviewee explains how they would handle a certain situation if it arose. |
|
|
Past-focused Questions |
Non-hypothetical questions in which the interviewee explains how they handled a certain situation in the past. |
|
|
Ability Tests |
- High validity
- Low developmental costs
- High adverse impact |
|
|
Types of Ability Tests |
- General cognitive ability (e.g., Wonderlic): most generalizable, high validity
- Specific cognitive abilities (e.g., mechanical): high validity
- Job knowledge (bar exams, licensing exams, safety services): high validity
- Situational judgment
- Physical abilities (strength, reaction time, visual acuity) |
|
|
Application Skill Tests |
- High validity
- Lower adverse impact
- High developmental costs |
|
|
Types of Application Skill Tests |
- Work samples (use work as the interview)
- Situational exercises (e.g., in-baskets, LGDs)
- Assessment centers (costly; used for executive/CEO selection) |
|
|
Personality Tests |
- Lower validity
- Lower adverse impact
- Potential problems with self-deception and impression management
- Typical (normal) personality traits (e.g., FFM)
- Atypical (abnormal) personality traits (e.g., dark triad, clinical diagnoses) |
|
|
Integrity Tests |
- Originally used to measure theft
- Moderate validity
- Lower adverse impact
- Physiological vs. written tests
- Overt vs. covert (personality-based) |
|
|
Overt |
Direct questions (e.g., "Do you use drugs?") |
|
|
Covert |
World-view questions (e.g., "How many people in the world do you think use drugs?") |
|
|
Drug Tests |
- Moderate validity
- Moderate to low cost
- Moderate to significant adverse impact
- Generally favorable applicant reactions if the process is handled appropriately |
|