494 Cards in this Set
any variable used to forecast a criterion.
Predictor:
Assessing the quality of predictors

we judge the goodness of our measuring devices by two criteria - reliability & validity
Psychometric criteria:
reliability and validity. Literally the measurement of properties of the mind. The standard used to measure the quality of psychological assessments.
Psychometric criteria:
refers to the consistency, stability, or equivalence of a measure. A standard for evaluating tests that refers to the consistency, stability, or equivalence of test scores. Often contrasted with validity.
Reliability:
perhaps the simplest assessment of a measuring device's reliability: measure something at different times and compare the scores. Generally, shorter time intervals between administrations yield higher scores. A type of reliability that reveals the stability of test scores upon repeated applications of the test.
Test-Retest Reliability:

the correlation of the two test scores is a coefficient of stability, which reflects the stability of the test over time.
Coefficient of stability:
the second type of reliability, also known as parallel-forms reliability: two forms of a test measure the same attribute, and both forms are given to the same group of people. A type of reliability that reveals the equivalence of test scores between two versions or forms of the test.
Equivalent-Form Reliability:

the correlation of the two sets of scores is the coefficient of equivalence, which reflects the extent to which the two forms are equivalent measures of the same concept.
Coefficient of equivalence:
has two types; reveals the extent to which a test has homogeneous content. A type of reliability that reveals the homogeneity of the items comprising a test.
Internal-Consistency Reliability:

Split-half reliability:

Cronbach’s alpha or Kuder-Richardson 20:
the test is given to a group of people, but in scoring, the items are divided in half, into odd- and even-numbered items. Each person thus gets two scores. If the test is internally consistent, there should be a high degree of similarity between the two sets of responses. The longer the test, the greater the reliability.
Split-half reliability:
the second technique of internal-consistency reliability is to compute one of these two coefficients; the two are similar, though not statistically identical. Conceptually, a 100-item test consists of 100 minitests: the response to each item is correlated with the response to every other item, forming a matrix of correlations. If the test is homogeneous (its content is similar), reliability is high; if it is heterogeneous (its content covers a wide range of concepts), the test is not internally consistent.
Cronbach’s alpha or Kuder-Richardson 20:
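Cronbach's alpha can be computed directly from the item variances and the variance of total scores; a sketch with made-up Likert data:

```python
from statistics import pvariance

# Hypothetical 4-item Likert responses for five people (rows = people).
responses = [
    [4, 5, 4, 4],
    [2, 3, 2, 3],
    [5, 5, 4, 5],
    [3, 2, 3, 3],
    [1, 2, 2, 1],
]

k = len(responses[0])
items = list(zip(*responses))             # one tuple of scores per item
totals = [sum(row) for row in responses]  # each person's total score

# alpha = k/(k-1) * (1 - sum of item variances / variance of totals)
item_var = sum(pvariance(item) for item in items)
alpha = (k / (k - 1)) * (1 - item_var / pvariance(totals))
# Highly intercorrelated items yield alpha close to 1.
```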
a type of reliability that reveals the degree of agreement among the assessments of two or more raters. Also called conspect reliability.
Inter-rater reliability:
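One common index of inter-rater agreement for categorical judgments is Cohen's kappa, which corrects raw agreement for chance agreement; a sketch with hypothetical ratings:

```python
from collections import Counter

# Hypothetical pass/fail judgments from two interviewers on ten candidates.
rater_a = ["pass", "pass", "fail", "pass", "fail", "pass", "fail", "fail", "pass", "pass"]
rater_b = ["pass", "fail", "fail", "pass", "fail", "pass", "fail", "pass", "pass", "pass"]

n = len(rater_a)
# Raw proportion of cases on which the two raters agree.
observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n

# Chance agreement from each rater's marginal category frequencies.
freq_a, freq_b = Counter(rater_a), Counter(rater_b)
expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / n**2

# Kappa rescales observed agreement relative to chance agreement.
kappa = (observed - expected) / (1 - expected)
```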
refers to accuracy. A standard for evaluating tests that refers to the accuracy or appropriateness of drawing inferences from test scores. Often contrasted with reliability.
Validity:
theoretical concept we propose to explain aspects of behavior
Construct:
degree to which a test is an accurate and faithful measure of the construct it purports to measure
Construct validity:
correlation coefficients reflect the degree to which these scores converge – or come together – in assessing a common concept
Convergent validity coefficients:
correlation coefficients that should be very low, separating the measure from concepts unrelated to what we want to measure. They reflect the degree to which these scores diverge from each other in assessing unrelated concepts.
Divergent validity coefficients:
the degree to which a test forecasts or is statistically related to a criterion. Refers to how much a predictor relates to a criterion.
Criterion-Related Validity:
used to diagnose the existing status of some criterion. Concerned with how well a predictor can predict a criterion at the same time, or concurrently. Example: predicting current student GPA on the basis of a test score.
Concurrent criterion-related validity:
used to forecast future status. Collect predictor information and use it to forecast future performance. Example: high school GPA as an indicator of college performance.
Predictive criterion-related validity:
a statistical index (often expressed as a correlation coefficient) that reveals the degree of association between two variables. Often used in the context of prediction. When predictor scores are correlated with criterion data, the resulting correlation is this.
Validity coefficient:
degree to which subject matter experts agree that the items in a test are a representative sample of the domain of knowledge the test purports to measure. Degree to which predictor covers a representative sample of the behavior being assessed.
Content Validity:
appearance that items in a test are appropriate for the intended use of the test by the individuals who take the test. Concerned with appearance of test items.
Face Validity:
Predictor development

Two dimensions to classify predictors
1. whether the predictor seeks to measure directly the underlying psychological construct in question (e.g., mechanical comprehension), or whether it seeks to measure a sample of the behavior to be exhibited on the job.
2. whether it seeks to measure something about the individual currently or something about the individual in the past.
Psychological tests and inventories

Test v. inventory:
in a test the answers are either right or wrong, but in an inventory there are no right or wrong answers.
method of assessment in which the responses to questions are recorded and interpreted but are not evaluated in terms of their correctness, as in a vocational interest inventory.
Inventory:
Types of tests
Speed versus Power Tests
Speed: type of test that has a precise time limit; a person's score on the test is the number of items attempted in the time period. Often contrasted with a power test.

Power: type of test that usually does not have a precise time limit; a person's score on the test is the number of items answered correctly. Often contrasted with a speed test.
type of test that has a precise time limit; a person's score on the test is the number of items attempted in the time period. Often contrasted with a power test.
Speed:
type of test that usually does not have a precise time limit; a person's score on the test is the number of items answered correctly. Often contrasted with a speed test.
Power:
Individual versus Group Tests
Individual: type of test that is administered to one test taker at a time. Often contrasted with a group test.
Group: type of test that is administered to more than one test taker at a time. Often contrasted with an individual test.
type of test that is administered to one test taker at a time. Often contrasted with a group test.
Individual:
type of test that is administered to more than one test taker at a time. Often contrasted with an individual test.
Group:
Paper-and-Pencil versus Performance Tests
Paper-and-pencil: method of assessment in which the responses to questions are recorded on a piece of paper.
Performance: type of test that requires test taker to exhibit physical skill in the manipulation of objects, as in a typing test.
method of assessment in which the responses to questions are recorded on a piece of paper.
Paper-and-pencil:
type of test that requires test taker to exhibit physical skill in the manipulation of objects, as in a typing test.
Performance:
Ethical standards in testing

issued guidelines and user qualifications to ensure that tests are administered and interpreted correctly.
APA code of professional ethics:
the test user must sometimes be a licensed professional psychologist, especially in clinical psychology; to prevent misuse and maintain test security, restrictions are placed on who has access to tests.
Test user qualifications:
condition pertaining to the asking of questions that are unrelated to the assessment's intent or are inherently intrusive.
Invasion of privacy:
condition associated with testing pertaining to which parties have access to test results.
Confidentiality:
Sources of information about testing

classic set of reference books in psychology that provide reviews and critiques of published tests in the public domain.
Mental Measurements Yearbook (MMY):
a less detailed book that resembles a bibliography and helps locate tests in the MMY.
Tests in Print VII:
tests classified according to content.
Test content:
no singular/standard means to assess it.
Intelligence tests:
symbol for "general mental ability" which has been found to be predictive of success in jobs
“g”:
proposed a triarchic (three-part) theory of intelligence: academic intelligence represents what intelligence tests typically measure, such as fluency with words and numbers; practical intelligence is the intelligence needed to be competent in the everyday world and is not highly related to academic intelligence; and creative intelligence pertains to the ability to produce work that is both novel (original or unexpected) and appropriate (useful), as in writing, art, and advertising.
Sternberg’s triarchic theory of intelligence:
require the person to recognize which mechanical principle is suggested by a test item; the underlying concepts measured include sound, heat conductance, velocity, gravity, and force. Example: the Bennett Mechanical Comprehension Test.
Mechanical aptitude tests:
do not have right or wrong answers – test takers answer how much they agree with certain statements.
Personality inventories:
predicated upon 16 personality types – each type is created by person's status on four bipolar dimensions: extraversion – introversion, sensing – intuition, thinking – feeling, and judgment – perception.
Myers-Briggs Type Indicator:
theory that defines personality in terms of five major factors: neuroticism, extraversion, openness to experience, agreeableness, and conscientiousness. Also called the "five factor" theory of personality.
Big five theory of personality:
person's characteristic level of stability vs instability.
Neuroticism:
tendency to be sociable, assertive, active, talkative, energetic, and outgoing.
Extraversion:
the disposition to be curious, imaginative, and unconventional.
Openness to experience:
disposition to be cooperative, helpful, and easy to get along with.
Agreeableness:
disposition to be purposeful, determined, organized, and controlled.
Conscientiousness:
a general personality factor reflecting the ability to cope, parallel to the g factor in intelligence.
“p-factor”:
type of test that purports to assess a candidate's honesty or character.
Integrity tests:
the job applicant clearly understands that the intent of the test is to assess integrity. Two sections: one deals with attitudes toward theft and other forms of dishonesty; the second deals with admissions of theft and other illegal activities, such as dollar amounts stolen.
Overt integrity tests:
makes no reference to theft. Contain conventional personality assessment items that have been found to be predictive of theft.
Personality-based measures:
total set of physical abilities may be reduced to three major constructs: strength, endurance, and movement quality.
Physical abilities:
type of test that describes a problem to the test taker and requires the test taker to rate various possible solutions in terms of their feasibility or applicability.
Situational judgment tests:
CAT – form of assessment using a computer in which the questions have been precalibrated in terms of difficulty, and the examinee's response (right/wrong) to one question determines the selection of the next question.
Computerized adaptive testing:
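The core CAT loop can be sketched without any real IRT machinery: a correct answer raises the difficulty of the next item, a wrong answer lowers it, and the step size shrinks so the estimate converges (all names and numbers here are illustrative, not a real scoring model):

```python
# A minimal sketch of adaptive item selection: a correct answer raises
# the target difficulty, a wrong answer lowers it, and the step size
# shrinks each round so the estimate homes in on the examinee's level.

def next_difficulty(current, correct, step):
    """Pick the difficulty of the next item from the last response."""
    return current + step if correct else current - step

def run_cat(answers, start=0.0, step=4.0):
    """Walk through a fixed sequence of right/wrong responses."""
    difficulty = start
    for correct in answers:
        difficulty = next_difficulty(difficulty, correct, step)
        step /= 2  # halve the adjustment each round
    return difficulty

# Right, right, wrong, right -> estimate settles between the extremes.
estimate = run_cat([True, True, False, True])
```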
Interviews

format for the job interview in which the questions are different across candidates. Often contrasted with structured.
Unstructured:
format for job interview in which the questions are consistent across all candidates. Often contrasted with unstructured.
Structured:
type of job interview in which candidates are presented with a problem and asked how they would respond to it.
Situational interviews:
Experience-based v. situational questions
Experience-based: think about time you had to motivate employee to perform task that he disliked – how did you handle it?

Situational: suppose you were working with an employee who you knew greatly disliked a job task, what would you do to motivate?
people generally place confidence in highly fallible interview judgments: that is, we are not good judges of people, but we think we are.
“Illusion of validity”:
technique for assessing job candidates in a specific location using a series of structured, group-oriented exercises that are evaluated by raters.
Assessment centers:
type of personnel selection test in which the candidate demonstrates proficiency on a task representative of the work performed in the job.
Work samples:
fidelity refers to the level of realism in the assessment. Literal description of a work sample is that the candidate is asked to perform a representative sample of the work done on the job, such as using a word processor, driving a forklift, etc.
High-fidelity simulations:
method of assessment in which examinees are presented with a problem and asked how they would respond to it.
Situational exercises:
mirror only part of the job; they present applicants with only a description of a work problem and require them to describe how they would deal with it.
Low-fidelity simulations:
carefully designed letters, memos, brief reports, etc. that require the applicant's immediate attention and response. The applicant goes through the contents of the basket and takes the appropriate action to solve the problems presented, such as making a phone call. Observers score the applicant on such factors as productivity and problem-solving effectiveness.
In-basket Exercise:
LGD – a group of applicants, normally two to eight, engage in a job-related discussion in which no spokesperson or group leader has been named. Raters observe and assess each applicant on factors such as prominence, goal facilitation, and sociability.
Leaderless Group Discussion:
method of assessing individuals in which info pertaining to past activities, interests, and behaviors in their lives is considered.
Biographical information:
commonly used and least valid
Letters of recommendation:
method of assessment typically used to detect illicit drug use by the candidate.
Drug testing:

Screening test

Confirmation test
New or controversial methods of assessment
instrument that assesses an individual's physiological responses that supposedly indicate false answers to questions
Polygraphy or Lie Detection:
method of assessment in which characteristics of a person's handwriting are evaluated and interpreted.
Graphology:
construct that reflects a person's capacity to manage emotional responses in social situations.
Tests of Emotional Intelligence:
Overview and evaluation of predictors

refers to the ability of the predictor to forecast criterion performance accurately. Many authorities argue that validity is the predominant evaluative standard in judging selection methods; however, the relevance of the other three standards is also substantial.
Validity:
refers to the ability of predictor to render unbiased predictions of job success across applicants in various subgroups of gender, race, age, and so on.
Fairness:
refers to whether the selection method can be applied across full range of jobs. Some predictors have wide applicability in that they appear well suited for diverse range of jobs; other methods have particular limitations that affect it.
Applicability:
selection methods differ in cost, which has a direct bearing on their overall value.
Cost:
Chapter Summary:
Predictors are variables, such as tests, interviews, and letters of recommendation, used to forecast or predict a criterion.

High quality predictors must manifest two psychometric standards – reliability and validity.

Psychological tests and inventories have been used to predict relevant workplace criteria for more than 100 years.

Psychological assessment is a big business. There are many publishers of psychological tests used to assess candidates' suitability for employment.

The most commonly used predictors are tests of general mental ability, personality inventories, aptitude tests, work samples, interviews, and letters of recommendation.

Predictors can be evaluated in terms of their validity (accuracy), fairness, cost, and applicability.

Online testing is a major trend in psychological assessment.

Controversial methods of prediction include the polygraph, graphology, and tests of emotional intelligence.

There are broad cross-cultural differences in predictors used to evaluate job candidates.
Interview
Reasons for Testing

*some of the more commonly cited reasons for testing are:
-testing leads to savings in the decision-making process

-the costs of making a wrong decision are high

-the job requires attributes that are hard to develop or change

-hard-to-get information can be obtained more easily and efficiently

-individuals are treated consistently

*standardized testing system allows for everyone to be treated the same

-there are a lot of applicants
What is a Test?
standardized sample of behavior
*what is measured
-content
*conditions of measurement
-how the observations are made
*interpretation of responses
-how it is scored
Terms and Concepts
-a timed test; the more speeded a test, the harder it is for an applicant to get through all of the items

-they are usually all the same difficulty level
*speeded tests
-untimed test that becomes more difficult as you go through it
*power tests
-timed test that gets difficult as you go through

-i.e. SAT
*speeded power test
individual tests are harder to administer

-an individual or group completes the test
*individual vs. group tests
-test completed using paper and pencil
*paper-and-pencil
-testing done on a computer

-i.e. GRE
*computer adaptive (based) testing
Ethical Standards in Testing
-applies to all psychologists who interact with the public, but primarily to counseling
*APA code of professional ethics
-this is who is administering the test or has access to the test
*Test user qualifications
-you should never reveal too much information
*invasion of privacy – reveal more information than necessary
-i.e. co-workers should not have access to your scores
*confidentiality – who should have access
-there should always be some specification on how long records will be kept
*retention of records
Sources of Information about Testing
*Mental Measurements Yearbook (MMY)

*Tests in Print VII
Predictors & Tests
are standardized measures of characteristics, skills, abilities, etc.
*Tests
are measures that are used to make predictions
-predictors

>any measure (test/interview) can potentially be used to make predictions
>actual criterion versus conceptual criterion
-these are all actual criterion tests
*Good prediction requires good measurement!
*Distinction b/w tests and testing

-Tests: measurement instruments

-Testing: using of measurements to make decisions
Evaluation of Predictors

*a four standard approach to evaluating personnel selection methods:
1. validity and reliability

>validity: how well does this test assessment perform?

>reliability: consistency or stability of assessment

2. fairness of assessment

>i.e. cognitive abilities test

3. applicability/practicality

4. cost
Psychometric Criteria for Tests: Reliability
*consistency of measurement:

-occasions

>stability from one test event to another test event

-sample of items

-items within a test

-within/between test scorers

>is there a reliable conclusion about what kind of candidate a person is

*importance of reliability

-reliability sets the upper bound of validity

-ranges b/w 0 and 1; the closer to 1, the more reliable

-to get the upper bound on validity, take the square root of the product of the predictor and criterion reliabilities

-e.g., if both reliabilities are .81, the highest validity correlation you can get is .81

-the higher the reliability, the better
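The ceiling that reliability places on validity is the standard attenuation result: an observed validity coefficient cannot exceed the square root of the product of the predictor and criterion reliabilities. A quick numeric sketch:

```python
from math import sqrt

def max_validity(predictor_reliability, criterion_reliability=1.0):
    """Theoretical ceiling on an observed validity coefficient:
    the square root of the product of the two reliabilities."""
    return sqrt(predictor_reliability * criterion_reliability)

# With a perfectly reliable criterion, a predictor reliability of .81
# caps the observed validity at .90; if the criterion is also .81
# reliable, the cap drops to .81.
cap = max_validity(0.81)
```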
Estimating Reliability
*Test-retest estimates

*Parallel forms estimates
-form A/ form B
> if you take both tests, you will score high on both or low on both

*Internal consistency estimates

*inter-rater reliability -the degree to which supervisors/interviewers agree in how they view an individual

-i.e. one group sees a teacher as horrible, while others think the teacher is perfect
Psychometric criteria for tests: Validity
*attaching meaning to test scores

*appropriateness of inferences drawn from tests scores

*common kinds of inferential problems
-the degree to which subject matter experts look at your test and agree that its items represent the domain being measured
content validity/subject matter experts
-accurate and faithful measure of what you’re saying you’re measuring

-i.e. I say I’m assessing how extroverted someone is, but I test them on introversion
construct validity
versus face validity
-assessed by the test taker (applicant)

-the degree to which test appears appropriate to the applicant

-is this assessment job relevant?
criterion validity/predictive validity
-the degree to which you can predict actual performance on an assessment

-i.e. if you do well on SAT, you will perform well the first year of college
Criterion-related Validity Designs (will be on test)
-concurrent designs

-predictive designs

-incremental designs
-giving a test to current employees, then correlating their test scores with their current job performance

-pros: it’s easy; quick

-cons: people who have the job are less motivated to do well on the test
-concurrent designs
-give job applicants an assessment, then hire them without basing the decision on the assessment

-correlate test performance with actual job performance

-pros: preferred method; people are motivated to get the job

-cons: asking applicants to take an assessment that does not determine whether they get the job or not
-predictive designs
*problems with implementing criterion-related designs

-long time to get data
-incremental designs
Intelligence Tests / Cognitive ability measures.

◦ Almost always individualized assessment measures

◦ Most likely now to be on computer because it is more standardized
Can be Paper-and-pencil or Computer based (CBT or CAT)
Intelligence Tests / Cognitive ability measures

GRE, IQ Tests
General Intelligence Tests
Intelligence Tests / Cognitive ability measures

Mechanical Aptitude- applying mechanical knowledge

Clerical Aptitude- proofreading

Spatial Aptitude- seeing a figure and rotating it in your mind
◦ Aptitude Tests
Sample Cognitive Ability Content – things they assess about you
 Verbal Comprehension

 Numerical Ability

 Visual Speed And Accuracy

 Space Visualization- pile of blocks, rotate to identify correct answer

 Numerical Reasoning- pattern finding, mathematical algorithm

 Word Fluency- can articulate things quickly, vocabulary
popular right now: given four symbols that change and evolve, you pick what comes next
Symbolic Reasoning-
not a function of language and not based on experiences, so you can use it in multiple languages. This is a huge benefit for large corporations.
Symbolic Reasoning-
Mechanical Aptitude Tests: Bennett-Mechanical Test
PICTURE/GRAPH
Sensory/Motor Ability Tests
PICTURE/GRAPH
Intelligence Tests / Cognitive ability measures

 Advantages
◦ Reliability and Validity – high; once validity is demonstrated, it tends to generalize

◦ *Best predictor of job performance (esp. verbal reasoning and numerical reasoning)

◦ Administration- easily administered, standardized, group setting, scoring automatically

◦ Costs- initially expensive, overtime it is minimal
Intelligence Tests / Cognitive ability measures

 Disadvantages
◦ Fairness- minorities score lower (women and African Americans score worse)
◦ Based on the premise that the best predictor of future behavior is observed behavior under similar situations
Work Samples
Work Samples

 Advantages
◦ Reliability and Validity- high reliability

◦ Fairness- has nothing to do with systematic biases (minority groups), difficult to fake (you can either do it or not do it), high face validity (ex. Correcting a memo), provide realistic job preview
Work Samples

 Disadvantages
◦ Costly to administer- very expensive; usually one candidate at a time; takes a long time; for machine-based work you have to take the machine offline, so you have a loss in production; potential damages (ex. crashing a forklift)

◦ Administration issues- most jobs are much too complex for a 30-minute work sample; not always applicable; doesn't get at aptitudes; jobs evolve and change
Types of Sample Tests
 In-Basket Exercises

 Role-Plays
In-Basket exercises are typically designed to simulate the administrative tasks of a job. In the typical In-Basket, the test taker is given background information on either the actual organization or a fictitious company and is asked to assume a specific role in the organization. The test taker is often asked to respond to letters, memos, e-mails, requests, personnel issues, and so forth, in a given amount of time. A time limit may be imposed in order to simulate the time pressure experienced in many jobs. In-Basket exercises are usually designed to assess the candidate’s ability to manage multiple tasks, prioritize and delegate work, and analyze information quickly. In-Basket exercises may be administered via paper-and-pencil methods or via computer. Scoring of In-Baskets varies considerably, with some more easily scored via computerized templates and others requiring careful review by a trained evaluator.
 In-Basket Exercises
Role-Play exercises are designed to simulate the interpersonal challenges faced when working with others. In the typical role-play, the candidate is given background information regarding the scenario and asked to play a particular role (e.g., team leader, customer service representative). During the exercise, he or she interacts directly with a trained role-player (actor). This actor often plays the role of a subordinate, coworker, or customer and responds to the candidate according to a script. Role-Play exercises are usually designed to assess the candidate’s communication and interpersonal skills. Performance may be observed by a trained evaluator, or may be videotaped and evaluated at a later time. Very common for entry level jobs
 Role-Plays
Measures of Physical and Psychomotor skills

 Advantages
◦ screen out individuals who are physically unable to perform the job (select people who can do the job over time)

◦ can result in decreased costs / decreased absenteeism- can result in decreased costs related to disability/medical claims, insurance, and workers compensation

◦ decreased absenteeism
Measures of Physical and Psychomotor skills

 Disadvantages
◦ Cost- costly to administer (ex. Obstacle courses are expensive to maintain)

◦ Validity- difficult to demonstrate validity (ex. Carrying 100 lbs for firefighter and their rules that if the person is over a certain weight they need 2 people to carry), requirements must be shown to be job related through a thorough job analysis

◦ Fairness- females and the older are disadvantaged, age based disparate impact
Personality Inventories
A selection procedure that measures the personality characteristics of applicants that are related to future job performance.
Personality Tests:
Personality Tests:
No ‘right v. wrong’ answers

 Scale scores to predict job success

◦ Type –Myers Briggs Type Indicator (MBTI)

 16 Types relating to job-role preferences
◦ *Neuroticism, Extraversion, Openness to experience, Agreeableness, Conscientiousness (OCEAN)

◦ Personality and Intelligence are not correlated

◦ Modest validity coefficients (.20) but have incremental validity

◦ Problem of social desirability – people want to present themselves nicely
 Five-factor model personality (Big “5” Theory of Personality)
Personality Tests:

 Advantages
◦ More information- about applicant, can get interpersonal traits that correlate to the job

◦ Fairness

◦ Cost
Personality Tests:

 Disadvantages
◦ Measurement issues – hard to define personality characteristics

◦ *Experience, not Personality- experience dictates job performance, not personality

◦ Social Desirability- present yourself differently

◦ Diversity- when you get 5 extroverts you have too many leaders in one place

◦ Applicability- personality might not even matter
Integrity Tests

Premise of Integrity Testing-
 (general idea is to screen out people)
Low productivity=
 high counterproductive work behaviors
 Types of Integrity Tests
1. Overt integrity- obvious, apparent to applicant what is being asked

2. Personality-oriented- (covert), usually assessing level of conscientiousness; how trustworthy are you?
 Other Considerations
◦ Validity

◦ Big 5- personality, high integrity= low neuroticism

◦ Cognitive Ability- smarter people don’t steal less

◦ Fairness- women tend to score higher; older applicants score higher and younger applicants lower (younger are more likely to report stealing); no race differences

◦ Faking- especially with overt integrity (duh)

◦ Cost- very expensive because they are proprietary methods (owned by companies)
Interviews: The most common selection instrument

 Prohibited Interview Questions
◦ Age
◦ Marital status / kids
◦ Religion
◦ Political affiliation
◦ Medical history
◦ Personal habits
Interviews: The most common selection instrument

 Bad interview Questions
◦ Where do you see yourself in five years?
◦ What are your weaknesses?
◦ What in particular interested you about our company?
◦ What would your past managers say about you?
◦ What is the airspeed of an unladen swallow?
Types of Interviews
 Unstructured Interview- bad, ask questions randomly, slightly different set of questions

 Structured Interviews- good, specific set of questions
 Sub-types and-or types of questions
◦ Situational Interview- applicant is given a situation…

◦ Behavior Description Interviews- describe previous behaviors

◦ Structured Behavioral Interview- gold standard, asking about behavioral issues of past and everyone gets same questions
Structured Behavioral Interview Examples
 Describe the most creative work-related project you have completed.

 Give me an example of a problem you faced on the job, and tell me how you solved it.

 Give me an example of when you had to show good leadership.

 Give me an example of an important goal you had to set and tell me about your progress in reaching that goal.

 Tell me about a situation in the past year in which you had to deal with a very upset customer or co-worker.
Responding well to these types of questions

 The "S.T.A.R." technique – Describe:
◦ the Situation and/or the Task you needed to accomplish
◦ the Action you took
◦ the Results
Responding well to these types of questions

 Be specific
◦ Not general or vague
Responding well to these types of questions

 Don't describe how you would behave
◦ Describe how you did actually behave
Responding well to these types of questions

 If you later decided you should have behaved differently, explain
◦ The employer will see that you learned something from experience
Evaluating Interview Answer
PICTURE/GRAPH
Interview

• Advantages
▫ Very helpful in interviewing managers

▫ Criterion related validity = .39

▫ Most commonly used method of selection

▫ Validity hard to demonstrate yet “illusion of validity” is common bias

▫ Serves other personnel functions, e.g. degree of fit
Interview

• Disadvantages
▫ Reliability and Validity
▫ Fairness
▫ Cost
Biographical Information
 Previous life experience to predict success

 Often recorded on application blank
Biographical Information

 Usefulness:
◦ Can predict as high as .30-.40 range

◦ Reveals consistent patterns of behavior

◦ Often locates unique criterion variance
Biographical Information

 Legally defensible, but problematic:
◦ Fairness issues
◦ Invasive
◦ “Fakable”
Assessment Centers

• Standardized evaluation using multiple evaluation methods
▫ job-related simulations,
▫ interviews,
▫ and/or psychological tests
• Traditional Activities
▫ Leaderless Group Discussion
 Problems with this technique

▫ Role Playing
 Problems with this technique
• Four Characteristics:
1. Managers – selection, promotion, training

2. Assessed in groups against performance of other groups

3. Assessor teams as raters

4. Variety of group exercises and inventories over 1-3 days
• Criterion Contamination (Validity found is due to shared stereotypes)
1. Actual
2. Subtle
3. Self-fulfilling prophecy
4. Performance consistency
5. Managerial Situational Exercises
Letters of Recommendation
 Commonly used and least valid
 Restricted range
 Only useful if negative
Drug Testing
 Substance abuse is major global problem
Drug Testing

 Two types of assessments
◦ Screening test
◦ Confirmation test
Drug Testing
 Problems of reliability and validity

 Practical issues
◦ Cost savings
◦ Issues of uniform drug testing
◦ Some issues outside the scope of I/O Psychology
Tests of Emotional Intelligence

 Five dimensions
◦ Three intrapersonal
 Knowing one’s emotions
 Managing one’s emotions
 Motivating oneself

◦ Two interpersonal
 Recognizing emotions in others
 Handling relationships
Tests of Emotional Intelligence

 Issues:
◦ Conceptual overlap with Personality

◦ Enormous controversy and still not well-established
The Value of Testing

 Trends:
◦ Tyranny of testing

◦ Some tests are useful in predicting job performance, some are not

◦ As a class, moderately predictive, but could be due to poor criteria
The Value of Testing

 New Test Methods:
◦ Situational judgment test
◦ Online computer testing emerges
Multiple Predictors

 It’s not usually a question of which predictor to choose -- it is more often a question of which predictors to choose or which predictor to ADD
 Multiple predictors should provide:

◦ RELEVANT information (validity of X)

◦ UNIQUE information (validity of X1 + X2)
Chapter 5: Personnel Decisions
Personnel Selection Occurs in a larger context

-personnel decisions in the new millennium
*speed of technological change

*use of teams to accomplish work

*changes in communication technology

*most large corporations are now global in nature

*service orientation
Testing: Making Selection Decisions

-When making a selection decision there are basically 2 primary outcomes

*they are?
-negative or positive; you get hired or not
*quality of outcomes
-were they a good candidate to hire
-features of testing that affect outcomes and quality of outcomes
-min level of performance to either move on in process or get hired

-the higher the cutoff score, the more selective the company is
*cutoff scores
-how picky the organization is

-ranges from 0-1

-the higher the ratio, the less picky the organization is
*selection ratios
-min level of performance to be defined as successful
>e.g. you can get a min of a D in a class and still pass

-a low base rate means few current workers are successful – the company is going to fail
*standards for success and base rates
Decision Outcome Analysis
PICTURE/GRAPH
Evaluation of Testing
-Decision utility of testing systems
*improving on the base rate
-is this a good selection system?
*focus on hires vs. all candidates
Benefits of correct outcomes
correctly rejecting unqualified applicants means you won’t have to replace them later
Costs of incorrect outcomes
-moral issues

-hire someone who can’t do the job, and they ruin something on the job
Costs of testing procedures
-can be time consuming
Costs of tradeoffs of selection vs. other approaches to staffing
-nepotism (not what you know, but who you know)

*very effective b/c you know the person well

*can be ineffective b/c the person may slack off, and other people in the company can’t advance
Impact of selection ratio: focus on hires
*focus on people you brought to the job

*situation 1: cutoff low
-true positives and false positives

*situation 2: selection ratio goes down, increase cutoff

*situation 3: cutoff is higher, ratio is lower

****higher the passing standards, the higher the success rates
****a 50% passing standard maximizes the overall correct decision rate for the ability test (r=.53)
****the more valid the test, you reduce false positives and false negatives
-make fewer bad decisions
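The four decision outcomes can be sketched in code. This is a minimal illustration with hypothetical cutoff and success-standard values, not material from the lecture:

```python
def classify_outcome(test_score, job_performance, cutoff, success_standard):
    """Classify one applicant into a decision-outcome cell."""
    hired = test_score >= cutoff                   # passed the predictor cutoff
    success = job_performance >= success_standard  # met the standard for success

    if hired and success:
        return "true positive"   # hired, and performed well
    if hired and not success:
        return "false positive"  # hired, but performed poorly
    if not hired and success:
        return "false negative"  # rejected, but would have performed well
    return "true negative"       # rejected, and would have performed poorly

# Hypothetical applicants: (test score, later job performance)
applicants = [(85, 90), (85, 40), (40, 90), (40, 40)]
outcomes = [classify_outcome(t, p, cutoff=70, success_standard=60)
            for t, p in applicants]
# -> ["true positive", "false positive", "false negative", "true negative"]
```

A more valid test pushes more applicants into the two "true" cells, which is the "fewer bad decisions" point above.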
Impact of Selection ratio on outcomes: Focus of Hires
PICTURE/GRAPH
Impact of Selection Ratio on outcomes: Focus on All Candidates
PICTURE/GRAPH
Multiple hurdle approach:
Each predictor evaluated independently
-predictors are hurdles

Each predictor has a cutoff score
-a cutoff score is a minimally acceptable score on a predictor
Applicants pass if they exceed the cutoff scores for each predictor
Multiple Hurdle Approach
PICTURE/GRAPH
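The multiple hurdle logic can be sketched as follows; the predictor names and cutoff values here are hypothetical:

```python
def passes_all_hurdles(scores, cutoffs):
    """Multiple hurdle: an applicant must meet or exceed the cutoff
    on EVERY predictor; failing any one hurdle means rejection."""
    return all(scores[p] >= cutoffs[p] for p in cutoffs)

# Hypothetical cutoffs for three predictors
cutoffs = {"ability_test": 70, "interview": 3, "work_sample": 60}

applicant_a = {"ability_test": 90, "interview": 4, "work_sample": 75}
applicant_b = {"ability_test": 95, "interview": 2, "work_sample": 80}

passes_all_hurdles(applicant_a, cutoffs)  # True: clears every hurdle
passes_all_hurdles(applicant_b, cutoffs)  # False: fails the interview hurdle
```

Note the contrast with the regression (compensatory) approach below: here a high ability score cannot make up for a failed interview.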
Regression Approach
-a statistical procedure, allows us to forecast a criterion from a predictor

-very similar to correlation
What if we have more than one predictor?
Multiple Regression
R= the relationship b/w the set of predictors as a group and predicted job performance
-the correlation b/w criterion and 2 or more predictors

-ranges from 0-1
-the size of R is dependent on
-relation of each predictor to the criterion

-relations b/w the predictors
*the more correlated the predictors are with each other, the lower the R

-predictors should be as independent from one another as possible
-R2 is the square of the multiple correlation
-assessment of the variance
-this value indicates the amount of variance in the criterion accounted for by the predictors
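A quick worked example of R², using a hypothetical multiple correlation:

```python
R = 0.60            # hypothetical multiple correlation between the predictor set and the criterion
R_squared = R ** 2  # ~0.36: the predictors account for about 36% of the variance in the criterion
```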
Example Predictors – Uncorrelated
Example Predictors- Correlated
PICTURE/GRAPH
Selection with Multiple Regression-
way to combine multiple pieces simultaneously
Selection with Multiple Regression- way to combine multiple pieces simultaneously
 Develop equation

 Assess people

 Compute Y for all people- what do you think their performance will be?

 Hire the people with the highest Y’s

 Outcome is a regression equation

◦ Y = a + b1X1 + b2X2 + … + bkXk

◦ a = constant; b1 = the weight on predictor X1, i.e. how important that assessment tool is in predicting job performance (the higher = more important)
Multiple Regression: Which Do We Choose?
Y = 12 + .25 (Know.) + .50 (Abil.) + .25 (Consc.)
Know. = network knowledge, Abil. = cognitive ability, Consc. = conscientiousness
PICTURE/GRAPH
Multiple Regression: Which Do We Choose?
- Pick person with highest Y

- Sometimes referred to as a compensatory model, if bad at one section other sections can bring them up, have chance to get a job (ex. College admissions)

- Antithesis of a hurdle process (the opposite)
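The compensatory model can be sketched with the slide's equation; the candidate scores below are hypothetical:

```python
def predicted_performance(know, abil, consc):
    """Regression equation from the slide:
    Y = 12 + .25(Know.) + .50(Abil.) + .25(Consc.)"""
    return 12 + 0.25 * know + 0.50 * abil + 0.25 * consc

# Hypothetical candidates
y_a = predicted_performance(know=80, abil=70, consc=90)  # 89.5
y_b = predicted_performance(know=60, abil=95, consc=70)  # 92.0

# Candidate B gets the higher predicted Y despite lower knowledge:
# high ability compensates, which is exactly the compensatory idea.
```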
Using Multiple Predictors
(* Don’t really need to know differences for the test)
Using Multiple Predictors (* Don’t really need to know differences for the test)
 Compensatory models

 Non-compensatory models- hurdle process, same

 Sequential strategies- not quite standardized, but like a hurdle

 Clinical decision making- do not do when hiring, “gut feeling”
Costs of Available Measures:
◦ Intelligence Tests / Cognitive ability measures = $20 per applicant

◦ Work Samples = $55 per applicant

◦ Interview = $35 per applicant

◦ Measures of physical & psychomotor skills = $30 per applicant

◦ Personality inventories = $40 per applicant

◦ Assessment center = $75 per applicant

◦ Integrity tests = $40 per applicant
Validity Generalization
- Idea that if a predictor has been shown to be valid/predictive in one setting, it will be valid in others. The farther you move from the original setting, the more the validity varies – generalizability decreases
What is banding? ( he didn’t actually go over this part he said just read the rest in the book)

 What is band width a function of?

 Banding and selection ratio

 Types of banding
READ IN BOOK!!!!!!
Uniform Guidelines (on Employee Selection Procedures) – aka the Bible

-How to do selection ethically and legally
 A template for doing selection legally
 Two legal bases for discrimination
◦ Adverse impact-
◦ Disparate treatment-
discrimination in which use of a specific selection instrument adversely affects a protected group – because of the test, not the group (ex. cognitive ability test: white males score higher than black males, and women are negatively affected)

legally identified minority; includes: females, Hispanics, Asians, national origin groups, religious groups, non-Christians, age (those 40 and older are protected), disability (mental or physical)
◦ Adverse impact-

 Protected groups-
when a protected group gets special treatment (+ or -) (ex. gender, race, sexual orientation)
◦ Disparate treatment-
Adverse Impact (4/5ths Rule or 80%) * no math on the test
GRAPH

 In other words,
◦ you hire a larger percentage of non-protected class employees
- If the protected group’s hiring rate is less than 4/5ths of the non-protected group’s, you have adverse impact – i.e., you hired too big a percentage of the non-protected class
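The 4/5ths rule check can be sketched as follows, with hypothetical hiring counts (the slide notes there is no math on the test; this just makes the rule concrete):

```python
def adverse_impact(protected_hired, protected_applicants,
                   majority_hired, majority_applicants):
    """Four-fifths (80%) rule: adverse impact is indicated when the
    protected group's selection rate is less than 4/5 of the
    non-protected group's selection rate."""
    protected_rate = protected_hired / protected_applicants
    majority_rate = majority_hired / majority_applicants
    return (protected_rate / majority_rate) < 0.80

adverse_impact(20, 100, 50, 100)  # True: .20/.50 = .40, below .80
adverse_impact(45, 100, 50, 100)  # False: .45/.50 = .90, at or above .80
```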
Adverse Impact
PICTURE/GRAPH
Adverse Impact- Common Methods
 Cognitive abilities -YES
 Physical abilities – YES, gender, physical
 Spatial abilities – YES, women
 Personality- NO
 Assessment Centers- NO
 Interviews- NO, sometimes, usually not though
Adverse Impact

 What if we find adverse impact?
◦ The organization is obligated to use another method OR validate the method
 Implication: No validation necessary if no adverse impact
 Implication: Adverse impact is okay if the measure is valid- if it can accurately predict job performance
Civil Rights Act of 1964
 “It is unlawful to fail to hire or otherwise discriminate against an individual because of such individual’s race, color, religion, sex, national origin OR

◦ to limit, segregate, or classify employees or applicants for employment in any way which would deprive any individual of employment opportunities because of such individual’s race, color, religion, sex, national origin”

 Exemptions: BFOQ, Seniority, Testing – testing can still occur; BFOQ (Bona Fide Occupational Qualification) – a qualification you must have to do the job (ex. a Catholic church can require its clergy to be Catholic); seniority – those hired first are fired last

- Was very controversial; opponents tried to make it fail by adding sex to the bill

- For any employment decision it cannot unfairly discriminate

- *Which group is not listed here? Age
Civil Rights Act of 1964- Title VII
 Addressed all personnel functions

 Applies to all organizations with 15+ employees, except

◦ Private clubs
◦ Employment places connected with Indian Reservations
◦ Religious Organizations

 Equal Employment Opportunities Commission (EEOC)- created

◦ Investigates allegations of discrimination

◦ Issues regulations regarding compliance with Title VII

◦ Gathers employment information
Guidelines for Employee Testing Procedures
 1964 -- CRA established the Equal Employment Opportunity Commission (EEOC)

 1966 -- EEOC created Guidelines for Employee Testing Procedures

 1972 -- CRA created Equal Employment Opportunity Coordinating Council- up until this point they could do nothing, “toothless tiger”, now they start having ability to react

 1978 -- Uniform Guidelines on Employee Selection Procedures

◦ systematic record keeping of employment decisions

◦ adverse impact and four-fifths or 80% rule
Civil Rights Act of 1991
 Disparate impact is codified and written into law for the first time

 Shifted burden of proof back to employer; once disparate impact is shown employer must show job relatedness of selection practices

 Allowed (a) limited punitive damages and (b) jury trials to award damages

- Democrat in office, then employees are favored, Republican in office, then companies are favored
Specific Supreme Court Cases- Read this in the Book …!!!!!!
 Griggs vs. Duke Power Company (1971)

◦ Burden of proof on the defendant – “Griggs Burden”

◦ Albemarle Paper Co. vs. Moody (1975)

◦ Organizations must use rigorous validation procedures
Specific Supreme Court Cases- Read this in the Book …!!!!!!
 Age Discrimination in Employment Act of 1967

◦ Protects people over age of 40 from discrimination on the basis of age

◦ Age can be qualification if it is a BFOQ
 Americans with Disabilities Act (1990)- ADA Law
◦ employment protection for individuals with disabilities (physical or mental impairment that substantially limits life activities) who, with or without reasonable accommodation, can perform the essential functions of the position

◦ must allow opportunity for comparable performance

 ex. Blind using computers
 ex. Wheelchair ramps
Specific Supreme Court Cases
 Bakke vs. Regents of the University of California (1978)

◦ Holding positions for certain classes is illegal
Specific Supreme Court Cases
 Price Waterhouse vs. Hopkins (1989)

◦ Decisions cannot be made on the basis of stereotypes – whether they “fit the job”
EEOC Statistics Slide
- Women- most likely to win employment discrimination cases
- Disabilities- most likely to bring up cases
Fairness Issues in Testing – he said to read rest in book!!!!!!!!!!!!!!!
 Meeting dual goals
◦ High performance
◦ Diversity

 Adverse impact, validity, and alternative predictors

 ADA: Standardization vs. Accommodation

 Emphasizing other staffing strategies

 Applicant reactions
Placement and Classification

 We are predicting performance to decide which job a person should get
◦ Placement
◦ Classification
 3 Approaches
◦ Vocational Guidance
◦ Pure selection
◦ Cut and Fit
The Real World
 Most companies do not do P & C

◦ Most applicants apply for a specific job

◦ If companies do, it is mostly placement

◦ Most P & C is done by the military

◦ Boeing is considering it
PSYC3050 Exam II – Ch. 4-6
Chapter 4: Predictors: Psychological Assessments
I. Measuring Individual Differences
a. Individual differences in people (Perspective 1)

a. Individual differences in people (Perspective 2)
I. Did you know that . . .
a. 41% of employers test job applicants in basic literacy and/or math skills

b. 34% of job applicants tested in 2000 lacked sufficient skills for the positions they sought

c. 68% of employers engage in various forms of job skill testing

d. 29% of employers use one or more forms of psychological measurement or assessment

e. 10% of employers use physical simulations of job tasks
I. Reasons for testing

a. Some of the more commonly cited reasons for testing are:
i. Testing leads to savings in the decision-making process

ii. The costs of making a wrong decision are high

iii. The job requires attributes that are hard to develop or change

iv. Hard-to-get information can be obtained more easily and efficiently

v. Individuals are treated consistently

vi. There are a lot of applicants
I. What is a “Test”?
a. Standardized sample of behavior
b. What is measured
c. Conditions of measurement
d. Interpretation of responses
I. Terms and concepts

What sort of tests?
a. Speeded Tests
b. Power Tests
c. Individual v. Group Tests
d. Paper-and-pencil
e. Computer Adaptive (Based) Testing
I. Ethical Standards in Testing
a. APA code of professional ethics

b. Test user qualifications

c. Invasion of privacy – reveal more information than necessary

d. Confidentiality – who should have access

e. Retention of records
I. Sources of Information about Testing
a. Mental Measurements Yearbook (MMY)

b. Tests in Print VII
a. are standardized measures of characteristics, skills, abilities, etc.
i. Tests
are measures that are used to make predictions
i. Predictors
can potentially be used to make predictions
1. Any measure (test/interview)
Actual criterion versus
1. conceptual criterion
Good prediction requires
a. good measurement!
Distinction between tests and testing
i. Tests: measurement instruments
ii. Testing: use of measurements to make decisions
a. A four standard approach to evaluating personnel selection methods:
i. Validity & Reliability
ii. Fairness
iii. Applicability / Practicality
iv. Cost
I. Psychometric Criteria for Tests: Reliability
a. Consistency of measurement:

i. Occasions
ii. Samples of items
iii. Items within a test
iv. Within/between test scorers
I. Estimating Reliability: quick review
a. Test-retest estimates
b. Parallel forms estimates
c. Internal consistency estimates
d. Inter-rater reliability
I. Psychometric Criteria for Tests: Validity
a. Attaching meaning to test scores
b. Appropriateness of inferences drawn from tests scores
a. Common kinds of inferential problems
i. Content
ii. Construct

1. Versus Face Validity

iii. Criterion
I. Criterion-related validity designs
i. Concurrent designs
ii. Predictive designs
iii. Incremental designs
Problems with implementing
a. criterion-related designs
I. Specific Tests & Assessments
I. Specific Tests & Assessments
I. Intelligence Tests / Cognitive ability measures
a. Can be Paper-and-pencil or Computer based (CBT or CAT)

i. Almost always individualized assessment measures
a. These tests may be categorized as:
i. General Intelligence Tests

i. Aptitude Tests
i. Aptitude Tests
1. Mechanical Aptitude
2. Clerical Aptitude
3. Spatial Aptitude
I. Sample Cognitive Ability Content
a. Verbal Comprehension
b. Numerical Ability
c. Visual Speed And Accuracy
d. Space Visualization
e. Numerical Reasoning
f. Word Fluency
g. Symbolic Reasoning
h. Numerical reasoning: number sequence one.. 2, 4, 6.. What comes next? Etc.
i. Word fluency: list as many words in a set amount of time…in movie, John Travolta names animals in alphabetical order in 60 seconds
j. Symbolic reasoning: instead of number sequence, figure what comes up in picture sequence
(pictorial, spatial room and strong pliers)
Mechanical Aptitude Tests: Bennett-Mechanical Test
a. Which pairs of items are identical?
i. 2033220638 – 2033220638
I. Sensory/Motor Ability Tests
I. Intelligence Tests / Cognitive ability measures

a. Advantages
i. Reliability and Validity
ii. Administration
iii. Costs
I. Intelligence Tests / Cognitive ability measures

a. Disadvantages
i. Fairness
III. Intelligence Tests / Cognitive ability measures
a. Reliable and valid – someone smart now will be smart later on. Scores cannot be faked, and success is not just good guessing.

b. Can be mass administered

c. Costly to develop, but once developed, very cost-effective.

d. Fairness: unfair/biased discrimination. Minorities, historically, have done badly on these tests b/c they were not culturally attuned to the questions asked – the tests were developed by white males back then. One old test asked what a tractor was; for an applicant who had never seen one, it would simply be a wrong answer. Reflects differences in exposure to information among minority groups
I. Work Samples

a. Work Sample Tests:
i. Based on the premise that the best predictor of future behavior is observed behavior under similar situations
I. Work Samples

a. Advantages
i. Reliability and Validity – high reliability
ii. Fairness
I. Work Samples

a. Disadvantages
i. Costly to administer
ii. Administration issues

a. -very costly to administer the more involved they are. If a work sample takes a machine off the line, that machine can’t be used to produce things. Applicants sometimes don’t know how to use the machine and may break it – while it’s being repaired, it loses money
b. -useful for “can you do this job right now,” but unable to assess whether the employee will be able to adapt as the job changes
Work sample: you work at a hamburger place,
a. can you flip a burger, etc.
if you know it now, you will later. Welding – you can do it now, you can do it later for the job.
a. Reliable –
very valid for straightforward, simple jobs. But for a professor, lecturing in front of a class is only one small piece of a work sample. The more complex the job, the less valid a work sample is*****
a. Validity –
very little discrimination to minority groups.
a. Fairness –
I. Work Samples serves as
a. Serves at realistic job preview.
I. Types of Work Sample Tests
a. In-Basket Exercises
b. Role-Plays
a. sit the applicant at a desk in front of two baskets, the in-basket and the out-basket. The applicant’s job is to go through the in-basket, sorting and dealing with each item, getting through as many activities as effectively as possible.
In-basket:
evaluated on situation.
Role-plays:
I. Psychomotor Ability Tests
i. Dexterity (finger, manual)

ii. Control precision

iii. Multilimb coordination

iv. Response control

v. Reaction time

vi. Arm-hand steadiness

vii. Wrist-finger speed

viii. Speed-of-limb movement

ix. Manual production jobs may have interest in this.

x. Dexterity –

xi. Control precision – like the operation game

xii. Multi-limb coordination – can you balance and walk – physically demanding job
a. Used for jobs with high physical demands
I. Physical Ability
I. Physical Ability

a. Three Issues
i. Job relatedness
ii. Passing scores
iii. When the ability must be present
I. Physical Ability

a. Two common ways to measure
i. Simulations
ii. Physical agility tests
I. Physical Ability

I. Measures of physical & psychomotor skills

a. Advantages
i. individuals who are physically unable to perform the job

ii. can result in decreased costs / decreased absenteeism
I. Measures of physical & psychomotor skills

a. Disadvantages
i. Cost
ii. Validity
iii. Fairness
I. Personality Inventories

is a collection of traits that persist across time and situations and differentiate one person from another
a. Personality
I. Personality Inventories

In these types of assessments there is
a. no ‘right v. wrong’ answers
I. Personality Inventories

a. Scale scores to predict job success
i. Type – Myers-Briggs Type Indicator (MBTI)

ii. 16 Types relating to job-role preferences
no right/wrong answer – try to create
a. personality profile
i. Neuroticism, Extraversion, Openness to experience, Agreeableness, Conscientiousness
ii. Modest validity coefficients (.20) but have incremental validity
iii. Problem of social desirability
a. Five-factor model personality (Big “5” Theory)
a. What does it look like:
i. Big 5

a. Big 5 dimensions are not systematically related to one another. Somewhat sufficient in predicting job behavior.
I. Personality inventories
a. Advantages
i. More information
ii. Fairness
iii. Cost
I. Personality inventories
a. Disadvantages
i. Measurement issues
ii. Experience not Personality
iii. Social Desirability
iv. Diversity
v. Applicability
people can temporarily change their introvertedness to fulfill job requirements, etc.
****Experience, not personality
people lie to get someone to like them
Social desirability
do we have diversity in skin color, age, etc?
Diversity – surface level
a. in types of people in the company; using a personality assessment may reduce diversity, since you may end up hiring all extraverted people.
Diversity – deep level
a. Estimate the probability that applicants will steal money or merchandise
i. Used mostly in retail, but gaining acceptance for other occupations
I. Integrity tests
a. Types of Integrity Tests
i. Overt integrity
ii. Personality-oriented
I. Overt Integrity Tests

a. Research has shown that the “typical” employee-thief:
i. Is more tempted to steal

ii. Engages in many of the common rationalizations for theft

iii. Would punish thieves less

iv. Often thinks about theft related activities

v. Attributes more theft to others

vi. Shows more inter-thief loyalty

vii. Is more vulnerable to peer pressure to steal than an honest employee
I. Personality-Based Integrity Measures
a. Employee theft is just one element in a larger syndrome of antisocial behavior of organizational delinquency

b. Therefore, overt integrity tests overlook a number of other counterproductive behaviors that are costly to the organization
I. Other Behaviors Integrity Tests Can Predict
a. Drug and alcohol abuse

b. Vandalism

c. Sabotage

d. Assault behaviors

e. Insubordination

f. Absenteeism

g. Excessive grievances

h. Bogus workers compensation claims

i. Violence
I. Evaluation of Integrity Tests

a. Advantages
i. Good validity (ρ = .34)
ii. Inexpensive to use
iii. Easy to administer
iv. Little to no racial adverse impact
I. Evaluation of Integrity Tests

a. Disadvantages
i. Males have a higher fail rate than females
ii. Younger people have a higher fail rate than older people
iii. Failure has a negative psychological impact on applicants.
The most common selection instrument
I. Interviews:
a. Bad interview Questions
i. Where do you see yourself in five years?
ii. What are your weaknesses?
iii. What in particular interested you about our company?
iv. What would your past managers say about you?
v. What is the airspeed of an unladen swallow?
a. Prohibited Interview Questions
i. Age
ii. Marital status / kids
iii. Religion
iv. Political affiliation
v. Medical history
vi. Personal habits
I. Structured Interviews
a. Are Valid
b. Reduce the Chance of a Legal Challenge
c. Are Cost Effective
a. Are Valid
i. Based on a job analysis (content validity)
ii. Predict work-related behavior (criterion validity)
a. Reduce the Chance of a Legal Challenge
i. Face valid
ii. Don’t invade privacy
iii. Don’t intentionally discriminate
iv. Minimize adverse impact
a. Are Cost Effective
i. Cost to purchase/create, Cost to administer, Cost to score
I. Structured Behavioral Interview Examples
a. Describe the most creative work-related project you have completed.

b. Give me an example of a problem you faced on the job, and tell me how you solved it.

c. Give me an example of when you had to show good leadership.

d. Give me an example of an important goal you had to set and tell me about your progress in reaching that goal.

e. Tell me about a situation in the past year in which you had to deal with a very upset customer or co-worker.
I. Responding well to these types of questions
a. The "S.T.A.R." technique – Describe:

b. Be specific

c. Don't describe how you would behave

d. If you later decided you should have behaved differently, explain : i. The employer will see that you learned something from experience
a. The "S.T.A.R." technique – Describe:
i. the Situation and/or the Task you needed to accomplish
ii. the Action you took
iii. the Results
I. Evaluating Interview Answer
a. Low – medium – high
I. Unstructured Interviews

a. They are:
i. Unreliable
ii. Not valid
iii. Legally problematic
I. Unstructured Interviews

a. Because they:
i. Are not job related

ii. Rely on intuition, “amateur psychology,” and talk show methods

iii. Suffer from common rating problems
i. Suffer from common rating problems
1. Primacy, Contrast, Similarity, Range restriction (e.g., leniency, strictness, central tendency)

2. Every applicant gets a different set of questions – unreliable, can’t compare applicants.
I. Interview

a. Advantages
i. Very helpful in interviewing managers

ii. Criterion related validity = .39

iii. Most commonly used method of selection

iv. Validity hard to demonstrate yet “illusion of validity” is common bias

v. Serves other personnel functions, e.g. degree of fit
I. Interview

a. Disadvantages
i. Reliability and Validity
ii. Fairness
iii. Cost
a. A selection method that considers an applicant’s life, school, military, community, and work experience
I. Biographical Information

a. Often recorded on application blank
I. Biographical Information

a. Usefulness:
i. Can predict as high as .30-.40 range

ii. Reveals consistent patterns of behavior

iii. Often locates unique criterion variance
I. Example of Biodata Items
a. Member of high school student government?
i. Yes No

b. Number of jobs in past 5 years?
i. 1 2 3-5 More than 5

c. Transportation to work:
i. Walk Bus Bike Own Car Other
I. Biographical Information

a. Strengths
i. Good validity (r = .36)
ii. Can predict for variety of criterion measures
iii. Easy to administer
iv. Fairly valid
v. Can have good face validity***
I. Biographical Information

a. Weaknesses
i. Low face validity***
ii. can invade privacy (Items can be offensive)
iii. Can be expensive to develop
iv. Not always practical to develop
I. Assessment center

a. Standardized evaluation using multiple evaluation methods
i. job-related simulations,
ii. interviews,
iii. and/or psychological tests
I. Assessment center

a. Traditional Activities
i. Leaderless Group Discussion
1. Problems with this technique

ii. Role Playing
1. Problems with this technique
I. Assessment Centers

a. Four Characteristics
i. 1.Managers – selection, promotion, training

ii. 2.Assessed in groups against performance of other groups

iii. 3.Assessor teams as raters

iv. 4.Variety of group exercises and inventories over 1-3 days
I. Assessment Centers

a. Effectiveness

a. Criterion Contamination (Validity found is due to shared stereotypes)
i. Actual
ii. Subtle
iii. Self-fulfilling prophecy
iv. Performance consistency
v. Managerial Situational Exercises
a. Commonly used and least valid
b. Restricted range
c. Only useful if negative
Letters of Recommendation
I. Drug Testing
a. Substance abuse is major global problem

a. Two types of assessments
i. Screening test
ii. Confirmation test
a. Problems of reliability and validity

a. Practical issues
i. Cost savings
ii. Issues of uniform drug testing
iii. Some issues outside the scope of I/O Psychology
I. Drug Testing

Random facts
a. Use in 2001
b. 80% of U.S. organizations tested for drugs
c. 16% of employees admit to using drugs
d. Drug users are more likely to
e. Miss work, Use health care benefits, Be fired, Have an accident
f. Initial screening of hair or urine
g. Cheaper method ($30 for urine, $50 for hair sample)
h. Enzyme Multiplied Immunoassay Technique (EMIT)
i. Radioimmunoassay (RIA)
j. Confirmation test
k. Typically used only after a positive initial screening
l. Thin layer chromatography/mass spectrometry
m. More expensive
n. Responses to the Presence of Drugs
o. 98% of job offers withdrawn
p. Current employees who test positive
q. 25% are fired after a positive test
r. 66% are referred to counseling and treatment
s. -----
t. No poppy seeds/yogurt
u. -yogurt makes breath smell like alcohol
v. -diet influences results of test
w. -false positives/negatives as a result of prescription drugs/detox
x. -drug testing very expensive to do
I. The Value of Testing

a. Trends:
i. Tyranny of testing
ii. Some tests are useful in predicting job performance, some are not
iii. As a class, moderately predictive, but could be due to poor criteria
I. The Value of Testing

a. New Test Methods:
i. Situational judgment test
ii. Online computer testing emerges
I. Multiple Predictors
a. It’s not usually a question of which predictor to choose -- it is more often a question of which predictorS to choose or which predictor to ADD
a. Multiple predictors should provide:
i. RELEVANT information (validity of X)
ii. UNIQUE information (validity of X1 + X2)
PSYC3050 Exam II – Ch. 4-6
Chapter 5: Personnel Decisions
• Personnel Selection occurs in a larger context

o Personnel Decisions in the New Millennium- changes
Speed of technological change- computers

Use of teams to accomplish work

Changes in communication technology- cell phones


Most large corporations are now global in nature

Service orientation
 Speed of technological change- computers
• Trash collectors now have computers to find traffic jams
 Use of teams to accomplish work
more teamwork
Changes in communication technology
cell phones
 Most large corporations are now global in nature
• Employees everywhere- more diversified, more challenges
 Service orientation
• Phones, burgers, cash register
• Testing: Making selection decisions

o Outcomes
Usually 2 outcomes- pass/ fail- accept/ reject
Testing: Making selection decisions

o Quality of outcomes
How good are we at telling people they passed – do they really have the knowledge they are demonstrating?
Testing: Making selection decisions

o Features of testing that affect outcomes and quality of outcomes
 Cutoffs

 Selection ratios

 Standards for success and base rates

 Psychometric quality of measurement instruments

 Graph- predictive of performance of job – Decision Outcome Analysis HW#2
 Cutoffs
• Hated in the US yet embraced – they represent a standard of performance
• Objective way to categorize who you are testing
• If pass – acceptable; if fail – rejected
• Set too high – you get only the overqualified
• Set too low – you get the underqualified
• The more complex the job, the higher the cutoff will be
 Selection ratios
• Numeric index ranging from 0 to 1- represents the selectivity of the hiring going on- # of people who will be hired / # of people applying- hiring 1 person out of 100 gives a .01 selection ratio (very selective)- a .1 ratio is a selective group, a .9 ratio an unselective group
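The selection-ratio arithmetic above can be sketched in Python; the head counts are hypothetical:

```python
# Selection ratio = (# of people hired) / (# of people applying).
# Ranges from 0 to 1; lower values mean more selective hiring.

def selection_ratio(n_hired, n_applied):
    return n_hired / n_applied

print(selection_ratio(1, 100))   # 0.01 -> very selective
print(selection_ratio(90, 100))  # 0.9  -> unselective
```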
 Standards for success and base rates
• Standards of success

o Minimal level of performance to define someone as good or successful worker- in academics-D-pass

o Sales- 3 sales in week- considered acceptable

o Criterion for acceptable performance
 Standards for success and base rates
• Base rates


o People you currently have in organization

o # of people in organization classified as being acceptable- # of students at LSU who have at least D average or higher

o Want higher base rate- more successful it will be
 Psychometric quality of measurement instruments

STUDY SLIDE!!!
• If you have a bad measure, cannot set effective cutoff score- base rate goes down

• ****SLIDE

• Cutoffs – passing score – minimum to get that “yes” anything below is a “no”

• -higher cutoff score = fewer people considered

• Selection ratios – range from 0-1. reflects selectivity of your hiring. 0.1 is more selective than 0.9

• Standards for success – e.g., the standard to graduate from LSU is a 2.5 GPA. Base rates – want it to be as high as possible; want as many people as possible in companies to be as qualified as possible.
 Graph- predictive of performance of job – Decision Outcome Analysis HW#2
• Y axis is performance
• X axis- selection
• Cut score- vertical line
• Reliability-.81
• Validity-.63- measuring what it should- the higher the number, the more tightly the points cluster (the narrower the blue circle will be)
o Individuals who are selected should be hired- test says are good and 6 months later are good- right of cut off scores and above criterion
• True positives- correctly identify
o People you are correctly identifying- who you say are crappy and do crappy on job
• True negatives
o People who are incorrectly rejected but would do well on job- did not get cut off score
• False negative
o People you accept but shouldn’t have- pass test and do crappy
• False positive

• Want A and C bigger- more- want more people correctly accepted or rejected and less incorrectly identified

• If you have a racially or gender discriminatory test- likely to fall in B and D- incorrectly identified
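The four decision outcomes above can be sketched as follows; the cutoff score, the success standard, and the applicant numbers are hypothetical:

```python
# Classify a selection decision into one of the four quadrants.
# An applicant is hired if test_score >= cutoff; they are a "success"
# if later job performance >= the standard for success.

def classify(test_score, performance, cutoff=50, standard=3.0):
    hired = test_score >= cutoff
    success = performance >= standard
    if hired and success:
        return "true positive"    # correctly accepted
    if not hired and not success:
        return "true negative"    # correctly rejected
    if not hired and success:
        return "false negative"   # incorrectly rejected
    return "false positive"       # incorrectly accepted

assert classify(70, 4.0) == "true positive"
assert classify(30, 1.0) == "true negative"
assert classify(30, 4.0) == "false negative"
assert classify(70, 1.0) == "false positive"
```

A valid test pushes more applicants into the first two categories (A and C in the graph) and fewer into the last two.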
• Evaluation of testing

o Decision utility of testing systems
 Improving on the base rate
• Bringing better people into the organization- want it as high as possible- it will never be 1

 Focus on hires vs. all candidates- utility- affects how good you say the system is- most people just look at hires
o Cost effectiveness of testing systems (utility)

 Benefits of correct outcomes
• Have higher base rate- better employees- more money- bottom dollar
• Social aspects can improve- still want diversity
 Costs of incorrect outcomes
• High turnover

o Bad employees more likely to turnover- spend more money to replace them

o Can cause good employees to leave- upset with bad employees
 Costs of testing procedures
• Time- taking people off line

• If company has bad rep- good employees might not apply- legal issues- face validity or discriminatory issues
 Cost tradeoffs of selection vs. other approaches
to staffing
• Do not have to use standard of selection

• Nepotism- hiring only people you know

o You know them and will know how you will work with them- social pressures they will do well- not let down person

o They can become social loafers, and may not be qualified for the job

---

• Head hunting- go out and poach best people from other organizations

o In theory best person- comp A vs. comp B- quality theory in perspective

o Difficult to get people to leave- if they leave question of loyalty

o Very expensive- you pay the head hunter, etc., and the return may not be worth it
 Look at graphs- slides 6 and 7!!!!!
• Sit 1 – not worried about ppl who didn’t pass test; lots of people in due to high pass rate

• Sit 2 – (increased cutoff score from sit 1) – more selective – of people hired, more hired on in true positives.

• Sit 3 – pass rate of 20%, very high cutoff – of the actual people hired, very good decisions. 8 out of 20 brought in are really good, 2 are bad, etc., unlike sit 1 where 5 are good and 5 are bad.

• Only one way to think about it

• Higher the passing score/cutoff score – more likely to bring in good people

• **** general rule of pass rate – cause/effect situations
• Multiple Hurdle Approach- an approach to selection- have multiple tools for selection- you keep testing until you fail a test
o Each predictor evaluated independently of the others

o Each predictor has a cutoff score- assessment, interview, ect

o Applicants pass if they exceed the cutoff scores for each predictor
 Predictors (tests) are hurdles
o Each predictor evaluated independently of the others
 A cutoff score is a minimally acceptable score on a predictor
o Each predictor has a cutoff score- assessment, interview, ect
 Set of people meet minimum of all- slide 9
o Applicants pass if they exceed the cutoff scores for each predictor
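A minimal sketch of the multiple-hurdle logic; the predictor names and cutoff scores below are made up for illustration:

```python
# Each predictor (hurdle) has its own cutoff and is evaluated
# independently; an applicant passes only by clearing every hurdle.
# This is noncompensatory: a high score on one predictor cannot
# offset a failure on another.

HURDLES = {"cognitive_test": 60, "interview": 3, "work_sample": 70}

def passes_all_hurdles(scores):
    return all(scores[name] >= cutoff for name, cutoff in HURDLES.items())

assert passes_all_hurdles({"cognitive_test": 75, "interview": 4, "work_sample": 80})
assert not passes_all_hurdles({"cognitive_test": 75, "interview": 2, "work_sample": 80})
```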
• Regression Approach- slide 11- way of selecting people

o Regression,
 A statistical procedure, allows us to forecast a criterion from a predictor

• Everyone gets all assessments- way of predicting future performance

 Very similar to the correlation
• What if we have more than 1 predictor
o Multiple Regression- many predictors- take all tests- slide 13 and 14
• The correlation between criterion and two or more predictors

• Tells the degree of relationship between your assessment and performance- want it to be as big as possible

• Ranges from 0 to 1- want closer to 1

• Who should you hire
 R
 The size of R is dependent on
• Relation of each predictor to the criterion

o How each predictor is related to the criterion- e.g., how are the interview and conscientiousness related to job performance

• Relations between the predictors

o How is the interview related to conscientiousness
 R2 is the square of the multiple correlation

SLIDe!!!!!
• This value indicates the amount of variance in the criterion accounted for by the predictors

• How much job performance you are accounting for

• Bigger R2 more variance you are accounting for- want to account for more variance- 30% confidence person will do well on job- want it to be as high as possible- good predictors

• Predictors overlap each other- e.g., if you are a poor test taker, you may also come across as nervous in the interview- error
 correlation between criterion and two or more predictors
 Want to be closer to 1 – the bigger, the better the correlation
 Size of the correlation depends on
 -how related each predictor is to the criterion
 - R = correlation/relationship of the predictors with the criterion
 -relations between the predictors – the more related the predictors are to one another, the lower R is going to be; a redundant predictor adds nothing new to the understanding, compared with a situation with 3 different, unrelated predictors
 -ex: 2 predictors (unrelated) – each adds unique information
 -ex: 2 predictors (related) – tapping into the same thing
 R = 0.8; a high number, predictors strongly related to the criterion. R-squared = 0.64 --- represents a percentage; 64% of the criterion is understood based on the predictors.
• Selection with Multiple Regression- slide 16
o Develop equation- predictive or concurrent validity study----Concurrent- current employees

o Assess people

o Compute Y for all people- outcome- predicted job performance

o Hire the people with the highest Y’s
o Outcome is a regression equation
o Y = a + b1X1 + b2X2 + …… bkXk
o Take predictors – and use them to predict the outcome variable – regression – when one outcome variable is predicted from one or more predictors

o Assess, first, employees

o Then assess new batch of people – try to predict what their performance will be

o Would hire people would hire high predictive performance (Y)

o Y=predicted performance
o A=constant
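The regression-based selection steps above can be sketched as follows; the constant, beta weights, and applicant scores are hypothetical:

```python
# Y = a + b1*X1 + b2*X2 + b3*X3: predicted job performance from three
# predictor scores. Compute Y for everyone, hire the highest Y's.

def predict_y(scores, a=1.0, betas=(0.25, 0.50, 0.25)):  # hypothetical weights
    return a + sum(b * x for b, x in zip(betas, scores))

applicants = {"Ann": (80, 90, 70), "Ben": (90, 60, 80), "Cal": (50, 50, 50)}
ranked = sorted(applicants, key=lambda n: predict_y(applicants[n]), reverse=True)
# hire from the top of `ranked`
```

Note this is compensatory: Ben's weak second score is partly offset by his stronger first and third scores.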
• Using multiple predictors- slide 18
o 0.25, 0.50, 0.25 = got from regression = referred to as beta weights

o Those numbers – the bigger they are, the more important that test is in predicting performance
 If you do badly on one but really well somewhere else, you could still come out just fine
 Can also take regression and make it non compensatory
o Compensatory models
 Set a minimum level for performance on any of the levels- if don’t make the minimum then don’t look at them
 Compensate some but min level on all predictors
o Non-compensatory models
 Multiple hurdle- another name
o Sequential strategies
 Don’t like in I/O- manager interviews a bunch of people and decides who they like- not fair
o Clinical decision making
o Combine information about different skills and
set a minimum combined score
o -you do good in one, but bad in another, can help you out – even out.
o Compensatory models
o Multiple cutoffs set minimum “passing” scores
for each test/predictor
o Noncompensatory models
o Decisions are made in “steps”: Make decisions about some candidates after each test
o Sequential strategies
o Decisions are made on the basis of decision makers’ best judgment (judges combine “cues” to arrive at a decision)
o Clinical decision making
o What is band width a function of?
 Banding argues the difference between the 1st and 5th person may not be all that different (usually we work top down)

 Create bands – 5 people with similar scores- and then use other criterion

 The more narrow your band- function of how precise are you measuring things- how good- you can argue the score of the top person is very different from the 6th person

 With a less precise measure (interview) bands will be larger

 In banding, want as narrow band as possible- want to know that group is distinct from another
• What is banding
o Banding and selection ratio

o Types of banding- read!!!!!!!!!!!!!!!!!!! in test

o Validity generalization – if a test is valid for one group, it should be valid for another group somewhere else. The more different the groups or situations are, the less likely the validity is to generalize.
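The banding idea above can be sketched as grouping everyone within one band width of the top score; the scores and band widths here are invented:

```python
# Banding: applicants whose scores fall within `band_width` of the top
# score are treated as indistinguishable, and other criteria (e.g.,
# diversity) are used to choose among them. A more precise measure
# justifies a narrower band.

def top_band(scores, band_width):
    best = max(scores.values())
    return {name for name, s in scores.items() if s >= best - band_width}

scores = {"A": 95, "B": 93, "C": 88, "D": 80}
assert top_band(scores, band_width=3) == {"A", "B"}     # narrow band
assert top_band(scores, band_width=10) == {"A", "B", "C"}  # wider band
```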
• Uniform Guidelines (on Employee Selection Procedures)- slides 22 and 23
o A template for doing selection legally

 The “Bible” of I psychs
 Tells you about best practices, how many people you need in a validity study
 Put out by the Federal government, but I/O psychs had a large hand in writing it
o Two legal bases for discrimination-Discrimination depends if bad or good- testing is always discriminating
 Adverse impact

 Disparate impact

--- Protected group
• A type of unfair discrimination in which the results of using a particular personnel selection method have an adverse impact or differential effect on a protected group vs. the majority
 Adverse impact

• Gender, age, religious, sexual orientation- you are being unfairly treated to majority group
• Adverse impact occurs when- how do when know when it occurs- 4/5ths rule-80%

o (# Protected class Hired/ # Protected class Applied)/ (# Non-protected class Hired/ # Non-protected class applied)

 If # < .80—you have adverse impact

 Other ways to assess adverse impact

• Chi-Square analysis- more analytical

• Regression analysis- more analytical

• Fed government puts more emphasis on 4/5ths
o In other words, you hire a larger percentage of non-protected class employees- minority group hired less than majority group- something bad with selection system- but not always a bad thing
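The 4/5ths computation above, as a sketch (the applicant counts are hypothetical):

```python
# Four-fifths (80%) rule: compare the hiring rate of the protected
# class to that of the non-protected class; a ratio below .80
# indicates adverse impact.

def adverse_impact(prot_hired, prot_applied, nonprot_hired, nonprot_applied):
    ratio = (prot_hired / prot_applied) / (nonprot_hired / nonprot_applied)
    return ratio < 0.80  # True indicates adverse impact

assert adverse_impact(10, 100, 50, 100)       # 0.10/0.50 = 0.20 -> adverse impact
assert not adverse_impact(45, 100, 50, 100)   # 0.45/0.50 = 0.90 -> no adverse impact
```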
• A type of unfair treatment in which protected groups are given different procedures in their consideration for employment

• Need cut score of 50, women don’t do as well- give cut score of 45
 Disparate impact
 Identified by race (whites vs. everyone else), gender (men are the majority, although a minority in the population), national origin, color (white vs. everyone else), religion (Christianity), age (under 40- majority- vs. over 40), disability
 Protected group
document for how to design/conduct selection from legally sound basis.
Uniform guidelines
usually heard along with discrimination in hiring practices – unfair/biased discrimination of demographics (care about more)
 Adverse impact –
minorities (better referred to as protected classes – race, gender, national origin, color, religion, age, and disability) provided differential treatment – process different for them. Ex: men have one interview, women have five interviews.
 Disparate impact –
• Adverse Impact- Common methods- is there usually adverse impact
o Cognitive abilities- yes
 See most adverse impact
 Typically against minorities- African Americans

o Physical abilities- yes
 Women and elders are most discriminated

o Spatial abilities- yes
 Women are better than men

o Personality- typically no

o Assessment Centers- depends on how well its done- typically not

o Interviews- depends on how good, structured, how conducted

o What if we find adverse impact?
 The organization is obligated to use another method OR validate the method- if they can show cognitive abilities test is highly predictive of job performance- even if has adverse impact- they can use it

• Implication: No validation necessary if no adverse impact

• Implication: Adverse impact is okay if the measure is valid

o 4/5th rule, 80% rule, all function of adverse impact.

o (protected hire/protected applied) / (non-protected hired/non-protected applied) must be less than 80% to be adverse (0.79 = adverse, 0.81 = non)

o Spatial –(generally) against women

o Personality – against none – consistent across all groups

o Assessment centers – against none

o Interviews – against none

o You can use discriminating test if you show that it is job related; if it shows job performance – can still use it. Sociological implications – can have an organization without women if validated to have better use for skills of men – men scoring higher. Economic impact – if you find out company is discriminating, morally, you won’t want to shop there.

o If there is no adverse impact, no need to validate. But you need to validate to know if you have adverse impact. In guidelines, said you should validate every 2-3 years, practice – maybe 5.
• Civil Rights Act of 1964
o “It is unlawful to fail to hire or otherwise discriminate against an individual because of such individual’s race, color, religion, sex, national origin OR

 To limit, segregate, or classify employees or applicants for employment in any way which would deprive any individual of employment opportunities because of such individual’s race, color, religion, sex, national origin”
o Exceptions: BFOQ- Bona Fide Occupational Qualification- e.g., Hooters- you have to be of this religion or gender to do this job
 Seniority- you can give promotions to more senior persons even if younger is more qualified

 Testing- if you can show applicants can't perform an ability the job requires, you don't have to hire them- e.g., a woman unable to do the physical labor required of a construction worker
EXAMPLES
o Gender/sex was only stuck in act to make it FAIL. Was not there originally.
o Exemptions – BFOQ (bona fide occupational qualifications) (a Chinese restaurant can hire only Chinese staff if they can argue it – or entertainment, for a movie part – white female, criteria to do the job, or being a certain religion to be a preacher, etc.). Seniority – selected to be retained (first in, first out policy) – white males at higher rungs – first in. Testing – can hire men above women if you show the test is job related.
o Hooters attempted to argue being female was job requirement. Lost case – don’t need to be female to serve food.
• Civil Rights Act of 1964- Title VII
o Addressed all personnel functions
o Applies to all organizations with 15+ employees, except
o Equal Employment Opportunities Commission (EEOC)- deals with laws
o Applies to all organizations with 15+ employees, except
 Private clubs- Boy scouts
 Employment places connected with Indian Reservations
 Religious Organizations- separation of church and state
o Equal Employment Opportunities Commission (EEOC)- deals with laws
 Investigates allegations of discrimination
 Issues regulations regarding compliance with Title VII
 Gathers employment information
• Guidelines for Employee Testing Procedures
o 1964 -- CRA established the Equal Employment Opportunity Commission (EEOC)

 Power they have depends on President of that time- if Liberal in office- tend to have more power and see more citations

o 1966 -- EEOC created Guidelines for Employee Testing Procedures

o 1972 -- CRA created Equal Employment Opportunity Coordinating Council- EEOC had no regulatory power until now

o 1978 -- Uniform Guidelines on Employee Selection Procedures-“The Bible”

 Systematic record keeping of employment decisions
 Adverse impact and four-fifths or 80% rule
• Civil Rights Act of 1991
o Disparate impact is codified and written into law for the first time- banding became serious issue

o Shifted burden of proof back to employer; once disparate impact is shown employer must show job relatedness of selection practices

 Before then it was all up to the employee- now more responsibility on the organization

o Allowed (a) limited punitive damages and (b) jury trials to award damages
• Specific Supreme Court Cases
 Burden of proof on the defendant – the "Griggs Burden"- once Griggs showed adverse impact, Duke Power had to prove its requirements were job related
o Griggs vs. Duke Power Company (1971)
 Organizations must use rigorous validation procedures- high school diploma and janitorial work
o Albemarle Paper Co. vs. Moody (1975)
 Holding positions for certain classes is illegal
o Bakke vs. Regents of the University of California (1978)
 Decisions cannot be made on the basis of stereotypes – whether they “fit the job”
o Price Waterhouse vs. Hopkins (1989)
o Protects people over age of 40 from discrimination on the basis of age
o Age can be qualification if it is a BFOQ
• Age Discrimination in Employment Act of 1967
o Employment protection for individuals with disabilities (physical or mental impairment that substantially limits life activities) who, with or without reasonable accommodation, can perform the essential functions of the position
o Disabilities now one of big 5
• Americans with Disabilities Act (1990)- Important
• Fairness Issues in Testing
o Meeting dual goals
 High performance
 Diversity- don’t want same type of person over and over

o Adverse impact, validity, and alternative predictors- social image thing

o ADA: Standardization vs. Accommodation
 Have to make accommodations- reasonable ones- not going to make the test unstandardized

o Emphasizing other staffing strategies

o Applicant reactions

o *reasonable accommodations – most are about $2 (required for person with disability) – some are extremely expensive (elevator)

o Expensive to renovate old building to be accessible – but all new buildings must have.

o Restructuring – redesigning work flow for person
• Recap - Civil Rights Act - Title VII
o Who is Covered
o ◦Private employers with at least 15 employees
o ◦Federal, state, and local governments
o ◦Employment agencies
o ◦Unions
o ◦Americans working abroad for American companies
o Who is Exempt
o ◦Bona fide tax exempt private clubs
o ◦Indian tribes
o ◦Individuals denied employment due to national security concerns
o ◦Publicly elected officials and their personal staff
Chapter 6: Organizational Learning
• Trends in Organizational Learning
o Greater emphasis on skill enhancement
o Learning has ascended- role and power have increased
o Training is an important part of an organization’s long-range, long-term strategy:
o Greater emphasis on skill enhancement
 Keep worker in organization-life long learning- stay interested and in organization
o Learning has ascended- role and power have increased
 Learning – encoding, retaining, and using information- why it is becoming more important
 Adapt to changing work world
• More automated and technologically based
o Training is an important part of an organization’s long-range, long-term strategy:
 Organizations are becoming flatter- cutting out middle management- workload is increasing

 Life-long learning perspectives- workers seen as people- challenge- empowering employees to be better

 More global and diverse work force- diversity training big right now
• Training-A few statistics…
o $58 Billion spent on training in U.S. Organizations
o ◦$1200 per learner
o Costs need to consider
o ◦Direct costs
o ◦Indirect costs
o ◦Hidden costs
o Types of formal training:
o ◦Classroom-65%
o ◦E-learning-20%
• Process by which change in knowledge or skills is acquired through education or experience
• What is learning?
o Relatively permanent change in a specific behavior of the individual due to an experience
• Cognitive aspect but also behavioral component
• Three phases of skill acquisition:
o Declarative knowledge
o Knowledge compilation
o Procedural knowledge
• *******on test – phases
• Declarative – learning about facts and other things
• Knowledge
• Procedural – no longer notice; so used to doing it, no longer think about it
o "Planned effort by a company to facilitate employee's learning of job related competencies." (Noe, 1999)
What is Training?******
• knowledge, skills abilities and behaviors (KSAs)
• attitudinal change and self-awareness
 Competencies include
• trainees develop intellectual capital
• supports philosophy of continuous learning
 Competencies need to be transferred to the job
o Three phases of skill acquisition:

• Facts, very basics- mixing yellow and blue together makes green
 Declarative knowledge
o Three phases of skill acquisition:

• Integration of sequences- first I do 1, then 2, then I can move on to 3- taking facts and putting them into logical sequence
 Knowledge compilation
o Three phases of skill acquisition:

• Things become automatic- don’t have to think about what to do to get there- just do it
 Procedural knowledge
o Experts v. novices

 3 distinguishing features of experts!!!!!
• Proceduralization

• Mental models

• Meta- Cognitions
• Proceduralization
o Experts fall into procedural knowledge- don’t think about sequence
• Mental models
o Experts can re-arrange things in their head and create mental models that link things together
• Meta- Cognitions
o High-level thinking- they see the very big picture and do not need to focus on individual pieces
o Individual differences in learning-3
Trait –like attributes

Self- efficacy

Cognitive ability
Trait –like attributes
• Personality stuff- people who are neurotic, tend to be poor test takers- show up- trait may affect performance—moody people may affect how they train
Self- efficacy
• Feeling of how confident are you- do you feel you are a good learner or do you think you are about to get fired
Cognitive ability
• How smart you are affects your learning curve- higher cognitive ability have smoother curve
• Time you invest in things in effective
• More on what training “is”- print slide 5
o “Planned effort by a company to facilitate employee’s learning of job related competencies.” (Noe, 1999)
 Competencies include
• Knowledge, skills abilities and behaviors (KSAs)
• Attitudinal change and self-awareness

o Getting rid of sexual harassment in environment
o People like you- improvement of well-being
 Competencies need to be transferred to the job
• Trainees develop intellectual capital
o Can call on more diverse employees

• Supports philosophy of continuous learning
o One of best predictors of transfer is what other employees say is important
• In other words…
o A system of learning in a work context

o Process through which the KSA’s of employees are enhanced

o Any difference between training & development? – YES

 Negative connotation to training

• Development is exciting!

 Development is considered a longer-term venture

 Focused on broader enrichment of individual

• For future jobs
• Training Practices
o 70% (more) of employers provide training

o $50-$60 billion spent on training in U.S.- late 90’s, now more

o In-House vs. Outsource- organization decision- organizationally dependent
o 70% (more) of employers provide training
 New hire training
o $50-$60 billion spent on training in U.S.- late 90’s, now more
 Transportation, communications and public utilities spend the most

• A lot of money involved in learning to fly a jet

 Service, construction & retail spend the least

• Retail- doesn’t take much training- basic knowledge transfer

• Very high-turnover jobs- flexible working environment

• Skills that have to be learned are cheap to learn on
o In-House vs. Outsource- organization decision- organizationally dependent
 Relatively balanced across levels- individually dependent

 Executive training somewhat more likely to be outsourced

• Having skill set to train executives- want newest knowledge- hot topic in psych now
• Classes of Training Goals
o Socialization & orientation of new workers

o Job-specific skills training

o Remedial training

o Personal & career development classes

o Updating

o Retraining

o Training for organizational culture change

o Cross-training and team development

o Retention & organizational commitment
 Socialization is a process of an adaptation- teaching employees of social nature of workplace
 Orientation- getting rules and procedures figured out
 Orientation is a small subset of socialization- pick up orientation quicker than socialization process
o Socialization & orientation of new workers
 Refresher course- learn the new and best software- employees already have the necessary skill set- just making them more effective
o Remedial training
 Stress courses
o Personal & career development classes
 New things have happened in the job- let's learn a new skill that goes along with it
 Usually with Remedial training
o Updating
 Someone screwed up- retrain them- sexual harassment retraining
o Retraining
 Want organization to go down new path- new competencies emphasizing
o Training for organizational culture change
 Trained to do several things
o Cross-training and team development
 Big with companies with big turn-over
o Retention & organizational commitment
• Model of Training

o Goldstein’s 3-phase model- I down to III
o Phase I: Needs Assessment

o Phase II: Training and Development

o Phase III: Evaluation- often forgotten phase
o Phase I: Needs Assessment
 Figure out what training needs to be done- if set up needs assessment correctly you will know what you need to evaluate
 Diagnosing present problems and future challenges that can be met through training and development- what to do to address issues
• Organization Analysis
o Where does the current organization stand in the larger employment world- if already deficient, with future changes it may fall even further behind

o Does it have skills needed to be effective now and in future

o Get info from
 Interviews with managers
 Survey of larger employees
 Look at hard numbers- problems with turnovers and sexual harassment
• Task and KSA analysis
o Job analysis- what tasks are needed- knowledge, skills, and abilities needed
• Person analysis
o Self- assessment- do employees have skill- can you function in environment with sexual harassment

o Asking employees to define their training needs

o Idea to identify gaps- employees say 1 thing- others say different- need to address gap
• Job analysis identifies
o Tasks

o Conditions under which tasks are performed

o KSAOs needed to perform tasks under those conditions
• Task analysis identifies how tasks are learned
o Expected at time-of-hire
o Easily taught on-the-job
o Current training program
o No training
 Basic needs assessment include
• Direct observation
• Questionnaires
• Consultation with persons in key positions, and/or with specific knowledge- managers
• Review of relevant literature

o People need to be effective on these 3 things to be effective communicator

• Interviews

o Semi-structured- a set of questions, but depending on the answers you may not rigidly follow the path

• Focus groups

o Bring in team- people can play off one another- coalesce information to get better outcome

• Tests
• Records & report studies
• Work samples
o Phase II: Training and Development

 Selection and Design of Instructional Programs
• Strategies for enhancing learning & transfer

• Techniques for training: presentation (e.g., lecture, videotape), simulation (e.g., role play), OJT- on the job training- shadowing (e.g., buddy system)- food industry good example

• Dealing with individual differences (e.g., trainee abilities, background, learning style, learning pace- how fast employees learn info)
o Phase II: Training and Development

 Training
• Implementation issues: standardizing (basic knowledge can be highly standardized; a complex topic for a small # of employees stays un-standardized), maximizing participation (want employees involved), and enhancing transfer (make training as realistic as possible so employees can take what they are learning and apply it to the job)

• Trainee issues: motivation- do employees want to be there, personal control- what managers want
o Phase III: Evaluation- often forgotten phase

 Four levels of evaluating training effectiveness (Kirkpatrick)
• Internal criteria:
o Reaction criteria
o Learning criteria

• External criteria:
o Behavioral criteria
o Results criteria
• Internal criteria: reactions of trainees- how do they feel- during training

test of reaction, how you felt about the training
o Reaction criteria
 At end of training give test- if do bad say training was not effective
o Learning criteria
• External criteria: after training

 What is happening- more sexual harassment, more sales?
 On the job; do you see what is happening
o Behavioral criteria
 Do you see reduction in costs- is waste going down, turnover going down?
 Bottom dollar – does performance improve?
o Results criteria
 Use of evaluation models
• Science vs. practice evaluation goals
o Practice- are employees happy

• Focus on measuring and explaining change
o If training person cannot justify job- will get fired- must be good at showing training was effective somehow

• Experimental and quasi-experimental designs
• Preconditions for Learning
o Trainee Readiness
o Trainee Motivation
o Trainee Readiness
 Are they ready to be trained, do they want to be trained
o Trainee Motivation
 Expectancy Model
 Goal Setting Behavior
 Self Efficacy
 Participation in Decision Making
 Expectancy Model
• Do they expect to get something out of training
• If employees expect something out of training, it will be more effective
 Goal Setting Behavior
• Have employees set SMART goals- if employees set goals for themselves, the transfer of knowledge will be more effective
 Self Efficacy
• How well do they feel they will do in training- effects learning
 Participation in Decision Making
• Do they get to decide which skills are being trained- related to idea of self determination- people who feel they have control over their life tend to do better
• Training & Development- Learning Research*****2 questions
o Practice and Recitation
o Distributed vs. massed practice
o Whole-task vs. Part-task
o Knowledge of Results
o Goal setting
o Positive reinforcement
o Models/ Social Learning Theory- various models
o Cooperative learning
o Practice and Recitation
 Keep reading the same things- eventually gets you to a certain point on the learning curve
 Can lead to overlearning
o Distributed vs. massed practice
 Distributed is better- a little bit of learning over a certain period of time, e.g., learning to play the piano
o Whole-task vs. Part-task
 Whole-task is better for learning a complex task- you need to see how everything fits together
 Learning to play baseball- part-task; don't learn everything at once
o Knowledge of Results
 Important when doing training to know what the expected results are- they need to be stated in the training session
 Need to give feedback- the more immediate, the better
o Goal setting
 Setting goals is very important
o Positive reinforcement
 Little rewards for doing something successfully- reinforcing good or positive behavior
o Models/ Social Learning Theory- various models
 Vicarious learning- learning through observing others
o Cooperative learning
 Depending on the nature of what people are learning, it can be effective- learning in groups; works well in education, e.g., study groups
• Transfer and Maintenance of Training

o Types of transfer- 3
 1. Positive transfer = what the trainer wants
• What you learn in training improves performance on the job
 2. Zero transfer
• What you learn does not affect performance- no change in behavior
 3. Negative transfer
• What you learned in training negatively affects performance- it goes down; can be short term, but over the long term may turn into positive transfer
o Maintenance of training
 Might have to do retraining- depends on how quickly skills decline
 If employees enjoy training, they may want to do more
• Facilitating Positive Transfer
o Model of identical elements / contextual interference
 Trying to enhance positive transfer- make training as realistic as possible; use the tools needed on the job

o Variety of examples
 Different ways of thinking and seeing- more examples more positive transfers- may remember 1

o Reducing amount of feedback
 When fresh out of training, trainees want lots of feedback- over time, reduce it so they can integrate the knowledge and feel they know what they are doing; if given constantly, they come to expect it and it loses its value

o Setting goals
 Want goals to be realistic and achievable

o Realistic expectations
 Helping employees understand what the training is meant to achieve and what to expect on the job

o Characteristics of work environment
 If the manager doesn’t care- “my way or the highway”- transfer suffers
• Training Methods- On-Site
o Most common; 93% of companies

o Generally informal, but may be formal

o Internships, apprenticeships, job rotation, mentoring

 PRO: employees produce while they learn; eases transfer

 CON: loss of (1) trainer time and (2) use of (& possible damage to) equipment
• Strategic Value of Training and Development

o Four strategies companies employ to place themselves strategically in the marketplace
 Speed strategy
 Innovation strategy
 Quality enhancement strategy
 Cost-reduction strategy
• Training Methods- Off-Site
o Lecture
o AV materials
o Computer-based training (CBT/WBT)
 Linear programs
 Branching

o Simulation

o Trainer interactive approaches
 Roundtable discussion
 Case study
 Role playing
 Behavioral modeling
• Management Development Training
o About how individuals learn to perform effectively in managerial roles

 Cultural diversity training

• Melting pot vs multicultural conception

 Sexual harassment training ….
• Group sensitivity training
o Group interaction with little direction for the purpose of promoting self-enhancement and behavioral change
o May work better for groups meeting for more sessions & for larger groups
• Mentoring*****
o Mutually recognized relationship between an older (usually), more knowledgeable