61 Cards in this Set

  • Front
  • Back
What is employment security?
Similar to job security; it is what workers are now concerned with: having skills employers are willing to pay for.
What IS personnel psych?
Overlap b/w psychology and HRM.
In the changing marketplace what is the role of the manager?
emphasize democracy, explain & communicate how org's create value
In the changing marketplace what is the role of the employee?
adapt to changing circumstances, be prepared for multiple careers.
What are some of the problems that could surface in the changing marketplace?
Insecurity, uncertainty, stress, social friction
What are some of the benefits of the changing marketplace?
challenge, creativity, flexibility, control & inter-relatedness
What are the two types of discrimination and how are they different?
Disparate treatment- intentional discrimination
Adverse impact (also known as disparate impact)- unintentional discrimination
What did the Equal Pay Act of 1963 do?
Required equal pay for men and women performing substantially equal work (related to the idea of comparable worth).
CRA 1964
Title VII- created the EEOC- illegal to discriminate in an employment setting on the basis of sex, religion, race, or national origin. Exemptions: BFOQ, BFSS, veterans' preference rights
ADEA 1967
40+ protected, must show linkage b/w test and business necessity (age-related skill declines)
Older Workers Benefits Protection Act 1990
Relevant to companies in trouble; it gives them a way to induce older workers to leave first, legally.
Immigration Reform and Control act of 1986
Can't hire people who aren't allowed to work in the country. If a native and a non-native candidate are equally qualified, preference can be given to the native candidate.
ADA 1990
Americans with Disabilities Act
Disability- a physical/mental impairment that limits major life activities. Protected if you have an impairment, if your employer thinks you have an impairment, or if you have a record of an impairment. Employers have to make reasonable accommodations, but not beyond undue hardship.
CRA 1991
Dealt with within-group norming, making it unlawful to adjust scores of, use different cutoff scores for, or otherwise alter the results of employment-related tests on the basis of race, color, religion, sex, or national origin.
FMLA 1993
Family and medical leave- up to 12 weeks of unpaid leave.
EEOC
Equal Employment Opportunity Commission
Enforces federal civil rights laws
What are the two types of sexual harassment?
Quid pro quo & hostile work environment.
Both prohibited by CRA Title VII: unwelcome advances, requests for sexual favors, or other verbal or physical conduct when submission to the conduct is either explicitly or implicitly a term or condition of the indiv's employment. Oncale v. Sundowner established that same-sex harassment is also covered.
What has research found about background checks and reference checks?
They may be legally risky & reference checks do not provide predictive data. Also- government positions more prone to litigation.
What is negligent hiring and how can a company avoid it?
Employers can be held liable for the actions of their employees when applicants aren't investigated thoroughly before hiring. Companies can reduce this risk by requiring applicants to sign waivers saying they will not sue and that they waive their right to privacy.
What are the three common law exceptions to employment-at-will?
Breach of implied contract, breach of good faith, discharge in violation of public policy.
What are some alternative perspectives of cognitive ability?
Emotional intelligence, crystallized intelligence, fluid intelligence, practical intelligence, academic intelligence.
What is EI and is it useful in personnel research?
Emotional intelligence- the capacity to reason about emotions: perceive emotion, use emotion to facilitate thought, understand emotion & manage emotion. Mixed research.
Does cognitive ability predict performance and if so what are the implications?
Yes, but it is not the whole story. Emotional intelligence can help employees compensate for lower cognitive ability in the workplace.
What is practical intelligence and how does it relate to cognitive ability?
Practical intelligence is the ability that individuals use to find a more optimal fit between themselves and the demands of the environment. It is another form of cognitive ability and when used in conjunction with g to predict outcomes there is improved prediction.
How does culture affect our perceptions of intelligence?
Indiv's from different cultural contexts behave and think differently. If we don't consider the context, then we risk imposing our own values & views of the world on the people being studied.
Why is cognitive ability testing so controversial? How can a company manage cognitive ability tests?
It appears to be a good, valid way to predict performance but it produces adverse impact against protected groups. This is largely unexplained but researchers believe it is due to social and cultural differences (access to resources, different cultural references, etc.).
The testing itself looks at one's ability to process information (infers one's ability to do complex jobs). The value/outcome is g (also known as GMA, general mental ability). If you must use it, use multiple types of tests (not g alone).
What is differential validity?
When you have valid predictors but they create adverse impact. You could also end up with a predictor that only appears valid when you look at the groups in aggregate but is not valid within subgroups.
What is the proportional representation rule in adverse impact?
4/5's rule- adverse impact is indicated when one group's selection rate is less than 4/5 (80%) of the rate for the group with the highest selection rate, i.e., members of one group are selected at substantially greater rates than members of another.
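A minimal sketch of that 80% check (Python); the group names and counts here are hypothetical:

# 4/5ths (80%) rule: compare each group's selection rate to the highest
# group's rate; an impact ratio below 0.80 suggests adverse impact.
applicants = {"group_a": 100, "group_b": 80}   # hypothetical applicant counts
hired = {"group_a": 40, "group_b": 16}         # hypothetical hires

rates = {g: hired[g] / applicants[g] for g in applicants}
highest = max(rates.values())
for group, rate in rates.items():
    ratio = rate / highest
    flag = "possible adverse impact" if ratio < 0.80 else "ok"
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} ({flag})")
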
What is differential prediction?
Looking at the differences in slopes (bias 1) & intercepts (bias 2) in regression lines for subgroups. You see this most frequently in minorities doing less well on the job than their scores would predict. This happens when you have the same regression line for both.
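A minimal sketch of checking differential prediction by fitting separate regression lines per subgroup (Python/numpy); the scores and performance ratings are made up for illustration:

import numpy as np

# Hypothetical test scores (x) and job performance ratings (y) for two subgroups.
x_a = np.array([50, 60, 70, 80, 90]); y_a = np.array([3.0, 3.4, 3.9, 4.3, 4.8])
x_b = np.array([50, 60, 70, 80, 90]); y_b = np.array([2.6, 3.0, 3.5, 3.9, 4.4])

# Fit y = slope * x + intercept separately for each group.
slope_a, int_a = np.polyfit(x_a, y_a, 1)
slope_b, int_b = np.polyfit(x_b, y_b, 1)

# Similar slopes but different intercepts (intercept bias) mean a single common
# regression line would over- or under-predict performance for one of the groups.
print(f"Group A: slope={slope_a:.3f}, intercept={int_a:.3f}")
print(f"Group B: slope={slope_b:.3f}, intercept={int_b:.3f}")
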
What are different ways you can evaluate selection fairness?
Regression model (same slope, different intercept),
Subjective regression model (Watered down quota),
equal risk model (low cutoff),
constant ratio model (quota system- places people into areas they are ill-suited for),
conditional probability model (uses within-group norming-illegal),
sliding bands (controversial)
When looking at recommendations & background checks what does research say and what do companies do?
Research says they have low validity, companies still use them.
What is biodata? What are the implications of using biodata?
Biodata = life history items.
They are pretty good predictors of later performance. Faking is possible and there are still many unknowns (is the info redundant? what does it mean?). Ways to assess: Weighted Application Blanks (WAB) & Biographical information blanks (BIB).
Is drug screening recommended? What are some things you must consider before using them?
Drug screening is controversial. There are privacy issues and also the courts do not see job performance alone as a good enough reason. You can create trust issues within your organization. ADA issues may also arise (alcoholism as disease).
Why are polygraphs illegal?
They have low validity.
How can you improve interviews to have higher validity?
Structure them. Take notes (otherwise up to 50% of the information is lost). Consider social/interpersonal biases and individual differences. Also use experience-based and situation-based questions.
What are some different ways that people make decisions when it comes to selection? How can you evaluate those decisions?
Linear models, using moderator & suppressor variables
Alternative prediction models- MR, multiple cutoff, multiple hurdles (meet min criteria to pass to the next stage; see the sketch after this list)
Angoff method (experts)
Expectancy charts- expected perf given test score
Evaluation: Selection efficiency, models of utility (Naylor-Shine & Brogden-Cronbach-Gleser models)
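A minimal sketch of the multiple-hurdle approach from the list above (Python); the stage names and cutoffs are hypothetical:

# Multiple hurdles: an applicant must meet each stage's minimum cutoff, in
# order, to continue; failing any hurdle ends the process.
hurdles = [("cognitive test", 70), ("work sample", 60), ("interview", 3)]

def passes_hurdles(scores):
    """scores maps a stage name to the applicant's score on that stage."""
    for stage, cutoff in hurdles:
        if scores.get(stage, 0) < cutoff:
            return False   # stopped at this stage
    return True

print(passes_hurdles({"cognitive test": 82, "work sample": 65, "interview": 4}))  # True
print(passes_hurdles({"cognitive test": 82, "work sample": 55, "interview": 4}))  # False
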
In personnel, how is face validity conceptualized? What are the advantages of it?
the extent to which applicants perceive the content of the selection procedure to be related to the content of the job. Enhances test-taking motivation, trust, and liking for the org; doubles as a realistic job preview; reduces susceptibility to legal challenge.
What is banding, how are org's using it and what are applicant reactions?
Banding- grouping applicants by test scores into ranges called bands, all people in band are considered equal and candidate is chosen randomly from band w/preference for protected groups or by using additional selection criteria.
Reactions to banding are mixed and are tied closely to the applicant's self-interest and the association of banding with affirmative action.
What is affirmative action?
Affirmative action refers to policies that take factors including "race, color, religion, sex, or national origin" into consideration in order to benefit an underrepresented group, usually as a means to counter the effects of a history of discrimination.
What is measurement in personnel and what are the four scales of measurement?
Measurement is the assignment of numerals to objects or events according to rules (quantitative or qualitative).
4 Scales:
Nominal, ordinal, interval & ratio
Psych traits are usually measured on nominal or ordinal scales; they are treated as interval for more complicated analyses.
What is Item Response Theory and what are its advantages?
IRT- the closest we can get to a ratio scale in psychology, using 3 parameters
1. Difficulty (assessed with ICC)
2. Discrimination (steepness of the slope of the curve; the correlation b/w item and total test score)
3. Guessing parameter (chance)
Advantages- quick assessment of ability (adaptive testing), can look at difficulty across subgroups, can examine if items are appropriate/valid for the test. Make sure to factor analyze scales.
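A minimal sketch of the three-parameter logistic (3PL) item characteristic curve these parameters define (Python); the parameter values are hypothetical:

import math

def p_correct(theta, a, b, c):
    """3PL model: probability of a correct response given ability theta,
    discrimination a, difficulty b, and guessing parameter c."""
    return c + (1 - c) / (1 + math.exp(-a * (theta - b)))

# Hypothetical item: moderate discrimination, average difficulty, 20% guessing.
for theta in (-2, -1, 0, 1, 2):
    print(theta, round(p_correct(theta, a=1.2, b=0.0, c=0.2), 2))
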
What are some different classifications of tests? How can you tell?
Speed test, power test.
Look at content, administration, scoring. Factor analyze!
What are some things that might lower the reliability of an item or test?
Range restriction, difficulty of the test, sample size, representativeness of the sample.
What is the Standard Error of Measurement (SEM) and what are its uses?
SEM- an estimate of the standard deviation of the distribution of scores of a single individual, what they would get if they took a test an infinite number of times... with this we can say with some confidence whether a person's "true" score will fall within a defined range.
Useful for determining whether measures describing individuals differ significantly and whether a test discriminates differently in different groups; allows us to think in terms of confidence intervals.
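A minimal sketch of the usual SEM formula and the confidence interval it gives around an observed score (Python); the SD and reliability values are hypothetical:

import math

sd = 15.0            # standard deviation of observed test scores (hypothetical)
reliability = 0.90   # e.g., coefficient alpha (hypothetical)

sem = sd * math.sqrt(1 - reliability)   # SEM = SD * sqrt(1 - reliability)
observed = 110
lower, upper = observed - 1.96 * sem, observed + 1.96 * sem
print(f"SEM = {sem:.2f}; 95% CI around {observed}: [{lower:.1f}, {upper:.1f}]")
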
What is generalizability theory and why does it matter?
Can we generalize a score to other situations (with same specific controls)? The way we interpret the results relies on the norms (norm-referenced).
Can you have validity without reliability? Reliability without validity?
A test can be reliably crappy but it may not be valid, a test cannot be valid unless it is also reliable.
What IS validity?
Validity tells us what a test is measuring and how well it measures it.
What are some ways to collect inferential data? What is it for?
Inferential data is collected to assess validity- the idea that the test is measuring what it's supposed to measure- but there are many ways to assess this.
Ways to collect: content-related, criterion-related, and construct-related evidence (see the next card).
What are the different types of evidence you can collect to make inferences about validity?
Content-related- tapping appropriate content
Criterion-related- predictive, how people on job do
Construct-related- internal consistency, MTMM matrices, cross-validation.
What is coefficient alpha and how is it affected by different aspects of a test?
Coefficient alpha is Cronbach's alpha, which is a statistical measure of RELIABILITY.
Affected by number of items, item inter-correlations, and dimensionality
Assumes uni-dimensionality, useful for ITEM-SPECIFIC variance in a uni-dimensional test of interest.
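A minimal sketch of the coefficient alpha computation from item variances and total-score variance (Python/numpy); the item scores are made up:

import numpy as np

# Rows = respondents, columns = items (a hypothetical 5-person, 4-item test).
scores = np.array([
    [4, 5, 4, 4],
    [3, 3, 2, 3],
    [5, 4, 5, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
])

k = scores.shape[1]
item_vars = scores.var(axis=0, ddof=1)        # variance of each item
total_var = scores.sum(axis=1).var(ddof=1)    # variance of total scores
alpha = (k / (k - 1)) * (1 - item_vars.sum() / total_var)
print(f"coefficient alpha = {alpha:.2f}")     # rises with more items and higher inter-correlations
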
When constructing a parallel-forms multi-dimensional test how do you replicate the factor structure in each form?
Separately consider each item on the test and write one or more items that parallel it
What pieces of information are used in utility analysis?
1. Base rate (proportion of ppl who would be successful w/out the intervention)
2. Selection ratio- proportion of ppl we're going to hire
3. Validity of the test (correlation b/w test score & future job performance)
4. General principle- anticipate consequences; what is the value of the investment in the intervention
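A minimal sketch of how such pieces combine in the Brogden-Cronbach-Gleser utility model named in the decision-models card above (Python); all input values are hypothetical:

# Brogden-Cronbach-Gleser estimate: gain = N_hired * T * r_xy * SD_y * mean_Zx - cost
n_hired = 20          # number of people selected
tenure = 2.0          # average years selectees stay (T)
validity = 0.40       # correlation between test score and job performance (r_xy)
sd_y = 10000.0        # SD of job performance in dollars (SD_y)
mean_zx = 1.0         # mean standardized test score of those hired
total_cost = 15000.0  # total cost of testing all applicants

gain = n_hired * tenure * validity * sd_y * mean_zx - total_cost
print(f"estimated utility gain: ${gain:,.0f}")
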
What are the 8 steps of a JA?
1. Collect info about job including existing job descriptions, direct observation, and O*Net
2. Determine who the SME's are & their demographics.
3. Meet/observe SME's
4. Develop initial KSA statements based on SME info
5. Develop final task/KSAO lists
6. Determine which KSA's are most essential for job
7. Develop critical incidents w/SME's
8. Begin to develop test items based on critical incidents.
What are some different things you can do with a JA?
Workforce planning, initial screening, T & D, Perf management, org exit
What is erroneous acceptance and erroneous rejection in the selection process?
Erroneous acceptance- admitting someone to the next stage that cannot pass that hurdle
erroneous rejection- someone rejected at a prior stage who could have passed a later stage.
What are 3 challenges in criterion development?
JP unreliability, JP observation, dimensionality of JP (multidimensional/dynamic).
When developing criterion it is important to note that it is dynamic. What are the things that relate to the criterion that change over time?
Group average performance, validity of test, rank orders of people.
what is criterion sensitivity and what are some types of bias?
criterion sensitivity- the degree to which a criterion discriminates b/w good & bad employees, so that decisions based on it are accurate.
Types of bias: knowledge of predictors (cuing), group membership bias, rating bias.
What is criterion equivalence?
Like a unicorn, it does not really exist. It is the same construct measured in exactly the same way (related to collinearity).
What does research have to say about utility analyses?
Mixed support. Some say it enhances managerial decision making and some don't.
Utility analyses are thought to evaluate "what if" scenarios- encourage practitioners to consider validity, cut score, AI & utility trade-offs b/w different selection methods.
What is test score banding? What is a sliding band?
Banding refers to a set of approaches to the use of tests where people whose scores fall within a specific range, called the band, are regarded as having the same score. It's a way of breaking ties in test scores.

It is common to define bandwidth in terms of the standard error of the difference between scores. Essentially, banding is an alternative to top-down selection of candidates by test score. Banding can reduce the adverse impact of tests that are used for personnel selection.

There are various kinds of sliding vs fixed bands (random, minority preferences, etc.) but with sliding bands as you exhaust the top candidates in the band the band falls downward (always the same number in the band). One motivating factor for using the sliding band approach is increased minority selection.

They are controversial.
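A minimal sketch of a band defined by the standard error of the difference (SED) between scores, the common choice of bandwidth mentioned above (Python); the SD, reliability, and scores are hypothetical:

import math

sd, reliability = 10.0, 0.85           # hypothetical test SD and reliability
sem = sd * math.sqrt(1 - reliability)  # standard error of measurement
sed = sem * math.sqrt(2)               # standard error of the difference between two scores
bandwidth = 1.96 * sed                 # scores closer than this are treated as tied

scores = sorted([88, 86, 84, 81, 79, 74], reverse=True)
top = scores[0]
band = [s for s in scores if top - s <= bandwidth]
print(f"bandwidth = {bandwidth:.1f}; scores treated as equivalent: {band}")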