81 Cards in this Set
- Front
- Back
Effective measurement and data analytics can result in a
|
competitive edge
|
|
Improperly assessing and measuring candidate characteristics can lead to:
|
Systematically hiring the wrong people
Offending and losing good candidates
Exposing your company to legal action |
|
the process of assigning numbers according to some rule or convention to aspects of people, jobs, job success, or aspects of the staffing system
|
measurement
|
|
measures relevant to staffing
|
characteristics of the job
aspects of the staffing system
characteristics of the job candidate
staffing outcomes |
|
enables the creation of job requirements and job rewards matrices
|
characteristics of the job
|
|
such as the number of days a job post is run, where it is run, and the recruiting message
|
aspects of the staffing system
|
|
ability or personality
|
characteristics of job candidate
|
|
performance or turnover
|
staffing outcomes
|
|
The numerical outcomes of measurement are
|
data
|
|
there are 2 types of data
|
predictive
criterion |
|
is information about measures used to make projections about outcomes.
|
predictive data
|
|
is information about important outcomes of the staffing process.
|
criterion data
|
|
4 types of measurement
|
Nominal
Ordinal
Interval
Ratio |
|
The process of assigning numerical values during measurement
|
scoring
|
|
the unadjusted scores on a measure
|
raw scores
|
|
measures in which the scores have meaning in and of themselves
|
Criterion-referenced measures
|
|
measures in which the scores have meaning only in comparison to the scores of other respondents
|
Norm-referenced measures
|
|
a symmetrical, bell-shaped curve representing the distribution of a characteristic
|
Normal curve
|
|
converted raw scores that indicate where a person’s score lies in comparison to a referent group
|
standard scores
|
|
Indicates how many units of standard deviations the individual’s score is above or below the mean of the referent group
|
standard scores
|
|
A standard score is negative when the target individual’s raw score is
|
below the referent group’s mean
|
|
A standard score is positive when the target individual’s raw score is
|
above the referent group’s mean
|
|
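The three standard-score cards above reduce to one formula: a standard (z) score is the raw score minus the referent group’s mean, divided by the referent group’s standard deviation. A minimal sketch with hypothetical referent-group scores:

```python
from statistics import mean, pstdev

def z_score(raw, referent_scores):
    """Standard score: how many SDs `raw` lies above or below the referent mean."""
    mu = mean(referent_scores)
    sigma = pstdev(referent_scores)  # population SD of the referent group
    return (raw - mu) / sigma

referent = [60, 70, 80, 90, 100]  # hypothetical referent-group raw scores; mean = 80
print(z_score(95, referent))      # positive: raw score above the referent mean
print(z_score(65, referent))      # negative: raw score below the referent mean
```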
Correlation coefficient, also called “Pearson’s r” or the “bivariate correlation,” is a
|
single number that ranges from -1 to +1 that reflects the direction (positive or negative) and magnitude (strength) of the relationship between two variables.
|
|
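Pearson’s r can be computed directly from its definition (covariance divided by the product of the standard deviations). A sketch using hypothetical predictor scores and criterion ratings:

```python
from statistics import mean

def pearson_r(x, y):
    """Bivariate correlation: direction and magnitude of a linear relationship, -1 to +1."""
    mx, my = mean(x), mean(y)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

# hypothetical predictor (test) scores and criterion (job performance) ratings
test_scores = [55, 60, 70, 80, 90]
performance = [2.1, 2.4, 3.0, 3.4, 4.0]
print(round(pearson_r(test_scores, performance), 3))  # close to +1: strong positive relationship
```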
Sampling error is the
|
variability in sample correlations due to chance.
|
|
You can address sampling error
|
through statistical significance testing procedures.
|
|
the degree to which the observed relationship is not likely due to sampling error.
(This is a minimum requirement for establishing a meaningful relationship) |
Statistical significance
|
|
the observed relationship is large enough to be of value in a practical sense.
|
Practical significance
|
|
In a large enough sample, a very small correlation would be statistically significant but the relationship
|
may not be strong enough to justify the expense and time of using the predictor
|
|
A statistical technique that predicts an outcome using one or more predictor variables;
|
multiple regression
|
|
it identifies the ideal weights to assign each predictor to maximize the
|
validity of a set of predictors;
|
|
the analysis is based on each predictor’s correlation with the outcome and the degree
|
to which the predictors are themselves intercorrelated
|
|
Multiple regression examines the effect of each predictor variable after
|
statistically controlling for the effects of other predictors in the equation
|
|
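The regression cards above can be sketched as ordinary least squares: the ideal weights solve the normal equations (XᵀX)b = Xᵀy, which is also what “statistically controlling” for the other predictors means. A minimal pure-Python sketch with two hypothetical predictors (the data and predictor names are assumptions, not from the cards):

```python
# Minimal multiple-regression sketch: solve the normal equations (X'X) b = X'y.

def transpose(m):
    return [list(col) for col in zip(*m)]

def matmul(a, b):
    return [[sum(x * y for x, y in zip(row, col)) for col in zip(*b)] for row in a]

def solve(a, b):
    """Gauss-Jordan elimination on the augmented matrix [a | b]."""
    n = len(a)
    m = [row[:] + [b[i][0]] for i, row in enumerate(a)]
    for i in range(n):
        pivot = max(range(i, n), key=lambda r: abs(m[r][i]))
        m[i], m[pivot] = m[pivot], m[i]
        for r in range(n):
            if r != i:
                f = m[r][i] / m[i][i]
                m[r] = [v - f * w for v, w in zip(m[r], m[i])]
    return [m[i][n] / m[i][i] for i in range(n)]

# columns: intercept, cognitive-ability score, interview rating (hypothetical data)
X = [[1, 50, 3], [1, 60, 4], [1, 70, 4], [1, 80, 5], [1, 90, 5]]
y = [[2.0], [2.6], [3.0], [3.7], [4.1]]   # criterion: job performance

Xt = transpose(X)
weights = solve(matmul(Xt, X), matmul(Xt, y))
print([round(w, 3) for w in weights])     # [intercept, weight_ability, weight_interview]
```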
refers to how dependably or consistently a measure assesses a particular characteristic
|
Reliability
|
|
Measurement error influences
|
reliability.
|
|
Measurement error can
|
be random or systematic.
|
|
To evaluate a measure’s reliability, you should consider:
|
The type of measure
The type of reliability estimate reported
The context in which the measure will be used |
|
4 types of error
|
random
systematic
deficiency
contamination |
|
error that is not due to any consistent cause
|
Random error
|
|
error that occurs because of consistent and predictable factors
|
Systematic error
|
|
error that occurs when you fail to measure important aspects of the attribute you would like to measure
|
Deficiency error
|
|
error that occurs when other factors unrelated to whatever is being assessed affect the observed scores
|
Contamination error
|
|
4 types of reliability
|
test-retest
alternate or parallel form
internal consistency
inter-rater |
|
reflects the repeatability of scores over time and the stability of the underlying construct being measured
|
Test-retest reliability
|
|
indicates how consistent scores are likely to be if a person completes two or more forms of the same measure
|
Alternate or parallel form reliability
|
|
indicates the extent to which items on a given measure assess the same construct
|
Internal consistency reliability
|
|
indicates how consistent scores are likely to be if the responses are scored by two or more raters using the same item, scale, or instrument
|
Inter-rater reliability
|
|
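One common internal-consistency estimate (not named on the cards) is Cronbach’s alpha: the more the items co-vary relative to total-score variance, the closer alpha is to 1. A sketch with hypothetical item responses:

```python
from statistics import pvariance

def cronbach_alpha(items):
    """items: one list per item, each holding the same respondents' scores."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]       # each person's total score
    item_var = sum(pvariance(item) for item in items)  # summed per-item variances
    return (k / (k - 1)) * (1 - item_var / pvariance(totals))

# hypothetical 3-item scale answered by 5 respondents
items = [
    [4, 5, 3, 2, 4],   # item 1
    [4, 4, 3, 2, 5],   # item 2
    [5, 4, 2, 2, 4],   # item 3
]
print(round(cronbach_alpha(items), 2))  # closer to 1 = more internally consistent
```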
is the margin of error that you should expect in an individual score because of the imperfect reliability of the measure. It represents the spread of scores you might have observed had you tested the same person repeatedly.
|
The standard error of measurement (SEM)
|
|
gives the range, built from a person’s earned score plus or minus a multiple of the SEM, within which the person’s “true” score is expected to lie at some desired level of confidence.
|
The confidence interval
|
|
The lower the standard error
|
the more accurate the measurements.
|
|
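The SEM cards above follow the standard psychometric identity SEM = SD × √(1 − reliability); a confidence interval is then the observed score plus or minus a z-multiple of the SEM (z ≈ 1.96 for 95%). A sketch with hypothetical test statistics:

```python
from math import sqrt

def sem(sd, reliability):
    """Standard error of measurement from the test's SD and its reliability."""
    return sd * sqrt(1 - reliability)

def confidence_interval(observed, sd, reliability, z=1.96):
    """Band around an observed score likely to contain the 'true' score (z=1.96 ~ 95%)."""
    margin = z * sem(sd, reliability)
    return observed - margin, observed + margin

# hypothetical test: SD = 10, reliability = .91, observed score = 80
print(sem(10, 0.91))                      # 3.0 (approximately)
print(confidence_interval(80, 10, 0.91))  # roughly (74.1, 85.9)
```

Note that a reliability of 1.0 makes the SEM zero, matching the card: each observed score is then the person’s true score.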
If the SEM is 0, then each observed score is that
|
person’s true score
|
|
refers to how well a measure assesses a given construct and the degree to which you can make specific conclusions or predictions based on observed scores.
|
Validity
|
|
______ will tell you how useful a measure is for a particular situation; ______ will tell you how consistent scores from that measure will be.
|
Validity, reliability
|
|
You cannot draw valid conclusions unless you are sure that the measure is ______. Even when a measure is _______, it may not be _____.
|
reliable; reliable; valid.
|
|
is the cumulative and ongoing process of establishing the job relatedness of a measure
|
Validation
|
|
3 types of validation
|
content-related
construct-related
criterion-related |
|
Demonstrating that the content of a measure assesses important job-related behaviors
|
Content-related validation
|
|
Demonstrating that a measure assesses the construct, or characteristic, it claims to measure
|
Construct-related validation
|
|
Demonstrating that there is a statistical relationship between scores from a measure and the criterion, usually some aspect of job success
|
Criterion-related validation
|
|
is a number between 0 and +1 that indicates the magnitude of the relationship between a predictor (such as test scores) and the criterion (such as a measure of actual job success).
|
A validity coefficient
|
|
The validity coefficient is the
|
absolute value of the correlation between the predictor and criterion.
|
|
Validity coefficients rarely exceed
|
.40 in staffing contexts
|
|
is a subjective assessment of how well items seem to be related to the requirements of the job.
|
Face validity
|
|
Face validity is often important to job applicants who tend to react
|
negatively to assessment methods if they perceive them to be unrelated to the job or not face valid.
|
|
Even if a measure seems face valid, if it does not predict __________, then it should not be used.
|
job performance
|
|
a valid assessment system can result in adverse impact
|
Applicants
|
|
a valid assessment system can have an unacceptably long time to fill or cost per hire
|
Organization’s time and cost
|
|
a system can be valid but if the system is too long or onerous then applicants, particularly high-quality applicants, are more likely to drop out of consideration
|
Future recruits
|
|
a valid assessment system may favor external applicants or not give all qualified employees an equal chance of applying for an internal position
|
Current employees
|
|
the degree to which evidence of validity obtained in one situation can be generalized to another situation without further study
Based on meta-analysis
Legal acceptability not yet established
No guarantee that the same validity will be found in any specific workplace |
Validity generalization
|
|
Using existing assessment methods...
|
Examine available validation evidence supporting using the measure for specific purposes.
Identify the possible valid uses of the measure.
Establish the similarity of the sample group(s) on which the measure was developed with the group(s) with which you would like to use the measure.
Confirm job similarity.
Examine adverse impact evidence. |
|
occur when you fail to hire someone who would have been successful at the job (false negatives) or you hire someone who is not successful at the job (false positives).
|
Selection errors
|
|
even though there are errors, we use assessments to
|
enable organizations to make more effective staffing decisions than does the use of simple observations or random decision making, even if they are not perfect.
|
|
The practice of using a variety of measures and procedures to more fully assess people is referred to as the _____________________, and will help reduce the number of selection errors and boost the effectiveness of your overall decision making.
|
whole-person approach to assessment
|
|
refers to the amount of judgment or bias involved in scoring an assessment measure.
|
Objectivity
|
|
The scoring of objective measures is free of
|
personal judgment or bias.
|
|
Subjective measures contain items for which the score can be influenced by the ______ (e.g., essay or interview questions).
|
attitudes, biases, and personal characteristics of the person doing the scoring
|
|
Because they produce the most accurate measurements, it is best to use _______________whenever possible.
|
standardized, objective measures
|
|
creating an assessment system
|
Conduct a job analysis to identify the important KSAOs and competencies required of a successful employee.
Identify reliable and valid methods of measuring these KSAOs and competencies, and create a system for measuring and collecting the resulting data.
Examine the data collected from each measure to ensure that it has an appropriate mean and standard deviation.
Use correlation or regression analysis to evaluate any redundancies among the measures and to assess how well the group of measures predicts job success.
Consider adverse impact and the cost of the measures in evaluating each measure.
After the final set of measures is identified, develop selection rules to determine which scores are passing.
Periodically reevaluate the usefulness and effectiveness of the system to ensure that it is still predicting job success without adverse impact. |
|
benchmarking comparative dimensions
|
Application rates
Average starting salaries
Average time to fill
Average cost per hire |
|
It is sometimes useful to compare an organization’s staffing data with that of other similar organizations
|
benchmarking
|
|
Determinants of effectiveness of an assessment method include:
|
Validity
Return on investment (ROI)
Applicant reactions
Usability
Adverse impact
Selection |