25 Cards in this Set
- Front
- Back
Ability
|
A defined domain of cognitive, perceptual, psychomotor, or physical functioning.
|
|
Bias
|
In a statistical context, a systematic error in a score. In discussing fairness, bias refers to variance due to contamination or deficiency that differentially affects the scores of different groups of individuals.
|
|
Concurrent validity evidence
|
Demonstration of the relationship between job performance and other work outcomes, and scores on selection procedures obtained at approximately the same time.
|
|
Confidence Interval
|
An interval between two values on a score scale within which, with specified probability, a score or parameter of interest is expected to lie.
|
|
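To make this concrete, here is a minimal sketch (hypothetical scores and function name, not part of the original definition) of an approximate 95% confidence interval for a mean, assuming roughly normal sampling error:

```python
import statistics

def confidence_interval(scores, z=1.96):
    """Interval within which the population mean is expected to lie
    with ~95% probability (z = 1.96 for a two-sided 95% interval)."""
    mean = statistics.mean(scores)
    # Standard error of the mean: sample SD divided by sqrt(n)
    se = statistics.stdev(scores) / len(scores) ** 0.5
    return (mean - z * se, mean + z * se)

scores = [72, 75, 78, 80, 81, 84, 86, 88, 90, 92]
low, high = confidence_interval(scores)
```

Widening the interval (a larger z) raises the probability that the parameter lies inside it, at the cost of precision.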
Construct
|
A concept or characteristic of individuals inferred from empirical evidence or theory.
|
|
Criterion
|
A measure of work performance or behavior, such as productivity, accident rate, absenteeism, tenure, reject rate, training score, and supervisory ratings of job-relevant behaviors, tasks, or activities.
|
|
Criterion-related validity evidence
|
Demonstration of a statistical relationship between scores on a predictor and scores on a criterion measure.
|
|
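The statistical relationship is typically summarized as a correlation (the validity coefficient). A minimal sketch with hypothetical data, computing a Pearson correlation between predictor scores and criterion scores:

```python
def pearson_r(xs, ys):
    """Pearson correlation between predictor and criterion scores."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical: test scores (predictor) and supervisor ratings (criterion)
predictor = [55, 60, 65, 70, 75, 80]
criterion = [2.1, 2.4, 3.0, 3.2, 3.9, 4.1]
validity_coefficient = pearson_r(predictor, criterion)
```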
Cross-validation
|
The application of a scoring system or set of weights empirically derived in one sample to a different sample from the same population, to investigate the stability of relationships based on the original weights.
|
|
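The key point is that the weights are fixed in one sample and then applied unchanged in another. A minimal sketch (hypothetical data; simple least-squares weights stand in for any empirically derived scoring system):

```python
def fit_simple_regression(xs, ys):
    """Least-squares slope and intercept derived in the development sample."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
             / sum((x - mx) ** 2 for x in xs))
    return slope, my - slope * mx

def apply_weights(xs, slope, intercept):
    """Apply the original weights, unchanged, to a new sample."""
    return [slope * x + intercept for x in xs]

# Weights derived in sample A ...
sample_a_x, sample_a_y = [1, 2, 3, 4], [2.0, 4.1, 5.9, 8.0]
slope, intercept = fit_simple_regression(sample_a_x, sample_a_y)
# ... then applied to a different sample B from the same population
cross_validated_predictions = apply_weights([2, 3, 5], slope, intercept)
```

Comparing predictive accuracy in sample B against sample A indicates how much the original relationship shrinks on cross-validation.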
Cutoff score
|
A score at or above which applicants are selected for further consideration in the selection process. The cutoff score may be established on the basis of a number of considerations (e.g., labor market, organizational constraints, normative information). Cutoff scores are not necessarily criterion-referenced, and different organizations may establish different cutoff scores on the same selection procedure based on their needs.
|
|
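A small sketch of the "at or above" rule, with hypothetical applicants, showing that two organizations may set different cutoffs on the same procedure:

```python
def select_for_further_consideration(applicants, cutoff):
    """Keep applicants whose score is at or above the cutoff."""
    return [name for name, score in applicants if score >= cutoff]

applicants = [("Ana", 71), ("Ben", 58), ("Chen", 83), ("Dee", 65)]
# The same procedure, two different organizational cutoffs
passed_org_a = select_for_further_consideration(applicants, 65)
passed_org_b = select_for_further_consideration(applicants, 80)
```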
Fairness
|
There are multiple perspectives on fairness. There is agreement that issues of equitable treatment, predictive bias, and scrutiny for possible bias when subgroup differences are observed are important concerns in personnel selection; there is not, however, agreement that the term “fairness” can be uniquely defined in terms of any of these issues.
|
|
Generalized evidence of validity
|
Evidence of validity that generalizes to setting(s) other than the setting(s) in which the original validation evidence was documented. Generalized evidence of validity is accumulated through such strategies as transportability, synthetic validity/job component validity, and meta-analysis.
|
|
Internal consistency reliability
|
An indicator of the reliability of a score derived from the statistical interrelationships among item responses or scores on different parts of an assessment.
|
|
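One common index of internal consistency is Cronbach's alpha, which compares the sum of the item variances with the variance of total scores. A minimal sketch with hypothetical responses (rows = assessees, columns = items):

```python
def cronbach_alpha(item_scores):
    """Cronbach's alpha from a matrix of rows = assessees, cols = items."""
    n_items = len(item_scores[0])

    def variance(xs):  # sample variance (n - 1 denominator)
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    item_vars = [variance([row[i] for row in item_scores])
                 for i in range(n_items)]
    total_var = variance([sum(row) for row in item_scores])
    return (n_items / (n_items - 1)) * (1 - sum(item_vars) / total_var)

# Hypothetical 4-person, 3-item assessment
responses = [
    [4, 5, 4],
    [2, 3, 2],
    [5, 5, 4],
    [1, 2, 2],
]
alpha = cronbach_alpha(responses)
```

Higher values (closer to 1) indicate that the items covary strongly, i.e., they appear to measure the same thing.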
Multiple-hurdle model
|
The implementation of a selection process whereby two or more separate procedures must be passed sequentially.
|
|
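The sequential structure can be sketched as follows (hypothetical applicants and hurdle rules); only those who survive each hurdle advance to the next procedure:

```python
def multiple_hurdle(applicants, hurdles):
    """Apply each hurdle in sequence; only survivors face the next one."""
    remaining = list(applicants)
    for passes in hurdles:
        remaining = [a for a in remaining if passes(a)]
    return remaining

applicants = [
    {"name": "Ana", "test": 82, "interview": 4},
    {"name": "Ben", "test": 74, "interview": 5},
    {"name": "Chen", "test": 90, "interview": 2},
]
hurdles = [
    lambda a: a["test"] >= 80,      # hurdle 1: written test
    lambda a: a["interview"] >= 3,  # hurdle 2: structured interview
]
finalists = multiple_hurdle(applicants, hurdles)
```

Note that Ben never reaches the interview hurdle despite a strong interview score, which is the defining property of a multiple-hurdle (as opposed to compensatory) model.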
Predictive bias
|
The systematic under- or overprediction of criterion performance for people belonging to groups differentiated by characteristics not relevant to criterion performance.
|
|
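One simple diagnostic (a sketch with hypothetical values, not a full regression-based bias analysis) is the mean prediction error per group when a single common prediction equation is used:

```python
def mean_prediction_error(predicted, actual):
    """Average of (predicted - actual); values far from zero for one
    group but not another signal systematic over-/underprediction."""
    errors = [p - a for p, a in zip(predicted, actual)]
    return sum(errors) / len(errors)

# Hypothetical: one common regression line applied to two groups
group_a_error = mean_prediction_error([3.0, 3.5, 4.0], [3.1, 3.4, 4.0])
group_b_error = mean_prediction_error([3.0, 3.5, 4.0], [3.5, 4.0, 4.5])
```

Here group B's criterion performance is systematically underpredicted (negative mean error), while group A's is not, which is the pattern the definition describes.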
Predictive validity evidence
|
Demonstration of the relationship between selection procedure scores and some future work behavior or work outcomes.
|
|
Predictor
|
A measure used to predict criterion performance.
|
|
Reliability
|
The degree to which scores for a group of assessees are consistent over one or more potential sources of error (e.g., time, raters, items, conditions of measurement) in the application of a measurement procedure.
|
|
Restriction of range or variability
|
Reduction in the observed score variance of a sample, compared to the variance of an entire population, as a consequence of constraints on the process of sampling.
|
|
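A quick numeric illustration (hypothetical scores): selecting only high scorers shrinks the observed variance relative to the full applicant population, which in turn attenuates observed validity coefficients.

```python
def variance(xs):
    """Population variance (n denominator)."""
    m = sum(xs) / len(xs)
    return sum((x - m) ** 2 for x in xs) / len(xs)

# Full applicant population vs. the selected (range-restricted) subset
population = [50, 55, 60, 65, 70, 75, 80, 85, 90, 95]
selected = [x for x in population if x >= 75]  # only high scorers retained
population_var = variance(population)
restricted_var = variance(selected)
```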
Skill
|
Level of proficiency on a specific task or group of tasks.
|
|
Standardization
|
(a) In test construction, the development of scoring norms or protocols based on the test performance of a sample of individuals selected to be representative of the candidates who will take the test for some defined use; (b) in selection procedure administration, the uniform administration and scoring of a selection procedure in a manner that is the same for all candidates.
|
|
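Sense (a) can be illustrated by expressing a raw score relative to norms built from a representative sample, e.g., as a z-score (hypothetical norm sample and function name):

```python
def z_score(raw, norm_sample):
    """Express a raw score relative to norms from a representative sample."""
    n = len(norm_sample)
    mean = sum(norm_sample) / n
    sd = (sum((x - mean) ** 2 for x in norm_sample) / n) ** 0.5
    return (raw - mean) / sd

norms = [40, 45, 50, 55, 60]  # hypothetical representative norm sample
z = z_score(60, norms)        # how far above the norm mean, in SD units
```

Sense (b) is procedural rather than statistical: identical instructions, timing, and scoring rules for every candidate.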
Synthetic validity evidence
|
Generalized evidence of validity based on previous demonstration of the validity of inferences from scores on the selection procedure or battery with respect to one or more domains of work (job components); also referred to as “job component validity evidence.”
|
|
Systematic error
|
A consistent score component (often observed indirectly), not related to the intended construct of measurement.
|
|
Type I and Type II errors
|
Errors in hypothesis testing; Type I error involves concluding that a significant relationship exists when it does not; Type II error involves concluding that no significant relationship exists when it does.
|
|
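The two error types are easy to confuse, so here is the decision table written out as a small sketch (hypothetical function name): a Type I error is a false alarm, a Type II error is a miss.

```python
def classify_outcome(relationship_exists, concluded_significant):
    """Label a hypothesis-test outcome given the true state of affairs."""
    if concluded_significant and not relationship_exists:
        return "Type I error"   # false alarm
    if not concluded_significant and relationship_exists:
        return "Type II error"  # miss
    return "correct decision"
```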
Validation
|
The process by which evidence of validity is gathered, analyzed, and summarized. (Note: laypersons often misinterpret the term as if it implied giving a stamp of approval; the result of the research might be zero validity.)
|
|
Validity
|
The degree to which accumulated evidence and theory support specific interpretations of scores from a selection procedure entailed by the proposed uses of that selection procedure.
|