42 Cards in this Set
- Front
- Back
Hawthorne Effect
|
Hawthorne Effect – a positive change in behavior that occurs at the onset of an intervention followed by a gradual decline, often to the original level of behavior
|
|
Taylorism
|
Taylorism – excessive focus on efficiency
|
|
Walter Dill Scott
|
Worked to apply personnel research to the Army during WWI
Improving efficiency with imitation, competition, loyalty, and concentration
Scientist-practitioner
Worked in advertising |
|
Walter Dill Scott – What makes a good ad?
|
Repetition
Intensity
Association
Ingenuity |
|
Frederick W. Taylor
|
Frederick W. Taylor (redesign of work situations; The Principles of Scientific Management)
Science over the rule of thumb
Scientific selection and training
Cooperation over individualism
Equal division of work
Resting improved quality of work (rest = more work = more salary for employee) |
|
Lillian Mohler Gilbreth (effects of stress and fatigue) & Frank Gilbreth (industrial time management)
|
Lillian Mohler Gilbreth (effects of stress and fatigue) & Frank Gilbreth (industrial time management)
Husband and wife team
Invented management techniques that are still used
Concerned with the human aspects of management
Lillian Gilbreth also wrote popular books |
|
Hugo Münsterberg
|
Hugo Münsterberg
Father of I/O psychology
Applied the scientific method to practical problems
Safe Trolley Car research |
|
Control variable
|
An extraneous variable that an investigator does not wish to examine in a study. Thus the investigator controls this variable. Also called a covariate.
|
|
Predictor variable
|
The presumed “cause” in a nonexperimental study. Often used in correlational studies. For example, SAT scores predict first-semester GPA; the SAT score is the predictor variable.
|
|
Correlations
|
Positive: when one variable moves, the other moves in the same direction.
Negative: when one variable moves, the other moves in the opposite direction.
Correlation does not describe cause and effect (see the sketch below). |
|
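
A minimal sketch of the idea in Python using NumPy (the SAT and GPA values are hypothetical illustration data, not from the card set):

import numpy as np

# Hypothetical illustration data: SAT scores and first-semester GPAs
sat = np.array([980, 1050, 1110, 1200, 1340])
gpa = np.array([2.5, 2.8, 3.0, 3.2, 3.7])

r = np.corrcoef(sat, gpa)[0, 1]  # Pearson correlation, between -1 and +1
print(round(r, 2))               # positive r: the two variables move in the same direction
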
Predictive Validity (Criterion Related)
|
Predictive validity: the extent to which test scores obtained at one point in time predict criteria obtained at some later time
Example: SAT scores predicting success in college (GPA) |
|
Concurrent Validity (Criterion Related)
|
Concurrent validity: how well a test predicts a criterion that is measured at the same time the test is administered
|
|
Convergent validity
|
Convergent validity: degree to which measure is related to or predicts measures of other similar constructs
|
|
Divergent/Discriminant validity
|
Discriminant validity: degree to which measure is not related to measures of dissimilar constructs
|
|
Convergent and Divergent similarity?
|
Both types of validity are demonstrated by using predictive and/or concurrent validity designs (see the sketch below)
|
|
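
A minimal sketch of how convergent and discriminant validity are often examined with correlations (all scores below are hypothetical illustration data):

import numpy as np

# Hypothetical scores on a new measure and two comparison measures
new_measure        = np.array([3.1, 4.0, 2.5, 3.8, 4.5, 2.9])
similar_measure    = np.array([3.0, 4.2, 2.7, 3.6, 4.4, 3.1])  # measures a similar construct
dissimilar_measure = np.array([3.5, 3.6, 2.9, 4.2, 3.0, 3.4])  # measures an unrelated construct

convergent = np.corrcoef(new_measure, similar_measure)[0, 1]       # expected to be high
discriminant = np.corrcoef(new_measure, dissimilar_measure)[0, 1]  # expected to be much lower
print(round(convergent, 2), round(discriminant, 2))
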
Control in experimental vs. real-life settings
|
Table 2-1, Slide 22
|
|
Random Assignment
|
Used in experimental methods but not in field or quasi-experiments
|
|
Test-Retest Reliability
|
The stability of a test over time
Expressed as a coefficient of stability
Participants are given a test at Time 1 and then given the exact same test at Time 2
Minimize error so that high scorers at Time 1 are also high scorers at Time 2
Virtually no measure used in I/O has perfect reliability (see the sketch below) |
|
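
A minimal sketch of a test-retest check (coefficient of stability) in Python; the Time 1 and Time 2 scores are hypothetical:

import numpy as np

# Hypothetical scores for the same five people on the exact same test at two points in time
time1 = np.array([12, 18, 25, 30, 22])
time2 = np.array([14, 17, 24, 29, 23])

stability = np.corrcoef(time1, time2)[0, 1]  # coefficient of stability
print(round(stability, 2))                   # close to, but rarely exactly, 1.0
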
Interrater Reliability
|
The consistency with which multiple raters view, and thus rate, the same behavior or person
Relevant in performance appraisal
Measured by examining the correlation between the ratings of two judges (see the sketch below) |
|
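
A minimal sketch of interrater reliability as the correlation between two judges' ratings (the ratings are hypothetical):

import numpy as np

# Hypothetical performance ratings of six employees by two judges
judge1 = np.array([4, 3, 5, 2, 4, 3])
judge2 = np.array([5, 3, 4, 2, 4, 2])

interrater = np.corrcoef(judge1, judge2)[0, 1]
print(round(interrater, 2))  # high values mean the judges rate the same people similarly
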
Informed consent
|
Informed consent
Right to know the purpose and potential risks
Right to decline or withdraw |
|
Internal Validity
|
Internal Validity – the extent to which results are attributed to variables being studied
|
|
External Validity
|
External Validity – the extent to which findings generalize beyond the study
|
|
Internal Consistency Reliability
|
Indication of the relatedness of the test items
Tells us how well our test items are hanging together
Split-half reliability: split the test in half to see if one half is equivalent to the other
Inter-item reliability (Cronbach’s coefficient alpha): examining the correlations among all test items to determine consistency
Rule of thumb for an acceptable reliability level: .70 (see the sketch below) |
|
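
A minimal sketch of Cronbach's coefficient alpha, computed with the standard formula alpha = k/(k-1) * (1 - sum of item variances / variance of total scores); the item scores are hypothetical:

import numpy as np

# Hypothetical item scores: rows = respondents, columns = test items
items = np.array([
    [4, 5, 4, 5],
    [2, 3, 2, 2],
    [5, 4, 5, 5],
    [3, 3, 4, 3],
    [1, 2, 2, 1],
])

k = items.shape[1]                              # number of items
item_variances = items.var(axis=0, ddof=1)      # variance of each item
total_variance = items.sum(axis=1).var(ddof=1)  # variance of respondents' total scores
alpha = (k / (k - 1)) * (1 - item_variances.sum() / total_variance)
print(round(alpha, 2))  # rule of thumb: .70 or higher is usually considered acceptable
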
3.1 Job Analysis
|
The process of defining a job in terms of its component tasks or duties and the knowledge or skills required to perform them
The basis for the solution to any human resource problem
Many different techniques |
|
3.2 SMEs
|
Subject Matter Experts
|
|
3.3 Task Statements
|
Develop a series of task statements
Concise expressions of tasks performed on a regular basis
Neither too specific nor too vague
The average job requires 300-500 statements
SMEs rate each statement for how important the activity is and how often they engage in it (see the sketch below) |
|
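
One way the SME rating step can be summarized, sketched in Python (the task statements, the 1-5 scale, and the idea of simply averaging the ratings are illustrative assumptions, not a prescribed method):

# Hypothetical SME ratings (1-5) of two task statements on importance and frequency
ratings = {
    "Responds to customer billing inquiries": {"importance": [5, 4, 5], "frequency": [4, 4, 5]},
    "Files quarterly compliance reports":     {"importance": [4, 5, 4], "frequency": [2, 1, 2]},
}

for task, r in ratings.items():
    mean_importance = sum(r["importance"]) / len(r["importance"])
    mean_frequency = sum(r["frequency"]) / len(r["frequency"])
    print(task, round(mean_importance, 1), round(mean_frequency, 1))
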
3.4 KSAs
|
Job Element Method (JEM) – job analysis designed to identify important or frequently utilized human attributes as a means of understanding the work performed
Human attributes are categorized into 4 categories:
Knowledge – types of information needed (e.g., education, training)
Skills – proficiencies needed that can be practiced (e.g., driving)
Abilities – relatively enduring innate proficiencies (e.g., cognitive ability, spatial ability)
Other – personality or capacities (e.g., remaining calm in dangerous situations) |
|
3.5 Designing Selection, Training, and Performance Evaluations
|
Designing a selection process based on the results of your job analysis
Provides information as to the content of training needed
Provides a basis for performance evaluation
Analysis reveals what is necessary for good performance |
|
3.6 Position Analysis Questionnaire (PAQ)
|
Position Analysis Questionnaire (PAQ)
A method of job analysis that assesses the content of jobs on the basis of approximately 200 items in the questionnaire
Six major categories:
Information input
Mental processes
Relationships with others
Job context
Work output
Other |
|
3.7 Competency Modeling
|
Competency modeling – a process for determining the human characteristics (or competencies) needed to perform a job successfully.
Relationship to job analysis:
A core competency is a critically important KSAO
A “model” is the array of competencies the organization desires
Competency modeling is very broad compared to job analysis |
|
3.8 What is Job analysis used for?
|
To learn what KSAOs are needed for the job
Offers rationale for personnel selection tests
Organizes positions into job families to find compensation levels (job evaluation)
Provides information as to content of training needed
Provides a basis for performance evaluation
Analysis reveals what is necessary for good performance
Useful in career counseling |
|
4.1 Job Criteria
|
Criteria – evaluative standards that can be used as yardsticks for measuring an employee’s success or failure
Disagreements occur in:
Choices over the proper criteria to use
Different judgments
Criterion problem – there is no perfect measure of performance |
|
4.2 Ultimate Criterion
|
Ultimate Criterion – the theoretical construct encompassing all performance aspects defining success on the job (Theoretical)
|
|
4.3 Actual Criterion
|
Actual Criterion – the operational or actual standard that researchers measure or assess (Measures)
|
|
4.4 Composite Criterion
|
Composite criterion – a weighted combination of multiple criteria (lecture example: a professor who does lots of research)
Equal weighting
Unequal weighting (see the sketch below) |
|
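
A minimal sketch of equal vs. unequal weighting of multiple criteria (the standardized criterion scores and the weights are hypothetical):

import numpy as np

# Hypothetical standardized scores on three criteria for one employee
criteria = np.array([0.8, -0.2, 1.1])  # e.g., work quality, attendance, sales

equal_weights = np.array([1/3, 1/3, 1/3])
unequal_weights = np.array([0.5, 0.2, 0.3])  # some criteria count more than others

print(round(float(criteria @ equal_weights), 2))    # equally weighted composite
print(round(float(criteria @ unequal_weights), 2))  # unequally weighted composite
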
4.5 Sensitivity
|
Sensitivity – must discriminate between effective and ineffective performance
|
|
4.6 Criterion Deficiency
|
Criterion Deficiency
Degree to which the actual criterion fails to overlap with the ultimate criterion
Can be reduced but not eliminated |
|
4.7 Criterion Relevance
|
Criterion Relevance
Degree to which actual criteria and ultimate criteria coincide |
|
4.8 Criterion Contamination
|
Criterion Contamination
The extent to which the actual criterion measures something unrelated to the ultimate criterion
Bias – the extent to which the actual criterion consistently measures something else
Error – the extent to which the actual criterion is related to nothing at all |
|
Dynamic Criteria
|
A measure of change
|
|
Contextual Performance
|
Contextual performance, which is defined as activities that contribute to the social and psychological core of the organization, is beginning to be viewed as equally important to task performance
|
|
Meta-analysis
|
Meta-analysis is the use of statistical methods to combine the results of individual studies (see the sketch below).
|
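
A minimal sketch of the simplest version of the idea, a sample-size-weighted mean correlation across studies (the correlations and sample sizes are hypothetical):

import numpy as np

# Hypothetical validity correlations and sample sizes from five individual studies
r = np.array([0.25, 0.31, 0.18, 0.40, 0.22])
n = np.array([120, 85, 200, 60, 150])

mean_r = np.average(r, weights=n)  # larger-sample studies count more
print(round(float(mean_r), 2))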