training evaluation

process to assess the value of training programs to employees and the organization

formative evaluations vs. summative evaluations

Formative - provides data about various aspects of a training program

Summative - provides data about the worthiness or effectiveness of a training program

descriptive evaluations vs. causal evaluations

descriptive - provides information that describes trainees once they complete the training program

causal - provides information to determine whether the training caused the post-training behaviours

Models of Training Evaluation

1) Kirkpatrick's Hierarchical Model

2) COMA model

3) Decision-Based Evaluation

Kirkpatrick's model

Training is effective when:

Level 1) trainees report positive reactions

Level 2) trainees learn the material

Level 3) trainees apply what they learned on the job

Level 4) training has a positive effect on organizational outcomes

COMA model

suggests that the variables measured fall into four categories:

Cognitive variables - declarative/procedural learning

Organizational environment - support, culture

Motivation - desire to learn / to transfer learning

Attitudes - self-efficacy, perceptions of control

questionnaires are given to assess the above components

COMA improves on Kirkpatrick's model in 4 ways

1) defines new variables with greater precision

2) includes a greater number of measures

3) uses measures shown to be causally related to improved transfer of training

4) is especially useful for formative evaluations

Decision-based evaluation

3 elements to custom-fit the evaluation to the requirements of the situation:

1) target of the evaluation (what we wish to find out)

2) focus (what variables are to be measured)

3) methods that would be most appropriate to conduct the evaluation

DBE's 3 potential targets

1) trainee change

2) organizational payoff

3) program improvement

Advantages of DBE

- does not prescribe a single best way to evaluate

- does not compel the use of a single set of variables for all situations

- allows different variables to be measured depending on the goals

- more flexible; can be used for both formative and summative evaluations

Reaction measures

- Affective reactions - assess what trainees liked and disliked about a program

- Utility reactions - the perceived usefulness of the program (more important for transfer)

- questionnaires are more efficient, cost-effective, and consistent, and can be used repeatedly

Measuring learning

Declarative - the acquisition of facts; the most commonly assessed, usually with multiple-choice and true/false tests

Procedural - the organization of facts (more strongly related to transfer); assessed by re-organizing steps into the correct order, or with case scenarios whose carefully created answer options reveal comprehension levels (a scoring sketch follows below)

Post-training tests can also increase motivation to learn, provide a legal defense to prove competency, and indicate which course material is insufficient
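
A minimal sketch of how these two kinds of tests might be scored (the helper names and question data are hypothetical, not from any specific testing tool):

    # Hypothetical scoring helpers for the two learning measures above.

    def score_declarative(responses, answer_key):
        # Proportion of multiple-choice / true-false items answered correctly.
        correct = sum(1 for r, k in zip(responses, answer_key) if r == k)
        return correct / len(answer_key)

    def score_procedural(submitted_order, correct_order):
        # Proportion of steps placed in their correct position for a
        # "re-organize the steps" procedural item.
        matches = sum(1 for s, c in zip(submitted_order, correct_order) if s == c)
        return matches / len(correct_order)

    # Example: a 4-item declarative test and a 5-step procedural item.
    print(score_declarative(["a", "c", "true", "b"], ["a", "d", "true", "b"]))  # 0.75
    print(score_procedural([1, 2, 4, 3, 5], [1, 2, 3, 4, 5]))                   # 0.6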

Methods to measure behaviour

1) self-reports - trainees indicate how often they used the new knowledge; easiest to use, but accuracy is questionable

2) observation - others observe whether the trainee uses the new knowledge; accuracy is questionable when supervisors/peers are not in a direct line of observation

3) production indicators - output assessed through records, e.g. sales or absenteeism; can be highly precise on specific behaviours

Other factors to measure

- Motivation - questions based on valence, instrumentality, and expectancy (see the sketch after this list)

- Self-efficacy - "how confident are you..."

- Perceived/anticipated support - "I know I can obtain help from my supervisor" - agree or not?

- Organizational perceptions - e.g. "do you agree that supervisors set goals for trainees?"

- Organizational results - how has the organization benefited? hard to measure
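
As a worked example of the motivation item above, expectancy theory combines the three ratings multiplicatively. A minimal sketch, assuming 1-to-5 questionnaire ratings (the function name and scale are assumptions, not from the source):

    def motivational_force(valence, instrumentality, expectancy):
        # valence:         how much the trainee values the outcome (1-5)
        # instrumentality: belief that performance leads to the outcome (1-5)
        # expectancy:      belief that effort leads to performance (1-5)
        return valence * instrumentality * expectancy

    # A trainee who values the outcome (5) but doubts it will follow from
    # performance (2) scores lower than a uniformly moderate trainee.
    print(motivational_force(5, 2, 4))  # 40
    print(motivational_force(4, 4, 4))  # 64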

hard vs. soft data

hard - results assessed objectively

soft - results assessed through perceptions and judgements

Data collecting designs

non-experimental - no comparison is made to another group

experimental - the trained group is compared to a group that received no training; assignment to groups is random

quasi-experimental - like the above, but group assignment is not random

the last two provide stronger evidence of causality, demonstrating that training changed trainee behaviour (a sketch of the comparison follows below)
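
A minimal sketch of the experimental comparison, assuming post-training performance scores for a randomly assigned trained group and an untrained control group (the scores are made up for illustration):

    from scipy import stats

    # Hypothetical post-training performance scores for each group.
    trained = [78, 85, 82, 90, 74, 88, 81, 86]
    control = [70, 75, 72, 80, 68, 77, 73, 74]

    # Independent-samples t-test: is the difference in group means
    # larger than chance would explain?
    t_stat, p_value = stats.ttest_ind(trained, control)
    print(f"t = {t_stat:.2f}, p = {p_value:.4f}")

    # Given random assignment, a small p-value supports the claim that
    # the training, not pre-existing differences, caused the change.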