10 Cards in this Set
- Front
- Back
What is Validity?
|
establishes the accuracy of the assessment; that is, whether the tool measures what it was intended to measure
|
|
What is Face Validity?
|
establishes how well the assessment instrument appears "on the face of it" to meet its stated purpose (e.g., an activity configuration looks like it measures time use)
|
|
What is Content Validity?
|
establishes that the content included in the evaluation is representative of the content that could be measured (e.g., does the content of a role checklist provide an adequate listing of roles?)
|
|
What is Concurrent Validity?
|
A type of criterion validity that compares the results of two instruments given at about the same time
|
|
What is Predictive Validity?
|
A type of criterion validity that measures the degree to which an instrument can predict performance on a future criterion
|
|
How is Criterion Validity reported?
|
As a correlation; the higher the correlation, the better the criterion validity
|
|
What is Reliability?
|
establishes the consistency and stability of the evaluation; if the evaluation is reliable, the measurements/scores are the same from time to time, place to place, and evaluation to evaluation
|
|
What is Inter-rater Reliability (or Inter-observer Reliability)?
|
establishes that different raters using the same assessment tool will achieve the same results
|
|
What is Test-retest reliability?
|
establishes that the same results will be obtained when the evaluation is administered twice by the same administrator
|
|
How is Reliability scored?
|
As either a correlation or a percentage identifying the degree to which the two measurements agree or relate
|