10 Cards in this Set

  • Front
  • Back
Validity: Whether a test measures what it's supposed to measure

Content Validity
*Evidence that test items represent the proper domain.
*Is the content of the test valid for the kind of test that it is?
*Most basic form
*Information from books, teachers, experts, curriculum guides
*Survey Domain, Content Matches Domain, Test Items Reflect Content, Adjusted for Relative Importance
Validity: Whether a test measures what it's supposed to measure

Criterion Validity:
*Compares the test to an external source

*What is the relationship between the test and a criterion (an outside standard)?
Criterion Validity: 2 Kinds

1) Concurrent Validity

2) Predictive Validity
*Concurrent: the test is shown to be related to an external criterion that can be measured at around the same time the test is given (another obtainable benchmark)

*Predictive: relates the test to a criterion in the future (GREs to GPA in graduate school)
Validity: Whether a test measures what it's supposed to measure

Construct Validity
*Showing that a construct (a theoretical trait or concept) is actually being measured by the test

*There are different ways to gather this evidence
Construct Validity: 4 kinds

1) Experimental Design Validity
2) Factor Analysis
3) Convergent Validity
4) Discriminant Validity
*Experimental design: using experimentation to show that the test measures the concept

*Factor analysis: demonstrates statistical relationships among subscales/items on the test

*Convergent: relationship between the test and other, similar tests

*Discriminant: little or no relationship between the test and measures of constructs that are not theoretically related to it
Reliability: Whether the score an individual received on a test is an accurate measure of his/her true score

Test-Retest Reliability
*Give the test twice to the same group of people

*Two administrations should be strongly correlated

*More effective in areas that do not change much over time
Reliability: Whether the score an individual received on a test is an accurate measure of his/her true score

Alternate/Parallel/Equivalent Reliability
*Make two or more forms of the test

*The tests are made to mimic each other while being different enough from one another

*Resolves some problems associated with test-retest
Reliability: Whether the score an individual received on a test is an accurate measure of his/her true score

Internal Consistency Reliability
*How scores relate to each other or to the test as a whole

*Examining within the test itself, not things going on outside of the test
Internal Consistency Reliability: 2 kinds

1) Split-Half

2) Cronbach's Coefficient Alpha/Kuder-Richardson
*Split-half: split the test in half and correlate the two halves

*Alpha/Kuder-Richardson: methods that attempt to estimate the reliability of all possible split-half combinations
What are the steps involved in choosing a good test?
1) Determine the goals of the client
2) Choose instruments to reach client goals
3) Access information about possible instruments
4) Examine test worthiness of possible instruments
5) Choose an instrument wisely