24 Cards in this Set

Test-Retest

Most straightforward way to determine reliability



Must have recovery time between measurements

Interclass Correlations:

A statistical technique used to compute the reliability coefficient to assess the relationship between measures of the same class as in a test–retest study.

Split-half reliability

Compare one half of a test with the other half


Spearman-Brown Prophecy Formula

Internal consistency reliability

Average all possible split-half estimates (Cronbach’s alpha)

To increase reliability

Repeat a measurement several times.


To improve both validity and reliability


To discover and minimize errors


To average out the errors

Some Common Ideas

It is possible to have high reliability or objectivity without high validity.


Good reliability or objectivity will always be present with a valid measurement.


Good reliability and objectivity do not establish good validity; they simply suggest that a measurement may be valid.

Pilot test:

The first test of a newly developed measure


Purpose:


Identify problems/errors in test procedure.


Determine if it’s possible to “cheat” the test or to make it easier and faster.


Review the instructions to make sure they’re complete, clear, and understandable.


Why you need clear directions when you devise a test

Articulate an operational definition that describes what is being measured.


Define what to measure.


Research relevant information.


Establish testing procedures.


Determine scoring for the test.


Pilot-test the test, the scoring system, and instructions.


Evaluate, modify, and retest the pilot test.


Develop norms.

Norm table

Provides criteria for determining a good score, an average score, and a poor score.


Ask questions about the norms.


Remember that any change to the test will also change the norms.

Construct validity:

Look at the underlying qualities that account for good performance on a test.

Criterion validity:

If possible, compare the measurement with a criterion measure.

Content validity:

Determine if the measurement logically appears to measure what it claims to measure; make sure definitions are precise.

Steps for developing an employee evaluation:

Define what is expected in a particular employment position.


Conduct research to find a measurement system that will work for the situation.


Modify an existing test, or devise one.


Develop an evaluation procedure that is then piloted and modified.


Evaluate the validity, reliability, and objectivity of the final procedure.

Types of Numbers

Nominal numbers


Ordinal numbers


Interval (scalar) numbers


Ratio numbers

Significant digits:

The number of digits in a measurement or calculation that have meaning.


75.45 has 4 significant digits
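
Rounding to a set number of significant digits can be sketched with a small helper (`round_sig` is a hypothetical name, not a built-in):

```python
from math import floor, log10

def round_sig(x: float, sig: int) -> float:
    """Round x to `sig` significant digits (hypothetical helper)."""
    if x == 0:
        return 0.0
    # Shift the rounding position by the magnitude of the number.
    return round(x, sig - 1 - floor(log10(abs(x))))

print(round_sig(75.4567, 4))   # 75.46 keeps 4 significant digits
print(round_sig(0.004567, 2))  # 0.0046 keeps 2 significant digits
```

Leading zeros in a decimal like 0.0046 are placeholders, not significant digits, which is why the helper shifts the rounding position by the number's magnitude.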

Z score:

A standard score that allows us to compare any score to the mean score and then express it as a fraction of the standard deviation.


Whether a low or high z score is better depends on the test (for a timed event, a lower raw score, and thus a lower z, is better).
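
The z score formula, z = (x − mean) / SD, can be sketched with hypothetical raw scores:

```python
from statistics import mean, stdev

# Hypothetical example: convert raw test scores to z scores.
scores = [60, 72, 68, 80, 75, 65]
m, s = mean(scores), stdev(scores)

# Each z score expresses the distance from the mean
# as a fraction of the standard deviation.
z_scores = [(x - m) / s for x in scores]
print([round(z, 2) for z in z_scores])
```

A z of +1.0 means the raw score sits exactly one standard deviation above the mean, so scores from different tests can be compared on a common scale.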

Correlations:

Measures the degree of relationship (strength and direction) between two variables.


Does not explain why variables move as they do; only that they do.

Correlation Coefficient

Ranges between –1 and +1


0 indicates no relationship


+1 indicates a perfect positive relationship


–1 indicates a perfect negative relationship


Positive/negative sign indicates the direction (not the strength) of the relationship.

Negative correlation

When an increase in one variable goes along with a decrease in another variable, there is a negative relationship.

Multiple Correlation

A measurement that uses several independent measures to predict the success of an outcome.


For example, look at variables of size, speed, strength, years of experience, age, etc., when predicting an athlete’s playing success.

Type I error:

The risk of assuming a difference exists when none really does.

Type II error:

A failure to find a difference between means that really does exist.

Statistical Power:

The ability to detect a real difference between means.


Dependent on four factors:


Size of the sample


Size of the effect


Alpha (significance) level


Standard deviation; variability of the groups
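
How those four factors drive power can be sketched with a Monte Carlo simulation. Everything here is a hypothetical illustration: it uses a simple two-sample z test with known SD (an assumption for simplicity, not a named procedure from the cards) and counts how often a true difference is detected:

```python
import random
from statistics import mean

def estimated_power(n, effect, sd=1.0, z_crit=1.96, trials=2000):
    """Monte Carlo sketch (hypothetical helper): fraction of simulated
    experiments whose two-sample z statistic exceeds the critical value."""
    rng = random.Random(42)  # fixed seed for a repeatable estimate
    hits = 0
    for _ in range(trials):
        a = [rng.gauss(0.0, sd) for _ in range(n)]       # control group
        b = [rng.gauss(effect, sd) for _ in range(n)]    # shifted group
        se = (sd**2 / n + sd**2 / n) ** 0.5              # standard error
        z = (mean(b) - mean(a)) / se
        hits += abs(z) > z_crit
    return hits / trials

# Larger samples and larger effects both raise the detection rate.
print(estimated_power(10, 0.5), estimated_power(50, 0.5))
```

Raising the sample size, raising the effect size, loosening the alpha cutoff, or shrinking the group variability each pushes the detection rate up, matching the four factors listed above.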

ANOVA

Analysis of variance (three or more group means)
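
A one-way ANOVA F statistic can be sketched by hand from hypothetical group scores: partition the total variability into between-groups and within-groups sums of squares, then form the ratio of their mean squares:

```python
from statistics import mean

# Hypothetical example: compare three training groups' scores.
groups = [
    [10, 12, 11, 13],   # group A
    [14, 15, 13, 16],   # group B
    [18, 17, 19, 20],   # group C
]

grand = mean(x for g in groups for x in g)
k = len(groups)                      # number of groups
n_total = sum(len(g) for g in groups)

# Between-groups SS: how far each group mean sits from the grand mean.
ss_between = sum(len(g) * (mean(g) - grand) ** 2 for g in groups)
# Within-groups SS: spread of scores around their own group mean.
ss_within = sum((x - mean(g)) ** 2 for g in groups for x in g)

# F = mean square between / mean square within.
f_stat = (ss_between / (k - 1)) / (ss_within / (n_total - k))
print(round(f_stat, 2))
```

A large F means the group means differ by more than the within-group scatter would explain, which is what a significant ANOVA result reports.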