27 Cards in this Set

  • Front
  • Back
In regard to Precision Vs. Accuracy, Reliability is ____ while Validity is _______
Precision = Reliability
Accuracy = Validity
Definition of Reliability
Measure repeated outcomes with consistent scores; the extent to which a measurement is consistent and free from error
What is Test-Retest Reliability
A basic premise of reliability is the stability of the measuring instrument; repeated tests will obtain the same results
What is intrarater reliability
Refers to the stability of data recorded by one individual
What is interrater reliability
concerns variation between two or more raters who measure the same group of subjects
What is the primary criterion for choosing an appropriate time interval in test-retest reliability
the stability of the response variable and the test's intended purpose.
Time intervals in test-retest reliability should be far enough apart to avoid
fatigue, learning, or memory effects, while being close enough together to avoid genuine changes in the measured variable
reliability can be influenced by the ____ of the first test on the _____ of the second test
effect, outcome
An example is that a test of dexterity may improve because of motor learning.
Rater Bias is
A concern when the rater has a vested interest in the outcome, or when one rater takes two measurements, because raters can be influenced by their memory of the first score
what is systematic error
predictable errors of measurement, consistently overestimating or underestimating the true score, constant and biased
If a systematic error is detected, you can correct it by
recalibrating the instrument, or adjusting for it by adding or subtracting the appropriate constant
What is Random Error
Errors of measurement that are due to chance and can affect the score in an unpredictable way from trial to trial
An example of Random error is
Measuring a patient's height while the patient moves during each measurement, so the recorded heights will be inconsistent
What is regression toward the mean
single test's potentially extreme score; multiple tests reveal score closer to group average
what is the classical reliability theory
A single score made up of the true score plus random error gives the best estimate of the actual value
what is the generalizability theory
A single score made up of the true score and various types of error. The specific measurement scenario must be considered when evaluating the reliability of a measure, including test-retest error, rater error, random error, and other measurable errors
What is Reliability Coefficient
An estimate of the extent to which a test score is free from error. It is expressed as the ratio: true score variance / (true score variance + error variance). The worst value is 0; the best is 1 (perfect reliability)
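The variance ratio on this card can be sketched numerically (the variances below are illustrative, not from the deck):

```python
# Reliability coefficient = true score variance / (true score variance + error variance)
true_score_variance = 9.0  # illustrative value
error_variance = 1.0       # illustrative value

reliability = true_score_variance / (true_score_variance + error_variance)
print(reliability)  # 0.9 -- close to the best possible value of 1
```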
What is correlation
Correlation reflects the degree of association between two sets of data, or the consistency of position within two distributions. (relationship)
what is agreement
Recording the same actual values. You can have perfect correlation, but if there is no agreement, the measure can still show poor reliability
For statistical tests of reliability, the ICC (intraclass correlation coefficient) has become the preferred index because it reflects both ____ and ____
correlation and agreement
Kappa Statistic and Percent Agreement are used with
nominal data / categorical data
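Percent agreement for nominal data is straightforward arithmetic; here is a minimal sketch with made-up ratings (the kappa statistic additionally corrects for chance agreement, which this sketch does not):

```python
# Two raters classify the same 5 subjects (hypothetical nominal data).
rater_a = ["yes", "no", "yes", "yes", "no"]
rater_b = ["yes", "no", "no", "yes", "no"]

matches = sum(a == b for a, b in zip(rater_a, rater_b))
percent_agreement = matches / len(rater_a)
print(percent_agreement)  # 0.8 -- the raters agree on 4 of 5 subjects
```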
Pearson's and Spearman's correlation coefficients are used with
interval-ratio data and ordinal data in the continuous data category
Standard error of measurement is
How confident you are that the measure falls within a certain range. SEM = SD × √(1 − reliability coefficient)
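The SEM formula on this card, with illustrative numbers (the SD of 10 and reliability coefficient of 0.84 are assumptions for the example, not values from the deck):

```python
import math

sd = 10.0           # standard deviation of the scores (illustrative)
reliability = 0.84  # reliability coefficient (illustrative)

# SEM = SD * sqrt(1 - reliability coefficient)
sem = sd * math.sqrt(1 - reliability)
print(round(sem, 2))  # 4.0
```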
The greater the reliability, the _____ the SEM (standard error of measurement)
smaller
Variation around a measure example: while measuring grip strength with a dynamometer, if the 95% variation around the measure were ±3 ft.lbs and the subject measured 25 ft.lbs of grip strength, you would be ____ confident the true grip strength lies between ___ and ___ ft.lbs
95%, 22 and 28 ft.lbs
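The grip-strength card's arithmetic as a quick check:

```python
measured = 25.0   # measured grip strength, ft.lbs (from the card)
half_width = 3.0  # 95% variation around the measure, ft.lbs (from the card)

low, high = measured - half_width, measured + half_width
print(low, high)  # 22.0 28.0 -- 95% confident the true value lies in this range
```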
minimal detectable change/difference (MDC) is
used to define the amount of change in a variable that must be achieved to reflect a true difference
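The deck does not give an MDC formula; a commonly used one (an assumption here, not taken from the cards) is MDC95 = 1.96 × √2 × SEM:

```python
import math

sem = 4.0  # standard error of measurement (illustrative value)

# MDC95 = 1.96 * sqrt(2) * SEM -- common formula, assumed rather than
# stated in this card set.
mdc95 = 1.96 * math.sqrt(2) * sem
print(round(mdc95, 2))  # 11.09
```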
Minimal detectable change is not a clinical measure of meaningful improvement; it is only a _____ measure
statistical