45 Cards in this Set

  • Front
  • Back
the degree to which data or results of a study are correct or true
validity
extent to which a question or scale is measuring the concept, attribute, or property it claims to measure
validity
instrument with established validity that can be used as a standard for assessing other instruments
gold standard
the degree to which results of the research study are generalizable
external validity
2 types of external validity
1. ecological validity
2. population validity
in order for an experiment to possess this validity, the methods, materials and settings of the experiment must approximate the real-life situation that is under study
ecological validity
not sufficiently described for others to replicate
explicit description of the experimental treatment
if the researcher fails to adequately describe how he or she conducted a study, it is difficult to determine whether the results are applicable to other settings
explicit description of the experimental treatment
catalyst effect
multiple-treatment interference
if a researcher applies several treatments, it is difficult to determine how well each treatment would work individually; it might be that only the combination of the treatments is effective
multiple-treatment interference
attention causes differences
hawthorne effect
subjects perform differently because they know they are being studied
hawthorne effect
anything different makes a difference
novelty and disruption effect
the treatment might have worked because of the person implementing it; given a different person, the treatment might not work at all
experimenter effect
to everything there is a time
interaction of history and treatment effect
not only should researchers be cautious about generalizing to other populations; caution should also be taken in generalizing to a different time period. as time passes, the conditions under which treatments work change
interaction of history and treatment effect
may only work with M/C tests
measurement of the dependent variable
a treatment may only be evident with certain types of measurements. a teaching method may produce superior results when its effectiveness is tested with an essay test, but show no differences when the effectiveness is measured with a multiple choice test
measurement of the dependent variable
it takes a while for the treatment to kick in
interaction of time of measurement and treatment effect
it may be that the treatment effect does not occur until several weeks after the end of the treatment. in this situation, a posttest at the end of the treatment would show no impact, but a posttest a month later might show an impact
interaction of time of measurement and treatment effect
a form of experimental validity. an experiment is said to possess this validity if it properly demonstrates a causal relation between two variables, rather than a relation produced by extraneous factors
internal validity
threats to internal validity
1. history
2. maturation
3. testing threat
4. instrumentation threat
5. statistical regression
6. selection
7. mortality
threat to internal validity in which an outside event or occurrence might have produced effects on the dependent variable
history
produced by a previous administration of the same test or other measure.
testing threat
produced by changes in the measurement instrument itself.
instrumentation threat
threat to internal validity that can occur when subjects are assigned to conditions on the basis of extreme scores on a test
statistical regression
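The statistical-regression threat rests on regression to the mean: subjects selected for extreme scores tend to score closer to the average on retest even with no treatment at all. A short simulation sketch (the population size, means, and noise levels here are all made up for illustration):

```python
import random

random.seed(42)
POP_MEAN = 100  # population mean of true scores (illustrative)

# each observed score = stable true score + random measurement error
true_scores = [random.gauss(POP_MEAN, 15) for _ in range(1000)]
pretest = [t + random.gauss(0, 10) for t in true_scores]
retest = [t + random.gauss(0, 10) for t in true_scores]

# select the 100 highest pretest scorers, as a researcher might when
# assigning "extreme" subjects to a condition
top = sorted(range(1000), key=lambda i: pretest[i], reverse=True)[:100]
pre_mean = sum(pretest[i] for i in top) / 100
re_mean = sum(retest[i] for i in top) / 100

# with no treatment applied, the extreme group still drifts back
# toward the population mean on retest
print(f"pretest mean of top group: {pre_mean:.1f}")
print(f"retest  mean of top group: {re_mean:.1f}")
```

The drop from pretest to retest is pure measurement artifact, which is why it can masquerade as a treatment effect when groups are formed from extreme scores.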
threat to internal validity that can occur when nonrandom procedures are used to assign subjects to conditions or when random assignment fails to balance out differences among subjects across the different conditions of the experiment
selection
threat to internal validity produced by differences in dropout rates across the conditions of the experiment
mortality
types of validity
1. face validity
2. construct validity
3. content validity
4. criterion validity
it pertains to whether the test looks valid to the examinees who take it, the administrative personnel who decide on its use, and other technically untrained observers
face validity
the relationship between an instrument and an established theoretical framework
construct validity
the extent to which the measures are demonstrably related to concrete criteria in the real world
criterion validity
types of criterion validity
1. predictive validity
2. concurrent validity
3. discriminant validity
the ability of an instrument to predict the occurrence of a future behavior or event
predictive validity
the degree to which the measurement being validated agrees with an established measurement standard administered at approximately the same time
concurrent validity
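Both forms of criterion validity are usually reported as a correlation coefficient between the instrument and the criterion measure. A minimal sketch using invented admissions-test scores and later GPA as the predictor and criterion:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# admissions-test scores vs. later first-year GPA (invented data)
test_scores = [520, 600, 680, 710, 450, 630]
gpa = [2.8, 3.1, 3.6, 3.5, 2.5, 3.2]
r = pearson_r(test_scores, gpa)
print(f"predictive validity coefficient: r = {r:.2f}")
```

For predictive validity the criterion is collected later (as above); for concurrent validity the established standard is administered at approximately the same time, but the computation is the same.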
describes the degree to which the operationalization is not similar to (diverges from) other operationalizations that it theoretically should not be similar to
discriminant validity
the degree of consistency that a measuring method or device produces.
reliability
forms of reliability
intrarater reliability
interrater reliability
the consistency of repeated measurements of the same observations by the same rater
intrarater reliability
the consistency of repeated measurements of the same observation by different raters
interrater reliability
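Interrater reliability for categorical judgments is commonly summarized with Cohen's kappa, which discounts the agreement two raters would reach by chance. A small sketch with made-up ratings:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Cohen's kappa for two raters assigning categorical labels."""
    n = len(rater_a)
    # observed agreement: proportion of items both raters labeled the same
    p_observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    # chance agreement: product of the raters' marginal label proportions
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_chance = sum(counts_a[c] * counts_b[c] for c in counts_a) / (n * n)
    return (p_observed - p_chance) / (1 - p_chance)

# two raters classify 10 observations as 1 (present) or 0 (absent)
a = [1, 1, 1, 0, 0, 1, 0, 1, 1, 0]
b = [1, 1, 0, 0, 0, 1, 0, 1, 0, 0]
print(round(cohens_kappa(a, b), 3))  # 0.615
```

Raw agreement here is 80%, but kappa falls to about 0.62 once chance agreement is removed; the same statistic computed on one rater's repeated ratings of the same observations would quantify intrarater reliability.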