66 Cards in this Set

  • Front
  • Back
nominal
lowest level of measurement.

may be thought of as "naming" level

respond with words, not numbers
ordinal
measurement that places participants in order from high to low
interval
equal intervals without an absolute zero
ratio
equal intervals with an absolute zero
reliable
a test is said to be _________ if it yields consistent results
validity
the extent to which a test accurately measures what it is intended to measure
1. A test with high reliability may have low validity

2. Validity is more important than reliability

3. To be useful, an instrument must be both reasonably valid and reliable
3 principles of reliability and validity
interobserver reliability
the degree to which two or more researchers agree in the measurements they take
correlation coefficient
checks the degree of relationship between two quantitative results.

1.00 = perfect reliability
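A minimal sketch of how such a reliability correlation might be computed, assuming two sets of scores from the same examinees (all numbers below are invented for illustration):

```python
# Hypothetical example: estimating reliability with a Pearson correlation.
from statistics import correlation  # available in Python 3.10+

time1 = [12, 15, 20, 22, 30, 31, 35, 40]  # scores at the first administration
time2 = [13, 14, 21, 20, 29, 33, 34, 41]  # same examinees, second administration

r = correlation(time1, time2)  # Pearson's r between the two score sets
print(f"reliability estimate: r = {r:.2f}")  # 1.00 would be perfect reliability
```

The same correlation underlies the test-retest and parallel-forms estimates in the cards that follow.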
test-retest reliability
obtain measurements at two different points in time
parallel forms of reliability
some published tests come in two parallel forms that are designed to be interchangeable with each other; they have different items that cover the same content
split-half reliability
checks on the consistency of scores within the test itself
test-retest reliability
parallel forms of reliability
which measures of reliability assess the consistency of scores over time?
split-half
alpha
which measures of reliability assess the consistency of items within a test at a single point in time?
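A minimal sketch of the split-half idea, assuming item-level scores split into odd and even halves and then stepped up with the Spearman-Brown formula (the item matrix is invented):

```python
# Hypothetical example: split-half reliability with a Spearman-Brown correction.
from statistics import correlation  # available in Python 3.10+

# rows = examinees, columns = six items scored 0/1 (made-up data)
items = [
    [1, 1, 1, 0, 1, 0],
    [1, 0, 1, 1, 1, 1],
    [0, 0, 1, 0, 0, 0],
    [1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0],
]

odd_half = [sum(row[0::2]) for row in items]   # items 1, 3, 5
even_half = [sum(row[1::2]) for row in items]  # items 2, 4, 6

r_half = correlation(odd_half, even_half)      # consistency of the two halves
r_full = (2 * r_half) / (1 + r_half)           # Spearman-Brown step-up to full length
print(f"half-test r = {r_half:.2f}, split-half reliability = {r_full:.2f}")
```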
instrument
any type of measurement device
content validity
researchers make judgements about the appropriateness of the instrument's content

*essential for achievement tests
face validity
judgement is based on whether the instrument APPEARS to measure what it is supposed to measure
content and face validity
what two types of validity rely on judgements?
predictive validity
To what extent does the test predict the outcome it is supposed to predict
construct validity
the type of validity that relies on subjective judgements and empirical data
1. originate the question
2. develop a rationale
3. determine the feasibility of answering the question
3 steps to formulate a question
efficacy research
the process of demonstrating treatment effects under ideal conditions
outcomes measurement
the process of demonstrating treatment effects under average or less-than-average conditions
comparative or standard group
*type of descriptive research

strategy to measure the behavior of two or more types of participants at one point in time in order to draw conclusions about similarities and differences between them
cohort studies
patients who presently have a condition and/or receive treatment are followed over a period of time and compared with another group who are not affected by the condition under investigation
developmental or normative
designed to measure changes over time in a behavior or characteristic, usually with reference to aging and maturation
cross-sectional
select participants from various age groups and observe differences

*norms should be established on randomly selected participants who are representative of the population

*sample size must be appropriate to the population
semilongitudinal
a compromise with the longitudinal design: divide the total age span into overlapping age spans
correlational
-asks two basic questions
1. how closely related are the variables
2. how well can performance on one variable predict performance on another
survey
uses questionnaires, interviews, or combinations of the two
retrospective or post facto
done after the fact; the IV occurred in the past and the investigator starts with the effect
case-control study
a type of design often used in epidemiology
case history
examines one person in depth, often looking at an unusual phenomenon; hard to generalize
frequency
measures how many instances of a behavior occurred and in what time frame
ex) he said "ball" 8 out of 10 times

*good for nominal and ordinal data
*useful and objective
duration
measure the duration of a type of behavior
ex) client was dysfluent for 15 minutes during 30 minutes of speaking time

*good for continuous behaviors
interresponse time
time elapsed between any two discrete events or responses

ex) the target behavior is sitting in a chair: count the time elapsed from when the child last left the seat to when he sits back down again
latency or reaction time
how long it takes for a client to respond to a stimulus
response amplitude
intensity of response
ex) voice intensity, sweaty palms
utility
is the data useful for your purpose?
will you be able to answer your research question with the data you're collecting?
sensitivity of test
rarely fails to identify the disorder or disease

mathematically defined as the proportion of people with the disorder or disease who test positive

*catches nearly all people with the trait, but may include some false positives
specificity of test
seldom identifies a person as having a disease or disorder when they don't

mathematically defined as the proportion of people without the disease or disorder who test negative on the screening test

*might miss more true cases (false negatives)
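A minimal sketch of how the sensitivity and specificity proportions in the two cards above could be computed from screening counts (the counts are invented):

```python
# Hypothetical screening results for one test against a gold standard.
TP = 45   # have the disorder, screened positive
FN = 5    # have the disorder, screened negative (missed cases)
FP = 12   # do not have the disorder, screened positive (false alarms)
TN = 138  # do not have the disorder, screened negative

sensitivity = TP / (TP + FN)  # proportion of affected people the test catches
specificity = TN / (TN + FP)  # proportion of unaffected people the test clears
print(f"sensitivity = {sensitivity:.2f}, specificity = {specificity:.2f}")
```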
floor effect
test so hard, no one can pass it
ceiling effect
test so easy, everyone can pass it
precision
the reliability aspect of a test: will it give you the same scores with each administration of the test?
accuracy
both reliability and validity: the test measures what you want it to measure and the scores are reliable
scaling power
refers to the scale of measurement
test-retest
methods to estimate reliability: stability
parallel forms
methods to estimate reliability: equivalence
split-half
alpha
kuder-richardson #20 formula
methods to estimate reliability: internal consistency
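A minimal sketch of coefficient alpha as an internal-consistency estimate, computed directly from its definitional formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores); the 4-item data matrix is invented:

```python
# Hypothetical example: coefficient (Cronbach's) alpha from item scores.
from statistics import pvariance

# rows = examinees, columns = items on a hypothetical 4-item scale
items = [
    [3, 4, 3, 4],
    [2, 2, 3, 2],
    [4, 5, 4, 5],
    [1, 2, 2, 1],
    [3, 3, 4, 3],
]

k = len(items[0])                                    # number of items
item_vars = [pvariance(col) for col in zip(*items)]  # variance of each item
total_var = pvariance([sum(row) for row in items])   # variance of total scores

alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(f"coefficient alpha = {alpha:.2f}")
```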
standard error of measurement

a small SEM is associated with high levels of reliability
SEM
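A minimal sketch of the usual SEM formula, SEM = SD * sqrt(1 - reliability), which shows why a small SEM goes with high reliability (the SD and reliability values are invented):

```python
# Hypothetical example: standard error of measurement.
import math

sd = 15.0            # standard deviation of the test's scores
reliability = 0.91   # e.g., a test-retest or alpha coefficient

sem = sd * math.sqrt(1 - reliability)
print(f"SEM = {sem:.1f}")  # the higher the reliability, the smaller the SEM
```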
kappa
statistic that takes into account the fact that some agreement may occur by chance, especially for behaviors that occur very frequently or very infrequently
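A minimal sketch of Cohen's kappa for two observers scoring a behavior as present (1) or absent (0), showing how chance agreement is subtracted out (the ratings are invented):

```python
# Hypothetical example: Cohen's kappa for interobserver agreement.
observer_a = [1, 1, 0, 1, 0, 0, 1, 1, 0, 1]
observer_b = [1, 0, 0, 1, 0, 1, 1, 1, 0, 1]

n = len(observer_a)
p_observed = sum(a == b for a, b in zip(observer_a, observer_b)) / n

# chance agreement: probability both mark 1 plus probability both mark 0
p_a1 = sum(observer_a) / n
p_b1 = sum(observer_b) / n
p_chance = p_a1 * p_b1 + (1 - p_a1) * (1 - p_b1)

kappa = (p_observed - p_chance) / (1 - p_chance)
print(f"observed agreement = {p_observed:.2f}, kappa = {kappa:.2f}")
```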
direct replication
the same investigator repeats the same research with the same or similar participants, in the same physical setting, to confirm the reliability of the original results
systematic replication
tries to extend generalization across participants, settings, measurements, or treatments

manipulates 1 or 2 variables
threats to external validity
factors that limit the ability of research results to be generalized
selection bias
generalizing the results to a population when an experiment is conducted on a nonrandom sample
reactive effects of experimental arrangements
if the experimental setting is different from the natural setting in which the population usually operates, the effects that were observed in the experimental setting may not generalize to the natural setting
reactive effect of testing
pretest sensitization
a pretest might affect how a participant will respond to experimental treatment
multi-treatment interference
when a group of participants receives more than one treatment, which could ultimately affect their responses to later treatments
threats to internal validity
aspects other than the treatment itself that could account for the results and interfere with the validity of an experiment
history
other environmental influences on the participants between the pretest and the posttest
maturation
participants got older, wiser, or smarter between the pretest and the posttest
instrumentation
possible changes in the instrument
testing
effects of the pretest on the performance exhibited on the posttest
statistical regression
occurs when participants are selected on the basis of their extreme scores; on retesting, their scores tend to move back toward the mean regardless of treatment
intact groups
researcher uses previously existing groups, so they are not random
selection
two different groups are not initially the same in all important aspects