46 Cards in this Set

  • Front
  • Back
factorial design
an experiment with more than one IV; multiply the number of levels of each variable to get the number of possible conditions.
main effects
the average effect of one variable across the levels of the other variables; treat the other variable as if it were constant and average over it. The effect of just one IV: just A.
interaction
combining A and B; the effect of one IV changes across the levels of another IV.
two way interaction
the effect of one IV changes across the levels of another IV.
three way interaction
the pattern of a two-way interaction changes across the levels of a third IV.
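The arithmetic behind main effects and interactions can be sketched with made-up cell means for a hypothetical 2x2 design (Python, for illustration only):

```python
# Hypothetical cell means for a 2x2 factorial design:
# IV A has levels A1/A2 (rows), IV B has levels B1/B2 (columns).
cell_means = {
    ("A1", "B1"): 10, ("A1", "B2"): 20,
    ("A2", "B1"): 30, ("A2", "B2"): 60,
}

# Main effect of A: average across the levels of B, then compare A1 vs A2.
mean_A1 = (cell_means[("A1", "B1")] + cell_means[("A1", "B2")]) / 2  # 15.0
mean_A2 = (cell_means[("A2", "B1")] + cell_means[("A2", "B2")]) / 2  # 45.0
main_effect_A = mean_A2 - mean_A1  # 30.0

# Two-way interaction: does the effect of B change across the levels of A?
effect_B_at_A1 = cell_means[("A1", "B2")] - cell_means[("A1", "B1")]  # 10
effect_B_at_A2 = cell_means[("A2", "B2")] - cell_means[("A2", "B1")]  # 30
interaction = effect_B_at_A2 - effect_B_at_A1  # nonzero -> interaction

print(main_effect_A, interaction)  # 30.0 20
```

If the effect of B were the same at every level of A, the interaction term would be zero.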
within-subjects design (repeated measures)
each person participates in all levels of the IV/all conditions; the IV is manipulated within a group of individuals, so each person goes through every condition.
between-subject design
each person participates in only one condition; protects against order and carryover effects.

(The within-subjects design, by contrast:)

adv: cost efficient, with control over participant differences; each subject is his or her own control, so there are no differences in subjects across conditions.

dis: participants have to do the different conditions in a specific order; participants can develop different attitudes as the experiment continues; subjects can experience carryover or order effects.
carry over effect
disadvantage
a subject's result in one condition is affected by the preceding condition.
order effect
disadvantage
a condition's position in the sequence affects performance; controlled by having participants go through the conditions in different orders (counterbalancing).
quasi-experimental method
missing something that would make it a true experiment: no comparison/control group, or no randomization.
observation-treatment-observation
pretest-posttest; one observation before the treatment and one after; used when you have a small group of subjects to whom you want to apply a new treatment but have no control group and no baseline, so you can't tell whether the change was caused by the treatment.
interrupted-time-series design
many observations before the treatment; a time series of measurements/observations is interrupted by the treatment; no control group, but you can see changes in the DV over time.
non-equivalent control group design
you have a treatment group and want a control group, so you put together a non-randomized comparison group.
matched-groups design
creates a non-equivalent control group that is matched to the treatment group on similar variables (variables you can measure and control for).
natural manipulations
look at measures without manipulating anything; e.g., differences in sex, gender, handedness, pathology...
handedness
left-handed people die younger; there are more right-handed people than left-handed people.
cross sectional method
different people at different ages.
longitudinal
same people at different ages.
cross sequential
different ages, with measures taken at two (or more) times; a mix of the cross-sectional and longitudinal methods.
history and threats to internal validity
something happens in the world around the time of the treatment, outside of the subject.
maturation and threats to internal validity
changes inside the subject (brain/body), e.g., how infants change as they get older; the effect comes from inside the subject.
regression to the mean and threats to internal validity
when you measure a person once and then measure again later, the second measurement tends to be closer to the mean than an extreme first measurement (all hitters, whether in a slump or on a streak at the beginning, end up around the same average).
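Regression to the mean is easy to demonstrate by simulation; a minimal Python sketch with invented abilities and noisy measurements:

```python
import random

random.seed(0)

# Each "player" has a stable true ability; each test adds random noise.
n = 10_000
true_ability = [random.gauss(0, 1) for _ in range(n)]
test1 = [t + random.gauss(0, 1) for t in true_ability]
test2 = [t + random.gauss(0, 1) for t in true_ability]

# Select the most extreme scorers on the first test (top 5%)...
top = sorted(range(n), key=lambda i: test1[i], reverse=True)[:500]

mean1 = sum(test1[i] for i in top) / len(top)
mean2 = sum(test2[i] for i in top) / len(top)

# ...their second scores fall back toward the overall mean of 0,
# because part of the extreme first score was just noise.
print(round(mean1, 2), round(mean2, 2))
```

The second mean is reliably smaller than the first even though nothing about the subjects changed between tests.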
mortality/subject attrition and threats to internal validity
subjects don't come back; you lose subjects because they didn't want to return or you couldn't find them.
individual diff./subject variables and threats to internal validity
differences between subjects can bias the results; e.g., participants for whom the treatment didn't work are less likely to return.
chi-square test for independence
same logic as correlation, but for categorical data; compares observed data with the data we would expect to obtain if a specific hypothesis, the null, were true.
contingency table
used to record and analyze the relationship between two categorical variables.
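The chi-square statistic for a contingency table can be computed by hand; a minimal Python sketch using hypothetical observed counts:

```python
# Observed counts for a 2x2 contingency table (made-up data):
# rows = treatment vs. control, columns = improved vs. not improved.
observed = [[30, 10],
            [20, 20]]

row_totals = [sum(row) for row in observed]        # [40, 40]
col_totals = [sum(col) for col in zip(*observed)]  # [50, 30]
grand = sum(row_totals)                            # 80

# Expected counts under the null hypothesis of independence:
# E[i][j] = (row total * column total) / grand total
expected = [[r * c / grand for c in col_totals] for r in row_totals]

# Chi-square = sum over cells of (observed - expected)^2 / expected
chi2 = sum((observed[i][j] - expected[i][j]) ** 2 / expected[i][j]
           for i in range(2) for j in range(2))
print(round(chi2, 2))  # 5.33
```

A large chi-square means the observed counts are far from what independence predicts, which is evidence against the null.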
correlational methods
correlation does not equal causation.
Pearson's product-moment correlation coefficient
measures the strength and direction of a linear relationship between two variables (r ranges from -1 to +1); problems include directionality and confounding/intervening variables.
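For reference, a pure-Python sketch of how r is computed from its definition (the data points are invented):

```python
import math

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]

n = len(x)
mean_x = sum(x) / n
mean_y = sum(y) / n

# r = sum of cross-products of deviations, divided by the product
# of the square roots of the sums of squared deviations.
cross = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
ss_x = math.sqrt(sum((xi - mean_x) ** 2 for xi in x))
ss_y = math.sqrt(sum((yi - mean_y) ** 2 for yi in y))

r = cross / (ss_x * ss_y)
print(round(r, 3))  # 0.775
```

An r near +1 or -1 indicates a strong linear relationship; an r near 0 indicates a weak (or nonlinear) one.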
scatter diagrams
a plot of the paired values of the two variables; one extreme value affects the mean and the size of the correlation, and the correlation assumes the relationship is linear.
intervening variable
when an unmeasured third variable affects the outcome.
directionality
does A cause B or does B cause A?
truncated range
not looking at the entire range of values, which alters the conclusion.

ex. SAT scores and college acceptance
nonlinearity
the relationship between the variables is not static or directly proportional to the input, but dynamic and variable.
hypothesis testing
approach to answering questions about results
significance testing
how likely it is that the results occurred by chance.
null results
you can't prove a negative, and the same is true of the null hypothesis; you can only say that you have no evidence to reject the null hypothesis.
null hypothesis
the hypothesis of no effect or no difference; it might be falsified by a specific statistical test.
null hypothesis significance testing
if the null hypothesis is true, how likely are we to observe these results? how likely is it that the changes seen in the study are due to error (random variation)?
random variation
error; variation in the results that is not caused by the IV.
type one error
false positive; leads you to believe there is systematic variance when there isn't.
type two error
false negative; failing to detect systematic variance that is actually there.
eliminate null hypothesis
you can't prove the null hypothesis; instead ask: how likely is it that I would observe these results if the null were true? if that probability is small, the null is unlikely to be true.
ceiling effect
all scores near the top limit of the DV; e.g., making a test so easy that everyone gets an A. The IV has no room to move scores up further.
floor effect
all scores near the bottom limit of the DV; e.g., making a test so hard that everyone fails. The IV has no room to move scores down further.
null results
you can reject but never prove the null hypothesis;
the easiest way to get null results is a small sample with lots of variance.
4 ways to increase power of an experiment
1. increase the effect size: increase the difference between the mean distributions; the bigger the difference, the more power.
2. increase alpha: raising alpha decreases beta, which increases power.
3. decrease error and variability by increasing experimental control.
4. increase the number of participants (sample size).
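Ways 1, 2, and 4 can be illustrated with a rough power calculation; a minimal Python sketch for a one-tailed two-sample z-test (the normal approximation is standard, but the function name and numbers are ours):

```python
import math

def normal_cdf(z):
    # Standard normal cumulative distribution via the error function.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

def power_two_sample(effect_size, n_per_group, z_alpha=1.645):
    """Approximate power of a one-tailed two-sample z-test.

    z_alpha = 1.645 is the critical value for alpha = .05;
    a larger alpha means a smaller z_alpha and therefore more power.
    """
    noncentrality = effect_size * math.sqrt(n_per_group / 2)
    return normal_cdf(noncentrality - z_alpha)

# Bigger samples and bigger effects both raise power:
print(round(power_two_sample(0.5, 20), 2))   # small sample
print(round(power_two_sample(0.5, 80), 2))   # more participants (way 4)
print(round(power_two_sample(1.0, 20), 2))   # larger effect size (way 1)
```

Way 3 (reducing error variance) works through the effect size: less noise makes the standardized difference between means larger.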