
21 Cards in this Set

  • Front
  • Back
nominal vs. ratio variable
has named categories (e.g., species) vs. has a numeric value (e.g., weight)
type I vs. type II error
alpha (false alarm: rejecting a true null) vs. beta (miss: failing to reject a false null)
formulas for variance and standard deviation
variance = Σ(x - mean)^2 / (N - 1)

standard deviation = sqrt(variance)
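As a quick sanity check on these formulas, here is a minimal Python sketch (the sample weights are made-up numbers for illustration):

```python
import math

def sample_variance(xs):
    """Sample variance: sum of squared deviations from the mean, over N - 1."""
    n = len(xs)
    mean = sum(xs) / n
    return sum((x - mean) ** 2 for x in xs) / (n - 1)

def sample_sd(xs):
    """Standard deviation: square root of the sample variance."""
    return math.sqrt(sample_variance(xs))

weights = [2.0, 4.0, 4.0, 4.0, 5.0, 5.0, 7.0, 9.0]  # hypothetical data
print(sample_variance(weights))  # 32/7 ≈ 4.571
print(sample_sd(weights))        # ≈ 2.138
```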
parameters
properties of populations
in the ANOVA, are variables nominal or ratio?
- the dv is ratio

- the ivs are nominal
sampling error
the difference between a sample estimate and the corresponding population parameter
univariate vs. multivariate
- 1 dv vs. multiple dvs.
counterbalancing
- presenting the levels of a repeated-measures factor in a different (e.g., random) order to each subject, so that order effects are spread across conditions
assumption made by using MS (what, how)
- assumption of homogeneity of variance (especially for pooled variance)
- because MS averages the variance of subject scores across groups
expected mean squares (definition, what they tell us)
- these predict what the mean square for a given source looks like in the population; the error term for each source of variation is derived from them.

- they also tell us that F must be greater than 1 before a result can be significant.
the assumptions underlying ANOVA (4)
- that subjects are randomly sampled (sample represents the population)

- independent observations (that each observation has no influence on the next)

- normal distribution (scores within each group are normally distributed)

- homogeneity of variance (group variances are equal)
the Fmax test (what for, formula)
- tests the assumption of homogeneity of variance, usually when there are outlying data points or unequal n's.

- Fmax = s^2 of the largest group variance / s^2 of the smallest group variance

- if Fmax is ABOVE the critical value, the assumption IS violated and transformations are needed
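The Fmax statistic itself is just a ratio, as in this minimal Python sketch (the group variances are made-up; the critical value must still come from an Fmax table):

```python
def f_max(group_variances):
    """Hartley's Fmax: largest group variance divided by the smallest."""
    return max(group_variances) / min(group_variances)

# hypothetical sample variances for three groups
stat = f_max([2.5, 4.0, 10.0])  # 10.0 / 2.5 = 4.0
# compare `stat` to the tabled Fmax critical value for k groups and
# n - 1 df per group; above it, homogeneity of variance is violated
print(stat)
```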
error term rules (3)
1. if A is crossed with no random factors, the error term is at the bottom of the ANOVA table

2. if A is crossed with one random factor S, the error term is MS(A×S)

3. if A is crossed with more than one random factor, there is no simple error term
why are unequal n's bad?
- because they make A and B related, no longer orthogonal

- when one n is about 3 times larger than another, the calculation is thrown off so badly that you can't even perform the ANOVA
homogeneity of covariance
an assumption made by repeated-measures designs.

it means the levels of the repeated measure all covary equally with one another. this gets harder to satisfy the more levels you have, which is why you apply the Greenhouse-Geisser conservative df correction to significant results. for results that are significant at first but become non-significant after the correction, you can apply another, less conservative correction via software.

Greenhouse-Geisser is a very conservative test: it protects against alpha (type I) error.
advantages and disadvantages for repeated measures designs (4)
advantages
- require fewer subjects
- smaller error terms: the larger the variation between subjects (which is removed), the smaller the leftover error term

disadvantages
- carry-over effects (can be reduced with counterbalancing)
- can violate the assumption of homogeneity of covariance
scheffe test critical value
Fc = (k - 1) * F(k - 1, df_error)
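In Python the Scheffé adjustment is a one-liner once the tabled F is known (the F value below is a hypothetical table entry, not computed here):

```python
def scheffe_critical(k, f_crit):
    """Scheffé critical value: (k - 1) times the tabled F(k - 1, df_error)."""
    return (k - 1) * f_crit

# e.g., k = 4 groups; suppose the tabled F(3, df_error) at alpha = .05 is 2.92
fc = scheffe_critical(4, 2.92)  # 3 * 2.92 = 8.76
print(fc)
```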
why use tests of simple main effects?
- because each post hoc (except t-tests) gets weaker as the # of means being compared increases: the other post hocs all use a setwise correction, so the critical values of the test can be so high that you don't find significance.

- simple main effects let you isolate the row/column that is the source of variation before running post hocs, reducing the # of means you have to compare.
what are the two cases for using planned comparisons?
- when you want to know something specific about the ANOVA (make a specific comparison) and don't care about the majority of the results

- to test the difference between two non-significant results
what happens if there is non-orthogonality in the planned comparison?
- you carry through with the calculation for SS, MS, and F as usual.

- when choosing an Fcrit, apply a Bonferroni correction: divide the alpha level by the # of comparisons you are making
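A minimal sketch of the Bonferroni adjustment described above (the numbers are illustrative):

```python
def bonferroni_alpha(alpha, n_comparisons):
    """Per-comparison alpha level after a Bonferroni correction."""
    return alpha / n_comparisons

# overall alpha = .05 split across 3 planned comparisons
per_test = bonferroni_alpha(0.05, 3)  # ≈ 0.0167
print(per_test)
```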
assumptions of a planned comparison
that the factor being tested is nominal, with levels equally spaced apart