76 Cards in this Set
What is the goal of inferential statistics?

Goal: to draw conclusions about a population by analyzing data from a sample.


What are the two types of inferential statistics?

parametric (make assumptions about population parameters/distributions)
nonparametric (do not make those assumptions)

What are the steps of hypothesis testing?

1. State the statistical (null) hypothesis.
2. Choose an appropriate test statistic.
3. Set the criterion for rejecting the statistical (null) hypothesis.
4. Calculate the statistic from the sample.
5. Compare the test statistic to the criterion.
6. Decide to reject or fail to reject the statistical (null) hypothesis and state an appropriate conclusion.
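The six steps above can be sketched with SciPy's one-sample t-test (the sample values and test value here are illustrative assumptions, not from the deck):

```python
from scipy import stats

sample = [5.1, 4.8, 5.6, 5.0, 5.3, 4.9, 5.2, 5.4]  # illustrative sample data

# Step 1: Ho: mu = 5.0 (the test value); Step 2: one-sample t-test
test_value = 5.0
# Step 3: criterion for rejecting the null
alpha = 0.05
# Step 4: calculate the statistic from the sample
t_stat, p_value = stats.ttest_1samp(sample, test_value)
# Steps 5-6: compare to the criterion and state the decision
if p_value < alpha:
    decision = "reject the null hypothesis"
else:
    decision = "fail to reject the null hypothesis"
print(f"t = {t_stat:.3f}, p = {p_value:.3f}: {decision}")
```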

What are the two types of t-tests?

One-Group (Sample) Case for the Mean (aka one-sample t-test)
Two-Group (Sample) Case for the Mean*
*In addition, there are 2 kinds of two-group t-tests.

What are the two types of two-group t-tests?

Independent Samples
Dependent Samples 

Independent Samples t-test

A type of two-group t-test:
one measurement from each member of the two groups in the sample.
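A minimal sketch of an independent-samples t-test with SciPy, using one measurement per member of two separate groups (the scores are made-up illustrations):

```python
from scipy import stats

group1 = [82, 75, 90, 68, 77, 85, 80]
group2 = [70, 65, 72, 60, 74, 68, 71]

# equal_var=True assumes homogeneity of variance (the classic pooled t-test)
t_stat, p_value = stats.ttest_ind(group1, group2, equal_var=True)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```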

Dependent Samples t-test

A type of two-group t-test.
Two possibilities create the need for a dependent-samples analysis:
1. two measures are taken from each member of the sample, or
2. there is an assumption that the subjects are related and the measurements are related.
Ex: pre- and post-test scores: if a subject scores higher on the pretest, they will tend to score higher on the posttest.
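The pre/post example above can be sketched as a paired t-test with SciPy (the pre and post scores are illustrative assumptions):

```python
from scipy import stats

pre  = [60, 72, 55, 68, 80, 63, 70, 58]   # pretest score per subject
post = [65, 75, 60, 70, 86, 66, 74, 61]   # posttest score, same subjects

# ttest_rel pairs the observations subject-by-subject,
# testing whether the mean difference score is zero
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.3f}, p = {p_value:.4f}")
```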

Hypotheses for the one-sample t-test

measures the difference between a one-group sample mean and a "test value" (denoted by the letter "a")


How is a test value specified?

Specified by the researcher based on literature or by chance.


Hypotheses for the one-sample t-test, nondirectional and directional:

Nondirectional:
Research Hypothesis: The mean value of a characteristic in the population is different from a designated test value.
Ha: Xbar ≠ a (where a = test value)
Null Hypothesis: The mean value of a characteristic in the population is not different from a designated test value.
Ho: µ = a
Directional:
Research: Ha: Xbar > a
Null: Ho: µ ≤ a

How to choose a test value:

1. based on the literature
2. the midpoint of the test variable
3. a value of the test variable that represents chance

For the one-sample t-test, the standard deviation of the sampling distribution is called...

the standard error of the mean.
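The standard error of the mean is the sample standard deviation divided by the square root of n; a quick sketch with NumPy (illustrative data):

```python
import numpy as np

sample = np.array([12.0, 15.0, 11.0, 14.0, 13.0])
sd = sample.std(ddof=1)          # sample standard deviation (n - 1 in denominator)
sem = sd / np.sqrt(len(sample))  # standard error of the mean
print(round(sem, 4))             # → 0.7071
```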


Assumptions of the one-sample t-test

The study variable under consideration:
1. is measured at interval/ratio level
2. is normally distributed in the population
3. sample observations are random and independent

What does the two-sample t-test (independent samples t-test) measure?

measures the difference between two group means to determine how likely it is that both groups are from the same population or from different populations


Hypotheses for the independent samples t-test

If the hypothesis is nondirectional:
Research Hypothesis: There is a difference between the two groups on the dependent variable.
Ha: Xbar1 ≠ Xbar2 (or Ha: Xbar1 - Xbar2 ≠ 0)
Null Hypothesis: There is no difference between the two groups on the dependent variable.
Ho: µ1 = µ2 (or Ho: µ1 - µ2 = 0)
If it is directional:
Research: Ha: Xbar1 > Xbar2 (or Ha: Xbar1 - Xbar2 > 0)
Null: Ho: µ1 ≤ µ2 (or Ho: µ1 - µ2 ≤ 0)

What is the sampling distribution for the independent samples t-test?

theoretical distribution of the differences in the means of two groups
the standard deviation of the sampling distribution is called the standard error of the difference

Assumptions of the independent t-test

1. one dependent variable measured at interval/ratio level
2. dependent variable is assumed to be normally distributed in the population
3. one independent variable that is categorical with two levels (ex: m/f)
4. observations are random and independent
5. homogeneity of variance: the variance of the dependent variable is assumed to be equal in the two groups (and similar to the population)
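Assumption 5 (homogeneity of variance) is commonly checked with Levene's test; a non-significant result is consistent with equal variances. A sketch with SciPy (illustrative data; the second group has a different mean but the same spread):

```python
from scipy import stats

group1 = [10, 12, 9, 11, 13, 10]
group2 = [20, 22, 19, 21, 23, 20]  # shifted mean, same spread as group1

stat, p_value = stats.levene(group1, group2)
# p > .05 here would mean no evidence of unequal variances
print(f"Levene W = {stat:.3f}, p = {p_value:.3f}")
```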

Two-sample case for the mean (dependent samples), aka Paired t-test

simplest type of related-measures test
the dependent variable is measured twice for each subject, or there is some basis to assume that the responses from the subjects in the sample would be correlated
adjusts the test statistic to compensate for the correlation between the two observations
uses a pooled estimate of variance

Hypotheses for the Paired t-test:
nondirectional
Research Hypothesis: the mean of the difference scores across the two measurements is not zero.
Ha: ∂ = Xbar1 - Xbar2 ≠ 0
Null Hypothesis: the mean of the difference scores across the two measurements is zero.
Ho: ∂ = µ1 - µ2 = 0

Hypotheses for the Paired t-test:
directional
Research Hypothesis: the mean of the difference scores across the two measurements is greater than zero.
Ha: ∂ = Xbar1 - Xbar2 > 0
Null Hypothesis: the mean of the difference scores across the two measurements is equal to or less than zero.
Ho: ∂ = µ1 - µ2 ≤ 0

What are the 2 inferential procedures in which sampling distributions and sampling error are used?

Parameter Estimation (Confidence Intervals)
Hypothesis Testing (Significance Testing) 

Formula for Confidence Interval

Sample Statistic ± (Critical Value)(Standard Error*)
*Standard Error = standard deviation of the sampling distribution.
The following statistics can be used as the sample statistic: mean, proportion, correlation.

Procedure for calculating a confidence interval:

1. Calculate the sample statistic.
2. Determine the level of confidence (set a level of significance). Ex: .05 or .01 correspond to 95% or 99%, respectively.
3. Select the appropriate critical value from a table.
4. Estimate the standard error (if n > 100 use z; if n ≤ 100 use t).
5. Apply the formula.
6. Interpret the result.

Confidence Interval Around a Mean

Ex:
1. mean: 138
2. set level of significance: .05 (95%)
3. n = 25 ≤ 100, so use t (always 2-tailed)
4. sample size is 25, so df = 24 (n - 1)
5. estimate the std. error (see formula)
6. apply the CI formula: Sample Statistic ± (Critical Value)(Standard Error)
7. interpret: "I am 95% confident that the interval 136.97 to 139.03 contains the population mean."
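The card's numbers can be reproduced with SciPy. A sample SD of 2.5 (so SE = 0.5) is an assumption chosen here to match the card's interval:

```python
import numpy as np
from scipy import stats

mean, sd, n = 138.0, 2.5, 25          # sd = 2.5 is assumed to match the card
se = sd / np.sqrt(n)                  # standard error = 0.5
t_crit = stats.t.ppf(0.975, df=n - 1) # two-tailed critical t at .05, df = 24
lower, upper = mean - t_crit * se, mean + t_crit * se
print(f"95% CI: {lower:.2f} to {upper:.2f}")  # → 95% CI: 136.97 to 139.03
```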

Power and Effect Size are used to...

...determine "practical significance"


What is effect size?

The size of the phenomenon under study.


Effect sizes are expressed as...

...an amount of shared variance.


Effect Size for the One-Sample t-test

It evaluates the degree to which the mean score on the test variable differs from the test value (specified by the researcher), expressed in SD units.
Formula: d = mean difference / SD
where:
d = standardized effect size
mean difference = average difference b/w each observed value in the sample and the test value
SD = sample standard deviation
ALSO: d = t / √N
where:
d = standardized effect size
t = t value (given in table)
N = # of subjects in the sample
Interpretation of d: small = .2, medium = .5, large = .8
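Both formulas on the card give the same value, which can be checked numerically (the sample and the test value of 5.0 are illustrative assumptions):

```python
import numpy as np
from scipy import stats

sample = np.array([5.4, 5.9, 5.2, 6.1, 5.7, 5.5, 5.8, 6.0])
test_value = 5.0

# d = mean difference / SD
d1 = (sample.mean() - test_value) / sample.std(ddof=1)

# d = t / sqrt(N)
t_stat, _ = stats.ttest_1samp(sample, test_value)
d2 = t_stat / np.sqrt(len(sample))

print(round(d1, 3), round(d2, 3))  # the two estimates agree
```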

Effect Size for the Independent Samples t-test

d = t × √((N1 + N2) / (N1 × N2))
Interpretation of d: small = .2, medium = .5, large = .8
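A sketch of that formula in code (group scores are illustrative assumptions):

```python
import numpy as np
from scipy import stats

group1 = [82, 75, 90, 68, 77, 85, 80]
group2 = [70, 65, 72, 60, 74, 68, 71]
n1, n2 = len(group1), len(group2)

t_stat, _ = stats.ttest_ind(group1, group2, equal_var=True)
d = t_stat * np.sqrt((n1 + n2) / (n1 * n2))  # d = t * sqrt((N1+N2)/(N1*N2))
print(round(d, 3))  # compare against Cohen's benchmarks: .2 / .5 / .8
```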

What is Eta Squared?

Another measure of effect size for the independent samples t-test.
The proportion of variance in the dependent variable that can be attributed to the grouping (independent) variable.
"___% of the variability in the test scores can be attributed to (independent variable)."

Calculation of Eta Squared (η²)

η² = t² / (t² + N1 + N2 - 2)
Interpretation: small = .01, medium = .06, large = .14
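The eta-squared formula can be sketched directly from the t value (illustrative group scores):

```python
from scipy import stats

group1 = [82, 75, 90, 68, 77, 85, 80]
group2 = [70, 65, 72, 60, 74, 68, 71]
n1, n2 = len(group1), len(group2)

t_stat, _ = stats.ttest_ind(group1, group2, equal_var=True)
# eta^2 = t^2 / (t^2 + N1 + N2 - 2)
eta_sq = t_stat**2 / (t_stat**2 + n1 + n2 - 2)
print(round(eta_sq, 3))  # benchmarks: .01 small, .06 medium, .14 large
```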

What is the Effect Size in the Dependent Samples t-test?

It evaluates the degree to which the mean of the difference scores deviates from 0.
d = t / √N
where N = # of pairs of observations, NOT # of subjects
Interpretation of d: small = .2, medium = .5, large = .8

What is power?

Probability of not making a mistake;
probability of correctly rejecting a false null hypothesis (1 - ß = power). (Howell, p. 335)

Type I Error
Type II Error 
Type I: rejecting a true null hypothesis (probability = alpha)
Type II: failing to reject a false null hypothesis (probability = ß)

What factors affect power?

1. level of significance (alpha): increase alpha, increase power
2. directionality: directional (one-tailed) tests are more powerful than nondirectional (two-tailed) tests
3. sample size and population variance: increase sample size (smaller sampling error), increase power
4. effect size: decrease effect size, decrease power
*sample size is often changed to vary power
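The sample-size factor can be illustrated with a small Monte Carlo sketch: simulate many one-sample experiments at a fixed true effect and count how often the null is rejected (the effect size, alpha, and sample sizes here are all illustrative assumptions):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def estimate_power(n, effect=0.5, alpha=0.05, reps=2000):
    """Fraction of simulated one-sample t-tests (true mean = effect,
    SD = 1, tested against 0) that reject at the given alpha."""
    rejections = 0
    for _ in range(reps):
        sample = rng.normal(loc=effect, scale=1.0, size=n)
        _, p = stats.ttest_1samp(sample, 0.0)
        if p < alpha:
            rejections += 1
    return rejections / reps

p_small, p_large = estimate_power(10), estimate_power(40)
print(p_small, p_large)  # power rises with sample size
```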

How to maximize power:

1. increase the alpha level (not past .10)
2. use a one-tailed test
3. look for a larger effect size
4. use a sufficiently large sample

How is power used when conducting research?

1. considered a priori, to determine a sufficient sample size
2. considered after collecting data for the study (post hoc) to assist in interpretation of results (use when you get results you didn't expect, to check whether you had enough power to detect the effect)

By convention, what is sufficient power?

.80


Benefits of ANOVA
Analysis of Variance 
Allows testing of the differences between multiple population means while maintaining the Type I error rate at a preestablished alpha level for all comparisons.
Determines whether group means are significantly different. 

Types of ANOVA

One-way or Simple ANOVA: One IV, One DV
Two-way ANOVA: Two IVs, One DV
Multifactorial ANOVA: Two or more IVs, One DV
Multivariate ANOVA (MANOVA): One or more IVs, Two or more DVs
Repeated Measures ANOVA: One or more IVs, one or more DVs measured on more than one occasion

What is Simple ANOVA?

One IV, at nominal or ordinal level
One DV, at interval/ratio level
Generally answers:
1. Is there a difference among two or more group means on the dependent variable?
2. If there is a difference, which group means are different from one another?

What are the assumptions of Simple ANOVA?

1. Dependent variable is normally distributed in the population.
2. Observations should be random and independent.
3. Homogeneity of variance among the groups.

ANOVA looks at Total Variance divided into 2 parts:

Within-Group Variation (unexplained: differences due to sampling error)
Between-Group Variation (explained: differences due to group membership/characteristics, combined with sampling error)

Between Groups Variance

Differences among subjects exposed to different treatments or having different characteristics, AND due to sampling fluctuation (error).
Avg. squared difference b/w the group means and the grand mean*
*grand mean = mean of all the subjects in the sample

Within Groups Variance

differences among subjects exposed to the same treatment or having the same characteristics, due solely to sampling fluctuation (error).
Avg. squared difference b/w each score in the group and its own group mean.

Total Variance

Avg. squared difference b/w each score in the sample and the grand mean.


ANOVA tests...

whether the Between-Group Variance is greater than the Within-Group Variance
If B/W Group Var > W/in Group Var → groups are significantly different
If B/W Group Var ≤ W/in Group Var → groups are not significantly different

What is the test statistic for ANOVA?

F ratio
Sampling distributions are distributions of F ratios for different sample sizes and different numbers of groups.
F = b/w-group variability / w/in-group variability

In ANOVA, if the Null Hypothesis is true...

F Ratio = 1.00
no treatment effect; all variance is due to sampling fluctuation only

In ANOVA, if the Null Hypothesis is false...

F Ratio > 1.00
B/W Groups Var. = treatment variance + sampling fluctuation 

Hypotheses for ANOVA

Research: at least one group mean is different on the dependent variable.
Ha: Xbarj ≠ Xbark for some j, k
Null: Group means do not differ on the dependent variable.
Ho: µ1 = µ2 = ... = µk

Calculating the Test Statistic (F Ratio): Terminology

SSB = sum of squares between: squared deviation of each group mean from the grand mean, times group size, summed across groups.
SSW = sum of squares within: squared deviations of each score from its own group mean, summed within each group and then across groups.
SST = sum of squares total: sum of the squared deviations of each score from the grand mean.
Mean Square B/W = SSB / dfB
Mean Square W/in = SSW / dfW
dfB = k - 1 (# groups - 1)
dfW = N - k (# subjects - # groups)
dfT = N - 1 (# subjects - 1)
F = MSB / MSW (variance b/w / variance w/in)
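The sums of squares and F ratio can be computed by hand and checked against SciPy's one-way ANOVA (the three groups of scores are illustrative assumptions):

```python
import numpy as np
from scipy import stats

groups = [np.array([4.0, 5.0, 6.0, 5.0]),
          np.array([7.0, 8.0, 9.0, 8.0]),
          np.array([2.0, 3.0, 4.0, 3.0])]
all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k, N = len(groups), len(all_scores)

# SSB: squared deviation of each group mean from the grand mean, times group size
ss_b = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
# SSW: squared deviations of each score from its own group mean, summed
ss_w = sum(((g - g.mean()) ** 2).sum() for g in groups)
# SST: squared deviations of each score from the grand mean (= SSB + SSW)
ss_t = ((all_scores - grand_mean) ** 2).sum()

ms_b = ss_b / (k - 1)   # df between = k - 1
ms_w = ss_w / (N - k)   # df within = N - k
f_manual = ms_b / ms_w

f_scipy, _ = stats.f_oneway(*groups)
print(round(f_manual, 4), round(f_scipy, 4))  # the two F values match
```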

Interpreting the Test Statistic (F Ratio)

Consult the table (Howell, pp. 516-7).
F > tabled value → significant.
Computer output: if the observed level of significance is less than .05, it is significant.

Use F Ratio to Decide about Hypothesis in ANOVA

If the test is not significant, i.e. F ratio ≤ tabled value OR F ratio sig. > .05 (computer output), then fail to reject the null.
If the test is significant, i.e. F ratio > tabled value OR F ratio sig. ≤ .05 (computer output), then reject the null and conclude that at least one pair of group means is different.

Follow-up tests are performed after ANOVA if...

...the ANOVA includes more than one comparison of two means, in order to determine which pairs of means are significantly different. (Multiple Comparison Tests)
If there is only one comparison of two means, no post hoc testing is needed, just look at the two and decide how they are different. 

Post Hoc Tests

Decided by Levene's test:
Equal variances: Tukey's HSD, Scheffé
Unequal variances: Dunnett's C

Simple ANOVA: Effect Size

Eta squared = SSB / SST
(The computer's general linear model procedure will compute this.)

For a simple one-way ANOVA with 2 groups, F = ...

F = t²
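This identity is easy to verify numerically: run a pooled t-test and a one-way ANOVA on the same two groups (illustrative scores):

```python
from scipy import stats

group1 = [82, 75, 90, 68, 77, 85, 80]
group2 = [70, 65, 72, 60, 74, 68, 71]

t_stat, _ = stats.ttest_ind(group1, group2, equal_var=True)
f_stat, _ = stats.f_oneway(group1, group2)
print(round(f_stat, 6), round(t_stat**2, 6))  # F equals t squared
```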


When is the multifactorial ANOVA used?

To examine research questions with two or more independent variables (factors) with one dependent variable.
Described by the number of variables (factors). Ex: Two-Way ANOVA. (*rarely more than three IVs)

What are the advantages of the multifactorial ANOVA?

1. Efficiency: test more than one IV w/ a single analysis.
2. Control: control the effect of additional variables by including them in the analysis.
3. Interaction: study the interaction b/w IVs r/t the DV and the separate effects (main effects) of each IV on the DV.

Data requirements for multifactorial ANOVA:

IVs must be categorical (nominal/ordinal) w/ at least 2 categories in each variable.
DV must be measured at interval/ratio level. 

*Assumptions for using multifactorial ANOVA:
*same as one-way ANOVA
DV is normally distributed.
Observations are random and independent.
Homogeneity of variance.
Robust to violations of assumptions (if group sizes are not too small and are about equal).

Multifactorial ANOVA:
The interaction effect 
A 3×2 ANOVA has 6 cell means that represent the "crossing" of the two independent variables; they are compared to one another in the first part of the ANOVA.


Multifactorial ANOVA:
The First Main Effect 
In a 3×2 ANOVA, these are the "row means."
They are compared to one another in the second part of the ANOVA.

Multifactorial ANOVA:
The Second Main Effect 
In a 3×2 ANOVA, these are the "column means."
They are compared to one another in the third part of the ANOVA.

Main Difference b/w Multifactorial ANOVA and simple ANOVA:

Between-Group Variance is now subdivided to represent the interaction between the IVs and the main effect of each IV separately.
Total Var: variation among all the scores.
B/W Group Var:
1. Var among row means (effect of the first IV): b/w each row mean & the grand mean
2. Var among column means (effect of the second IV): b/w each column mean & the grand mean
3. Var due to interaction (effect of the 1st IV across levels of the 2nd IV): b/w each cell mean & the grand mean
W/in Group Var: variation within cells; diff b/w each score & its own cell mean

What is an interaction?

When the effect of a variable depends on the groups or conditions to which it is applied.
The effects of an IV on the DV are different across the levels of a second IV. 

Interaction Effect

Cell mean minus the row and column (main effect) means, plus the grand mean:
µjk - µj - µk + µ
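That formula can be sketched for a whole table of cell means at once (the 3×2 table of means below is an illustrative assumption, with equal cell sizes so that plain averaging gives the marginal means):

```python
import numpy as np

cell_means = np.array([[10.0, 12.0],
                       [14.0, 16.0],
                       [18.0, 26.0]])  # rows = first IV levels, cols = second IV levels

grand = cell_means.mean()
row = cell_means.mean(axis=1, keepdims=True)  # row (marginal) means
col = cell_means.mean(axis=0, keepdims=True)  # column (marginal) means

# interaction effect per cell: mu_jk - mu_j - mu_k + mu
interaction = cell_means - row - col + grand
print(np.round(interaction, 2))  # all zeros would mean no interaction
```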

How do you identify a significant interaction?

Plot the cell means for one IV separately for each level of the 2nd IV.
No interaction: parallel lines
Ordinal interaction: nonparallel lines that do not intersect
Disordinal interaction: nonparallel lines that intersect

What does multifactorial ANOVA test?

Whether B/W Group Var > W/in Group Var.
B/W Group Var > W/in Group Var → groups are significantly different.
B/W Group Var < W/in Group Var → groups are NOT significantly different.

Statistical questions for multifactorial ANOVA:

1. Do the levels of the 1st IV affect the DV in the same way across the levels of the 2nd IV? (Is there a significant interaction among the IVs?)
2. Is there a difference among two or more group means within each IV on the DV? (Are there significant main effects?)
3. If there is a difference among group means, which specific group means are different from one another? (Are there significant post hoc tests?)

Hypotheses for multifactorial ANOVA
(There are 3 sets of hypotheses b/c there are three types of means being compared.) 
Research: At least one group mean is different from one other group mean on the dependent variable.
Ha: Xbarj1 ≠ Xbarj2 for some row pair, or
Ha: Xbark1 ≠ Xbark2 for some column pair, or
Ha: at least one (Xbarjk - Xbarj - Xbark + Xbar) ≠ 0
Null: There is no difference among group means on the dependent variable.
Ho: µj1 = µj2 = ... for all rows, or
Ho: µk1 = µk2 = ... for all columns, or
Ho: all (µjk - µj - µk + µ) = 0 (i.e., there is no interaction)

Test statistic for multifactorial ANOVA:

F ratio
F = B/W Group Var / W/in Group Var
A separate test statistic is calculated for each of the null hypotheses being tested.

Interpreting multifactorial ANOVA:

Look at the interaction effect (F ratio) first.
If significant: do an analysis of simple main effects, plot the means, and do a one-way ANOVA using one IV separately for each level of the second IV.
If not significant: interpret the main effects.

Multiple comparison tests with multifactorial ANOVA:

Used to analyze each IV when there are more than 2 levels.
Purpose is to compare each pair of means without increasing the Type I error rate.

Effect size with multifactorial ANOVA:
*calculated with statistical software 
A measure of the association b/w the IV and DV in ANOVA.
Identifies the proportion of variance in the DV that is explained by the IV(s).
Eta squared = SSB / SST
For a two-way ANOVA there would be 3 effect sizes:
Interaction
First Main Effect
Second Main Effect
**SEE PAGE 8 in NOTES!!