176 Cards in this Set


Identify the formulae for all degrees of freedom in ANOVA

df total = N-1
df factor = # levels of factor - 1


df interaction = product of the dfs for the factors in the interaction, e.g. (b-1)x(a-1)


df error = total # observations (N) - # treatments
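As a quick numeric check, the df formulas above can be verified for a hypothetical 2 x 3 between-subjects design with n = 10 participants per cell (all numbers illustrative):

```python
# Worked check of the ANOVA df formulas for a hypothetical 2x3
# between-subjects factorial with n = 10 participants per cell.
a, b, n = 2, 3, 10        # levels of factor A, levels of factor B, per-cell n
N = a * b * n             # total observations = 60

df_total = N - 1                  # N - 1 = 59
df_A = a - 1                      # levels of A minus 1 = 1
df_B = b - 1                      # levels of B minus 1 = 2
df_AxB = (a - 1) * (b - 1)        # product of factor dfs = 2
df_error = N - a * b              # total observations minus # treatments = 54

# The component dfs partition the total df exactly:
assert df_A + df_B + df_AxB + df_error == df_total
print(df_total, df_A, df_B, df_AxB, df_error)
```

The assertion makes the partitioning explicit: the effect and error dfs always sum to df total.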

Identify the formulae for all degrees of freedom in 3-way factorial ANOVA

Identify the formulae for all degrees of freedom in within participants ANOVA

df total = nj (#ps x # conditions) - 1 = N (# conditions) - 1


df p = n-1


df tr = j-1


df error = (n-1)(j-1); this is now an interaction: Ps x treatment

Identify the formulae for all degrees of freedom in a mixed ANOVA

df total = gbn -1


df between ps = (g)(n) -1


> df group = g-1


> df Ps within G = df between Ps - df group


df within Ps = df total - df ps


> df block = b-1


> df BG = (b-1)(g-1)


> df BxP within G = df within - dfB - dfBG
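The mixed-design partition above can also be checked numerically; the sketch below uses hypothetical sizes (g = 2 groups, b = 4 within-subjects blocks, n = 10 Ps per group):

```python
# Worked check of the mixed-ANOVA df partition, hypothetical design:
# g = 2 groups (between), b = 4 blocks (within), n = 10 Ps per group.
g, b, n = 2, 4, 10

df_total = g * b * n - 1                    # gbn - 1 = 79
df_between_ps = g * n - 1                   # (g)(n) - 1 = 19
df_group = g - 1                            # 1
df_ps_within_g = df_between_ps - df_group   # 18
df_within_ps = df_total - df_between_ps     # 60
df_block = b - 1                            # 3
df_bg = (b - 1) * (g - 1)                   # 3
df_b_x_ps_within_g = df_within_ps - df_block - df_bg  # 54

# All effect and error dfs again partition df total exactly:
assert (df_group + df_ps_within_g + df_block
        + df_bg + df_b_x_ps_within_g) == df_total
print(df_total, df_between_ps, df_within_ps)
```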

What is a one-way design aiming to determine?

Are the mean dependent variable scores of the populations for each level of the factor different from the grand mean?

What is a factorial design aimed at determining?

Is there a main effect of factor A


Is there a main effect of factor B


Is there an A x B interaction

What are the advantages of factorial designs

- requires fewer participants because we average over the levels of the other factors


- allows examination of the interaction of IVs


-- generalisability of results can be assessed: is the difference described by the main effect the same across the levels of the other factor?

In factorial ANOVA, what is on the X axis and what is on the Y?

Y = DV


X = IV (with the most levels or most important)


Lines = other factor

What do non-parallel lines tell us in factorial ANOVA

Interaction!
Ordinal or disordinal (crossed lines)

How do we determine if there's a main effect in factorial ANOVA plots?

Differences in the average height of the levels of the factor

What are the sources of variance in a one-way ANOVA?

Between groups - distribution of group means around grand mean


Within groups - distribution of individual DV scores around the group mean

if MStreat is a good estimate of error variance, F = ...

MStreat/MSerror = 1

if MStreat is more than just error variance, F = ...

MStreat/MSerror > 1

What do larger F values in ANOVA indicate?

H0 is probably wrong

Derivation for one-way ANOVA: expected mean squares

- an expected value of a statistic is defined as the ‘long-range average’ of a sampling statistic


- our expected mean squares are:




E(MSerror) --> σe², i.e., the long-run average of the variances within each sample (s²) would be the population error variance σe²




E(MStreat) --> σe² + nσt², where σt² is the long-run average of the variance between sample means and n is the number of observations in each group; i.e., the long-run average of the variances within each sample PLUS any variance between the samples
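The 'long-range average' idea can be illustrated with a small simulation (an illustrative sketch, not part of the derivation): under H0, with no treatment effect, the long-run averages of MStreat and MSerror should both converge on the population variance, here σ² = 1.

```python
import random, statistics

# Simulation sketch: many one-way experiments drawn under H0 (all groups
# from the same N(0, 1) population), j = 3 groups, n = 20 per group.
random.seed(1)
j, n, reps = 3, 20, 2000
ms_treat, ms_error = [], []
for _ in range(reps):
    groups = [[random.gauss(0, 1) for _ in range(n)] for _ in range(j)]
    means = [statistics.fmean(g) for g in groups]
    grand = statistics.fmean(means)
    # MStreat: n times the variance of the group means around the grand mean
    ms_treat.append(n * sum((m - grand) ** 2 for m in means) / (j - 1))
    # MSerror: average of the within-group sample variances
    ms_error.append(statistics.fmean(statistics.variance(g) for g in groups))

# Both long-run averages sit near the population variance of 1,
# so F = MStreat/MSerror averages near 1 when H0 is true.
print(statistics.fmean(ms_treat), statistics.fmean(ms_error))
```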

How is variance partitioned in a two-way factorial ANOVA

Total variation


- Within groups


-Between Groups


-- Variance due to factor A


--Variance due to factor B


--Variance due to A x B

Assumptions of ANOVA

Population


- treatment populations are normally distributed and have the same variance


Sample


- samples are independent


-independent random sampling


-each sample has at least 2 observations and equal n


Data (DV scores)


- measured using a continuous scale (interval/ratio)


-calculations for means, variance, etc. do not make sense for other kinds of scales


Main effects and interactions are omnibus tests

What are the problems associated with significance testing as a function of determining the importance of findings

-use of arbitrary acceptance criterion (alpha) results in a binary outcome (sig or not sig)


-no info about the practical significance of findings


- a large p-value (ns) will eventually slip under the acceptance criterion as the sample size increases

What does effect size allow you to do in terms of assessing results?

- the effect size gives you another way of assessing the reliability of the result in terms of variance


- can compare size of effects within a factorial design: how much variance explained by factor 1, factor 2, their interaction, etc.


-differentiating effect sizes (Cohen, 1973):


-- 0.2 = small


-- 0.5 = medium


-- 0.8 = large

What are the two main approaches to estimating effect sizes in ANOVA

Eta-squared (η²)


Omega-squared (ω²)


The difference between them depends on sample size and error variance

What does eta-squared tell us?

-describes the proportion of the variance in the sample's DV scores that is accounted for by the effect


-considered a biased estimate of the true magnitude of the effect in the population


-most commonly reported effect size because it's easily interpretable (ranges from 0 to 1)

What does omega-squared tell us?

- describes the proportion of variance in the population's DV scores that is accounted for by the effect


-a less biased estimate of the effect size


-more conservative estimate
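The contrast between the two estimates can be made concrete with a worked example (the summary-table numbers below are hypothetical; the formulas are the standard one-way versions):

```python
# Hypothetical one-way ANOVA summary table: 3 groups, N = 60.
ss_treat, ss_error = 40.0, 160.0
df_treat, df_error = 2, 57
ss_total = ss_treat + ss_error
ms_error = ss_error / df_error

# Eta-squared: proportion of sample variance due to the effect (biased).
eta_sq = ss_treat / ss_total
# Omega-squared: population estimate, corrected using MSerror (less biased).
omega_sq = (ss_treat - df_treat * ms_error) / (ss_total + ms_error)

print(round(eta_sq, 3), round(omega_sq, 3))
```

Note that ω² comes out smaller than η², matching its description as the more conservative estimate.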

What is partial eta squared?

- proportion of residual variance accounted for by the effect


-residual variance = variance left over to be explained (ie. not accounted for by any other IV)

What are the limitations of partial η²?

- in factorial ANOVA, partial η² is inflated because its denominator (effect + error variance) is smaller than the total variance


- in factorial ANOVA, η² adds up to a maximum of 100% but partial η² can add up to >100%


-hard to make a meaningful comparison


-- Instead of dividing by total variability, we divide by the variability due to the effect itself plus error (all other effects, e.g. other main effects and interactions, are excluded). We therefore get a larger number for partial eta-squared, unless the other factors/effects in the model don't account for any variance

How do we follow up a main effect in ANOVA?

- protected t-test: used to conduct pairwise comparisons (i.e. 2 means at a time)


- linear contrasts

How do we follow up an interaction in ANOVA?

- if interaction is significant, must be followed up with a simple effects test


-- tests effects of one factor at each level of the other factor


-simple effects re-partition the main effect and interaction variance


--report F; if the factor has only 2 levels, no further comparison is needed

If a simple effects test is significant, what do we do next?

-follow up with simple comparisons (t-tests or linear contrasts)


-same procedure as simple effects but investigates cell means not marginal means

What are the issues associated with follow-up comparisons in ANOVA, and what are their (2) solutions?

- redundancy: explaining the same mean difference more than once


-- solution: orthogonal (independent) linear contrasts


- increases family-wise error rate


--solution: Bonferroni adjustment to the critical alpha, and/or conduct contrasts a priori rather than as an orthogonal set
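A two-line illustration of why the adjustment matters (k = 3 comparisons is a hypothetical choice):

```python
# Family-wise error inflation and the Bonferroni correction.
alpha, k = 0.05, 3
# Chance of at least one Type I error if each of k tests runs at alpha:
familywise = 1 - (1 - alpha) ** k
# Bonferroni: divide alpha by the number of comparisons.
bonferroni_alpha = alpha / k
print(round(familywise, 3), round(bonferroni_alpha, 4))  # 0.143 0.0167
```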

What determines a higher order factorial design

More than two IVs

What are the different effects of a higher order factorial design

- Main effects: possible main effects for each IV


- Two way interactions: one factor changes depending on the level of the other factor averaging over levels of a 3rd factor (differences in simple effects)


- Three way interactions: two way interaction between two factors changes depending on the level of the 3rd factor (does the two way look the same at each level of the third factor?)

How is the variance partitioned in a 3 way ANOVA

Main effects


-Variance: factor A (α)


- Variance: factor B (β)


- Variance: factor C (γ)


2-way interactions


- Variance: A x B (αβ)


- Variance: B x C (βγ)


- Variance: A x C (αγ)


3-way interaction


- Variance: A x B x C (αβγ)


Error/residual


-Variance due to ε

In factorial designs how do you follow up significant omnibus effects

1. Interpret main effect


2. If >2 levels, main effect comparison (t-tests or linear contrasts)


3. Interpret 2-way interaction


4. Simple effects test (F test)


5. Significant simple effects for factor >2 levels, follow up with simple comparisons

How do you follow up 3-way interactions

Simple interaction effects


-Breaks down into series of 2-way interactions at each level of 3rd factor


- allows to follow up only sig. ones


Simple simple effects


- Follow up used after omnibus 2-way interaction


- Examine effect of factor A at each level of factor B, at each level of factor C


-Use MSerror from the omnibus ANOVA table as the error term (unlike one-way)


Simple simple comparisons


-Compute contrasts for each level of a third factor

What is type I error

Finding a significant effect where one doesn't exist in the population

What is type II error

Finding no significant effect where there is one in the population



- Hypothesis testing pays little attention to type II error


- Concept of power shifts focus to type II error

What is the technical and useful definition of power

Technical: the probability of correctly rejecting a false H0



Useful: degree to which we can detect treatments effects (Including main/simple effects and interactions) when they exist in the population

Why should we care about power

- Power analyses put the emphasis on researchers' ability to find effects that exist, rather than the likelihood of incorrectly finding effects that don't exist


- Report post hoc power of significant effect


- Calculate predicted power (a priori) when no significant effect is found but mean difference is thought to exist in population

What factors affect power?

Significance level (alpha)


- a relaxed alpha means more power


Sample size (N)


- More N means more power


Mean differences (μ0 - μ1)


- Larger differences mean more power


Error variance (or MSerror)


- Less error variance means more power


Power as a function of effect size (d)


- d indicates how many SDs the means are apart and thus the overlap of the two distributions


- Power and d are closely related


What level of power do we aim to achieve?

.8 is the minimum/optimum: an 80% chance that a significant effect is found when the effect exists
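How alpha, n and d combine into power can be sketched with a normal-approximation calculation (illustrative only; an exact t-based calculation would differ slightly, and the d and n values are hypothetical):

```python
from statistics import NormalDist

# Normal-approximation sketch of power for a two-sample comparison.
def approx_power(d, n_per_group, alpha=0.05):
    # Two-tailed critical value under H0.
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)
    # How far the H1 distribution sits from H0, in SE units.
    ncp = d * (n_per_group / 2) ** 0.5
    # Probability of exceeding the criterion when H1 is true.
    return 1 - NormalDist().cdf(z_crit - ncp)

# Medium effect (d = .5) needs roughly 64 per group to reach ~.80 power.
print(round(approx_power(0.5, 64), 2))
```

Playing with the arguments reproduces the card above: raising alpha, n or d, or shrinking error variance (which raises d), all push power up.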

What are the 3 issues with power

Effect must exist for you to find it


- Increasing power can help detect very small effects


- but if your theory and predictions (re: mean differences) do not reflect the population, H0 will be right and H1 wrong



Large samples can be bewitching


- Large samples can detect very small effects that may be unstable or unimportant


- Exclusive focus on significance test may lead to overestimation of importance of small effect



Error variance is also important


- Higher error variance means that a large effect may still turn out to be ns. Don't just focus on the sample size

How do you maximise power (4)

Increase Sample size



Increase Alpha level (increases type I)



Study Larger effects



Decrease Error variance


-Aim to reduce variation in DV scores from sources other than your IV


1) improve operationalisation of variables: increases validity


2) improve measurement of variables: increases (internal) reliability


3) improve design of your study: account for variance from other sources (eg. Blocking designs)


4) improve methods of analysis: control for variance from other sources (eg. ANCOVA)

In blocking: how does adding a second factor increase power by reducing error variance?

-Error/residual variance represents all variance left over after accounting for systematic variance accounted for by the IV


- Error variance could reflect chance or systematic unmeasured influences from other factors


- Adding 2nd factor (THAT IS CORRELATED WITH DV) may account for some of the left over variance


- Reducing error variance this way will increase power

Why would we use a blocking design?

- Always want to explain variance in the DV with a novel IV


- Often variance in the DV can also be explained by additional factors which are less novel, known as control or concomitant variables


- Blocking introduces control variables into your design: reflect additional sources of variation or pre- existing differences on the DV scores

What are the two steps in setting up blocking designs?

1) homogeneous blocks created with levels of blocking factor: Ps are matched with the levels of the blocking factor


2) participants within each block then randomly assigned to the levels of the IV (stratified random assignment)

Are blocking designs randomised?

No, Ps categorised into levels of the blocking factor

What happens if a blocking variable does not reduce error term?

It either is not associated with the DV, or it explains the same variance as the focal IV

What role does blocking play in controls and confounds?

A main effect of blocking = a sign of good control variable


- shows systematic variability due to blocking factor, which has been removed from error variance


-increases power of test for focal IV (as long as it doesn't explain the same variance as the IV)


Blocking factor x IV interaction = sign of confound


- increases power to detect focal IV main effects because systematic variance due to the interaction is removed from error


-but that positive outcome is outweighed by the negative outcome: an interaction means that the effect of the focal IV changes depending on the blocking factor


-significant block x treatment interaction shows failure of the treatment IV effect to generalise across levels of the blocking variable

Error term for within participants ANOVA

Conceptually: inconsistencies in the effect being tested across participants


Mixed anova error term for main effects

Mixed anova error term for interactions and follow up tests

What are the error terms for between participants design?

What are the advantages of blocking (3)

-may equate treatment groups better than a completely randomised design (assuming there is an equal N for levels of blocking)


- increased power due to lower error term


-can check interactions of treatments and blocks

Disadvantages of blocking

-practical costs of introducing blocking factor


-loss of power if the blocking variable is poorly correlated with the DV, because of lower df error


-blocking factor treated as having discrete levels


-artificial grouping may be necessary: could lose some info

For ANOVA, what represents the DV?

X

For regression, what represents the DV?

Y

What is covariance?

average cross=product of deviation scores


- scale dependent, therefore not often used

What is standardised covariance?

Pearson's r


- expresses relationship between two variables in terms of SDs

What does r2 allow us to do?

Generalise effect size


- proportion of variance in one variable that is explained by the variance of the other


-whatever isn't explained by the IV is residual


-it is a sample statistic and is biased by the n in the sample (bias increases as n decreases)


-- r as a population estimate = radj: calculate rho (ρ), the population correlation coefficient. The difference between r and radj is lower when n is higher

What are correlation and covariance measures of?

association

What does bivariate regression do?

estimates score on one variable (Y, criterion) on the basis of scores on another variable (X, predictor)




-regression of Y on X


- objective: find best fitting line on scatter

What can't you do in regression

infer causality


-no random assignment


-can infer based on theory, but won't know what causes what or whether a 3rd variable causes both

What is the unstandardised regression equation?

Yhat = bX + a




Yhat = predicted value of Y(DV)


b = slope of regression line (change in Y associated with one unit change in X)


x= value predictor (IV)


a = intercept (value of Y when X = 0)

What is the standardised regression equation?

Zhat_Y = β·Z_X = r_XY·Z_X




how many SD changes in Y you would expect from a 1 SD change in X
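Both equations can be fitted by hand on a small hypothetical data set, which also demonstrates the bivariate identity β = r:

```python
# Least-squares fit of Yhat = bX + a, plus the standardised slope.
x = [2, 4, 6, 8, 10]   # hypothetical predictor scores
y = [1, 3, 2, 5, 4]    # hypothetical criterion scores
n = len(x)
mx, my = sum(x) / n, sum(y) / n
sxx = sum((a - mx) ** 2 for a in x)
syy = sum((c - my) ** 2 for c in y)
sxy = sum((a - mx) * (c - my) for a, c in zip(x, y))

b = sxy / sxx                   # unstandardised slope: unit change in Y per unit X
a = my - b * mx                 # intercept: predicted Y when X = 0
beta = b * (sxx / syy) ** 0.5   # standardised slope; equals r in bivariate regression
print(round(b, 2), round(a, 2), round(beta, 2))  # 0.4 0.6 0.8
```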

In regression, what is the best predictor of Y when X is unknown?

Best predictor of Y is Ybar


-error calculated around the mean

In regression, what is the best predictor of Y when X is known?

- Yhat is the best predictor of Y


-Error calculated around the regression line


-Sy.x = standard error of the estimate

What is the standard error of the estimate?

- Sy.x reflects the amount of variability around the regression slope (Yhat= conditional value), and is an important statistic in correlation and regression


-regression line is fitted according to the least squares criterion so that error of prediction are a minimum

What factors influence the standard error of the estimate?

-increased correlation between IV and DV (rxy) decreases the standard error of the estimate


-small sample sizes lead to underestimation of the standard error

How blocking can help

- add a 2nd IV that is known to explain additional variance, so the design is now a 2-way factorial


-increases chance that focal IV has significant effect


- covariate in blocking: control variable

What is the difference between variance and covariance?

Variance: tendency for scores to vary around the group mean


Covariance: tendency for scores to vary together

What does ANCOVA analyse

Covariance

How is ANCOVA similar and different to blocking?

-same goal as blocking but it adjusts the error term


-covariate like control variable in blocking but continuous and in ANOVA its used to remove error from error term and treatment effect


-treatment means are adjusted to account for differences on the covariate


-random assignment usually prevents differences in covariate means, but in this case the covariate does differ across groups


-ANCOVA partials out the effects of covariance from the focal IV as well as the error term

In ANCOVA, how does the covariate reduce error term?

If it is related to DV, if not you lose DF (power) without any lowered error

ANCOVA model

Xij = mu+ aj + BZij + Eij




X= DV


i= Ps


j= group


mu= grand mean


a= 1st IV


BZij= 2nd IV: score on variable Z multiplied by a fixed (B) weight


Eij = error





What are some assumptions and commonalities in ANCOVA

- score on DV goes up or down depending on score on Z


- NO interaction between categorical IV and covariate


- in this case B = coefficient (strength/relationship) for control variable and DV

How does ANCOVA reduce error variance?

If covariate is associated with DV


- this relationship accounts for systematic variance unexplained by the focal IV, therefore lowering error variance


-a smaller error term because we've partitioned out variance due to the covariate, therefore increasing statistical power in testing the effect of the focal IV

Why does ANCOVA adjust treatment means

- if focal IV affects DV there is a significant difference between the levels of the IV


- if the covariate also differs between levels of the focal IV, which variable explains the difference in DV treatment means? Therefore a confound


- WE CARE ABOUT EFFECTS OF THE FOCAL IV NOT THE EFFECT OF THE COVARIATE

How does ANCOVA adjust treatment means on DV? (5)

1. Calculate overall covariate sample mean


2. Assume this is the population mean


3. Then assume that in an unconfounded population, all groups of the focal IV have this covariate mean


4. Therefore, for your sample, if a group's mean on the covariate differs from the overall covariate mean, that is a confound


5. Adjust the group's expected mean on the DV to be what it would be if the group's covariate mean were the overall covariate mean, using the regression line

Formal assumptions of ANCOVA

-Homogenous variance


-normally distributed


-independence of error


-relationship between covariate and DV is linear (if not, power is degraded)


-relationship between covariate and DV is linear within each group


- relationship between DV and covariate is equal across treatment groups (homogeneity of regression slopes)

When is ANCOVA best to use?

When there is no reason to expect associations between the focal IV and the covariate, and it is okay to remove that variability if differences are found

What makes multiple regression different to bivariate regression?

-multiple predictors


- multiple correlation (R)


-- relation between criterion Y and set of predictors


-multiple regression


-- scores on criterion Y are predicted using >1 predictor

What are the two tests in multiple regression?

1) Strength of the overall relationship between criterion and set of predictors: R2 (F test)




2) Importance of individual predictors: b, B, sr (t test)


-predictors usually correlated so their contribution overlaps; this has implications for both tests

What does MR assume about interactions?

There are none

What coefficients exist for MR with uncorrelated predictors?

two coefficients of determination, one for each variable

MR with correlated predictors

- predictors share overlapping variance with each other as well as with the DV


-we can't simply add up the coefficients of determination, as we would double-count shared variance; the estimate must be adjusted


-conceptual distinction between the overall model R2 and the individual variables' contributions: don't conflate model and variable effects


-R2 measures the non-redundant variance in DV accounted for from the combination of variables


-Need to think about correlation between each IV and the DV adjusted to control for the effects of the other IVs. 2 options: partial correlation or semi-partial correlation

What is the partial correlation?

- examines relationship between predictor 1 and criterion, with the variance shared with predictor 2 partialled out of DV and IV


-Effect size measure: pr2= the proportion of residual variance in the criterion uniquely accounted for by the predictor 1

What is the semi-partial correlation?

- examines the relationship between predictor 1 and the criterion, with the variance shared between predictor 1 and predictor 2 partialled out of predictor 1 only (not the criterion)


-very useful in regression because it provides the best effect size measure:


spr2= the proportion of total variance in the criterion UNIQUELY accounted for by predictor 1

Difference between structure of ANOVA and MR

ANOVA


- no test of the overall model


-tests main effects of each IV: other IVs assumed to be uncorrelated


-Report Fs, effect size for each IV and interaction +follow up


-automatically tests all interactions




MR


-automatically tests overall model


-tests unique effect of each IV: IV correlation partialled out


-report model R2 with F + each B for IVs + follow up


-must ask for interactions

What is the linear model?

Criterion scores are predicted using the best linear combination of the predictors


-similar to 'line of best fit', now plane of best fit


-the equation is derived according to the least-squares criterion, so that squared deviations of the points from the plane are minimised (Σ(Y - Yhat)²)


-b1: slope of plane relative to the X1 axis


-b2: slope relative to the X2 axis


-a: point where the plane intersects the Y axis (when X1 and X2 both equal 0)

What is the principle of parsimony?

We want the simplest explanation of the data


-High parsimony is good because it highlights that predictors are:


> highly correlated with criterion


> lowly correlated with one another




Low parsimony is bad because it indicates that predictors are:


-highly correlated with each other and therefore redundant because they don't explain unique variance

What is the regression solution?

Distill all IVs down to Yhat predicted score and then take correlation between predicted score and DV = multiple correlation coefficient (R)


-Y is modelled with a linear composite, formed by multiplying each predictor by its regression weight/slope/coefficient (unstandardised), just like a linear contrast, and adding the constant


-there's always one parameter for each IV, plus a constant, in the attempt to predict the DV


-need to understand scales to interpret the Yhat

What is the multiple correlation coefficient ? (R)

a bivariate correlation between the criterion and the best linear combination of the predictors (Yhat)

What happens to the multiple correlation coefficient (R) at low sample sizes?

it will be inflated (like r) and therefore will need to be adjusted to Radj

What formula is used to test overall model R2 for significance?

F = (R2/p) / ((1-R2)/(N-p-1))




Variance accounted for/df


/ variance not accounted for (error)/df
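Plugging hypothetical values into the formula makes the structure visible (R² = .30, p = 3 predictors, N = 104 are all illustrative numbers):

```python
# F test for the overall model R^2:
# variance accounted for per df, over error variance per df.
r2, p, N = 0.30, 3, 104
f = (r2 / p) / ((1 - r2) / (N - p - 1))
print(round(f, 2))  # 14.29 on (3, 100) df
```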

How do we assess the importance of predictor in regression

-can't rely on rs (zero-order) because the predictors are interrelated


-- a predictor with a sig r may contribute nothing once other predictors are included


- partial regression coefficients (bs) are adjusted for correlation of the predictor with the other predictors BUT can't use relative magnitudes of bs because they are scale bound

What is the standardised regression coefficient (B)?

- estimate of relative contribution of predictors because they use the same metric


-can compare Bs within a regression equation


CANNOT COMPARE ACROSS GROUPS AND SETTINGS BECAUSE SDs OF VARIABLES MAY CHANGE ACROSS STUDIES

What happens to the standardised regression coefficient (B) when IVs are not correlated with each other?

B=r

What happens to the standardised regression coefficient (B) when IVs are correlated with each other?

B can go up or down depending on the pattern of the correlation among predictors

What is hierarchical regression?

Predictors are entered sequentially in a pre-specified order based on logic and or theory


-each predictor is evaluated in terms of what it adds to prediction at its point of entry (ie. independently of all other predictors in the model)


-Order of prediction based on logic or theory

In HMR, what is b based on?

- b at each block is based on unique contribution, controlling for other IVs in the current and earlier steps but not later ones

What is the order of entering predictors in HMR?

1. Partial out the effect of control variables: like ANCOVA, predictors at step 1 act as covariates


2. Build a sequential model according to theory


- order is crucial for outcome and interpretation


-broad measures in step 1 and more specific in step 2

What is r

Pearson/zero-order correlation


-standardised (scale free) covariance between two variables


-ignores correlations between IVs

What is r2

Coefficient of determination


- proportion of variability in one variable accounted for by another variable

What is b

Unstandardised slope/regression coefficient


-scale dependent slope of the regression line


- change in units of Y expected with 1 unit increase in X

What is B

Standardised slope/regression coefficient


-scale free slope of the regression line if all variable were standardised


-change in SDs in Y expected with a 1 SD increase in X, controlling for all other variables


- B= r in bivariate regression

What is spr2/sr2

Semi-partial correlation squared


-scale free measure of association between two variables controlling for other IVs by removing shared variance between IVs


-proportion of total variance in DV uniquely accounted for by IV


-similar to eta squared

What is pr2

Partial correlation squared


-scale free measure of association between two variables, controlling for other IVs by removing all shared variance


-proportion of residual variance in DV (after other IVs are controlled for) uniquely accounted for by IV

What are the two slope estimates?

b and B

There are two other estimates of association

Covariance and correlation (r)

The slope estimate b tells you what?

The predicted unit change in Y for every unit increase in X.

B tells you what?

The expected SD change in Y for every SD increase in X (IV)

You decide to change from 1-7 scales to 0-10 scales to measure the same variables. If you are using covariance as your estimate of the association and b as your estimate of the slope, will the estimates change or not?

Change

Again you change your scales. If you are using correlation as your estimate of the association and B as your slope estimate, will these estimates change or not?

Will not change

What are the three uses of HMR

-To account for control variables


-To test mediation: INDIRECT EFFECTS


-To test moderation: INTERACTIONS

What are the assumptions of HMR?

Distribution of residuals


-conditional Y values are normally distributed around the regression line


-homoscedasticity: variance of Y values is constant across different values of Yhat


-no linear relationship between Yhat and errors of prediction


-independence of error


Scales (predictor and criterion scores)


-normally distributed


- linear relationship between predictors and criterion


-predictors not highly correlated with one another


-continuous scale

What are multicollinearity and singularity?

Assumption of HMR


-Multicollinearity: IVs correlated


-Singularity: IVs measuring the same thing




-measured using tolerance = 1-R2x


-R2x is the overlap between a particular predictor and all other predictors


-low tolerance = multicollinearity --> singularity


-high tolerance = relatively independent predictors


- MULTICOLLINEARITY LEADS TO UNSTABLE CALCULATION OF REGRESSION COEFFICIENTS (b) EVEN IF R2 IS SIGNIFICANT
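With only two predictors, R²x for each predictor is simply the squared correlation between them, so the tolerance formula reduces to a one-liner (the r12 value is hypothetical):

```python
# Tolerance = 1 - R^2_x, where R^2_x is the overlap between one predictor
# and all the others. With two predictors, R^2_x = r12 squared.
r12 = 0.60                    # hypothetical correlation between the two IVs
tolerance = 1 - r12 ** 2      # 0.64: reasonably independent predictors
print(tolerance)
```

As r12 approaches 1 the tolerance approaches 0, which is the multicollinearity-to-singularity slide described above.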

What are interactions in Moderation

-relationship between a criterion and predictor varies as a function of a 2nd predictor


-2nd predictor known as moderator


-moderator enhances or attenuates the relationship between criterion and predictor

What does the Moderation model test

tests the direct effect of Z, plus the direct effect of X and the ZX interaction


-interactions are not about the shared variance between moderator and predictor

What degree of collinearity do we want in SMR? how does that contrast to interactions in regression?

Low collinearity is desired in SMR, but collinearity is irrelevant in tests of interactions

Steps in calculation of Moderation (4)

1) Calculate the interaction term based on mean-centred predictors, so it will not be as correlated with those predictors (low multicollinearity)


2) test of the interaction term


-1st block: enter centered IVs as predictors


-2nd block: enter interaction term to see if it accounts for additional variance (look for a sig R2 change)


3) test simple slopes - we examine the relationship between X and Y @ high and low values of Z (using +/- SDs)


4)Plot simple slopes on graph


- effect size : sr2


-report B


-simple slopes report the coefficient for the IV at high and low levels of the moderator, with the interaction also included
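Step 1 above (mean-centring before forming the product term) can be demonstrated directly: on the hypothetical scores below, the raw product X*Z tracks X almost perfectly, while the centred product barely correlates with X at all.

```python
# Why mean-centre before building an interaction term:
# compare corr(X, X*Z) for raw vs centred products.
def pearson(u, v):
    n = len(u)
    mu, mv = sum(u) / n, sum(v) / n
    num = sum((a - mu) * (b - mv) for a, b in zip(u, v))
    den = (sum((a - mu) ** 2 for a in u)
           * sum((b - mv) ** 2 for b in v)) ** 0.5
    return num / den

x = [3, 5, 6, 8, 9, 11, 12, 14]     # hypothetical predictor scores
z = [10, 9, 12, 11, 14, 13, 16, 15] # hypothetical moderator scores
raw = [a * b for a, b in zip(x, z)]
mx, mz = sum(x) / len(x), sum(z) / len(z)
centred = [(a - mx) * (b - mz) for a, b in zip(x, z)]

# Raw product is nearly collinear with x; centred product is not.
print(round(pearson(x, raw), 2), round(pearson(x, centred), 2))
```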

In MMR, what would a B closer to zero indicate?

A lessened effect

In MMR, what is the DV regressed on

the key predictor

In MMR, what happens to main effects when we find an interaction?

they're trumped basically


-no longer a useful or insightful way of interpreting the data

What is mediated multiple regression?

The mediator EXPLAINS the relationship between the IV and DV




Key word often : because

What are the relationships between Iv, DV and mediator in mediated multiple regression

The IV is related to (causes) mediator, and related to DV




Mediator related to DV when effect of IV is controlled for (C.B)




IV no longer related to DV when effect of mediator is controlled for (B.C)

Analysis steps in mediated multiple regression (4)

1) SMR: regress mediator on IV. Report: R2 & B


2) HMR: Block 1 - predict DV from IV


3) HMR: Block 2 - predict DV from IV + mediator


-if the coefficient for the mediator is significant, then condition 3 is met


-the coefficient for the IV also matters: if it is no longer significant = full mediation, but if it is still significant = partial mediation


4) Test the significance of the indirect effect: SOBEL / BOOTSTRAPPING


-if significant, there is an indirect effect of the IV via the mediator on the DV
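Step 4's Sobel test is a small calculation once the two path coefficients and their standard errors are in hand (all four numbers below are hypothetical):

```python
# Sobel test for the indirect effect a*b:
# a = IV -> mediator path, b = mediator -> DV path (controlling for IV).
a, se_a = 0.50, 0.10
b, se_b = 0.40, 0.08
sobel_z = (a * b) / ((b ** 2 * se_a ** 2) + (a ** 2 * se_b ** 2)) ** 0.5
print(round(sobel_z, 2))  # |z| > 1.96 -> significant indirect effect
```

Bootstrapping is generally preferred in practice because the Sobel test assumes the a*b product is normally distributed, which it often is not in small samples.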

What does Z do to the relationship between X and Y in Moderation?

Z just adjusts it



What is expected of collinearities in Moderation?

Low collinearities desired and high collinearities suggest indirect effects

In moderation what is the expected relationship between the moderator and IV?

Moderator often uncorrelated with IV

In mediation, what is the relationship between the mediator and IV

They are associated - the IV is assumed to predict (cause) the mediator

Structural model of one way between ps anova

to find effect of being in a particular condition, we subtract the grand mean from condition mean

Structural model of one way repeated measures anova

score in a condition minus the personal average for each Ps

In between ps designs, what is within-cell variability assumed to be

residual error

Between participants term used for follow ups

In between-Ps designs, MS error is the term used to test any effect, including simple comparisons

Within participants error term

Within Ps, we partition out and ignore the main effect of participants and compute an error term estimating inconsistency as Ps change over the WS treatments

Why are separate error terms used in following up main effects of treatment in ANOVA

- we expect inconsistencies in treatment effect x participant, so in simple comparisons use only data for the conditions involved in the comparison & calculate SEPARATE ERROR TERMS EACH TIME

What are the error terms used for two-way repeated measures

Main effect of A : error term is MSAxP


Main effect of B: error term is MSBxP


Interaction AxB: error term is MSABxP




-Each effect tested has a separate error term


-this error term corresponds to an interaction between the effect due to Ps and the treatment effect


-within-treatment variability not considered at all

Following up main effects

-A separate error term must be calculated for each comparison undertaken


-Simple effects


-- the interaction between A treatment and participants at B1


-Simple comparisons


--interaction between A treatment (only data contributing to the comparison) and participants at B1

What are the two approaches to within-participants designs?

Mixed model approach


Multivariate (MANOVA) approach

What is the mixed model approach of within-ps ANOVA

-treatment is a fixed factor and participants are random factor


--fixed factor: choose levels of IV


--random factor: levels of IV chosen at random from a larger pool of levels of that IV


- this can be powerful but restrictive assumptions come with it

What are the assumptions of the mixed model approach?

-sample randomly drawn from population


- DV scores normally distributed in population


- compound symmetry


--homogeneity of variance in levels of repeated measures factor


--homogeneity of covariances (equal correlation/covariance) between pairs of levels


-variance-covariance matrix: compound symmetry = covariances roughly equal

What are the methods of dealing with violations of the mixed model approach

- Mauchly's test of sphericity


- Epsilon adjustment


--Lower bound


--Greenhouse-Geisser


--Huynh-Feldt

What is Mauchly's test of sphericity

- it examines overall structure of covariance matrix, determines whether covariances and variances are roughly equal


- Evaluated as X2 and if significant there is a violation


- not robust: it can fail to find a violation even when one is present in the data

When does sphericity not matter

When the repeated measures IV has only 2 levels

When does sphericity matter

- in within-ps with 3+ levels


-when sphericity assumption is violated, F ratios are positively biased


--critical values of F are too small therefore type 1 error prob increases

What is the epsilon adjustment?

- value by which degrees of freedom for F-test is multiplied


-equal to 1 when assumption is met and <1 when violated


-the lower the value (further from 1), the more conservative the test


-Closer to 0 = problem
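The adjustment can be illustrated numerically: multiplying both df by epsilon raises the critical F, making the test more conservative. A sketch (the epsilon value 0.70 is hypothetical, and `scipy` is assumed to be available):

```python
from scipy.stats import f

j, n = 4, 10                           # 4 repeated measures levels, 10 Ps
df1, df2 = j - 1, (n - 1) * (j - 1)    # treatment and error df

alpha = 0.05
f_crit = f.ppf(1 - alpha, df1, df2)    # unadjusted critical F

# Hypothetical GG-style epsilon; both df are multiplied by it
eps = 0.70
f_crit_adj = f.ppf(1 - alpha, eps * df1, eps * df2)

# Lower-bound epsilon: 1/(j-1), the worst-case violation
eps_lb = 1 / (j - 1)
f_crit_lb = f.ppf(1 - alpha, eps_lb * df1, eps_lb * df2)
print(f_crit, f_crit_adj, f_crit_lb)
```

The lower-bound adjustment gives the largest critical F (most conservative), the GG-style value sits in between, and epsilon = 1 leaves the test unchanged.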

What does the lower-bound Epsilon adjustment do?

-act as if we only have 2 treatment levels with maximal heterogeneity


-used for conditions of maximal heterogeneity or worst case violation of sphericity


-often too conservative


-type II error, not enough power

What does the Greenhouse-Geisser Epsilon adjustment do?

-size of epsilon depends on the degree to which sphericity is violated - each calculated F is compared to a different critical F


-it estimates epsilon from the data, so it is often the preferred method to use
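A minimal sketch of how a Greenhouse-Geisser (Box) epsilon can be computed from a sample variance-covariance matrix; under compound symmetry it comes out as exactly 1, and violations push it below 1 (the numbers in `S_cs` are hypothetical):

```python
import numpy as np

def gg_epsilon(S):
    """Greenhouse-Geisser (Box) epsilon from a k x k variance-covariance matrix."""
    k = S.shape[0]
    H = np.eye(k) - np.ones((k, k)) / k   # centering matrix
    M = H @ S @ H                          # double-centered covariance matrix
    return np.trace(M) ** 2 / ((k - 1) * np.sum(M ** 2))

# Compound symmetry: equal variances (1.0) and equal covariances (0.3)
S_cs = np.full((4, 4), 0.3) + np.eye(4) * 0.7
eps_cs = gg_epsilon(S_cs)

# Unequal variances violate sphericity, so epsilon drops below 1
eps_bad = gg_epsilon(np.diag([1.0, 2.0, 3.0, 4.0]))
print(eps_cs, eps_bad)
```

Epsilon is bounded between 1/(k-1) (the lower-bound adjustment) and 1 (assumption met).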

What does the Huynh-Feldt Epsilon adjustment do?

-an adjustment to GG, increased to make it less conservative


-often results in epsilon exceeding 1


-used when the value of epsilon is believed to be >= .75

What is the multivariate approach to within-ps design



Multivariate analysis of variance (MANOVA) on a linear composite


-creates linear composite of multiple DVs


-MANOVA treats the repeated measures variable as multiple DVs, combined/weighted to maximise the difference between levels of other variables


--multivariate tests: Pillai's Trace, Wilks' Lambda, Hotelling's Trace and Roy's Largest Root


-does not require the restrictive assumptions (sphericity) that the mixed model approach does

What are the advantages of within design

-more efficient: n Ps in j treatments generate nj data points


- more sensitive: estimate individual differences (SS participants) and remove them from the error term

What are the disadvantages of within design

-restrictive statistical assumptions


- sequencing effects


--learning: improving due to practice


--fatigue: deterioration


--habituation: insensitivity to later manipulations


--sensitisation: more responsive


--contrast: previous treatment sets standard


--adaptation: adjustment to previous condition


--direct carry-over: learn something and carry it over


-Need to counterbalance


--but can still get treatment x order interactions and this increases error

In within ps design what is the error term used for any effect?

- error term used for any effect is equal to the interaction between that effect and the effect of participants (a random factor)


-this applies to:


--main effects and follow up main comparisons


--interactions: simple effects and follow-up simple comparisons




Due to the issue of compound symmetry (sphericity), adjust df where needed

What is a mixed anova?

-also known as split plot


- has a within ps and between ps factor

In mixed ANOVA what happens with the within-Ps factor?

- the mixed model WP part is done in the normal way, as WP ANOVA: evaluate sphericity and report an adjusted F such as GG

Why use a mixed ANOVA?

- WP anova is great for power, but some variables can be difficult to manipulate in WP


- can also manipulate a variable BP to exclude potential carry-over effects (because BP observations are independent)

What are assumptions of Mixed ANOVA

-DV is normally distributed


-Between Ps terms:


-- homogeneity of variance


- within Ps terms:


--homogeneity of variance: assume WPFxP interactions constant at all levels of between Ps factor


--variance-covariance matrix same @ all levels


--pooled variance-covariance matrix exhibits compound symmetry (c.f. sphericity)


-- usual epsilon adjustments apply when within ps assumptions are violated

In a split-plot design, what is the block factor?

a within-participant factor

In a split-plot design, what is the group factor?

a between-participants factor

In a split-plot design, if one of the IVs is a WP factor what do we include in the partitioning of variance?

Random factor participants

The participants factor is said to be ? under levels of the ? factor group

The participants factor is said to be nested within each level of the BP factor group (each ps is tested in one group only)

In a split-plot design, what is variability in within groups considered?

Error for BPF effect

In a split-plot design, inconsistencies in the block effect across participants =

error for WP effect

In a split-plot design, inconsistencies in block effect across ps (interaction) =

error for WPFxBPF interaction

In a mixed ANOVA, the error for the BP main effect is..

participants within groups (deviations of means for each p from group mean)

In a mixed ANOVA, the error for the WP main effect & interaction is..

WPFxPwithinG (inconsistencies in effect of the WSF across Ps, adjusted for group differences)

In a mixed ANOVA, how do you follow up a main effect in BP

-same as one way BP


-use the original error term from the test of the BP main effect


-MSpsWithinG

In a mixed ANOVA, how do you follow up a main effect in WP

- separate error term for every follow-up test


-MS BcompxPwithinG

What to look for in a significant main effect of block (example)

- comparisons between different groups don't really tell us if any learning has occurred; we need to see that Ps are improving towards the end of the study


-could test linear contrasts


-need error for each comparison based on only data involved in that comparison


-could also examine simple effects of group for each of the four blocks

What are the two approaches for simple effects in BPF

1) use a separate error term for each simple effect (recommended): run 4 one-way BP ANOVAs to compare groups at each of the four blocks. Then use MSPsWithinG @ B1 and again at B2 etc


2) A special pooled error term

What does a special pooled error term method involve

-MSPswithinCell


-this is an estimate of the average error variance within cells


-keeps the same power and distributes error


-SSwithinCell = SSPWithinG + SSBxPWithinG


-MSPswithinCell= SS withincell/ (dfPsWithinG+dfBxPsWithinG)


-it may be okay to pool because Ps effects are independent
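The pooling arithmetic can be sketched with made-up numbers (all SS and df values below are hypothetical, consistent with e.g. g = 2 groups, n = 10 Ps per group, b = 4 blocks):

```python
# Hypothetical sums of squares and df from a mixed ANOVA (illustrative numbers)
ss_ps_within_g = 120.0     # SS for Ps within groups
ss_bxps_within_g = 90.0    # SS for block x Ps within groups
df_ps_within_g = 18        # e.g. g*(n-1) = 2*9
df_bxps_within_g = 54      # e.g. (b-1)*(n-1)*g = 3*9*2

# Pooled error term: MS Ps-within-cell
ss_within_cell = ss_ps_within_g + ss_bxps_within_g
ms_ps_within_cell = ss_within_cell / (df_ps_within_g + df_bxps_within_g)
print(ms_ps_within_cell)   # 210 / 72
```

Pooling buys a larger error df (here 72 instead of 18 or 54), which is where the extra power comes from.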

Simple effects of BPF: similarities and differences for two methods

in both cases the SS for simple effects are derived just as we have seen in BP ANOVA




The separate error term method is a little quicker but df is compromised (power)

Simple effects in WPF

To conduct simple effects of block (for each group), we always run a one-way WP ANOVA on block separately (each has its own error)

Methods of simple comparisons

Can do linear contrasts


But whatever test was used for simple effects is to be used here too

Error term for WP effects is the effect being examined in ? with ? Ps WG

Error term for WP effects is the effect being examined in interaction with the random factor Ps WG

error term for BP main effects is

PS within groups

In mixed anova there are two error terms for omnibus tests

-BP main effect has own error term: PsWithinG


--deviations of the averages of Ps from their group mean are treated as error


-WP main effect and interaction: block x PwithinG