56 Cards in this Set

Variance


SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
With experiments we manipulate the IV.
The idea is to introduce variance, experimental variance.

(Random assignment equates the groups.
Manipulation of the IV disrupts this equality, causing variation between the groups.)
Extraneous variance hurts us:
It threatens internal validity by creating possible alternative explanations.
Sources and Forms of Variance

Systematic Between Groups Variance

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Systematic, or PLANNED variance
This is variance between the groups caused by manipulation of the IV.

We are looking for significant differences in the variance between groups. (DUE TO EXPERIMENTAL VARIANCE OR EXTRANEOUS VARIANCE)
Systematic variance increases between group variance beyond the variability due to sampling error.
Two types of systematic variance

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Experimental - Variance caused by manipulation to the IV

Extraneous - from uncontrolled variables, confounds.

Stats can only tell us if significant differences exist.
Will not say if the difference is due more to experimental or extraneous variance.
Sampling Error

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Always a source of variance
Significant differences in group means indicate that variability is larger than would be expected due to chance.
(Add roughly .05 to the Type I error rate for every additional test.)
Sources and Forms of Variance

NONSystematic WITHIN groups Variance

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Also known as error variance.
This is due to random factors that affect participants differentially within the same group.
Systematic reflects variance among all subjects, across groups.

EVERYBODY GETS EVERYTHING in WITHIN group design
F Ratio


SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Differentiates systematic between-groups variance from error variance.

F = (Systematic Effects + Error Variance) / Error Variance

If there are no systematic effects in the between-groups variance,
then the only thing left is error variance.
Thus, the F ratio equals around 1.00.
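The formula above can be illustrated with assumed numbers (a toy sketch, not real data): when the systematic component is zero, only error remains in the numerator and F falls to 1.

```python
# Toy illustration with assumed numbers: F = (systematic effects + error) / error
def f_ratio(systematic, error):
    return (systematic + error) / error

f_ratio(0.0, 4.0)    # no systematic effect -> F = 1.0
f_ratio(12.0, 4.0)   # a real effect inflates the numerator -> F = 4.0
```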
In order to make causal inferences

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Need to prove that the variance was from IV manipulation and not from extraneous variance

The more extraneous and/or error variance you have the more difficult it is to show the effects of systematic experimental variance.
Controlling Variance in Research

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
We want to MAXIMIZE experimental variance, CONTROL extraneous variance, and MINIMIZE error variance
Maximizing Experimental Variance

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
We need to be sure our manipulation had its intended effects.
We want to be sure the IV really varied.

Use a manipulation check!
Manipulation check ensures that our manipulation created a difference, had its intended effect.
Controlling Extraneous Variance

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Two basic ideas here:

Experimental and control groups need to be as similar as possible at the outset.

Groups are treated exactly the same (except for the manipulation, of course).

Extraneous variance is nicely controlled by RATG (random assignment to groups)

Make the variables constant – make participants homogenous. (This may limit generalizability)

Build the confound into the study as another IV
Minimizing Error Variance


SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
A couple sources:

Measurement error – variations in the way participants respond, may come from unreliability of the instruments, for instance.

Individual differences – remember the snowflake idea?

Maintain carefully controlled study, controlled and reliable measurements.
Ex Post Facto Studies

Nonexperimental Approaches

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Observe present behavior and attempt to relate to prior experiences.
But, little confidence in validity due to lack of controls.
There is no manipulation.
Not creating systematic variance.

Example - Sexually abused individuals are often depressed, but sexual abuse does not necessarily lead to depression (the ex post facto fallacy)

Cannot control for possible confounds.
Hence alternative hypotheses cannot be ruled out.
Single Group Posttest Only Studies

Nonexperimental Approaches

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Somewhat higher level of constraint here because
There is manipulation of an IV.
But, there is no control group, no comparison.
There is also only one measurement taken after the manipulation.

Many Factors Left Uncontrolled Such as:
Placebo effect.
History.
Maturation.
Regression to the mean.
Single Group Pretest-Posttest Studies

Nonexperimental Approaches

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Here there is now a pretest taken prior to the manipulation.
We can now assess, or verify, that a real change occurred.
But, the same factors are still uncontrolled:
Placebo effect.
History.
Maturation.
Regression to the mean.
Pretest-Posttest Natural Control Group Studies

Nonexperimental Approaches

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Have added a no-treatment control group.
A group that does not receive the manipulation.
But, the control group is naturally occurring.

There is no RATG (the threat of differential selection):
The problem?
We cannot know whether the groups are equal at the outset.
We could test on some variables to determine equality.
But, we cannot possibly know all of the potentially confounding variables.
This is why RATG is so important.
It will equate the groups on all those unknown factors, the extraneous variables.
Randomized Posttest Only Control Group Design

Experimental Approaches

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Inclusion of a control group.
RATG.
Then a manipulation or treatment occurs.
A measurement is then taken.


Control group helps control against:
History.
Maturation.
Regression to the mean.
Placebo effect.
Randomized Pretest-Posttest Control Group Design

Experimental Approaches

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Essentially the same as the pretest-posttest natural control group design except with RATG.

We can see that the change in DV was caused by IV and ensure this was the case due to RATG

You could calculate a difference score (posttest – pretest measurement).
BUT if you compare changes as a function of time, then you've gone beyond a single-variable design.
This is why, at this level of single-variable independent groups, you must make the comparison either on the posttest or on the difference score.
(Look at the paradigm: what other IV could we have? Time.)
Thus, examination of the difference score or the posttest is the defining feature; otherwise, the design is factorial.
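A difference-score comparison can be sketched in a few lines (the scores below are hypothetical):

```python
# Hypothetical pretest/posttest scores for one group
pre  = [10, 12, 9, 11]
post = [14, 15, 12, 13]

# Difference score: posttest - pretest; comparing these between treatment
# and control groups keeps the design single-variable
diff = [b - a for a, b in zip(pre, post)]
```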
Multilevel Completely Randomized Between Subjects Design

Experimental Approaches

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
So far the designs have only had two levels of the IV.
Here, participants randomly assigned to three or more conditions.
So, we examine several different emotions, not just two.
Solomon’s Four Group Design

Experimental Approaches

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Pretest is a "double-edged sword"

the pretest may affect responses to the treatment or to the posttest.
Or, there could be some interaction involved.
Either way, the results may be confounded.
The pretest may sensitize the subjects, it may affect later responses.
The pretest may be a type of “pretreatment.”

The Solomon design is a combination of the randomized pretest posttest control group design and the posttest only control group design.

Group A: Pretest → Tx → Posttest
Group B: Pretest → Posttest
Group C: Tx → Posttest
Group D: Posttest

What comparisons to make?
Group A vs Group B on posttest measure to examine effects of treatment.
Group A vs Group C on posttest measure to examine effect of pretest condition.
Group D allows us to see how not giving a pretest or a manipulation can affect score.
Statistical Analyses

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Nominal use chi-square.
Ordinal use Mann-Whitney U.
Interval and ratio (score) use t-test or ANOVA.

Assumptions must also be met.
For ANOVA:
Normally distributed data.
Homogeneity of variance.

What if these assumptions are not met?
Use Mann-Whitney U.

May also use one of Winer’s (1971) transformations.
For instance, log transformation of the data
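The decision rule above can be written as a small lookup (test names only; this just restates the convention on the card):

```python
# Measurement scale -> conventional test, per the notes above
TEST_FOR_SCALE = {
    "nominal": "chi-square",
    "ordinal": "Mann-Whitney U",
    "interval": "t-test or ANOVA",
    "ratio": "t-test or ANOVA",
}

def pick_test(scale, assumptions_met=True):
    # For score data, fall back to Mann-Whitney U when the ANOVA
    # assumptions (normality, homogeneity of variance) are not met
    if scale in ("interval", "ratio") and not assumptions_met:
        return "Mann-Whitney U"
    return TEST_FOR_SCALE[scale]
```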
T-test

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Examines differences between two means.
As either primary analysis.
Or for follow-up analyses (multiple comparisons).
Analysis of Variance (ANOVA)

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
ANOVA can test for differences between a set of two or more means.
“One-way” ANOVA means that there is just one IV.
But, can have multiple levels.

What goes into the ANOVA?
Within groups variance.
(Measure of nonsystematic variation within a group, Error or chance variation, Average variability within the group.)
Between groups variance.
(Represents how variable the group means are.)
Sum of Squares.
Each source of variance has a sum of squares.
(the sum of squared deviations from the mean, how variances are calculated)
How ANOVA is calculated

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
The sum of squares for each source of variance is calculated.
Then, each sum of squares is divided by its degrees of freedom (df).
The result is the mean square (MS).
The MS for Between Groups is divided by the MS for Within Groups.
The result is the F ratio.
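The steps above can be sketched for a one-way design (data are hypothetical):

```python
import numpy as np

def oneway_f(groups):
    """Follow the steps above: SS -> df -> MS -> F for a one-way ANOVA."""
    groups = [np.asarray(g, dtype=float) for g in groups]
    scores = np.concatenate(groups)
    grand_mean = scores.mean()
    k, n = len(groups), len(scores)
    # Sum of squares for each source of variance
    ss_between = sum(len(g) * (g.mean() - grand_mean) ** 2 for g in groups)
    ss_within = sum(((g - g.mean()) ** 2).sum() for g in groups)
    # Each SS divided by its df gives the mean square (MS)
    ms_between = ss_between / (k - 1)
    ms_within = ss_within / (n - k)
    return ms_between / ms_within   # the F ratio
```

For the hypothetical groups [1, 2, 3] and [7, 8, 9], SS between = 54 with 1 df and SS within = 4 with 4 df, giving F = 54.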
Significance of F-Ratio

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
A large F ratio means that the experimental manipulation may have had an effect, but it could have come from extraneous variance!
The p value indicates the probability of obtaining an F that big if there were no systematic effects.
That is, the probability of finding that F given that the null hypothesis is true.

If significant, then we can conclude that at least one mean is different from at least one other mean.

All that tells us is that a significant difference exists somewhere. Use t-tests (multiple comparisons) to find out which means differ.
Two Classes of Multiple Comparisons

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
A PRIORI are when we specified at the outset which comparisons we were interested in making.
(We then make these comparisons after finding a significant F ratio.)

POST HOC are when we did not specify which comparisons were of interest before the study.
(We just go in and make a bunch of comparisons.)
Control for experiment wise error rate


SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
Must be mindful of the Type I error rate.
Need to control for experiment wise error rate.

With each comparison we add roughly .05.

Thus, our chances of making a Type I error increase.

Need to control for this though:
Tukey Honestly Significant Difference
Fisher's Least Significant Difference
Newman-Keuls
Scheffé
Bonferroni
Bonferroni

SINGLE VARIABLE - INDEPENDENT GROUPS DESIGN
.05 divided by # of Comparisons
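The "add .05 per comparison" rule is an approximation; for independent comparisons the familywise rate is 1 − (1 − α)^c. Bonferroni simply tests each comparison at α / c (the numbers below just use the standard α = .05):

```python
alpha = 0.05
comparisons = 10

# Chance of at least one Type I error across c independent comparisons
familywise = 1 - (1 - alpha) ** comparisons   # about .40, not .50

# Bonferroni correction: per-comparison criterion
per_test_alpha = alpha / comparisons          # .005 per comparison
```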
Within Subjects Designs

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
EVERYBODY GETS EVERYTHING

This design sometimes called a repeated measures design.

Scores in each condition are correlated with scores in other conditions.
Critical comparison is the difference between the conditions on the DV.

The problem is that the participants serve in all conditions.

Any difference found between the conditions might not be due to the manipulation but rather due to confounding effects of one condition on subsequent conditions.

This is called a sequence effect.
Analyzing Within Subjects Designs

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
Since the conditions are correlated, we use a modified ANOVA.
Called a repeated measures ANOVA.

Consider influence on F-ratio; since groups are equal from the start, error variance is decreased. This would lower the denominator, causing the F ratio to be higher.
Sequence effect

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
Sequence effects can be controlled by using counterbalancing procedures.
Varying the order of presentation of the conditions.
Types of Sequence Effects

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
Practice effects result from subjects gaining experience with the test, the procedure.

(Positive practice effect is when performance is enhanced. Negative practice effect is when performance decreases, diminishes with fatigue.)

Carryover effects are due to influence of a particular condition or a combination of conditions on later responses.
Controls for Practice Effects

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
Practice effects.
For positive practice effects, hold the variable constant. Train participants to the same level.
All participants are then equally as skilled and familiar.

For negative practice effects include a rest period.
Allow the fatigue to dissipate between conditions.

May also use EQUIVALENT, ALTERNATE FORMS
Controls for Carryover Effects

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
Best control is to vary order of presentation.
This is the only control for carryover effects.

Two methods to do this:
Randomization:
(Here the order of presentation of conditions is randomized. Each participant then receives a different, random order of presentation. Use a table of random numbers.)

Counterbalancing (Systematic arrangement of the order of conditions so that all possible orders or positions of conditions are represented an equal number of times.)
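The randomization method can be sketched in a few lines (condition names are hypothetical; the seed is only there so the sketch is reproducible):

```python
import random

conditions = ["anger", "sad", "happy", "fear"]   # hypothetical conditions
random.seed(1)  # for reproducibility of the sketch only

# Each participant receives their own random order of all conditions
orders = [random.sample(conditions, k=len(conditions)) for _ in range(5)]
```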
Counterbalancing

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
Two general types of counterbalancing:
Complete (Complete counterbalancing all possible orders of conditions occur the same number of times. Thus, with 3 different conditions you will need 3 x 2 x 1 = 6. Not advisable if you have more than 3 or 4 conditions, becomes unwieldy.)

Partial
(One method is to randomly select some of the sequences of conditions and then randomly assign participants to those.
Another method is to use Latin Square Design.)
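Complete counterbalancing enumerates every possible order, which is why it becomes unwieldy: the count is k! for k conditions.

```python
from itertools import permutations
from math import factorial

conditions = ["A", "B", "C"]                 # hypothetical labels
all_orders = list(permutations(conditions))  # 3 x 2 x 1 = 6 orders

# Growth is factorial: 6 conditions would already need 720 orders
orders_for_six = factorial(6)
```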
Latin Square Design

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
Used as a partial method of counterbalancing

Anger (1) Sad (2) Happy (3) Fear (4) Joy (5) Funny (6)

Use the following rule.

1, 2, N, 3, N – 1, 4, N – 2, 5, N – 3, 6, N – 4, 7, etc.
Thus, our first row would be Anger (1), Sad (2), Funny (6), Happy (3), Joy (5), Fear (4).
Generate the second row by adding 1 to each number of the first row, with 1 added to N equaling 1.
Thus, our second row would be Sad (2), Happy (3), Anger (1), Fear (4), Funny (6), Joy (5).
Generate the third row by adding 1 to each number of the second row (N + 1 = 1).
Thus, we get Happy (3), Fear (4), Sad (2), Joy (5), Anger (1), and Funny (6).
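The rule above can be implemented directly; with conditions numbered 1–6, the first three rows reproduce the example on this card (a sketch for n ≥ 2):

```python
def latin_square(n):
    """Build rows using the rule 1, 2, N, 3, N-1, 4, N-2, ... (n >= 2)."""
    row, lo, hi, take_hi = [1, 2], 3, n, True
    while len(row) < n:
        row.append(hi if take_hi else lo)
        if take_hi:
            hi -= 1
        else:
            lo += 1
        take_hi = not take_hi
    square = [row]
    for _ in range(n - 1):
        # Each later row adds 1 to the previous row, with N + 1 wrapping to 1
        square.append([x % n + 1 for x in square[-1]])
    return square
```

latin_square(6)[0] gives [1, 2, 6, 3, 5, 4], i.e. Anger, Sad, Funny, Happy, Joy, Fear, matching the first row above; each condition appears once in every row and every column.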
Strengths and weaknesses of Within subjects designs

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
STRENGTHS

No group differences due to sampling error

Within-subjects designs reduce error variance (WHICH WE WANT TO MINIMIZE)

Fewer participants are needed
WEAKNESSES

Sequence Effects!
Matched Subjects Designs

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
Different participants in each condition.
But they are closely matched before being assigned to the conditions (Thus, groups are correlated)

Each participant exposed to only one level of the IV
Match on the most important variables that are strongly related to performance on the DV.

With greater variability there is an increased chance of extraneous variance affecting the study.
When to use Matched Subjects Design

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
When we want the advantages of correlated groups design but cannot use a within subjects design.

e.g. the Wisconsin Card Sorting Test.
Hard to find perfect matches, especially with multiple conditions

To help you may consider using ranges of scores instead of exact scores.
Strengths and Weaknesses of Matched Subjects Designs

SINGLE VARIABLE CORRELATED GROUPS AND SINGLE SUBJECT DESIGNS
STRENGTHS

With greater sensitivity, a smaller number of subjects can be used.
Also, unlike within subjects designs, no problems with carryover and practice effects.
WEAKNESSES

Matched designs are a lot of work.
(Takes time to match people.)

You have to know and then decide which variables to match on.
(You had better be well read.
And then you need to accurately measure those variables.)

There can be much data loss.
(You may not find suitable matches for many subjects.
Thus, you may initially need a very large sample.)
Basics of Factorial Designs

FACTORIAL DESIGNS
Used when you're interested in how two variables combine to affect behavior (interactions)

If we were not interested in how two variables combine to affect behavior then would just use a single variable design.

The IVs in factorial designs are called factors.
Interactions

FACTORIAL DESIGNS
One factor (Time) has a different effect on the DV depending on the level of the other factor (Hand).

We only see effects when both variables are combined.

(i.e. time/hand interaction on levels of creativity)
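A crossover interaction can be sketched with hypothetical cell means: neither factor shows a main effect, yet the cells clearly differ, so the effect only appears when the variables are combined.

```python
import numpy as np

# Hypothetical cell means for a 2 (Time) x 2 (Hand) design on creativity
#                  left  right
cells = np.array([[3.0, 5.0],    # pre
                  [5.0, 3.0]])   # post

time_means = cells.mean(axis=1)  # collapse over Hand: [4, 4] -> no main effect
hand_means = cells.mean(axis=0)  # collapse over Time: [4, 4] -> no main effect
```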
Design Notations

FACTORIAL DESIGNS
Shows how many IVs there are and how many levels each IV has.

Thus 2 x 2 indicates:
Two IVs.
Two levels to each IV.
(such as in time and hand, (Pre and post time, right and left hand))

A 2 x 3 x 3 indicates:
Three IVs.
One with two levels, one with three levels, and another with three levels.
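The notation also makes the size of the design easy to compute: the number of cells is the product of the levels.

```python
from math import prod

# Number of cells = product of each factor's levels
prod([2, 2])      # a 2 x 2 design has 4 cells
prod([2, 3, 3])   # a 2 x 3 x 3 design has 18 cells
```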
Main Effects

FACTORIAL DESIGNS
The effect of each IV on the DV

The effect of Time on the DV (creativity)
The effect of Hand on the DV

OR


The effect of one IV irrespective of the other IV
Alternative ways to analyze multiple factors

FACTORIAL DESIGNS
Could have examined the main effects by just doing two studies.
But then we could not assess whether an interaction exists.


Two one-way ANOVAs.
But then we have the problem of experiment-wise error rate.
Increased chance of Type I error (rejecting the null hypothesis when it's actually true).
Main Effects are Boooooooring, Why?

FACTORIAL DESIGNS
Often, the results from main effects don't really tell us anything.

Can't look at data inside the cells unless a significant interaction exists.

Main effects average across the cells, so real cell differences can even out.
Hypothesis Testing

FACTORIAL DESIGNS
Essentially the same as with single-variable designs. Just more hypotheses to evaluate.

But, since so many different hypotheses, there are more chances for confounding to occur.

Factorial designs are complex.
So too may be the threats to validity.
We must rule out alternative explanations for each factor involved.
Each factor must have controls associated with it.

Random assignment for each cell is good protection.
Analysis of Variance in Factorial Designs

FACTORIAL DESIGNS
ANOVA

Results of an ANOVA presented in a summary table.
Summary table lists the sources of variance.
But with factorial designs there are more sources of variance.
ANOVAs all do the following:
Compare the variability between the groups to the variability within the groups.
Steps of analysis and interpretation

FACTORIAL DESIGNS
When using factorial designs we interpret the results for both the main effects and the interaction.
We evaluate whether they meet our .05 criterion.
BUT, always begin with interpretation of the interaction.
Only with a significant interaction do we have permission to evaluate differences between the cell means.
Analysis and interpretation of multiple comparisons

FACTORIAL DESIGNS
Same issues as with single independent variables designs.
But here, there may be many, many more comparisons to make.
With more comparisons come increased risk of Type I error.

We can use the same correction for experiment wise error rate:
(ie Bonferroni)
.05/# of comparisons

If we are making 20 individual comparisons, then .05/20 = .0025.
What does it mean to "stick within the level of our findings"?

FACTORIAL DESIGNS
We MUST stick within the level of our findings.

If we find a significant main effect:
Then we only have permission to examine differences between the levels of that independent variable.

In our previous example, if we found a main effect for Hand, then we may only compare the levels of Hand.
We can only look at all possible cell comparisons when we have a significant interaction.
Variations of Basic Factorial Designs

FACTORIAL DESIGNS
Within Subjects or Repeated Measures Factorial Design:
With our previous example, using a between subjects design:
Each participant appears in only one cell.
There is RATG for each cell.
(one person for right hand pre treatment, one person for right hand post treatment....)

OR you can just use within subjects design and use the same people for left/right pre and post test

This will control the same as RATG

Paired samples t-tests conducted for multiple comparisons.
Mixed Designs

FACTORIAL DESIGNS
With more than one factor, they may be of different types

One factor may be within subjects.
Hence, every subject gets exposed to every level of that factor.
One factor may be between subjects.
Hence, different subjects are assigned to each level of that factor.

A “mixed” design can mean different things:
May refer to one or more factors being within subjects and one or more being between subjects.
May also refer to one or more factors being manipulated and one or more being non-manipulated.
Mixed Design Controls?

FACTORIAL DESIGNS
For between factors, need RATG.
For within factors need to use randomization or counterbalancing to control sequence effects.
So, if the levels you are comparing are between groups, then use independent samples t-test.
If the levels you are comparing are within groups, then use paired samples t-test.
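The two comparisons can be sketched from first principles (pooled-variance independent t vs. paired t on difference scores; data are hypothetical). In practice you would use a stats package; the sketch just shows why the within-groups version works on difference scores.

```python
import numpy as np

def independent_t(a, b):
    """Between-groups levels: pooled-variance independent-samples t."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    na, nb = len(a), len(b)
    pooled_var = (((a - a.mean()) ** 2).sum() +
                  ((b - b.mean()) ** 2).sum()) / (na + nb - 2)
    return (a.mean() - b.mean()) / np.sqrt(pooled_var * (1 / na + 1 / nb))

def paired_t(pre, post):
    """Within-groups levels: t computed on the difference scores."""
    d = np.asarray(post, float) - np.asarray(pre, float)
    return d.mean() / (d.std(ddof=1) / np.sqrt(len(d)))
```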
Analysis of Covariance (ANCOVA)


FACTORIAL DESIGNS
Remember the partial correlation?
We wanted to assess the strength and direction of a relationship between two variables.
While controlling for the effects of a third variable.

ANCOVA also seeks to control for effects of a third variable.
We suspect it may be a confound.
The effects of a third variable are removed from the dependent variable.
As an example, with PD research we may want to control for the potential confound of disease severity.
Many ways to accomplish this.

Experimentally based controls:
Hold variable constant, use the same UPDRS scores.
Create a factorial design by creating two groups of PD patients, low versus high UPDRS scores.
Use a matching procedure to make sure UPDRS scores are equivalent across groups.
Sometimes RATG may also make groups equivalent.
Statistically based controls:
ANCOVA.
Here we rely on statistics to remove the effects of UPDRS on the dependent variable.
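A minimal intuition sketch (not a full ANCOVA, which fits the covariate inside the model): regress the DV on the covariate and work with the adjusted (residual) scores. The data below are made up so that the covariate explains everything.

```python
import numpy as np

# Hypothetical data: the DV is driven entirely by the confound (severity)
severity = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
dv = 2.0 * severity   # e.g., a symptom score that scales with UPDRS

# Remove the covariate's effect via linear regression
slope, intercept = np.polyfit(severity, dv, deg=1)
adjusted = dv - (slope * severity + intercept)   # residual (adjusted) DV
```

After adjustment the scores are essentially zero: nothing is left for group effects to explain beyond the covariate.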
Multivariate Analysis of Variance (MANOVA)

FACTORIAL DESIGNS
Sometimes we may have more than one dependent variable.
We may, for instance, be comparing PD patients with LHO versus RHO.
Maybe we have given a battery of tests, the COWAT, the TMT, the WCST.
We could do three separate ANOVAs, one for each dependent measure...But, experiment wise error rate!
The solution?
Use the MANOVA.
This test analyzes all the dependent measures under the umbrella of one criterion level.
Again, though, it will only tell you that at least one DV showed an effect.

You need to then determine which was associated with the significant effect.
Multivariate Analysis of Covariance (MANCOVA)

FACTORIAL DESIGNS
This is a combination of the ANCOVA and the MANOVA.
Here we have multiple IVs.
We have multiple DVs.
And we have a known confound that we want to control through statistics.