87 Cards in this Set

  • Front
  • Back

Sampling Error

The difference between the sample mean and the population mean

Sampling error in One-sample t-test

standard error of the mean

Sampling error in Independent t-test:

standard error of difference

Sampling error in Analysis of Variance F-test

mean square error

Sampling error in Correlation coefficient-r

standard error of estimate

Sampling error in Multiple regression

standard error of regression coefficient
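As a quick illustration of the first two of these terms, here is a minimal Python sketch (numpy/scipy, with made-up scores) computing the standard error of the mean and the pooled standard error of the difference:

```python
import numpy as np
from scipy import stats

# Hypothetical scores, purely for illustration
group_a = np.array([4.1, 5.3, 6.0, 5.5, 4.8, 6.2])
group_b = np.array([3.9, 4.4, 5.1, 4.0, 4.6, 4.9])

# Standard error of the mean (sampling error term in a one-sample t-test): s / sqrt(n)
sem_a = stats.sem(group_a)

# Standard error of the difference (sampling error term in an independent t-test),
# pooled-variance form
n1, n2 = len(group_a), len(group_b)
pooled_var = ((n1 - 1) * group_a.var(ddof=1) + (n2 - 1) * group_b.var(ddof=1)) / (n1 + n2 - 2)
se_diff = np.sqrt(pooled_var * (1 / n1 + 1 / n2))

print(f"SEM (group A): {sem_a:.3f}")
print(f"SE of the difference: {se_diff:.3f}")
```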

Quasi-experimental research

No random assignment (participants are not randomly assigned to conditions)

Best way to reduce sampling error

random sampling (experimental research)

Error of Measurement

The difference between an observed score and a true score

Random error of measurement

fluctuations in the characteristics of individuals, fluctuations in examiner assessment procedures

Systematic error of measurement

Scores decrease or increase in a predictable way

Does measurement error always exist?

Yes: measurement error exists in all studies! We cannot eliminate errors in measurement, but we can minimize their effects.

Internal Validity

When a researcher controls for all extraneous variables and the only variable influencing the results of a study is the one being manipulated by the researcher (i.e., the variable intended to be studied is indeed the one affecting the results)

T.H.I.S. M.E.S.S. D.R.E.A.D. - what to do with

Threats to internal validity. The first eight (THIS MESS) can be assessed and ruled out through the use of experimental designs; the last five (DREAD) are corrected using experimental procedures.

THIS MESS DREAD - what are they

• Testing Design
• History
• Instrumentation
• Selection (differential)
• Maturation
• Experimental Mortality (attrition)
• Statistical Regression
• Selection-Maturation Interaction
• Diffusion of Experimental Effect
• Rivalry (Compensatory)
• Equalization of Treatments (Compensatory)
• Ambiguous Temporal Precedence
• Demoralization (resentfulness)

Testing Design - threat to internal validity + example

The treatment effect may be confounded when changes in post-test scores of participants are influenced by their experience from taking a pretest. Hence the test becomes part of the intervention.



Example: A study with repeated measures or a pre-/post-test design.

History - threat to internal validity + example

An event that occurs during a study that can affect the responses of the participants; it could be something in the news or national publicity (e.g., an earthquake)



Example: Longitudinal study, a study with repeated measures.

Instrumentation - threat to internal validity + example

Pretest and posttest scores may change because of a faulty measurement instrument, irrespective of the treatment.



Example: Physiological instruments, or those in which researchers are collecting data in person.

Selection (differential) - threat to internal validity + example

Differences in the characteristics of participants assigned (usually not random) to treatment conditions may confound attributing the changes in the dependent variable to the treatment.



Example: Quasi-Experimental studies and convenience samples.



* Doing statistical comparisons between groups can help address this threat.

Maturation - threat to internal validity + example

Psychological and physical changes within participants may occur in an experiment, especially over time. These participant maturational changes may have an extraneous effect on the dependent variable.



Example: Longitudinal studies, where the participants are more likely to change such as adolescents, infants or people who are severely ill.

Experimental Mortality (attrition) - threat to internal validity + example

No, this doesn’t mean the sick people die!



The loss of participants from the treatment or from measurement of the dependent variable can unbalance the groups, confounding attribution of the effects of the treatment on the dependent variable.



Example: Longitudinal studies.



NB: It's important to analyze the existing data to determine if there are differences between those who dropped out vs. those who continued. If more than 10% of the participants are lost to follow-up, this seriously affects the ability to generalize findings.

Statistical Regression - threat to internal validity + example

This is the phenomenon that extremely high or low group scores on a variable tend to regress to the mean on the second measurement of the variable, confounding the treatment effect.



Example: Pretest-posttest designs and small-sample studies; fewer people means that each individual has a greater effect on the average score.

Selection-Maturation Interaction - threat to internal validity + example

This is a combination of two threats to internal validity. For example, some participants in one assigned treatment condition group may have matured in math self-efficacy more than participants in another treatment condition group, when the purpose of the study is to increase math achievement. Additionally, other combinations of threats could interact to confound the treatment effect.



Example: The three most likely interactions are with history, maturation, and instrumentation.

Diffusion of Experimental Effect (Diffusion of Treatment) - threat to internal validity + example

The treatment may diffuse to the control group over time because the control group may seek access to the more desirable treatment. “Contamination” of the control group.



Example: Studies with an intervention when the two groups are being studied at the same time in the same location (clinic, small town). This essentially leaves no control group. A fatal flaw!

Rivalry (Compensatory) - threat to internal validity + example

The control group participants may perform beyond their usual levels because they perceive that they are in competition with the experimental group.



Example: Intervention studies where participants know which group they’re in.

Equalization of Treatments (Compensatory) - threat to internal validity + example

A treatment group may receive experimental rewards that appear more desirable than those received by the control group. Efforts are made by individuals outside of the experiment to compensate the control group participants with similar desirable goods. This would obscure the results of treatment.



Example: health care provider with a heart of gold (and little common sense). Intervention Studies.

Ambiguous Temporal Precedence - threat to internal validity + example

A lack of clarity is provided by the researchers as to which variable occurred first, leading to a question of which variable is the cause and which is the effect.



Example: lung cancer, smoking, which came first? Does smoking really cause lung cancer? (No! :)

Demoralization (resentfulness) - threat to internal validity + example

Lower performance of control group participants on the dependent measures may result from their belief that the treatment group is receiving a desirable treatment.



Example: Just like rivalry (compensatory), but the opposite.

Experimental Designs - key symbols

• R: random assignment
• O: observation
• X: treatment condition
• C: control condition

Randomized Treatment and Control (2 groups) with Posttest-Only Design --> what test do you use?

Independent sample t-test

What does a t-test assume?

normal distribution, homogeneity of variance
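A minimal scipy sketch of this design and its assumptions, using invented scores; the Shapiro-Wilk and Levene checks are one common way to probe normality and homogeneity of variance, not something prescribed by the deck:

```python
from scipy import stats

# Hypothetical posttest scores for a treatment and a control group
treatment = [24, 27, 31, 22, 29, 33, 26, 30]
control = [21, 19, 25, 23, 20, 24, 22, 18]

# Rough assumption checks: normality (Shapiro-Wilk) and homogeneity of variance (Levene)
print(stats.shapiro(treatment), stats.shapiro(control))
print(stats.levene(treatment, control))

# Independent-samples t-test comparing the two group means
t, p = stats.ttest_ind(treatment, control)
print(f"t = {t:.2f}, p = {p:.4f}")
```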

Randomized Multiple Treatments and Control with Posttest-Only Design --> what test do you use?

One-way ANOVA

What does ANOVA assume?

Normal distribution and homogeneity of variance, plus

either independence of the scores on the DV if it is between groups (one-way ANOVA) or assumptions about the covariance of the related DV scores (RM-ANOVA)
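For the posttest-only multi-group design above, a one-way between-groups ANOVA might look like the following sketch (scipy, hypothetical data):

```python
from scipy import stats

# Hypothetical posttest scores for two treatment groups and a control group
treatment_1 = [15, 18, 21, 17, 20]
treatment_2 = [22, 25, 23, 27, 24]
control = [14, 13, 16, 15, 12]

# One-way (between-groups) ANOVA: do the group means differ significantly?
f, p = stats.f_oneway(treatment_1, treatment_2, control)
print(f"F = {f:.2f}, p = {p:.4f}")
```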

Randomized Multiple Treatments and Control with Pretest and Posttest Design

3x2 repeated-measures ANOVA

What does a t-test examine?

Compares means between the experimental and control group conditions for significant differences

What does one-way ANOVA examine?

compare means across different experimental conditions for significant differences (post-hoc analysis to compare three pairs of means)

Randomized Multiple Treatments and Control with Pretest and Posttest Design -- what test do you use?

3x2 repeated-measures ANOVA

What does a 3x2 repeated-measures ANOVA examine?

Compares means across different experimental conditions for significant differences (post-hoc analysis to compare the three pairs of means)

Also: differences between pretest and posttest, and the treatment x time interaction
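One way to run this 3 (group, between-subjects) x 2 (time, within-subjects) analysis in Python is a mixed ANOVA. The sketch below assumes the pingouin package is available and uses invented long-format data; the column names are arbitrary:

```python
import pandas as pd
import pingouin as pg  # assumed available; any mixed-ANOVA routine would do

# Hypothetical long format: 3 groups (two treatments + control) measured at 2 times
df = pd.DataFrame({
    "subject": list(range(12)) * 2,
    "group": (["t1"] * 4 + ["t2"] * 4 + ["ctrl"] * 4) * 2,
    "time": ["pre"] * 12 + ["post"] * 12,
    "score": [10, 12, 11, 13, 9, 11, 10, 12, 10, 11, 12, 10,
              18, 20, 19, 21, 15, 16, 17, 15, 11, 12, 13, 11],
})

# Group (between), time (within), and the group x time interaction
aov = pg.mixed_anova(data=df, dv="score", within="time",
                     subject="subject", between="group")
print(aov)
```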

Recommended experimental research procedures to reduce the threats of DREAD

1. Use different persons to implement the treatments
2. Minimize contact between the experimental and control groups
3. Use blinded procedures
4. Assess the expectations of people in the treatment conditions
5. Consider not using experimental rewards
6. Clearly define the treatment conditions
7. Use past research and theory to guide the evidence of an A (IV) to B (DV) causal relationship

Quasi-Experimental Designs

No random assignment, but there is a control group or multiple measures

These are considered compromised designs

Methods to improve Quasi-experimental designs

1. Adding control or comparison groups
2. Adding pretests and posttests
3. Removing and reinstituting treatments
4. Adding replications
5. Reversing treatment
6. Case matching (e.g., propensity scores)

Simple ex post facto design (Quasi-Experimental) --> what kind of test to use?

X is not a 'treatment' but a *prior* event



can still use Independent sample t-test

Correlational Research Designs - purpose?

investigation of bivariate or multivariate relationships, predictions

Independent variable vs. Dependent variable

IV: what comes before the measurement; how the groups are separated

DV: what is measured; the test scale

"Relationship of participants' scores across groups being compared - Not-related (Independent) vs. Related (Dependent)"

Between-group design = independent (not-related) ; Within-group design = dependent (related) {scores on the same, or matched, participants are obtained two or more times};


Correlation/Multiple Regression = dependent

Scale of measurement of DV in one-way, RM, multifactor ANOVA

continuously scaled (interval or ratio) DV

Simple RM ANOVA - how many IVs?

Only one

Factor

= independent variable

Purpose of simple RM-ANOVA

Compare mean differences across groups, conditions, or testing times (e.g., pretest-posttest)

Purpose of factorial (multi-factor) ANOVA

assess mean differences across main effects, interaction effects and simple effects

One-way ANOVA = between or within groups?

Between group design

RM ANOVA = between or within groups?

Within group design

Factorial ANOVA - between or within groups?

Either; can include both scores that are not related (between groups) and scores that are related (within groups)

Purpose of one-way ANCOVA

Compare mean differences among groups when the DV has been adjusted for one or more covariates

Number of IV in a one-way ANCOVA

One!

Scale of measurement in one-way ANCOVA

continuously scaled (interval/ratio) DV

one-way ANCOVA - between or within groups?

It depends: there are different participants in the groups and the DV scores are not related (independent), so it is a between-groups design.

But if a covariate used is a pretest/posttest score, then there are elements of *both* between-group and within-group designs
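A minimal one-way ANCOVA sketch using statsmodels, with a pretest as the covariate; the data and column names are hypothetical:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical data: posttest scores adjusted for a pretest covariate
df = pd.DataFrame({
    "group": ["treat"] * 6 + ["control"] * 6,
    "pretest": [10, 12, 9, 11, 13, 10, 11, 10, 12, 9, 13, 11],
    "posttest": [18, 21, 17, 20, 22, 19, 12, 11, 14, 10, 15, 13],
})

# One-way ANCOVA: compare group means on the posttest, adjusting for the pretest
model = smf.ols("posttest ~ C(group) + pretest", data=df).fit()
print(anova_lm(model, typ=2))
```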

Mann-Whitney U Test - purpose and use

compare mean rank differences between two groups



the non-parametric alternative to the independent t-test

Number of IVs in a Mann-Whitney U Test

One!

Scale of measurement of DV in a MWU test

Discrete-ordinal DV is used in the MWU analysis



Continuous DVs can be used and the observed scores are converted to ranks in the analysis
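A minimal scipy sketch of the Mann-Whitney U test on hypothetical ordinal ratings:

```python
from scipy import stats

# Hypothetical 1-7 Likert-type ratings from two independent groups
group_a = [3, 4, 2, 5, 4, 3, 6, 4]
group_b = [5, 6, 6, 7, 5, 6, 4, 7]

# Mann-Whitney U: compares mean ranks between the two groups; a non-parametric
# alternative to the independent t-test (continuous scores are converted to ranks)
u, p = stats.mannwhitneyu(group_a, group_b, alternative="two-sided")
print(f"U = {u}, p = {p:.4f}")
```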

Pearson’s Product Moment Coefficient of Correlation (r) - the purpose

Analyze the relationship between two variables

Number of IVs in Pearson’s Coefficient of Correlation

Two variables are used in the analysis.



It is *not* necessary to label them, but one may be called the IV (X) or predictor variable and the other may be called the DV (Y) or criterion variable

Scale of measurement of variables in Pearson's correlation coefficient analysis

Continuously scaled (interval/ratio)

Relationship of participants' scores in Pearson's correlation analysis

Scores on the two variables are paired on the same participants, so they are dependent on each other ("within group")
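A minimal Pearson correlation sketch (scipy), pairing two hypothetical continuous variables measured on the same participants:

```python
from scipy import stats

# Hypothetical paired scores on two continuous variables from the same participants
hours_studied = [2, 4, 5, 7, 8, 10, 11, 13]
exam_score = [55, 60, 62, 70, 72, 80, 83, 90]

# Pearson's r: strength and direction of the linear relationship
r, p = stats.pearsonr(hours_studied, exam_score)
print(f"r = {r:.3f}, p = {p:.4f}")
```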

Underlying assumptions of the MWU test

Non-parametric - "distribution-free" test


Therefore assumptions of normality and homogeneity of variance are not necessary!



However, it is important that observations are independent of each other and that there is some degree of continuity in the variable used




Underlying assumptions of the Pearson's Correlation Coefficient test

*Requires* normality and homogeneity of variance in the arrays, i.e., the residual variance of Y conditional on a specific X

Multiple Regression Analysis - purpose ?

To analyze the extent that two or more independent variables relate to a dependent variable

Number of IV in a Multiple Regression Analysis

Two or more continuously scaled independent (predictor) variables used in the analysis

Scale of measurement of DV in Multiple Regression Analysis

Continuously scaled (interval/ratio)

Relationship of participants' scores across groups compared in a Multiple Regression Analysis

Participants have scores on *all* of the variables used in the MRA; therefore the scores are related to (dependent on) each other

Underlying assumptions of Multiple Regression Analysis

MRA assumes or assesses various issues, including normality, homoscedasticity, linearity, independence of errors, and multicollinearity
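A minimal multiple regression sketch with statsmodels; the predictors and criterion loosely mirror the counseling-skill example later in the deck, but the values and column names are invented:

```python
import pandas as pd
import statsmodels.api as sm

# Hypothetical data: two continuous predictors and a continuous criterion (DV)
df = pd.DataFrame({
    "grades": [3.1, 3.5, 2.8, 3.9, 3.2, 3.7, 2.9, 3.6],
    "experience": [10, 25, 5, 40, 15, 30, 8, 35],
    "skill": [60, 72, 55, 88, 65, 80, 58, 84],
})

# Ordinary least squares regression of skill on the two predictors
X = sm.add_constant(df[["grades", "experience"]])
model = sm.OLS(df["skill"], X).fit()
print(model.summary())   # coefficients, R^2, and basic diagnostics
# model.resid can then be inspected for normality and homoscedasticity
```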

homoscedasticity

A sequence or a vector of random variables is homoscedastic if all the random variables in the sequence or vector have the same finite variance. This is also known as homogeneity of variance.

NOIR

Levels of measurement of the DV: Nominal, Ordinal, Interval, Ratio

Multi-Group Design - what test do you use

One-way ANOVA

Distributing a survey comparing two things that people do anyway - what test do you use?

Correlation (Bivariate)

Two-group, Pre-Post Design - what test do you use?

t test for dependent means (Paired Samples)

GLM-Univariate

same thing as Factorial / Two-way ANOVA

1+ categorical variables; looking for even distribution of cases per category



(e.g., determine if the number of males/females differs by year in school)



- what is the test used?

Chi-square
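A minimal chi-square sketch (scipy) for the males/females-by-year example, with invented counts:

```python
import numpy as np
from scipy import stats

# Hypothetical contingency table: counts of males/females by year in school
#                    Yr1  Yr2  Yr3  Yr4
observed = np.array([[30, 25, 20, 15],   # males
                     [28, 27, 24, 21]])  # females

# Chi-square test of independence: does the sex distribution differ by year?
chi2, p, dof, expected = stats.chi2_contingency(observed)
print(f"chi2 = {chi2:.2f}, df = {dof}, p = {p:.4f}")
```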

2 continuous variables



e.g., determine the association between intelligence and GRE scores.



- what is the test used?

Correlation (Bivariate)

3+ continuous variables (one of which you are trying to predict; it serves as the DV) --

e.g., what is the best predictor of counseling skill (college grades, emotional intelligence, hours of experience)



- what is the test used?

Regression (Linear)

2 x 2 Mixed Design - what is the test used?

GLM-Univariate (Factorial/Two-Way ANOVA)

Testing Males vs Females on verbal ability

Two-Group/Simple Experiment: 1 IV; 2 levels (Between-Subjects); 1 DV



T-test for independent means

Testing Mood before and after an exercise program

Two-Group/Pre-Post Design: 1 IV; 2 levels (Within-Subjects); 1 DV



T-test for dependent means (Paired Samples T-test)
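A minimal paired-samples t-test sketch (scipy) for the before/after mood example, with invented ratings:

```python
from scipy import stats

# Hypothetical mood ratings for the same participants before and after an exercise program
before = [4, 5, 3, 6, 4, 5, 3, 4]
after = [6, 7, 5, 7, 6, 6, 5, 6]

# Paired-samples (dependent-means) t-test: the same participants measured twice
t, p = stats.ttest_rel(before, after)
print(f"t = {t:.2f}, p = {p:.4f}")
```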

Effectiveness of psychoanalysis vs. cognitive-behavioral vs. no treatment on depression

Multi-Group Design: 1 IV; 3+ levels (Between-Subjects); 1 DV



One-way Analysis of Variance (ANOVA)

Stress level measured each week following either a week of problem-focused coping, emotion-focused coping, or nothing.

Multi-Group Design/Repeated Measures: 1 IV; 3+ levels (Within-Subjects); 1 DV



GLM-Repeated Measures Analysis of Variance (ANOVA)
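A minimal repeated-measures ANOVA sketch using statsmodels' AnovaRM for the coping-condition example; the subjects, condition labels, and scores are invented:

```python
import pandas as pd
from statsmodels.stats.anova import AnovaRM

# Hypothetical long format: each participant's stress score under all three conditions
df = pd.DataFrame({
    "subject": [1, 1, 1, 2, 2, 2, 3, 3, 3, 4, 4, 4],
    "condition": ["problem", "emotion", "nothing"] * 4,
    "stress": [4, 5, 7, 3, 4, 6, 5, 5, 8, 4, 6, 7],
})

# One-way repeated-measures ANOVA: condition is a within-subjects factor
res = AnovaRM(data=df, depvar="stress", subject="subject", within=["condition"]).fit()
print(res)
```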

Want to test the effect of product advertising (magazine vs. tv) and product cost (low vs. high) on sales

Factorial Design: 2+ IV; 2+ levels each (Between-Subjects); 1 DV



GLM-Univariate (Factorial/Two-way ANOVA)
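A minimal 2 x 2 factorial (between-subjects) ANOVA sketch with statsmodels for the advertising-by-cost example; the sales figures are invented:

```python
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

# Hypothetical sales data crossed by advertising medium and product cost
df = pd.DataFrame({
    "ad": ["magazine", "magazine", "tv", "tv"] * 4,
    "cost": ["low", "high"] * 8,
    "sales": [20, 14, 26, 18, 22, 15, 28, 17, 19, 13, 27, 16, 21, 14, 25, 19],
})

# Main effects of ad medium and cost, plus their interaction
model = smf.ols("sales ~ C(ad) * C(cost)", data=df).fit()
print(anova_lm(model, typ=2))
```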

Want to test if there is a change in willingness to help before/after a course on public service and if it varies by gender.

Mixed Design: 1 IV; 2+ levels (Between-Subjects) 1 IV; 2+ levels (Within-Subjects) 1 DV



GLM-Repeated Measures Analysis of Variance (ANOVA)