47 Cards in this Set

Three Factors that can cause variability in the study's DEPENDENT variable
1) the independent variable (experimental variance)

2) systematic error (error due to extraneous variables)

3) random error (error due to random fluctuations in subjects, experimental conditions, methods of measurement, etc.)
Extraneous (confounding) variables
Sources of SYSTEMATIC ERROR
Techniques to control the effects of extraneous variables
1) randomization (random assignment)
2) Hold the extraneous variable constant (select subjects that are homogeneous with respect to the variable)
3) matching
4) blocking (builds the extraneous variable into the study; the extraneous variable can then be STATISTICALLY analyzed)
5) Statistical control of the extraneous variable (uses ANCOVA or similar to STATISTICALLY CONTROL, or statistically remove, the variability in the DV that is due to the extraneous variable)
Minimizing Random error
-experimental research, esp. a true experimental design, allows the investigator to minimize the effects of random/unpredictable fluctuations in subjects, conditions, and measuring instruments

-Utilizing reliable measuring devices
Internal Validity
Is there a relationship between the IV and the DV? If so, is the relationship a causal one?

-must control the effects of the IV, control the effects of extraneous variables, and/or minimize the effects of random error
Threats to Internal validity
1)Maturation (changes within the subject)
2)History (something that occurs/happens/impacts that is external to the subjects)
3)Statistical Regression
4) Selection/Assignment (selection can act alone or interact with other validity threats)
5)Testing
6)Instrumentation
7)Attrition (mortality)
External Validity
generalizability

Can the relationship between the IV and DV be generalized to other ppl, settings, times etc?

Population validity= generalizability to other people
Ecological validity= generalizability to other settings

A study's external validity IS ALWAYS LIMITED BY ITS INTERNAL VALIDITY!!!

BUT a high degree of internal validity does not guarantee external validity.
Threats to External Validity
1)Interaction between testing and treatment (e.g., pretest can "sensitize" subjects to the purpose of the study)

2)Interaction between selection and tx (e.g., the use of volunteers, they may not reflect the population at large)

3)Reactivity (e.g., ppl respond in a certain way b/c they know they are being observed; also includes evaluation apprehension, demand characteristics, and experimenter expectancy)

4)Multiple Tx Interference (order effects or carryover effects; a prob when subjects are exposed to two or more levels of the IV such as when using a within-subjects design)
Between Groups Designs
Between groups (or between-subjects) design is used when the effects of different levels of an IV are assessed by administering each level to a DIFFERENT group and then comparing the status or performance of the groups on the DV
Within-Subjects Design
Repeated measures design

all levels of the IV are administered sequentially to all subjects

Comparisons are made within subjects rather than between groups of subjects

one type is the single-group, time-series design

remember that a single group time series design can help control MATURATION effects but not History effects.
Autocorrelation
A disadvantage of time series or other within subjects designs

Confounds the analysis because a subject's performance on a posttest is likely correlated with his/her performance on the pretest; inflates the value of the inferential statistic and makes a TYPE I ERROR MORE LIKELY
Mixed Design
Utilizes both between groups and within groups methodologies
Single subjects design
***each single subject design includes at least an A phase (Baseline) and a B phase (treatment)

each subject acts as his/her own no-treatment control

DV is measured throughout the study
Parametric Tests
include the t-test and ANOVA

evaluate hypotheses about population means, variances, or other parameters

the variable of interest must be measured on an interval or ratio scale

TWO ASSUMPTIONS
1)the variable of interest is NORMALLY DISTRIBUTED
2)when a study includes more than one group, there is homoscedasticity (the variances of the populations that the different groups represent are equal)
Homoscedasticity
assumption that the variances of the populations that two or more groups represent are equal
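
The two assumptions above can be checked directly before running a parametric test. A minimal sketch using SciPy's stats module (the Shapiro-Wilk test for normality, Levene's test for homoscedasticity); the group scores are invented for illustration:

    # check the two parametric-test assumptions on two groups' DV scores
    from scipy import stats

    group1 = [12, 15, 14, 10, 13, 16, 11, 14]   # hypothetical DV scores, group 1
    group2 = [18, 17, 20, 19, 16, 21, 18, 17]   # hypothetical DV scores, group 2

    # 1) normality: Shapiro-Wilk test on each group
    print(stats.shapiro(group1))
    print(stats.shapiro(group2))

    # 2) homoscedasticity: Levene's test for equal population variances
    print(stats.levene(group1, group2))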
Nonparametric tests
used to analyze data collected on variables that have been measured on a nominal or ordinal scale

do not make assumptions about the shape of population distributions

used to evaluate hypotheses about the shape of a distribution, rather than the distribution's mean, variance etc.

Less powerful; less likely to reject a false null hypothesis (i.e., less likely to detect a true effect)
Tests for Nominal Data
1)Chi Square test--used to analyze the frequency of observations in each category (level) of a nominal variable

Single sample chi square test (also known as the goodness of fit test)

Multiple sample chi square test
Degrees of freedom for chi-square tests
tests for nominal data

single-sample chi square = categories - 1

multiple sample chi square =
(c-1)(r-1)

c=number of columns
r=number of rows
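
As a sketch of how these df rules play out in practice, SciPy's chi-square routines can be run on made-up frequency counts (scipy.stats.chisquare for the single-sample/goodness-of-fit case, scipy.stats.chi2_contingency for the multiple-sample case):

    from scipy import stats

    # single-sample (goodness-of-fit) chi-square: 4 categories -> df = 4 - 1 = 3
    observed = [18, 22, 30, 30]             # hypothetical observed frequencies
    chi2, p = stats.chisquare(observed)     # expected frequencies default to equal
    print(chi2, p)

    # multiple-sample chi-square: 2 rows x 3 columns -> df = (2-1)(3-1) = 2
    table = [[10, 20, 30],
             [20, 25, 15]]                  # hypothetical contingency table
    print(stats.chi2_contingency(table))    # reports chi2, p, and dof = 2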
Tests for Ordinal Data
1)Mann-Whitney U Test
2)Wilcoxon Matched-Pairs Signed-Ranks Test
3)Kruskal-Wallis Test
Mann Whitney U Test
nonparametric test for ordinal data

Use: One IV with two independent groups; One DV that is rank ordered

The nonparametric ALTERNATIVE to the T TEST FOR INDEPENDENT SAMPLES
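
A minimal SciPy sketch (the scores for the two independent groups are invented):

    from scipy import stats

    # Mann-Whitney U: one IV with two independent groups, rank-ordered DV
    group_a = [3, 5, 8, 9, 12, 14]     # hypothetical scores
    group_b = [1, 2, 4, 6, 7, 10]
    u_stat, p = stats.mannwhitneyu(group_a, group_b, alternative='two-sided')
    print(u_stat, p)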
Wilcoxon Matched-Pairs Signed-Ranks test
nonparametric test for ordinal data

USE: One IV with two correlated (matched) groups; one DV with rank ordered data

the nonparametric alternative to the T-TEST FOR CORRELATED SAMPLES

the statistic=T
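
A minimal SciPy sketch (the paired before/after scores are invented):

    from scipy import stats

    # Wilcoxon matched-pairs signed-ranks: two correlated (paired) sets of scores
    before = [10, 12, 9, 15, 11, 13, 8]     # hypothetical pre-treatment scores
    after  = [12, 14, 10, 18, 12, 16, 9]    # hypothetical post-treatment scores
    t_stat, p = stats.wilcoxon(before, after)   # the test statistic is T
    print(t_stat, p)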
Kruskal Wallis Test
nonparametric test for ordinal data

Use: One IV with two or more independent groups; one DV with rank ordered data

the NONPARAMETRIC ALTERNATIVE to a one-way ANOVA

The statistic=H
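
A minimal SciPy sketch with three invented independent groups:

    from scipy import stats

    # Kruskal-Wallis: one IV with three independent groups, rank-ordered DV
    g1 = [2, 4, 6, 8, 10]
    g2 = [3, 5, 7, 9, 12]
    g3 = [11, 13, 14, 15, 16]               # hypothetical scores
    h_stat, p = stats.kruskal(g1, g2, g3)   # the test statistic is H
    print(h_stat, p)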
Tests for Interval and Ratio Data
1) T-test (Student's t-test)
2)Analysis of Variance (ANOVA)
T-Test versus ANOVA
A t-test is used to evaluate hypotheses about the DIFFERENCES BETWEEN TWO MEANS

whereas

an ANOVA is used to COMPARE TWO OR MORE MEANS (it helps control experimentwise error rate, decreasing prob of making a Type I error)
T test for a single sample
used when the study includes only one group and the group (sample) mean will be compared to a known population mean (In essence, the population is acting as a no-tx control group)

Use: One IV, single group
One DV with interval or ratio data

df=number of subjects - 1

n-1
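
A minimal SciPy sketch (the sample scores and the population mean of 100 are invented):

    from scipy import stats

    # single-sample t-test: sample mean vs. a known population mean
    sample = [102, 98, 110, 105, 99, 107, 103, 101]    # hypothetical scores, n = 8
    t_stat, p = stats.ttest_1samp(sample, popmean=100) # df = n - 1 = 7
    print(t_stat, p)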
T test for independent samples
used when a study includes two independent (unrelated) groups and the means of the groups will be compared

Use: one IV with two independent groups, one DV with interval or ratio data

df=n-2 or the total number of subjects minus 2

(or think, n of group 1 minus 1 plus n of group 2 minus one)
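
A minimal SciPy sketch with two invented, unrelated groups:

    from scipy import stats

    # t-test for independent samples: two unrelated groups
    treatment = [24, 27, 30, 22, 26, 29]    # hypothetical DV scores
    control   = [20, 21, 25, 19, 23, 22]
    t_stat, p = stats.ttest_ind(treatment, control)   # df = 6 + 6 - 2 = 10
    print(t_stat, p)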
T test for correlated samples
used when the two means to be compared come from correlated groups (e.g., within subjects design or groups that used matching)

use: one IV with two correlated groups; one DV with interval or ratio data

df=number of pairs of scores minus 1

(e.g., may have 50 subjects, with 25 in each group that were matched...25-1 OR
25 people who are assessed before and after the IV...25 pairs of scores -1)
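
A minimal SciPy sketch (the same seven subjects measured before and after; numbers invented):

    from scipy import stats

    # t-test for correlated samples: each subject assessed twice
    pre  = [14, 16, 12, 18, 15, 13, 17]     # hypothetical pretest scores
    post = [17, 18, 15, 21, 16, 15, 20]     # hypothetical posttest scores
    t_stat, p = stats.ttest_rel(pre, post)  # df = number of pairs - 1 = 6
    print(t_stat, p)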
One-way ANOVA
USE: One IV with two or more independent groups (usually three or more, since a t-test is typically used with two); one DV with interval or ratio data

df= (c-1), (n-c)

c=number of levels of the IV
n=number of subjects
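
A minimal SciPy sketch with three invented groups (c = 3, n = 15), so df = 2 and 12:

    from scipy import stats

    # one-way ANOVA: one IV with three independent groups
    low    = [5, 7, 6, 8, 7]
    medium = [9, 11, 10, 12, 10]
    high   = [14, 13, 15, 16, 14]           # hypothetical DV scores
    f_stat, p = stats.f_oneway(low, medium, high)   # df = (3-1), (15-3)
    print(f_stat, p)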
Factorial ANOVA (two-way, three way etc)
USE: TWO OR MORE IVs with independent groups; one DV with interval or ratio data
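
A sketch of a two-way ANOVA using statsmodels' formula interface; the column names (score, drug, therapy) and the values are invented for illustration:

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # two-way (factorial) ANOVA: two IVs (drug, therapy), one interval/ratio DV
    df = pd.DataFrame({
        'score':   [5, 7, 6, 9, 11, 10, 8, 12, 9, 14, 13, 15],
        'drug':    ['a'] * 6 + ['b'] * 6,
        'therapy': ['x', 'x', 'x', 'y', 'y', 'y'] * 2,
    })
    model = smf.ols('score ~ C(drug) * C(therapy)', data=df).fit()
    print(anova_lm(model, typ=2))   # main effect of each IV plus the interaction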
MANOVA
Use: one or more IV, and TWO OR MORE DVs

increases statistical power by simultaneously assessing the effects of the IV on all of the DVs
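
A sketch using statsmodels' MANOVA class, assuming one nominal IV (group) and two continuous DVs (dv1, dv2); all names and values are invented:

    import pandas as pd
    from statsmodels.multivariate.manova import MANOVA

    # MANOVA: one IV (group), two DVs analyzed simultaneously
    df = pd.DataFrame({
        'dv1':   [5, 6, 7, 6, 9, 10, 11, 9],
        'dv2':   [12, 14, 13, 15, 18, 20, 19, 21],
        'group': ['a', 'a', 'a', 'a', 'b', 'b', 'b', 'b'],
    })
    mv = MANOVA.from_formula('dv1 + dv2 ~ group', data=df)
    print(mv.mv_test())   # Wilks' lambda, Pillai's trace, etc. for the group effect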
ANCOVA
combines an ANOVA with regression analysis; allows the investigator to control an extraneous variable by statistically removing the portion of variability in the DV that is due to the extraneous variable

reduces within-group variability, resulting in MORE POWER
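
A sketch of the same idea with statsmodels: the formula includes the group factor plus a continuous covariate (here a hypothetical pretest), so the group effect is tested after the covariate's share of DV variability is removed. All names and values are invented:

    import pandas as pd
    import statsmodels.formula.api as smf
    from statsmodels.stats.anova import anova_lm

    # ANCOVA: ANOVA on the group factor plus regression on a covariate
    df = pd.DataFrame({
        'posttest': [55, 60, 58, 66, 70, 68, 52, 59, 61, 72, 75, 69],
        'pretest':  [50, 54, 53, 60, 63, 61, 51, 55, 58, 65, 68, 62],
        'group':    ['tx'] * 6 + ['ctl'] * 6,
    })
    model = smf.ols('posttest ~ C(group) + pretest', data=df).fit()
    print(anova_lm(model, typ=2))   # group effect with pretest variability removed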
Pearson product moment (r)
type of correlation coefficient

used when both variables are interval or ratio data
Spearman (rho)
type of correlation coefficient, aka Spearman rank-order

both variables are rank ordered data
Point biserial
type of correlation coefficient

variable 1 is nominal data that represents a TRUE dichotomy (e.g., male or female)

variable 2 is interval or ratio data
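
The coefficients above map directly onto SciPy functions; the paired values below are made up for illustration:

    from scipy import stats

    x = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]           # interval/ratio variable
    y = [2.1, 3.9, 6.2, 8.1, 9.8, 12.3]
    print(stats.pearsonr(x, y))                  # Pearson r: both interval/ratio

    ranks_x = [1, 2, 3, 4, 5, 6]                 # rank-ordered data
    ranks_y = [2, 1, 4, 3, 6, 5]
    print(stats.spearmanr(ranks_x, ranks_y))     # Spearman rho: both rank-ordered

    dichotomy = [0, 0, 0, 1, 1, 1]               # true dichotomy (e.g., two groups)
    scores    = [10.0, 12.0, 11.0, 15.0, 17.0, 16.0]
    print(stats.pointbiserialr(dichotomy, scores))  # point-biserial correlation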
Biserial
correlation coefficient

Variable 1 is nominal data reflecting an artificial dichotomy (e.g., favorable or unfavorable)

Variable 2 is interval or ratio data
Eta
(!!!!!)
a correlation coefficient used to ASSESS NONLINEAR RELATIONSHIPS

both variables must represent interval or ratio data

example: studying the effects of anxiety on performance (anxiety that is too low or too high might produce poor performance, while a moderate level produces the best results)
Three assumptions of Pearson Product Moment (and most other correlation coefficients)
1) linearity (linear relationship between variables)
2)unrestricted range (data collected from people who are heterogeneous with regard to the characteristics being measured)
3)homoscedasticity (range of Y scores is about the same for all values of X)
Coefficient of Determination
a squared correlation coefficient, interpreted as the proportion of variability in Y that is explained by (shared with) X
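
In practice it is just the square of the correlation already computed; a quick sketch with made-up paired scores:

    from scipy import stats

    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [1.8, 4.1, 6.2, 7.9, 10.3]   # hypothetical paired scores
    r, p = stats.pearsonr(x, y)
    print(r ** 2)   # coefficient of determination: proportion of shared variability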
Multivariate Techniques
used to assess the degree of association among three or more variables and to make predictions that involve, at a minimum, two predictors and one criterion
Multiple Regression
a type of multivariate technique

used when two or more continuous or discrete predictors will be used to predict status on a single continuous criterion

The output is a MULTIPLE CORRELATION COEFFICIENT (R)

May be used in place of ANOVA when groups are unequal in size
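
A minimal statsmodels sketch with two invented predictors (gre, hours) and one criterion (gpa); the multiple correlation R is the square root of the model's R-squared:

    import numpy as np
    import pandas as pd
    import statsmodels.formula.api as smf

    # multiple regression: two continuous predictors, one continuous criterion
    df = pd.DataFrame({
        'gpa':   [2.8, 3.1, 3.4, 2.9, 3.8, 3.6, 3.0, 3.9],   # criterion
        'gre':   [150, 155, 160, 152, 168, 164, 154, 170],   # predictor 1
        'hours': [10, 12, 15, 9, 20, 18, 11, 22],            # predictor 2
    })
    model = smf.ols('gpa ~ gre + hours', data=df).fit()
    print(np.sqrt(model.rsquared))   # multiple correlation coefficient R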
Simple (simultaneous) Regression
analyzing the effects of all of the predictors on the criterion at once
Forward or Step up Regression
One predictor is added in each subsequent analysis
Backward or Step Down Regression
all predictors are used, then one predictor is eliminated in each subsequent analysis
Multiple Regression vs. ANOVA
-Mult. Regression is better when groups are unequal in size, b/c this can reduce the power and robustness of the ANOVA

-use MR when the IVs are measured on a continuous scale, b/c an ANOVA would require that the continuous data be broken into categories or levels, which reduces power
-MR permits a researcher to add or subtract IVs (predictors) to the analysis to determine which subset best explains the variability in the DV (criterion)
Canonical Correlation
Type of Multiple Regression used when two or more continuous predictors (IVs) are used to predict the status on two or more continuous criteria (DVs)
Discriminant Function analysis
Type of multiple regression used when two or more continuous predictors (IVs) are used to predict a person's status on a single discrete (nominal) criterion (DV).
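
A sketch using scikit-learn's LinearDiscriminantAnalysis as a stand-in for discriminant function analysis: continuous predictors are used to predict membership in a nominal group. All values are invented:

    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    # discriminant function analysis: continuous predictors -> nominal group criterion
    X = [[5.0, 3.2], [4.8, 3.0], [5.1, 3.4],    # hypothetical predictor scores
         [6.5, 2.8], [6.7, 3.0], [6.3, 2.9]]
    groups = ['a', 'a', 'a', 'b', 'b', 'b']     # nominal criterion (group membership)
    lda = LinearDiscriminantAnalysis().fit(X, groups)
    print(lda.predict([[5.0, 3.1], [6.6, 2.9]]))   # predicted group for new cases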
Causal Modeling
1)Path Analysis (Structural Equation)
2)LISREL

test a predefined causal model or theory

CANNOT PROVE causality, but can provide evidence that the causal theory or model is correct or incorrect