62 Cards in this Set
Def: cluster sampling *

Random selection of naturally occurring groups, rather than individuals


Def: analog study

Assessing a phenomenon under conditions that resemble the phenomenon in the field


Def: a cross-sequential research design

A combination of longitudinal and cross-sectional designs
Subjects divided into age groups
Assessed on the dependent variable repeatedly over time

Developmental research designs

Longitudinal
Cross-sectional
Cross-sequential

Describe a matching design

Grouping subjects who are similar on an extraneous variable, then assigning members of each group to the different treatment conditions


Describe stratified random sampling

Random sampling from subgroups (strata) of a population
(e.g., children, teens, young adults)

Describe multiple baseline study *

Single subject
Application of treatment across different baselines (behaviors, settings, individuals)
Used when reversal is not possible or is unethical

Describe a one-group time-series design

Multiple pretests, followed by treatment, followed by multiple posttests
Controls for maturation, testing, and regression effects
Vulnerable to history (a simultaneously occurring event)

Formula: variance (s squared)

s² = Σ(X − mean)² / denominator
Population denominator: n
Sample denominator: n − 1
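A minimal Python sketch of the variance formula above, using hypothetical scores; it shows how the two denominators differ and checks against the stdlib:

```python
import statistics

scores = [2, 4, 4, 4, 5, 5, 7, 9]  # hypothetical sample
mean = sum(scores) / len(scores)
ss = sum((x - mean) ** 2 for x in scores)  # sum of squared deviations from the mean

pop_var = ss / len(scores)          # population variance: divide by n
samp_var = ss / (len(scores) - 1)   # sample variance: divide by n - 1

# The stdlib agrees on both denominators
assert pop_var == statistics.pvariance(scores)
assert samp_var == statistics.variance(scores)
```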

Formula: z score

(X − mean) / standard deviation


T score attributes

mean = 50
sd = 10 
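Combining the z-score formula with the T-score attributes above, a raw score converts as T = 50 + 10z. A quick sketch with hypothetical values (mean 100, SD 15 are illustrative):

```python
# Hypothetical raw score on a scale with mean 100, SD 15
raw, mean, sd = 130, 100, 15
z = (raw - mean) / sd   # z score: mean 0, SD 1
t = 50 + 10 * z         # T score: mean 50, SD 10
```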

Stanine attributes

Divides the score distribution into ninths
Mean = 5
SD = 2

Formula: standard error of the mean *

SE = standard deviation / square root of N
Also equals the SD of the sampling distribution of means
The expected difference between the sample mean and the population mean
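A simulation sketch of the point above: the analytic SE (SD / √N) closely matches the SD of an empirical sampling distribution of means. The population parameters here are hypothetical:

```python
import math
import random
import statistics

random.seed(0)
# Hypothetical population with mean 100, SD 15
population = [random.gauss(100, 15) for _ in range(100_000)]
pop_sd = statistics.pstdev(population)

n = 25
se = pop_sd / math.sqrt(n)  # analytic standard error of the mean

# Empirically, SE is the SD of the sampling distribution of means
means = [statistics.mean(random.sample(population, n)) for _ in range(2_000)]
empirical_se = statistics.pstdev(means)
```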

Chi square requirements *

Independent observations
Mutually exclusive categories
Frequency (not percentage) data

Formula: ANOVA mean square

mean square = sum of squares / df


Use: phi coefficient

Two dichotomous variables


Use: point-biserial coefficient *

One interval or ratio variable
One naturally dichotomous variable (2 categories) 

Use: biserial coefficient *

One interval or ratio variable
One artificially dichotomized variable (2 categories) (e.g., scores above vs. below a cutoff)

Use: contingency coefficient

Two nominally scaled variables, each with more than 2 categories


Use: canonical correlation

Multiple predictor variables and
multiple criterion variables

Use: Spearman's rho

Both predictor and criterion variables are ranked


Def: coefficient of determination

Pearson r squared
% of variability in one variable accounted for by the other
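The definition above can be sketched in Python with hypothetical paired scores, computing Pearson r by hand and squaring it:

```python
import math

# Hypothetical paired scores
x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
n = len(x)
mx, my = sum(x) / n, sum(y) / n
cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
ssx = sum((a - mx) ** 2 for a in x)
ssy = sum((b - my) ** 2 for b in y)

r = cov / math.sqrt(ssx * ssy)  # Pearson r
r_squared = r ** 2              # proportion of variability accounted for
```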

In an ANOVA, what does within group variance measure

Random variance


Which has the smallest variance: the population, individual samples, or the distribution of sample means?

The distribution of sample means


When are nonparametric tests used?

When normality can't be assumed


When homogeneity of variance is compromised, the best way to ensure robust results is...?

To keep sample size equal


Use: eta correlation

With nonlinear, continuous variables


Distinguish the use of: t-test, one-way ANOVA, factorial ANOVA, MANOVA, and ANCOVA *

t-test: compares a pair of means
One-way ANOVA: 1 independent variable; 2 or more groups
Factorial ANOVA: >1 independent variable; permits analysis of interaction effects
MANOVA: >1 dependent variable; minimizes p(Type I error)
ANCOVA: controls for the presence of an extraneous variable

Def: internal validity

The extent to which a study permits the conclusion that there is a causal relationship between the independent and dependent variables


Threats to internal validity

History: an external event
Maturation
Test learning (practice effects)
Changes in instrumentation
Statistical regression
Subject characteristics
Systematic differences between stayers and dropouts (attrition)
Experimenter bias

Methods for controlling threats to internal validity

Randomization
Matching
Blocking
Holding extraneous variables constant
ANCOVA

Def: matching

Grouping subjects by status on extraneous variable and then randomly assigning from within groups


Def: blocking

Treating an extraneous variable like another independent variable


Def: time series design

Multiple pretests
Treatment
Multiple posttests
History is a threat to internal validity

Bias in longitudinal studies

Tendency to underestimate age-related change, especially decrements
Dropouts tend to be poorer performers
Practice effects on measures

Bias in cross sectional studies

Overestimation of effects due to aging
Cohort effects
Experience

Def: Type II error (beta) *

Retaining a false null hypothesis
Failing to detect a true effect

Techniques to increase the validity coefficient

Increase the range of scores


Def: shrinkage *

Occurs when predictors are DEVELOPED on one sample and then VALIDATED on another. The correlation coefficient for the second sample is likely to be lower.


def: power *

The ability to detect a treatment effect
p(rejecting a false null hypothesis)
p(not making a Type II error)
1 − beta

factors affecting power

Larger sample size
Larger alpha
One-tailed test
Larger magnitude of the population difference
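The sample-size factor above can be demonstrated with a small simulation. This is a sketch under stated assumptions (a two-tailed z-test, a hypothetical 0.5-SD effect, alpha = .05); the function name and parameters are illustrative:

```python
import random
import statistics

def power_sim(n, effect=0.5, crit_z=1.96, trials=2000, seed=1):
    """Estimate power of a two-tailed z-test for a mean shift of `effect` SDs."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        sample = [rng.gauss(effect, 1) for _ in range(n)]
        z = statistics.mean(sample) * n ** 0.5  # SE of the mean is 1/sqrt(n)
        if abs(z) > crit_z:
            hits += 1  # null rejected: effect detected
    return hits / trials

low = power_sim(10)   # smaller sample -> lower power
high = power_sim(50)  # larger sample -> higher power
```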

assumptions of parametric tests

Normal distribution of the dependent variable
Homogeneity of variance
Independence of observations (most critical)

Def: F statistic

In an ANOVA, the ratio of between-group variance to within-group variance
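The mean-square and F-ratio formulas from the cards above can be sketched for a one-way ANOVA with hypothetical group scores:

```python
import statistics

# Hypothetical scores for three treatment groups
groups = [[4, 5, 6], [6, 7, 8], [8, 9, 10]]
k = len(groups)                        # number of groups
n_total = sum(len(g) for g in groups)
grand_mean = statistics.mean([x for g in groups for x in g])

ss_between = sum(len(g) * (statistics.mean(g) - grand_mean) ** 2 for g in groups)
ss_within = sum((x - statistics.mean(g)) ** 2 for g in groups for x in g)

ms_between = ss_between / (k - 1)      # mean square = sum of squares / df
ms_within = ss_within / (n_total - k)
F = ms_between / ms_within             # between-group / within-group variance
```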


Common nonparametric tests

Chi-square: frequencies of nominal data
Mann-Whitney U: nonparametric equivalent of a t-test; 2 independent groups, rank-ordered scores
Wilcoxon Matched-Pairs test: nonparametric equivalent of a t-test for correlated scores
Kruskal-Wallis test: nonparametric alternative to a one-way ANOVA

ANOVA posthoc tests

Scheffé: provides the greatest protection against Type I error, but increases the probability of Type II error
Tukey: most appropriate for pairwise comparisons

Calculation of chi-square expected frequencies

Simple case: total subjects / number of cells
Complex case: (column total × row total) / total N
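The complex case can be sketched in Python for a hypothetical 2x2 contingency table:

```python
# Hypothetical 2x2 contingency table of observed frequencies
observed = [[10, 20],
            [30, 40]]
row_totals = [sum(row) for row in observed]        # [30, 70]
col_totals = [sum(col) for col in zip(*observed)]  # [40, 60]
n = sum(row_totals)                                # 100

# Complex case: (column total * row total) / total N, for every cell
expected = [[r * c / n for c in col_totals] for r in row_totals]
```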

Assumptions of Pearson r

Linear relationship between variables
Homoscedasticity: equal variability on Y throughout the X range
r is highest when using the full range of scores on both variables

Use: discriminant function analysis

Scores on multiple variables are combined to determine group assignment
(in contrast to multiple regression, in which multiple variables are combined to predict a score)

Def: differential validity

in discriminant analysis, each predictor has a high correlation with a single category criterion and a low correlation with the other category criteria
IQ has low differential validity 

Use: structural equation modeling

testing causal models based on multiple variables


Techniques of structural equation modeling *

Path analysis: one-way causal relationships among observed variables
LISREL: one- or two-way causal analysis with both observed and inferred (latent) variables; helps sort out the contributions of true-score and error variance

Use: trend analysis

Determination of the shape of the relationship between variables (e.g., linear, quadratic, cubic, quartic...)
Yields the significance of each trend

Def: sampling distribution

A distribution of the values of a statistic (e.g., the mean), with each value computed from same-sized samples drawn with replacement from the population
Has less variability than the population distribution

Central limit theorem *

1. As sample size increases, the shape of the sampling distribution of means approaches a normal shape, even if the distribution of scores is not normal
2. The mean of the sampling distribution of means is equal to the mean of the population
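Both points can be illustrated with a simulation sketch using a deliberately skewed, hypothetical population (an exponential distribution):

```python
import random
import statistics

random.seed(2)
# A clearly skewed (non-normal) population
population = [random.expovariate(1.0) for _ in range(50_000)]
pop_mean = statistics.mean(population)

# Sampling distribution of means, samples of n = 30
means = [statistics.mean(random.sample(population, 30)) for _ in range(3_000)]

# Point 2: the mean of the sample means matches the population mean
# Point 1: the distribution of means is far less variable and near-normal
# in shape, even though the population itself is skewed
```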

Rosenthal effect

aka the experimenter expectancy effect
The unintentional influence the experimenter exerts toward making the results come out as expected

experimentwise error rate

Probability of making at least one Type I error when multiple comparisons are made in a single experiment


heteroscedasticity

Unequal variability of Y scores at different values of X


Effect on t test when comparison groups are highly correlated

Within group variability is suppressed, giving an artificially high t value


Threats to external validity *

Interaction between selection and treatment would create problems in generalization


Use: tetrachoric coefficient

2 artificially dichotomous variables


Use: paired t test

Analysis of means when groups are not independent (eg twin studies or repeated measures)
df = number of pairs − 1

ANOVA vs multiple regression

ANOVA uses categorical independent variables only
Multiple regression can use either categorical or continuous variables 