23 Cards in this Set

  • Front
  • Back
Variance
The mean of all squared deviation scores.
σ^2 = SS/N (population)
s^2 = SS/(n-1) (sample)
Standard Deviation
√variance
A rough measure of the average amount by which scores deviate on either side of the mean.
Most distributions:
1 SD ≈ 68%, 2 SD ≈ 95%, 3 SD ≈ 99%
σ = √(SS/N) (population)
s = √(SS/(n-1)) (sample)
Sum of Squares
The sum of squared deviation scores.
SS = Σ(X - μ)^2
SS = ΣX^2 - (ΣX)^2/N
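The three cards above fit together: SS feeds the variance, and the SD is the square root of the variance. A minimal Python sketch (the data set is an invented example, not from the cards):

    import math

    scores = [2, 4, 4, 4, 5, 5, 7, 9]   # invented illustrative data
    n = len(scores)
    mean = sum(scores) / n

    # Sum of squares: SS = Σ(X - mean)^2
    ss = sum((x - mean) ** 2 for x in scores)

    # Population variance/SD divide SS by N; sample versions divide by n - 1
    pop_var, samp_var = ss / n, ss / (n - 1)
    pop_sd, samp_sd = math.sqrt(pop_var), math.sqrt(samp_var)

    print(ss, pop_var, pop_sd, samp_var, samp_sd)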
Degrees of Freedom
The number of values free to vary, given one or more mathematical restrictions.
Interquartile Range
The range for the middle 50% of all scores.
z score
A unit-free, standardized score that, regardless of the original units of measurement, indicates how many standard deviations a score is above or below the mean of its distribution.
z = (X - μ)/σ
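A quick sketch of the z-score formula; the raw score, mean, and SD below are invented for illustration:

    def z_score(x, mu, sigma):
        # How many SDs the raw score x lies above or below the mean mu
        return (x - mu) / sigma

    print(z_score(130, 100, 15))   # score of 130 on a scale with mean 100, SD 15 -> z = 2.0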
Standard Normal Curve
Always has a mean of 0 and an SD of 1. The total area under the curve equals 1.
Standard Score
A unit-free score expressed relative to a known mean and a known SD. Transformed standard scores are unit-free standard scores that lack negative signs and decimal points (for example, T scores).

z’= desired mean + (z)(desired SD)
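A sketch of the transformation on this card, using the T scale (mean 50, SD 10) as the target; the z value is an invented example:

    def transformed_score(z, desired_mean, desired_sd):
        # z' = desired mean + (z)(desired SD)
        return desired_mean + z * desired_sd

    print(transformed_score(1.5, 50, 10))   # -> 65.0 on the T scale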
Upper and Lower Bound of a 95% Confidence Interval
Mean +/- (1.96 x SE)
z-scores for 95%, 99%, and 99.9% confidence levels
+/-1.96, +/-2.58, +/-3.29
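A minimal sketch combining the two cards above: confidence-interval bounds from a mean and a standard error (the numbers are invented):

    def confidence_interval(mean, se, critical_z=1.96):
        # bounds = mean +/- (critical z x SE)
        margin = critical_z * se
        return mean - margin, mean + margin

    print(confidence_interval(100, 2.5))         # 95% CI
    print(confidence_interval(100, 2.5, 2.58))   # 99% CI
    print(confidence_interval(100, 2.5, 3.29))   # 99.9% CI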
Pearson Correlation Coefficient (r)
A number between -1 and 1 that describes the strength and direction of the linear relationship between pairs of quantitative variables.
r = SP_xy / √(SS_x × SS_y)
where
SP_xy = ΣXY - [(ΣX × ΣY)/n]
When used as a measure of effect size, ±0.1 is a small effect, ±0.3 is a medium effect, and ±0.5 is a large effect.
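A sketch of the definitional SP/SS formula from this card, applied to a small invented set of paired scores:

    import math

    x = [1, 2, 3, 4, 5]
    y = [2, 4, 5, 4, 5]   # invented pairs
    n = len(x)

    # SP_xy = ΣXY - (ΣX × ΣY)/n
    sp_xy = sum(a * b for a, b in zip(x, y)) - (sum(x) * sum(y)) / n

    # SS via the computational formula: SS = ΣX^2 - (ΣX)^2/n
    ss_x = sum(a * a for a in x) - sum(x) ** 2 / n
    ss_y = sum(b * b for b in y) - sum(y) ** 2 / n

    r = sp_xy / math.sqrt(ss_x * ss_y)
    print(r)   # about 0.77 here: a strong positive linear relationship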
Assumptions of Pearson Correlation Coefficient (r)
1. Variables are linearly related.
2. Data are interval or ratio.
3. If measuring significance, data should be normally distributed (although one of the variables can be categorical if there are only two categories)
Bivariate correlations
Correlation between two variables.
Examples: Pearson’s r, Spearman’s rho
Partial correlations
Looks at the relationship between two variables while controlling for a third variable.
r^2
Pearson’s correlation coefficient squared. r^2 is the amount (percentage) of variance in one variable that is explained by another.
r_s
Spearman’s correlation coefficient. A non-parametric statistic computed by first ranking the data and then applying Pearson’s equation to those ranks.
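A sketch of the procedure described on this card: rank each variable, then apply Pearson's equation to the ranks. scipy is used for convenience and the data are invented:

    from scipy.stats import rankdata, pearsonr, spearmanr

    x = [10, 20, 30, 40, 55]
    y = [1, 3, 2, 5, 4]   # invented data

    # Rank the data, then correlate the ranks with Pearson's equation
    r_from_ranks, _ = pearsonr(rankdata(x), rankdata(y))

    # scipy's built-in Spearman gives the same value
    r_s, _ = spearmanr(x, y)
    print(r_from_ranks, r_s)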
τ
Kendall’s tau. A non-parametric correlation used rather than Spearman’s r_s when you have a small data set with a large number of tied ranks. Probably more accurate than Spearman’s statistic.
Biserial (r_b) & Point-Biserial (r_pb) Correlation Coefficients
Correlation coefficient used when one of the two variables is dichotomous. The point-biserial correlation coefficient, r_pb, is used when one variable is a discrete dichotomy, whereas the biserial correlation coefficient, r_b, is used when one variable is a continuous dichotomy.
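A sketch of the point-biserial case only, using scipy's pointbiserialr on an invented binary grouping variable and a continuous outcome:

    from scipy.stats import pointbiserialr

    group = [0, 0, 0, 1, 1, 1]               # discrete dichotomy (e.g. two conditions)
    score = [4.1, 5.0, 4.6, 6.2, 5.9, 6.5]   # invented continuous outcome

    r_pb, p = pointbiserialr(group, score)
    print(r_pb, p)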
Standard error
Can be calculated by dividing the sample SD by the square root of the sample size (N): SE = s/√N.
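A one-line sketch of the standard error of the mean, with an invented sample:

    import math
    import statistics

    scores = [5, 7, 8, 9, 10, 12]   # invented sample
    se = statistics.stdev(scores) / math.sqrt(len(scores))   # sample SD / √N
    print(se)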
What are the assumptions of parametric data?
1. Normally distributed data
2. Homogeneity of variance
3. Interval data
4. Independence
What does the assumption of "homogeneity of variance" mean?
As you go through levels of one variable, the variance of the other variable should not change; that is, the variance of one variable should be roughly stable at all levels of the other variable.
How do you test for homogeneity of variance?
You can use Levene's test (you want it to be non-significant). You can also use the variance ratio (especially with large sample sizes), which is the ratio of the largest group variance to the smallest group variance; if the ratio is less than 2, it is safe to assume homogeneity of variance.
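A sketch of both checks described on this card, using scipy's Levene test plus a simple variance ratio; the two groups are invented:

    import statistics
    from scipy.stats import levene

    group_a = [4, 5, 6, 5, 4, 6, 5]
    group_b = [3, 7, 2, 8, 4, 6, 5]   # invented samples

    # Levene's test: a non-significant p-value is consistent with homogeneity
    stat, p = levene(group_a, group_b)

    # Variance ratio: largest group variance / smallest group variance
    variances = [statistics.variance(group_a), statistics.variance(group_b)]
    ratio = max(variances) / min(variances)

    print(p, ratio)   # ratio < 2 is commonly taken as acceptable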
What is the formula for the covariance of variables x & y?
cov(x,y) = Σ(X - X̄)(Y - Ȳ)/(N - 1)
A positive covariance indicates that as one variable deviates from the mean, the other variable deviates from the mean in the same direction.
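A sketch of the covariance formula on this card, with invented paired data:

    x = [2, 4, 6, 8]
    y = [1, 3, 2, 6]   # invented pairs
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n

    # cov(x, y) = Σ(X - mean_x)(Y - mean_y) / (N - 1)
    cov_xy = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y)) / (n - 1)
    print(cov_xy)   # positive here: the two variables deviate from their means together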