15 Cards in this Set

  • Front
  • Back
Statistical distribution
Measures of central tendency: mean, median, mode.

Measures of dispersion: standard deviation (SD), standard error of the mean (SEM), Z-score, confidence interval.
Normal Distribution
Gaussian (bell-shaped)
Mean = median = mode
Standard deviation and standard error of the mean
SEM = SD/√n, where n = sample size

SEM decreases as n increases
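
A quick sketch of this relationship, assuming Python with numpy (the sample values are hypothetical):

```python
import numpy as np

sample = np.array([4.1, 5.3, 6.0, 5.5, 4.8, 5.9, 6.2, 5.1])  # hypothetical measurements

sd = sample.std(ddof=1)          # sample standard deviation (n - 1 in the denominator)
sem = sd / np.sqrt(len(sample))  # SEM = SD / sqrt(n)

print(f"SD = {sd:.3f}, SEM = {sem:.3f}")
# A larger n (with the same SD) would shrink the SEM toward zero.
```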
Positive skew
Positive skew:

Typically, mean > median > mode.

Asymmetry of distribution curve, with longer tail on the right.

Mode is least affected by outliers in the sample.
Negative skew
Negative skew:

Typically, mean < median < mode.

Asymmetrical distribution curve, with longer tail on the left.
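
A minimal simulation of how skew shifts the mean relative to the median, assuming Python with numpy (the exponential sample and its mirror image are arbitrary choices):

```python
import numpy as np

rng = np.random.default_rng(0)

# Positively skewed data (long right tail): the mean is pulled above the median.
right_skewed = rng.exponential(scale=1.0, size=10_000)
print(f"positive skew: mean = {right_skewed.mean():.2f} > median = {np.median(right_skewed):.2f}")

# Negatively skewed data (long left tail): mirror image, the mean falls below the median.
left_skewed = -right_skewed
print(f"negative skew: mean = {left_skewed.mean():.2f} < median = {np.median(left_skewed):.2f}")
```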
Null hypothesis (H0)
Null hypothesis = hypothesis of no difference
-e.g., there is no association between the disease and the risk factor in the population.
Alternative hypothesis (H1)
Alternative hypothesis = hypothesis of some difference
-e.g., there is some association between the disease and the risk factor in the population.
Type I error (alpha)
Type I error = false-positive error.

Stating that there is an effect or difference when none actually exists, i.e., mistakenly accepting the alternative hypothesis and rejecting the null hypothesis.

"Alpha" = You sAw a difference that did not exist, e.g., convicting an innocent man.

Alpha is the probability of making a type I error.

The p value is judged against a preset alpha level of significance (usually .05).

-If p < .05, there is less than a 5% chance that the data will show something that is not really there.
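
One way to see that alpha is the type I error rate is to simulate many studies in which H0 is actually true and count how often p < .05; a sketch assuming Python with numpy and scipy:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
alpha = 0.05
trials = 5_000

# Both groups come from the same distribution, so the null hypothesis is true.
false_positives = 0
for _ in range(trials):
    a = rng.normal(loc=0.0, scale=1.0, size=30)
    b = rng.normal(loc=0.0, scale=1.0, size=30)
    _, p = stats.ttest_ind(a, b)
    if p < alpha:                 # "saw" a difference that does not exist
        false_positives += 1

print("observed type I error rate:", false_positives / trials)   # close to 0.05
```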
Type II error (beta)
Type II error = false-negative error.

-Stating that there is not an effect or difference when one does exist, i.e., to fail to reject the null hypothesis when it is in fact false.

Beta is the probability of making a type II error.

"Beta = you were Blind to a difference that did exist," e.g., letting a guilty man go free.
Power (1 – beta)
Power is the probability of rejecting the null hypothesis when it is in fact false, i.e., the likelihood of finding a difference if one in fact exists.

Power increases with:
-Increased sample size ("there's power in numbers")
-Increased expected effect size
-Increased precision of measurement
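
A sketch of the first two relationships using the power calculator in statsmodels (an assumption; the Cohen's d values and group sizes are arbitrary):

```python
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()

# Power rises with sample size ("there's power in numbers")...
for n in (20, 50, 100):
    power = analysis.power(effect_size=0.5, nobs1=n, alpha=0.05)
    print(f"d = 0.5, n per group = {n:3d}: power = {power:.2f}")

# ...and with the expected effect size.
for d in (0.2, 0.5, 0.8):
    power = analysis.power(effect_size=d, nobs1=50, alpha=0.05)
    print(f"d = {d}, n per group = 50: power = {power:.2f}")
```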
Meta-analysis
A meta-analysis pools data and integrates results from several similar studies to reach an overall conclusion.

It increases statistical power, i.e., the probability of rejecting the null hypothesis when it is in fact false.

Meta-analysis is limited by the quality of individual studies, or by bias in study selection.
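
As an illustration of the pooling idea only, here is a minimal fixed-effect (inverse-variance-weighted) sketch in Python with numpy; the study estimates and standard errors are made up, and real meta-analyses often use more elaborate (e.g., random-effects) models:

```python
import numpy as np

# Hypothetical effect estimates (e.g., mean differences) and standard errors from 4 studies.
effects = np.array([0.30, 0.45, 0.10, 0.38])
ses     = np.array([0.20, 0.15, 0.25, 0.18])

weights = 1 / ses**2                         # inverse-variance weights
pooled = np.sum(weights * effects) / np.sum(weights)
pooled_se = np.sqrt(1 / np.sum(weights))     # smaller than any single study's SE -> more power

print(f"pooled effect = {pooled:.2f}, 95% CI = "
      f"[{pooled - 1.96 * pooled_se:.2f}, {pooled + 1.96 * pooled_se:.2f}]")
```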
Confidence interval
Confidence interval is a range of values in which a specified probability of the means of repeated samples would be expected to fall.

CI = confidence interval
CI = range from [mean – Z(SEM)] to [mean + Z(SEM)]

The 95% CI is often used. This corresponds to p < .05, which indicates there is less than a 5% chance that the data will show something that is not really there.

For the 95% CI, Z = 1.96
For the 99% CI, Z = 2.58
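
A minimal sketch of the mean ± Z(SEM) formula, assuming Python with numpy (the sample is hypothetical; with small samples a t-multiplier is more commonly used than Z):

```python
import numpy as np

sample = np.array([12.1, 11.4, 13.0, 12.6, 11.9, 12.8, 13.3, 12.2, 11.7, 12.5])

mean = sample.mean()
sem = sample.std(ddof=1) / np.sqrt(len(sample))   # SEM = SD / sqrt(n)

for z, label in ((1.96, "95%"), (2.58, "99%")):
    lo, hi = mean - z * sem, mean + z * sem       # mean - Z(SEM) to mean + Z(SEM)
    print(f"{label} CI: [{lo:.2f}, {hi:.2f}]")
```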
Interpreting confidence interval values
If the 95% CI for a mean difference between 2 variables includes 0, then there is no significant difference and H0 (null hypothesis) is not rejected.

If the 95% CI for odds ratio or relative risk includes 1, H0 is not rejected.

If the CIs between 2 groups do not overlap, then significant difference exists.

If the CIs between 2 groups do overlap, then usually no significant difference exists.
t-test vs. ANOVA vs. Chi-square

t-test: checks the difference between the means of 2 groups

-"Mr. T is mean."

ANOVA: checks the difference between the means of 3 or more groups

-"ANOVA = ANalysis Of VAriance" of 3 or more groups

Chi-square (χ²) test: checks the difference between 2 or more percentages or proportions of categorical outcomes (not mean values).

-Chi-square compares percentages or proportions
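
For reference, a sketch of the three tests using scipy (an assumption; the group data and 2×2 table below are fabricated for illustration):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
group_a = rng.normal(5.0, 1.0, 40)   # hypothetical continuous measurements
group_b = rng.normal(5.6, 1.0, 40)
group_c = rng.normal(5.3, 1.0, 40)

# t-test: compares the MEANS of 2 groups.
t_stat, p_t = stats.ttest_ind(group_a, group_b)

# ANOVA: compares the MEANS of 3 or more groups.
f_stat, p_f = stats.f_oneway(group_a, group_b, group_c)

# Chi-square: compares PROPORTIONS of categorical outcomes (here a 2x2 table).
table = np.array([[30, 10],    # e.g., exposed:   30 diseased, 10 not
                  [18, 22]])   #      unexposed:  18 diseased, 22 not
chi2, p_chi, dof, expected = stats.chi2_contingency(table)

print(f"t-test p = {p_t:.3f}, ANOVA p = {p_f:.3f}, chi-square p = {p_chi:.3f}")
```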
Pearson's correlation coefficient (r)
Pearson's correlation coefficient (r) is always between -1 and +1. The closer the absolute value of r is to 1, the stronger the linear correlation between the two variables.

-Coefficient of determination = r^2. This is the value that is usually reported.
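
A short illustration with scipy (an assumption; x and y are made-up, roughly linear data):

```python
import numpy as np
from scipy import stats

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])    # hypothetical predictor
y = np.array([2.1, 3.9, 6.2, 8.1, 9.8, 12.3])   # roughly linear response

r, p = stats.pearsonr(x, y)
print(f"r   = {r:.3f}")     # close to +1: strong positive linear correlation
print(f"r^2 = {r**2:.3f}")  # coefficient of determination, the value usually reported
```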