43 Cards in this Set

  • Front
  • Back
frequency
number of participants or cases; N=a population, n=a sample
proportion
part of 1
frequency distribution/polygon
a table/drawing that shows how many participants have each score
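A minimal Python sketch, using made-up scores (not from these cards), of how frequencies and proportions relate: tally how many cases have each score, then divide each frequency by n to get a part of 1.

    # Build a frequency distribution and convert frequencies to proportions.
    from collections import Counter

    scores = [3, 4, 4, 5, 5, 5, 6, 7]   # hypothetical sample, n = 8
    freq = Counter(scores)              # frequency (f) of each score
    n = len(scores)

    for score in sorted(freq):
        proportion = freq[score] / n    # part of 1; multiply by 100 for a percentage
        print(f"score {score}: f = {freq[score]}, proportion = {proportion:.3f}")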
positive skew
distribution curve with longer tail to the right
negative skew
distribution curve with longer tail to the left
bimodal distribution
has two high points, most likely to emerge when human intervention or a rare event has changed the composition of a population
mean
balance point in a distribution of scores, or, the point around which all the deviations sum to zero
computation of mean
sum the scores and divide by the number of scores; symbolized by M, m, or x-bar
notes on mean
pulled in direction of extreme scores therefore inappropriate for skewed distributions; should only be used with interval and ratio scales
median
middle point of a distribution, 50% of cases above and 50% below it.
notes on the median
insensitive to extreme scores therefore good for skewed distributions; cannot be used on nominal data
mode
most frequently occurring score; can be used for nominal data, though percentages may be more informative
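A short sketch, using Python's standard statistics module and hypothetical scores, of the three measures of central tendency and of how an extreme score pulls the mean but not the median (which is why the median is preferred for skewed distributions).

    import statistics

    scores = [4, 5, 5, 6, 7]
    skewed = [4, 5, 5, 6, 70]   # one extreme score on the right (positive skew)

    print(statistics.mean(scores), statistics.median(scores), statistics.mode(scores))
    # 5.4 5 5
    print(statistics.mean(skewed), statistics.median(skewed))
    # 18 5   (the mean is pulled toward the extreme score; the median is unchanged)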
variability
differences among scores; also called spread or dispersion
range
difference between the highest and lowest score (a weakness: it is based on only the two extreme scores)
outliers
scores that lie far outside the range of most other scores
interquartile range
range of the middle 50% of the participants (ignores outliers)
standard deviation
most frequently used measure of variability; provides an overall measurement of how much scores differ from the mean
standard dev. and normal curve
about 68% or two-thirds of cases will lie within one standard deviation of the mean
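A sketch of the three variability measures on a hypothetical sample, again using the standard statistics module (statistics.quantiles requires Python 3.8 or later).

    import statistics

    scores = [2, 4, 4, 4, 5, 5, 7, 9]

    value_range = max(scores) - min(scores)          # based on only the two extreme scores
    q1, q2, q3 = statistics.quantiles(scores, n=4)   # quartiles; q3 - q1 is the interquartile range
    sd = statistics.stdev(scores)                    # sample standard deviation

    print(value_range, q3 - q1, round(sd, 2))
    # In a roughly normal distribution, about 68% of cases fall
    # between (mean - sd) and (mean + sd).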
correlation
extent to which two variables are related
census
study in which all members of a population are included
direct relationship is also called
positive relationship (high or low on both variables)
inverse relationship is also called
negative relationship (high on one variable, low on the other)
correlation is not
causation
what is needed to study cause and effect?
a controlled experiment
Pearson correlation coefficient (Pearson r, Pearson product-moment correlation coefficient)
describes the relationship between 2 variables; ranges from -1.00 (perfect inverse relationship) to 1.00 (perfect direct relationship); 0 indicates the absence of a relationship (the coefficient is NOT a proportion)
coefficient of determination (r squared)
when converted to a percentage (by multiplying by 100) tells us how effective one variable is in predicting another
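A sketch of Pearson r and the coefficient of determination for hypothetical paired scores; statistics.correlation requires Python 3.10 or later.

    import statistics

    x = [1, 2, 3, 4, 5]
    y = [2, 4, 5, 4, 6]

    r = statistics.correlation(x, y)   # ranges from -1.00 (inverse) to 1.00 (direct)
    r_squared = r ** 2                 # coefficient of determination

    print(round(r, 2), f"{r_squared * 100:.0f}% of the variance in y predicted from x")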
null hypothesis
for the difference between 2 sample means, the true difference between the means is zero; or, there is no true difference between the means
research hypothesis
a researcher's "expectation" or personal hypothesis
directional hypothesis
that one group's average will be higher than another's
nondirectional hypothesis
that the two groups' averages will differ, but there is insufficient information to predict which will be higher
significance test (italicized p)
tests the null hypothesis and yields a probability that it is true
what kinds of tests are used to test the null hypothesis?
inferential tests
probability level
(also called alpha level) the probability level at which the null hypothesis is rejected, i.e., declared not true; commonly .05
Type I error
the error of rejecting the null hypothesis when it is correct
synonym for "rejecting the null hypothesis"
results are statistically significant
Type II error
error of failing to reject the null hypothesis when it is false
t-test
tests the difference between two sample means to determine statistical significance and yields a probability that the null hypothesis is correct
3 factors resulting in a low probability that the null hypothesis is correct (i.e., that it will be rejected)
1) larger samples (reduces sampling error)
2) larger difference between means (random sampling produces few large differences)
3) smaller variance (less sampling error)
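A sketch of an independent-samples t test on two hypothetical groups. This assumes the third-party SciPy library (not mentioned in the cards) is installed.

    from scipy import stats

    group_a = [78, 85, 90, 72, 88, 81, 79, 84]
    group_b = [70, 75, 80, 68, 77, 74, 72, 76]

    result = stats.ttest_ind(group_a, group_b)   # tests the difference between two sample means
    print(round(result.statistic, 2), round(result.pvalue, 4))

    # If p is below the alpha level (commonly .05), reject the null hypothesis
    # and call the difference between the means statistically significant.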
chi-square
tests for differences among frequencies, i.e., for nominal data where the number of cases and percentages are reported (a mean or standard deviation cannot be computed for nominal data)
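A companion sketch of a chi-square test on nominal frequencies, again assuming SciPy; by default stats.chisquare compares the observed counts against equal expected counts.

    from scipy import stats

    observed = [30, 45, 25]   # hypothetical counts for three response categories

    result = stats.chisquare(observed)
    print(round(result.statistic, 2), round(result.pvalue, 4))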
is a significant difference (i.e., a reliable one) the same as a large difference?
no
limitations of significance testing
1) fails to indicate size of difference
2) does not assess the practical significance of a difference
effect size
standardizes the size of the difference between two means
Cohen's d
measure of effect size, in standard deviation units (there are only about 3 standard deviation units above and below the mean).
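A sketch of Cohen's d for the same two hypothetical groups used in the t-test example: the difference between the means divided by the pooled standard deviation, so the result is expressed in standard deviation units.

    import statistics

    group_a = [78, 85, 90, 72, 88, 81, 79, 84]
    group_b = [70, 75, 80, 68, 77, 74, 72, 76]

    mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
    var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
    n_a, n_b = len(group_a), len(group_b)

    pooled_sd = (((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2)) ** 0.5
    d = (mean_a - mean_b) / pooled_sd   # Cohen's d in standard deviation units
    print(round(d, 2))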