21 Cards in this Set

  • Front
  • Back
Normal Distribution
Very common and useful.

Describes many REAL variables:
height, weight, IQ, SAT.

Can be used to calculate probabilities.
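For instance, a minimal sketch of how such probabilities could be computed (assuming Python with scipy; the mean of 100 and SD of 15, IQ-like, are illustrative values, not from the course):

```python
# Sketch: calculating probabilities from a normal distribution.
# scipy is assumed; the mean of 100 and SD of 15 (IQ-like) are illustrative.
from scipy.stats import norm

iq = norm(loc=100, scale=15)

p_below_115 = iq.cdf(115)                 # P(score < 115)
p_85_to_115 = iq.cdf(115) - iq.cdf(85)    # P(85 < score < 115), roughly 0.68

print(round(p_below_115, 3), round(p_85_to_115, 3))
```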
Characteristics of the normal distribution
Bell shaped; symmetrical; mean, median, and mode are equal; tails extend indefinitely; area under the curve equals 1.0.
You cannot directly compare scores when the mean and standard deviation are:
different. The mean and SD have to be the same to compare scores directly.
Standard scores
Scores with a set mean and standard deviation.
e.g., IQ, GRE, t scores
z score (one of the standard scores)
States how many standard deviations the original score is above or below the mean of a distribution.

You calculate z scores to compare scores from distributions with different means and standard deviations.
Characteristics of z scores
The mean of any set of z scores equals 0.0
The standard deviation of any set of z scores equals 1.0
The shape of a distribution of z scores maintains the shape of the original score distribution from which the z scores were derived.
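A quick numeric check of the first two characteristics (a sketch assuming numpy; the raw scores are made-up values):

```python
# Sketch: standardizing any set of scores gives mean ~0 and SD ~1,
# while the shape of the distribution is unchanged.
# numpy is assumed; the raw scores below are made-up values.
import numpy as np

scores = np.array([52.0, 61.0, 70.0, 74.0, 88.0, 95.0])
z = (scores - scores.mean()) / scores.std()

print(z.mean().round(10), z.std().round(10))  # ~0.0 and 1.0
```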
z score calculation
z = (x − x̄) / SD

(the original score minus the mean, divided by the standard deviation)
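For example (illustrative numbers): a score of 130 with a mean of 100 and SD of 15 gives z = (130 − 100) / 15 = 2.0, i.e., two standard deviations above the mean.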
Random Sampling Distribution
A relative frequency distribution of a sample statistic, obtained from an unlimited series of sampling experiments, each consisting of a random sample of size n drawn from the population.

Ex. Random Sampling Distribution of the mean
Finding Areas Under the Curve
For z score distributions:
Use the z score to find the area (use the table in the book).
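As an alternative to the table, the area can also be computed directly (a sketch assuming scipy; z = 1.50 is an arbitrary example value):

```python
# Sketch: finding the area under the standard normal curve without the table.
# scipy is assumed; z = 1.50 is an arbitrary example value.
from scipy.stats import norm

z = 1.50
area_below = norm.cdf(z)       # area to the left of z (~0.9332)
area_above = 1 - norm.cdf(z)   # area to the right of z (~0.0668)

print(round(area_below, 4), round(area_above, 4))
```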
Random Sampling Distribution of the Mean
The relative frequency distribution of means, obtained from an unlimited series of sampling experiments, each consisting of a random sample of size n drawn from the population.

Basically: little "x bars" make a distribution...
Central Limit Theorem
The random sampling distribution of the mean approaches normal shape as the sample size increases, regardless of the shape of the population distribution.

It doesn't matter what the shape of the population is; as the sample size grows, the random sampling distribution of the mean becomes approximately normal in shape.
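A small simulation sketch of this idea (assuming numpy; the skewed exponential population, n = 30, and 5,000 repetitions are illustrative choices, not values from the course):

```python
# Sketch: sample means look roughly normal even when the population is skewed.
# numpy is assumed; the exponential population, n = 30, and 5,000 samples
# are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)
population = rng.exponential(scale=2.0, size=100_000)   # strongly skewed

n = 30
means = np.array([rng.choice(population, size=n).mean() for _ in range(5_000)])

print(round(population.mean(), 2), round(means.mean(), 2))   # both near 2.0
print(round(means.std(), 2))                                 # near pop SD / sqrt(n)
```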
The Logic of Hypothesis Testing: Modus Tollens
Deductive argument of the form "If P, then Q." Given -Q (not Q), you conclude -P (not P).

If it's raining, then the streets are wet. (If P, then Q)

The streets are not wet (-Q); therefore, it's not raining. (-P)
Null Hypothesis
(H₀) The "fake" hypothesis, which is assumed to be true, tested directly, and either rejected or not rejected.
Alternative hypothesis
(H₁)
The hypothesis believed to be true by the researcher
Level of significance
The probability that specifies how rare a sample result must be in order to reject the null hypothesis (which is assumed to be true).
Establish region(s) of rejection
Area under the curve where the null hypothesis is rejected if the sample result falls within the region (i.e., identifies which sample results are to be considered -Q).

(See "Inferential Statistics" powerpoint Slide #10 for chart)
When to Reject the null hypothesis
When the calculated value of the test statistic equals or exceeds the critical value, our decision is to reject the null hypothesis.
When to not reject the null hypothesis
When the calculated value of the test statistic is less than the critical value, we do NOT accept the null hypothesis. Rather, we "do not reject" the null hypothesis.
There is insufficient evidence to reject the null hypothesis.
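A sketch of this decision rule as code (assuming a two-tailed z test at α = .05; the critical value 1.96 and the sample numbers are illustrative assumptions, not from the course):

```python
# Sketch of the decision rule: reject H0 only when the calculated test
# statistic reaches the critical value. The two-tailed z test at alpha = .05,
# the critical value 1.96, and the sample numbers are illustrative assumptions.
import math

mu0, xbar, sd, n = 100.0, 104.5, 15.0, 49        # hypothetical sample summary
z_calc = (xbar - mu0) / (sd / math.sqrt(n))      # calculated test statistic
z_crit = 1.96                                    # critical value, alpha = .05

if abs(z_calc) >= z_crit:
    print("Reject the null hypothesis")
else:
    print("Do not reject the null hypothesis")
```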
To "accept" the null hypothesis would be to make a fallacious argument called:
Affirming the consequent
Statistical Significance
Indicates the null hypothesis was rejected at a specified α level.
Practical Significance
Refers to the real-life importance of the result. That is, does the observed difference between the null hypothesis and the alternative hypothesis really mean anything in practical terms?