38 Cards in this Set
 Front
 Back
Sampling distribution

The probability distribution of a statistic from a random sample or randomized experiment


Population distribution

the distribution of a variable's values for all members of the population (also the probability distribution when choosing one individual at random from the population)


Count of X

The number of occurrences of some outcome in a fixed number of observations


Sample proportion

For a sample of n observations:
p^ = X / n

Binomial setting (4)

1. There is a fixed number n of observations
2. The n observations are all independent
3. Each observation falls into a success or a failure category
4. The probability p of success is the same for each observation

Binomial distribution

The distribution of the count X of successes in the binomial setting (parameters n and p)


Possible values of X

Whole numbers from 0 to n (n + 1 values); X is B(n, p)


Binomial mean and standard deviation

µ[X] = np
σ[X] = sqrt(np(1 − p))
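As a quick numerical check of these two formulas, a minimal Python sketch (n and p here are illustrative values, not from the cards):

```python
import math

n, p = 100, 0.3  # illustrative values for a binomial count X ~ B(n, p)

mu = n * p                           # mean of X: np
sigma = math.sqrt(n * p * (1 - p))   # standard deviation of X: sqrt(np(1 - p))

print(round(mu, 4))      # 30.0
print(round(sigma, 4))   # 4.5826
```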

Mean and standard deviation of a sample proportion

µ[p^] = p
σ[p^] = sqrt(p(1 − p)/n)

Binomial coefficient

The number of ways of arranging k successes among n observations:
n! / (k!(n − k)!)

Binomial probability

P(X = k) = (n choose k)(p^k)(1 − p)^(n − k)
(n choose k) is the binomial coefficient
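The binomial probability formula can be evaluated directly; a small Python sketch (the values of n, k, and p are illustrative):

```python
import math

def binomial_pmf(n, k, p):
    """P(X = k) = C(n, k) * p^k * (1 - p)^(n - k) for X ~ B(n, p)."""
    return math.comb(n, k) * p**k * (1 - p)**(n - k)

# Example: exactly 3 successes in 10 observations with success probability 0.5
print(round(binomial_pmf(10, 3, 0.5), 4))  # 0.1172
```

Summing the pmf over all possible values of X (0 to n) returns 1, which is a handy sanity check.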

Averages vs. individual observations (2)

1. Averages are less variable than individual observations
2. Averages are more normal than individual observations 

Mean and standard deviation of sample means

µ[xbar] = µ
σ[xbar] = σ / sqrt(n) 

Sampling distribution of a sample mean

If a population has the N(µ, σ) distribution, then the sample mean xbar of n independent observations has the N(µ, σ/sqrt(n)) distribution
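A one-line check of the standard deviation of xbar, with illustrative µ, σ, and n:

```python
import math

mu, sigma, n = 50, 8, 16   # illustrative population mean, sd, and sample size

se = sigma / math.sqrt(n)  # xbar ~ N(mu, sigma / sqrt(n))
print(se)  # 2.0
```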


Central limit theorem

Draw an SRS of size n from any population with mean µ and finite standard deviation σ; when n is large, the sampling distribution of the sample mean xbar is approximately N(µ, σ/sqrt(n))


Number of observations to achieve normality

Depends on the normality of the population: the more normal it is, the fewer observations are needed


3 facts related to the CLT and normal approximation

1. The normal approximation for sample proportions and counts is an example of the central limit theorem
2. Any linear combination of independent normal random variables is also normally distributed
3. More general versions of the central limit theorem say that the distribution of a sum or average of many small random quantities is close to normal

Confidence interval (form)

estimate ± margin of error


Margin of error

Shows how accurate we believe our guess is based on the variability of the estimate


Confidence level (C)

Shows how confident we are that the procedure will catch the true population mean µ


Confidence interval (level C)

An interval computed from sample data (for a parameter) by a method that has probability C of producing an interval containing the true value of the parameter


z*

The number such that any normal distribution has probability C within ± z* standard deviations of its mean


Margin of error from SRS of size n from population with mean µ and standard deviation σ

m = z*(σ/sqrt(n))


Confidence interval based on sample mean and margin of error

xbar ± m
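Putting the last two cards together, a short Python sketch (xbar, σ, n, and the 95% value of z* are all illustrative):

```python
import math

xbar, sigma, n = 64.5, 2.5, 25  # illustrative sample mean, population sd, size
z_star = 1.96                   # z* for confidence level C = 0.95

m = z_star * sigma / math.sqrt(n)             # margin of error
ci = (round(xbar - m, 3), round(xbar + m, 3))  # xbar ± m
print(round(m, 3), ci)  # 0.98 (63.52, 65.48)
```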


Ways to reduce margin of error (3)

1. Use a lower level of confidence (C)
2. Increase the sample size (n)
3. Reduce σ

Sample size for desired margin of error

n = (z*σ/m)^2
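Since n must be a whole number of observations, the result is rounded up; a sketch with illustrative σ and m:

```python
import math

sigma, m = 15, 2   # illustrative population sd and desired margin of error
z_star = 1.96      # z* for 95% confidence

n = math.ceil((z_star * sigma / m) ** 2)  # round up so the margin is achieved
print(n)  # 217
```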


Explanations for something happening a very small proportion of the time (2)

1. We have observed something that is very unusual
2. The assumption underlying the calculation (no difference in the means of the two groups) is not true

Null hypothesis

The statement being tested in a test of significance, which is designed to assess the strength of the evidence against the null hypothesis (usually a statement of no effect or no difference)


Alternative hypothesis

The statement we suspect is true instead of the null


Hypotheses and parameters

They always refer to some population or model, not to particular outcomes, so Ho and Ha must be stated in terms of population parameters


Test statistic

z = (estimate − hypothesized value)/(σ of the estimate); measures compatibility between the null hypothesis and the data


Test of significance

Finds the probability of getting an outcome as extreme as or more extreme than the actually observed outcome


P-value

The probability, computed assuming Ho is true, that the test statistic would take a value as extreme as or more extreme than that actually observed (the smaller the P-value, the stronger the evidence against Ho)


Statistical significance

If the P-value is as small as or smaller than the significance level alpha, we say that the data are statistically significant at level alpha


Steps in tests of significance (4)

1. State Ho and Ha
2. Calculate the value of the test statistic
3. Find the P-value
4. Either reject Ho or say the data do not provide sufficient evidence to reject the null

Null hypothesis/test statistic of a test for population mean

Ho: µ = µo
z = (xbar − µo)/(σ/sqrt(n))

z-test for population mean

Ha: µ > µo: P-value is P(Z ≥ z)
Ha: µ < µo: P-value is P(Z ≤ z)
Ha: µ ≠ µo: P-value is 2P(Z ≥ |z|)
(exact P-values for a normal population distribution; approximately correct for large n in other cases)
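These P-values can be computed with the standard normal CDF; a sketch using only the Python standard library (the data values are illustrative):

```python
import math

def normal_cdf(z):
    """Standard normal CDF, via the error function."""
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Illustrative test of Ho: mu = 100 against Ha: mu > 100
xbar, mu0, sigma, n = 103, 100, 10, 25

z = (xbar - mu0) / (sigma / math.sqrt(n))   # test statistic
p_one_sided = 1 - normal_cdf(z)             # P(Z >= z) for Ha: mu > mu0
p_two_sided = 2 * (1 - normal_cdf(abs(z)))  # 2P(Z >= |z|) for Ha: mu != mu0

print(round(z, 2))            # 1.5
print(round(p_one_sided, 4))  # 0.0668
print(round(p_two_sided, 4))  # 0.1336
```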

Two-sided significance tests and confidence intervals

A level alpha two-sided significance test rejects the hypothesis Ho: µ = µo exactly when the value µo falls outside a level 1 − alpha confidence interval for µ
