74 Cards in this Set

  • Front
  • Back
What is z*?
z* is the number of standard deviations from the mean that captures a given percent confidence interval. For example, 95% confidence corresponds to z* = 1.96 (about 2) standard deviations; the whole middle 95% is captured between –z* and z* standard deviations from the mean.
Margin of Error is calculated by
z*(σ / sqrt(n))
Confidence Interval is calculated by
xbar ± z*(σ / sqrt(n))
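To make the two formulas above concrete, here is a minimal Python sketch; the sample mean, σ, and n are hypothetical values chosen just for illustration.

```python
from math import sqrt

xbar, sigma, n = 24.8, 3.2, 50   # hypothetical sample mean, known sigma, sample size
z_star = 1.960                   # z* for 95% confidence

margin_of_error = z_star * (sigma / sqrt(n))
ci_low, ci_high = xbar - margin_of_error, xbar + margin_of_error

print(f"margin of error = {margin_of_error:.3f}")
print(f"95% CI for mu: ({ci_low:.3f}, {ci_high:.3f})")
```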
s (standard deviation for a sampling distribution) is calculated by
s = σ / sqrt(n)
z* for 95% confidence is
1.960
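If SciPy is available, z* for any confidence level can be recovered from the standard Normal distribution: a C% interval leaves (1 – C)/2 in each tail, so z* is the 1 – (1 – C)/2 quantile. A small sketch:

```python
from scipy.stats import norm

for confidence in (0.90, 0.95, 0.99):
    tail = (1 - confidence) / 2      # area left in each tail
    z_star = norm.ppf(1 - tail)      # quantile that captures the middle C%
    print(f"{confidence:.0%} confidence: z* = {z_star:.3f}")
```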
Null Hypothesis
The statement of "no difference." The test is designed to assess the strength of the evidence against the null hypothesis.
Alternative Hypothesis
The claim about the population that we are trying to find evidence for.
A small p-value means
the data provide strong evidence against H0.
A large p-value means
the data fail to give evidence against H0.
Level of Significance
Alpha (α), usually 0.01 or 0.05.
Statistically significant happens when
P < α. When results are statistically significant, we have evidence to reject H0.
Test Statistic
A statistic on which a statistical test is based, used to decide whether to reject the null hypothesis; for example, z and t.
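The cards on p-values, significance, and test statistics fit together as in the following sketch of a one-sample z test; all of the numbers are hypothetical, and SciPy's Normal distribution supplies the P-value.

```python
from math import sqrt
from scipy.stats import norm

mu0, xbar, sigma, n = 100, 103.5, 15, 40   # hypothetical claimed mean and sample results
alpha = 0.05

z = (xbar - mu0) / (sigma / sqrt(n))       # test statistic
p_value = 2 * norm.sf(abs(z))              # two-sided P-value

print(f"z = {z:.3f}, P-value = {p_value:.4f}")
print("reject H0" if p_value < alpha else "fail to reject H0")
```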
a smaller z* is the same as a
lower confidence level (a smaller percent confidence).
Because n appears under a square root sign, we must take four times as many observations in order to cut the margin of error in half.
z*(σ / sqrt(4n)) = (1/2) · z*(σ / sqrt(n))
How do you find standard error
s/sqrt(n)
What is standard error?
It is the same as the standard deviation of the sampling distribution of the sample mean, but computed with s because the population σ is unknown.
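A minimal sketch of that calculation, using a made-up sample and NumPy's sample standard deviation (ddof=1):

```python
import numpy as np

data = np.array([12.1, 9.8, 11.4, 10.7, 13.0, 10.2])  # hypothetical sample
s = data.std(ddof=1)                                   # sample standard deviation
standard_error = s / np.sqrt(len(data))                # s / sqrt(n)

print(f"s = {s:.3f}, standard error of xbar = {standard_error:.3f}")
```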
Level of Significance
Alpha
The probability of failing to reject a false null hypothesis
Beta
The hypothesis that the researcher wants to prove or verify; a statement about the value of a parameter that is either “less than,” “greater than,” or “not equal to.”
Alternative hypothesis
A test for comparing the means of two independent samples or two treatments where the test statistics has an approximate t distribution.
Approximate two-sample t test
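One way to run such a test in Python is SciPy's ttest_ind with equal_var=False, which uses an approximate (Welch) t distribution; the two groups below are hypothetical.

```python
import numpy as np
from scipy.stats import ttest_ind

# hypothetical measurements from two independent groups
group1 = np.array([5.1, 4.8, 5.6, 5.0, 4.9, 5.3])
group2 = np.array([4.4, 4.7, 4.2, 4.9, 4.5, 4.3])

# equal_var=False gives the Welch (approximate-df) version of the two-sample t test
t_stat, p_value = ttest_ind(group1, group2, equal_var=False)
print(f"t = {t_stat:.3f}, P-value = {p_value:.4f}")
```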
A statistical procedure for testing the equality of means using variances.
ANOVA (Analysis of Variance)
When sampling from a non-Normal population, the sampling distribution of xbar is approximately Normal whenever the sample is large and random.
Central Limit Theorem
The value of the parameter given in the null hypothesis. E.g. µ0 is a claimed parameter value.
Claimed parameter value
The basic premises for inferential procedures. If the conditions are not met, the results may not be valid.
Conditions
Normality of the original population & SRS. (Note: We can use a t-distribution procedure when n < 40 provided the data have no outliers. We must have an SRS, however.) Check (1) data collection and (2) if n < 40, check the outliers in data plot; if n ≥ 40, apply CLT.
Conditions necessary for a one-sample t procedure (using t* for C.I. or getting P-value from t table):
Conditions necessary for a two-sample t procedure (using t* for C.I. or getting P-value for t table)
Normality of both populations & either stratified sample (independent SRS’s) or random allocation. Check (1) data collection and (2) if n1 + n2 < 40, check for outliers in both data plots; if n1 + n2 ≥ 40, apply CLT.
Conditions necessary for ANOVA
Normality of all populations, equality of variances & either stratified random sample (independent SRS’s) or random allocation. Check (1) data collection, (2) if n1 + n2 + … + nk < 40, check for outliers in all k data plots; if n1 + n2 + … + nk ≥ 40, apply CLT, and (3) largest standard deviation divided by smallest standard deviation < 2.
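As an illustration of checking these conditions and then running the test, here is a sketch with three hypothetical groups; the variance check follows the rule above, and SciPy's f_oneway performs the ANOVA F test.

```python
import numpy as np
from scipy.stats import f_oneway

# hypothetical samples from k = 3 groups
groups = [np.array([21.0, 23.5, 22.1, 24.0]),
          np.array([19.8, 20.5, 21.2, 20.0]),
          np.array([25.1, 24.3, 26.0, 25.5])]

# equal-variance check: largest s divided by smallest s should be < 2
sds = [g.std(ddof=1) for g in groups]
print(f"largest/smallest sd = {max(sds) / min(sds):.2f}")

f_stat, p_value = f_oneway(*groups)
print(f"F = {f_stat:.3f}, P-value = {p_value:.4f}")
```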
An estimate of the value of a parameter in interval form with an associated level of confidence; in other words, a list of reasonable or plausible values for the parameter based on the value of a statistic. E.g. a confidence interval for µ gives a list of possible values that µ could be, based on the sample mean.
Confidence interval
A test for comparing the means from two independent samples or two treatments where the degrees of freedom are taken to be the minimum of (n1-1) and (n2-1).
Conservative two-sample t test
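A sketch of the conservative version, computed by hand with hypothetical data: the test statistic is the usual two-sample t, but the P-value uses df = min(n1 - 1, n2 - 1).

```python
from math import sqrt
import numpy as np
from scipy.stats import t

# hypothetical samples
x1 = np.array([10.2, 11.5, 9.8, 10.9, 11.1])
x2 = np.array([9.1, 8.7, 9.5, 9.0, 8.8, 9.3])

n1, n2 = len(x1), len(x2)
se = sqrt(x1.var(ddof=1) / n1 + x2.var(ddof=1) / n2)   # standard error of the difference
t_stat = (x1.mean() - x2.mean()) / se

df = min(n1 - 1, n2 - 1)          # conservative degrees of freedom
p_value = 2 * t.sf(abs(t_stat), df)
print(f"t = {t_stat:.3f}, df = {df}, P-value = {p_value:.4f}")
```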
What happens to the width of a confidence interval when sample size is increased (or level of confidence is decreased)?
Decreased
A characteristic of the t-distribution (e.g. n – 1 for a one-sample t); a measure of the amount of information available for estimating σ using s.
Degrees of freedom
A condition for ANOVA; the condition is met when the largest standard deviation divided by the smallest standard deviation is less than 2
Equal Variance
The appropriate statistical conclusion when the P-value is greater than α
Fail to reject H0
Results from statistical analyses performed on non-random samples or experimental data obtained without random allocation of treatments to individuals
Garbage
Using results about sample statistics to draw conclusions about population parameters.
Inference
The basis for hypothesis testing and confidence interval estimation.
Laws of probability
Level of confidence
The percent of the time that the confidence interval estimation procedure will give you intervals containing the value of the parameter being estimated. (Note: This can only be defined in terms of probability as follows: “The probability that the confidence interval to be computed (before data are gathered) will contain the value of the parameter.” After data are collected, level of confidence is no longer a probability because a calculated confidence interval either contains the value of the parameter or it doesn’t.)
The probability of rejecting a true null hypothesis; equivalently, the largest risk a researcher is willing to take of rejecting a true null hypothesis.
Level of significance (symbolized by α)
A test with “<” in the alternative hypothesis. This is a one-sided test.
Lower-tailed test (Also called a left-tailed test)
The maximum amount that a statistic value will differ from the parameter value for the middle 95% of the distribution of all possible statistics. (Note: 95% can be changed to any other level of confidence.)
Margin of error for 95% confidence
Either two measurements are taken on each individual such as pre and post OR two individuals are matched by a third variable (different from the explanatory variable and the response variable) such as identical twins.
Matched pairs
Matched pairs t test
The hypothesis testing method for matched pairs data. The typical null hypothesis is H0: µd = 0, where µd is the mean difference between treatments. For this test, a difference is computed within every pair. The mean and standard deviation of these differences are computed and used in computing the test statistic.
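A sketch with hypothetical pre/post measurements: SciPy's ttest_rel performs the paired test, and the last lines show the equivalent one-sample view of the differences.

```python
import numpy as np
from scipy.stats import ttest_rel

# hypothetical pre/post measurements on the same individuals
pre  = np.array([140, 132, 155, 128, 146, 150])
post = np.array([135, 130, 148, 127, 140, 145])

# paired test: H0 is mu_d = 0 for the within-pair differences
t_stat, p_value = ttest_rel(pre, post)
print(f"t = {t_stat:.3f}, P-value = {p_value:.4f}")

# equivalent by hand: mean and standard error of the differences
d = pre - post
print(d.mean(), d.std(ddof=1) / np.sqrt(len(d)))
```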
μ0
The claimed value of the population mean given in H0.
Multiple Analyses
Performing two or more tests of significance on the same data set. This inflates the overall α (probability of making a type I error) for the tests. (The more analyses performed, the greater the chances of falsely rejecting at least one true null hypothesis.)
Null hypothesis
The hypothesis of no difference or no change. The hypothesis that the researcher assumes to be true until sample results indicate otherwise. Generally, the hypothesis that the researcher wants to disprove. (Note: Interpretations of P-value and statistically significant need to say something about “if H0 is true” in order to be correct.)
The difference between the observed statistic and the claimed parameter value; e.g. xbar – μ0.
Observed effect
A test where the alternative hypothesis contains either “<” or “>”
One-sided or one-tailed test
An inferential statistical procedure that uses the mean for one sample of data for either estimating the mean of the population or testing whether the mean of the population equals some claimed value.
One-sample t test
An observation that falls outside the pattern of the data set. For one sample of data, an outlier will be any observation that is a long way from the rest of the data.
Outlier
A characteristic of a population that is usually unknown; this characteristic could be the mean (μ), median, proportion, standard deviation (σ), etc.
Parameter
The probability of rejecting a false null hypothesis; computed as 1 – β. Increase power by increasing sample size or increasing α.
Power
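To see how power behaves, here is a sketch for an upper-tailed one-sample z test with hypothetical μ0, alternative mean, and σ; power grows as n grows.

```python
from math import sqrt
from scipy.stats import norm

mu0, mu_alt, sigma, alpha = 100, 105, 15, 0.05   # hypothetical values
z_alpha = norm.ppf(1 - alpha)                    # cutoff for an upper-tailed z test

for n in (25, 50, 100):
    effect = (mu_alt - mu0) / (sigma / sqrt(n))
    power = norm.sf(z_alpha - effect)            # P(reject H0 | mu = mu_alt) = 1 - beta
    print(f"n = {n:3d}: power = {power:.3f}")
```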
Practical significance
A difference between the observed statistic and the claimed parameter value that is large enough to be worth reporting. To assess practical significance, look at the numerator of the test statistic and ask ‘Is it worth anything?’ If yes, then results are also of practical significance. Note: Do not assess practical significance unless results are statistically significant
P-value
The probability of getting a test statistic as extreme as or more extreme than the value observed, assuming H0 is true. OR The probability of obtaining a test statistic value as far from the claimed parameter value as, or farther than, the value actually obtained, if H0 were true.
The appropriate statistical conclusion when P-value < α.
Reject H0
The variability of sample results from one sample to the next—something we must measure in order to effectively do inference. Margin of error only covers sampling variability
Sampling variability
Significance level (α)
Level of Significance
A measure of the variability (spread) of data in a sample about the sample mean.
Standard deviation (s)
A measure of the variability of the sampling distribution of xbar; equals σ / sqrt(n).
Standard deviation of xbar
A measure of the variability of the sampling distribution of xbar; estimates the standard deviation of the sampling distribution of xbar; computed using the formula s / sqrt(n).
Standard error of xbar
Standard error of a statistic
An estimate of the standard deviation of the sampling distribution of the statistic; in other words, it is a measure of the variability of the statistic. Note: The denominators of most test statistics are called standard errors.
A characteristic of a sample; a number computed from sample data (without any knowledge of the value of a parameter) used to estimate the value of a parameter. Examples include xbar, the sample mean, and s, the sample standard deviation.
Statistic
A difference between the observed statistic and the claimed parameter value as given in H0 that is too large to be due to chance. (An observed effect that is too large to be due to chance.) Results are significant when P-value < α. Just because the results are statistically significant does not imply that the results are important.
Statistical significance
Results of a study that differ too much from what we would expect under randomization to be attributed to chance variation.
Statistically significant
A distribution specified by degrees of freedom, used to model test statistics for the one-sample t test, the two-sample t test, etc., where the σ('s) is (are) unknown. Also used to obtain a confidence interval for estimating a population mean, the difference between two population means, etc.
t distribution
t*
The multiplier of standard error in computing margin of error for estimating a mean (or the difference between two means). The value for t* is found on the t table at the intersection of the appropriate df row and level of confidence column.
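When no t table is handy, t* can also be looked up with SciPy's t distribution; the confidence level and sample size below are hypothetical.

```python
from scipy.stats import t

confidence, n = 0.95, 15                         # hypothetical confidence level and sample size
df = n - 1
t_star = t.ppf(1 - (1 - confidence) / 2, df)     # upper-tail quantile for the middle C%
print(f"t* for {confidence:.0%} confidence with df = {df}: {t_star:.3f}")
```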
Procedure used to assess the evidence against a claim (hypothesis) about the value of a parameter.
Test of significance
A number that summarizes the data for a test of significance; usually used to obtain P-value
Test statistic
A statistical procedure used to compare the means from two populations either with a test of their equality or by estimating the difference between two population means.
Two-sample t procedure
A test where the alternative hypothesis contains “does not equal”.
Two-sided test
The error made when a true null hypothesis is rejected. (i.e. you reject H0 when H0 is true.)
Type I error
The error made when a false null hypothesis is not rejected. (i.e. you fail to reject H0 when H0 is false.)
Type II error
A condition where the mean of all possible statistic values equals the parameter that the statistic estimates.
Unbiased
A test with “>” in the alternative hypothesis.
Upper-tailed test (also called a right-tailed test)
The square of the standard deviation. Sample variance is s² and population variance is σ².
Variance