43 Cards in this Set
- Front
- Back
What percentage of people fall within two SDs of the mean? |
A: 95.4% (34.1% fall within 1 SD above or below; another 13.6% fall in the 2nd SD above or below) |
|
What is the probability of scoring more than 2 SDs above the mean? |
A: 2.5% (Probability: .025) …. Unless you’re us, in which case, 100% |
|
What does a Z score represent? |
A: The number of SDs above or below the mean a particular score is |
|
What is the formula for a Z score? |
A: z = (X − μ) / σ (the score minus the mean, divided by the standard deviation) |
|
If a person scored 70 on a test with a mean score of 50 and a standard deviation of 10, what is the Z score? |
A: 2 |
|
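The z-score cards above can be checked with a couple of lines of code (Python is assumed here; the numbers are the card’s own example):

```python
def z_score(x, mean, sd):
    """Number of standard deviations a score lies above (+) or below (-) the mean."""
    return (x - mean) / sd

# The card's example: a score of 70 on a test with mean 50 and SD 10
print(z_score(70, 50, 10))  # -> 2.0
```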
Explain the equation for The Model |
A: Outcome = (Model) + error, or: Test statistic = effect/error |
|
What is a confidence interval? |
A: A range of values computed in such a way that it contains the estimated parameter a high proportion of the time |
|
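As an illustration of that definition, a minimal normal-approximation sketch of a 95% CI for a mean (stdlib Python; the sample data is made up):

```python
from statistics import mean, stdev, NormalDist
from math import sqrt

def ci_mean(sample, level=0.95):
    """Normal-approximation confidence interval for the population mean."""
    z = NormalDist().inv_cdf(0.5 + level / 2)   # ~1.96 for a 95% interval
    m = mean(sample)
    se = stdev(sample) / sqrt(len(sample))      # standard error of the mean
    return (m - z * se, m + z * se)

scores = [48, 52, 50, 55, 47, 53, 49, 51]       # hypothetical sample
lo, hi = ci_mean(scores)
print(f"95% CI: ({lo:.2f}, {hi:.2f})")
```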
Can a confidence interval be constructed for any estimated parameter other than the mean? |
A: Yes, any parameter (e.g. z, SD) |
|
What does the p value tell us (the answer is not, “whether it’s significant”)? |
A: How often you would get a difference as large or larger than the one obtained in the experiment if the experimental manipulation really had no effect (and the difference was due to chance). Only if that would happen very rarely do we call the result significant |
|
What does a p value of .05 tell us? |
A: That chance alone would produce a difference as large or larger than the one obtained only 5% of the time or less |
|
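The “how often would chance alone produce a difference this large” idea can be made concrete with a permutation test; a simulation sketch with made-up data:

```python
import random
random.seed(1)

group_a = [5, 7, 6, 8, 9, 6, 7, 8]
group_b = [4, 5, 6, 5, 4, 6, 5, 5]
observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))

# Permutation test: reshuffle the group labels many times; the p value is the
# share of shuffles whose difference is as large or larger than the observed one.
pooled = group_a + group_b
count = 0
n_iter = 5000
for _ in range(n_iter):
    random.shuffle(pooled)
    a, b = pooled[:8], pooled[8:]          # 8 = size of each group
    if abs(sum(a) / 8 - sum(b) / 8) >= observed:
        count += 1
print(f"p = {count / n_iter:.4f}")
```

With these (fabricated) groups the observed difference of 2.0 almost never arises by relabelling alone, so the p value comes out small.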
Does statistical significance tell us that the null hypothesis is true? |
A: No; the null hypothesis is never exactly true… |
|
Does statistical significance tell us the importance of an effect? Explain. |
A: No; significance depends on sample size, so even a trivial effect can be significant with a large enough sample |
|
Should we use one-tailed or two-tailed tests? |
A: Two-tailed, unless a result in only one direction makes logical sense (then one-tailed) |
|
What is Type I error? |
A: We think there is a statistically significant effect but there isn’t (we think the variance accounted for by the model is larger than what is unaccounted for). “Acceptable level”: Fisher’s criterion p < .05; its probability is the alpha level |
|
What is Type II error? |
A: When we think there is no statistically significant effect but there is (we think there is too much variance unaccounted for by the model). “Acceptable level”: β = .2; its probability (beta) is tied to power, since power = 1 − β |
|
Do one-tailed and two-tailed tests have the same Type I error rate? |
A: Yes |
|
Do one-tailed and two-tailed tests have the same Type II error rate? |
A: No, one-tailed tests have lower Type II error rates, and more power, than two-tailed tests |
|
What is effect size the measure of? |
A: The degree to which the means under H1 and H0 differ (OR) how close the predictions of the model are to the observed outcomes (how much variance is explained) |
|
Is effect size a standardized measure? |
A: Yes |
|
Is effect size reliant on sample size? |
A: Not very |
|
List some types of effect size measures. |
A: Cohen’s d, Pearson’s r, Glass’s Δ, Hedges’ g, odds ratios/risk ratios |
|
Which effect size measure should you use when group sizes are different? |
A: Cohen’s d |
|
What is Cohen’s d effect size based on? |
A: The difference between means |
|
What is Pearson’s r effect size based on? |
A: Variance explained by the model |
|
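Minimal sketches of the two measures just described, with made-up example data (Cohen’s d here uses the pooled standard deviation):

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(a, b):
    """Standardized difference between means, using the pooled SD."""
    na, nb = len(a), len(b)
    pooled_sd = sqrt(((na - 1) * stdev(a) ** 2 + (nb - 1) * stdev(b) ** 2)
                     / (na + nb - 2))
    return (mean(a) - mean(b)) / pooled_sd

def pearsons_r(x, y):
    """Correlation coefficient; r squared is the variance explained."""
    mx, my = mean(x), mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = sqrt(sum((a - mx) ** 2 for a in x) * sum((b - my) ** 2 for b in y))
    return num / den

# hypothetical example data
treatment = [12, 14, 13, 15, 16]
control = [10, 11, 10, 12, 11]
print(cohens_d(treatment, control))

x = [1, 2, 3, 4, 5]
y = [2, 4, 5, 4, 5]
print(pearsons_r(x, y))
```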
What constitutes a ‘large’ effect of Pearson’s r? |
A: .5 to .8 (the effect accounts for 25% of the variance or more) |
|
Why might we commit Type II error? |
A: We don’t have enough statistical power |
|
What do you need to calculate the prospective statistical power (and implied probability of making a Type II error) of an experiment? |
A: Effect size, sample size, chosen alpha level |
|
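A rough sketch of how those three ingredients combine, using a normal approximation for a two-sided two-sample test (a simplification of the real noncentral-t computation):

```python
from statistics import NormalDist
from math import sqrt

def approx_power(d, n_per_group, alpha=0.05):
    """Normal-approximation power of a two-sided two-sample test.

    d: expected effect size (Cohen's d); n_per_group: sample size per group.
    """
    z_crit = NormalDist().inv_cdf(1 - alpha / 2)   # ~1.96 for alpha = .05
    noncentrality = d * sqrt(n_per_group / 2)      # how far the effect shifts the distribution
    return 1 - NormalDist().cdf(z_crit - noncentrality)

# Power (and implied Type II error rate, beta = 1 - power) for a medium effect
for n in (20, 64, 100):
    p = approx_power(d=0.5, n_per_group=n)
    print(f"n={n:3d} per group: power ~ {p:.2f}, beta ~ {1 - p:.2f}")
```

This reproduces the textbook rule of thumb that roughly 64 participants per group give about 80% power (beta about .2) to detect a medium effect (d = .5) at alpha = .05.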
What are the assumptions of parametric tests? |
A: Additivity & linearity, normality, homogeneity of variance, independence |
|
What is linearity? |
A: When the outcome variable is linearly related to any predictors |
|
What is additivity? |
A: If you have several predictors, then their combined effect is best described by adding their effects together |
|
Which of these is not relevant to a normal distribution? Parameter, confidence intervals around a parameter, outliers, null hypothesis testing |
A: Outliers |
|
What are some ways of spotting normality in a sample? |
A: Central Limit Theorem (N > 30), graphs (P-P plots, histograms), skew/kurtosis, the Kolmogorov-Smirnov test |
|
What is homogeneity of variance (homoscedasticity)? |
A: When all the groups you’re testing have similar variance |
|
What does Levene’s test test for? |
A: Whether the variance in different groups is the same (homoscedasticity) |
|
What is Winsorizing? |
A: A method of reducing bias by substituting outliers with the highest value that isn’t an outlier |
|
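A sketch of the idea, clipping the single most extreme value at each end to the nearest non-outlying value (real Winsorizing is usually percentile-based, e.g. the top and bottom 5%):

```python
def winsorize(values, k=1):
    """Replace the k smallest values with the (k+1)-th smallest and the
    k largest with the (k+1)-th largest, reducing the pull of outliers."""
    s = sorted(values)
    low, high = s[k], s[-k - 1]
    return [min(max(v, low), high) for v in values]

data = [2, 3, 3, 4, 5, 4, 3, 98]   # 98 is an obvious outlier
print(winsorize(data))             # -> [3, 3, 3, 4, 5, 4, 3, 5]
```

Unlike simply deleting outliers, this keeps the sample size intact while shrinking their influence on the mean and SD.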
Is “Children can learn a second language faster before the age of 7” a one-tailed or two-tailed hypothesis? |
A: One-tailed |
|
What does it mean if we have a 95% confidence interval? |
A: That 95 out of 100 confidence intervals will contain the population mean |
|
Power is the ability of a test to detect an effect given that an effect of a certain size exists in a population. True or false. |
A: True |
|
We can use power to determine how large a sample is required to detect an effect of a certain size. True or false. |
A: True |
|
Power is linked to the probability of making a Type II error. True or false. |
A: True |
|
The power of a test is the probability that a given test is reliable and valid. True or false. |
A: False |
|
What happens to the SE as the sample size increases? |
A: It decreases |
|
What happens to the confidence interval as sample size increases? |
A: It gets narrower |
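The last two cards can be seen in one small simulation: as n grows, the SE shrinks roughly as 1/√n and the 95% CI narrows with it (population parameters are made up):

```python
import random
from statistics import stdev
from math import sqrt

random.seed(0)
population_mu, population_sd = 50, 10   # hypothetical population

# Draw bigger and bigger samples from the same population: the standard error
# of the mean (SD / sqrt(n)) shrinks, so the 95% CI narrows with it.
standard_errors = []
for n in (10, 100, 1000):
    sample = [random.gauss(population_mu, population_sd) for _ in range(n)]
    se = stdev(sample) / sqrt(n)
    standard_errors.append(se)
    print(f"n={n:5d}: SE ~ {se:.2f}, 95% CI half-width ~ {1.96 * se:.2f}")
```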