
27 Cards in this Set

  • Front
  • Back

How reliable are the results?

Answered by setting confidence limits


bounds on our estimates of population parameters


use standard errors

How probable is it that the results are due to chance alone?

answered by evaluating differences between observed and expected results

Notes on stats

as sample size increases, the range of sample means decreases and the precision of the estimate increases

Standard error

the standard deviation of an estimate's sampling distribution is its standard error


every estimate of any statistic has a sampling distribution with a standard error


a measure of the reliability (precision) of the estimate


the standard error of the mean is easy to calculate and should always be reported with the mean


SE of mean = s / √n  (sample standard deviation over the square root of the number of observations)


as sample size increases, the SE of the estimated mean decreases
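As a minimal sketch of the formula above (Python standard library; the sample values are made up for illustration):

```python
import math
import statistics

def se_of_mean(sample):
    """Standard error of the mean: sample SD divided by sqrt(n)."""
    return statistics.stdev(sample) / math.sqrt(len(sample))

# Hypothetical measurements, just to show the calculation
print(se_of_mean([4.1, 5.0, 4.7, 5.3, 4.9]))
```

Because n sits in the denominator, a larger sample gives a smaller SE, matching the note above.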

Note on sample means

Sample means based on large samples should be close to the parametric mean and will not vary as much as means based on small samples

confidence interval

a range of values surrounding the sample estimate that is likely to contain the population parameter


(the most plausible range for the population mean; values beyond the range are less plausible)


use the SE of the mean to calculate the confidence interval

Correct ways to state confidence intervals

We are 95% confident that the true mean lies between [lower] and [upper] units.


There is a 95% probability that our confidence interval covers the true mean.


As sample size increases, the confidence interval narrows and precision increases, because the SE is decreasing

2 se rule of thumb

a rough approximation of the 95% confidence interval for the population mean


mean ± 2 SE of the mean
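The rule of thumb can be sketched directly (Python standard library; the data are hypothetical):

```python
import math
import statistics

def rough_ci95(sample):
    """Rough 95% CI for the population mean: mean ± 2 SE."""
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return (m - 2 * se, m + 2 * se)

low, high = rough_ci95([4.1, 5.0, 4.7, 5.3, 4.9])
```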

To reduce width of confidence interval

decrease measurement error


use better controls in the lab


but you can't change a natural population's standard deviation in nature


for small samples, use t-distribution (t-table) values instead of normal distribution values
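For small samples the same interval uses a t critical value in place of 2 (or 1.96). A sketch with a two-tailed 95% t-table value hard-coded for n = 10 (df = 9), on made-up data:

```python
import math
import statistics

T_CRIT_95_DF9 = 2.262  # two-tailed 95% value from a t-table, df = 9

def t_ci95_n10(sample):
    """95% CI for the mean of a sample of 10, using the t distribution."""
    assert len(sample) == 10  # the critical value above is for df = 9 only
    m = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))
    return (m - T_CRIT_95_DF9 * se, m + T_CRIT_95_DF9 * se)
```

Since 2.262 > 1.96, the t-based interval is wider than the normal-based one, reflecting the extra uncertainty of a small sample.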

estimation

putting reasonable bounds on the value of a parameter

hypothesis testing

determining whether a parameter differs from some null expectation


the biggest use of statistics in biology

null hypothesis

the statistical hypothesis being tested


states that there are no real differences

alternative hypothesis

can sometimes be specific, but usually is just the opposite of the null

Reject null

if the data differ so much from expectations that such data would be very unlikely if the null were true

test stat

calculated to evaluate whether the data agree with the null expectation


larger deviations from expectation increase the test statistic


a number calculated to represent the match between the data and the null hypothesis, which can be compared to a known distribution to infer a probability

Null distribution

the sampling distribution of outcomes expected for the test statistic if the null were true


theoretical distributions such as the normal, chi-squared, t, etc.

p-value

the probability that an observed difference is merely due to chance


the probability of getting data that differ as much (or more) from the expected results simply by chance, assuming the null is true

significance level a

the probability chosen as the criterion for rejecting the null


typically α = 0.05

reject null

saying the sample is significantly different from expected, at P ≤ α


if P ≤ 0.05, we reject the null


(if P > 0.05) the result is not significant and we fail to reject


this might be because the null is true, or because the null is false but there wasn't enough power to show it


confidence intervals for estimates are useful in non-significant (NS) situations


a large confidence interval suggests power was low


a small confidence interval suggests there is truly not much difference

Reporting results

always include the test-statistic value, sample size, and P-value


it is also good to report confidence intervals or standard errors for parameters

How P-values are determined

Simulation, parametric tests, re-sampling
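Of these, simulation is the easiest to sketch. Assuming a fair-coin null hypothesis and made-up data (18 heads in 25 flips), a two-tailed p-value can be estimated by simulating the null many times:

```python
import random

def simulated_p_value(n_heads, n_flips, reps=10_000, seed=1):
    """Two-tailed p-value under a fair-coin null, by simulation:
    the fraction of simulated experiments at least as extreme
    as the observed result."""
    rng = random.Random(seed)
    observed_dev = abs(n_heads - n_flips / 2)
    extreme = 0
    for _ in range(reps):
        heads = sum(rng.random() < 0.5 for _ in range(n_flips))
        if abs(heads - n_flips / 2) >= observed_dev:
            extreme += 1
    return extreme / reps

p = simulated_p_value(18, 25)  # in the neighborhood of 0.04 here
```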

Type 1 error

rejecting a true null hypothesis


α = the acceptable probability of this mistake


the significance level


set arbitrarily, by convention

Type 2 error

failing to reject a false null


β = the probability of this mistake


rarely possible to estimate


decreasing α makes it harder to ever reject the null


as α decreases, β increases

Power of test

the sensitivity of the test


1 − β (the probability of rejecting the null when it is false)


want it as large as possible for a given test (β as small as possible)

Increase power

increase sample size, use a different statistical procedure, increase the effect size, or decrease the variance in the population (error variance)


the intended α may not be the actual α for some tests


the actual α may be greater than the intended α (a liberal test: rejects the null and declares significant differences too often)


the actual α may be less than the intended α (a conservative test: accepts the null too often)
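Power can itself be estimated by simulation. A sketch (Python standard library; the effect size, SD, and the rough 2-SE rejection rule are all assumptions for illustration):

```python
import math
import random
import statistics

def estimated_power(n, true_mean=0.5, null_mean=0.0, sd=1.0,
                    reps=2_000, seed=2):
    """Fraction of simulated experiments that reject the null,
    using the rough mean ± 2 SE rejection rule."""
    rng = random.Random(seed)
    rejections = 0
    for _ in range(reps):
        sample = [rng.gauss(true_mean, sd) for _ in range(n)]
        m = statistics.mean(sample)
        se = statistics.stdev(sample) / math.sqrt(n)
        if abs(m - null_mean) > 2 * se:
            rejections += 1
    return rejections / reps
```

Comparing `estimated_power(10)` with `estimated_power(50)` illustrates the first item in the list: power rises sharply with sample size.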

Two-tailed

the alternative hypothesis can be on either side of the null value


most tests are two-tailed


a deviation in either direction would reject the null


α is divided into α/2 on one side and α/2 on the other

One-tailed

the alternative hypothesis is on just one side of the null value


the null is only rejected if the data depart from it in the direction stated by the alternative hypothesis


more powerful, but often unfair


you must have a very good biological reason to do this


all of α comes from one tail


only used when the other tail is nonsensical
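The difference shows up directly in how a p-value is computed from the null distribution. A sketch using a standard-normal null and an assumed test-statistic value of 1.8:

```python
from statistics import NormalDist

z = 1.8  # hypothetical test-statistic value
one_tailed = 1 - NormalDist().cdf(z)   # all of alpha in one tail
two_tailed = 2 * one_tailed            # alpha split between both tails

# Here one_tailed ≈ 0.036 (significant at alpha = 0.05),
# while two_tailed ≈ 0.072 (not significant)
```

The same test statistic can be significant one-tailed but not two-tailed, which is why a one-tailed test needs strong prior justification.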