62 Cards in this Set


1. Parametric Statistics: definition & criteria for validity

used to estimate population parameters


1. data normally distributed


2. randomized sampling


3. variances of the samples collected must be equal or close (homogeneity of variance)


4. data has to be measured on interval or ratio scale

2. Positively skewed

=right-skewed; mean is greater than median (but graph 'hump' leans to left)

3. Negatively skewed

=left-skewed; mean is less than median (but graph 'hump' leans to right)

4. Bimodal

2 humps (peaks); the data cluster around two different values, one toward the left and one toward the right

5. Platykurtic Distribution

flatter than a normal distribution; data are spread fairly evenly, so the graph has a low 'hump' and looks level throughout

6. Non-parametric statistics: definition & when to use

- does same thing as parametric (compares populations) but less powerful


- when to use: when parametric conditions are not met (data not normally distributed, sampling isn't random, variances b/t samples not equal, or data is on nominal or ordinal scales)


-generally less sensitive than parametric! (but doesn't mean info isn't accurate!!)

7. Independent sample

when 2+ samples are from completely different people

8. Dependent sample

test-retest: same individuals are tested and then re-tested

9. Inferential statistics: what 2 tests used?

Parametric & Nonparametric tests

10. Types of parametric tests:

T-test


Paired T-test


One-way, 2 way, 3 way, & repeated measures ANOVA

11. T-test

-a parametric test


-used to compare means of TWO INDEPENDENT groups (also called un-paired T-test); variances b/t groups are equal


-generates a T-statistic that is used to create a p-value, which is reported
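
For illustration (not part of the card set), here's what an unpaired t-test looks like in Python with SciPy; the group values are made-up numbers:

```python
from scipy import stats

# two independent groups (hypothetical data)
group_a = [4.1, 5.0, 4.8, 5.3, 4.6, 5.1]
group_b = [5.9, 6.2, 5.7, 6.5, 6.0, 6.3]

# ttest_ind = unpaired t-test; assumes equal variances by default
t_stat, p_value = stats.ttest_ind(group_a, group_b)
print(t_stat, p_value)
```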

12. T-statistic

-generated from T test


= (difference b/t group means) / (variability within groups, i.e. the standard error of the difference)


- the larger the difference in group means, the larger the T statistic


- the larger the T statistic, the smaller the p value!

13. Paired T-test

-a parametric test


-use when you have TWO DEPENDENT samples


-analyzes difference scores (subjects are compared only to themselves or to their match) --> reduces sample variability


-a strong design, uses test-retest or matched design, more powerful than unpaired t-test
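
A quick SciPy sketch of the paired t-test, with made-up pre/post scores for the same subjects:

```python
from scipy import stats

# same 6 subjects tested twice (hypothetical test-retest data)
pre  = [10.2, 9.8, 11.1, 10.5, 9.9, 10.8]
post = [ 9.1, 9.0, 10.0,  9.6, 9.2,  9.9]

# ttest_rel analyzes the per-subject difference scores
t_stat, p_value = stats.ttest_rel(pre, post)
print(t_stat, p_value)
```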

14. Why can't you use multiple T-tests to compare a baseline, 2 months, 4 months, 6 months data?

b/c with each t-test you run risk of committing type 1 error

15. When do you use ANOVA?

when you have to compare 2+ means!

16. What is ANOVA

-parametric test


-used to test 3+ samples/groups


-creates F ratio = treatment effects / error variance

17. What is F-ratio

= treatment effects / error variance


treatment effects = difference in means


error variance = unexplained variance within groups


**same calculation as t statistic


**the larger the F ratio, the bigger the difference b/t means, and the smaller the p value
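
A minimal SciPy sketch of a 1-way ANOVA on three made-up groups; `f_oneway` returns the F ratio and its p-value:

```python
from scipy import stats

# three independent groups (hypothetical data), one factor
g1 = [5.1, 4.9, 5.3, 5.0, 5.2]
g2 = [6.0, 6.2, 5.9, 6.1, 6.3]
g3 = [7.1, 6.9, 7.2, 7.0, 6.8]

f_ratio, p_value = stats.f_oneway(g1, g2, g3)
print(f_ratio, p_value)
```

A p < 0.05 here only says at least two group means differ; post hoc testing (cards 23-24) identifies which ones.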

18. What is 1 way ANOVA?

testing 1 factor (like BP) in 3+ groups

19. What is 2 way ANOVA?

testing 2 independent factors (ex. effects on lower back pain depending on age of clinician and experience of clinician)


- 2 factors: age & experience of clinician


- 1 variable: back pain

20. What is 3 way ANOVA?

3 independent factors (ex. clinician age, experience, & gender)

21. What is Repeated Measures ANOVA?

each subject is tested across all conditions; each subject serves as own control


*dependent samples


*can have 1-way, 2-way, 3-way repeated measures ANOVA


**very powerful design

22. What do results of ANOVA test tell you??

if you get p<0.05 --> there is a difference b/t at least 2 group means; BUT it doesn't tell you which ones!

23. So what do you do next after getting p<0.05 on ANOVA test?

run a Multiple Comparisons Test or Post Hoc Testing


--> how you determine which groups from ANOVA test are different!

24. What are the different kinds of:


Multiple Comparisons Test or Post Hoc Testing

1. Tukey Test - decrease type 1 error


2. Newman-Keuls (NK) - more risk type 1 error


3. Bonferroni t-test (Dunn's) - greatly reduces Type 1


4. Scheffe's comparisons - decreases type 1 but less power




(Trust Nothing But Statistics)

25. What is Bonferroni correction?

-used in Bonferroni t-test (Dunn's) to adjust a-level of significance (the 0.05) based on # comparisons you need to make


-ex. if you have 5 means & you're running the test to see where the difference(s) lie, you divide 0.05 by 5 --> 0.01, which means you have to reach that new level of significance
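
The card's example as a couple of lines of Python:

```python
# Bonferroni correction: divide alpha by the number of comparisons
alpha = 0.05
n_comparisons = 5
adjusted_alpha = alpha / n_comparisons  # each test must now reach 0.01
```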

26. NonParametric tests: what can you do to make them more powerful?

increase sample size, n!!!

27. Types of NonParametric tests to use when TWO sample groups

1. Mann-Whitney U (independent samples)


2. Sign Test


3. Wilcoxon Signed-Ranks Test




(Maybe Statistics Wins)

28. When to use Mann-Whitney U

-independent samples


-2 sample groups


-nonparametric data


-a more powerful test


-analogous to parametric unpaired t-test
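
A SciPy sketch of the Mann-Whitney U test on two made-up independent samples:

```python
from scipy import stats

# two independent groups (hypothetical ordinal-ish scores)
a = [12, 15, 11, 19, 14, 13]
b = [22, 25, 21, 24, 23, 26]

u_stat, p_value = stats.mannwhitneyu(a, b)
print(u_stat, p_value)
```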

29. When to use Sign Test

-dependent samples


-2 sample groups


-nonparametric data


-uses binomial data (doesn't require quantitative data)


-uses + and - signs


-analogous to parametric paired t-test

30. When to use Wilcoxon Signed-Ranks Test

- dependent samples


- 2 sample groups


- nonparametric data


- uses same criteria as Sign Test (+ or -) to detect direction of differences but ALSO tests relative amount of difference (like +3 or -2)


-SO, more powerful than Sign!


-analogous to parametric paired t-test
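
A SciPy sketch of the Wilcoxon signed-ranks test on made-up dependent (test-retest) samples:

```python
from scipy import stats

# same 8 subjects measured twice (hypothetical data)
pre  = [20, 18, 24, 22, 19, 23, 21, 25]
post = [19, 15, 20, 17, 13, 16, 12, 15]

# wilcoxon ranks the signed differences, not just their +/- direction
w_stat, p_value = stats.wilcoxon(pre, post)
print(w_stat, p_value)
```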

31. Non-parametric tests w/ >2 groups?

1. Kruskal-Wallis ANOVA by Ranks


2. Friedman ANOVA by Ranks




(Kiss Friends)

32. When to use Kruskal-Wallis ANOVA by Ranks

- more than 2 groups


- parametric criteria not met


- independent samples


- handles ordinal data


- analogous to parametric 1-way ANOVA


- can use Mann-Whitney test if significance found, as long as Bonferroni correction used!
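
A SciPy sketch of the Kruskal-Wallis test on three made-up independent groups:

```python
from scipy import stats

# three independent groups (hypothetical ordinal scores)
g1 = [1, 2, 3, 4, 5]
g2 = [6, 7, 8, 9, 10]
g3 = [11, 12, 13, 14, 15]

h_stat, p_value = stats.kruskal(g1, g2, g3)
print(h_stat, p_value)
```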

33. When to use Friedman ANOVA by Ranks

- more than 2 groups


- parametric criteria not met


- dependent samples


- handles ordinal data


- analogous to parametric repeated measures ANOVA
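
A SciPy sketch of the Friedman test: the same made-up subjects measured under three conditions:

```python
from scipy import stats

# each list = one condition, measured on the same 6 subjects
cond1 = [4, 5, 3, 4, 5, 4]
cond2 = [6, 7, 6, 6, 7, 6]
cond3 = [8, 9, 8, 9, 8, 9]

chi2_stat, p_value = stats.friedmanchisquare(cond1, cond2, cond3)
print(chi2_stat, p_value)
```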

34. Key/tip for deciding which test to use...

First determine if parametric or nonparametric


Second, ask if the samples are independent or dependent


Then ask how many samples

35. What is a correlation?

used to describe relative strength & direction of relationship b/t 2 variables; NOT causation!

36. What is a regression?

uses an established correlation to predict one variable (the dependent variable) based on another variable (independent variable)

37. What type of graph helps to establish a correlation?

Scatterplot - lets you visibly see strength and direction of relationship

38. What is correlation coefficient (r)? Is there another name for this?

quantitative value used to determine the strength and direction of the relationship b/t two variables


also called "goodness of fit"


between -1 (perfect negative relationship) and 1 (perfect positive relationship); both equally strong


0 = no correlation, weak
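
A SciPy sketch of computing r on made-up, nearly linear data:

```python
from scipy import stats

# two variables with a strong positive linear relationship (hypothetical)
x = [1, 2, 3, 4, 5, 6]
y = [2.1, 3.9, 6.2, 8.0, 9.8, 12.1]

r, p_value = stats.pearsonr(x, y)
print(r, p_value)
```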

39. What is line of best fit?

-used in scatterplot


-used to determine r (correlation coefficient)

40. If you have data with r=0.35 & you run a significance test and get p<0.05, what does that mean?

95% certainty that the 2 variables are WEAKLY related (b/c r is small)

41. T/F: there is no widely accepted criteria for determining strong vs. moderate vs. weak correlation

True

42. T/F: there can be non-linear associations in a scatterplot, aka: the line of best fit does not have to be straight

True, ex. U-shaped curve

43. What will correlation coefficient (r) look like when data is non-linear on scatterplot? What does that mean?

r will be low, will look like there is no correlation, but that isn't necessarily true!!


why? b/c r measures only linear relationships!

44. Does r represent the percentage of association between 2 variables?

NO! it just measures strength and direction on a scale of -1 to 1

45. Does a high r indicate causation?

not necessarily! can't infer this! both variables could be influenced by a 3rd variable!

46. What value does regression analysis provide?

1. prediction tool


2. impt for decision making, goal setting

47. What is linear regression analysis?

examination of 2 variables that are linearly related


independent/measured variable (the one that is controlled/manipulated) = x


dependent/criterion variable (the variable to be determined/predicted) = y

48. In a linear regression analysis, if r < 1:


what does the graph/line look like?


how do you determine y?

the points do not all fall on the line b/c r doesn't equal 1


both the line & y have to be estimated

49. What is the equation for a regression line? What is another name for the regression line?

Y = a + b(x)


Y = predicted value


a = y intercept when x=0


b = slope


x = measured variable




aka. line of best fit
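
A SciPy sketch of fitting the regression line Y = a + b(x); the data are made up so that a = 1 and b = 2 exactly:

```python
from scipy import stats

# hypothetical data lying exactly on Y = 1 + 2x
x = [1, 2, 3, 4, 5]
y = [3, 5, 7, 9, 11]

result = stats.linregress(x, y)
# result.intercept is a, result.slope is b, result.rvalue is r
print(result.intercept, result.slope, result.rvalue)
```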

50. How accurate is the regression analysis prediction (what measures do you use?)

1. Coefficient of determination (r2)


2. Standard error of the estimate (SEE)

51. What is r2?

coefficient of determination; measures the percentage of variance in the predicted scores (y values) that can be explained by the measured scores (x values)




**indicates the accuracy of the prediction!

52. Will r2 always be smaller than r?

Yes, as long as 0 < |r| < 1; squaring a number between 0 and 1 makes it smaller (r2 equals |r| only when r = 0 or ±1)

53. If r2 = .73, what does that mean?

73% of variance in y can be accounted for by knowing variance in x




We have 73% of the info we need to make an accurate prediction, but there are other factors involved (the other factors account for the remaining percent)
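
The card's example in two lines of Python, using a hypothetical r of 0.854:

```python
# r2 = coefficient of determination (card 51)
r = 0.854            # hypothetical correlation coefficient
r_squared = r ** 2   # ~0.73: 73% of variance in y explained by x
print(r_squared)
```

Note that r_squared (~0.73) comes out smaller than r (0.854), as card 52 says.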

54. What is SEE? What does it measure?

Standard error of the estimate


*the variance of the data/errors on either side of the regression line (the residuals)


*measures the goodness of fit


*the farther the points are from the line of best fit, the more error in the prediction


*the larger the SEE, the less accurate the prediction

55. Are statistical tests sensitive to units of measurement?

No

56. If something is statistically significant, is it clinically significant? and vice versa?

Not necessarily

57. For a correlation coefficient (r): 0-0.25 means...

little to no relationship

58. For a correlation coefficient (r): 0.25-0.5 means...

little to moderate relationship

59. For a correlation coefficient (r): 0.5-0.75 means...

moderate to good relationship

60. For a correlation coefficient (r): 0.75-1 means...

good to excellent relationship

61. The Pearson Product Moment coefficient (r) is used for...

a correlation coefficient used for parametric data measured on interval or ratio scale

62. The Spearman rank coefficient (rs) is used for...

a correlation coefficient used for nonparametric data or ordinal data
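
A SciPy sketch of the Spearman rank coefficient on made-up ordinal-style data; rs measures how well the rank order of y tracks the rank order of x:

```python
from scipy import stats

# hypothetical data: mostly increasing, but the last two swap rank order
x = [1, 2, 3, 4, 5]
y = [10, 20, 30, 50, 40]

rs, p_value = stats.spearmanr(x, y)
print(rs, p_value)
```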