37 Cards in this Set

What is the magic rule about DV and IV for tests of differences?
DV: ratio; IV: nominal or ordinal
Parametric tests:
based on specific assumptions about the distribution of populations; inferential statistics
Use sample statistics (mean, standard deviation, variance) to estimate differences between population parameters.
• Two classes: t tests and analyses of variance (ANOVAs)
• More powerful (more robust) than nonparametric tests
• Cannot always be used because the assumptions on which they are based are more stringent than the assumptions for nonparametric tests
• Two assumptions that are accepted:
random selection and homogeneity of variance. A third assumption is controversial and relates to the measurement level of the data.
Random selection:
participants are randomly selected from normally distributed populations.
• Don’t have to have randomly selected participants IF the variables used in the analysis are relatively normally distributed in the population you are studying (scatterplots, stem and leaf plots)
• How do you deal with non-normal data?
1) Convert or transform the data mathematically: squaring, taking the square root, or calculating a logarithm of the raw data (not going there!), or 2) use nonparametric tests
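A minimal sketch (not from the text; the scores and variable names are hypothetical) of checking normality with a Shapiro-Wilk test and trying a log transform before falling back to a nonparametric test:

```python
import numpy as np
from scipy import stats

rom_scores = np.array([35, 38, 41, 44, 47, 52, 58, 66, 75, 90])  # hypothetical ROM data

# Shapiro-Wilk test: a small p value suggests the data are not normally distributed
stat, p = stats.shapiro(rom_scores)
print(f"Shapiro-Wilk p = {p:.3f}")

if p < 0.05:
    # Option 1: transform the raw data (e.g., take the logarithm) and re-check normality
    log_scores = np.log(rom_scores)
    print("Shapiro-Wilk p on log-transformed data:", stats.shapiro(log_scores).pvalue)
    # Option 2: skip the transform and use a nonparametric test instead
```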
Homogeneity of variance:
population variances of the groups being tested are equal (homogeneous)
• Tested statistically
• If variances between the groups differ significantly, use nonparametric tests
• If sample sizes are the same, differences in the variances of the groups become less of a concern
• Researchers aim to have equal sample sizes
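A minimal sketch (hypothetical three-clinic data) of testing homogeneity of variance with Levene's test before choosing between a parametric and a nonparametric test:

```python
import numpy as np
from scipy import stats

clinic1 = np.array([45, 50, 55, 60, 65])   # made-up 3-week ROM scores
clinic2 = np.array([40, 52, 58, 66, 70])
clinic3 = np.array([30, 48, 59, 72, 85])

stat, p = stats.levene(clinic1, clinic2, clinic3)
print(f"Levene statistic = {stat:.2f}, p = {p:.3f}")

# p >= .05: variances can be treated as homogeneous -> a parametric test is reasonable
# p <  .05: variances differ significantly -> consider a nonparametric test
```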
Level of measurement:
Parametric tests require data from which means and variances can be calculated (interval and ratio data only)
Problem is with ordinal data
Validity of using only interval/ratio data for parametric tests
o As long as the data themselves meet the parametric assumptions, regardless of the origin of the numbers (read: ordinal data), parametric tests can be conducted, BUT your conclusions have to account for the use of ordinal data in the clinic
• It is reasonable to conduct parametric tests with ordinal data as long as interpretation of the tests accounts for the nature of the ordinal scale (figure 18-1; p. 231)
Independent:
values in one set tell nothing about values in another set. When two or more groups consist of different, unrelated individuals, the observations made about the samples are independent.
• Ex: 3 week ROM scores for patients in Clinics 1-3 are independent of one another
Dependent:
sets of numbers consist of repeated measures on the same individuals (or husband/wife, other relatives, matched pairs).
• Ex: 3 week, 6 week, and 6 month ROM scores for patients across the three clinics are dependent measures.
Nonparametric tests:
not based on specific assumptions about the distribution of populations. (much looser)
• Use rank or frequency information to draw conclusions about differences between populations (nominal and ranked ordinal data). Interval and ratio data can be converted into ranks or grouped into categories.
Independent t test:
Parametric test of differences between two independent sample means
• Test statistic: ratio of the differences between the groups (numerator) to the differences within the groups (denominator)
• Differences between the groups are explained by the independent variable
• Differences within the groups are unexplained; we do not know what leads to individual differences between subjects
• When the variability explained by the independent variable is sufficiently large compared with the unexplained variability, the test statistic is large and a statistically significant difference is identified.
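A minimal sketch of an independent t test on two hypothetical clinic samples; scipy reports the t statistic (between-group relative to within-group variability) and its p value:

```python
import numpy as np
from scipy import stats

clinic1 = np.array([45, 50, 55, 60, 65, 70])   # hypothetical ROM scores
clinic2 = np.array([38, 42, 50, 53, 58, 61])

t, p = stats.ttest_ind(clinic1, clinic2)            # assumes equal variances
# t, p = stats.ttest_ind(clinic1, clinic2, equal_var=False)  # Welch's version if variances differ
print(f"t = {t:.2f}, p = {p:.3f}")                  # p < .05 -> statistically significant difference
```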
Mann-Whitney test or Wilcoxon rank sum test:
Nonparametric alternative to the independent t test
• Use this if assumptions for the independent t test are violated
Mann-Whitney test or Wilcoxon rank sum test:
• Hypotheses need to be stated in more general terms than the hypotheses for parametric tests:
o Ho: the populations from which the Clinic 1 and Clinic 2 samples are drawn are identical
o H1: one population tends to produce larger observations than the other population
• Mann-Whitney: rank the scores from the two groups, regardless of original group membership. When a number occurs more than once, its ranking is the mean of the multiple ranks it occupies. Ex: two 67s are the 11th and 12th scores so each receives a rank of 11.5. Next rank is 13.
• Alternative form of Mann-Whitney: U statistic converted into a z score
o Done by a computer
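A minimal sketch of the Mann-Whitney (Wilcoxon rank sum) test on the same kind of hypothetical two-clinic data; tied scores share the mean of the ranks they occupy:

```python
import numpy as np
from scipy import stats

clinic1 = np.array([45, 50, 55, 60, 67, 67])   # note the tied 67s
clinic2 = np.array([38, 42, 50, 53, 58, 61])

u, p = stats.mannwhitneyu(clinic1, clinic2, alternative="two-sided")
print(f"U = {u:.1f}, p = {p:.3f}")   # p < .05 -> one population tends to produce larger values
```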
F statistic:
need to know the individual group means as well as the grand mean
F= MSB/MSW
Grand mean:
mean of all the scores across the groups
Total sum of squares (SST):
the sum of the squared deviations of each individual score from the grand mean. (total variability)
Within group sum of squares (SSW):
the sum of the squared deviations of the individual scores from their group mean. (Within-groups variability)
Between group sum of squares (SSB):
the sum of the squared deviations of the group means from the grand mean. (Between groups variability)
SST = ?
SSB + SSW
Mean square between groups (MSB):
• MSB = SSB / (groups - 1)
Mean square within groups (MSW):
• MSW = SSW / (N - groups)
What does a large F value mean?
Indicates that the differences between the groups are large compared with the differences within the groups.
What does a small F value mean?
Indicates that the differences between the groups are small compared with the differences within the groups.
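A minimal worked sketch (hypothetical data) of the one-way ANOVA quantities defined above, SST = SSB + SSW, MSB = SSB/(groups - 1), MSW = SSW/(N - groups), and F = MSB/MSW, checked against scipy.stats.f_oneway:

```python
import numpy as np
from scipy import stats

groups = [np.array([45., 50., 55., 60.]),
          np.array([40., 48., 52., 56.]),
          np.array([30., 38., 44., 48.])]

all_scores = np.concatenate(groups)
grand_mean = all_scores.mean()
k, N = len(groups), all_scores.size

sst = ((all_scores - grand_mean) ** 2).sum()                       # total sum of squares
ssw = sum(((g - g.mean()) ** 2).sum() for g in groups)             # within-groups sum of squares
ssb = sum(g.size * (g.mean() - grand_mean) ** 2 for g in groups)   # between-groups sum of squares
assert np.isclose(sst, ssb + ssw)

msb = ssb / (k - 1)      # mean square between groups
msw = ssw / (N - k)      # mean square within groups
F = msb / msw
print(f"F = {F:.2f}")
print(stats.f_oneway(*groups))   # same F, plus its p value
```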
• Multiple comparison tests:
similar to t tests but with a correction to prevent inflation of the alpha level
o EX: Bonferroni test
Kruskal-Wallis test
o Nonparametric equivalent of the one way ANOVA
• Use if the assumptions of the parametric test are not met
o Ho: the three samples come from populations that are identical
o H1: at least one of the populations tends to produce larger observations than another population
How do you conduct a Kruskal-Wallis test?
• Rank scores regardless of group membership
• Ranks are summed and plugged into a formula to generate a Kruskal-Wallis (KW) statistic.
• If the p value associated with the KW statistic is lower than the alpha level, there is a significant difference among the groups
• When there is a difference, use the Mann-Whitney test with a Bonferroni adjustment of alpha as the multiple comparison procedure
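A minimal sketch (hypothetical three-clinic data) of a Kruskal-Wallis test followed, when significant, by pairwise Mann-Whitney tests with a Bonferroni-adjusted alpha:

```python
from itertools import combinations
import numpy as np
from scipy import stats

clinics = {1: np.array([45, 50, 55, 60, 65]),
           2: np.array([40, 48, 52, 56, 70]),
           3: np.array([30, 38, 44, 48, 51])}

kw, p = stats.kruskal(*clinics.values())
print(f"KW = {kw:.2f}, p = {p:.3f}")

if p < 0.05:
    pairs = list(combinations(clinics, 2))
    alpha_adj = 0.05 / len(pairs)          # Bonferroni adjustment of alpha
    for a, b in pairs:
        _, p_pair = stats.mannwhitneyu(clinics[a], clinics[b], alternative="two-sided")
        print(f"Clinic {a} vs Clinic {b}: p = {p_pair:.3f} (compare with {alpha_adj:.4f})")
```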
How to calculate paired t test?
• Determine the difference between each pair of measurements
• Determine the mean and standard deviation of the differences; the mean difference is compared with a mean difference of zero.
• Calculate the t statistic for paired samples by dividing the mean difference by the standard error of the mean differences
• If the p value associated with the t statistic is less than alpha, there is a significant difference between the paired measurements
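A minimal sketch of these paired t test steps on hypothetical pre/post ROM measurements, checked against scipy.stats.ttest_rel:

```python
import numpy as np
from scipy import stats

pre  = np.array([40., 45., 50., 55., 60., 62.])
post = np.array([48., 50., 58., 60., 66., 70.])

diff = post - pre                                   # difference for each pair
mean_diff = diff.mean()
se_diff = diff.std(ddof=1) / np.sqrt(diff.size)     # standard error of the differences
t = mean_diff / se_diff                             # compared against a mean difference of zero
print(f"t = {t:.2f}")
print(stats.ttest_rel(post, pre))                   # same t, plus its p value
```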
Wilcoxon Signed Rank test:
nonparametric version of the paired t test
o Ho: the difference between the population medians is equal to zero
o H1: the difference between the population medians is not equal to zero
To conduct Wilcoxon Signed Rank test:
• Calculate the difference between each pair of numbers
• Rank the nonzero differences according to absolute value
• Separate them into ranks associated with positive and negative differences
• If there is no difference from one time to the next, the sum of the positive ranks should be approximately equal to the sum of the negative
• The ranked information is transformed into a z score (by computer)
• If the resulting p value is less than the alpha of .05, there is a significant difference between the scores
• To determine the clinical importance of the difference, we look at the median of the differences between the two samples.
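A minimal sketch of the Wilcoxon signed rank test on the same kind of hypothetical paired data; scipy ranks the nonzero differences internally and drops zero differences by default:

```python
import numpy as np
from scipy import stats

pre  = np.array([40., 45., 50., 55., 60., 62., 64.])
post = np.array([48., 50., 58., 60., 66., 70., 64.])   # last pair has a zero difference

w, p = stats.wilcoxon(post, pre)       # zero differences are discarded by default
print(f"W = {w:.1f}, p = {p:.3f}")
print("median difference:", np.median(post - pre))   # gauge clinical importance
```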
Repeated Measures ANOVA:
extension of a paired t test to more than 2 dependent samples
Three different approaches:
o Multivariate
o Univariate
o Adjusted univariate
Univariate approach:
• First partitions the variability in the data set into between-subjects and within-subject categories
• The within-subject variability is then subdivided into between-treatments and error components
• Two F ratios can be generated:
• 1) Ratio of between-subjects to within-subject variability
o Sometimes reported but not relevant to the research question
• 2) Ratio of between-treatments to residual (error) variability
Repeated measures ANOVA:
mathematically eliminates between-subjects variability to focus the analysis on within-subject variability
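A minimal sketch (hypothetical data) of the univariate partitioning described above: total variability split into between-subjects and within-subject parts, with the within-subject part subdivided into between-treatments and error components:

```python
import numpy as np

# rows = subjects, columns = repeated measures (e.g., 3-week, 6-week, 6-month ROM)
scores = np.array([[40., 55., 70.],
                   [35., 50., 62.],
                   [48., 60., 75.],
                   [42., 58., 68.]])
n, k = scores.shape
grand_mean = scores.mean()

ss_total    = ((scores - grand_mean) ** 2).sum()
ss_subjects = k * ((scores.mean(axis=1) - grand_mean) ** 2).sum()   # between subjects
ss_within   = ss_total - ss_subjects                                # within subject
ss_treat    = n * ((scores.mean(axis=0) - grand_mean) ** 2).sum()   # between treatments
ss_error    = ss_within - ss_treat                                  # residual (error)

ms_treat = ss_treat / (k - 1)
ms_error = ss_error / ((n - 1) * (k - 1))
F = ms_treat / ms_error       # the F ratio relevant to the research question
print(f"F = {F:.2f}")
```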
Friedman's ANOVA:
Nonparametric equivalent of the repeated measures ANOVA
Calculation is based on rankings of the repeated measures for each participant
The computer computes either a Friedman's F or a Friedman's chi-square
Ex: p. 292
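A minimal sketch of Friedman's ANOVA on the same kind of hypothetical repeated measures; scipy ranks the measures within each participant and reports a chi-square statistic:

```python
import numpy as np
from scipy import stats

week3  = np.array([40., 35., 48., 42., 39.])   # hypothetical ROM at each time point
week6  = np.array([55., 50., 60., 58., 52.])
month6 = np.array([70., 62., 75., 68., 66.])

chi2, p = stats.friedmanchisquare(week3, week6, month6)
print(f"chi-square = {chi2:.2f}, p = {p:.3f}")   # p < alpha -> the repeated measures differ
```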
If you have done a one-way ANOVA...
you would do an independent t test
If you have done a repeated measures ANOVA...
you would do a paired t test