33 Cards in this Set

What is the goal of a Z-score?
To determine whether the obtained difference between the data and the hypothesis is significantly greater than would be expected by chance
Why/when do we use the t-test instead of the z-test?
You use a t-test to test hypotheses about an unknown population mean when the value of the population standard deviation is not known.
The t-statistic is the same as the z-statistic except the t-statistic uses the estimated standard error as the denominator.
When the population standard error of the mean cannot be computed, what do you use to estimate it? How?
You would use the estimated standard error. It is computed by substituting the sample variance (s^2) in place of the unknown population variance in the standard-error formula.
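The substitution above can be sketched in a few lines; the numbers here are hypothetical, chosen only so the arithmetic is easy to check:

```python
import math

def estimated_standard_error(sample_variance, n):
    # s_M = sqrt(s^2 / n): the sample variance s^2 stands in
    # for the unknown population variance.
    return math.sqrt(sample_variance / n)

# Hypothetical sample: s^2 = 16, n = 4  ->  s_M = sqrt(16/4) = 2.0
print(estimated_standard_error(16, 4))
```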
What is the definition of estimated standard error (Sm)?
This is used as an estimate of the real standard error when the value of the population standard deviation is unknown.
-It is computed from the sample variance or sample standard deviation and provides an estimate of the standard distance between a sample mean (M) and the population mean (μ).
What are the two reasons to make the shift from standard deviation to variance?
1. Because, on average, the sample variance (s^2) provides an accurate and unbiased estimate of the population variance.
2. Other versions of the t statistic require variance in the formulas for estimated standard error.
What is the definition of a t-statistic?
This is used to test hypotheses about an unknown population mean when the value of the population standard deviation is unknown.
-The only difference between this and the Z-score is that the t-score has the estimated standard error in the denominator.
What are degrees of freedom?
Describes the number of scores in a sample that are independent and free to vary. It is computed as df = n - 1.
How does the shape of a t distribution compare to the shape of a normal distribution?
The t distribution approximates the normal distribution but it tends to be flatter and more spread out.
What is the relationship between the df and the shape of the t distribution?
-The greater the value of df for a sample, the better s^2 represents the population variance, and the better the t statistic approximates the z-score.
-As df gets very large, the t distribution gets closer in shape to a normal z-score distribution.
-With small df, t statistics are more variable than z-scores, which is why the t distribution is flatter and more spread out.
What are the 3 steps in calculating a t statistic?
1.) Calculate the sample variance: s^2 = SS/df
2.) Use the sample variance to compute the estimated standard error: s_M = sqrt(s^2/n)
3.) Compute the t statistic: t = (M - μ)/s_M
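The three steps can be traced end to end with a small worked example; SS, n, M, and μ below are hypothetical values picked for clean arithmetic:

```python
import math

# Hypothetical data: SS = 162, n = 10, sample mean M = 13, hypothesized mu = 10
SS, n, M, mu = 162, 10, 13, 10

df = n - 1                   # degrees of freedom = 9
s2 = SS / df                 # step 1: sample variance, s^2 = SS/df = 18.0
s_M = math.sqrt(s2 / n)      # step 2: estimated standard error = sqrt(1.8)
t = (M - mu) / s_M           # step 3: t statistic
print(round(t, 2))
```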
What are the two basic assumptions that are necessary for hypothesis tests with the t statistic?
1.) The values in the sample must consist of independent observations, meaning the occurrence of the first event has no effect on the probability of the second event.
2.) The population sampled must be normal
What is the difference between Cohen's d and the estimated d? When do we use it?
Cohen's d measures effect size in terms of the population mean difference and the population standard deviation. When the population mean with treatment and the standard deviation are unknown, we replace them with the mean for the treated sample and the sample standard deviation. This gives us the "estimated d".

*An estimated d of 1.00 indicates that the size of the treatment effect is equivalent to one standard deviation.*
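The estimated d is a one-line ratio; the values below are hypothetical, chosen so the result lands exactly on the one-standard-deviation benchmark from the note above:

```python
def estimated_d(M, mu, s):
    # estimated d = (sample mean - hypothesized population mean) / sample SD
    return (M - mu) / s

# Hypothetical: M = 13, mu = 10, s = 3  ->  d = 1.0 (a one-SD treatment effect)
print(estimated_d(13, 10, 3))
```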
What is an alternative method for measuring effect size?
Determining how much of the variability in the scores is explained by the treatment effect. By doing this we will obtain a measure of the size of the treatment effect.
What is r^2?
This is the percentage of variance accounted for by the treatment. It is computed as r^2 = t^2/(t^2 + df).

*A less efficient way of computing this is to take the difference between the two SS values (the variability explained by the treatment effect) and divide it by the total variability (the total SS).*
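The t-based formula for r^2 is easy to sanity-check numerically; the t and df values here are hypothetical:

```python
def r_squared(t, df):
    # r^2 = t^2 / (t^2 + df): proportion of variance accounted for
    return t ** 2 / (t ** 2 + df)

# Hypothetical: t = 3.00 with df = 9  ->  r^2 = 9 / (9 + 9) = 0.5
print(r_squared(3.0, 9))
```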
What r^2 values are considered to be a small, med, and large effect?
r^2=0.01 is a small effect
r^2=0.09 is a medium effect
r^2=0.25 is a large effect
What are some factors that will decrease the likelihood of rejecting the null hypothesis?
-Increasing the standard error
-Increasing the sample variance, which would increase the estimated standard error

*In general large variance means that you are less likely to obtain a significant treatment effect*
How is the sample size (n) related to the estimated standard error?
They are inversely related!! The larger the sample, the smaller the error.
-This means that if all other factors are held constant, larger samples will produce bigger t statistics and increase the likelihood of rejecting the null hypothesis!!
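The inverse relationship is visible by holding the sample variance fixed and growing n; the variance value here is hypothetical:

```python
import math

s2 = 36  # hypothetical sample variance, held constant
for n in (4, 16, 64):
    s_M = math.sqrt(s2 / n)  # estimated standard error shrinks as n grows
    print(n, s_M)            # 4 -> 3.0, 16 -> 1.5, 64 -> 0.75
```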
Does sample size affect Cohen's d or r^2?
No!! Cohen's d is not influenced by sample size, and r^2 is only slightly affected by changes in the size of the sample.
Under what circumstances do you use the Independent measures t-test?
This is used when the study involves two separate sample groups.
- The goal of an independent-measures research study is to evaluate the mean difference between two populations (or two treatment conditions). It uses the difference between two sample means to evaluate the difference between two population means.
What is the purpose of pooled variance and what does it correct for?
This is used when the sample sizes are unequal; it corrects for the bias that comes from simply averaging the two sample variances as if the sample sizes were equal.
-Since the larger sample has a larger df value it will carry more weight when averaging the two variances.
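The df weighting happens automatically when pooled variance is written in its SS/df form; the SS and n values below are hypothetical:

```python
# Hypothetical samples: sample 1 has SS1 = 50, n1 = 6; sample 2 has SS2 = 30, n2 = 4
SS1, n1 = 50, 6
SS2, n2 = 30, 4
df1, df2 = n1 - 1, n2 - 1

# Pooled variance: summing SS and df separately weights each
# sample variance by its df, so the larger sample counts more.
sp2 = (SS1 + SS2) / (df1 + df2)   # (50 + 30) / (5 + 3) = 10.0
print(sp2)
```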
How do you compute the df for an independent measures t-test?
By simply adding the df of sample 1 to the df of sample 2: df = df1 + df2 = (n1 - 1) + (n2 - 1)
In the overall context of an Independent-measures test what can the t-statistic be reduced to?
t= (data - hypothesis)/error
What are the steps in determining a t-statistic using independent measures?
1.) state the hypothesis and select the alpha level
2.) find the df and the critical region
3.) a. find the pooled variance
b. use the pooled variance to compute the estimated standard error
c. compute the t statistic
4.) Make a decision
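Steps 2 and 3 above can be sketched with hypothetical sample summaries (the means, SS values, and sample sizes are invented for illustration):

```python
import math

# Hypothetical samples: M1 = 8, SS1 = 50, n1 = 6; M2 = 4, SS2 = 30, n2 = 4
M1, SS1, n1 = 8, 50, 6
M2, SS2, n2 = 4, 30, 4

df = (n1 - 1) + (n2 - 1)                  # step 2: df = df1 + df2 = 8
sp2 = (SS1 + SS2) / df                    # step 3a: pooled variance = 10.0
s_M1M2 = math.sqrt(sp2 / n1 + sp2 / n2)   # step 3b: estimated standard error
t = (M1 - M2) / s_M1M2                    # step 3c: t statistic
print(df, round(t, 2))
```

Step 4 would then compare t against the critical value for df = 8 at the chosen alpha level.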
What is the estimated d for an independent measures research study?
The estimated d = estimated mean difference / estimated standard deviation (which is the square root of the pooled variance)
How do you compute r^2 for independent measures?
The calculation is exactly the same as it was for the single sample t.
What is the homogeneity of variance?
This is the assumption that the two populations being compared must have the same variance. Without satisfying this requirement, you cannot accurately interpret a t statistic, and the hypothesis test becomes meaningless.
How do you know if the homogeneity of variance assumption has been satisfied?
You can use Hartley's F-max test! This test can check the homogeneity of variance across two or more independent samples.
What is the F-max test anyway?
It is based on the principle that a sample variance provides an unbiased estimate of the population variance.
- To compute it, first compute the sample variance for each sample, then divide the largest sample variance by the smallest to get your F-max value.
*A large value indicates a large difference between the sample variances, while a small value (near 1.00) indicates that the sample variances are similar and that the homogeneity assumption is reasonable.*
- Once the F-max value is computed, you need to find the critical region in Table B.3.
- To do this you need to know:
a. k = the number of separate samples
b. the df for each sample variance (the Hartley test assumes that all samples are the same size)
*Also note that this test would generally use the larger alpha level.*
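The F-max ratio itself is a one-liner; the three sample variances below are hypothetical:

```python
# Hypothetical sample variances from three samples
variances = [12.0, 9.5, 10.8]

# F-max = largest sample variance / smallest sample variance
f_max = max(variances) / min(variances)
print(round(f_max, 2))   # a value near 1.00 supports the homogeneity assumption
```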
Why use the dependent measures t test? what are the advantages?
The main advantage of a repeated-measures study is that it uses the same individuals in all treatment conditions.

-Some advantages:
1. The repeated-measures design typically requires fewer subjects than an independent-measures design.
2. A researcher can better study and observe behaviors that change or develop over time.
3. It reduces or eliminates problems caused by individual differences such as IQ, age, gender, etc.
Under what circumstances do you use the dependent measures t test?
You use this when one group of subjects is tested under two different conditions.
What is the difference between the repeated-measures and matched-subjects designs?
The repeated-measures design uses the same subjects in all treatment conditions.
-The matched-subjects design tries to match subjects so that they are close to equivalent with respect to a specific variable that the researcher would like to control.
*The problem with this is that the more variables you try to control, the less likely it is that you will find subjects that fit the criteria.*
Under what circumstances should you avoid using the dependent measures t-test?
The primary disadvantage of this design is that it allows time related factors and order effects to change the participants score from one treatment to the next.
- You should NOT use a repeated-measures design if there is any reason to expect strong time-related effects or strong order effects. Your best strategy would be to use an independent-measures design (or matched subjects) so each individual participates in only one treatment and is measured only one time.
How do you compute df for a dependent measures test?
You only have one group of subjects, so it is simply df = n - 1.

*n refers to the number of D scores, not the number of X scores (D scores being the differences between each subject's X scores across the two conditions).*
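The D-score logic can be traced with a small hypothetical before/after data set (all scores invented for illustration):

```python
import math

# Hypothetical before/after scores for one group of n = 4 subjects
before = [10, 8, 12, 9]
after = [13, 10, 15, 14]

D = [a - b for a, b in zip(after, before)]   # D scores: [3, 2, 3, 5]
n = len(D)
MD = sum(D) / n                              # mean difference = 3.25
SS = sum((d - MD) ** 2 for d in D)           # SS of the D scores
df = n - 1                                   # df = n - 1 = 3
s2 = SS / df                                 # sample variance of the D scores
s_MD = math.sqrt(s2 / n)                     # estimated standard error of MD
t = MD / s_MD                                # tests H0: mu_D = 0
print(df, round(t, 2))
```

Note that n and df come from the four D scores, not the eight X scores.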