54 Cards in this Set

  • Front
  • Back
You have a large data set you are using to predict college GPA from high school GPA and SAT scores. You've done a multiple regression analysis. What coefficient would you choose to examine the following
questions? (b, beta, partial correlation, semipartial correlation, multiple r, any coefficient squared, etc.)

By how much does college GPA increase for every 1-point increase in SAT scores?
use the unstandardized b

(because it is in the original unit of measure)
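Note: a minimal Python sketch with made-up data (the variable names, sample size, and the "true" SAT weight of 0.001 are assumed purely for illustration) showing that the unstandardized b for SAT is read off the fitted equation in original units, GPA points per SAT point.

    import numpy as np

    rng = np.random.default_rng(0)
    n = 200
    hs_gpa = rng.normal(3.2, 0.4, n)
    sat = rng.normal(1100, 150, n)
    college_gpa = 0.5 * hs_gpa + 0.001 * sat + rng.normal(0, 0.3, n)

    X = np.column_stack([np.ones(n), hs_gpa, sat])      # intercept, HS GPA, SAT
    b = np.linalg.lstsq(X, college_gpa, rcond=None)[0]  # unstandardized weights
    print("unstandardized b for SAT:", b[2])            # ~0.001 GPA points per SAT point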
You have a large data set you are using to predict college GPA from high school GPA and SAT scores. You've done a multiple regression analysis. What coefficient would you choose to examine the following
questions? (b, beta, partial correlation, semipartial correlation, multiple r, any coefficient squared, etc.)

Controlling for high school GPA, what proportion of the total variance in college GPA is explained by
SAT scores?
use the semipartial correlation coefficient squared (sr^(2))

(because the question asks for a proportion of the total variance)
You have a large data set you are using to predict college GPA from high school GPA and SAT scores. You've done a multiple regression analysis. What coefficient would you choose to examine the following
questions? (b, beta, partial correlation, semipartial correlation, multiple r, any coefficient squared, etc.)

Is high school GPA or SAT scores a stronger predictor of college GPA in the model?
standardized b (beta), sr^(2), or pr^(2)
You have a large data set you are using to predict college GPA from high school GPA and SAT scores. You've done a multiple regression analysis. What coefficient would you choose to examine the following
questions? (b, beta, partial correlation, semipartial correlation, multiple r, any coefficient squared, etc.)

What proportion of variance in college GPA is predicted by the optimal linear combination of high
school GPA and SAT scores?
multiple R^(2)
You have a large data set you are using to predict college GPA from high school GPA and SAT scores. You've done a multiple regression analysis. What coefficient would you choose to examine the following
questions? (b, beta, partial correlation, semipartial correlation, multiple r, any coefficient squared, etc.)

After removing variance due to high school GPA, what proportion of the remaining variance in college
GPA can be explained by SAT scores?
pr^(2)
You have a large data set you are using to predict college GPA from high school GPA and SAT scores. You've done a multiple regression analysis. What coefficient would you choose to examine the following
questions? (b, beta, partial correlation, semipartial correlation, multiple r, any coefficient squared, etc.)

By how much does college GPA increase for every 1 standard deviation increase in SAT scores?
standardized b
Multiple R^(2) is always __ and can range from __ to __.
Multiple R^(2) is always positive and can range from 0 to 1.
Multiple R^(2) reflects the relationship between...
y and predicted y (y-hat); it is the square of the correlation between y and y-hat.
The most prestigious universities have the highest average faculty salaries, but professors who get offers
from two universities at once almost always find that the more prestigious university offers them the lower
salary. ("Surely, salary is not the only compensation. You'll have wonderful facilities and colleagues.")
Suppose prestigious universities find that this argument works and they can generally pay less than other
universities for a person with the same qualifications. What is the sign of the simple correlation between
prestige and salary? What is the sign of the partial correlation between prestige and salary, after removing variance associated with academic ranking?
1. The sign of the simple correlation between prestige and salary is positive.
2. The sign of the partial correlation between prestige and salary is negative.

Overall, prestigious universities pay more, but among universities of the same academic ranking, the more prestigious university pays less.
represents the proportion of the remaining variance in y (after we remove what is accounted for by the other predictor) that is explained by the predictor in question.
pr^(2)
represents the proportion of total variance in y that is uniquely associated with x1.
sr^(2)
If we remove the variance associated with the other predictor, this term represents how much of the remaining variance in y you can account for.
pr ^(2)
Which is almost always higher, pr^(2) or sr^(2)?
pr^(2) (because its denominator, the remaining variance, is smaller than the total variance)
Interpret the following:
pr^(2) = .25
After removing the variance associated with x2 (in this example), 25% of the remaining variance in y is associated with x1.
We take out all of the variance associated with x2. If we then ask, "Of the remaining variance, what proportion is accounted for by x1?", what is the term to which we are referring?
pr^(2)
What part of the variance is uniquely predicted by x1?
the semipartial correlation coefficient squared (sr^(2))
This term refers to the proportion of variance accounted for by the regression equation.
multiple R^(2)
Say that multiple R^(2) = .67. What does this mean?
67% of the variance in recovery scores can be accounted for by the optimal linear combination of optimism and severity (i.e., the two IVs in this example).
Describe how to interpret sr^(2).
sr^(2) represents the increase in multiple R^(2) when this predictor is added to the regression equation.
This represents the proportion of total variance in y that is uniquely accounted for by that predictor (it doesn't include overlapping variance).
sr^(2)
Describe how to interpret pr^(2).
The proportion of variance in Y not associated with X2 that is associated with X1.

Asks: How much of Y that is not predicted by X2 does X1 predict?
What is sr^(2)?
The squared semipartial correlation coefficient.

This tells us the increase in the proportion of explained Y variance when X1 is added to the regression analysis.
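Note: a minimal Python sketch (hypothetical data; variable names assumed) of sr^(2) computed as the increase in R^(2) when x1 is added to a model that already contains x2.

    import numpy as np

    def r_squared(X, y):
        # R^2 from an OLS fit, with an intercept column added
        X = np.column_stack([np.ones(len(y)), X])
        yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]
        return 1 - np.sum((y - yhat) ** 2) / np.sum((y - y.mean()) ** 2)

    rng = np.random.default_rng(1)
    n = 300
    x2 = rng.normal(size=n)
    x1 = 0.5 * x2 + rng.normal(size=n)            # correlated predictors
    y = 0.4 * x1 + 0.6 * x2 + rng.normal(size=n)

    r2_full = r_squared(np.column_stack([x1, x2]), y)
    r2_reduced = r_squared(x2.reshape(-1, 1), y)
    sr2_x1 = r2_full - r2_reduced                 # squared semipartial for x1
    print(round(sr2_x1, 3))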
What is pr^(2)?
This is the squared partial correlation coefficient.

This tells us the proportion of variance in Y, not explained by other predictors, that is explained by the predictor in question.
You have a large data set you are using to predict college GPA from high school GPA and SAT scores. You've done a multiple regression analysis. What coefficient would you choose to examine the following
questions? (b, beta, partial correlation, semipartial correlation, multiple r, any coefficient squared, etc.)

What is the correlation between Y and Y-hat?
multiple R (the multiple correlation coefficient); squaring it gives multiple R^(2), the proportion of variance explained.
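Note: a minimal Python sketch (hypothetical data) checking that the correlation between Y and Y-hat is multiple R, and that its square is multiple R^(2).

    import numpy as np

    rng = np.random.default_rng(2)
    n = 250
    x1, x2 = rng.normal(size=(2, n))
    y = 0.5 * x1 + 0.3 * x2 + rng.normal(size=n)

    X = np.column_stack([np.ones(n), x1, x2])
    yhat = X @ np.linalg.lstsq(X, y, rcond=None)[0]

    R = np.corrcoef(y, yhat)[0, 1]        # multiple R
    print(round(R, 3), round(R ** 2, 3))  # multiple R, multiple R^2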
You have a large data set you are using to predict college GPA from high school GPA and SAT scores. You've done a multiple regression analysis. What coefficient would you choose to examine the following
questions? (b, beta, partial correlation, semipartial correlation, multiple r, any coefficient squared, etc.)

What is the increase in R^(2) when you add SAT scores to a model already containing high school GPA?
sr^(2)
When drawing the Venn diagrams associated with MLR, we are always looking at variance in _____?
dependent variable
This is the proportion of variance in the DV (y) by which R^(2) will increase when we add a predictor to the model.
sr^(2)
The fundamental difference between sr^(2) and pr^(2) is...
With sr^(2), you CONTROL for the variance associated with one variable and ask what proportion of the TOTAL VARIANCE in y is uniquely accounted for by the other variable.

With pr^(2), you REMOVE the variance associated with one variable and ask what proportion of the REMAINING VARIANCE in y is accounted for by the other variable.
When determining which coefficient to use, what are some useful things to remember?
sr^(2) and pr^(2) ask for proportions of variance, either of the total (sr^(2)) or of what remains after removal (pr^(2)).

The standardized/unstandardized bs refer to how much the DV increases/decreases for every one-unit increase/decrease in the IV (standard deviations for standardized b, original units for unstandardized b).

Multiple R^(2) asks about the optimal linear combination of the predictors.
For every 1 SD increase in optimism scores, recovery scores increase by .559 SD, holding severity constant.

The sentence above describes an interpretation of what coefficient?
standardized b
For every 1 point increase in optimism scores, recovery scores increase by 1.31 points, holding severity constant.

The sentence above describes an interpretation of what coefficient?
unstandardized b.
If, in MLR, you are asked to compute the standardized/unstandardized regression weights using correlation formulas and interpret them, what general form will this take?
*For MLR, you will be given two X variables and one Y variable.

You will need to create 4 interpretations (the numbers below are example values):

Standardized:
For every 1 SD increase in X1, Y scores increase by .559 SD, holding X2 constant.

Unstandardized:
For every 1 point increase in X1, Y scores increase by 1.31 points, holding X2 constant.

Standardized:
For every 1 SD increase in X2, Y scores increase by .559 SD, holding X1 constant.

Unstandardized:
For every 1 point increase in X2, Y scores increase by 1.31 points, holding X1 constant.
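Note: a minimal Python sketch of the two-predictor correlation formulas for the regression weights; the correlations and standard deviations below are assumed illustrative values, not numbers from the course example.

    r_y1, r_y2, r_12 = 0.60, 0.50, 0.30      # r(Y,X1), r(Y,X2), r(X1,X2), assumed
    s_y, s_x1, s_x2 = 1.2, 0.8, 1.5          # standard deviations, assumed

    # standardized weights from the correlations
    b1_star = (r_y1 - r_y2 * r_12) / (1 - r_12 ** 2)
    b2_star = (r_y2 - r_y1 * r_12) / (1 - r_12 ** 2)

    # convert to unstandardized weights with the standard deviations
    b1 = b1_star * (s_y / s_x1)
    b2 = b2_star * (s_y / s_x2)

    print(b1_star, b2_star)  # per 1 SD change in X, holding the other X constant
    print(b1, b2)            # per 1 point change in X, holding the other X constant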
Interpret the following:
Multiple R^(2) = .67.
67% of the variance in recovery scores can be accounted for by our optimal linear combination of x1 and x2.
What is the Venn diagram formula for multiple R^(2)?
(a+b+c)/(a+b+c+e)
What is the Venn diagram formula for sr^(2)?
(a)/(a+b+c+e)
Represents the part of the variance that is uniquely predicted by one of the Xs on the Venn diagram.
a. Also note that a/(a+b+c+e) is the semipartial correlation squared, sr^(2).
What is the Venn diagram formula for pr^(2)?
With pr^(2), you take out all of the variance associated with the other X and look at what remains.

The Venn diagram formula is:
pr^(2) = a/(a+e)
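Note: a minimal Python sketch (illustrative correlations assumed) of sr^(2) and pr^(2) for x1 from the two-predictor correlation formulas; pr^(2) comes out larger because its denominator is the remaining variance (1 - r_y2^2) rather than the total variance.

    r_y1, r_y2, r_12 = 0.60, 0.50, 0.30      # assumed illustrative correlations

    sr2_x1 = (r_y1 - r_y2 * r_12) ** 2 / (1 - r_12 ** 2)   # unique share of TOTAL variance
    pr2_x1 = sr2_x1 / (1 - r_y2 ** 2)                      # share of REMAINING variance

    print(round(sr2_x1, 3), round(pr2_x1, 3))              # pr^2 >= sr^2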
this coefficient is dependent upon the scales of the variables.
unstandardized partial regression coefficient b
this coefficient represents the average change in y for every unit increase in x, holding the other predictors constant.
unstandardized partial regression coefficient b
this tells us the average change in the z-score for y for every 1 SD increase in x, holding the other predictors constant.
standardized partial regression coefficient b*
This allows us to interpret the relative magnitudes of the b* weights in terms of the relative importance of the predictors (thus, we can compare across predictors).
standardized partial regression coefficient b*
This coefficient answers the following question:
How much does this predictor add to the proportion of variance in y (which ranges from 0 to 1)?
squared semipartial correlation coefficient, sr^(2)
this tells us the increase in the proportion of explained y variance when x1 is added to the regression analysis.
squared semipartial correlation coefficient, sr^(2)
This tells us the proportion of variance in y, not explained by other predictors, that is explained by the predictor in question.
squared partial correlation coefficient, pr^(2)
Represents the proportion of variance in y NOT associated with x2 that is associated with x1.
pr^(2)
Increase in multiple R squared when this predictor is added to the regression equation.
sr^(2)
X1 is a measure of stress, X2 is a measure of negative life events, and Y is a measure of depressive symptoms. r_y1 = 0.50 and r_y2 = 0.70.

What is the proportion of variance in Y that is accounted for by each variable alone?
r_y1^(2) = 0.25
r_y2^(2) = 0.49
X1 is a measure of stress, X2 is a measure of negative life events, and Y is a measure of depressive symptoms. r_y1 = 0.50 and r_y2 = 0.70.

1. If X1 and X2 are uncorrelated with each other, what is the proportion of variance in Y that would be accounted for if both variables were included as predictors in a multiple regression analysis?
2. Would this proportion be the same if X1 and X2 were correlated with each other? Why or why not?
1. If they are uncorrelated, .25 + .49 = .74, so 74% of the variance in Y would be accounted for if both variables were included as predictors in the MLR.

2. This proportion would not be the same if X1 and X2 were correlated with each other, because correlated predictors share overlapping variance, so the overall proportion of variance accounted for would typically be smaller than the sum.
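Note: a minimal Python sketch using the card's correlations (r_y1 = 0.50, r_y2 = 0.70) and the standard two-predictor formula for R^(2); the predictor intercorrelation r_12 = 0.4 is just an assumed illustrative value.

    r_y1, r_y2 = 0.50, 0.70

    def multiple_r2(r_y1, r_y2, r_12):
        # standard two-predictor formula for multiple R^2
        return (r_y1 ** 2 + r_y2 ** 2 - 2 * r_y1 * r_y2 * r_12) / (1 - r_12 ** 2)

    print(multiple_r2(r_y1, r_y2, 0.0))   # 0.74 when the predictors are uncorrelated
    print(multiple_r2(r_y1, r_y2, 0.4))   # about 0.55: overlapping variance shrinks the total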
X1 is a measure of stress, X2 is a measure of negative life events, and Y is a measure of depressive symptoms. r_y1 = 0.50 and r_y2 = 0.70.

Imagine that you conducted an MLR analysis with X1 and X2 predicting Y. You found that the unstandardized b for X1 was 0.82. Interpret b in relation to the variables in the study.
For every 1-point increase in stress scores, scores on the measure of depressive symptoms will increase by 0.82 points, holding scores on the measure of negative life events constant.
X1 is a measure of stress, X2 is a measure of negative life events, and Y is a measure of depressive symptoms. r_y1 = 0.50 and r_y2 = 0.70.

Imagine that you conducted an MLR analysis with X1 and X2 predicting Y. You found that the squared semipartial correlation coefficient for X2 was 0.22. Interpret this value in relation to the variables.
Controlling for stress scores, the proportion of the total variance in depressive symptoms that can be explained by scores on a measure of negative life events is 0.22.
X1 is a measure of stress, X2 is a measure of negative life events, and Y is a measure of depressive symptoms. r_y1 = 0.50 and r_y2 = 0.70.

Imagine that you conducted an MLR analysis with X1 and X2 predicting Y. You found that the R^(2) value for the MLR was 0.43. Interpret R^(2) in relation to the variables in the study.
This means that 43% of the variance in scores on the measure of depressive symptoms can be explained by the optimal linear combination of stress and negative life event scores.