47 Cards in this Set

What are the differences between experimental and nonexperimental research?
-Experimental research is when at least one IV is manipulated and controlled by the researcher, often within a laboratory. -Nonexperimental research exerts no control over the IV; the IV is simply defined as it occurs.
Describe continuous, discrete, and dichotomous data.
-Continuous data are measured on a scale that changes values smoothly rather than in steps. -Discrete data take on a finite and usually small number of values, with no smooth transitions between them.
-Dichotomous data take on only two values; they can still have linear relationships with other variables.
What are the differences between descriptive and inferential statistics?
Descriptive statistics describe samples of subjects in terms of variables or combinations of variables. Inferential statistics test hypotheses about differences in populations on the basis of measurements made on samples of subjects.
What is a factorial design?
A design that consists of two or more factors, each with two or more levels, in which all possible combinations of the factor levels are included.
Why calculate the correlation between two variables?
Correlation is a measure of the size and direction of the linear relationship between two variables; it is used to measure the association between them. A researcher may want to look at the strength of the relationship in situations where the IV-DV distinction is blurred.
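As a rough illustration (not part of the original card), Pearson r can be computed directly with NumPy; the variables x and y below are hypothetical stand-ins:

```python
import numpy as np

# Hypothetical scores on two variables.
x = np.array([2.0, 4.0, 5.0, 7.0, 9.0])
y = np.array([1.0, 3.0, 4.0, 8.0, 9.0])

# Pearson r: the off-diagonal element of the 2 x 2 correlation matrix.
r = np.corrcoef(x, y)[0, 1]
print(f"r = {r:.3f}")
```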
Make a distinction between prediction and explanation.
Prediction is concerned with forecasting scores on the DV from the IVs, without necessarily knowing why they are related. Explanation goes further, addressing in depth how and why the IVs, DVs, and any other relevant variables are related.
What are Inflated Correlations?
Correlations between composite variables are inflated when the composites share items; if using composite variables, make sure each individual item appears in only one of them.
What are Deflated Correlations?
Sample correlations may be smaller than population correlations when there is restricted range in the sampling of cases or very uneven splits in dichotomous variables
SPSS: Steps to find Univariate Outliers
-Analyze/Descriptive Statistics/Frequencies
-Click on Statistics and select appropriate statistics
-Click on Charts and select Histogram with Normal Curve
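Outside SPSS, the same univariate screening can be sketched by standardizing each variable and flagging extreme z scores; the data below are simulated, and the |z| > 3.29 cutoff is the usual screening convention:

```python
import numpy as np

rng = np.random.default_rng(0)
scores = rng.normal(loc=15, scale=2, size=200)  # hypothetical variable
scores[10] = 44.0                               # plant one extreme case

# Standardize and flag cases whose |z| exceeds 3.29 (p < .001, two-tailed).
z = (scores - scores.mean()) / scores.std(ddof=1)
print("flagged case indices:", np.where(np.abs(z) > 3.29)[0])
```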
SPSS: Multivariate Outliers - cases with unusual combination of scores on two or more variables
-Analyze/Regression/Linear
-Variable in DV, Enter all IV's
-Stats, click collinearity
-Paste
-At the end of syntax, add /RESIDUALS=OUTLIERS(MAHAL).
-Run
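Conceptually, the OUTLIERS(MAHAL) request computes each case's Mahalanobis distance from the centroid of the IVs and compares it with a chi-square critical value (df = number of IVs, alpha = .001). A rough Python sketch of that idea, with simulated IVs:

```python
import numpy as np
from scipy.stats import chi2

rng = np.random.default_rng(1)
X = rng.normal(size=(100, 3))   # hypothetical IVs: 100 cases, 3 variables
X[5] = [4.0, -4.0, 4.0]         # plant an unusual combination of scores

# Mahalanobis distance of each case from the centroid of the IVs.
diff = X - X.mean(axis=0)
inv_cov = np.linalg.inv(np.cov(X, rowvar=False))
d2 = np.einsum("ij,jk,ik->i", diff, inv_cov, diff)

# Flag cases beyond the chi-square critical value at alpha = .001.
critical = chi2.ppf(0.999, df=X.shape[1])
print("multivariate outlier case indices:", np.where(d2 > critical)[0])
```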
Nonlinearity
-r only captures the linear relationship between variables
-Assess using bivariate scatterplots
Homoscedasticity
-The variability of scores for one continuous variable is roughly the same at all values of another continuous variable
-Assess using bivariate scatterplots
Multicollinearity
The variables are highly correlated (.90 and above)
Singularity
The variables are redundant—one of the variables is the combination of two or more of the other variables
Variance Inflation Factor (VIF) = 1/Tolerance

Tolerance is the proportion of variance in a predictor that cannot be accounted for by the other predictors (1 - R-squared)
VIF>10 needs investigation
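Tolerance and VIF can be reproduced by regressing each predictor on the remaining predictors; a minimal sketch with made-up predictors (x3 is built to be nearly redundant with x1):

```python
import numpy as np

def tolerance_and_vif(X):
    """Tolerance = 1 - R^2 of each predictor regressed on the others; VIF = 1 / Tolerance."""
    n, k = X.shape
    results = []
    for j in range(k):
        y = X[:, j]
        others = np.column_stack([np.ones(n), np.delete(X, j, axis=1)])
        beta, *_ = np.linalg.lstsq(others, y, rcond=None)
        r2 = 1 - (y - others @ beta).var() / y.var()
        tol = 1 - r2
        results.append((tol, 1 / tol))
    return results

rng = np.random.default_rng(2)
x1 = rng.normal(size=50)
x2 = rng.normal(size=50)
x3 = 0.9 * x1 + 0.1 * rng.normal(size=50)   # nearly redundant with x1
for name, (tol, vif) in zip(["x1", "x2", "x3"],
                            tolerance_and_vif(np.column_stack([x1, x2, x3]))):
    print(f"{name}: tolerance = {tol:.3f}, VIF = {vif:.2f}")
```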
Multiple Regression
-Used to assess the relationship between one dependent variable (DV) and several independent variables (IVs)
-Used for prediction purposes
-An extension of bivariate regression
-The prediction equation is Y' = A + B1X1 + B2X2 + ... + BkXk
-Y' is the predicted value on the DV
-A is the Y intercept (the value of Y when all the X values are zero)
-Bs are the coefficients assigned to each of the IVs during regression
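As an illustration of the prediction equation (with simulated data, not anything from the cards), ordinary least squares recovers A and the Bs:

```python
import numpy as np

rng = np.random.default_rng(3)
n = 100
X = rng.normal(size=(n, 2))     # two hypothetical IVs
y = 1.5 + 2.0 * X[:, 0] - 0.5 * X[:, 1] + rng.normal(scale=0.5, size=n)

# Solve Y' = A + B1*X1 + B2*X2 by least squares (column of ones gives A).
design = np.column_stack([np.ones(n), X])
coef, *_ = np.linalg.lstsq(design, y, rcond=None)
A, B = coef[0], coef[1:]
y_pred = design @ coef          # Y', the predicted values on the DV
print("A =", round(A, 2), " Bs =", np.round(B, 2))
```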
Sequential Multiple Regression
The order of entry is specified by the researcher
Each IV is assessed in terms of what it adds to the equation at its own point of entry
Commonality Analysis
-A method of variance partitioning aimed at identifying the proportions of variance in a dependent variable attributable to each of several independent variables
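Both the R-squared-change idea from the previous card and the commonality partition can be sketched with the same R-squared helper; the data and variable names below are hypothetical:

```python
import numpy as np

def r2(X, y):
    """R^2 from regressing y on the columns of X (intercept included)."""
    design = np.column_stack([np.ones(len(y)), X])
    coef, *_ = np.linalg.lstsq(design, y, rcond=None)
    return 1 - (y - design @ coef).var() / y.var()

rng = np.random.default_rng(4)
x1 = rng.normal(size=100)
x2 = 0.5 * x1 + rng.normal(size=100)          # x1 and x2 share variance
y = 0.7 * x1 + 0.4 * x2 + rng.normal(size=100)

r2_x1 = r2(x1[:, None], y)                    # x1 entered alone
r2_x2 = r2(x2[:, None], y)
r2_full = r2(np.column_stack([x1, x2]), y)    # both IVs in the equation

# Sequential entry: what x2 adds at its own point of entry (after x1).
print("R2 change for x2:", round(r2_full - r2_x1, 3))

# Commonality partition: unique portions and the portion shared by x1 and x2.
print("unique to x1:", round(r2_full - r2_x2, 3))
print("unique to x2:", round(r2_full - r2_x1, 3))
print("common:      ", round(r2_x1 + r2_x2 - r2_full, 3))
```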
Centering
-When examining interactions between IVs
-Must convert IVs to deviation scores
-Mean of zero
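A minimal sketch of centering before forming an interaction term (the raw scores are made up):

```python
import numpy as np

rng = np.random.default_rng(5)
x1 = rng.normal(loc=50, scale=10, size=100)   # hypothetical raw IVs
x2 = rng.normal(loc=5, scale=2, size=100)

# Center each IV (deviation scores, mean of zero), then form the interaction.
x1_c = x1 - x1.mean()
x2_c = x2 - x2.mean()
interaction = x1_c * x2_c
print("centered means:", round(x1_c.mean(), 12), round(x2_c.mean(), 12))
```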
Skewness Formula
z = (S - 0) / SE, where S is the obtained skewness and SE is its standard error (the parallel test for kurtosis is z = (K - 0) / SE)
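A sketch of these significance tests against zero, using the common large-sample standard-error approximations sqrt(6/N) for skewness and sqrt(24/N) for kurtosis (the data are simulated and deliberately skewed):

```python
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(6)
x = rng.exponential(size=200)   # hypothetical, clearly skewed variable
n = len(x)

# z tests of skewness and kurtosis against the null value of zero.
z_skew = (skew(x) - 0) / np.sqrt(6 / n)
z_kurt = (kurtosis(x) - 0) / np.sqrt(24 / n)   # excess kurtosis (normal = 0)
print(f"z_skew = {z_skew:.2f}, z_kurt = {z_kurt:.2f}")
```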
Steps when Screening Data
-Check for out-of-range values
-Verify that means and standard deviations are plausible
-Look for univariate outliers
Methods for Handling Missing Data
a. Use prior knowledge
b. Mean substitution
c. Regression
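Mean substitution (option b) is the easiest to sketch; with a hypothetical pandas DataFrame, each variable's missing values are replaced by that variable's own mean:

```python
import numpy as np
import pandas as pd

# Hypothetical data with a few missing scores.
df = pd.DataFrame({"x1": [3.0, np.nan, 5.0, 4.0],
                   "x2": [10.0, 12.0, np.nan, 14.0]})

# Mean substitution: fill each column's missing values with its own mean.
print(df.fillna(df.mean()))
```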
When screening data, you should identify nonnormal variables, including checking for skewness and kurtosis. The significance tests for both skewness and kurtosis test the obtained value against the null hypothesis of zero. Which distribution is used to compare the obtained values with zero?
The z Distribution, p.80
When checking for multicollinearity and singularity, we basically…
Check to see if the variables are very highly correlated ( .90 and above, multicollinearity) and if the variables are redundant (singularity) p. 88
ANCOVA is an extension of ANOVA in which…
Main effects and interactions of IVs are assessed after DV scores are adjusted for differences associated with one or more covariates
A 2 x 3 x 4 factorial design has how many total main effects and interactions?
7 (3 main effects + 3 two-way interactions + 1 three-way interaction)
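The count depends only on the number of factors: with k factors there are 2^k - 1 effects in total. A quick check for the three-factor case:

```python
from itertools import combinations

factors = ["A", "B", "C"]   # the 2 x 3 x 4 design has three factors
effects = [c for r in range(1, len(factors) + 1)
           for c in combinations(factors, r)]
print(len(effects), effects)   # 7: three main effects, three two-way, one three-way
```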
Logistic regression compares a model containing the constant and the predictors with a model that contains only the constant.

True or False
True
The usefulness of predictors in a logistic regression can be assessed with:
Wald test
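Outside SPSS, both ideas (the comparison against a constant-only model and the Wald test for each predictor) can be sketched with statsmodels' Logit on simulated data; this is only a stand-in for the SPSS binary logistic procedure:

```python
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(7)
n = 300
X = rng.normal(size=(n, 2))                                   # two hypothetical predictors
p = 1 / (1 + np.exp(-(0.5 + 1.2 * X[:, 0] - 0.8 * X[:, 1])))
y = rng.binomial(1, p)                                        # hypothetical binary DV

model = sm.Logit(y, sm.add_constant(X)).fit(disp=False)

# Constant-plus-predictors model vs constant-only model (likelihood-ratio test).
print("LR chi-square:", round(model.llr, 2), " p =", round(model.llr_pvalue, 4))

# Wald test for each coefficient: (B / SE)^2.
print("Wald statistics:", np.round((model.params / model.bse) ** 2, 2))
```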
Logistic regression is relatively free of restrictions

True or False
True
What are the methods of extraction in Factor Analysis
a. Principal components
b. Principal axis factoring
c. Maximum likelihood
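Of the three methods, principal components is the simplest to sketch: it is an eigendecomposition of the correlation matrix R (principal axis factoring and maximum likelihood require iterative estimation and are not shown). The four observed variables below are simulated:

```python
import numpy as np

rng = np.random.default_rng(8)
latent = rng.normal(size=200)   # one hypothetical underlying dimension
X = np.column_stack([latent + rng.normal(scale=s, size=200)
                     for s in (0.5, 0.6, 0.7, 0.8)])

# Principal components extraction: eigendecomposition of the correlation matrix.
R = np.corrcoef(X, rowvar=False)
eigenvalues, eigenvectors = np.linalg.eigh(R)
order = np.argsort(eigenvalues)[::-1]                 # largest eigenvalue first
eigenvalues, eigenvectors = eigenvalues[order], eigenvectors[:, order]

# Loadings of variables on components; components with eigenvalue > 1 are retained.
loadings = eigenvectors * np.sqrt(eigenvalues)
print("eigenvalues:", np.round(eigenvalues, 2))
print("loadings on the first component:", np.round(loadings[:, 0], 2))
```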
In Factor Analysis, rotation is used to...
Maximize high correlations between factors and variables and minimize low ones p.612
Factor Analysis is useful because...
It can help the researcher understand which variables form coherent subsets that are relatively independent of one another p.612
MANOVA
-a generalization of ANOVA to a situation in which there are several DVs
-Protects against inflated Type I error due to multiple tests of the likely correlated DVs
-MANCOVA is the multivariate extension of ANCOVA
SPSS: Correlations
1. Analyze
2. Correlate
3. Bivariate
4. Variables
5. Paste, Run
SPSS: Commonality Analysis
1. Analyze, Correlate, Bivariate
2. Enter IV's & then DV (All in order)
Ex. (order = X1 X2 X3 Y)
3. Paste & Run
SPSS: Regression Analysis
1. Analyze, Regression, Linear
2. Enter DV & IV's
3. Paste, Run
SPSS: ANCOVA
1. Analyze, General Linear Model, Univariate
2. Enter DV into DV
3. Enter IV into Fixed
4. Enter Covariate
5. Paste, Run
Rotation for Factor Analysis
-Used after extraction to maximize high correlations between factors & Variables & minimize low ones
-Used to improve the interpretability & scientific utility of the solution
SPSS: MANOVA
1. Analyze, Gen Linear, Multivariate
2. Enter DV into DV
3. Enter IV into Fixed
4. Enter Covariate
5. Paste & Run
SPSS: Discriminant Analysis
1. Analyze, Classify, Discriminant
2. Grouping Variable = DV
3. Enter IV's in IV
4. Click Classify, Select Compute from Group Sizes, Click Summary Table
5. Paste, Run
-Purpose: do the IVs help predict group membership (the DV)?
SPSS: Logistic Regression
1. Analyze, Regression, Binary Logistic
2. Enter DV in DV
3. Enter IV's into Covariates
4. Click Options, select classification cutoff (0.5)
SPSS: Factor Analysis
1. Analyze, Dimension Reduction, Factor ...
2. Select Variables & enter into Variables
3. Click Descriptives, initial solution, coefficients, significance levels, KMO & Bartlett, Continue
4. Click Extraction, Method=Principal Axis Factoring, Analyze=Correlation Matrix, Display=Click boxes, unrotated factor solution & Scree Plot, Click Based on Eigenvalue=1, Max Convergence =100, Continue
5. Click Rotation, Usually Varimax, Click Rotated Solution, Max Convergence = 100
6. Scores, Save as variables=Regression,
7. Options, Exclude Case Listwise, Sorted by size, suppress Small coefficients (.30)
Power
Probability that effects that actually exist have a chance of producing statistical significance
Overfitting
The solution fits the sample so well that it is unlikely to generalize to other samples
Correlation Matrix = R
R is a square, symmetrical matrix
Variance
Averaged squared deviations of each score from the mean of the scores
Covariances
Averaged cross-products (products of the deviation between one variable and its mean and the deviation between a second variable and its mean).
-Covariances retain information about the scales on which the variables are measured
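A small numeric check of both definitions (the scores are made up, and "averaged" uses the usual N - 1 sample form):

```python
import numpy as np

x = np.array([2.0, 4.0, 6.0, 8.0])   # hypothetical scores on two variables
y = np.array([1.0, 3.0, 2.0, 6.0])

# Variance: averaged squared deviations of each score from the mean.
var_x = np.sum((x - x.mean()) ** 2) / (len(x) - 1)

# Covariance: averaged cross-products of deviations from the two means.
cov_xy = np.sum((x - x.mean()) * (y - y.mean())) / (len(x) - 1)

print(var_x, cov_xy)   # matches np.var(x, ddof=1) and np.cov(x, y)[0, 1]
```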