57 Cards in this Set
 Front
 Back
alpha slippage

Running more statistical tests in a research program creates more chances for Type 1 errors: the effective alpha across the whole set of tests "slips" above the nominal level.


effect size

A statistic that indexes the size of a relationship: 0 = no relationship between the variables, and larger (positive) effect sizes indicate stronger relationships.


p-value

Each statistic has an associated probability value that shows the likelihood of the observed statistic occurring on the basis of the sampling distribution. The p-value indicates how extreme the data are.


sampling distribution

The distribution of all possible values of a statistic. Each statistic has an associated sampling distribution.


significance level (alpha)

The standard that the observed data must meet. This is the probability that we will make a Type 1 Error. (Normally set to .05). Alpha sets the standard for how extreme the data must be before we can reject the null hypothesis


statistical power

The power of a statistical test is the probability that the researcher will, on the basis of the observed data, be able to reject the null hypothesis given that the null hypothesis is actually false and should be rejected. Power = 1 − β.
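As a concrete sketch (not from the cards), the power of a simple coin-guessing (binomial) test can be computed exactly in Python; the sample size n = 20 and the assumed true probability of .7 are illustrative choices:

```python
from math import comb

def binom_tail(n, k, p):
    """P(X >= k) for X ~ Binomial(n, p)."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# One-tailed test of H0: p = .5 with n = 20 guesses and alpha = .05.
n, alpha = 20, 0.05
# Critical value: smallest count whose tail probability under H0 is <= alpha.
k_crit = next(k for k in range(n + 1) if binom_tail(n, k, 0.5) <= alpha)

# Power: probability of reaching the critical region when the true
# probability of a correct guess is actually .7 (so H0 is false).
power = binom_tail(n, k_crit, 0.7)
beta = 1 - power  # Type 2 error probability; power = 1 - beta
```

With these numbers the test rejects only at 15 or more correct guesses, and power is only about .42, illustrating why small samples make Type 2 errors likely.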


binomial distributions

When each event has two possible outcomes (such as a correct or an incorrect guess), the sampling distribution of the count of outcomes is a binomial distribution, so a binomial distribution must be used.


how/why science is conservative

Because science would rather risk a Type 2 error (missing a real effect) than a Type 1 error (claiming an effect that does not exist), so alpha is set low.


hypothesis testing

Hypothesis testing is accomplished through a set of procedures designed to determine whether the observed data can be interpreted as providing support for the research hypothesis. These procedures, which are based on inferential statistics, are specified by the scientific method and are set in place before the scientist begins to collect data. Because it is not possible to directly test the research hypothesis, observed data are compared to what is expected under the null hypothesis. Since all data contain random error, scientists can never be sure that their observed data actually support their hypothesis.


the relationship between sample size and sampling distributions

As sample size increases, the sampling distribution becomes narrower. The tighter the distribution, the lower the chance of extreme values of the statistic.


factors affecting statistical power

The larger the sample size, the more statistical power the test has; the greater the effect size, the greater the statistical power.


alpha and beta (with respect to inferential errors)

Alpha refers to Type 1 error; beta refers to Type 2 error (the probability of missing a real finding).
Alpha is always designated at a certain value; the lower alpha is set, the higher beta becomes. Because science is conservative, a Type 1 error is considered more serious than a Type 2 error. Also, rejecting the null hypothesis doesn't necessarily mean the null is false. 

Descriptive statistics

numbers, such as the mean, median, mode, standard deviation, and variance, that summarize the distribution of a measured variable


Inferential Statistics

numbers, such as a p-value, that are used to specify the characteristics of a population on the basis of the data in a sample.


one- and two-tailed tests

A one-tailed test takes into account only one tail of the distribution; a two-tailed test considers both tails of a binomial distribution. With a two-tailed test, we take into consideration that unusual outcomes may occur in more than one way.
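A minimal Python sketch of the difference, using the coin-guessing example (15 correct out of 20 is an assumed illustrative result); for a symmetric binomial null distribution, the two-tailed p-value is simply double the one-tailed value:

```python
from math import comb

def upper_tail(n, k, p=0.5):
    """P(X >= k) for X ~ Binomial(n, p): the one-tailed p-value."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# 15 correct guesses out of 20 under H0: p = .5.
n, k = 20, 15
p_one = upper_tail(n, k)   # counts only "15 or more correct"
p_two = 2 * p_one          # also counts "15 or more incorrect" (other tail)
```

Here p_one ≈ .021 and p_two ≈ .041, so this result is significant at alpha = .05 either way, but the two-tailed test is the stricter standard.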


statistically significant and nonsignificant results

If the p-value is less than alpha (p < .05), we reject the null hypothesis and say that the result is STATISTICALLY SIGNIFICANT.
If the p-value is greater than alpha (p > .05), we fail to reject the null hypothesis and say that the result is STATISTICALLY NONSIGNIFICANT. 

the types of (statistical) inferential errors

Type 1 Error: rejecting the null hypothesis when it is really true.
Type 2 Error: failing to reject the null hypothesis when the null hypothesis is really false.
Examples of null hypotheses: for a coin-guessing experiment, the null hypothesis is that the probability of a correct guess is .5; for a correlational design, that there is no correlation between the two measured variables (basically, r = 0); for an experimental research design, that the mean score on the dependent variable is the same in all experimental groups. 

contingency table

A table that displays the number of individuals in each of the combinations of the two nominal variables.


multiple regression

A statistical technique for analyzing a research design in which more than one predictor variable is used to predict a single outcome variable.
Requires an extensive set of calculations (computer time). 

regression coefficients/beta weights

Statistics that show the relationship between one of the predictor variables and the outcome variable in a multiple regression analysis.


regression line/line of best fit

On a scatter plot, the line that minimizes the sum of the squared vertical distances of the points from the line; hence, the best-fit line.
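A minimal sketch of how the best-fit line is computed (the closed-form least-squares solution; the data points are made up for illustration):

```python
def least_squares(x, y):
    """Slope and intercept of the line minimizing squared vertical distances."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    slope = (sum((a - mx) * (b - my) for a, b in zip(x, y))
             / sum((a - mx) ** 2 for a in x))
    intercept = my - slope * mx  # the line passes through the point of means
    return slope, intercept

slope, intercept = least_squares([1, 2, 3, 4], [2.1, 3.9, 6.2, 7.8])
```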


reverse causation

In a correlational research design, it is the possibility that the outcome variable caused the predictor variable, rather than the predictor variable causing the outcome.
(e.g., Viewing Violent TV ← Aggressive Play) 

Chi-square statistic

Assesses the relationship between two nominal variables
(e.g., ethnicity and attitudes toward housing projects) 

how to interpret a correlation matrix

A symmetrical display of multiple correlations.
The correlations with asterisks next to them are the significant values. The information on the diagonal is not particularly useful, and the information below the diagonal is redundant with the information above the diagonal. It is general practice to report only the upper triangle of the correlation matrix in a research paper. 

how to interpret the Pearson r (and the other forms of correlation assessments discussed)

A significant r indicates that there is a linear association between the variables and thus that knowledge of a person's score on one variable can be used to predict his or her score on the other variable. However, a nonsignificant r does not necessarily mean that there is no systematic relationship between the variables: the correlation between two variables that have a curvilinear relationship is likely to be about zero, even though one variable can still be used to predict the other. This is a limitation of the correlation coefficient because, as we have seen, some important relationships are curvilinear.
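The curvilinear caveat is easy to demonstrate; in this Python sketch (illustrative data), a perfectly predictable y = x² relationship still yields r of about 0:

```python
from math import sqrt

def pearson_r(x, y):
    """Pearson correlation coefficient between two quantitative variables."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

linear = pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])   # perfect line
curve = pearson_r([-2, -1, 0, 1, 2], [4, 1, 0, 1, 4])   # y = x**2
```

linear comes out at 1.0 while curve comes out at 0.0, even though y is fully determined by x in both cases.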


how to report the results of Chi-square analyses

1. Construct a contingency table.
2. Consult the sampling distribution.
3. Compare the expected to the actual number of entries.
4. Calculate effect size using the phi/Cramér's statistic.
*Exception to the rule: use eyeballing to discern the pattern once significance has been determined.
Results are usually reported in the text of the report, e.g., [chi2(3, N = 300) = 45.78, p < .001], where 3 = degrees of freedom, 300 = sample size, and 45.78 = the chi2 statistic. 
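The observed-versus-expected comparison in steps 2–3 can be sketched in plain Python (the 2×2 counts are hypothetical):

```python
def chi_square(table):
    """Chi-square statistic and df for a contingency table (list of rows)."""
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    grand = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            # Expected count if the two nominal variables were independent.
            expected = row_totals[i] * col_totals[j] / grand
            chi2 += (observed - expected) ** 2 / expected
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

stat, df = chi_square([[30, 10], [20, 40]])  # rows: groups; columns: responses
```

The resulting statistic is then compared against the chi-square sampling distribution with the given degrees of freedom.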

Multiple Correlation Coefficient, R

a statistic that indicates the extent to which all of the predictor variables in a regression analysis are able, together, to predict the outcome variable


multiple correlation

Two pieces are needed for multiple regression: the multiple correlation coefficient, R, and the regression coefficients (beta weights).


restricted range

a reduction in the observed correlation between two variables that occurs when only scores within a limited range of one of the variables are considered


scatter plots

A graph showing the relationship between two quantitative variables, in which the horizontal axis indicates scores of the predictor variable, and the vertical axis indicates scores of the outcome variable.


common-causal, extraneous, and mediating variables

Common-causal: causes both the predictor and outcome variables and thus produces the observed correlation between them.
Extraneous: causes the outcome variable but does not cause the predictor variable.
Mediating: caused by the predictor variable and in turn causes the outcome variable (e.g., watching violent TV creates arousal, which increases aggression). 

linear and nonlinear relationships in correlational data

Linear relationships involve 2 quantitative variables that can be approximated with a straight line, while nonlinear relationships involve 2 quantitative variables that cannot be approximated with a straight line.


longitudinal and cross-sectional research designs

Longitudinal measures the same individuals more than one time, and the time period between the measurements is long enough that changes in the variables of interest could occur.
Cross-sectional designs make comparisons across different age groups, but all groups are measured at the same time. 

reverse and reciprocal causation

Reverse causation: the possibility that the outcome variable causes the predictor variable, rather than vice versa.
Reciprocal causation: the possibility that the predictor causes the outcome and the outcome also causes the predictor. 

statistical assessment of nominal versus quantitative variables

Although the correlation coefficient is used to assess the relationship between two quantitative variables, an alternative statistic, known as the chi-square statistic, must be used to assess the relationship between two nominal variables.


ANOVA

A statistical procedure designed to compare the means of the dependent variable across the conditions of an experimental research design


counterbalancing

arranging the conditions so that each condition occurs equally often in each position of the order (e.g., aggression studies, reaction-time studies)
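A small Python sketch of full counterbalancing (condition names and the participant count are made up): every ordering of the conditions is generated, and participants are assigned to orders in rotation so each order occurs equally often:

```python
from itertools import permutations

conditions = ["violent cartoon", "nonviolent cartoon", "no cartoon"]
orders = list(permutations(conditions))  # all 3! = 6 possible orderings

participants = [f"P{i}" for i in range(12)]
# Rotate through the orders so each one is used the same number of times.
assignment = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
```

With 12 participants and 6 orders, each order is used exactly twice, so no condition is systematically advantaged by its position.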


experimental levels/conditions

specific situations/values for the independent variable


oneway experimental designs

An experimental design testing only one independent variable
(e.g., violent TV vs. nonviolent TV) 

equivalence conditions

Creating equivalence among conditions controls for common-causal variables.


experimental manipulation

1. Divide the independent variable into levels/conditions (e.g., cartoon type: nonviolent vs. violent cartoons).
2. Set up equivalence among participants (this helps ensure control of common-causal variables).
3. Randomly assign participants from the same population to the conditions/levels (e.g., by flipping a coin, drawing numbers, or using a computer). 

the F statistic and what it is used for

F = between-groups variance / within-groups variance
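A minimal Python sketch of this ratio for a one-way design (the three groups of scores are illustrative):

```python
def f_statistic(groups):
    """F = between-groups variance / within-groups variance."""
    k = len(groups)                        # number of conditions
    n_total = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n_total
    # Between-groups: variability of the condition means around the grand mean.
    ss_between = sum(len(g) * (sum(g) / len(g) - grand_mean) ** 2
                     for g in groups)
    # Within-groups: variability of scores around their own condition mean.
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    return (ss_between / (k - 1)) / (ss_within / (n_total - k))

F = f_statistic([[3, 4, 5], [6, 7, 8], [9, 10, 11]])
```

When the condition means differ far more than scores vary within conditions, F is large (here 27.0) and the null hypothesis of equal means becomes implausible.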


how to read an ANOVA summary table

The first step in interpreting the results of an experiment is to inspect the table to determine whether the condition means are significantly different from each other. Next, look at the means of the dependent variable in the different conditions. The results in the summary table are only meaningful in conjunction with an inspection of the condition means.


random assignment (of participants)

The most common method of creating equivalence among the experimental conditions. It involves the researcher determining separately for each participant which level of the independent variable he or she will receive.


advantages and disadvantages of repeated measures designs

Advantages: 1. Greater statistical power than a between-participants design, because the responses of an individual in one condition can be directly compared to the same person's responses in another condition. 2. Economy of participants: more efficient because fewer participants are required.
Disadvantages: carryover effects (when you do something to somebody, it does not just go away), practice and fatigue (participants must be measured more than once), and the need for counterbalancing. 

disadvantages of restricting an experiment to only two levels/conditions

There are two main limitations:
1. Detecting nonlinear relationships: it is difficult to draw conclusions about the pattern of the relationship between the independent and dependent variables, because some relationships are curvilinear, such that increases in the independent variable cause increases in the dependent variable at some points but decreases at others (e.g., anxiety and performance).
2. It can be difficult to tell which of the two levels is causing a change in the measure; sometimes both levels have an effect (e.g., violent cartoons increase aggression, but nonviolent cartoons may also produce less aggression than a normal baseline). 

the different options for ‘control conditions’ in various types of experiments

Sometimes, they aren’t necessary or even used.


problems with experimental designs

1. Many variables can't be manipulated.
2. Low ecological validity.
3. Necessary simplification of the situation. 

the scientifically relevant factors affecting causality

1. Association: there must be a correlation between the independent variable and the dependent variable (X is more likely to occur given Y; e.g., childhood abuse and the likelihood of abusing one's own children).
2. Temporal priority: if event A occurs after event B, then A cannot cause B; often difficult to determine (e.g., violent TV and aggression; seeing a snake and heart rate).
3. Control of common-causal variables: association and temporal priority are necessary but not sufficient; the specification of equivalence conditions is used to maintain experimental control. 

types of equivalence designs

1. Between-subjects designs: different (but equivalent) people in each condition (compare groups).
2. Within-subjects designs: the same people in every condition (comparisons with self). 

experimental conditions versus control conditions

Experimental conditions are the levels of the independent variable in which the situation of interest was created; control conditions are the levels of the independent variable in which the situation of interest was not created.


the general (default) null versus alternative hypothesis for experimental designs

In experimental designs, the null hypothesis is that the mean score on the dependent variable is the same at all levels of the independent variable (except for differences due to chance). The research hypothesis states that there is a difference among the conditions and normally states the specific direction of those differences.
The ANOVA treats the null hypothesis in terms of the absence of variability among the condition means. If all the means are equivalent, then there should be no differences except those due to chance. If the experimental manipulation has influenced the dependent variable, then the condition means should not all be the same, and thus there will be significantly more differences than would be expected by chance. 