75 Cards in this Set


Mean:

The average of a set of numbers. Add together all of the numbers in the set, then divide the sum by the total count of numbers.

Median:

The statistical median is the middle number in a sequence of numbers. To find the median, organize each number in order by size; the number in the middle is the median.

For a set with an even number of values, add the two middle numbers and divide by two.

Mode:

The mode is the number that occurs most often within a set of numbers.

Range:

The range is the difference between the highest and lowest values within a set of numbers. To calculate range, subtract the smallest number from the largest number in the set.
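
The four definitions above can be sketched in a few lines of Python using the standard library's statistics module (the data values here are made up for illustration):

```python
from statistics import mean, median, mode

data = [4, 8, 6, 5, 3, 8, 9]  # made-up sample

print(mean(data))             # add all values, then divide by the count
print(median(data))           # middle value once sorted (3, 4, 5, 6, 8, 8, 9)
print(mode(data))             # the value that occurs most often
print(max(data) - min(data))  # range: largest minus smallest
```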

Standard Deviation (SD):

A measure used to quantify the amount of variation or dispersion in a set of data values.

Variance:

The expectation of the squared deviation of a random variable from its mean; informally, it measures how far a set of (random) numbers are spread out from their mean.
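
A quick sketch of both cards above, again with made-up values; note that pvariance and pstdev use the population (divide-by-N) formulas, while variance and stdev would divide by N - 1:

```python
from statistics import pstdev, pvariance

data = [2, 4, 4, 4, 5, 5, 7, 9]  # made-up sample, mean = 5

print(pvariance(data))  # population variance: mean squared deviation from 5
print(pstdev(data))     # population SD: square root of the variance
```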

Alternative hypothesis:

What we hope to support in our research; the opposite of the null hypothesis.

Alpha, α = .05:

The confidence level; we use 95% confidence, which corresponds to p < .05.

One-way ANOVA (analysis of variance):

An inferential parametric statistic used for comparing the means of three or more groups (three or more levels of one independent variable). We symbolize it with the letter F.

Repeated Measures (within-subjects) One-way ANOVA:

Each person is tested on all levels of the independent variable.

Parametric test:

CI, t, F

Percentile:

How someone did relative to the others in the group.

Post hoc test:

The LSD test: testing all possible pairs of groups to pinpoint which ones differ significantly from each other and which ones do not.

Prep:

A way to measure how likely we are to replicate the results of the study, ideally in a different context or with a sample that has different characteristics.

Positive skew:

A distribution whose peak is to the left of the center point, with the tail sloping off to the right.

Quasi-experimental method:

A research method in which the IV cannot be manipulated. Most often the variable is a characteristic of the people we are studying, such as gender. We are restricted to difference conclusions: we do not know whether the IV is the cause of any difference we observe in behavior, just that there is a difference (not cause and effect).

Ratio:

Numerical and has an absolute zero. Common examples include time and centimeters.

Raw score:

A data point that has not been transformed, analyzed, or changed in any way (hasn't been transformed or put into a t-score).

Sampling Error:

The amount of inaccuracy in data due to studying a sample rather than testing the entire population. We can estimate this after determining the strength of our independent variable.

Subject variable:

A characteristic of the people we are studying, such as their gender or age, which allows only difference conclusions (quasi-experimental).

Situation variable:

One we can manipulate, such as sending or not sending e-mails, which allows cause-and-effect conclusions (true experiment).

Standard Scores:

Raw scores converted into z- or T-scores.

Standard Normal Distribution:

A normal distribution with a mean of 0 and a standard deviation of 1. When raw scores are converted to this distribution they are called standard scores or z-scores. We use Z-scores for comparing scores from different distributions and primarily for calculating percentiles.
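
Converting a raw score to a z-score and then to a percentile can be sketched with the standard library's error function, which gives the exact normal CDF (the score, mean, and SD here are made up):

```python
from math import erf, sqrt

def z_score(x, mu, sigma):
    # Distance of a raw score from the mean, in SD units.
    return (x - mu) / sigma

def percentile_from_z(z):
    # Cumulative area under the standard normal curve (the normal CDF).
    return 0.5 * (1 + erf(z / sqrt(2)))

z = z_score(130, 100, 15)  # made-up IQ-style score: z = 2.0
print(round(percentile_from_z(z) * 100, 1))  # about the 97.7th percentile
```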

Standard error of the mean:

The standard deviation of the sampling distribution, which we can approximate using statistics from samples (sample size and standard deviation).
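
That approximation is one line of code; a sketch with a made-up sample:

```python
from statistics import stdev
from math import sqrt

sample = [23, 27, 31, 25, 29, 33, 24, 28]  # made-up scores

# Estimated SD of the sampling distribution: sample SD over the square root of n.
sem = stdev(sample) / sqrt(len(sample))
print(round(sem, 3))
```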

If the null is true?

We risk a Type II error (retain the null). NS.

When is a Type I error a chance?

With significance: reject the null, p < .05.

Cohen's d = 0:

Negligible, paltry, very little or none (strength, or effect). 99% overlap, which suggests the groups are highly similar.

Cohen's d = .2:

Weak, small, not much, a little bit, a smidgen of difference or effect. 20% overlap.

Cohen's d = .5:

Moderate, modest, some, a fair amount, a reasonable amount, average.

Cohen's d = .8:

Strong, lots, a large amount, a whole bunch, whopping, OMG this is big. (20% overlap).
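
Cohen's d itself is just the mean difference divided by the pooled standard deviation; a sketch with made-up groups:

```python
from statistics import mean, stdev
from math import sqrt

def cohens_d(group1, group2):
    # Mean difference divided by the pooled standard deviation.
    n1, n2 = len(group1), len(group2)
    s1, s2 = stdev(group1), stdev(group2)
    pooled = sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    return (mean(group1) - mean(group2)) / pooled

# A strong effect by the guidelines above (well past d = .8).
print(round(cohens_d([5, 6, 7, 8, 9], [3, 4, 5, 6, 7]), 2))
```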

John Tukey developed the box and whisker plot to display a data set. The whiskers extend from the box to show:

The minimum and maximum scores.

Jacob Cohen developed a set of guidelines to evaluate the differences between two groups, say, the differences between males and females on a measure of mathematical skill. What would a Cohen's d = .11 tell us?

Small difference, lots of overlap in the distributions of males and females.

If we reject the null hypothesis and claim an outcome is significant, what are the chances we have made a Type I error?

p < .05

If, on the NEO-FF, our class mean on the neuroticism scale is 28 with a 95% CI of [25, 31], the population mean is likely to be:

Inside the 95% confidence interval.

The number of scores in a data set that are free to vary is called:

Degrees of freedom

1 − η² is a good estimate of the amount of difference between groups due to:

Error or chance factors.

Which of the following is NOT a common technique in exploratory data analysis?

ANOVA

If the IV is having no effect on the DV in ANOVA, the F will be:

1

If the standard deviation for a set of scores is small, the scores:

are homogeneous.

We can make pair-wise comparisons between any two means in an analysis of variance using:

LSD tests

If you are in the 62nd percentile for agreeableness on the NEO-FF personality inventory, what does this say about you?

You are above average on the NEO-FF scale.

Gender, age, and ethnicity are good examples of:

Subject variables.

Determining a person's sex involves measurement on a _____

Nominal

Determining a person's reaction time involves a measurement on a _____

Interval/ratio

A recent report concludes that rats given vitamin supplements have better maze learning scores than rats on a regular diet. The IV is:

The type of diet

A recent study with college students reports that reaction times in the morning are faster than reaction times in the afternoon. For this report, reaction time is the:

Dependent variable.

If a population of N = 10 scores has a mean of 30 and a standard deviation of 4, then the variance equals:

16 (just square 4)

Suppose you earned a score of 40 on an exam. Which parameters would give you the highest grade?

Mean of 45 and S = 10

Which is not a measure of variability?

Mean

A positive deviation score indicates that the raw score is:

Greater than the mean

Regression equation:

Y = a + bX

confidence interval:

Ŷ ± 1.96(standard error of estimate)
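
The slope and intercept in Y = a + bX come from least squares; a minimal sketch with made-up, perfectly linear data:

```python
from statistics import mean

def fit_line(xs, ys):
    # Least-squares estimates of the intercept (a) and slope (b) in Y = a + bX.
    mx, my = mean(xs), mean(ys)
    b = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / sum((x - mx) ** 2 for x in xs)
    a = my - b * mx
    return a, b

a, b = fit_line([1, 2, 3, 4, 5], [2, 4, 6, 8, 10])  # intercept 0, slope 2
```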

Eta squared (η²) is to analysis of variance as ____ is to _____

coefficient of determination/correlation

In a 2 x 2 ANOVA, there are ____ separate null hypotheses.

3

In a positive relationship

as x increases, y increases; as x decreases, y decreases

Y can be most accurately predicted from x if the correlation between x and y is:

-.98 (or +.98)

If there is a positive correlation between x and y, then the regression equation Y = a + bX will have

b>0

For what scale of measurement is chi-square most readily suited?

nominal

As the differences between expected and observed frequencies increase

the likelihood of rejecting Ho also increases. (If fo = fe there is no difference; the more they differ, the more likely the result is significant.)

A chi-square to see if political affiliation (Republican, Democrat, Independent) is related to religious affiliation (Protestant, Catholic, Jewish, non-denominational) is a:

3 x 4 chi-square

The null hypothesis in the chi-square test for independence is that

the variables are independent

If we found that position on abortion depended on gender, what would be the proper citation?

χ²(1, N = 100) = 4.22, p < .05
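
The χ² statistic behind a citation like that compares observed and expected frequencies cell by cell; a sketch with a made-up 2 x 2 table:

```python
def chi_square(table):
    # Chi-square test for independence from a table of observed frequencies (fo).
    row_totals = [sum(row) for row in table]
    col_totals = [sum(col) for col in zip(*table)]
    n = sum(row_totals)
    chi2 = 0.0
    for i, row in enumerate(table):
        for j, fo in enumerate(row):
            fe = row_totals[i] * col_totals[j] / n  # expected frequency (fe)
            chi2 += (fo - fe) ** 2 / fe
    df = (len(table) - 1) * (len(table[0]) - 1)
    return chi2, df

chi2, df = chi_square([[10, 20], [20, 10]])  # made-up gender-by-position counts
print(round(chi2, 2), df)
```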

Adjusted standardized residual:

In chi-square research this is the difference between the observed frequency (fo) and the expected frequency (fe) for a cell, divided by the standard error. We can ask SPSS for this number for each cell and use it to decide which cells we need to talk about and which cells we can ignore in our interpretation. An ASR > 1.96 is significant.

Cause in correlation:

Correlation shows the relation between two sets of scores (two DVs). We can only determine cause with a true experiment, manipulating an IV and measuring its effects on the DV, so we cannot NORMALLY draw cause-and-effect conclusions with correlation.

Cell Mean:

The mean for any unique combination of the levels of the IVs in a factorial design ANOVA. Cell means are what we graph and we might polish them to make an interaction stand out.

Correlational Method:

A research method in which the relationship between at least two dependent variables is measured. Usually neither of the variables can be manipulated or controlled; almost always both variables are dependent variables. When one of the variables is an independent variable, we say we have a quasi-experimental design, and we do regression analysis.

Factorial design:

A design with more than one independent variable. This phrase comes up most often in reference to analysis of variance, where a design and analysis involving two independent variables is called a factorial design and uses a factorial analysis of variance.

Interaction:

The effect of each independent variable across levels of the other independent variable. This occurs in two-way ANOVA; in the summary table the interaction line appears as IV1*IV2. When the means of each group are graphed, an interaction shows up as non-parallel lines. When writing about the interaction we have to say how the variables in combination influence the dependent variable.

Intercept:

The predicted value for Y when X equals 0 in a regression equation, or the point where the regression line crosses (or intercepts) the Y-axis. If the regression equation has the general linear form Y = a + bX, the intercept is "a".

Linear relation:

A relation between two dependent variables best fit by a straight line. It is the relation detected by Pearson's correlation, and it is represented on the scatter plot as the regression line. If we claim there is a correlation between two variables we mean a linear relation; no correlation means no linear relation.

Main effect:

An effect of a single independent variable. This term shows up most often in analysis of variance. The result of a one-way ANOVA is a simple main effect because the study involves only one IV and the source table has only one F to reflect that main effect. In a two-way ANOVA there are two main effects and an interaction, so there are 3 F values in the source table, two of which are main effects, one for each IV.

Multiple regression:

A procedure in correlation that uses two or more predictor variables in a regression equation. If there are two predictor variables, a graph of the data is three-dimensional: somewhat hard to visualize, but we can try by using one-point perspective and representing some of the data points as larger and smaller bubbles.

Negative correlation:

Also called a negative relationship; an inverse relation between two variables where an increase in one variable is related to a decrease in the other. We could describe this as a teeter-totter relation: as one goes up the other goes down, and when one is high the other is low.

Observed frequencies:

The number of participants in each category in our sample. We symbolize this as fo, and our chi-square compares the observed frequencies to the expected frequencies.

Pearson's correlation:

The most common of the correlation statistics when both variables are measured on an interval or ratio scale. The letter associated with Pearson's correlation is r, and these values are often reported in a correlation matrix, a table that shows the r values for all pairs of variables tested. Pearson's r can range from 0 to 1 in magnitude and can be positive or negative.
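
Pearson's r can be sketched directly from its definition, as cross-products of deviations scaled by both SDs (the data below are made up to show a perfect positive and a perfect negative relation):

```python
from statistics import mean, stdev

def pearson_r(xs, ys):
    # Sum of cross-products of deviations, divided by (n - 1) and both SDs.
    mx, my = mean(xs), mean(ys)
    n = len(xs)
    return sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / ((n - 1) * stdev(xs) * stdev(ys))

print(round(pearson_r([1, 2, 3, 4, 5], [2, 4, 6, 8, 10]), 3))  # perfect positive relation
print(round(pearson_r([1, 2, 3, 4, 5], [10, 8, 6, 4, 2]), 3))  # perfect negative relation
```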