176 Cards in this Set

Constructs

Hypothetical attributes or mechanisms that help explain and predict behavior in a theory. Also known as hypothetical constructs.

Continuous Variable

A variable that can be divided into smaller units without limit.

Control Condition

A condition where the treatment is not administered.

Correlational Method

A research method that simply observes two existing variables to determine the nature of their relationship.

Data

Measurements or observations.

Data Set

A collection of measurements or observations.

Datum

A single measurement or observation, commonly called a score or raw score.

Dependent Variable

In an experiment, the variable that is observed for changes. (the scores)

Descriptive Statistics

Techniques that organize and summarize a set of data

Discrete Variable

A variable that exists in indivisible units.

Experimental Condition

A condition where the treatment is administered.

Experimental Method

A research method that manipulates one variable, observes a second variable for changes, and controls all other variables. The goal is to establish a cause-and-effect relationship.

Independent Variable

In an experiment, the variable that is manipulated by the researcher. (the treatment conditions)

Inferential Statistics

Techniques that use sample data to draw general conclusions about populations.

Interval Scale

An ordinal scale where all the categories are intervals with exactly the same width.

Lower Real Limit

The boundary that separates an interval from the next lower interval.

Nominal Scale

A measurement scale where the categories are differentiated only by qualitative names.

Nonequivalent Group Studies

A research study in which the different groups of participants are formed under circumstances that do not permit the researcher to control the assignment of individuals to groups; the groups are therefore considered nonequivalent.

Operational Definition

A procedure for measuring and defining a construct.

Ordinal Scale

A measurement scale consisting of a series of ordered categories.

Parameter

A characteristic that describes a population.

Population

The entire group of individuals that a researcher wishes to study.

Pre-post Study

Quasi-experimental and nonexperimental designs consisting of a series of observations made over time. The goal is to evaluate the effect of an intervening treatment or event by comparing observations made before versus after the treatment.

Quasi-independent variable

In a quasi-experimental or nonexperimental research study, the variable that differentiates the groups or conditions being compared. Similar to the independent variable in an experiment.

Ratio Scale

An interval scale where a value of zero indicates none (a complete absence) of the variable being measured.

Raw Score

An original, unaltered measurement.

Real Limits

The boundaries separating the intervals that define the scores for a continuous variable.

Sample

A group selected from a population to participate in a research study.

Sampling Error

The discrepancy between a statistic and a parameter.

Statistic

A characteristic that describes a sample.

Statistics

A value, usually a numerical value, that describes a sample. A statistic is usually derived from measurements of the individuals in the sample.

Upper Real Limit

The boundary that separates an interval from the next higher interval.

Variables

A characteristic that can change or take on different values.

Apparent Limits

The score values that appear as the lowest score and the highest score in an interval.

Axes

The two perpendicular lines that form a bar graph.

Bar Graph

A graph showing a bar above each score or interval so that the height of the bar corresponds to the frequency. A space is left between adjacent bars.

Class Interval

A group of scores in a grouped frequency distribution.

Frequency Distribution

A tabulation of the number of individuals in each category on the scale of measurement.

Grouped Frequency Distribution

A frequency distribution where scores are grouped into intervals rather than listed as individual values.

Histogram

A graph showing a bar above each score or interval so that the height of the bar corresponds to the frequency and width extends to the real limits.

Negatively Skewed Distribution

A distribution where the scores pile up on the right side and taper off to the left.

Normal

A specific shape that can be precisely defined by an equation.

Polygon

A graph consisting of a line that connects a series of dots. A dot is placed above each score or interval so that the height of the dot corresponds to the frequency.

Positively Skewed Distribution

A distribution where the scores pile up on the left side and taper off to the right.

Range

The distance from the upper real limit of the highest score to the lower real limit of the lowest score; the total distance from the absolute highest point to the lowest point in the distribution.

Relative Frequency

The proportion of the total distribution rather than the absolute frequency. Used for population distributions for which the absolute number of individuals is not known for each category.

Symmetrical Distribution

A distribution where the left-hand side is a mirror image of the right-hand side.

Tail(s) of a Distribution

A section on either side of a distribution where the frequency tapers down toward zero as the X values become more extreme.

Bimodal

A distribution with two modes.

Central Tendency

A statistical measure that identifies a single score (usually a central value) to serve as a representative for the entire group.

Line Graph

A display in which points connected by straight lines show several different means obtained from different groups or treatment conditions. Also used to show different medians, proportions, or other sample statistics.

Major Mode

The taller peak of two modes with unequal frequencies.

Median

The score that divides a distribution exactly in half.

Minor Mode

The shorter peak of two modes with unequal frequencies.

Mode

The score with the greatest frequency overall (major), or the greatest frequency within the set of neighboring scores (minor).

Multimodal

A distribution with more than two modes.

Weighted Mean

The average of two means, calculated so that each mean is weighted by the number of scores it represents.
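As a quick sketch of this formula (the means and sample sizes below are made up for illustration):

```python
# Weighted mean of two sample means, each weighted by its number of scores.
def weighted_mean(m1, n1, m2, n2):
    return (n1 * m1 + n2 * m2) / (n1 + n2)

# Hypothetical data -- Sample 1: M = 6 from n = 12; Sample 2: M = 7 from n = 8
print(weighted_mean(6, 12, 7, 8))  # 6.4
```

Note that the result (6.4) is pulled toward the mean of the larger sample rather than sitting at the simple midpoint (6.5).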

Biased Statistic

A statistic that, on average, consistently tends to overestimate (or underestimate) the corresponding population parameter.

Degrees of Freedom (df)

Degrees of freedom = df = n – 1, measures the number of scores that are free to vary when computing SS for sample data. The value of df also describes how well a t statistic estimates a z-score.

Deviation Score

The distance (and direction) from the mean to a specific score. Deviation = X – μ.
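A minimal illustration with a hypothetical population of four scores:

```python
# Deviation score: D = X - mu, the distance and direction from the mean.
scores = [2, 4, 6, 8]              # hypothetical population
mu = sum(scores) / len(scores)     # mu = 5.0
deviations = [x - mu for x in scores]
print(deviations)       # [-3.0, -1.0, 1.0, 3.0]
print(sum(deviations))  # 0.0 -- deviations from the mean always sum to zero
```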

Error Variance

Unexplained, unsystematic differences that are not caused by any known factor.

Mean Squared Deviation

The mean squared deviation equals the population variance. Variance is the average squared distance from the mean.

Population Standard Deviation (σ)

The square root of the population variance; a measure of the standard distance from the mean.

Population Variance (σ²)

The average squared distance from the mean; the mean of the squared deviations.

Sample Standard Deviation (s)

The square root of the sample variance.

Sample Variance (s²)

The sum of the squared deviations divided by df = n – 1. An unbiased estimate of the population variance.

Sum of Squares (SS)

The sum of the squared deviation scores.
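The definitions of SS, population variance, and sample variance tie together in one short sketch (the scores are hypothetical):

```python
# SS = sum of squared deviations; sigma^2 = SS/N for a population;
# s^2 = SS/(n - 1) for a sample (dividing by df makes it an unbiased estimate).
scores = [1, 6, 4, 3, 8, 7, 6]    # hypothetical data, n = 7
n = len(scores)
mean = sum(scores) / n            # 5.0
ss = sum((x - mean) ** 2 for x in scores)
samp_var = ss / (n - 1)           # sample variance, s^2
print(ss, samp_var)               # 36.0 6.0
```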

Unbiased Statistic

A statistic that, on average, provides an accurate estimate of the corresponding population parameter. The sample mean and sample variance are unbiased statistics.

Variability

A measure of the degree to which the scores in a distribution are clustered together or spread apart.

Z-score

A standardized score with a sign that indicates direction from the mean (+ above μ and – below μ), and a numerical value equal to the distance from the mean measured in standard deviations.
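For example, assuming a hypothetical distribution with μ = 100 and σ = 15:

```python
# z = (X - mu) / sigma: signed distance from the mean in standard deviations
def z_score(x, mu, sigma):
    return (x - mu) / sigma

print(z_score(130, 100, 15))  # 2.0 -- two standard deviations above the mean
print(z_score(85, 100, 15))   # -1.0 -- one standard deviation below the mean
```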

Z-score Transformation

A transformation that changes raw scores (X values) into z-scores.

Standardized Distribution

An entire distribution that has been transformed to create predetermined values for μ and σ.

Standardized Score

A score that has been transformed into a standard form.

Independent Random Sample

Requires that each individual has an equal chance of being selected and that the probability of being selected stays constant from one selection to the next if more than one individual is selected.

Percentile

A score that is identified by the percentage of the distribution that falls below its value.

Percentile Rank

The percentage of a distribution that falls below a specific score.

Probability

Probability is defined as a proportion, a specific part out of the whole set of possibilities.

Random Sample

A sample obtained using a process that gives every individual an equal chance of being selected and keeps the probability of being selected constant over a series of selections.

Sampling with Replacement

A sampling technique that returns the current selection to the population before the next selection is made. A required part of random sampling.

Unit Normal Table

A table listing proportions corresponding to each z-score location in a normal distribution.

Central Limit Theorem

A mathematical theorem that specifies the characteristics of the distribution of sample means.

Distribution of Sample Means

The set of sample means from all the possible random samples of a specific size (n) that can be selected from a population.

Expected Value of M

The mean of the distribution of sample means. The average of the M values.

Law of Large Numbers

In the field of statistics, the principle that states that the larger the sample size, the more likely it is that values obtained from the sample are similar to the actual values for the population.

Sampling Distribution

A distribution of statistics (as opposed to a distribution of scores). The distribution of sample means is an example of a sampling distribution.

Standard Error of M

The standard deviation of the distribution of sample means. The standard distance between a sample mean and the population mean.
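A sketch of the defining formula, σ_M = σ/√n, with hypothetical values:

```python
# Standard error of M: sigma divided by the square root of the sample size
import math

def standard_error(sigma, n):
    return sigma / math.sqrt(n)

print(standard_error(10, 25))   # 2.0
print(standard_error(10, 100))  # 1.0 -- larger samples give a smaller error
```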

Alpha Level

In a hypothesis test, the criterion for statistical significance that defines the maximum probability that the research result was obtained simply by chance. Also known as level of significance.

Alternative Hypothesis

The alternative hypothesis states that there is an effect, there is a difference, or there is a relationship.

Beta

Beta is the probability of a Type II error.

Cohen's d

A standard measure of effect size computed by dividing the sample mean difference by the sample standard deviation.
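As a sketch with hypothetical numbers:

```python
# Cohen's d = (sample mean - population mean) / sample standard deviation
def cohens_d(m, mu, s):
    return (m - mu) / s

# Hypothetical: treated sample mean 53, untreated population mean 50, s = 15
print(cohens_d(53, 50, 15))  # 0.2 -- a small effect by Cohen's conventions
```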

Critical Region

The set of outcomes that are very unlikely to be obtained if the null hypothesis is true. The term very unlikely is defined by the alpha level.

Directional Test

A hypothesis test that includes a directional prediction in the statement of the hypotheses and places the critical region entirely in one tail of the distribution.

Effect Size

A measure of the size of the treatment effect that is separate from the statistical significance of the effect

Hypothesis Test

A statistical procedure that uses data from a sample to test a hypothesis about a population.

Null Hypothesis

The null hypothesis states that there is no effect, no difference, or no relationship.

One Tailed Test

A directional test is a hypothesis test that includes a directional prediction in the statement of the hypotheses and places the critical region entirely in one tail of the distribution.

Power

The probability that the hypothesis test will reject the null hypothesis when there actually is a treatment effect.

Significant

A result is said to be this, if it is very unlikely to occur when the null hypothesis is true. That is, the result is sufficient to reject the null hypothesis. Thus, a treatment has a significant effect if the decision from the hypothesis test is to reject H0.

Test Statistic

A statistic that summarizes the sample data in a hypothesis test. It is used to determine whether or not the data are in the critical region.

Type I Error

Rejecting a true null hypothesis: concluding that a treatment has an effect when it actually does not.

Type II Error

Failing to reject a false null hypothesis: the test fails to detect a real treatment effect.

t Distribution

The distribution of t statistics is symmetrical and centered at zero like a normal distribution. A t distribution is flatter and more spread out than the normal distribution, but approaches a normal shape as df increases.

t Statistic

A statistic used to summarize sample data in situations where the population standard deviation is not known. It is similar to a z-score for a sample mean, but the t statistic uses an estimate of the standard error.

Confidence Interval

An interval estimate that is described in terms of the level (percentage) of confidence in the accuracy of the estimation.

Estimated Standard Error

An estimate of the standard error that uses the sample variance (or standard deviation) in place of the corresponding population value.

Percentage of Variance Accounted for by the Treatment (r²)

A measure of effect size that determines what portion of the variability in the scores can be accounted for by the treatment effect.

Between-Subjects Research Design

An alternative term for an independent-measures design.

Homogeneity of Variance

An assumption that the two populations from which the samples were obtained have equal variances.

Independent Measures t Statistic

In a between-subjects design, a hypothesis test that evaluates the statistical significance of the mean difference between two separate groups of participants.

Independent Measures Research Design

A research design that uses a separate sample for each treatment condition or each population being compared.

Pooled Variance

A single measure of sample variance that is obtained by averaging two sample variances. It is a weighted mean of the two variances.
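A sketch of this df-weighted average, with hypothetical SS and df values:

```python
# Pooled variance: (SS1 + SS2) / (df1 + df2), a weighted mean of s1^2 and s2^2
def pooled_variance(ss1, df1, ss2, df2):
    return (ss1 + ss2) / (df1 + df2)

# Hypothetical samples: SS = 50 with df = 5, and SS = 30 with df = 5
print(pooled_variance(50, 5, 30, 5))  # 8.0, halfway between s^2 = 10 and s^2 = 6
```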

Repeated Measures Research Design

A research design in which the different groups of scores are all obtained from the same group of participants.

Within-Subjects Research Design

A research design in which the different groups of scores are all obtained from the same group of participants. Also known as repeated-measures design.

Difference Scores

The difference between two measurements obtained for a single subject. D = X2 – X1

Individual Differences

The naturally occurring differences from one individual to another that may cause the individuals to have different scores.

Matched-Subjects Design

A research study where the individuals in one sample are matched one-to-one with the individuals in a second sample. The matching is based on a variable considered relevant to the study.

Order Effects

The effects of participating in one treatment that may influence the scores in the following treatment.

Related-Samples Design

A design (repeated-measures or matched-subjects) in which the scores in one set are directly related, one-to-one, with the scores in the second set. The two designs are statistically equivalent.

Repeated Measures Design

A research design that uses the same group of subjects in all of the treatment conditions that are being compared.

F-Ratio

The test statistic for analysis of variance that compares the differences (variance) between treatments with the differences (variance) that are expected by chance.

Analysis of Variance (ANOVA)

A hypothesis-testing procedure that is used to evaluate mean differences between two or more treatments (or populations).

ANOVA Summary Table

A table that shows the source of variability (between treatments, within treatments, and total variability), SS, df, MS, and F.

Between-Treatment Variance

Values used to measure and describe the differences between treatments (mean differences).

Distribution of F-ratios

All of the possible F values when H0 is true.

Error Term

For ANOVA, the denominator of the F-ratio is called the error term. The error term provides a measure of the variance caused by random, unsystematic differences. When the treatment effect is zero (H0 is true), the error term measures the same sources of variance as the numerator of the F-ratio, so the value of the F-ratio is expected to be nearly equal to 1.00.

Eta Squared

A measure of effect size based on the percentage of variance accounted for by the sample mean differences.

Experimentwise Alpha Level

The risk of a Type I error that accumulates as you do more and more separate tests.

Factor

In analysis of variance, an independent variable (or quasi-independent variable) is called this.

Levels

In an experiment, the different values of the independent variable selected to create and define the treatment conditions. In other research studies, the different values of a factor.

Mean Square (MS)

In analysis of variance, a sample variance is called this, indicating that variance measures the mean of the squared deviations.

Pairwise Comparisons

Going back through the data to compare the individual treatments two at a time.

Post Hoc Tests

A test that is conducted after an ANOVA with more than two treatment conditions where the null hypothesis was rejected. The purpose of post hoc tests is to determine exactly which treatment conditions are significantly different.

Scheffe Test

A test that uses an F-ratio to evaluate the significance of the difference between any two treatment conditions. One of the safest of all possible post hoc tests.

Testwise Alpha Level

The risk of a Type I error, or alpha level, for an individual hypothesis test.

Tukey's HSD Test

A test that allows you to compute a single value that determines the minimum difference between treatment means that is necessary for significance. A commonly used post hoc test.

Within-Treatments Variance

The differences that exist inside each treatment condition.

Between-Subjects Variance

The differences that exist from one subject to another.

Between-Treatments Variance

Values used to measure and describe the differences between treatments (mean differences).

Cells

Each box in a two-dimensional table (matrix).

Error Variance

Unexplained, unsystematic differences that are not caused by any known factor.

Interaction

Mean differences that cannot be explained by the main effects of the two factors. An interaction exists when the effects of one factor depend on the levels of the second factor.

Main Effect

The overall mean differences between the levels of one factor. When the data are organized in a matrix, the main effects are the mean differences among the rows (or among the columns).

Matrix

A two-dimensional table is a matrix and each box in the table is called a cell.

Two-Factor Design

A research study examining two factors (two independent or quasi-independent variables).

Y-Intercept

The value of Y when X = 0. In the linear equation, the value of a.

Analysis of Regression

Evaluating the significance of a regression equation by computing an F-ratio comparing the predicted variance (MS) in the numerator and the unpredicted variance (MS) in the denominator.

Coefficient of Determination

The degree to which the variability in one variable can be predicted by its relationship with another variable, measured by r².

Correlation

A statistical value that measures and describes the direction and degree of relationship between two variables. The sign (+/–) indicates the direction of the relationship. The numerical value (0.0 to 1.0) indicates the strength or consistency of the relationship. The type (Pearson or Spearman) indicates the form of the relationship. Also known as correlation coefficient.

Correlational Matrix

A table that shows the results from multiple correlations and uses footnotes to indicate which correlations are significant.

Dichotomous Variable

A variable with only two values. Also called a binomial variable.

Linear Equation

An equation of the form Y = bX + a expressing the relationship between two variables X and Y.

Linear Relationship

A relationship between two variables where a specific increase in one variable is always accompanied by a specific increase (or decrease) in the other variable.

Negative Correlation

A relationship between two variables where increases in one variable tend to be accompanied by decreases in the other variable.

Partial Correlation

A partial correlation measures the relationship between two variables while controlling the influence of a third variable by holding it constant.

Pearson Correlation

A measure of the direction and degree of linear relationship between two variables.

Perfect Correlation

A relationship where the actual data points perfectly fit the specific form being measured. For a Pearson correlation, the data points fit perfectly on a straight line.

Phi-coefficient

A correlation between two variables, both of which are dichotomous.

Point-biserial Correlation

A correlation between two variables where one of the variables is dichotomous.

Positive Correlation

A relationship between two variables where increases in one variable tend to be accompanied by increases in the other variable.

Regression

A statistical technique used for predicting one variable from another. The statistical process of finding the linear equation that produces the most accurate predicted values for Y using one predictor variable, X.

Regression Equation for Y

The equation for the best-fitting straight line to describe the relationship between X and Y.

Regression Line

The statistical technique for finding the best-fitting straight line for a set of data is called regression, and the resulting straight line is called the regression line.

Slope

The amount of change in Y for each 1-point increase in X. The value of b in the linear equation.
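The slope and Y-intercept definitions combine into the least-squares sketch below, using b = SP/SSX and a = MY − bMX (the data points are hypothetical):

```python
# Least-squares regression: Y-hat = bX + a, with b = SP/SSx and a = My - b*Mx
def regression(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sp = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    ss_x = sum((x - mx) ** 2 for x in xs)
    b = sp / ss_x        # slope: change in Y per 1-point increase in X
    a = my - b * mx      # Y-intercept: predicted Y when X = 0
    return b, a

b, a = regression([1, 2, 3, 4], [3, 5, 7, 9])  # data lie exactly on Y = 2X + 1
print(b, a)  # 2.0 1.0
```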

Spearman Correlation

A correlation calculated for ordinal data. Also used to measure the consistency of direction for a relationship.

Standard Error of Estimate

A measure of the average distance between the actual Y values and the predicted values from the regression equation.

Sum of Products (SP)

A measure of the degree of covariability between two variables; the degree to which they vary together.
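SP plugs directly into the Pearson correlation, r = SP/√(SSX·SSY); a minimal sketch with hypothetical data:

```python
# Sum of products SP and the Pearson correlation r = SP / sqrt(SSx * SSy)
import math

def pearson_r(xs, ys):
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sp = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    ss_x = sum((x - mx) ** 2 for x in xs)
    ss_y = sum((y - my) ** 2 for y in ys)
    return sp / math.sqrt(ss_x * ss_y)

print(pearson_r([1, 2, 3, 4], [2, 4, 6, 8]))  # 1.0 -- a perfect positive correlation
```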

Chi-Square Distribution

The theoretical distribution of chi-square values that would be obtained if the null hypothesis is true.

Chi-Square Statistic

A test statistic that evaluates the discrepancy between a set of observed frequencies and a set of expected frequencies.
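The formula χ² = Σ(fo − fe)²/fe as a sketch (the frequencies are hypothetical):

```python
# Chi-square: sum of (observed - expected)^2 / expected over all categories
def chi_square(observed, expected):
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Hypothetical goodness-of-fit data: 40 observations over four equal categories
print(chi_square([15, 5, 10, 10], [10, 10, 10, 10]))  # 5.0
```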

Chi-Square Test for Goodness-of-Fit

A test that uses the proportions found in sample data to test a hypothesis about the corresponding proportions in the general population.

Chi-Square Test for Independence

A test that uses the frequencies found in sample data to test a hypothesis about the relationship between two variables in the population.

Cramér’s V

A modification of the phi-coefficient to be used when one or both variables consist of more than two categories.

Distribution-free Test

Also called a nonparametric test. A test that does not test hypotheses about parameters or make assumptions about parameters. The data usually consist of frequencies.

Expected Frequencies

Hypothetical, ideal frequencies that are predicted from the null hypothesis.

Nonparametric Test

A test that does not test hypotheses about parameters or make assumptions about parameters. The data usually consist of frequencies.

Observed Frequencies

The actual frequencies that are found in the sample data.

Parametric Test

A test evaluating hypotheses about population parameters and making assumptions about parameters. Also, a test requiring numerical scores.