44 Cards in this Set

  • Front
  • Back
Independent Variables
Variables that the researcher controls. They can be either manipulated or classifying variables. (e.g., dosage level of a medication)
Dependent Variable
A measure of the effect of the independent variable. (e.g., performance on a task, or arithmetic achievement given a manipulative)
Nominal Scale
Scale that classifies objects into categories based on some defined characteristic; the categories have no logical order. (e.g., religion or car make/model; there is no 'rank' of best to worst)
Ordinal Scale
Classifies objects or characteristics and also places them in a logical order. (e.g., grades: A, B, C, D, F are both classified and ordered)
Interval Scale
Third level in the hierarchy of scales. Equal differences on the scale represent equal differences in the characteristic being measured. (e.g., temperature: the difference between 77 and 78 degrees is the same as the difference between 81 and 82.) The zero point is arbitrary, just another point on the scale. IQ scores are also interval.
Ratio Scale
Highest level in the hierarchy of scales. There is a known or true zero point, so ratio statements about the characteristic are meaningful: a bag of apples weighing 60 pounds weighs twice as much as a bag weighing 30 pounds, whereas 60 degrees is not twice as hot as 30 degrees (an interval scale) because temperature has no true zero point. Other examples: number of children in a family (0 is a true starting point) or number of fatal accidents.
Descriptive vs. inferential statistics
Descriptive statistics are used to classify and summarize numerical data (that is, to describe it), while inferential statistics make generalizations about a population by studying a sample and applying probability-based procedures.
histogram
a bar graph that depicts the frequency of scores in each class interval by the lengths of the bars.
cumulative frequency distribution
constructed by adding the frequency of scores in any class interval to the frequencies of all the class intervals BELOW it on the scale of measurement.
Skewed
A positively skewed distribution has its peak toward the low end of the scale with a long tail toward the high scores; a negatively skewed distribution has its peak toward the high end with a long tail toward the low scores.
Central Tendencies
Indicators of the average or typical score: the mean, median, and mode.
Percentile
The point in a distribution at or below which a given percent of scores is found. For example, the 28th percentile of a distribution of scores is the point at or below which 28 percent of scores fall. So if a student takes the SAT and lands at the 28th percentile, only 28% of the students taking the test scored at or below that point.
Computing percentile
Let's say someone wants to find the 75th percentile (P75) for a class of 180 freshmen. Multiply 0.75 by 180 to get 135; the 135th score from the bottom of the ordered distribution marks P75.
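
A minimal Python sketch of this rank-based method, using made-up scores; the function name and the rounding rule are illustrative, and textbooks also give interpolation formulas that yield slightly different values:

    def percentile_rank_method(scores, k):
        # P_k is taken as the score at rank round(k/100 * N) in the sorted data.
        ordered = sorted(scores)
        position = round(k / 100 * len(ordered))        # e.g. 0.75 * 180 = 135
        position = max(1, min(position, len(ordered)))  # keep the rank in range
        return ordered[position - 1]                    # 1-based rank -> 0-based index

    if __name__ == "__main__":
        import random
        random.seed(0)
        freshmen = [random.randint(200, 800) for _ in range(180)]  # hypothetical scores
        print("P75 =", percentile_rank_method(freshmen, 75))
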
Percentile Rank
The percent of scores less than or equal to that score. For example, the percentile rank of 63 is the percent of scores in the distribution that falls at or below a score of 63. It is a point on the percentile scale, whereas a percentile is a score, a point on the original measurement scale.
Median
It is the 50th percentile, or the point on the scale of measurement below which 50% of scores fall. It is a measure of central tendency (along with the mean and the mode).
Two important properties of the mean:
1. The sum of deviations from the mean is zero. 2. The sum of squared deviations from the mean is a minimum.
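
A quick Python check of both properties on a small made-up data set (the half-unit offsets are arbitrary, chosen only to show that the minimum sits at the mean):

    xs = [2, 4, 4, 5, 7, 9, 11]
    mean = sum(xs) / len(xs)

    # Property 1: deviations from the mean sum to zero (up to floating-point error).
    print(sum(x - mean for x in xs))

    # Property 2: the sum of squared deviations is smaller around the mean
    # than around any other point.
    def ss(center):
        return sum((x - center) ** 2 for x in xs)

    print(ss(mean) < ss(mean - 0.5) and ss(mean) < ss(mean + 0.5))  # True
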
Interquartile Range
The difference between the 25th and the 75th percentiles.
Variance
σ² (sigma squared): the population variance. Always zero or greater; when it is zero, all of the scores are the same. It is the average of the squared deviations around the mean (the sum of squares divided by N). Also described as a measure of the variation/dispersion of scores in a distribution, i.e., the spread of scores around the middle of the distribution.
s² (s squared)
Sample variance: the variance computed from a sample (with the sum of squared deviations divided by n − 1 when it is used to estimate the population variance).
Standard Deviation
σX: the square root of the variance. A measure of the variation/dispersion of scores in a distribution.
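
A short Python sketch contrasting the population and sample forms of the variance and standard deviation, on a made-up data set:

    import math
    import statistics

    xs = [2, 4, 4, 4, 5, 5, 7, 9]
    mean = sum(xs) / len(xs)

    pop_var = sum((x - mean) ** 2 for x in xs) / len(xs)         # sigma^2: divide by N
    samp_var = sum((x - mean) ** 2 for x in xs) / (len(xs) - 1)  # s^2: divide by n - 1

    print(pop_var, math.sqrt(pop_var))                      # population variance and sigma
    print(math.isclose(samp_var, statistics.variance(xs)))  # matches the library's s^2
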
Z-Score
Also known as a 'standard score'. Z-scores use the standard deviation as the unit of measure. A z-score is computed by subtracting the mean from the raw score and dividing the result by the standard deviation: z = (X − X̄)/SD.
standard score
A transformed score that indicates the number of standard deviations a corresponding raw score is above or below the mean = Z SCORE
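
A minimal Python sketch of the conversion, using the population standard deviation and made-up scores:

    import statistics

    scores = [55, 60, 65, 70, 75, 80, 85]
    mean = statistics.mean(scores)
    sd = statistics.pstdev(scores)          # population standard deviation

    # z = (X - mean) / SD: how many SDs each raw score sits above or below the mean.
    z_scores = [(x - mean) / sd for x in scores]
    print([round(z, 2) for z in z_scores])
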
Normal distribution
Not a single distribution, but a family of distributions, each of which is determined by its mean and standard deviation. It is unimodal. It is continuous. It is asymptotic to the x-axis.
central limit theorem
a theorem that provides the mathematical basis for using the normal distribution as the sampling distribution of all sample means of a given sample size. The theorem states that this distribution of sample means (1) is approximately normal, (2) has a mean equal to μ, and (3) has a variance equal to σ²/n.
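
A simulation sketch of the theorem in Python: sample means drawn from a non-normal (uniform) population cluster around μ with a standard deviation close to σ/√n (the standard error of the mean). The population, sample size, and number of replications here are arbitrary choices:

    import random
    import statistics

    random.seed(1)
    n = 30                                    # sample size
    sample_means = [
        statistics.mean(random.uniform(0, 10) for _ in range(n))
        for _ in range(5000)
    ]

    mu = 5.0                                  # mean of Uniform(0, 10)
    sigma = (10 ** 2 / 12) ** 0.5             # SD of Uniform(0, 10)
    print(round(statistics.mean(sample_means), 2))   # close to mu
    print(round(statistics.stdev(sample_means), 3),  # close to sigma / sqrt(n)
          round(sigma / n ** 0.5, 3))
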
Alpha / level of significance
The probability of making a Type I error, i.e., of rejecting the null hypothesis when it is actually true.
Linear regression line
The mathematical equation of a straight line fitted to the data: y = Bx + A, where B is the slope and A is the intercept.
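
A small Python sketch of the least-squares estimates for that line, B = Σ(x − x̄)(y − ȳ) / Σ(x − x̄)² and A = ȳ − B·x̄, on made-up data:

    xs = [1, 2, 3, 4, 5]
    ys = [2.1, 3.9, 6.2, 8.1, 9.8]

    mx = sum(xs) / len(xs)
    my = sum(ys) / len(ys)

    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)

    B = sxy / sxx                 # slope
    A = my - B * mx               # intercept
    print(f"y = {B:.2f}x + {A:.2f}")   # approximately y = 1.96x + 0.14
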
Correlation
The nature, or extent, of the relationship between two variables. The Pearson r indexes the degree of relationship/correlation.
covariance
The average of the cross products of deviation scores
Correlation Coefficient
An index of the relationship between two variables
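
A Python sketch tying these two cards together: the covariance as the average cross product of deviation scores, and the Pearson r as that covariance divided by the product of the two standard deviations (population forms, made-up data):

    import statistics

    xs = [2, 4, 6, 8, 10]
    ys = [1, 3, 5, 9, 12]

    mx, my = statistics.mean(xs), statistics.mean(ys)
    n = len(xs)

    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / n
    r = cov / (statistics.pstdev(xs) * statistics.pstdev(ys))
    print(round(cov, 2), round(r, 3), round(r ** 2, 3))  # covariance, r, shared variance r^2
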
Deviation
The distance between a given score and the mean
Standard error of the mean
The standard deviation of the sampling distribution of sample means
standard error of the estimate (S.E.E)
Used with conditional distributions in regression to make probability statements about the mean of the Y scores for a given X score
Error of prediction
The difference between an individual's actual Y score and the Y score predicted from X. Used in regression to make probability statements about individual predicted Y scores for a given X score.
Four steps to Hypothesis Testing
(1) tentative conclusion, (2) hypothesis/prediction, (3) data collection (experiment and data analysis), (4) inferences; then the cycle begins again with step 1.
Intact group study
No manipulation is involved; the group is studied as it already exists, unlike an experimental group.
mutually exclusive events
Events that cannot occur at the same time; in other words, they cannot overlap. P(X or Y) = P(X) + P(Y).
Statistically independent events (not mutually exclusive)
P(X and Y) = P(X) × P(Y). For example, the probability of getting a heads and then a tails on two coin flips is 0.5 × 0.5 = 0.25.
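
A simulation sketch of both rules in Python (the die faces and coin flips are arbitrary examples):

    import random

    random.seed(2)
    trials = 100_000

    # Addition rule for mutually exclusive events:
    # rolling a 1 OR a 2 on one die -> 1/6 + 1/6 = 1/3.
    rolls = [random.randint(1, 6) for _ in range(trials)]
    print(sum(r in (1, 2) for r in rolls) / trials)       # close to 0.333

    # Multiplication rule for independent events:
    # heads on the first flip AND tails on the second -> 0.5 * 0.5 = 0.25.
    flips = [(random.random() < 0.5, random.random() < 0.5) for _ in range(trials)]
    print(sum(first and not second for first, second in flips) / trials)  # close to 0.25
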
permutations
Ordered arrangements of objects: ORDER MATTERS. nPk = n! / (n − k)!
Combination
nCk: selections in which order does not matter. nCk = n! / (k!(n − k)!)
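
A two-line Python sketch contrasting the two counts, using the standard library (Python 3.8+; the values of n and k are arbitrary):

    import math

    n, k = 5, 3
    print(math.perm(n, k))   # 60 ordered arrangements of 3 items taken from 5
    print(math.comb(n, k))   # 10 unordered selections of 3 items taken from 5
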
Binomial distribution
Shows the probability of X successes in N trials. The distribution is generated by taking the binomial (X + Y) and raising it to the Nth power.
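
A short Python sketch of one term of that expansion, P(k successes in n trials) = C(n, k)·p^k·(1 − p)^(n − k), with an arbitrary fair-coin example:

    import math

    def binom_pmf(k, n, p):
        # Probability of exactly k successes in n independent trials.
        return math.comb(n, k) * p ** k * (1 - p) ** (n - k)

    # Exactly 3 heads in 5 fair-coin flips: 10 / 32 = 0.3125.
    print(binom_pmf(3, 5, 0.5))
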
Pearson product-moment correlation coefficient
The index of the linear relationship between two variables, called the Pearson r
Chance of making a Type I error
Alpha: Rejecting the Null Hypothesis (Ho) when it is true.
Type II Error
Beta: failing to reject Ho (the null hypothesis) when it is false.
R-Squared: Coefficient of Determination
The square of the correlation coefficient (r^2); a measure of the shared variance.