Famous people

1. Wilhelm Wundt
2. Hermann Ebbinghaus
3. Oswald Kulpe
4. James McKeen Cattell
5. Binet and Simon
6. Lewis Terman
Famous people

1. Wilhelm Wundt: First psych lab. Basically did a cultural psychology too. Believed a mental image MUST come with a thought.

2. Hermann Ebbinghaus: Showed HIGHER mental processes can be studied with experiments.

3. Oswald Kulpe: Believed there IS imageless thought.

4. James McKeen Cattell: Brought mental testing to the US.

5. Binet and Simon: The first IQ test. Introduced MENTAL AGE.

6. Lewis Terman: Made the Stanford-Binet test, the US version of the Binet-Simon.
Hypothesis, Variable, Operational Definition, IV/ DV.
Hypothesis

1. Hypothesis: A tentative, testable explanation of the relationship between two variables.
2. Variable: A factor that varies in amount or kind and can be measured.
3. Operational Definitions: How you'll measure the variable.

4. IV: The variable whose EFFECT is being studied.
5. DV: The variable EXPECTED to CHANGE due to variations in the IV.
IV: Levels vs. Numbers
IV: there's a difference between the NUMBER of IVs and the LEVELS of an IV.

(ex. one IV: protein, with two levels: low and high)
(ex. two IVs: IV 1 = protein, IV 2 = time of day)
Research Types:
What's a naturalistic observation? Correlational study? Quasi- Experiment ? True experiment?
Research Types:

1. Naturalistic (field study): Researcher does NOT intervene, measures behaviors as they presently occur.
2. Correlational Study: Researcher does NOT manipulate the IV, just measures the relationship between variables.
3. Quasi-Experiment: IV manipulated, NO random assignment
4. True Experiment: IV manipulated, subjects randomly assigned.
Populations and Samples
(what's a population? A representative sample? A random sample? A stratified random sample?)
Populations and Samples

1. Population: The group the researcher wishes to GENERALIZE the findings to.

2. Representative Sample: Sample is a mini version of the population.

3. Random Sample: Every population member has an EQUAL CHANCE of being selected for the sample.

4. Stratified Random Sample: Relevant subgroups of the population are randomly sampled in proportion to size.
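A quick Python sketch of how a stratified random sample works; the strata, group sizes, and sample size below are all made up for illustration:

```python
# Hypothetical example: sample 50 students from a population of 1,000,
# drawing from each class year in proportion to its share of the population.
import random

population = {
    "freshmen":   list(range(400)),   # 400 freshmen
    "sophomores": list(range(300)),   # 300 sophomores
    "juniors":    list(range(200)),   # 200 juniors
    "seniors":    list(range(100)),   # 100 seniors
}
total = sum(len(members) for members in population.values())
sample_size = 50

sample = []
for stratum, members in population.items():
    share = round(sample_size * len(members) / total)  # proportional share of the sample
    sample.extend(random.sample(members, share))        # random draw WITHIN the stratum

print(len(sample))  # 50, split 20 / 15 / 10 / 5 across the four strata
```

A plain random sample would just be one random.sample over the whole population; the stratified version guarantees each subgroup shows up in proportion to its size.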
Between-Subjects Design
Between-Subjects Design: Each subject is only exposed to ONE LEVEL of each IV. Subjects are assigned randomly to groups. Subjects in a given group don't get the same level of IV as people in other groups.

ex. Some people in High-Protein Group, some in Low-Protein group.
Matched-Subjects Design:
Matched-Subjects Design:
When you want to make sure both groups MATCH on a certain CONTROLLED VARIABLE, you can pair people (based on your controlled variable), then assign them RANDOMLY to one group or another.
This ensures BOTH GROUPS are equal on the MATCHING VARIABLE.
Within-Subjects Design
Within-Subjects Design (aka Repeated Measures Design):

The subject's OWN PERFORMANCE is the basis of comparison. It's matching subjects on every variable at the same time.

So: Each subject is exposed to more than one condition, and the researcher can SEPARATE the EFFECTS of individual differences in one thing from the effects of the IV.

Problem: What about practice effects from taking a test twice? Or boredom?
You COUNTERBALANCE: Assign half of your group to the first IV level first, and the other half to the second IV level first.
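A rough Python sketch of that counterbalancing step, using made-up subject IDs and the protein example's two conditions:

```python
# Counterbalancing a within-subjects design: half the subjects get the
# high-protein condition first, the other half get low-protein first,
# so practice and boredom effects are spread evenly across conditions.
import random

subjects = list(range(20))       # 20 hypothetical subject IDs
random.shuffle(subjects)         # randomly decide who gets which order

half = len(subjects) // 2
orders = {}
for s in subjects[:half]:
    orders[s] = ["high-protein", "low-protein"]   # level A first
for s in subjects[half:]:
    orders[s] = ["low-protein", "high-protein"]   # level B first

print(orders)
```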
Confounding Variables
Confounding Variables: Unintended IV's. Like, other possible causes of your finding outside of YOUR IV.
So:

Control Group: They don't get any treatment.
Experimental Group: Gets the treatment.
Nonequivalent Group Design
Nonequivalent Group Design: The Control group is NOT NECESSARILY similar to the experimental group because there's no random assignment. Like when you compare pre-existing classes in education research.
Potential Problems in Research Design

(What's experimenter bias? What's Double-blinding? Single blinding? What's Demand Characteristics? Placebo effect? Hawthorne Effect?
What's external validity?)
Potential Problems in Research Design:

Experimenter Bias: You may accidentally treat some of your groups differently than others because of your expectations.
Controlled by...
a. Double-Blinding: You and your subjects don't know which group got the IV.
b. Single-Blind: You know which group got the IV; the subjects don't.

Demand Characteristics: Any cues that suggest to the subject what you expect from them.
ex. Placebo Effect: A type of demand characteristic where an inert treatment (a placebo) has a beneficial effect on the subjects just because they expect it to.
May be remedied by CONTROL groups.

Hawthorne Effect: Tendency of people to act differently if they know they're being watched. May be remedied with control groups.

External Validity: How GENERALIZABLE the results of an experiment are. Like, to people in general or to life outside the lab...
Descriptive Stats (and inferential stats).
Descriptive Stats: Organizing, quantifying, summarizing a collection of ACTUAL observations.

Inferential Statistics: Generalizing beyond actual observations. Infer re: sample involved to the population of interest.
Frequency Distribution
Frequency Distribution: A graphic representation of how often each value occurs.
Measures of Central Tendency (mean, mode, median, outliers)
Measures of Central Tendency

1. Mode: The value that occurs most often. There CAN be more than one mode.
2. Median: The middle number. Average the two middle numbers if you have an even-numbered sample.
3. Mean: The average.
4. Outliers: Affect the mean the most.
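A tiny Python sketch with invented scores, showing all three measures and how one outlier drags the mean while barely moving the median:

```python
from statistics import mean, median, multimode

scores = [2, 3, 3, 4, 5, 6, 7]
print(mean(scores))       # 4.29... (the average)
print(median(scores))     # 4 (the middle number)
print(multimode(scores))  # [3] (most frequent value; there can be more than one)

scores_with_outlier = scores + [100]
print(mean(scores_with_outlier))    # ~16.25: the outlier yanks the mean way up
print(median(scores_with_outlier))  # 4.5: the median barely budges
```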
Measures of Variability (Dispersion)
Measures of Variability (Dispersion):

1. Range: Highest score - lowest score.
2. Standard Deviation: "Average" DISTANCE from the mean. Also the Square Root of the variance.
3. Variance: The square of the standard deviation. It's the average squared distance of EACH SCORE from the MEAN.
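Same idea as a quick Python sketch with toy scores; this uses the population formulas (the sample versions divide by n - 1 instead of n):

```python
from statistics import pstdev, pvariance

scores = [2, 4, 4, 4, 5, 5, 7, 9]
print(max(scores) - min(scores))  # range = 7 (highest minus lowest)
print(pvariance(scores))          # variance = 4 (average squared distance from the mean)
print(pstdev(scores))             # standard deviation = 2 (square root of the variance)
```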
Distributions, Percentiles, and Z-Scores.
Distributions, Percentiles, and Z-Scores

With a normal distribution and knowing what your standard deviation is, you can calculate your percentile.

Percentile @
1. Within 1 standard deviation of the mean: 68%.
2. Within 2 standard deviations of the mean: 96%.
3. Beyond 2 standard deviations: 4%.

Z-Score: Calculating how many standard deviations above OR below the mean your score is...
How?
z = (your score - mean) / standard deviation.

ex. Mean = 20, standard dev = 15, you scored 50. Z-score:
(50 - 20) / 15 = 30 / 15 = 2 ;)

So: Your score is so high it's 2 standard deviations ABOVE the mean...
Easy.

Percentiles for normal distributions:
ex. For a z-score of +1, what's that percentile?

50% for all the scores below the mean, plus 34% for the people who scored between the mean and the +1 standard deviation mark.

Meaning, if you got a +1 z-Score, you'd be in the 84th percentile. You genius, you.

Alternately, if you got a -1 z-Score: 50% (for the people below the mean) - 34% (for everyone who scored between a 0 and a -1 z-Score), so: 50 - 34 = 16%. You did better than 16% of test takers.

That's what I'm aiming for on the GRE Psych.

To go (very roughly) from a z-Score to a percentile, just add all the percentages to the left of your score...
ex. You got a +2 z-Score, what's your percentile?
OK: 2 + 14 + 34 + 34 + 14 = 98, so the 98th percentile.

Now you can call yourself smart for real.

AND here's a random fuck-you from ETS to us:

What happens if you do a z-Score conversion to ALL your scores?
Your mean is 0, your standard deviation is 1.
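A short Python sketch of the same math, reusing the mean-20 / SD-15 / score-50 example. The exact normal percentile (via scipy) comes out around 97.7, which the rough 2-14-34-34-14-2 table rounds up to 98; the last line assumes the mean-50 / SD-10 T-score scaling from the next card:

```python
from scipy.stats import norm

mean, sd, raw = 20, 15, 50

z = (raw - mean) / sd            # z = (score - mean) / SD = 2.0
percentile = norm.cdf(z) * 100   # exact normal percentile: ~97.7
t_score = 50 + 10 * z            # T-score scaling (mean 50, SD 10) = 70

print(z, percentile, t_score)
```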
T-Score
T-Score: Has a mean of 50 and a standard deviation of 10. T-Scores are often used for test score interpretation.
Normal vs. Skewed Distribution (re: the mean, median, and mode's location)
Normal: Symmetrical, greatest frequency's in the middle, so: mean, median, and mode are all the same

Skewed: They're not. The mean gets pulled toward the tail (the extreme scores), with the median sitting between the mean and the mode.
Correlation Coefficients (what is it? What's a positive and negative correlation? What's a scatterplot? Relation to factor analysis? What's a factor?)
Correlation Coefficients: A descriptive stat that measures to what extent two variables are related. A correlation is the degree of association between two variables.

Range: -1.00 to +1.00

Positive Correlation: As one variable increases, so does the other (they move in the same direction).

Negative Correlation: As one variable increases, the other decreases (they move in OPPOSITE directions).

Scatterplot: Graphical representation of correlational data. From that, we can draw the best-fitting line through the dots.

All the 17 year olds in the room, please note CORRELATION does NOT equal causation, mmmkkay?

Correlation is a cornerstone to Factor Analysis: Attempts to account for the interrelationships found among various variables. Shows how groups of variables "hang together."

Factor: In a correlation matrix, we want to see which variables are highly correlated, so we can assume they're measuring the same FACTOR ;)
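A minimal numpy sketch of a correlation coefficient, with invented hours-studied and test-score data:

```python
import numpy as np

hours_studied = np.array([1, 2, 3, 4, 5, 6])
test_score    = np.array([55, 60, 62, 70, 74, 80])

r = np.corrcoef(hours_studied, test_score)[0, 1]
print(r)  # ~0.99: a strong positive correlation (which still says nothing about causation)
```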
Inferential Stats (what do they let us do? What's a significance test? What's the alternative and the null hypothesis, what's statistical significance, and what's a criterion of significance/ alpha level?)
Inferential Stats: Helps us use a small batch of data to make conclusions about a whole population of people.

Significance Test: When we try to show that our Alternative (research) hypothesis is SUPPORTED by our data, and we can reject our Null Hypothesis.

We test our null hypothesis against the data we obtained from our sample.

Are our findings due to a real difference, random chance, or error?

A significance test tells us the PROBABILITY that our observed difference is due to chance. If there's a high chance it was due to chance (trippy), we fail to reject our null and get no more funding ever.

A low probability that it's due to chance means we can reject our null, meaning our observed difference is STATISTICALLY SIGNIFICANT.

Criterion of Significance: We establish it before collecting data. It's setting the alpha level (usually .05).
Oh noes! Errors in Significance Testing (also called little white lies)

Type I and II, and what's the relationship between significance testing and sample size.
Errors in Significance Testing:

1. Type I: The null was true, you rejected it.

Type I: The chance of making this error is the same as your alpha level. Reality in a Type I: There's no difference in the population values, you got a statistically significant result by CHANCE.


2. Type II: The null was false, you accepted it.

Type II: Reality: There IS a real difference, but your result wasn't statistically significant, so you wrongly accepted the null... The probability of making a Type II error is Beta.

Sample size and significance: The larger the sample size, the smaller the difference between the groups needs to be to reach significance.
Types of Significance Tests

1. t-test
2. ANOVA
3. Chi-Square
Types of Significance Tests

1. t-test: Compares the means of two groups.

2. ANOVA: when you have more than two different groups. It's an analysis of variance: estimates how much group means differ from EACH OTHER.

ANOVAs give an F-Ratio: [between-groups variance] / [within-groups variance]

Basically, factorial ANOVAs can also show us if there's any interaction between two or more IVs.

i. Factorial Design: Each level of a given independent variable occurs with each level of the OTHER independent variable.
ii. Interaction: When the effects of one IV are not consistent across all levels of the other IVs.

3. Chi-Square: When the data are names and categories (nominal and categorical data).
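A rough scipy sketch of all three tests; the data are fabricated and the conventional .05 alpha level is assumed:

```python
import numpy as np
from scipy import stats

alpha = 0.05
high_protein = np.array([7, 9, 6, 8, 10, 9])
low_protein  = np.array([5, 6, 4, 7, 5, 6])
mid_protein  = np.array([6, 7, 7, 6, 8, 7])

# 1. t-test: compares the means of TWO groups.
t, p = stats.ttest_ind(high_protein, low_protein)
print("t-test significant:", p < alpha)

# 2. ANOVA: more than two groups; F = between-groups variance / within-groups variance.
f, p = stats.f_oneway(high_protein, low_protein, mid_protein)
print("ANOVA significant:", p < alpha)

# 3. Chi-square: nominal/categorical counts (observed vs. expected frequencies).
chi2, p = stats.chisquare([18, 22, 20], f_exp=[20, 20, 20])
print("chi-square significant:", p < alpha)
```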
Score Interpretation (norm and domain/criterion referenced test)
Score Interpretation

1. Norm-Referenced Testing: Assessing a score re: others' scores. Test norms come from standardized samples that should be large and representative.

Problem with norm-referenced: The target population can and will change, and then your norm is useless...

2. Domain/Criterion-Referenced Testing: What does the test taker know about a certain content domain? It's like a driver's test...
Reliability

Test-retest, alternate-form, split-half. What's common to all tests?
Reliability: How consistently does a test measure something? Is the test dependable, reproducible, and consistent?

SEM (Standard Error of Measurement): Measures how much, on average, we expect a person's observed score to vary from their true (actual ability) score...

Measuring Reliability:

1. Test-retest reliability: Same test is given to the SAME group twice. Estimates the stability of scores over time.

2. Alternate-Form Method: Given two different test forms at two different times.

3. Split-Half Reliability: There's only one test, but it's split in half and a correlation coefficient is taken between the two halves.

ALL METHODS: A correlation coefficient is calculated. A high one (over +.80) means the test is reliable ;)
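A toy numpy sketch of the split-half method (odd items correlated with even items), plus the standard error of measurement formula, SEM = SD x sqrt(1 - reliability). The little item matrix is invented (rows are test takers, columns are items, 1 = correct):

```python
import numpy as np

items = np.array([
    [1, 1, 1, 1, 0, 0, 1, 1],
    [0, 0, 1, 0, 0, 1, 0, 0],
    [1, 1, 1, 1, 1, 1, 1, 1],
    [0, 1, 0, 0, 1, 0, 0, 0],
    [1, 1, 0, 1, 1, 1, 1, 1],
])

odd_half  = items[:, 0::2].sum(axis=1)   # score on items 1, 3, 5, 7
even_half = items[:, 1::2].sum(axis=1)   # score on items 2, 4, 6, 8
reliability = np.corrcoef(odd_half, even_half)[0, 1]  # ~.96 here, comfortably over +.80

total = items.sum(axis=1)
sem = total.std() * np.sqrt(1 - reliability)  # expected spread of observed scores around the true score

print(round(reliability, 2), round(sem, 2))
```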
Validity

What is it? What's construct, content, face, criterion, convergent, and discriminant validity?)

What's the relationship between reliability and validity?
Validity: How well does a test measure what it's supposed to? What's the relationship between scores on your test and other, independent sources of information about the behaviors in question?

Content Validity: How well does a test COVER all the facts of your skill or knowledge area?

Face Validity: Do the test items APPEAR to test what they're supposed to?

Criterion Validity: How well does the test PREDICT performance on an established test of the SAME skill?

Cross Validation: When you test the criterion validity of a test on a second sample, after you showed it on a first sample...

Construct Validity: How well does your test relate to the THEORETICAL framework re: what you want to measure?

like, if I want to show that pretty is related to smart, people who score high on my pretty test also need to score high on the smart test. (Convergent Validity).

Discriminant Validity: My test also has construct validity if it's not correlated with other variables that aren't related to my construct
(my pretty people should NOT score high on someone's ugly test).

A test can be 100% reliable but 0% valid (it tests something unrelated to your construct really well). But a test with 0% reliability will have 0% validity.
Scales of Measurement

1. Nominal
2. Ordinal
3. Interval
4. Ratio
Scales of Measurement

1. Nominal (categorical): ex. names.
2. Ordinal: Ranked re: size or magnitude. Like, highest, second-highest.
3. Interval: Uses ACTUAL numbers, with equal intervals between values but no true zero.
4. Ratio: There's a true zero (the total absence) of the thing we're measuring.

RATIO SCALE: We can add/subtract, multiply and divide.

INTERVAL: We can add and subtract.
Ability Tests (aptitude vs. achievement tests, what's IQ? What's ratio and deviation IQ?)
Ability Tests:

1. Aptitude: Your POTENTIAL to accomplish something. (SAT: Your potential to do well in school)

IQ: Intelligence aptitude test.

Ratio IQ: (Mental Age / [chronological age] ) X 100.
Problem with ratio IQ is that it goes down with age...

Deviation IQ: Reflects your standing among your same-age peers.


2. Achievement: What you already know.
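A quick sketch of both IQ formulas; the deviation IQ line assumes the usual mean-100 / SD-15 scaling (the Stanford-Binet historically used an SD of 16):

```python
mental_age, chronological_age = 10, 8
ratio_iq = (mental_age / chronological_age) * 100   # = 125; shrinks as chronological age climbs

z = 1.5                                              # your standing among same-age peers, in SD units
deviation_iq = 100 + 15 * z                          # = 122.5 under mean-100 / SD-15 scaling

print(ratio_iq, deviation_iq)
```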
Intelligence tests:

Wechsler (different types? Difference between W and S-B?)
Wechsler: All items are grouped into subtests, arranged in order of increasing difficulty within each subtest.

The Stanford-Binet is organized by age levels.

A Wechsler for you:
1. WPPSI: Preschoolers.
2. WISC: 5-16.
3. WAIS III: 16 and older.
Personality Tests I

Personality Inventory (MMPI, CPI)
Hathaway and McKinley? What's empirical criterion-keying approach? What's a personality inventory?
Personality Tests

1. Personality Inventory: Self-rating device with statements that can be answered by the person TAKING the test. There's a limited way to respond (ex. yes/no)...


a. MMPI: 550 statements (true / false / cannot say): Ten clinical scales.

Also has scales for careless, faked, distorted, and misrepresented answers. If you score high on these scales, your score may be thrown out.
The MMPI is meant to help assess clinical disorders.

Hathaway and McKinley: The MMPI was developed using an empirical criterion-keying approach: Answers that tested differently between clinical and nonclinical populations. Each criterion group's responses made up a clinical scale.


MMPI-2: Added content scales, derived from theoretical concerns.

b. California Psychological Inventory (CPI): A personality inventory based on the MMPI. Used for normal high schoolers and college students.

CPI: Measures dominance, sociability, self-control, femininity. All scores are standardized against a standardization sample.
Personality Tests II:

Projective Tests (what is it? What's a Rorschach? TAT? Blacky pictures? Rotter Incomplete Sentence?)

What's the Barnum Effect?
Personality Tests II:

Projective Tests: Relatively ambiguous stimuli are presented, the subject is asked to INTERPRET the stimuli. Test taker can answer a lot of ways, and scoring for these tests is SUBJECTIVE.

1. Rorschach: 10 inkblot cards presented in a specific order with specific instructions (what do the blots remind you of?)

Scoring: What the person saw, what spontaneous remarks they make.

2. Thematic Apperception Test (Morgan and Murray): 20 simple pictures with scenes that may have ambiguous meanings. Tell a story about what's happening and why?

3. Blacky Pictures: A projective test for little kids. Blacky (a cartoon dog) is in a situation that corresponds to a particular stage of psychosexual development. Tell stories about the pictures you see...

4. Rotter Incomplete Sentences Blank: Has 40 sentence stems and you complete them with whatever's on your mind.

The Barnum Effect: The tendency for people to accept and approve of YOUR vague, general interpretation of their personality. Like astrology almost.
Interest Testing (LAST FUCKING FLASHCARD!!!)

What's the Strong-Campbell Interest Inventory? How's it organized? What's relation to Holland's model? What's the RIASEC system?
Interest Testing: To assess your interest in a certain line of work..

1. Strong-Campbell Inventory: Organized like a personality inventory, using the empirical keying approach.

Test takers: Given a list of interests, asked to say if they like them or not. Then they give their preference for one of two paired items.

Based on Holland's occupational themes model: RIASEC (realistic, investigative, social, artistic, enterprising, conventional) interests.