45 Cards in this Set
- Front
- Back
Definition of Experiment
|
- A systematic research study in which the investigator varies some variable(s), attempts to hold other variables constant, and observes the effects of the systematic variation on another variable. Involves Independent, Extraneous, and Dependent variables.
- Vary X, then measure changes in Y
- E.g., vary arousal, then measure changes in reaction time
- Hold all else constant (e.g., hunger, distractions, etc.) |
|
Independent Variable (I.V.)
|
The factor of interest that is varied by the experimenter.
- Has at least two “levels”
- Usually directly manipulated
• E.g., give drug vs. not, amount of a certain drug given, the difficulty of a particular task, the arrangement of stimuli, etc.
- Sometimes not directly manipulated
• E.g., “subject variables”
- 5 types of independent variables |
|
Dependent Variable
|
– Behavior measured as the outcome of an experiment.
– Look at the effect of the I.V. on the D.V.
– Must be operationally defined.
– Things to be careful about when choosing your dependent variable:
1. Reliability and Validity (already talked about)
2. Ceiling Effect
3. Floor Effect |
|
Extraneous Variables
|
• A variable other than the independent variable that could affect the dependent variable, if not controlled properly.
– aka “nuisance variable”
– E.g., temperature, lighting, instructions, socioeconomic status, drug use history, etc.
• If extraneous variables are held constant across levels of the independent variable, they are usually not a problem.
• But if an extraneous variable co-varies with the independent variable, then you have a confound. |
|
“levels” of an Independent Variable
|
E.g., drug vs. placebo (no drug) = 2 levels
- Can have more than 2 levels
– E.g., type of treatment
• E.g., one group gets Cognitive Behavioral Therapy, another gets Psychoanalysis, another gets a pharmacotherapeutic drug, another gets nothing
• This has 4 levels |
|
Situational Independent Variable
|
Features of subject’s environment are manipulated by investigator
- E.g., number of bystanders in “diffusion of responsibility” experiment |
|
Task Independent Variable
|
Features of subject’s task are manipulated by investigator
• E.g., harder vs. easier problems to solve • E.g., classical conditioning experiment, whether a stimulus is paired with food or not |
|
Instructional Independent Variable
|
Features of instructions given to the subjects are manipulated
• E.g., telling subjects to focus on one aspect of a task or another
• E.g., telling subjects to solve a problem one way or another
• E.g., telling them that it’s a drug when it’s actually a placebo, or vice-versa (expectancies) |
|
Invasive Independent Variables
|
Creating a physical change in the subject’s body
• E.g., drug vs. no drug • E.g., lesion vs. no lesion (or “sham” lesion) |
|
What is a subject variable, and why is it a special independent variable?
|
Subject variable: existing characteristics of subjects that are used to form different levels of the independent variable.
• Not directly manipulated by the experimenter
• Subjects in each condition (level) “selected” on the basis of the characteristic
• E.g., independent variable = ethnicity; compare East Asians and European-Americans
• E.g., depressed people vs. non-depressed people
• E.g., high anxiety vs. low anxiety
• E.g., male vs. female
• According to some, studies that have a subject variable as the I.V. are not “true experiments”, or are only “quasi-experiments”, because the I.V. is not directly manipulated.
– It’s basically just a correlational study.
– The problem is with extraneous variables – you can’t really control all of them.
– It’s the same with “natural experiments” |
|
Floor Effect
|
- Average scores in every group are so low that you cannot detect a difference between groups.
- E.g., what if the measure of cognitive performance was doing advanced calculus, and the High Anxiety Group and Low Anxiety Group both got 0-5% correct? |
|
Ceiling Effect
|
Average scores in every group are so high that you cannot detect a difference.
• E.g., what if the measure of cognitive performance was doing simple sums like 2+3=5, and the High Anxiety Group and Low Anxiety Group both got 95-100% correct? |
|
What does the construct validity of an experiment refer to?
|
– Remember “construct validity” of a measure?
– Now, construct validity of a whole experiment – Experiment has high construct validity if the operational definitions of the I.V. and D.V. are valid |
|
History (Threat to Internal Validity)
|
•Potential problem in studies that involve pre-test / post-test design
• Pre-test: measure administered to subjects before the treatment is given
• Post-test: measure administered to subjects after the treatment is given
• History refers to events other than the treatment that occur between pre- and post-tests and could affect post-test measures
• E.g., measure “cognitive performance” on day 1, take an experimental drug every day for a week, repeat the measure on day 7
• But suppose the night before, someone pulled the fire alarm in the dorm and you had to stay up for several hours
• A control group deals with potential history effects |
|
Maturation (Threat to Internal Validity)
|
• Also a potential problem in studies that involve pre-test / post-test design
• Psychological or physiological changes that occur between pre- and post-tests and could affect post-test measures
• E.g.:
• Pre-test: measure performance in Morris Water Maze
• Treatment: give cognitive-enhancing drug for 3 months
• Post-test: measure performance in Morris Water Maze
• http://www.youtube.com/watch?v=LrCzSIbvSN4
• A control group deals with the potential confound of maturation |
|
Regression to the Mean (Threat to Internal Validity)
|
• Another potential problem in studies that involve pre-test / post-test design
• If the Pre-test score was unusually high or low, the Post-test score will likely be more “usual”, or near the mean, even if the treatment has no effect.
• Can give the illusion that the treatment is “causing” the change in scores.
• E.g., you want to see if Cognitive Behavioral Therapy will help people who have low self-esteem.
• Put up a flyer asking for students with low self-esteem to sign up for your study.
• Administer a self-esteem questionnaire.
• Pre-test: self-esteem score mean is 7 for your group (for the general population, mean is 50)
• Treatment: 10 weeks of cognitive behavioral treatment for self-esteem
• Post-test: self-esteem score mean is 35.
• Conclude that cognitive-behavioral therapy is effective in boosting low self-esteem?
• A control group deals with the potential problem of regression.
• If subjects were appropriately assigned to no-treatment control and experimental groups, so that both groups had the same Pre-test scores, then any group differences on the Post-test could be attributed to the treatment, rather than simple “statistical regression”. |
|
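The regression effect described in the card can be demonstrated with a small simulation (a sketch with made-up numbers, not from the card): if you select only people with extreme low pre-test scores and apply no treatment at all, their post-test mean still drifts back toward the population mean, because part of their low pre-test score was just noise.

```python
import random

random.seed(42)

# Each person has a stable "true" self-esteem plus day-to-day noise.
N = 10_000
true_scores = [random.gauss(50, 10) for _ in range(N)]
pretest = [t + random.gauss(0, 10) for t in true_scores]

# Recruit only people who scored very low on the pre-test (like the flyer).
selected = [i for i in range(N) if pretest[i] < 25]

# Post-test: same true scores, fresh noise, NO treatment applied.
posttest = [true_scores[i] + random.gauss(0, 10) for i in selected]

pre_mean = sum(pretest[i] for i in selected) / len(selected)
post_mean = sum(posttest) / len(posttest)
print(f"pre-test mean:  {pre_mean:.1f}")   # well below 50 by construction
print(f"post-test mean: {post_mean:.1f}")  # closer to 50, with no treatment
```

The post-test improvement here is pure statistical regression, which is exactly why a no-treatment control group with matched pre-test scores is needed.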
Testing Effects (Threat to Internal Validity)
|
• Just by taking Pre-Test, score on Post-Test changes
• The change is not due to the I.V.
• Especially the case if the two tests are identical.
• E.g., memorize one list of syllables, administer a drug, re-memorize the list of syllables.
• If it takes less time to memorize the list the second time around, is this proof that the drug facilitates memory?
• A control group deals with the potential repeated-testing problem. |
|
Internal Validity
|
–Degree to which research is methodologically sound.
–Degree to which study is confound-free. |
|
Between-Subject Design
|
(between groups) subjects experience only one level of an independent variable
• Must be used when effects of the independent variable are permanent, long-lasting, or irreversible
• Must be used when the independent variable is a subject variable (e.g., male/female) |
|
Within-subject Design
|
(repeated measures) subjects experience more than one level of the independent variable
|
|
Advantages of Between-Subject Design
|
Don’t have order effects
|
|
Disadvantages of Between-Subject Design
|
•Need many subjects → each subject contributes one data point
•Weaker statistical power
•Difficult to create equivalent groups → harder to eliminate confounds |
|
Advantages of Within-Subject Design
|
•Need fewer subjects → each subject contributes multiple data points
•Greater statistical power
•No equivalent-groups problem, since there are no separate groups, which makes it easier to eliminate confounds
•(Equivalent groups are groups made equal on everything (all extraneous variables) except the independent variable) |
|
Disadvantages of Within-Subject Design
|
•Order Effects (“sequence effects”)
|
|
Random Assignment
|
Each subject in the experiment has an equal chance of being assigned to each level of the independent variable
• A strategy used to form equivalent groups
• Goal is to distribute any extraneous variable (SES, age, motivation) equally among the groups
• Let the Law of Large Numbers do its thing |
|
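A minimal sketch of simple random assignment (the function name is illustrative, not from the card): every subject independently has an equal chance of landing in each level of the I.V.

```python
import random

# Simple random assignment: each subject has an equal chance of being
# placed in each level of the independent variable.
def random_assignment(subjects, levels):
    return {s: random.choice(levels) for s in subjects}

random.seed(1)
groups = random_assignment(range(12), ["drug", "placebo"])
print(groups)
```

Note that simple random assignment does not guarantee equal group sizes; that is what block randomization (next card) is for.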
How to Do Block Randomization
|
Done to ensure equal sample sizes in the 2 groups
• Subjects in each “block” of 2 are randomly assigned, one to each of the 2 groups
• E.g., the first 2 subjects to show up are randomly ordered as Group 2, Group 1; the next 2 subjects to show up are ordered Group 1, Group 2 |
|
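The block procedure above can be sketched in a few lines (group labels are illustrative): shuffle each block of 2 so both groups appear once per block, which keeps group sizes from ever drifting apart.

```python
import random

# Block randomization with blocks of 2: within each block, one subject
# goes to each group, so group sizes stay equal throughout the study.
def block_randomize(n_subjects, groups=("Group 1", "Group 2")):
    assignments = []
    for _ in range(0, n_subjects, len(groups)):
        block = list(groups)
        random.shuffle(block)  # random order within this block
        assignments.extend(block)
    return assignments[:n_subjects]

random.seed(0)
plan = block_randomize(8)
print(plan)
```

Each consecutive pair of arrivals contains one subject from each group, unlike simple random assignment.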
Order Effects? (Which design are they a problem with?)
|
Experience with one level of I.V. affects performance under other level(s)
Problem for Within-Subjects Designs |
|
Progressive Effects
|
1. Practice Effects - average performance increases over conditions, raising the question of whether the hypothesis is actually supported
2. Fatigue or Boredom Effects - average performance declines over conditions |
|
Carryover Effects
|
•Specific sequence of levels of the independent variable experienced by subjects produces effects that are different from the effects produced by other sequences
|
|
Strategy used to deal with order effects
|
Counterbalancing - use more than one order (sequence of conditions, i.e., levels of the I.V.) in a way that holds potential order effects constant over conditions
|
|
Experimenter Bias and How to deal with it
|
Experimenter Bias - preconceived expectations of what should happen (or what the experimenter wants to happen) influence the outcome of the experiment
• Not fraud if the experimenter isn’t even aware
• E.g., Rosenthal’s famous maze-bright / maze-dull rats
Forms:
1. Verbal and non-verbal cues - words used, variations in tone of voice, smiling vs. not smiling
2. Misjudgment of subjects - rating aggression in the Bobo-doll experiment: was that really an aggressive act?
3. Inaccuracies in data recording - if you get unexpected results, you will probably check to make sure you entered the data correctly; will you also do that if you get expected results?
All of these can be completely unintentional.
How can we deal with it?
1. Standardization - read the exact same instructions in the exact same way
2. Automation (mechanization)
3. Double-blind procedure - often done in drug research, but can also be done in more psychological research, though it is more difficult |
|
Demand Characteristics
|
Cues or features of experimental situation that suggest what the subject “should” do
|
|
Evaluation Apprehension
|
Can cause subjects to modify behavior or responses to questions to appear “better” than they actually are
• E.g., how many hours of TV do you watch per week?
- 0, 1-8, 9-16, 17-24, more than 24 |
|
How does a waiting list control group work?
|
Subjects in the experimental group(s) and control group are all seeking treatment
• Ensures that they are comparable in terms of the problem for which they are seeking treatment
• Experimental group gets the new treatment; waiting-list control gets nothing or the standard treatment
• Ethical issues: people with terminal conditions often want access to “unproven” treatments and don’t want to wait |
|
How does a yoked control group work?
|
2 subjects linked: a “master” subject and a “yoked” partner
• Whenever the master subject experiences some event, so too does the yoked subject, regardless of its own behavior
• E.g., Learned Helplessness
Phase 1: Three different groups
• Escapable Shock Group - could avoid shock by pushing a panel with its nose
• Inescapable Shock Group - behavior had no influence on shock
• No Shock Group - no shock given
The question is not about shock per se, but rather control over shock. Yoking could equate the amount and distribution of shocks in the first two groups:
• Each subject in the Inescapable Shock Group was yoked to a master subject in the Escapable Shock Group
o Actual electromechanical link between the 2 experimental chambers
o Whenever the partner in the ESG got shocked (due to not having learned the contingency yet, inattention, etc.), the yoked subject in the ISG would also get shocked
• So the 2 groups got the exact same amount of shock; the only difference is that one group had control over the shock
Phase 2: All groups tested on a shock-avoidance task in a “shuttle box” (different from the Phase 1 shock apparatus) |
|
Confound? (know def. and also how to identify in an example)
|
• An extraneous variable that co-varies with the independent variable and could be responsible for the observed results.
• To fix a confound problem, hold the extraneous variables constant across levels of the I.V.
– A confound could provide an “alternative explanation” of the results. |
|
Is an extraneous variable necessarily a confound?
|
No, not if it’s held constant across levels of the I.V.
|
|
Subject selection bias and why might it result in a confound?
|
- Pre-existing characteristics of subjects are different in different groups
- E.g.:
- I.V. = “self-paced” learning vs. standard lecture course format
- D.V. = grade on a standardized exam at the end of the semester
- Students are allowed to choose which course to enroll in.
– Problem: one class might get students who are more motivated than the other, and this accounts for the group differences.
• Might result in a confound because existing characteristics are not something the experimenter can control |
|
Differential attrition and why might it result in a confound?
|
- Like with subject selection bias, pre-existing characteristics of subjects are different in the different groups.
- But with attrition, these differences are a result of people dropping out of the experiment.
- Example:
- Want to test whether positive reinforcement or punishment is better for training air-traffic controllers.
- 10 subjects quit the experiment halfway through in the Punishment Group, but only 2 subjects quit in the Reinforcement Group.
- At the end of the experiment, the Reinforcement Group has a mean of 75% accuracy, while the Punishment Group has a mean of 90% (statistically significant).
- The experimenter concludes that punishment is better. But now the groups aren't equal. |
|
Matching
|
Attempts to hold an extraneous variable constant over groups
• Matching variable - one that you suspect could influence the dependent variable (e.g., the effect of age on treatment outcome)
• The experimenter creates groups with the same level of the matching variable.
• Matching is especially helpful with small samples. |
|
Complete Counterbalancing
|
every sequence used at least once, and no sequence is used disproportionately as compared to the others
|
|
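With 4 conditions there are 4! = 24 possible sequences, so complete counterbalancing needs a number of subjects that is a multiple of 24. A quick sketch (condition names borrowed from the later cards in this deck):

```python
from itertools import permutations

# Complete counterbalancing: enumerate every possible sequence of conditions.
conditions = ["Pop", "Classical", "Indian", "Silence"]
sequences = list(permutations(conditions))
print(len(sequences))  # 4! = 24 possible orders
```

Each of the 24 sequences is then assigned to an equal number of subjects, so no order is used disproportionately.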
Block Randomization
|
done to ensure equal numbers in control and experimental groups
If there are 2 groups, subjects in each “block” of 2 are randomly assigned, one to each of the 2 groups
How it’s done:
• The first 2 subjects to show up are randomly assigned, one to each of the 2 groups; the next 2 subjects form the next block, and so on |
|
Partial Counterbalancing
|
randomly select from possible sequences, and for each subject, assign one particular randomly selected sequence.
• E.g., one subject gets Pop-Indian-Silence-Classical, another gets Silence-Pop-Classical-Indian, etc. |
|
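A sketch of partial counterbalancing: instead of enumerating all 24 orders, each subject simply gets one randomly shuffled sequence (condition names from the card).

```python
import random

# Partial counterbalancing: each subject receives one randomly
# selected order of the conditions.
conditions = ["Pop", "Classical", "Indian", "Silence"]

def random_sequence(conds):
    order = conds[:]       # copy so the master list is untouched
    random.shuffle(order)
    return order

random.seed(3)
per_subject = [random_sequence(conditions) for _ in range(5)]
print(per_subject)
```

This trades the guarantee of complete counterbalancing for practicality when the number of subjects is small relative to the number of possible orders.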
Reverse Counterbalancing
|
Can be done when subjects experience each condition more than once.
– Administer one order, then the exact reverse to every subject. • E.g., Pop-Classical-Silence-Indian; Indian-Silence-Classical-Pop |
|
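The reverse-counterbalanced sequence from the card's example can be built mechanically: take one order, then append its exact reverse.

```python
# Reverse counterbalancing (A-B-C-D then D-C-B-A): usable when each
# subject experiences every condition more than once.
def reverse_counterbalance(order):
    return order + order[::-1]

sequence = reverse_counterbalance(["Pop", "Classical", "Silence", "Indian"])
print(sequence)
# -> Pop, Classical, Silence, Indian, Indian, Silence, Classical, Pop
```

Because every condition appears once in each half, linear order effects (e.g., steady practice or fatigue) average out across the two passes.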
Subject (Participant) Bias
|
Preconceived expectations or attitudes of subjects influence the results
• Placebo Effect - participants’ expectations contribute to the observed effect |