71 Cards in this Set
Three Types of Knowledge |
Descriptive: something exists and is separate/distinct from other things. Defining/classifying.
Predictive: X predicts Y (correlations). About patterns.
Understanding: cause comes before and leads to effect. |
|
Sources of Knowledge |
EVERYDAY KNOWLEDGE / SOURCES OF HYPOTHESES:
Authority: a trustworthy source.
Rational-Inductive Argument: learn what's already known, then combine information to form new knowledge. Start with an axiom or fact: A -> B and B -> C, so A probably -> C. Good for math, history, philosophy, and literature.
EMPIRICISM:
Scientific Empiricism: learn what's known, form a hypothesis, design the research, collect data, analyze the data, draw conclusions.
All four are sources of hypotheses (though usually prior empirical research).
Only scientific empiricism is a source of scientific psychological knowledge. |
|
Components of a Research Hypothesis |
An educated guess. Needs to be testable and falsifiable.
Testable: limited by technology, ethics, and resources. Falsifiable: predicts a specific outcome. |
|
Types of hypotheses
what is necessary for the last type? |
Attributive: something exists and can be measured/distinguished. Univariate.
Associative: there is a relationship between two things.
Causal: something causes something else, after controlling for external variables.
Causal needs: 1) cause before effect, 2) a demonstrated statistical relationship, 3) elimination of alternative explanations |
|
Research Loop |
1. Library research - know the hypotheses tested, research designs, analyses, and conclusions. Requires finding research, interpreting it, and evaluating it.
2. Hypothesis formation - identify an educated guess. Need to analyze information into components, synthesize (assemble current knowledge), and evaluate whether the new knowledge is a worthwhile addition to what's already known.
3. Research design - identify how you will collect data: target population, setting, task, manipulation, data collection method.
4. Data collection - selection, assignment, task/setting/conditions.
5. Data analysis - data must be scored, collated, aggregated, and prepared for statistical analysis.
6. Hypothesis testing - results, bases for supporting the RH, decision.
7. Draw conclusions - combine the new knowledge with your literature review. |
|
How is the research loop applied |
Novel research: new research using the best design you can.
Replication: repeating the initial design, looking for consistency.
Extension and convergence: testing variations of the research design - varying populations, settings, tasks, measures, and sometimes data analysis. Testing the limits of generalizability. |
|
Critical Experiment vs Converging Operations |
Critical Experiment: the one correct and ideal way to perform an experiment.
Converging Operations: running multiple different versions of each study, looking for consistency |
|
What is a Research Hypothesis and how do you test it? |
An educated guess about the relationships between behaviors.
To test it: sample participants, collect data, analyze the data, and conclude whether the data support the RH.
Make sure the conclusion is VALID |
|
What is validity?
4 Types of Validity |
Accuracy/correctness.
External - to what extent can we generalize our results?
Internal - is it correct to give a causal interpretation?
Measurement - do our variables/data accurately represent the behaviors we intend to study?
Statistical Conclusion - have we reached the correct conclusion? |
|
External Validity |
Can we accurately generalize our results to other participants, situations, and times?
Are the participants, stimuli/tasks, and settings sufficiently applicable to the population and to out-of-lab situations and conditions? |
|
Internal Validity |
Can we conclude a causal relationship? Does the test eliminate confounds? |
|
Measurement Validity |
Do our variables/data accurately represent the behaviors we intend to study?
Accuracy of OBSERVATION and SELF-REPORT.
|
|
Statistical Conclusion Validity |
Is the conclusion correct regarding relationship we are studying?
Does the data analysis produce the correct answer? Was the data analysis appropriate for the type of data and RH? Was the decision about whether there is a relationship between the variables accurate? |
|
Population External Validity |
Will the results generalize to other persons or animals?
college students ~ consumers
chronically depressed ~ acutely depressed
captive-bred turtles ~ wild-caught turtles |
|
Setting External Validity |
Will the findings apply to other settings?
Lab study ~ classroom
Psych hospital ~ out-patient clinic
Lab study ~ retail stores |
|
Task/Stimulus External Validity |
Will the results generalize to other tasks or stimuli?
Lever pressing ~ compliment seeking
Consumer decision making ~ selecting the best widget
Visual illusions ~ perception of everyday objects |
|
Societal/Temporal Changes |
Will the findings continue to apply?
1965 ~ today
today ~ 10 years from now |
|
Overlap of components of external validity |
Population and setting - the location changes the demographic (in-patient schizophrenia and out-patient schizophrenia might change type of schizophrenia)
Setting and Task/Stimuli - location changes what you're doing (argument role playing in a lab vs start of bar fights are different types of arguments/stimuli)
Population and Task/Stimuli - you have to adjust tasks for the population (elementary vs high school math: you have to change the type of math) |
|
Cultural External Validity |
Different behaviors/relationships between behaviors across cultures
Culture is defined by members and location... combination of population and settings |
|
Ecological External Validity |
Synonym for external validity. Elements the participant interacts with and within. Combination of setting and task/stimuli |
|
Generalizability |
Whether or not the results will hold for all (or most) combinations of the elements of external validity (setting, population, stimuli, temporal/societal). Not some - ALL.
Requires lots of convergent research. |
|
Applicability |
Whether a finding is applicable to a specific combination of elements of external validity. |
|
Reasons to limit external validity |
De-emphasize external validity: if the main focus is causal interpretability, you have to control for confounds, which makes the settings less realistic. Common among theoretical researchers but not applied researchers.
Eschew external validity (emphasize specific applicability instead of generalizability). Common among applied researchers. They don't want to generalize - the research matches the application. |
|
Participant Selection/Sampling -related type of validity -stages |
Population External Validity - NOT Internal/causal validity
- Target Population: the people/animals we want to study
- Sampling Frame: the "best list" we can get of population members
- Selected Sample: sampling frame members we select to participate
- Data Sample: participants from whom useful data are collected |
|
Population Sampling |
sampling frame includes entire population |
|
Purposive Sampling |
sampling frame includes a subset of the entire population that is deemed representative of the entire population. |
|
Selection/Sampling Procedures |
Population Sampling Frame vs Purposive Sampling Frame
Researcher-Selected vs Self-Selected
Simple Sampling vs Stratified Sampling |
|
Researcher-Selected vs Self-Selected |
Researcher-Selected - potential participants from the sampling frame are selected by the researcher, then contacted and asked to participate (e.g., the sampling frame is cut into strips and names are drawn from a hat).
Self-Selected - ALL potential participants are informed about the opportunity to participate and to contact the researcher if they wish to volunteer.
Representativeness can be compromised if the entire target population is not notified or if there is an uneven motivation to volunteer (payment/extra credit) |
|
Simple Sampling vs Stratified Sampling |
Simple: every member has an equal probability of being in the study.
Stratified: divide the sampling frame into strata using variables (age, gender, job). Members within each stratum have an equal probability of being in the study. Usually done to ensure representation of smaller segments. |
|
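The two sampling procedures above can be sketched in Python; the sampling frame, the "age group" stratification variable, and the sample sizes are hypothetical illustrations, not from the card set.

```python
import random

random.seed(42)  # reproducible draw

# Hypothetical sampling frame: (member id, stratum); 70 members in "18-25",
# 30 in "26+" -- the smaller stratum could easily be missed by chance.
frame = [(i, "18-25" if i % 10 < 7 else "26+") for i in range(100)]

# Simple sampling: every frame member has an equal chance of selection.
simple_sample = random.sample(frame, k=20)

# Stratified sampling: divide the frame into strata, then sample within
# each stratum (proportional allocation) so small segments are represented.
strata = {}
for member in frame:
    strata.setdefault(member[1], []).append(member)

stratified_sample = []
for stratum, members in strata.items():
    k = round(20 * len(members) / len(frame))  # proportional share of n=20
    stratified_sample.extend(random.sample(members, k))
```

With simple sampling the "26+" stratum is represented only in expectation; stratification guarantees it.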
8 combinations of ways we obtain participants
what random sampling means in textbooks how is random sampling usually done how participant selection is usually done in empirical research |
population vs purposive
simple vs stratified
researcher-selected vs self-selected
Random Sampling in textbooks: population sampling frame, researcher-selected.
Random Sampling in reality: purposive sampling frame, researcher-selected.
Participant selection in empirical research: self-selected, purposive sampling frame |
|
2 types of behavior/measure in a research study |
Constant vs Variable
Measured (subject) vs Manipulated (procedural)
|
|
Causal Research Hypothesis |
the Independent/Causal/Procedural Variable is Manipulated, changing the value of the Dependent/Subject/Effect/Response/Outcome Variable, which is Measured |
|
Four "roles" variables/constants might play in a study |
Causal Variable
Effect Variable
Confounding Variable
Control Constant/Variable
|
|
Control Constants vs Control Variables |
Control Constants - any behavior/characteristic where all participants have the same value. Control Variable - any behavior/characteristic which is on average balanced/equivalent within the treatments or conditions. |
|
Components of Internal Validity |
Initial Equivalence - prior to manipulating the causal variable, participants in different conditions are the same (on the average) on all measured/subject variables.
Ongoing Equivalence - during the manipulation of the causal variable, completion of the task, and measurement of the effect variable, participants in the different conditions are the same (on the average) on all manipulated/procedural variables except the causal variable. |
|
How to produce initial equivalence? |
RANDOM ASSIGNMENT of individual participants to treatment conditions before treatment begins. |
|
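The acceptable procedure above - random assignment of individual participants before treatment begins - can be sketched in Python; the participant IDs and condition names are hypothetical.

```python
import random

random.seed(7)  # reproducible assignment

# Hypothetical participant pool and a two-condition between-group study.
participants = [f"P{i:02d}" for i in range(1, 21)]
conditions = ["treatment", "control"]

# Shuffle individuals, then deal them out evenly across conditions, so the
# condition groups are equivalent on average before manipulation begins.
shuffled = participants[:]
random.shuffle(shuffled)

group_size = len(shuffled) // len(conditions)
assignment = {
    cond: shuffled[i * group_size:(i + 1) * group_size]
    for i, cond in enumerate(conditions)
}
```

Because the shuffle is applied to individuals (not intact groups), each participant has an equal chance of landing in either condition.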
How do we produce ongoing equivalence? |
PROCEDURAL STANDARDIZATION of manipulation, confound control, task completion and performance measurement. |
|
Participant Assignment |
How we create initial equivalence. Who will be in what condition of the study, when. Goal is for each participant in each condition of the study to be equivalent, on average, before manipulation begins.
Participant selection relates to external/population validity; participant assignment relates to internal validity/initial equivalence |
|
Between-Group Designs vs Within-Group Designs |
BG: each participant completes only one condition.
WG: each participant completes all conditions; assignment determines the condition order. |
|
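In a WG design, "assignment determines the condition order" is often handled by counterbalancing. A minimal sketch, assuming three hypothetical conditions and six participants (enough for one pass through every possible order):

```python
import itertools

# Hypothetical conditions and participants.
conditions = ["A", "B", "C"]
participants = [f"P{i}" for i in range(1, 7)]

# Full counterbalancing: all 3! = 6 orders of the conditions, cycled
# through the participants so every order is used equally often.
orders = list(itertools.permutations(conditions))
order_of = {p: orders[i % len(orders)] for i, p in enumerate(participants)}
```

Every participant still completes all three conditions; only the order differs, which spreads practice and fatigue effects evenly across conditions.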
1 Acceptable and 5 Unacceptable Assignment Procedures |
Acceptable: random assignment of individuals by the researcher before manipulation of the IV.
Unacceptable: 1) random assignment of intact groups, 2) arbitrary assignment by the researcher, 3) self-assignment by the participant, 4) administrative assignment by a non-researcher, 5) non-assignment or natural assignment (participants were already in conditions before they arrived; you didn't manipulate anything) |
|
How to generate ongoing equivalence |
Lab > field - it is hard to control procedural variables in the field.
Short > long - the longer the procedure, the harder it is to maintain ongoing equivalence. |
|
Relationship between Internal and External Validity |
Trade-off Characterization - you can't maximize both; the researcher must choose which they prefer.
Precursor Characterization - without causal interpretability, what is there to generalize? Associative information is not valuable. |
|
Two ways to demonstrate our studies are valid |
Replication (repeat the same study)
Convergence (complete different variations of the study) |
|
Two ways of providing evidence to support a RH |
Demonstration - using the treatment and showing that the results are good (commercials).
Comparison - comparing results of the treatment and a control. |
|
True Experiment vs Non-Experiment |
True Experiment: 1) random assignment of individual participants (initial equivalence) 2) manipulation of the IV (provides temporal precedence and ongoing equivalence) 3) control of procedural variables (provides ongoing equivalence)
Non-Experiment: 1) no random assignment of individuals (maybe random assignment of intact groups), 2) no treatment/manipulation by the researcher, 3) poor/no control over procedural variables during the task |
|
What prevents us from random assignment/manipulation of IV? |
Technology, ethics, and resources |
|
Between-Group vs Within-Group Designs |
BG/cross-sectional/between-subjects - each participant is in one of the treatments/conditions.
WG/longitudinal/repeated measures/within-subjects - each participant receives all treatments, in different orders.
Both can be considered causal with Random Assignment, Manipulation of IV and Control over Confounds |
|
Research Design |
True vs Non-Experiment
BG vs WG |
|
Sampling Procedures |
Population vs Purposive
Researcher-Selected vs Self-Selected
Simple vs Stratified |
|
Data Collection Methods |
Behavioral Observation Data
Self-Report Data (survey research)
Trace Data - obtained from the "traces" left by respondents' behavior |
|
Types of Behavioral Observation Data |
Naturalistic Observation - camouflage/distance; participants don't know they're being observed.
Participant Observation - the researcher is participating in the situation.
- Undisguised: someone is observing in plain view; the participant might know they're collecting data.
- Disguised: the observer looks like someone who belongs there. |
|
Data Collection Settings |
Field (where participants naturally behave) - helps external validity but hurts internal validity.
Lab - helps internal but hurts external.
Structured - a natural-appearing setting that promotes natural behavior but allows control. |
|
Naturalistic Observation pros/cons |
Pros: best external validity; participants act naturally; can be experimental with creativity.
Cons: limited to studying behavior; limited to observing public behaviors; requires reliable/accurate coding to produce useful data |
|
Undisguised pros/cons |
Pros: behavior can be natural after participants get used to the observer (habituation: wait until they get used to the observer, then start collecting data; desensitization: the observer slowly approaches so participants can get used to them).
Cons: limited to studying behavior; limited to public behavior; some behaviors/participants don't habituate/desensitize; requires reliable/accurate coding |
|
Disguised pros/cons |
Pros: the participant doesn't know they are being observed, so they "act naturally". Experimental or non-experimental designs can be used.
Cons: limited to studying behavior. Intrusion/privacy issues. Participation reduces objectivity. Needs reliable/accurate coding |
|
Observational Data methods |
Audio recordings (more accurate than written notes)
Picture/video recordings
Non-verbal behaviors (reaction time, eye movement - computerized RT > stopwatches)
Medical/physiological recordings - EEG, EKG, EMG, GSR, MRI, PET scans, hormone levels; can't be measured without instrumentation.
|
|
Self-Report Data Collection methods |
Mail/computerized/group questionnaire
Personal/phone/group interview
Journal/diary |
|
Self-Report Data Collection pros/cons |
Pros: gets non-observable data; works for experimental and non-experimental designs (can easily manipulate and randomly assign).
Cons: dependent on accuracy/honesty, which increases with anonymity/confidentiality/rapport. Response accuracy depends on the construction of the questions and their sequence. |
|
Trace data definition, types and pros/cons |
Data left behind by the behavior we are trying to measure.
Accretion - the behavior adds something: trash, noseprints, graffiti.
Deletion - the behavior "wears away" the environment: wear on steps and walkways.
Pros: unobtrusive; more naturalistic; usually unbiased.
Cons: differential deposit/retention - we can't be sure that nothing modified the trace; very few things leave traces. |
|
Garbageology |
the scientific study of society based on what it discards. - eating habits (take-out, etc.) |
|
Primary data sources vs archival data sources |
Primary: data collected for the purposes of the current study. The researcher has maximal control over planning and completion.
Archival: data collected for previous research or as standard practice, made available for secondary analysis. |
|
Experimenter Expectancy Effects |
Self-Fulfilling Prophecy - researchers unintentionally produce the results they want. 1) Modifying participants' behavior: subtle differences in treatment convey the expected response; different quality of instruction. 2) Data collection bias: coding/interpretation of the data is key; subjectivity/error -> bias.
Demand Characteristics - participants conform to how they think they should act. 1) Social desirability: modifying what you say to seem socially acceptable / to behave as expected. 2) Acquiescence/rejection response: participants can play along (acquiescence) or mess things up (rejection response) - especially important in within-group designs |
|
Single and Double-blind |
Single-blind Procedures: the participant doesn't know the hypothesis, the conditions, or which condition they're in.
Double-blind Procedures: neither the participant nor the data collector/coder knows the hypotheses or other info that could bias the researcher's interaction/reporting/coding or the participants' responses. |
|
Reactivity and Response Bias |
(inaccurate data)
Reactivity - reacting to being observed; common with observational data. Behaving unnaturally. Reduced by naturalistic/disguised participant observation and by habituation/desensitization.
Response Bias - dishonest responding; common with self-report. People describe their character/opinions/behavior as they think they "should", or to present a good impression. Protecting anonymity and building participant-researcher rapport increase honesty |
|
Observer Bias and Interviewer Bias |
(seeing what you want to see)
Observer Bias - in observation: inaccurate data recording/coding. Needs automation/instrumentation; must be done objectively and accurately.
Interviewer Bias - in self-report data collection: "coaching" - how questions are asked and reactions to answers can induce response bias. Computerized/paper-based administration helps. |
|
Types of Data Collection and inaccuracies |
Observational + Researcher = observer bias
Observational + Participant = reactivity
Self-Report + Researcher = interviewer bias
Self-Report + Participant = response bias |
|
Attrition |
drop-out, data loss, response refusal and experimental mortality.
Hurts initial equivalence of subject variables. prevents random assignment. Similar to self-assignment. |
|
How to combat attrition |
1) Educate participants about the important role of random assignment 2) If there is a differential value of treatments, offer them a chance to participate in the preferred condition later. 3) Replacement of participants who drop out of the study 4) Collect data about possible confounds later 5) Replication and Convergence |
|
Risk-benefit trade-off model |
Risk: social embarrassment, psychological/physical risk. Risk might be from manipulation, task, data collection or being associated with the research.
Benefits: to society (knowledge), or the participant (remuneration, pay, credit, or direct benefit of the treatment) |
|
Voluntary Informed Consent without Deception |
Read/sign document that describes their participation and random assignment, as well as social, psychological or physical risks. No info may be withheld that might alter their decision to give informed consent. "Deception" - withholding info that might alter their decision whether or not to participate. Can withdraw informed consent. |
|
Levels of Disclosure |
No one knows the info (not collected as data)
Anonymity (no direct connection between info and identity)
Confidentiality (researcher has the connection between info and identity but doesn't disclose it)
Group Disclosure (info about the "group" is released; must avoid indirect disclosure for small groups)
Masked Individual Disclosure (pseudonym)
Individual Disclosure (requires explicit informed consent) |