
"Validity of a Study"

Degree to which the study accurately answers the question it was intended to answer

"A Threat to Validity"

Anything that limits the study’s ability to answer the intended research question

External Validity

The extent to which the results obtained in a study will hold true outside that specific study


ie. Will the results stand if I use:


a. Different Sample (results generalize to population?)


b. Different Setting (different researchers at a different university)


c. Different Measurement (use physiological measure vs self report)


d. Real world/ natural behaviour

Internal Validity

Whether there are factors about the study that raise doubts about the interpretation of results


-Any factor that allows for an alternate explanation of results (ie. time of day vs effect of music) is a threat to internal validity


**in order to assert 1 explanation for a study, researchers must eliminate confounding variables

Examples of Threats to External Validity

1) Participants


2) Features of the study


3) Measurement

Threat to External Validity Via Participants

Participants may not be representative of their intended population due to:


1) Selection Bias (ie. only collecting in English)


2) Convenience Sample (ie. Psych majors)


3) Participant characteristics (ie. Conservative)


4) Volunteers (??)


5) Cross-species generalizations (ie. Some species you can generalize from, others you can't)

Threat to External Validity Via Features of the Study

1) Novelty effect


-ie. novel environment affecting behaviour




2) Experimenter characteristics


-ie. Experimenter’s behaviour affecting study variables




3) Multiple treatment interference ("testing effects")


-ie. fatigue, practice, or when participants experience more than one condition

Threat to External Validity Via Measuring

1) Timing of measurement


-ie. immediately or 6 months after




2) Sensitization


-ie. assessment sensitization- when the assessment of behaviours/thoughts/beliefs can have an effect

Threats to Internal Validity via Extraneous variables

Additional (extra) variables that change in the experiment


- confounding variables- 3rd party variables that interfere with the ability to interpret results


– These variables are generally not directly investigated


Any variable that can vary with the independent variable, & therefore, might affect the dependent variable.



Threats to Internal Validity via Assignment Bias

When participants in one experimental condition are noticeably different from participants in other experimental conditions


-ie. placing more women in the worry condition than the neutral condition

Threats to Internal Validity via One Group Over Time

1) Pt History- Stuff happens in participant’s lives that may affect your data


-ie. arriving day 2 with cold




2) Maturation– Systematic change in participant’s psychology or physiology


*Especially the case with children and the elderly




3) Instrumentation – Changes in a measuring instrument that occur over time


-ie. Researcher gradually changing the manner in which they observe

Threats to Internal Validity via Testing Effects

"Multiple Treatment Interference"- Any possible change in performance caused byparticipation in a previous treatment:




1) Fatigue Effect- when memory performance worsens because people are getting tired



2) Carry-over effect= when scores in the second condition are affected by the experience of the first condition


- know "what happened" in first condition

Threats to Internal Validity via Regression Towards the Mean

-Statistical phenomenon whereby an extreme score is likely to be followed by a less extreme score


-Occurs because an individual’s score is a function of stable factors AND chance


ie. For someone with a depression score of 60/63 at the start of treatment, that depression score will likely come down even if the treatment is ineffective.
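
A minimal simulation sketch of the idea: each score below is a stable level plus chance, so people who score extremely at Time 1 tend to score less extremely at Time 2 even with no treatment. All numbers (the stable level, cut-off, and spread) are made up for illustration.

```python
import random

random.seed(0)
stable_level = 40                                   # hypothetical "true" depression level
time1 = [stable_level + random.gauss(0, 10) for _ in range(1000)]
time2 = [stable_level + random.gauss(0, 10) for _ in range(1000)]

# Select people who looked extreme (>= 60) at Time 1, then look at their Time 2 scores.
extreme = [(t1, t2) for t1, t2 in zip(time1, time2) if t1 >= 60]
mean_t1 = sum(t1 for t1, _ in extreme) / len(extreme)
mean_t2 = sum(t2 for _, t2 in extreme) / len(extreme)

print(f"Time 1 mean of extreme scorers: {mean_t1:.1f}")
print(f"Time 2 mean of the same people: {mean_t2:.1f}  (closer to {stable_level})")
```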

Balance Between Internal and External Validity

As you increase 1, you’ll decrease the other


∴ Researchers need to strike a balance between internal and external validity


**If a study has high experimental control and therefore a high level of internal validity, replicate it with less experimental control to support external validity

What affects both internal AND external validity?

1) Experimenter bias – Findings are influenced by experimenter’s expectations




2) Demand characteristics and participant reactivity – Participant acts according to the expectation of the study

Use of Exaggerated Differences

Experimenters often exaggerate the difference between groups in order to see an effect


ie. Compare an anger induction to a happy induction (rather than a neutral condition)

Designing a Research Project

1) Choose research strategy


2) Set out research design


3) Define Procedure

Research Strategies

1) Descriptive


2) Correlational


3) Experimental


4) Quasi-experimental


5) Non-experimental

Research Design

More specific parameters for study:


1) Will you have a group or an individual?


2) Will participants experience all conditions or will different participants experience different conditions?


3) Which variables will you observe/manipulate?

Research Procedure

Detailed outline of exactly what you will do and exactly who will participate




-ie. 100 participants (aged 18 to 25) will be invited to participate....

Descriptive Research

Before you can start explaining why something occurs, you must first be able to show that it even occurs


-ie. show that people "do" before asking "why"




Goal= to describe a phenomenon

Types of Descriptive Research

1) Observational:


– Have to decide how to observe behaviours, how to quantify behaviours you observe and in what settings to observe behaviour




2) Survey: Questionnaires & Interviews




3) Case study

Observational Research Design

Observe & systematically record behaviours forthe purpose of describing that behaviour




**can be used for correlational/experimental research if goal isn't to strictly describe behavoiur




Problem= observations can be subjective

Limiting Observational Disruption in Observational Research Design

1) Conceal yourself




2) Habituation= Allow participants to get used to you & begin recording behaviour after a certain amount of time

Limit the Subjectivity of Observations in Observational Research Design

1) Have specific guidelines beforehand of what you’re watching for


-ie. Pre-existing list of aggressive behaviours that will count as “aggressive”




2) Have multiple observers


**need to establish the reliability of measurement via inter-rater agreement

Methods for Quantifying Observations

1) Frequency method– count the instances of a behaviour during a fixed time interval


-ie. # of aggressive acts during the first 30 mins




2) Duration method – record the amount of time an individual engages in a specific behaviour during a fixed-interval period


-ie. Time spent playing in solitude / 30 mins




3) Interval method – divide the observation period into a series of smaller intervals & record whether a specific behaviour occurs during each interval


-ie. a 30 min observation period could be split into 15 2-minute intervals
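
A minimal Python sketch of all three quantification methods; the event times (minutes into the session) and episode durations are made-up observation data used purely for illustration.

```python
aggressive_act_times = [1.5, 3.2, 3.8, 11.0, 25.4]   # minutes at which an aggressive act occurred
solitary_play_episodes = [4.0, 2.5, 6.0]             # minutes spent in each solitary-play episode

# 1) Frequency method: count acts in the fixed 30-minute period.
frequency = len(aggressive_act_times)

# 2) Duration method: total time spent in the behaviour during the period.
duration = sum(solitary_play_episodes)

# 3) Interval method: split 30 minutes into 15 two-minute intervals and record
#    whether the behaviour occurred at all in each interval.
intervals_with_behaviour = sum(
    any(start <= t < start + 2 for t in aggressive_act_times)
    for start in range(0, 30, 2)
)

print(frequency, duration, intervals_with_behaviour)
```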

Sampling Observations

Take a sample of observations instead of recording everything during complex situations




*(lots going on-becomes difficult to see everything)

Methods of Sampling Observations

1) Time sampling: – Observe during the 1st interval, then record during the 2nd interval




2) Event sampling: – Identify 1 specific event to observe in the 1st interval, then a different event to observe in the 2nd interval




3) Individual Sampling: – Choose 1 participant to observe during the 1st interval, then shift to a different participant for the 2nd interval

Examples of Indirect Observation

1) Content Analysis- examine behaviour/events in literature, movies, television, or other media


-ie. violence in cartoons




2) Archival Research- examine documents/records to measure behaviours or events that occurred in the past


-ie. count # of publications devoted to each anxiety disorder from 1998 to 2008

Types of Observations

1) Naturalistic


2) Participant


3) Contrived

Naturalistic Observation

Observe behaviours in their natural setting




(+)= High external validity


= Useful for behaviours that are unethical or impossible to manipulate




(-)= may need to wait until the desired behaviour/event occurs

Participant Observation

Interacting with participants & becoming one of them


-ie. Joining a fraternity for the purpose of observing behaviour




(-) = Interacting with participants might lead to changes in participant behaviour


= potentially dangerous

Contrived Observations

Setting up situation where the desired behaviour is likely to occur instead of waiting for the desired behaviour to occur


** usually conducted in laboratories (but not always)




(-)= Environment is less natural and may have unnatural consequences on behaviour

Survey Research

Asking directly about behaviours


– Better w/ information that people are more willing to disclose




**can be used for correlational/experimental research if goal isn't to strictly describe behaviour

Types of Survey Research

1) Questionnaires- can be done in person, online, by mail or by phone




2) Interviews- can be done in person or on phone

Organizing Survey Research

1) Develop questions


– What do you want to know about?


– How will you ask it?


– Are there good measures already developed that assess your construct?




2) Organize the order of questions




3) Who will complete the survey?




4) How will they complete it?

Response Format for Survey Research

1) Restricted– respondent chooses from a list of possible answers




2) Open ended- respondent offers own answer


**more common w/ interviews

Types of Restricted Response Questions

1) Dichotomous


2) Ordinal Rating Scale


3) Rating-scale Questionnaires


4) Checklist

Dichotomous

Given two possible choices


-ie. yes/no, male/female, agree/disagree

Ordinal Rating Scale

Given a list & rank order their preferences


-ie. first choice, second choice...

Rating-scale Questionnaires

-Likert scales: 1- strongly agree...




**extremes= anchor


**Problem= participants use the same answer for most questions (may not accurately reflect thoughts if rushed)... use reverse-scored items
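
A minimal sketch of reverse scoring for a hypothetical 1-5 Likert item; the function name and the example responses are illustrative only.

```python
def reverse_score(raw, scale_min=1, scale_max=5):
    # For a 1-5 scale, the reversed score is (1 + 5) - raw score.
    return (scale_min + scale_max) - raw

responses = [5, 4, 2, 5, 1]                   # raw answers to a reverse-keyed item
print([reverse_score(r) for r in responses])  # [1, 2, 4, 1, 5]
```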

Checklist

Given multiple response options and told to select all that apply




ie. Indicate all countries you've travelled to

Considerations While Constructing a Survey

1) Demographic Info


ie. age, sex, ethnic group, marital status




2) Sensitive/embarrassing question


– ie. depression, alcohol, sex, drugs...




3) Appropriate language/vocabulary


-ie. How to ask a 12 year old v. 20 year old

Considerations for Survey Questions

1) Do your respondents have the necessary info/background knowledge to respond to your question?




2) Do your questions have the right specificity?


** too general can cause the information obtained to be difficult to interpret


-ie. start with “what are your thoughts about the Hunger Games" and follow with a rating, whether they would recommend it, etc.




3) Is the wording of questions appropriate?


a) Question clear (ie. medication vs treatment for headache)


b) Un-presumptuous (ie. what is your favourite alcoholic drink vs beer)


c) Appropriately personal (ie. do YOU think that parking is satisfactory)


d) Free of leading- (ie. Quebec separate vs sovereign)


**similar questions= different answers because people want to help your cause



Recruiting Participants for a Survey

Increase likelihood of response by introducing yourself/the topic, making the importance of the topic clear, and emphasizing the importance of their participation




-Mailing lists are available if you want to target a certain group


ie. single females between 30 and 35 who subscribe to the New York Times

Features of Interview Survey Research

a) Questions asked in an open-ended, interview format


b) Conducted one on one


c) Time consuming


d) Small N


e) A lot of detail


f) Allow for opportunity for follow-up questions


g) Necessary for certain populations (kids/ illiterate)

Case Study

The study of one individual for the purpose of obtaining a description of that individual, usually during diagnosis & treatment


-Most used in clinical psychology




*useful for rare phenomena or situations in which experimental manipulation would be unethical


ie. dissociative identity disorder, traumatic accident




- allow for development of theories to explain phenomena/behaviour

Case History

Similar to case study but carried out when no treatment has been offered (yet)




-provide detailed description of a disorder & circumstances that surrounded it

Validity Concerns in Case Studies

1) Low Internal Validity– there may be an abundance of possible causal explanations for the observed behaviour


ie. the speech difficulty may have been present regardless of the neglect




2) Low External Validity– Would this same pattern of behaviour be observed with others?


ie. would similar neglect of another child cause the same deficits?

Correlation

The relationship between 2 or more variables




Can be:


1) Bivariate or multivariate


2) Positive or negative




Offer a description of the direction & degree of a relationship NOT an explanation for it




Visualized in a scatterplot


-closer a point is to the line of best fit ("regression line"), the stronger the association





Scatterplots

Used to visualize a correlation


-shows the degree & direction of association between 2 continuous variables




1) X-Axis– Variable that the researcher believes does the predicting (similar to IV)


2) Y-Axis – Variable that the researcher believes gets predicted (similar to DV)




Closer a point is to the line of best fit "regression line", the stronger the association
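
A minimal plotting sketch, assuming matplotlib and NumPy are available; the hours-studied/GPA numbers are made up, and np.polyfit is used here only to draw a line of best fit through the points.

```python
import numpy as np
import matplotlib.pyplot as plt

hours_studied = np.array([1, 2, 3, 5, 6, 8, 10])      # hypothetical predictor (X)
gpa = np.array([2.1, 2.4, 2.8, 3.0, 3.3, 3.6, 3.9])   # hypothetical criterion (Y)

plt.scatter(hours_studied, gpa)                        # one point per participant

# Line of best fit ("regression line"): the closer the points sit to it,
# the stronger the association.
slope, intercept = np.polyfit(hours_studied, gpa, 1)
plt.plot(hours_studied, slope * hours_studied + intercept)

plt.xlabel("Hours studied (predictor, X-axis)")
plt.ylabel("GPA (criterion, Y-axis)")
plt.show()
```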

Information Gained from a Scatterplot

1) Association- is there one and how strong?


2) Direction- if association is positive or negative


3) Linearity– is association linear or non-linear?


4) Presence of Outliers- data point that stands apart from the pack


– Outliers affect the magnitude of the relationship


**can identify an outlier and remove it

Correlation Coefficient

1) Pearson’s correlation (r)


2) Spearman’s correlation (ρ)

Pearson’s Correlation (r)

A bivariate statistic that measures the degree of linear association between 2 quantitative variables.

Spearman’s correlation (ρ)

A bivariate statistic that measures the degree of monotonic (rank-order) association between 2 variables where at least one is ordinal
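
A minimal sketch, assuming SciPy is installed, computing both coefficients on made-up anxiety/exam-score data (the variable names are illustrative only).

```python
from scipy import stats

anxiety = [12, 8, 15, 20, 5, 9, 17]
exam_score = [60, 75, 55, 40, 88, 70, 50]

r, p_r = stats.pearsonr(anxiety, exam_score)        # linear association
rho, p_rho = stats.spearmanr(anxiety, exam_score)   # rank-order (monotonic) association

print(f"Pearson r = {r:.2f} (p = {p_r:.3f})")
print(f"Spearman rho = {rho:.2f} (p = {p_rho:.3f})")
```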

Cohen's Suggestion

Used to judge the strength of a correlation if the research area does not suggest otherwise:



Weak= 0 to .29


Moderate= .30 to .49


Strong= .50 to 1.00
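
A minimal helper that applies these benchmarks to the absolute value of r; the function name is illustrative, not a standard API.

```python
def correlation_strength(r):
    r = abs(r)          # direction does not matter for strength
    if r < 0.30:
        return "weak"
    elif r < 0.50:
        return "moderate"
    else:
        return "strong"

print(correlation_strength(0.42))   # "moderate"
print(correlation_strength(-0.65))  # "strong"
```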

Point-biserial Correlation

Where one variable is interval or ratio, & the other variable is nominal with only 2 categories


-ie. Is there a relationship between gender & self-reported physical aggression?

Statistical Significance

Determine whether or not your correlation is statistically significant


-ie. if relationship is "real" or not




Want to show that change in the DV is due to the IV manipulation, not chance


...significance tells you whether it is likely or unlikely that the result came about by chance

Determining Statistical Significance from Table

1) df = N – 2 (**if given N, look 2 rows below it in the table)


**round down if not specifically given




2) Use the .05 column (usually)




3) If the r value is greater than the one in the table= statistically significant


**the bigger it is= the more likely it is caused by IV manipulation and not chance
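
A minimal sketch, assuming SciPy, that reproduces the table lookup by computing the critical r for alpha = .05 (two-tailed) from the t distribution; N and the observed r below are made-up values.

```python
import math
from scipy import stats

N = 30                       # hypothetical sample size
r = 0.42                     # hypothetical observed correlation
df = N - 2                   # degrees of freedom for a bivariate correlation

t_crit = stats.t.ppf(1 - 0.05 / 2, df)           # two-tailed critical t at .05
r_crit = t_crit / math.sqrt(df + t_crit ** 2)    # critical r for this df (~ the table value)

print(f"df = {df}, critical r = {r_crit:.3f}")
print("statistically significant" if abs(r) > r_crit else "not significant")
```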

Shared Variance

Reflection of what percentage of change in one variable can be accounted for by a change in the other variable


-shared variance= "percent change accounted for"




Calculation= r² (expressed as a %)
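
A one-line worked example of the calculation (r = .50 is a made-up value):

```python
r = 0.50                    # hypothetical correlation coefficient
shared_variance = r ** 2    # proportion of variance accounted for
print(f"{shared_variance:.0%} of the variance is shared")  # 25%
```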

Correlation vs Causation

Correlation does not necessarily imply causation




BUT if we know that 2 variables are related, then we can begin to make predictions:


a) if hours of studying & GPA= correlated, could make predictions about 1h, 5h, 10h etc...



Prediction vs Regression

Regression= correlation




Difference= language:


a) A associated w/ B= correlation


b) A predicting outcome of B= regression

Variables in a Regression

1) Predictor variable (IV/X): – Variable that the researcher believes influences change in the other variable




2) Criterion variable (DV/Y): – Variable that the researcher believes is being influenced by changes in the other variable
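
A minimal sketch, assuming SciPy, that fits a regression line and uses it to predict the criterion (Y) from the predictor (X); the data are made up for illustration.

```python
from scipy import stats

hours_studied = [1, 2, 5, 7, 10]      # predictor variable (X)
gpa = [2.0, 2.3, 3.0, 3.3, 3.8]       # criterion variable (Y)

result = stats.linregress(hours_studied, gpa)

# Predict GPA for someone who studies 6 hours.
predicted = result.intercept + result.slope * 6
print(f"slope = {result.slope:.2f}, r = {result.rvalue:.2f}, prediction = {predicted:.2f}")
```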

Other Uses for Correlation Tests

1) Test-retest reliability


-ie. expect to find a statistically significant, positive, and strong correlation between scores from Time 1 and scores from Time 2




2) Concurrent/convergent Validity


-ie. expect to find a statistically significant, positive, and strong correlation between scores from the new measure & another measure of that construct (established measure or other)




3) Divergent validity


-ie. expect to find a statistically significant, negative, & strong correlation between scores from one measure & scores from a measure that assesses an opposing construct (e.g., anxiety vs relaxation scores)

Experimental Research

Main goal of an experiment is to establish a causal relationship


-ie. manipulation of one variable (IV) to observe its effects on another variable (DV) while holding other extraneous or confounding variables constant





2 Types of Experimental Research

1) Between groups


2) Within groups

Between Groups Experimental Research

Randomly forming 2 or more groups from a pool of subjects/participants




Each group receives a different experimental treatment/condition (level of the IV) & the groups are compared




ie. Effect of advisory message


– Condition A: With warning, Condition B: No warning


- DV: Desire to see the movie

Within Groups Experimental Research

There is only 1 treatment group & each subject/participant is given all levels of the IV


-make before/after comparison between scores obtained at different levels of the IV for the same participants




ie. Diagnosis of cancer on emotional well being


Measure before and after treatment

4 Basic Elements of Experimental Research

1) Manipulation- manipulation of the IV by the experimenter


2) Measurement- measure of the DV by the experimenter


3) Comparison- comparison of DV from different conditions


4) Control- Experimenter controls the confounding variables

Critical components of Experimental Research

1) Independent and dependent variables


2) Experimental and control groups

Independent Variable

Believed to be the cause ...manipulated by the experimenter




Will have at least 2 levels (treatments) of the IV.




Can be experimental (researcher assigns participants to conditions) or non-experimental (participants already belong to their conditions)

Factorial Design

Research with more than one IV


-ie. Effects of diet (IV #1) & exercise (IV #2) on weight

Dependent Variable

The variable that you measure in response to manipulating the IV


- measured for each level of the IV and then compared across conditions

A Multivariate Study

When the study has more than 1 DV

Groups in Experimental Research

1) Experimental group= participants exposed to the manipulation.




2) Control group= participants that are not exposed to the manipulation and are used for comparison purposes.

Extraneous vs Confounding Variable

1) Extraneous variable = variables introduced in the experiment that have no effect on IV or DV


**ie. if noise occurs across all treatments




2) Confounding variable= an extraneous variable that varies w/ the IV & therefore can affect the DV


** threaten internal validity (unclear whether the IV or CV caused change in the DV)


** ie. if noise is present during only one treatment

Identifying Confounding Variables

1) Common sense


2) Previous experience


3) Previous research


4) Known confounds

Types of Known Confounds

1) Within-groups


– History


– Maturation


– Repeated testing


– Instrumentation


– Regression to the mean



2) Between-groups


– Assignment bias


– Attrition


– Compensation

Confounding Variable due to Within-group History

External events occurring between the 1st & 2nd testing session



**The longer the time interval between Time 1 & Time 2, the more opportunity for outside events to occur


(ie. School program to reduce racism among high school students: Time 1 = August 30th 2011, Time 2 = September 30th 2011)

Confounding Variable due to Within-group Maturation

Systematic change in participant’s psychology or physiology that is not related to the treatment




**More likely to occur w/ elderly & children




ie. physical growth, cognitive development, wisdom, etc...

Confounding Variable due to Within-group Testing Effects

Possible effects of the pre-test on the post-test (the same measure taken at 2 different times).




** Scores on the test may be partially due to experience




Note: Just because you have a pre-test does not necessarily mean that it will affect your post-test. It just means that it could.

Confounding Variable due to Within-group Instrumentation

Changes in characteristics of a measurement over time




-ie. more precise

Confounding Variable due to Within-group Regression to the Mean

The tendency for an extreme score to be followed by a less extreme score

Confounding Variable due to Between-group Assignment Bias

Bias introduced when groups are noticeably different on certain characteristics.

Confounding Variable due to Between-group Attrition

Loss of participants from 1 or more groups of a study




OK if it is affecting all groups equally but problematic if effect is unequal




**often differential= participants w/ certain characteristics are lost


...who dropped out, why, and when the drop-out occurred is important information

Confounding Variable due to Between-group Compensation

Untreated individuals or groups learn about treatment received by others & want similar treatment




**battle this by stressing importance of a control condition and assuring that if successful, they’ll get the treatment

Effect of Experimental Control

Experimental control means ensuring that extraneous variables don’t become confounding variables





Methods of Protecting Against Confounding Factors

1) Remove them


-ie. warning the participant of the possibility to remove the "surprise" factor




2) Hold them constant


-ie. keep constant across treatments


*can also be done by keeping participant variables constant BUT= limits external validity




3) Use a placebo control


-Remove confounding effect of experimental method itself




4) Match them


- Identify potential confound & ensure that it is equal across groups


(ie. male to female ratio represented in breakdown of groups)




5) Randomize them


Assign participants randomly to conditions in hopes that potential confounds are also randomly assigned, will equal out, and no longer confound (see the sketch below)


**larger N= more successful
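
A minimal sketch of random assignment: shuffle a hypothetical participant pool so that potential confounds are spread across conditions by chance; the pool size and condition labels are made up.

```python
import random

participants = [f"P{i:02d}" for i in range(1, 21)]   # hypothetical pool of 20 participants
random.shuffle(participants)                          # randomize the order

condition_a = participants[:10]   # e.g. warning-message condition
condition_b = participants[10:]   # e.g. no-warning condition
print(condition_a, condition_b, sep="\n")
```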

Placebo Effect

Psychosomatic effect that causes a condition/treatment to yield results without an active component

Types of Placebo

1) Psychotherapies- Actual therapy vs. nonspecific therapy


2) Surgeries- Real surgery v. mock surgery


3) Medications- Meds v. sugar pills


4) Food- Energy bar v. non-energy bar


5) Drink- Alcoholic v. non-alcoholic drink

Gaining Experimental Control

Ensuring that confounds are not varying with the IV




**yields a high degree of internal validity




Certain methods of gaining experimental control will decrease your external validity


ie. Matching & Holding confounds constant

Between Group Design

2 or more groups of participants are formed at random


- Each group receives a different level of the IV & the groups are compared




**because each participant= one treatment= one score


-furthermore scores are independent of each other



Problem with Between Group Designs

People are different... they are going to respond differently to the same treatment/manipulation




1) Assignment bias- participants in condition A are older than those in condition B


2) When individual differences create too much variability within groups= masks real effects

Solution to Individual Differences in Between Group Designs

FREE YES :)

Unsystematic Variance

Variation within a treatment group




**within-group variability should be relatively equal across groups


...many statistical tests assume that you have equal variability within groups




BUT amount of unsystematic variance within a group can vary between groups




**want to minimize within group variance as much as possible

Systematic Variance (F)

Variation between groups


WANT between-group variability/ systematic variance (F) to be as high as possible- it reflects the effect of manipulating A (the IV) on B (the DV) (vs chance)


**want the AVERAGE scores of the groups to be different


Implication of Within-Group vs Between-Group Variability

Want to minimize within group variability and maximize between group variability




Why? The bigger the between-group variability compared to the within-group variability, the greater the chance that your group difference is statistically significant
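
A minimal sketch, assuming SciPy, of the logic behind F: a one-way ANOVA compares between-group variability to within-group variability. The two groups of scores below are made up for illustration.

```python
from scipy import stats

group_a = [12, 14, 13, 15, 14]   # e.g. scores in the treatment condition
group_b = [18, 20, 19, 21, 20]   # e.g. scores in the control condition

f_value, p_value = stats.f_oneway(group_a, group_b)
print(f"F = {f_value:.2f}, p = {p_value:.4f}")

# A large F (big between-group difference relative to within-group spread)
# makes a statistically significant group difference more likely.
```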