62 Cards in this Set

Stanford Prison Study
- Geneva Convention concerns: humiliation, homophobia, sanitation influence.
- Balance costs with benefits.
- What risks to list: participants did not know, gave up control to the guards; ate dirty food, were denied the right to leave; a guard plot to kidnap. Different standards in the '70s.
Stanford Prison Study
- Are effects OK if positive?
- Is it possible to have no effects?
- How to debrief prisoners? Unanticipated risks: standard debriefing may help.
- Responsibilities of other citizens? Ethically?
- Adverse event: must contact the IRB within 24 hrs for help.
- Today: asking about childhood is considered risky. (Just a coping mechanism?) - No effects: yes, for some types there are no long-term effects; but the study manipulated emotional well-being and elicited emotions. Must best protect against (cannot eliminate) risks; consult the IRB about more possible risks; balance. Difficult to give control to participants; few guidelines for guards. Must remain in control of the experiment.
Surveys: Administration and Items.
A) Ways to administer surveys (generally about participants themselves vs. others)
Demographics
- Basic background information asked on surveys, such as age, gender, race, and educational level; used to describe the participants in your study.
Rapport
- A friendly understanding and comfort in the relationship between an interviewer and participant.
Surveys cont.
- More private: paper-and-pencil measures, online. Problems: not reading questions carefully, skipping, or misunderstanding them; unfamiliar vocabulary; cannot ask for clarification.
- Efficient: group administered.
- Interviews: make sure to get (clear) answers. More rapport (nonjudgmental; more info, but the interviewer must be skilled in receiving sensitive info).
B. Types of Survey Questions
1. Open-ended questions
- Free-form, not limited.
- Short or long answers received (variation).
- As with low-constraint research, you learn from it.
- Difficult to code, i.e., to turn the varied contents into numbers/empirical data. Difficult to process as data.
- Informative.
2. Closed-ended questions
- Possibilities chosen in advance. - Yes/no or true/false.
- Categorical or multiple choice (multiple answers or choose one).
- Likert (scale, usually 1-5, usually disagree vs. agree).
- Useful data with the right questions; MUST come up with all the answers, or the 'other' category becomes open-ended. - Complete with outside help.
C. Constructing good survey questions
1. using unbiased language in your questions
(Valid/reliable; adequate representation; do not unintentionally push for a certain answer.) 1) Ex. push polls (the answer is in the question; social desirability; word choice: 'Control,' 'Unfair'; labeling).
2. using neutral toned statements or questions
- Ineffective questions vs. better ones. - Shocking? Need to disagree? Ex. 'Are only children selfish?' (negative) vs. 'Are only children just as altruistic?' (positive). No rapport: all answers can go toward one end of the distribution. Bad for statistics/the study.
3. avoiding response sets
4. making room in your scale for a variety of opinions *Give permission to answer negatively.
- Answering in the same way without reading carefully.
-> REVERSE items: shift wording from high to low (see the sketch after this list).
- People do not like extreme answers; give them room to be honest without feeling bad. Likert: most answers land in the middle; yes/no: people do not want to look bad, so not good for socially desirable questions, ex. 'I would cheat...'
5. asking one question at a time
- Ex. 'Do they dress and act more sexually?' Participants will give a neutral answer if one part is yes and the other is no.
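The REVERSE items trick (item 4 above) is mechanical enough to sketch in code. A minimal Python example; the 1-5 Likert scale and the responses are made up:

    # Reverse-score a 1-5 Likert response: 1 <-> 5, 2 <-> 4, 3 stays 3
    def reverse_item(score, scale_max=5):
        return (scale_max + 1) - score

    # Answers to a reverse-worded item; a participant who circled 5 for
    # everything without reading would score 1 here after reversal
    responses = [5, 4, 3, 2, 1]
    print([reverse_item(r) for r in responses])   # [1, 2, 3, 4, 5]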
A. Identifying your population
B. Sample:
A. Population: every person in the category. Impractical to survey.
B. Sample: a subset of the population that accurately represents the population. External validity.
C. Choosing a sample
1. nonprobability or haphazard sampling
1) Ex. the first 40 people at the chosen location. Fast. A good setting yields OK results. An interested population responds. Problems: time of day, excluding people in various ways, region; over-/underestimating?
2. simple random sampling
3. stratified random sampling
2) Everyone in the population has an equal chance of being selected, whether systematically (every fifth person on a list) or randomly (names from a hat). Accurate for large groups; sometimes imperfect for the issue of interest or for men vs. women... 3) Percentages in the sample represent those in the population; divide into groups along the dimensions of interest and sample randomly until, say, 51% women and 49% men. Ex. exit polls stratified by a district's voting trends: Republican vs. Democrat.
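A minimal Python sketch of simple random vs. stratified random sampling (the 51%/49% population below is invented to match the example):

    import random

    # Hypothetical population: 510 women, 490 men
    population = [("woman", i) for i in range(510)] + [("man", i) for i in range(490)]

    # Simple random sampling: every member has an equal chance of selection
    simple = random.sample(population, 100)

    # Stratified random sampling: sample within each stratum until the
    # sample percentages mirror the population (51 women, 49 men)
    women = [p for p in population if p[0] == "woman"]
    men = [p for p in population if p[0] == "man"]
    stratified = random.sample(women, 51) + random.sample(men, 49)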
D. Biased sampling
1. refusal/response rates
2. limiting factors
Not representative. 1) Refusal may be systematic, ex. lazy people do not want to respond to surveys; monitor reasons for not responding; a low response rate requires reflection. 2) Where the list was acquired. Ex. people who only use cell phones (no directory), a problem for political polling; no phone service; people at work; people with caller ID; a Newsweek list (liberal magazine) vs. US News and World Report (conservative). Location.
I. 1. Correlational vs. Differential Designs
A. Correlational: less constraint than differential and experimental designs. 1) Correlational: two or more variables; sometimes the IV and DV are interchangeable; generally continuous (ordinal, interval, ratio). Not manipulating variables; they relate naturally; strength and direction of the correlation.
2. differential design
3. Differences:
2) A type of correlational design; quasi-experimental; existing relations between two or more groups; usually NOMINAL (IV) and interval/ratio (DV); neither design gives causality; not manipulating, but you do identify the IV and DV in advance. 3) Control procedures, more careful group selection, and context.
D. When to use these designs vs experimental?
- Correlation vs. causality: which matters more? Ex. hybrid-car buyers and Alaskan drilling: do they donate to both organizations? The cause is unimportant. - Correlational when you cannot manipulate the IV; differential when the IV is categorical; narrow down the issue; more external validity (even if causal in the lab); may complement experimental research, which has internal validity.
II. Cautions and Pitfalls with Correlational and Differential Designs
A. Correlation does not equal causation
- Cannot usually use the words 'effect'/'affect' (say 'associated with,' 'predicts'). 1) The third-variable problem; there may be multiple 'third' variables. Cannot determine the difference in correlational research. A third variable, C, causes both A and B, and therefore makes A and B look like they're directly related. We may not know how to measure "C"; perhaps a particular confluence of third variables.
2. selecting the appropriate comparison group
3. confounds
2) Ex. divorced parents vs. what comparison group? (Is the effect due to the number of parents, loss of a parent, absence of the father as a role model...?) Can choose several groups for multiple competing hypotheses; this will still not completely narrow down the inferences. 3) Confounds: anything interfering with internal validity; variables that vary together. Does the measure represent just your construct? C may actually cause B; they tend to occur together. But Variable A and Variable C tend to occur together (they are related to each other, and thus confounded).

When we are predicting from Variable A to Variable B, if we haven’t measured Variable C and included it in our model, we can’t rule out that B was actually caused by C, not by A.
Ex. news-reported studies are often not experimental research.
III. "Developmental" Designs
A. 1. Cross-sectional studies
2. cohort effects
- Change over time, for any type of data collection, not just surveys. A. 1) Different ages at the same time (10, 15, 20, all on Oct 30): a cross-section of ages (IV), compared on the DV. Cannot prove, but can make inferences; association; a form of differential research. 2) Cohort: born at the same time; different societal experiences may result in psychological differences. Ex. the Cold War and nuclear-war risks (age is not the cause, the time period is). Differential experiences come with ages.
B. Longitudinal studies
- More info about (systematic?) changes over time. - Changes within and across individuals, same cohort. - Not causal but correlational. - More insight into the order of development. Ex. day care and later behavior problems (not causal even if later in time): such parents may monitor their children less; can connect problems/issues over time. - Expensive, difficult to maintain; confounds, third variables.
Correlations, Part I
I. Relations Between Variables
A. Examining more than one variable
B. What is correlation?
B) A statistic indicating the DEGREE of relation and the direction of relation between variables: whether they are systematically related. Also the foundation for regression -> score on x as a predictor of score on y.
II. Seeing Correlations: Scatterplots and Linearity
B. Linear correlations
1. positive relation
2. negative relation
- One point for each participant (on variables x and y); scatterplot: the series of points for the study. B. 1) High relative to the mean on x, also high relative to the mean on y (LL, HH). 2) Above the mean on x in a negative correlation means below the mean on y (LH, HL).
3. no relation
C. Curvilinear correlations
3) The score on x is not related to the score on y. Horizontal. C) At least one curve; different rules apply. Still a systematic correlation.
III. Calculating Correlations: Pearson’s r
Variance:
- Interval or ratio data (ordinal and nominal data use different types of statistics). - Variance: how variable x varies across the distribution. Variance = Σ (X – M)(X – M) / N
Calculating Correlation: Pearson's r (interval or ratio data). Covariance:
Σ (X – Mx) (Y – My)
--------------------
N

How x and y vary across their distributions, and the relation between x and y: how x and then y deviate around their means.
Covariance:
A positive product for pos × pos or neg × neg deviations. Varying greatly from the mean on both gives a high number; take the average product of the deviations. Gives DIRECTION and MAGNITUDE; not standardized, but on the scale of the variables; not informationally succinct => standardize.
Standardize Covariance => Correlation:
(r ranges from -1 to 1; 0.93 is VERY strong)

      Covariance
r = --------------
     (SDx)(SDy)

or, multiplying then adding:

         Σ (X – Mx)(Y – My)
r = --------------------------------
    √( [Σ (X – Mx)²] [Σ (Y – My)²] )
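A minimal Python sketch of these formulas, computing the covariance and then standardizing it into Pearson's r (the x and y scores are made up):

    import math

    x = [2.0, 4.0, 6.0, 8.0]    # hypothetical scores on variable x
    y = [1.0, 3.0, 5.0, 9.0]    # hypothetical scores on variable y
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n

    # Covariance: average product of deviations (direction and magnitude,
    # but still on the raw scale of the variables)
    cov = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / n

    # Standardize by the SDs to get r, which ranges from -1 to 1
    sdx = math.sqrt(sum((xi - mx) ** 2 for xi in x) / n)
    sdy = math.sqrt(sum((yi - my) ** 2 for yi in y) / n)
    r = cov / (sdx * sdy)
    print(round(r, 2))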
Giving meaning to correlations.
A) Direction (pos, neg) and Magnitude
B) Statistical significance
B) Our sample as compared to the population: how the variables relate to each other in the world; the probability that the relation may be smaller in the population (p for probability, < 5%).
Correlation Magnitude Guidelines (.9 is very rare, unless confounded...)
small magnitude correlation r = .10
medium magnitude correlation r = .30
large magnitude correlation r = .50
2) Expressing significance:
p is 'less than' versus 'equal to' (the rules are changing); p < .05 means less than a 5% chance of error and indicates nothing about importance; never use 'significant' unless you mean statistically significant.
C) Proportionate Reduction in Error
- Gives meaning and importance to correlations. - Correlations are NOT on a ratio scale (positive and negative of the same magnitude are equally strong, and .6 is not twice .3); square the correlations and they become ratio-scaled. PRE = correl²; ex. (.6)² = .36 vs. (.3)² = .09. Necessary to compare correlations to each other.
II. Correlations in Research Articles A) Text reports
- Usually Pearson's r (others are rarer); r(n), where n = # of data points/participants/groups; r(n) = .XX (between -1 and +1); r(125) = .38, p = .02 (significance level); whether .38 is significant depends on n.
Tables: Sample correlation matrix (underlined)

Table 1
Intercorrelations Between Scales (italics)

Scale         1       2     3   4   5
1) Pos resp   -
2) Q of A   .69***    -
3) Anger

+p < .10, *p < .05, **p < .01, ***p < .001
Dashes = a scale's correlation with itself (1); the matrix would be symmetric, so only half is filled in (the correlation between 3 & 4 = 4 & 3); significance shorthand vs. writing out p; the note is always the same; +p < .10 means marginally significant/approaching significance; not significant: no marker; affected by the number of participants.
III. Funny Features of Correlations
A. Restriction in Range
- (If one of the variables does not vary much around its mean, the correlation will be close to ZERO.) Low SD and small deviations -> small correlation.
- Variables MUST vary for a substantial, significant correlation. Accidentally set up the study this way? Ex. a test administered only to hired applicants with the highest scores; SAT and GPA, where those with the lowest scores do not get into college (a minor restriction). Survey questions (vs. extreme ones) where participants are unlikely to spread out: the correlation will be close to zero, so pretest survey questions. *Magnitude, not direction, will decrease.
B. Attenuation
C. Rank ordering, not absolute stability
B) Attenuation: reduction in magnitude due specifically to (un)reliability of measures. Error is random noise on the construct: imprecise; unreliability reduces the correlation. Must improve the measures; same outcome as restriction in range. C) Especially for longitudinal research: stable over time? A correlation does not indicate this: participants are only rank-ordered relative to the mean; are they varying together? We do not know if participants are staying the same. All participants may decrease in reaction time at the same rate relative to the mean, for example, and still yield a high correlation. Must interpret it as changing in the same way; ex. height stable relative to the peer group vs. not growing. No absolute stability from a correlation.
Regression/Prediction
I. Regression Analysis: Why Predict?
A. Testing expectable patterns
- Also called prediction; interval or ratio scales. - An equation represents the linear relation between x and y. A) How helpful is knowing x to knowing y? Not necessarily causal; usually correlational. Leads to more complex questions and experiments.
B. Practical applications
II. Basic Concepts in Regression
A. Bivariate prediction
Have x, need the score on y -> estimate y. Ex. grades in high school and grades in college: universities use x to predict y based on previous data from BOTH variables (so they have the equation). II. A) One IV, one DV; collect data on both to figure out how to describe the scatter of the data. Fit a regression line to the scatter plot; the line does not go through the origin; generally a nonzero y-intercept.
B. Regression formula ("Linear Prediction Rule")
-- Ŷ = a + bX; X is the IV.
-- b is the regression coefficient; also the slope; we compute b from sample data on X and Y.
-- a is the regression constant; the intercept; computed from X and Y.
-- Ŷ is the predicted value of the DV: an estimated value, not an observed value; we compute Ŷ from X, b, and a. The dots that fall on the line are the best predictions.
III. Computing Regression (b) [for every unit of change in x, how many units of change in y?]
A. Raw score regression equation
b = rxy (SDY/SDX)
b is the correlation between x and y times the SD of Y divided by the SD of X; it takes into account the variability of Y and X. Correlations go through the origin, but b adjusts for the scales. Equivalently, b is the sum of the deviation products for x and y divided by the sum of the squared deviations for X.
Computing the regression constant/intercept (a):
a = MY – (b)(MX)

Uses the slope of the line and the means. Plug in X to get Ŷ (Ŷ = a + bX), even for a value of X not occurring in the sample (within its range).
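A minimal Python sketch of the raw-score computation above: b from the deviation products, a from the means, then Ŷ for a new X (the data are made up):

    x = [1.0, 2.0, 3.0, 4.0, 5.0]    # predictor (IV) scores
    y = [2.0, 4.0, 5.0, 4.0, 7.0]    # criterion (DV) scores
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n

    # b: sum of deviation products over the sum of squared X deviations
    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))

    # a: regression constant (intercept)
    a = my - b * mx

    # Plug in any X within the sampled range to get Y hat
    y_hat = a + b * 3.5
    print(b, a, y_hat)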
A) Regression with standardized variables vs. the raw-score regression equation.
If y and x were z-scores, b would be a correlation, and a = My – (b)(Mx) = 0 – b(0) = 0.
Regression Formula for Standardized Variables:
Graphing a regression line: do not use zero for x (pick values within the sampled range); the line should cross the y-axis at a. The slope should be positive if b is positive.
ZhatY = (β) (ZX)

Z-score all the x and y values; the dots change location and the line is forced through the origin; the slope is the correlation between x and y; the predicted z-score for y is the correlation (beta) times Zx. Beta -> direction and strength of variation; equals r (for bivariate); -1 <= beta <= 1.
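A short follow-up sketch: z-score both variables and refit; the slope (beta) now equals Pearson's r and the intercept is 0 (same made-up data as above):

    import statistics

    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [2.0, 4.0, 5.0, 4.0, 7.0]

    def z_scores(v):
        # Population-style SD (divide by N) to match the formulas above
        m = sum(v) / len(v)
        sd = statistics.pstdev(v)
        return [(vi - m) / sd for vi in v]

    zx, zy = z_scores(x), z_scores(y)

    # For standardized variables the slope is beta = r, and a = 0
    beta = sum(zxi * zyi for zxi, zyi in zip(zx, zy)) / len(zx)
    print(beta)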
Regression/Prediction Part II
I. Error in Prediction
A. Estimating and Inaccuracy
B. Sum of Squared Errors: Error in comparison to the regression line
A) Other factors are involved; the prediction will be wrong to some extent; measurement error; if y is off, a is off, etc.
B) Random variation. 1) HOW MUCH error? 2) ACCURACY of prediction (predicted vs. actual values of y?)
Error when predicting Ŷ = a + bX (as if everyone scored on the regression line):

Error = Y – Ŷ; Error² = (Y – Ŷ)²; SSerror = Σ (Y – Ŷ)²

- The predicted value of y differs in each case; for every value of x, Ŷ changes. Compute Ŷ for every data point. The errors sum to zero, so square them: SSerror. (How far off? How good an estimate is the line? Not standardized.) Relative error: compare to not knowing x but approximately knowing y, i.e., compare to the mean.
Error when predicting Ŷ = My (everyone scores at the mean):

Error = Y – My; Error² = (Y – My)²; SStotal = Σ (Y – My)²

(If we guessed around the mean, how far off? The sum of the squared deviation scores for y.)
Proportionate Reduction in Error
(The improvement from estimating with the regression line rather than the mean, My.) (SSerror is usually less than SStotal: knowing x vs. knowing nothing; improving by estimating with the regression line vs. the mean.)
SStotal – SSerror
PRE = -----------------
SS total
Total Variance in Y
Percent of Variance in Y Accounted For by X (of the total, 100%, of the variance, how much of the differing can we explain/predict?). Not necessarily a percent of people, but of discrepancies. PRE is between 0 and 1.
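A minimal Python sketch of SSerror, SStotal, and PRE, continuing the made-up regression data from earlier:

    x = [1.0, 2.0, 3.0, 4.0, 5.0]
    y = [2.0, 4.0, 5.0, 4.0, 7.0]
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n

    b = (sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
         / sum((xi - mx) ** 2 for xi in x))
    a = my - b * mx

    # SSerror: squared errors around the regression line (predicting with x)
    ss_error = sum((yi - (a + b * xi)) ** 2 for xi, yi in zip(x, y))

    # SStotal: squared errors around the mean (predicting without x)
    ss_total = sum((yi - my) ** 2 for yi in y)

    # PRE: proportionate reduction in error (equals r squared for bivariate)
    pre = (ss_total - ss_error) / ss_total
    print(pre)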
Multiple Regression Equation
Ŷ = a + b1X1 + b2X2 + b3X3...
Ex. height, age, and gender predict weight. Multiple IVs for one DV (limit the number of IVs by sample size; usually 2-4). Simultaneous equations for one line; b is calculated differently than in simple regression. *A different regression coefficient per variable, but still one intercept. Explains more of the total variance, but the IVs overlap: x1 and x3, x2 and x3, etc.
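A sketch of a multiple regression fit using numpy's least-squares solver; the height/age/weight numbers are invented for illustration:

    import numpy as np

    # Hypothetical data: two IVs (height, age) predicting one DV (weight)
    height = np.array([60.0, 64.0, 68.0, 72.0, 66.0])
    age = np.array([22.0, 30.0, 24.0, 35.0, 28.0])
    weight = np.array([120.0, 140.0, 160.0, 185.0, 150.0])

    # Design matrix: a column of 1s for the intercept a, then one column per IV
    X = np.column_stack([np.ones_like(height), height, age])

    # Solve for [a, b1, b2] by minimizing the sum of squared errors
    coef, *_ = np.linalg.lstsq(X, weight, rcond=None)
    a, b1, b2 = coef
    y_hat = a + b1 * height + b2 * age   # predicted weights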
C. Multicollinearity
When the IVs are all too highly correlated with each other: they all want the same piece of the pie. Cannot conduct a multiple regression analysis if the correlations are too high. One reason for correlation matrices is to show there is no problem with multicollinearity: show you are predicting from DIFFERENT constructs.
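One way to run the screening the notes describe is to inspect the correlation matrix of the IVs before fitting; a quick numpy sketch with the same invented IVs:

    import numpy as np

    height = np.array([60.0, 64.0, 68.0, 72.0, 66.0])
    age = np.array([22.0, 30.0, 24.0, 35.0, 28.0])

    # Off-diagonal entries are the pairwise r values between IVs;
    # values near +/-1 flag multicollinearity
    r_matrix = np.corrcoef([height, age])
    print(r_matrix)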
D. Measures of Error and Prediction
1. Multiple R
(Similar to PRE.) 1) The correlation between all of the IVs and the DV at the same time: a multiple correlation. Always POSITIVE, from 0 to 1; the higher the better; NOT DIRECTIONAL.
2. R²
2) Squared multiple R. Calculated differently, but still the proportion of variance in y accounted for by x; 0 to 1; higher is better. Also shown in simple regression.
I. What Is Probability?
A. Science and certainty
B. Different interpretations of probability
1. Common usage: Subjective interpretation of probability
2. Statistical usage: Expected relative frequency
Science never makes precise/point predictions; never certain/exact. We never prove anything, but support findings within margins. B. 1) Intuition, personal experience. 2) The likelihood over the course of many trials that a particular outcome will occur, taking into consideration the total number of possible outcomes.
II. Calculating Probability
A. Calculating single event probability
A) p(successful outcome A) = # of possible successful outcomes / # of possible outcomes. Ex. a complete deck with jokers has 54 cards and 13 diamonds, so p(diamond) = 13/54.
B. Multiple event probability
1. Probability of either A or B occurring
2. Probability of both A and B occurring
Class examples:
1) p(A or B) = p(A) + p(B). The events must be mutually exclusive, unable to occur together, vs. a diamond and a three (which can co-occur). 2) p(A and B) = p(A) × p(B), for independent events where one does not affect the likelihood of the other. Ex. two aces in a row, assuming REPLACEMENT to restore the original condition. Ex: not EXPected relative frequency (long-run expected frequency) but relative frequency (the real number of occurrences).
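A minimal Python sketch of the two rules using the card examples above (54-card deck with jokers):

    # Single event: p = successful outcomes / possible outcomes
    p_diamond = 13 / 54              # drawing a diamond

    # 'Or' rule (mutually exclusive events): p(A or B) = p(A) + p(B)
    p_heart = 13 / 54
    p_diamond_or_heart = p_diamond + p_heart   # a card cannot be both suits

    # 'And' rule (independent events, with replacement): p(A and B) = p(A) * p(B)
    p_ace = 4 / 54
    p_two_aces = p_ace * p_ace       # draw an ace, replace it, draw again
    print(p_diamond, p_diamond_or_heart, p_two_aces)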
C. Wrong beliefs regarding probability
1. Law of large numbers
2. Gambler’s fallacy
1) The law of large numbers, misinterpreted: belief in a 'law of small numbers'; ex. expecting studies to replicate with small groups. 2) Fate, fairness, a 'conscientious' coin vs. independent events; the chances of winning do not increase.
Review of samples and populations
A) Representing the population with samples
Haphazard/non-probability sampling; random/probability sampling, very systematic; stratified random sampling (within strata/sub-populations, sample randomly until the percentages in the sample mirror those in the population). Random sampling is most effective with VERY large samples.
IV. Population Statistics
The mean of the population matters more than the mean of the sample; a parallel set of central-tendency statistics: SAMPLE (M, SD² for variance, SD) vs. POPULATION (mu, sigma squared, sigma). Population statistics are usually estimated rather than measured; the relation between sample and population is evaluated through probability... (The means are usually the same, vs. the SD estimates, which differ between SD and sigma.)
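The sample-vs-population distinction shows up directly in numpy's ddof argument; a small sketch with made-up scores:

    import numpy as np

    scores = np.array([4.0, 7.0, 6.0, 5.0, 8.0])   # hypothetical sample

    m = scores.mean()              # M, which estimates mu directly
    sd_pop = scores.std(ddof=0)    # SD treating the data as the whole population
    sd_est = scores.std(ddof=1)    # SD estimating sigma from a sample
    print(m, sd_pop, sd_est)       # the mean is the same; the SDs differ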
Predictor Variable
- (X), vs. the criterion variable (y).
Regression constant
- The baseline number (a); a fixed value. Regression = Prediction.
Regression coefficient
- The number you multiply by the score on the predictor variable: b.
Y hat
- The predicted score on the criterion variable.
Sum of the squared errors
- The sum of the squared differences between each score and its predicted score.
Least squares criterion
- The regression line with the lowest sum of squared errors between actual scores and predicted scores on the criterion variable.
Linear Prediction Rule
- Ŷ = a + b(X)
Standardized regression coefficient
- Referred to as beta; shows the predicted amount of change, in SD units, of the Y variable if the value of X increases by one SD. Just as there is a formula for changing a raw score to a Z score, there is one for changing a regression coefficient into the standardized regression coefficient: beta = b(SDX / SDY).
Limitations of prediction
- Same as for correlation (unless curvilinear, restricted in range, etc.; in these situations the regression coefficients (bivariate or multiple) are smaller than they should be to reflect the true association). These do not indicate the Direction of Causality.
SS total
- In analysis of variance, the sum of squared deviations of each score from the overall mean, ignoring the group a score is in. In prediction, the sum of squared differences of each score from the predicted score when predicting from the mean.