58 Cards in this Set

  • Front
  • Back
Deception in Research
- Participants may not be told the complete details of the study or may be misled about procedures
- Must forewarn
- Must be justified - no alternative procedure would be effective
PROS: Naturalistic behavior
CONS: Causes mistrust
- Must debrief
IRB
Institutional Review Board
Effective safeguard for participants, researchers, and universities
-Determines degree of risk
-Expedited or Formal Reviews
Effective Literature Searches
- Compose narrative of search questions
- Identify the separate concepts in your question
- Use the APA thesaurus
- Combine concept words in the manner that best suits the question
Independent Variable
-Predictor Variable
-Manipulated
-"X"
Dependent Variable
-Outcome Variable
-Observable behavior we're measuring in response to the IV
-"Y"
Confounding Variables
-Any variable that changes systematically with the IV
-Any uncontrolled extraneous variable that covaries with the IV and could provide an alternative explanation for the results
-Causes poor internal validity
Constructs vs. Variables
Construct - A concept that is not directly measurable

Variable - Something we can measure
Subject Variables
-Existing characteristics serve as variables
-Subject already possesses the thing you want to measure
-Equivalent groups are not guaranteed & could influence the outcome
-Cannot draw causal conclusions
Types of Variables
Control - Not allowed to fluctuate
Random - Allowed to fluctuate
Confounding - Changes systematically with the IV
Extraneous - Uncontrolled factors that are not of interest but may influence the DV
Hypothesis
-A statement containing 2 or more measurable variables that specifies how the variables are related
-Prediction about specific events that is derived from deduction
-Educated guess about what should happen under certain circumstances
Empirical Questions
-Those that can be answered through systematic observations & experiences that characterize scientific methodology
-Precise enough to allow specific predictions to be made
MUST:
-Answerable
-Specific
-Operational Definition
-Leads to clear hypothesis
-Asks questions we don't know the answer to
-Theory Driven
Type I Error
Rejecting the Null Hypothesis when it is in fact true - Found a significant difference in your study but there really isn't one
Type II Error
Failure to reject the Null when it is in fact false - You fail to find a significant difference in your study but there really was one
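A minimal sketch of the two error types above, assuming Python with NumPy and SciPy (not part of the original cards): when the null is true, roughly alpha of the tests come out significant (Type I errors); when a real but small difference exists and the sample is small, many tests miss it (Type II errors).

```python
# Hypothetical simulation; the alpha level, sample size, and effect size are assumptions.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
alpha, n, trials = 0.05, 20, 5000

# Type I error: the null is true (both groups come from the same population),
# yet the test is sometimes significant by chance.
false_alarms = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0, 1, n)).pvalue < alpha
    for _ in range(trials)
)

# Type II error: the null is false (true difference of 0.3 SD),
# yet the test often fails to reach significance with a small sample.
misses = sum(
    stats.ttest_ind(rng.normal(0, 1, n), rng.normal(0.3, 1, n)).pvalue >= alpha
    for _ in range(trials)
)

print(f"Type I rate  ~ {false_alarms / trials:.3f} (should sit near alpha = {alpha})")
print(f"Type II rate ~ {misses / trials:.3f} (shrinks with larger n or larger effects)")
```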
Reliability
How consistent is a measure over repeated applications
-Spread of scores clusters tightly
-How much error of measurement is associated with a measure
Measuring Reliability
Single Administrations
-Split-Half
-Internal Consistency
-Interrater
Multiple Administrations
-Alternate Forms
-Test-Retest
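A minimal sketch of two of the reliability checks named on this card, assuming Python with NumPy and entirely made-up scores: split-half reliability from a single administration and test-retest reliability across two administrations, both expressed as Pearson correlations.

```python
# All scores below are fabricated for illustration only.
import numpy as np

rng = np.random.default_rng(1)
true_score = rng.normal(50, 10, 100)                        # 100 participants
items = true_score[:, None] + rng.normal(0, 5, (100, 10))   # 10 noisy items each

# Split-half (single administration): correlate the odd-item total with the even-item total.
odd_half = items[:, ::2].sum(axis=1)
even_half = items[:, 1::2].sum(axis=1)
split_half_r = np.corrcoef(odd_half, even_half)[0, 1]

# Test-retest (multiple administrations): correlate time-1 totals with time-2 totals.
time1 = items.sum(axis=1)
time2 = 10 * true_score + rng.normal(0, 15, 100)            # same construct, new measurement error
test_retest_r = np.corrcoef(time1, time2)[0, 1]

print(f"split-half r  = {split_half_r:.2f}")
print(f"test-retest r = {test_retest_r:.2f}")
```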
Measurement Validity
Are we measuring what we intend to measure
-Constructs must be operationalized
4 Types:
(1) Face Validity
(2) Content Validity
(3) Criterion Validity
(4) Construct Validity
Face Validity (1)
Does it look like it's measuring what it says it's measuring?
Content Validity (2)
Related to face validity - the items reflect the content area
-The more the items cover the relevant areas, the greater the content validity
Criterion Validity (3)
The degree to which a test is related to a criterion
How well does the measure predict outcomes based on info from other variables
(1)Predictive
(2)Concurrent
Construct Validity (4)
Does the measure assess the construct it claims to assess
The degree to which a test is an accurate measure of the construct
(1) Convergent - how similar it is to other measures of the same construct
(2) Discriminant (divergent) - how it differs from measures of other constructs
(3) Nomological - how well it fits the theoretically expected pattern of relationships among constructs
Experimental Validity
The approximate truth of the conclusions drawn from a study
-A set of standards by which research can be judged
(1) Statistical Conclusion Validity
(2) Internal Validity
(3) Construct Validity
(4) External Validity
Statistical Conclusion Validity (1)
The extent to which the researcher uses statistics properly and draws the appropriate conclusions from the statistical analyses
Internal Validity (2)
The degree to which an experiment is methodologically sound and confound free
Construct Validity (3)
The adequacy of the definitions for the IV and DV
External Validity (4)
Generalizable
Can we generalize to:
(1)Other persons/populations
(2)Other environments
(3)Other times
Experimental Validity is Best When
-There is a relationship between the cause and effect
-The relationship is causal
-You can generalize to the constructs
-You can generalize to other persons, places, & times
Threats to Internal Validity
Pre-Post Tests
-History
-Maturation
-Regression to the mean
-Testing Effects
-Instrumentation Effects
Threats to Internal Validity
Participants
-Sample Selection
-Attrition
-Compare Groups
Operational Definition
A definition of a concept or variable in terms of precisely described operations, measures, or procedures
-Defines a variable in terms of the techniques used to measure it
Between-Subjects Design
What Is It
-Participants only receive 1 level of the IV
-Subject variables are almost always between-subjects
-Cross-Sectional
Between-Subjects Design
Advantage
Subjects enter the study fresh and naive
Between-Subjects Design
Disadvantages & Error
-Large # of people needed
-Time and energy
-Individual Differences: error - whenever there are large differences between people, there will be a large amount of error
Between-Subjects Design
Threats
-Differential Attrition
-Diffusion
-Compensatory Equalization
-Compensatory Rivalry
-Resentful Demoralization
Within-Subjects Design
What Is It
Every participant receives every condition or level of the IV
-Each group is assigned to each condition
-longitudinal studies
-repeated measures
Within-Subjects Design
Advantages
-Smaller sample size
-Convenient
-Use to study limited population
-Reduces error variance from individual differences
Within-Subjects Design
Disadvantages
-Order/Sequence Effects
-Equivalent Groups
-Time related factors
-Attrition
Within-Subjects Design
Error
Differences can be due to:
-IV
-Systematic Error
-Nonsystematic Error
-Random Error
Experimenter Bias
Experimenter Expectancy Effects
-Experimenters may inadvertently do something that leads participants to behave in ways that confirm the hypothesis
(a) Bio-Social Effects
(b) Psycho-Social Effects
(c) Situational Effects
Participant Bias
Participants unconsciously modify their behavior to match expected results of the research
Participant Bias
Hawthorne Effect
Change behavior when they know they're being studied/observed
Participant Bias
Demand Characteristics
Any potential cues or features of a study that make the hypothesis obvious & influence participants to respond or behave in certain ways
(1)Good Subject
(2)Negativistic Subject
(3)Faithful Subject
(4)Apprehensive Subject
Controlling Participant Bias
-Deception
-Manipulation Check
-Use small sample
-Field Research
Single Blind Study
Only the experimenter knows which condition the participant is in
Double Blind Study
Neither the experimenter nor the participant knows who is getting which condition
Single Factor Designs
-1 IV with 2 or more levels
-Simplest experimental design
-Between or Within Subjects
Weaknesses:
-Not impressive
Strengths:
-Simple
Single Factor Designs
4 Types
Between Subjects
(1)Independent Groups - randomly assigned
(2)Matched Groups - matched
(3)Nonequivalent Groups - assignment is not random
Within Subjects
(1)Repeated Measures - uses counterbalancing
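A minimal sketch of the counterbalancing idea mentioned for repeated measures, assuming Python and three hypothetical conditions: complete counterbalancing uses every possible order, and a simple rotation (Latin-square style) is one partial alternative when the number of orders grows too large.

```python
# Illustration only; the condition labels are hypothetical.
from itertools import permutations

conditions = ["A", "B", "C"]

# Complete counterbalancing: all k! orders (6 orders for 3 conditions).
complete = list(permutations(conditions))
print("complete counterbalancing:", complete)

# Partial counterbalancing: rotate the list so each condition
# appears once in each serial position.
rotation = [conditions[i:] + conditions[:i] for i in range(len(conditions))]
print("rotation (partial):", rotation)
```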
Single Factor Designs
Statistics
t test - analyzes mean differences
For Two Levels:
(1)t test for independent groups
(2)t test for dependent groups
More Than Two Levels:
(1)1-way ANOVA
(2)Post-Hoc Analysis
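A minimal sketch of the statistics on this card, assuming Python with SciPy and made-up scores: a t test for independent groups and one for dependent groups when the IV has two levels, and a 1-way ANOVA when it has more.

```python
# Fabricated data; replace with real group scores.
from scipy import stats

group1 = [12, 15, 14, 10, 13, 16]   # level 1 of the IV
group2 = [18, 17, 20, 15, 19, 16]   # level 2 of the IV
group3 = [22, 20, 23, 19, 21, 24]   # level 3, used only in the ANOVA

# Two levels, independent groups (between-subjects).
t_ind = stats.ttest_ind(group1, group2)

# Two levels, dependent groups (within-subjects or matched): same subjects in both lists.
t_dep = stats.ttest_rel(group1, group2)

# More than two levels: 1-way ANOVA; a significant F would then call for post-hoc tests.
f_result = stats.f_oneway(group1, group2, group3)

print(f"independent t: t = {t_ind.statistic:.2f}, p = {t_ind.pvalue:.3f}")
print(f"dependent t:   t = {t_dep.statistic:.2f}, p = {t_dep.pvalue:.3f}")
print(f"1-way ANOVA:   F = {f_result.statistic:.2f}, p = {f_result.pvalue:.3f}")
```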
Factorial Designs
-At least 2 IV's with 2 or more levels each
-Numerical notation (e.g., 2 x 3) indicates the # of IV's and the # of levels of each
-Factorial Matrix
Factorial Designs
Advantages
-Main Effects
-Interactions
How do factors operate independently & together to affect behavior
Factorial Designs
4 Types
(1)Between Subject
(2)Within Subject
(3)Mixed Factorial Design - 1 factor within, 1 between
(4) S x M (Subject x Manipulated) - 1 subject variable, 1 manipulated variable
Factorial Designs
Statistics
-N-way ANOVA
-N = # of IV's
-F score for each main effect and each possible interaction
-Post-Hoc Analysis
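A minimal sketch of the N-way ANOVA for a 2 x 2 between-subjects factorial, assuming Python with pandas and statsmodels and fabricated data (the factor names are hypothetical): the ANOVA table reports an F for each main effect and for the interaction.

```python
# Fabricated 2 x 2 data set; "noise" and "task" are hypothetical factors.
import pandas as pd
import statsmodels.api as sm
from statsmodels.formula.api import ols

data = pd.DataFrame({
    "noise": ["quiet"] * 8 + ["loud"] * 8,
    "task":  (["easy"] * 4 + ["hard"] * 4) * 2,
    "score": [9, 8, 9, 7, 6, 5, 6, 5,
              8, 9, 7, 8, 2, 3, 2, 3],
})

# 2-way (N = 2) ANOVA: one F per main effect plus one F for the interaction.
model = ols("score ~ C(noise) * C(task)", data=data).fit()
print(sm.stats.anova_lm(model, typ=2))
```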
Main Effects
Mean Effect
-Comparing overall (marginal) means
-The overall effect of a single IV, collapsed across the other factor(s)
-How does each factor influence behavior by itself?
Interactions
-One factor modifies the effect of a second factor
-Factors are interdependent
-Occurs when the effect of 1 IV depends on the level of another IV
-When effects of a factor vary depending on the level of another factor, unique effects occur
Interactions
Not/Is
Not an Interaction if:
-Main effects are additive
-You can predict cell means
Is an Interaction if:
-Main effects are not additive
-Extra mean differences not explained by the main effects
-p below .05 indicates a significant interaction
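A minimal sketch of the additivity check described above, assuming Python with NumPy and made-up 2 x 2 cell means: predict each cell from the grand mean plus the two main effects; whatever is left over is the interaction (all zeros means no interaction).

```python
# Hypothetical cell means, chosen only to show the arithmetic.
import numpy as np

cells = np.array([[8.0, 7.0],    # rows = levels of factor A, columns = levels of factor B
                  [6.0, 1.0]])

grand = cells.mean()
a_effect = cells.mean(axis=1) - grand    # main effect of factor A (row means)
b_effect = cells.mean(axis=0) - grand    # main effect of factor B (column means)

# Cell means predicted from the additive model (main effects only).
predicted = grand + a_effect[:, None] + b_effect[None, :]

# Residual differences are the interaction.
print("predicted cell means:\n", predicted)
print("interaction residuals:\n", cells - predicted)
```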
Correlations
-A numerical relationship between 2 variables
-When the goal of descriptive research is to test a hypothesis about the relationship between variables
-No manipulation of variables
-Implies Prediction
-Predictor Variable
-Criterion Variable
Correlations
3 Things To Consider
(1)Directionality
-Positive
-Negative
-Curvilinear
-No Relationship
(2)Form
-Linear
-Monotonic
(3)Strength
Small = .10-.29
Moderate = .30-.49
Large = .50-1.00
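A minimal sketch, assuming Python with SciPy and made-up paired scores, that computes a Pearson correlation between a predictor and a criterion and labels its strength using the cutoffs on this card.

```python
# Fabricated paired scores (e.g., hours studied as predictor, exam score as criterion).
from scipy import stats

predictor = [2, 4, 5, 7, 8, 10, 11, 13]
criterion = [55, 60, 58, 70, 72, 75, 80, 85]

r, p = stats.pearsonr(predictor, criterion)

# Strength labels from the card: .10-.29 small, .30-.49 moderate, .50-1.00 large.
size = abs(r)
if size >= 0.50:
    label = "large"
elif size >= 0.30:
    label = "moderate"
elif size >= 0.10:
    label = "small"
else:
    label = "negligible"

print(f"r = {r:.2f} ({label}), p = {p:.3f}")
```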
Correlations
Strengths
-Study what exists
-Can study variables that cannot be manipulated
-Study many variables
-High external validity
Correlations
Weaknesses
-Directionality Problem - Does A cause B, or does B cause A?

-Third Variable - Another variable may be contributing to the effect
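A minimal sketch of the third-variable problem, assuming Python with NumPy and simulated data (not a technique named on the cards): A and B correlate only because both are driven by Z; correlating the residuals after removing Z's influence (a hand-rolled partial correlation) brings the relationship back toward zero.

```python
# Simulated illustration; all variables and values are made up.
import numpy as np

rng = np.random.default_rng(2)
z = rng.normal(size=200)                   # the third variable
a = z + rng.normal(scale=0.5, size=200)    # A is driven by Z
b = z + rng.normal(scale=0.5, size=200)    # B is driven by Z, not by A

raw_r = np.corrcoef(a, b)[0, 1]

# Remove Z's linear influence from A and B, then correlate the residuals.
resid_a = a - np.polyval(np.polyfit(z, a, 1), z)
resid_b = b - np.polyval(np.polyfit(z, b, 1), z)
partial_r = np.corrcoef(resid_a, resid_b)[0, 1]

print(f"raw r:     {raw_r:.2f}  (A and B look related)")
print(f"partial r: {partial_r:.2f}  (near zero once Z is controlled)")
```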