164 Cards in this Set

  • Front
  • Back

Why are surveys important?

-“snapshot” of how people think at a certain point in time
-study relationships among variables
-ways that attitudes change over time
-complement experimental research findings
Response set
a tendency to respond to all questions from a particular perspective rather than to provide answers that are directly related to the questions
*can affect usefulness of data obtained from self-reports
Social desirability
response set that leads the individual to answer in the most socially acceptable way
When are people most likely to lie on a survey, and how can this be avoided?
when they don’t trust the researcher

researcher needs to:
-openly and honestly communicate the purposes and uses of the research
-promise to provide feedback about the results
-assure confidentiality
What are the major considerations in survey research?
-constructing the questions that are asked
-choosing the methods for presenting the question
-sampling the individuals taking part in the research
What are the three general types of survey questions?
-attitudes & beliefs
-facts & demographics
-behaviours
What are the potential problems with question wording on a survey?
-difficulty understanding the question
-unfamiliar, vague, or imprecise terms
-bad grammar
-complex phrasing
-embedding the question with misleading info
What is important to consider with question wording?
-simplicity, loaded questions, negative wording, yea-saying/nay-saying
How does a researcher construct the questions to ask on a survey?
-define research objectives
-type of questions to ask
-question wording
What are the two basic sampling techniques?
probability and nonprobability sampling.
probability sampling
each member of the population has a specifiable probability of being chosen
nonprobability sampling
we don't know the probability of any particular member of the population being chosen
What are the three types of probability sampling?
simple random sampling, stratified random sampling, cluster sampling
simple random sampling
every member of the population has an equal probability of being selected
stratified random sampling
population is divided into subgroups (strata) and random sampling techniques are then used to select sample members from each stratum. (age, gender, education, etc)
cluster sampling
researcher identifies "clusters" of individuals and then samples from these clusters (ex: list all classes being taught, randomly select classes, and include the students in each selected class)
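The probability sampling techniques above can be sketched in Python; this is a minimal illustration with a made-up population and made-up age strata:

```python
import random

# Hypothetical population of 100 people.
population = [f"P{i}" for i in range(100)]

# Simple random sampling: every member has an equal chance of being selected.
simple = random.sample(population, k=10)

# Stratified random sampling: divide the population into strata,
# then randomly sample within each stratum.
strata = {
    "18-29": population[:40],
    "30-49": population[40:80],
    "50+":   population[80:],
}
stratified = {name: random.sample(members, k=4) for name, members in strata.items()}

print(len(simple), {name: len(s) for name, s in stratified.items()})
```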
What are three types of nonprobability sampling?
haphazard sampling, purposive sampling, and quota sampling
haphazard sampling
(aka convenience) take them where you find them
purposive sampling
obtaining a sample of people who meet some predetermined criterion
quota sampling
choosing a sample that reflects the numerical composition of various subgroups in the population
how do you create a completely unbiased sample of a population?
-randomly sample from the whole population
-contact and obtain responses from all individuals in the population
what are the two sources of biases in random sampling?
the sampling frame used and poor response rates
sampling frame
the actual population of individuals (or clusters) from which a random sample will be drawn
response rate
the percentage of people in the sample who actually complete the survey
How can you maximize the response rate of a mail survey?
-explanatory postcard before survey is mailed
-follow-up reminders
-second mailing of questionnaire
-personally stamped return envelope
-look of cover page
How can you maximize the response rate of a telephone survey?
-no answer = call again
-reschedule those who can't interview at the time
-offer incentive
-convey the importance and value of participation
Why use a convenience sample?
-low cost/time selecting sample
-research is studying relationships between variables, rather than accurately estimating population values
Panel study
when the same people are surveyed at two or more points in time
What does a researcher need to consider about responses to questions?
-closed vs open ended
-number of response alternatives
-rating scales
-labelling response alternatives
What are the types of rating scales (surveys)?
-graphic (agree/disagree)
-semantic differential scale (good/bad)
-nonverbal scale for children
How does a researcher finalize the questionnaire?
-format whole thing and refine Q’s (pilot study)
What are the two main ways to administer a survey?
-written or interview
What are the four types of questionnaires?
-personal admin to groups or individuals
-mail surveys
-internet surveys
-other tech
What are the advantages/disadvantages of questionnaires?
- less costly, completely anonymous.
- Respondent needs to be able to read/understand, can be boring (motivation)
interviewer bias
all of the biases that can arise because the interviewer is a human
What are the advantages/disadvantages of interviews?
-interaction is important; people are more likely to agree to answer questions, and the interviewer can clarify questions and follow up on answers that need expansion
-interviewer bias
How can you minimize interviewer bias?
careful screening of interviewers
What are the three methods of conducting interviews?
1. Face-to-face
2. Telephone
3. focus group interviews
confidence interval
An interval of values within which the population value is expected to lie, with a given level of confidence (e.g., 95%).
sampling error
margin of error
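The confidence interval idea above can be sketched in Python; this is a rough sketch using hypothetical scores and the normal approximation (mean ± 1.96 standard errors for a 95% interval):

```python
import math
import statistics

scores = [72, 75, 78, 80, 81, 84, 86, 88, 90, 93]  # hypothetical survey scores
n = len(scores)
mean = statistics.mean(scores)
sd = statistics.stdev(scores)                       # sample standard deviation

# Margin of error at 95% confidence: 1.96 * (sd / sqrt(n)).
margin_of_error = 1.96 * sd / math.sqrt(n)
ci = (mean - margin_of_error, mean + margin_of_error)
print(f"mean={mean:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```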
CATI
computer assisted telephone interviewing
What are two reasons for using statistics?
-describe the data
-make inferences
What do you do if you don't know whether an ordinal or interval scale is being used? why?
Assume interval scale because it allows for more sophisticated statistical treatment
Depending on the way a variable is being studied, what are the three basic ways of describing the results?
-comparing group percentages
-correlating scores of individuals on two variables
-comparing group means
frequency distribution
indicates the number of individuals that receive each possible score on a variable (ex: exam score)
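A frequency distribution like the one described above can be tallied with Python's `collections.Counter`; the exam scores here are made up:

```python
from collections import Counter

exam_scores = [85, 90, 85, 70, 90, 85, 100, 70, 85, 90]  # hypothetical scores

# Frequency distribution: how many individuals received each score.
freq = Counter(exam_scores)
for score in sorted(freq):
    print(score, "#" * freq[score])  # crude text "bar graph" of the distribution
```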
What are some ways to graphically depict frequency distributions?
-pie chart
-bar graph
-frequency polygon
frequency polygon
uses a line to represent frequencies (most useful with interval or ratio scales)
histogram
uses bars to display a frequency distribution for a quantitative variable
What can you discover by examining frequency distributions?
-directly observe participants' responses
-see what scores are most frequent
-look at the shape of the distribution of scores
-can tell whether there are any outliers (unusual/unexpected/different)
-compare distribution of scores between groups
descriptive statistics
allow researchers to make precise statements about data
what two statistics are needed to describe data?
-central tendency (a single number describing how participants scored overall)
-variability (how widely the distribution of scores was spread)
central tendency
tells us what the sample as a whole is like (average)
what are the three measures of central tendency?
mean, median, mode
mean
average (M)
-interval or ratio
median
the score that divides the group in half (Mdn)
-ordinal
mode
most frequent score
-nominal
variability
a number that characterizes the amount of spread in a distribution of scores
standard deviation
roughly, the average deviation of scores from the mean (technically, the square root of the average squared deviation)
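The central tendency and variability measures above can be computed with Python's standard `statistics` module; the scores here are hypothetical:

```python
import statistics

scores = [2, 3, 3, 4, 5, 5, 5, 7, 8]   # hypothetical distribution of scores

mean = statistics.mean(scores)          # requires interval/ratio data
median = statistics.median(scores)      # middle score; usable with ordinal data
mode = statistics.mode(scores)          # most frequent score; usable with nominal data
sd = statistics.pstdev(scores)          # population standard deviation (spread around the mean)

print(mean, median, mode, round(sd, 2))
```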
correlation coefficient
statistic that describes how strongly variables are related to one another
What causes the problem of restricted range?
when the individuals in a sample are very similar (homogeneous)
effect size
general term that refers to the strength of association between variables
regression equation
used to predict a person's score on one variable when that person's score on another variable is already known
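The correlation coefficient and the regression equation above can be sketched together in Python; the x and y scores are made up:

```python
import statistics

x = [1, 2, 3, 4, 5]   # hypothetical predictor scores
y = [2, 4, 5, 4, 6]   # hypothetical criterion scores

mx, my = statistics.mean(x), statistics.mean(y)
sx, sy = statistics.stdev(x), statistics.stdev(y)
n = len(x)

# Pearson correlation coefficient: how strongly x and y are related.
r = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / ((n - 1) * sx * sy)

# Regression equation Y' = a + bX, predicting y from a known x score.
b = r * sy / sx            # slope
a = my - b * mx            # intercept
predict = lambda xi: a + b * xi

print(f"r={r:.2f}, Y' = {a:.2f} + {b:.2f}X")
```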
multiple correlation
technique used to combine a number of predictor variables to increase the accuracy of prediction of a given criterion or outcome variable
partial correlation
statistically controls for a third variable by removing its influence with an equation
what are the two forms of basic experimental design?
-post-test only
-pre-test post-test
posttest-only design must:
-obtain two equal groups of participants
-introduce the independent variable
-measure the effect of IV on DV
pretest-posttest design
same as posttest-only but test is given before and after manipulation
independent groups design
when participants are randomly assigned to the various conditions and each is only in one group
repeated measures design
participants are in all conditions
what are the advantages/disadvantages of repeated measure design?
-fewer participants are needed
-extremely sensitive to finding statistically significant differences between groups
-disadv: one condition must come before the other (order effects)
order effect
when the order of presenting the treatments affects the dependent variable
what are some types of order effects?
-practice effect (improvement)
-fatigue effect
-contrast effect (comparing them)
what are the two approaches to dealing with order effects?
-counterbalancing
-long interval between conditions
complete counterbalancing
when all possible orders of presentation are included in the experiment
latin square
limited set of orders constructed to ensure that
-each condition appears at each ordinal position
-each condition precedes and follows each condition one time
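The latin square above can be constructed in code; this is a sketch of one standard construction (a "balanced" Latin square), which satisfies both properties when the number of conditions is even:

```python
# Build a balanced Latin square for an even number of conditions n:
# each condition appears once at each ordinal position, and each
# condition immediately precedes/follows every other condition once.
def balanced_latin_square(n):
    rows = []
    for r in range(n):
        row = []
        for j in range(n):
            if j % 2 == 0:
                row.append((r + j // 2) % n)
            else:
                row.append((r - (j + 1) // 2) % n)
        rows.append(row)
    return rows

for order in balanced_latin_square(4):
    print(order)   # each row is one presentation order of conditions 0-3
```

(Complete counterbalancing, by contrast, would enumerate all n! orders, e.g. with `itertools.permutations`.)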
what are the two major advantages of repeated measures designs over independent groups?
-fewer participants needed
-more control over participant differences, thus greater ability to detect effect of IV
matched pairs design
matching people on a participant characteristic and putting them in different groups
what are three experimental designs?
-independent groups design
-repeated measures design
-matched pairs design
selection differences
Differences in the type of subjects who make up each group in an experimental design; this situation occurs when participants elect which group they are to be assigned to.
What are the advantages/disadvantages of pretest-posttest design?
-assess whether groups were equal to begin with
-select participants based on pretest scores
-measure the extent of change in each individual
-necessary when there is a possibility that participants will drop out (mortality)
-disadv: time consuming and awkward to administer; may sensitize participants to what you are studying
How can you directly assess the impact of the pretest?
-combination of post-test and pretest only design
what are the two things you usually have to do when "setting the stage" for an experiment?
-provide the participants with the informed consent info needed
-explain why the experiment is being conducted
what are the two types of manipulations?
straightforward and staged
straightforward manipulation
manipulate variables with instructions and stimulus presentations
staged manipulation
staging events that occur during the experiment
when are staged manipulations more frequently used?
-when trying to create a psychological state in the participants
-simulate a situation that occurs in the real world
what needs to be taken into consideration before using the strongest manipulation possible?
-may involve a situation that rarely/never occurs in the real world
-ethics
what are the three general types of dependent variables?
-self-report
-behavioural
-physiological
what are the different ways of measuring behaviour?
-rate
-reaction time
-duration
functional MRI
areas of the brain can be scanned while participant performs a physical or cognitive task
sensitivity of the dependent variable
The ability of a measure to detect differences between groups.
ceiling effect
independent variable seems to have no effect on the DV because participants quickly reach the maximum performance level
floor effect
a task is so difficult that hardly anyone can perform it well
What are some considerations when choosing measures?
-type of measure
-sensitivity of dependent variable
-multiple measures
-cost
-ethics
demand characteristic
any feature of an experiment that might inform participants of the purpose of the study
how can you control for demand characteristics?
-deception
-filler items
-asking participants what they think the research is about
-field/observational (don't know being studied)
balanced placebo design
four conditions crossing what participants expect with what they receive:
-expect, get
-expect, don't get
-don't expect, get
-don't expect, don't get
experimenter bias/expectancy effects
-experimenters may develop expectations about how participants should respond
-may occur whenever experimenter knows which condition the participants are in
what are the two potential sources of experimenter bias?
-unintentionally treating participants differently
-subtle differences in the way behaviour is recorded
teacher expectancy
when teacher's expectations influence student's performance
what are some solutions to the expectancy problem?
-well trained experimenters, practice
-run all conditions simultaneously
-automated procedures
-experimenters unaware
single-blind
participant is unaware of which group they are in
double blind experiment
neither the participant nor the experimenter knows which group the participant is in
what are some other factors that a researcher considers when planning a study?
-research proposal
-pilot studies
-manipulation checks
-debriefing
manipulation check
an attempt to directly measure whether the independent variable manipulation has the intended effect on the participants
what are the two advantages of manipulation checks?
-saves the expense of running the experiment if manipulation ineffective
-helps interpret nonsignificant results (shows whether the manipulation itself failed)
what are the steps to conducting an experiment?
-selecting research participants
-manipulating the independent variable
-measuring the dependent variable
-additional controls and considerations
-analyzing and interpreting results
-communicating research to others
why might a researcher want to design an experiment with three or more levels of an independent variable?
-two levels can't provide very much information about exact form of relationship
-can detect curvilinear relationships
-interested in comparing more than two groups
factorial designs
-designs with more than one independent variable
-all levels of each IV are combined with all levels of the other IV's
what two kinds of information do factorial designs yield?
-main effect
-interaction
main effect
information about the effect of each independent variable taken by itself
interaction
when the effect of one IV depends on the particular level of the other IV
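Main effects and an interaction can be read off a table of cell means; this sketch uses hypothetical cell means from a 2 x 2 factorial design:

```python
# Hypothetical cell means from a 2 x 2 factorial design:
# rows = levels of IV A, columns = levels of IV B.
means = [[5.0, 7.0],
         [6.0, 12.0]]

# Main effect of A: compare row (marginal) means, collapsing over B.
row_means = [sum(row) / 2 for row in means]
# Main effect of B: compare column (marginal) means, collapsing over A.
col_means = [(means[0][j] + means[1][j]) / 2 for j in range(2)]

# Interaction: the effect of B differs across the levels of A.
effect_of_b_at_a1 = means[0][1] - means[0][0]
effect_of_b_at_a2 = means[1][1] - means[1][0]
interaction = effect_of_b_at_a2 - effect_of_b_at_a1

print(row_means, col_means, interaction)
```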
IV x PV design
-type of factorial design
-independent variable x participant variable
moderator variable
influences the relationship between two other variables (ex: credibility)
what are the 5 main points of ethics?
Justice, Respect and Dignity, Fidelity and Responsibility, Integrity, Beneficence and Nonmaleficence
Alternative Explanation
Part of causal inference; a potential alternative cause of an observed relationship between variables.
Autonomy (Belmont Report)
Principle that individuals in research investigations are capable of making a decision of whether to participate.
Beneficence (Belmont Report)
Principle that research should have beneficial effects while minimizing any harmful effects.
Case Study
A descriptive account of the behavior, past history, and other relevant factors concerning a specific individual.
Coding System
A set of rules used to categorize observations.
Concurrent Validity
The construct validity of a measure is assessed by examining whether groups of people differ on the measure in expected ways.
Content Analysis
Systematic analysis of the content of written records.
Construct Validity
The degree to which a measurement device accurately measures the theoretical construct it is designed to measure.
Convergent Validity
The construct validity of a measure is assessed by examining the extent to which scores on the measure are related to scores on other measures of the same construct or similar constructs.
Correlation Coefficient
An index of how strongly two variables are related to each other.
Covariation of Cause and Effect
Part of causal inference; observing that a change in one variable is accompanied by a change in a second variable.
Criterion-oriented validity
Techniques for determining construct validity that rely on assessing the relationship between scores on the measure and a criterion or outcome.
Cronbach’s Alpha
An indicator of internal consistency reliability assessed by examining the average correlation of each item (question) in a measure with every other question.
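Cronbach's alpha can also be computed from the variance of each item and of the total score; this is a sketch using made-up responses to a 4-item questionnaire:

```python
import statistics

# Hypothetical responses: each inner list is one respondent's answers
# to a 4-item questionnaire.
responses = [
    [3, 4, 3, 4],
    [5, 5, 4, 5],
    [2, 2, 3, 2],
    [4, 4, 4, 5],
    [1, 2, 1, 2],
]

k = len(responses[0])   # number of items
item_vars = [statistics.pvariance([resp[i] for resp in responses]) for i in range(k)]
total_var = statistics.pvariance([sum(resp) for resp in responses])

# Cronbach's alpha: internal consistency from item vs. total-score variance.
alpha = (k / (k - 1)) * (1 - sum(item_vars) / total_var)
print(round(alpha, 2))
```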
Discriminant Validity
The construct validity of a measure is assessed by examining the extent to which scores on the measure are not related to scores on conceptually unrelated measures.
External Validity
The degree to which the results of an experiment may be generalized.
Internal Consistency Reliability
Reliability assessed with data collected at one point in time with multiple measures of a psychological construct. A measure is reliable when the multiple measures provide similar results.
Interval Scale
A scale of measurement in which the intervals between numbers on the scale are all equal in size.
Justice (Belmont Report)
Principle that all individuals and groups should have fair and equal access to the benefits of research participation as well as potential risks of research participation.
Measurement Error
The degree to which a measurement deviates from the true score value.
Nominal Scale
A scale of measurement with two or more categories that have no numerical (less than, greater than) properties.
Ordinal Scale
A scale of measurement in which the measurement categories form a rank order along a continuum.
Ratio Scale
A scale of measurement in which there is an absolute zero point, indicating an absence of the variable being measured. An implication is that ratios of numbers on the scale can be formed (generally, these are physical measures such as weight or timed measures such as duration or reaction time).
Reactivity
A problem of measurement in which the measure changes the behavior being observed.
Split-half reliability
A reliability coefficient determined by the correlation between scores on half of the items on a measure with scores on the other half of a measure.
Systematic Observation
Observations of one or more specific variables, usually made in a precisely defined setting.
Temporal Precedence
Part of causal inference; the cause precedes the effect in a time sequence.
baseline
In a single case design, the subject's behavior during a control period before introduction of the experimental manipulation.
cohort
A group of people born at about the same time and exposed to the same societal events; cohort effects are confounded with age in a cross-sectional study.
control series design
An extension of the interrupted time series quasi-experimental design in which there is a comparison or control group.
cross-sectional method
A developmental research method in which persons of different ages are studied at only one point in time; conceptually similar to an independent groups design.
history effect
As a threat to the internal validity of an experiment, refers to any outside event that is not part of the manipulation that could be responsible for the results.
instrument decay
As a threat to internal validity, the possibility that a change in the characteristics of the measurement instrument is responsible for the results.
interrupted time series design
a design in which the effectiveness of a treatment is determined by examining a series of measurements made over an extended time period both before and after the treatment is introduced. The treatment is not introduced at a random point in time
longitudinal method
A developmental research method in which the same persons are observed repeatedly as they grow older; conceptually similar to a repeated measures design.
maturation effect
As a threat to internal validity, the possibility that any naturally occurring change within the individual is responsible for the results.
multiple baseline design
Observing behavior before and after a manipulation under multiple circumstances (across different individuals, different behaviors, or different settings).
nonequivalent control group design
A quasi-experimental design in which nonequivalent groups of subjects participate in the different experimental groups, and there is no pretest.
nonequivalent control group pretest-posttest design
A quasi-experimental design in which nonequivalent groups are used, but a pretest allows assessment of equivalency and pretest-posttest changes.
one-group posttest-only design
A quasi-experimental design that has no control group and no pretest comparison; a very poor design in terms of internal validity.
one-group pretest-posttest design
A quasi-experimental design in which the effect of an independent variable is inferred from the pretest-posttest difference in a single group.
quasi-experimental design
A type of design that approximates the control features of true experiments to infer that a given treatment did have its intended effect.
regression toward the mean
Also called statistical regression; principle that extreme scores on a variable tend to be closer to the mean when a second measurement is made.
reversal design
A single case design in which the treatment is introduced after a baseline period and then withdrawn during a second baseline period. It may be extended by adding a second introduction of the treatment. Sometimes called a "withdrawal" design.
sequential method
A combination of the cross-sectional and longitudinal design to study developmental research questions.
testing effect
A threat to internal validity in which taking a pretest changes behavior without any effect of the independent variable.
conceptual replication
Replication of research using different procedures for manipulating or measuring the variables.
experimental realism
The extent to which the independent variable manipulation has an impact on and involves subjects in an experiment.
meta-analysis
A set of statistical procedures for combining the results of a number of studies in order to provide a general assessment of the relationship between variables.
mundane realism
The extent to which the independent variable manipulation is similar to events that occur in the real world.