

70 Cards in this Set


The need for behavioral science

The “folk” understanding of behavior is severely limited because it is:

- usually postdictive, not predictive (based on non-specific clichés)

- often wrong (even about our recent perceptions)

- blind to the importance of factors outside our awareness

The scientific method (ideally) allows us to generate facts and accurately predict behavior.

Item-to-total correlations

a correlation showing how well one item on a survey agrees with the other items.

ex) does item 20 correlate with the sum of items 1-19? (Pearson correlation)
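As a rough sketch of the check above (in Python, with made-up 5-point responses; both the helper names and the data are hypothetical), one item is correlated with the sum of the remaining items:

```python
def pearson_r(x, y):
    """Pearson correlation coefficient r between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

def item_total_correlation(responses, item_index):
    """Correlate one item with the sum of all the other items."""
    item = [row[item_index] for row in responses]
    rest = [sum(row) - row[item_index] for row in responses]
    return pearson_r(item, rest)

# Rows are respondents, columns are survey items (hypothetical data).
data = [
    [5, 4, 5, 4],
    [2, 1, 2, 2],
    [4, 4, 3, 4],
    [1, 2, 1, 1],
    [3, 3, 4, 3],
]
print(round(item_total_correlation(data, 0), 2))  # close to 1: item 0 agrees with the rest
```

A low item-to-total value would flag item 0 as measuring something different from the rest of the scale.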


Surveys

a series of self-report measures administered either through an interview or a written questionnaire. These are the most widely used method for collecting descriptive data about a population.


Questionnaires

a set of fixed-format, self-report items completed by respondents at their own pace, usually without supervision

Pros and Cons of questionnaires

Pros: cheaper; produce more honest answers; less likely to be influenced by the experimenter

Cons: response rates from the general population are sometimes low, which affects the results because a certain type of person may be the one responding;

also, the experimenter has no way of knowing the order in which participants answered the questions, which could alter results


Interviews

questions are read to the respondent, either in person or over the phone

Values VS Empirical facts

Values: personal statements based on opinions

Facts: objective statements based on empirical study

Hindsight bias

the tendency to believe, once an outcome is known, that one could have predicted it when in fact one could not have

The Scientific Method

The Scientific Method: the set of assumptions, rules, and procedures that scientists use to acquire new knowledge and integrate previous knowledge.

1. Observe and describe a phenomenon.

2. Formulate a hypothesis to explain it. (A hypothesis is a reasoned guess or an educated proposition.) Folk explanations stop at this step.

3. Use the hypothesis to generate predictions about the existence of other phenomena or observations. Sometimes there are two hypotheses that generate competing predictions.

4. Gather the data to test the predictions. Often, but not always, this involves performing an experiment.

5. Evaluate whether predictions are supported. Revise hypothesis accordingly.

Repeat steps 3 and 4.


Falsifiability

In order for something to be scientifically testable, it must be possible for it to produce a negative result. You can't show something is true if there is no way to show it is false.

Applied VS general Research

General: answers fundamental questions about behavior, simply to understand how things work

Applied: investigates issues that have implications for everyday life and provides solutions to everyday problems

Descriptive Research Design

This type of research provides a "snapshot" of thoughts, feelings, or behaviors at a given time

It can use surveys, interviews, and naturalistic observation

It can be either qualitative or quantitative in design.

Experimental Research Design

the active creation or manipulation of a given situation or experience for two or more groups or individuals

It is designed to create equivalence between the individuals before the experiment begins.

Correlational Research Design

involves the measurement of two or more variables and analysis of the relationship between them

Statistical measures such as the Pearson correlation are used to determine how strong the relationship is.

Pearson Correlation

represented by the letter r

ranges from -1 to 1

If r is negative, the two variables move in opposite directions (as one increases, the other decreases); if r is positive, they covary in the same direction.

The closer r is to -1 or 1, the stronger the correlation; the closer to 0, the weaker the correlation.
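The sign and magnitude behavior above can be illustrated with a small hand-rolled Pearson r in Python (the `pearson_r` helper and the toy data are made up for illustration):

```python
def pearson_r(x, y):
    """Pearson correlation coefficient r between two equal-length lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x) ** 0.5
    sy = sum((b - my) ** 2 for b in y) ** 0.5
    return cov / (sx * sy)

xs = [1, 2, 3, 4, 5]
print(pearson_r(xs, [2, 4, 6, 8, 10]))  # covary together: r is close to 1
print(pearson_r(xs, [10, 8, 6, 4, 2]))  # opposite directions: r is close to -1
print(pearson_r(xs, [3, 1, 4, 1, 3]))   # no linear relationship: r is close to 0
```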

Weaknesses of Correlation

cannot be used to identify causal relationships.

When you think one variable is causing the other, the reverse may be true.

Additionally, a third variable could be influencing them both.

It doesn't answer the question of why.

elements of experimental research

Experiments have 3 inter-related elements.

1) Comparison – compare results under different conditions to rule out certain explanations and support others.

2) Control – attempt to compare conditions that are equivalent in all respects, except for the experimental manipulation.

3) Manipulation – alter one (or more) variable(s) or conditions. This manipulated variable is called the independent variable. The variable that might change in response is the dependent variable.

Random Assignment

Participants are randomly assigned to either the experimental or control group in order to make the two groups as similar as possible.
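A minimal sketch of random assignment in Python (the function name, the seed, and the participant IDs are all hypothetical; a fixed seed is used only to make the example reproducible):

```python
import random

def random_assignment(participants, seed=None):
    """Shuffle participants and split them into experimental and control groups."""
    rng = random.Random(seed)
    shuffled = list(participants)
    rng.shuffle(shuffled)
    half = len(shuffled) // 2
    return {"experimental": shuffled[:half], "control": shuffled[half:]}

# 20 hypothetical participant IDs, split 10/10 at random.
groups = random_assignment(range(1, 21), seed=42)
print(groups["experimental"])
print(groups["control"])
```

Because assignment is left to chance, pre-existing differences between participants tend to balance out across the two groups.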

Independent Vs. Dependent variable

The variable that is being manipulated is the independent variable.
The variable that changes in response to the independent variable is the dependent variable.

observer/researcher expectancy effect

the person conducting the study is somehow influencing it, thus affecting the outcome. The best way to avoid this is to keep the researchers blind to which group each participant is assigned.

Subject expectancy effect

when subjects know (or suspect they know) which group they belong to, they may alter their behavior in a way that affects the data, by trying to do or be what is predicted (or trying not to).


Placebo

a pseudo-treatment administered to participants so that they think they are being given the experimental treatment, thus controlling for their expectancy effect. (Everyone expects to see a change, so the difference between the two groups is still valid.)

Double Blind

both the researcher and the participant are unaware of which group the person is assigned to, in an effort to remove both observer and subject expectancy effects

Ways to Generate Research

1. Identify a practical problem

2. Identify an empirical pattern

3. Find limiting conditions; develop alternative explanations

4. Apply a hypothesis to a new domain

What is the purpose of reading scientific literature?

The IRB and government granting agencies require a review of the previous literature.

It will allow you to answer the following questions:

Has your topic already been investigated?

Has your specific study already been done?

What are the most important issues to investigate?

What are the most important variables to measure?

What’s the most effective and ethical method for investigating this issue?

primary vs. secondary sources

Primary sources come from the original researchers; these will have a methods and a results section.

***Meta-analysis is a primary source

Secondary sources are reviews of completed research; they do not have a methods or results section and often cite many sources.


Laws

principles that are well-established and extremely general. There are very few general laws in psychology. But for an evolutionary psychologist, the theory of evolution by natural selection is clearly a law; for a neuroscientist, the “neuron doctrine” is a law. Very little research is designed to test laws, since hardly anyone doubts they are true. (e.g. evolution by natural selection produces psychological and behavioral adaptations)


Theories

broad principles, but not as broad as laws; they must be compatible with laws yet be logically independent of them; there is less certainty about their validity. (e.g. sports function as a display in social selection / sexual selection)

Research Hypotheses VS Theories

Theories are not generally directly tested because they are quite broad; instead, principles or logical deductions which follow from them are tested. These are research hypotheses; any good theory will provide many research hypotheses. If the hypotheses that follow from a theory are regularly tested and no support for them is found, the theory must eventually be rejected or at least modified. (e.g. better athletes will generally be more attractive)


Falsifiable predictions

the “nuts and bolts” of what variables are measured in observational and experimental studies, and how the variables are expected to relate to one another. If the falsifiable prediction is repeatedly not supported, the research hypothesis must be rejected or modified.

inductive vs deductive methods

Deductive reasoning derives predictions from a hypothesis or theory; the theory tells you what to look for. "Top down."

Inductive reasoning starts from observations of data or the real world and develops hypotheses to test. "Bottom up."

Informed Consent

Informed Consent is a process, not just a signature. The process includes:

1. Verify eligibility to give consent (age, mental state).

2. Explain procedures and events, who is conducting research, and how data will be used and protected.

3. Explain potential costs and benefits.

4. Inform of rights (e.g. leaving).

Participants rarely drop out once they start.

- Possibly because informed consent addresses concerns

- But possibly because of social pressure

Power Differentials

Researchers inherently have power over participants; they shouldn’t abuse it.

- promising rewards and not delivering them

- wasting the participant’s time

- antagonizing someone for any reason that has no research purpose

- not protecting privacy or otherwise not living up to informed consent claims


Confidentiality

Data should remain confidential

- in presentations and publications

- as much as possible, in researcher’s own records

- 3rd parties shouldn’t be able to link data to names

Breaking the link between data and name is good practice.

Data should be secured.

Sometimes participation can be anonymous.


Deception

Deception is defined as a participant not being fully informed about the true nature of research before deciding to participate.

- can involve actively giving misinformation

- can be passive, e.g. exposing someone to a scene and then giving a “pop quiz” about it

The main justification for deception is that many important psychological issues could not be studied without it.

Economists’ view of deception

Economists argue that trust is a public good. By deceiving, psychologists use up the trust and pollute the participant pool. Eventually all participants come into studies expecting to be deceived.

APA requirements for deception

APA and research boards recognize the need for deception but:

the minimum amount of deception necessary should occur

participants should be told of deception afterwards

deception should be used only when it cannot be avoided

the deception should not result in harm


Debriefing

Debriefing occurs at the end of a study.

Its purposes are to restate (or more fully explain) the nature of the study, ensure the well-being of participants, and allow questions about the research.

If deception was used, the deception should be revealed. Prior to this, the researchers may also ask the participants if they were suspicious about anything in the study.

Sometimes debriefing may involve activities to undo any changes that might have occurred during the study.

What is an IRB?

"institutional review board"

required by the government of any institution that receives federal funding, even if it isn't being used on the research

The IRB’s primary goal is to make sure that the costs (including risks) and benefits of the research, especially to participants, are fully identified. The risks involved must be justified by the potential knowledge gained.

The IRB does not guarantee the soundness of the research, that the researchers will follow the protocol they submitted, or that researchers will not be fraudulent.

Composition of IRBs

IRBs will have at least 5 members; at least one will be a non-scientist, and at least one will not be affiliated with the institution where the research will be conducted. Scientists can’t serve as members of the IRB when their own project is being evaluated.

conceptual vs. measured variables

conceptual: the ideas that form the basis of a hypothesis. (e.g. self esteem, depression, cognitive development)

Measured: numbers that represent the conceptual variable

Operational Definitions

a precise statement of how a conceptual variable is turned into a measured variable

Converging Operations

Using more than one technique/research design to study the same thing with the hopes that they will produce similar findings

Nominal Variables

used to name or identify particular characteristics (e.g. religions, genders, races)

we will sometimes assign numbers to nominal variables, but the numbers are arbitrary (race in our lab)

Quantitative Variables

use numbers to indicate the extent to which a person possesses a characteristic of interest. (e.g. height, BMI, time to complete a puzzle)

What are the three types of Quantitative scales?

Ordinal, ratio, and interval scales

Interval Scales

equal distances between scores correspond to equal changes in the conceptual variable. ALMOST NEVER USED IN PSYCH (e.g. The temperature difference between 10°F and 20°F is the same temperature difference as between 40°F and 50°F. )

Ratio Scales

like interval scales but with a true zero point, so that scale values can be meaningfully multiplied and divided. ***Statistical options are greatest on this type of scale

Ordinal Scales

the order indicates whether there is more or less of something but they don’t indicate the exact interval between them.

self report VS behavioral measures

Self-report measures ask people to report on their own behavior, while behavioral measures record the actual behavior.

Free format (types and examples)

Projective Measures include the Rorschach inkblot test and the Thematic Apperception Test (TAT).

Associative Lists Measures

Think-aloud Measures involve participants describing their thoughts as they complete a task.

A major drawback of all free format measures is that coding data is time consuming.

Fixed format types and examples

some address unambiguous concepts (like circling one’s race/ethnicity) while others use a scale. Numeric agree-to-disagree scales can be used, as can pictures, ratings of importance, etc.

Likert Scale

a series of items indicating agreement or disagreement with the issue to be measured, each with a set of responses on which respondents indicate their opinions.

(e.g. the Rosenberg Self-Esteem Scale)

Guttman scale

a fixed format self report scale in which the items are arranged in a cumulative order such that it is assumed that if a respondent endorses or answers correctly any one item, he or she will also endorse or correctly answer all of the previous scale items.
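The cumulative property above can be sketched as a simple check in Python (assuming items are ordered from easiest to hardest to endorse; the function name and patterns are hypothetical):

```python
def is_cumulative(responses):
    """Check a Guttman-style pattern: once an item is not endorsed,
    no later (harder) item should be endorsed either."""
    seen_failure = False
    for endorsed in responses:
        if seen_failure and endorsed:
            return False  # a harder item was endorsed after an easier one was not
        if not endorsed:
            seen_failure = True
    return True

print(is_cumulative([True, True, True, False, False]))   # valid Guttman pattern
print(is_cumulative([True, False, True, False, False]))  # violates cumulative order
```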


Reactivity

changes in responding that occur when individuals know they are being measured


Social desirability bias

a type of reactivity that occurs when research participants respond in ways that they think will make them look good.

random error

chance fluctuations in measurement that can be controlled by evenly distributing the error throughout the two samples

systematic error

influence of conceptual variables that are not being studied having an effect on the results of the study. These do not "self cancel" and therefore systematically increase or decrease the results of the study.

test-retest reliability

the extent to which scores on the same measured variable correlate with each other across two measurements given at two different times

Equivalent forms

two different but equivalent measures are given at two different times; the goal is that they are alike enough to measure the same thing accurately. (e.g. the ACT)

internal consistency

the extent to which scores relate to each other and thus are all measuring the true score rather than random error

Acquiescent responding

“a yes man”: when participants respond yes to everything.

A way to combat this is reversed items.
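Reverse-scoring is what makes reversed items work: on a 1-5 Likert scale, a reversed item is scored as 6 minus the response. A minimal sketch in Python (the item positions and responses are hypothetical):

```python
def reverse_score(response, scale_min=1, scale_max=5):
    """Reverse-score a Likert response: 1 becomes 5, 2 becomes 4, etc."""
    return scale_max + scale_min - response

# Items at positions 1 and 3 are reverse-worded (hypothetical 4-item scale).
reversed_items = {1, 3}
responses = [5, 5, 5, 5]  # "a yes man" agreeing with everything

scored = [reverse_score(r) if i in reversed_items else r
          for i, r in enumerate(responses)]
print(scored)  # [5, 1, 5, 1] -- uniform agreement no longer yields a uniformly high score
```

A genuinely high scorer answers high on normal items and low on reversed ones, so only consistent content (not blanket agreement) produces a high total.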

Cronbach's alpha

an estimate of the average correlation among all of the items on the scale and is numerically equivalent to the average of all possible split-half reliabilities
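The usual computing formula, alpha = (k / (k - 1)) * (1 - sum of item variances / variance of total scores), can be sketched in Python using the standard library (the function name and the 5-respondent data are hypothetical):

```python
from statistics import variance

def cronbach_alpha(responses):
    """Cronbach's alpha from a respondents x items matrix of scores."""
    k = len(responses[0])                 # number of items
    items = list(zip(*responses))         # transpose: one tuple per item
    item_vars = sum(variance(col) for col in items)
    total_var = variance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_vars / total_var)

# Rows are respondents, columns are items (hypothetical 3-item scale).
data = [
    [5, 4, 5],
    [2, 2, 1],
    [4, 4, 4],
    [1, 2, 1],
    [3, 3, 4],
]
print(round(cronbach_alpha(data), 2))  # high alpha: items rise and fall together
```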

Construct Validity

the extent to which a measured variable actually measures the conceptual variable that it is designed to measure

Face validity

the extent to which the measured variable appears to measure the conceptual variable

Content Validity

the extent to which the measured variable appears to have adequately covered the full domain of the conceptual variable

Convergent Validity

The extent to which a measured variable is found to be related to other measured variables designed to measure the same conceptual variables

Discriminant Validity

the extent to which a measured variable is found to be unrelated to other measured variables designed to measure different conceptual variables

Predictive Validity

the extent to which a self-report measure correlates with (predicts) a future outcome


Population

the entire group of people that the researcher would like to know about