113 Cards in this Set

Descriptive vs. Causal
Causal hypotheses state a cause-and-effect relationship between variables

Descriptive hypotheses simply state the existence of some relationship between variables or some difference between two groups
Criteria for good hypotheses
testable (can be measured)
falsifiable (can be proven wrong)
rational (follows logic, based on logical interpretation of past research)
parsimonious (explains information simply)
Null hypothesis
There is no difference in social anxiety between front-row and back-row sitters.
Alternative hypothesis
There is a difference in social anxiety between front-row and back-row sitters in a socially-anxious situation.
Goals of Psychological Research
We want an accurate understanding of psychological phenomena.
Criteria for Evidence
empirical (measurable and observable)
objective (No bias)
controlled (No other variables)
replicable (Can be repeated with similar results)
Accurate Understanding
Being able to describe, explain, predict, and control behavior with confidence
The Scientific Method
The scientific method is a set of rules consisting of certain assumptions, attitudes, goals, and procedures for creating and answering questions about nature.
Possible Flaws
flawed logic
flawed design
flawed evidence

The Drunkard’s Search (Only looked for car keys in the light)
Theory
A logically organized set of proposals that defines, explains, organizes, and interrelates our knowledge about a phenomenon

-deductive method
Assumptions of the Scientific Method
Nature is lawful

The laws of nature are
understandable - RATIONALISM

Behavior is determined - DETERMINISM

Nature is “experienceable” - EMPIRICISM
Attitude of Scientists
Scientists hold the assumptions listed previously but also adopt an approach to their understanding of nature that is uncertain and open-minded, cautious and skeptical, and ethical.
Inductive method
The observation of specific facts to get ideas (hypotheses) about more general relationships among variables
Deductive method
The use of a theory to generate specific ideas (hypotheses) that can be tested through research
Why search the literature?
What’s been done already?

Is your hypothesis consistent with existing evidence?

What methods have been used?

What is the current theoretical thinking?
Construct
-abstract concept
-general term
-not immediately measurable

ex: love, aggression, personality, learning, prejudice
Variable
-anything that varies
-a measurable aspect of a phenomenon

ex: racial bias – an indirect preference for one racial group over another
Operational definition
-the measure
-actual manner in which a variable is measured

ex: racial bias – as measured by the Implicit Attitudes priming test
Experimental Design
-cause-and-effect relationships
between/among variables

-random assignment/equivalent groups

- Independent Variable (the variable manipulated) and Dependent Variable (the variable being measured)
Quasi-Experimental/Correlational Design
-inherent characteristics (age/sex)

-time-based characteristics (before and after)

-no cause and effect relationship

-used to assess or predict relationship between/among variables
Nonexperimental/Descriptive/Qualitative Design
-single-subject experiment
-field experiment
-naturalistic observation
-case study
-archival research
Research ethics
A set of principles that help researchers reconcile (potentially) conflicting values.
Societal Concerns (ethics)
To what extent should societal concerns and cultural values dictate the course of scientific investigation?
Professional Concerns (ethics)
-Fraudulent results (most serious crime)
-“Honorary” authorship (Giving others authorship)
-Partial or duplicate publication (Publish same research in multiple publications)
-Financial conflict of interest (Bias)
-Basic ethics – abuse of power
Participants Concerns (ethics)
-most fundamental issue
-must consider potential for physical and psychological harm
-both human and animal subjects
Getting Ideas
-personal experience
-need to solve a practical problem
Unsystematic observation
Informal, casual observations (good starting points)
Ethical Dilemmas
To determine if the potential gain in knowledge from a study outweighs the costs to research participants
Personal experience (Ideas)
unsystematic observation and systematic observation

-inductive method
Systematic observation
Developing and implementing a detailed procedure for measuring a phenomenon of interest
Who should make the subjective decision of whether the benefit outweighs the costs?
Institutional Review Boards (IRBs) although the final ethical responsibility lies with the researcher

The IRB is a committee charged with evaluating research projects in which human participants are used.
APA Code of Ethics History
- Developed in the last 40 years due to international (Nazi scientists) and domestic (Tuskegee syphilis experiment and Milgram’s obedience experiments) unethical methods.
Need to solve a practical problem (Ideas)
The application of the scientific method to real world situations
Main APA Principle
Animal and human participants must be protected from physical or psychological harm; any potential harm must be scientifically justified.
Researcher's obligation
-identify potential risks
-protect participants from physical and psychological harm
-justify remaining risks
-obtain informed consent
-debrief with full disclosure of any deception
-respect the participant’s right to decline
-ensure confidentiality
Informed consent
Participants should be “fully informed” about “all aspects” of the study including explanation about procedures and any possible adverse reactions
Active Deception
“deception by commission”

- experimenter deliberately misleads
Passive Deception
“deception by omission”

- certain information is withheld
Holmes & Bennet 1974
A compromise between informed consent and deception

- experimental group informed that they may be deceived, but not told how

- There was no change in performance
“Special” concerns
What if private behavior or socially “bad” behaviors create potential psych. harm?

ex: Milgram experiment
Debriefing
- post-experimental interview with participant in which component parts of experiment are explained
- debriefing re: any deception used by experimenter
- debriefing re: participant’s behavior (appears to reduce the anxiety-inducing effects of deceptive procedures)

Dilemma: Does it cause embarrassment?
Right to Decline
- Avoid Coercion (Incentives)

- Be aware of power differentials
Confidentiality
Ensure that any information about a research participant is revealed only to the researcher and his/her staff
Anonymity
When the identity of the participant is unknown even to the researcher and his/her staff
Superstition (Epistemology)
Gaining knowledge through subjective feelings, belief in chance, or belief in magical events
Intuition (Epistemology)
Gaining knowledge without being consciously aware of where that knowledge came from
Authority (Epistemology)
Gaining knowledge from those viewed as authority figures
Tenacity (Epistemology)
Gaining knowledge by clinging stubbornly to repeated ideas, despite evidence to the contrary
Rationalism (Epistemology)
Gaining knowledge through logical reasoning
Empiricism (Epistemology)
Gaining knowledge through observations of organisms and events in the real world
Science (Epistemology)
Gaining knowledge through both empiricism and rationalism
Hypothesis
A prediction regarding the outcome of a study, often involving the relationship between two variables
Pseudoscience
Claims that appear to be scientific but that actually violate the criteria of science
Description (goal)
Carefully observing behavior in order to describe it

Observational Method
Case Study Method
Survey Method

high external validity
low internal validity
Prediction (goal)
Identifying the factors that indicate when an event or events will occur

Correlational Method
Quasi-experimental Method
Explanation (goal)
Identifying the causes that determine when and why behaviors occur

Experimental Method

low external validity
high internal validity
Public verification
Presenting research to the public so that it can be observed, replicated, criticized and tested
Confirmations vs. Disconfirmation
Scientists do not try to prove their theories true, but rather attempt to falsify them. At the same time, we do not want to completely discount a theory based on a single study.
Where do I go to research my topic?
-Peer-reviewed journals
-Technical reports
What are the different parts of a research report and what would you expect to find in them?
Title Page
Appendix/Authors Note
Tables and Figures
basic research
the study of psychological issues in order to seek knowledge for its own sake
applied research
the study of psychological issues that have practical significance and potential solutions
correlational method
a method that assesses the degree of relationship between the variables
quasi-experimental method
research that compares naturally occurring groups of individuals, the variable of interest cannot be manipulated
participant variable
a characteristic inherent in the participants that cannot be changed
alternative explanation
the idea that it is possible that some other uncontrolled, extraneous variable may be responsible for the observed relationship
random assignment
assigning participants to conditions in such a way that every participant has an equal probability of being placed in each condition
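The random-assignment card above can be sketched in a few lines of Python. The participant IDs and condition names here are hypothetical; shuffling first and then dealing round-robin gives every participant an equal chance of each condition while keeping group sizes equal.

```python
import random

def random_assignment(participants, conditions):
    """Shuffle participants, then deal them round-robin into conditions,
    so every participant has an equal probability of each condition."""
    pool = list(participants)
    random.shuffle(pool)
    groups = {c: [] for c in conditions}
    for i, p in enumerate(pool):
        groups[conditions[i % len(conditions)]].append(p)
    return groups

# Hypothetical study: 20 participants, two conditions
groups = random_assignment(range(20), ["treatment", "control"])
# 10 participants end up in each condition
```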
nominal scale
a scale in which objects or individuals are assigned to categories that have no numerical properties

ex: ethnicity, religion, sex
ordinal scale
a scale in which objects or individuals are categorized and the categories form a rank order along a continuum

ex: class rank, letter grade
interval scale
a scale in which the units of measurement between the numbers on the scale are all equal in size

ex: temperature
ratio scale
a scale in which, in addition to order and equal units of measurement, there is an absolute zero that indicates an absence of the variable being measured

ex: weight, height, time
discrete variables
variables that usually consist of whole number units
continuous variables
variables that usually fall along a continuum and allow for fractional amounts
self-report measures
questionnaires or interviews that measure how people report that they act, think or feel

limitations: biases in reporting, forgetting, social desirability, response sets
tests (measures)
a measurement instrument used to assess individual differences
behavior measures
careful observations and recordings of behavior
psychophysiological measures
measures of bodily activity

limitations: equipment costs, reliability, lack of “naturalism”
Reactivity
a possible reaction by participants in which they act unnaturally because they are being observed
Reliability
-the degree to which the same event or behavior produces the same score each time it is measured

-the degree to which measurements are consistent and free from error

-the proportion of true-score (systematic) variance to total variance in a set of scores
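The variance-ratio definition above can be illustrated with simulated scores. All numbers here are hypothetical: a stable "true score" per person plus random measurement error on the observed test.

```python
import random
import statistics

random.seed(1)
# 500 hypothetical people: true score ~ N(100, 15), error ~ N(0, 5)
true_scores = [random.gauss(100, 15) for _ in range(500)]   # systematic variance
observed = [t + random.gauss(0, 5) for t in true_scores]    # true score + error

# Reliability = true-score (systematic) variance / total observed variance
reliability = statistics.variance(true_scores) / statistics.variance(observed)
# In theory: 15**2 / (15**2 + 5**2) = 0.90; the sample estimate lands nearby
```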
test/retest reliability
measures stability over time by administering same test on different occasions
alternative-forms reliability
measures stability over time and equivalence of items by administering an equivalent test on different occasions
split-half reliability
measures equivalency of items by correlating performance on two equivalent halves of the same test
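A minimal sketch of the split-half procedure, using made-up item scores: sum the odd-numbered and even-numbered items for each person and correlate the two halves. The final step (the Spearman-Brown correction, which projects the half-test correlation up to full-test length) is a standard companion step not named on the card.

```python
import statistics

def pearson(xs, ys):
    """Pearson correlation between two equal-length score lists."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical data: rows = 5 participants, columns = 6 test items
scores = [
    [4, 5, 4, 5, 3, 4],
    [2, 1, 2, 2, 1, 2],
    [5, 5, 4, 5, 5, 4],
    [3, 2, 3, 3, 2, 3],
    [1, 2, 1, 1, 2, 2],
]
odd_half = [sum(row[0::2]) for row in scores]    # items 1, 3, 5
even_half = [sum(row[1::2]) for row in scores]   # items 2, 4, 6

r_half = pearson(odd_half, even_half)
# Spearman-Brown correction: estimated reliability of the full-length test
r_full = 2 * r_half / (1 + r_half)
```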
interrater reliability
have at least two people count or rate behaviors and determine the percentage of agreement between them
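The interrater procedure above reduces to a percent-agreement calculation. The behavior codes and raters here are hypothetical.

```python
def percent_agreement(rater_a, rater_b):
    """Proportion of trials on which two raters coded the same behavior."""
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

# Hypothetical codes from two observers across 10 trials
a = ["hit", "miss", "hit", "hit", "miss", "hit", "miss", "hit", "hit", "hit"]
b = ["hit", "miss", "hit", "miss", "miss", "hit", "miss", "hit", "hit", "hit"]
agreement = percent_agreement(a, b)  # raters agree on 9 of 10 trials -> 0.9
```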
Validity
the extent to which a procedure measures what it is intended to measure
content validity
the extent to which a measuring instrument covers a representative sample of the domain of behavior to be measured
face validity
the extent to which a measuring instrument appears valid on its surface
criterion validity
reflects the ability of a measure either to predict future performance on a similar measure (predictive validity) or to provide results that are similar to the results that similar measures provide (concurrent validity)
construct validity
the degree to which a measuring instrument accurately measures the theoretical construct or trait that it is designed to measure

-is operational def. appropriate?
-does it reflect construct of interest?
archival measures
old records, reports, letters, etc.... that often requires content analysis

limitations: data originally gathered for different purposes, accuracy unknown
issues of variable selection
-How many? IV? DV?
-# levels of each IV? (at least 2)
-degree of manipulation / amount of treatment
-Scales of measurement?
-Any participant variables?
State variables
temporal – short-lived
-can manipulate
-state variables usually DV
Trait variables
longer-lived, more enduring
-can’t really manipulate
-can use trait characteristics as IV
Measure sensitivity
ability of measuring tool to pick up real differences (depends on degree to which DV actually varies)
Range restriction
floor: all scores low, regardless of condition
ceiling: all scores high, regardless of condition
Demand characteristics
aspects of the protocol that suggest the hypothesis or desired outcome
Placebo effects
participants tend to respond to placebo as they would to the treatment
Participant contributions to error
practice, fatigue, order effects (if multiple trials) and basic individual differences
Extraneous variable
any variable or condition, other than the IV, that is present while the IV is being manipulated and that may affect the DV
Confounding variable
extraneous variable that varies systematically and simultaneously with variables of interest (IV)
Total variance
“true score” (systematic) variance + “error” variance
Random error
extraneous variables, measurement, individual variance
Systematic error
confounding variables
Error variance
any variance in the DV that is not a result of variance in the IV
internal validity
Does the design reflect the real relationship between variables?

-affected by confounding variables
external validity
can the results be generalized to the “real world?”

-across time = temporal validity
-across situations = ecological validity
Single subject
these experiments are basically used when conducting clinical trials using behavioral modification (operant conditioning, using reward or punishment to change a person’s behavior). It is called an experiment because there is manipulation of a variable (usually called a stimulus or a reward/punishment schedule) on the part of the researcher/clinician. You can manipulate the “IV” to see how behavior (the DV) changes in the presence of the IV and when the IV is not present. This is not a true experiment primarily
because there is only one subject used.
Field experiment
a field experiment is one done out of the lab, in a more-or-less public setting. It is called an experiment because the researcher is manipulating some variable in the environment. It does not qualify as a true experiment because, typically, the researcher does not have any control over who is a participant or which condition a participant is “assigned” to.
Survey research
often used to collect information/data from a large number of subjects. A researcher can USE a survey in a true experiment as a TOOL, but survey research as a method is not truly experimental in nature. Rather, it is a way to collect large sets of demographic, attitudinal, or self-reported behavioral data.
Naturalistic observation
a method that is often used as a preliminary step to designing true experiments. It is a way to collect information about how subjects behave in their natural environment. The researcher typically
doesn’t manipulate any variables, and often does not have control over who the participants are.
Case studies
studies of particular situations or participants that occur in a singular manner. An example would be individual studies of psychiatric patients who present unique or novel symptoms, or who responded to novel clinical procedures. Often, the eventual collection of case studies that are similar in nature can lead to the generation of new hypotheses (induction), eventually leading to true experimentation on a phenomenon.
Archival research
takes advantage of data that have been collected previously, either as actual data on the topic of interest or, more usually, as documentation produced for other purposes – newspapers, journals, private communications, speeches, and birth, death, marriage, and property records. Often, this type of research requires something called content analysis – for example, a linguist may be interested in how many “masculine” vs. “feminine” adjectives are used in speeches by male or female elected officials. In this case, the documented speech is being examined for some specific aspect of its content, rather than for the intended communicative aspects of the speech.