67 Cards in this Set

1. Parsimony:
The idea that, all else being equal, one should prefer the simpler explanation over the more complex one.
2. Testability:
Theories should be confirmable or disconfirmable using currently available research techniques.
3. Operational Definitions:
Definition of a theoretical construct that is stated in terms of concrete observable procedures: e.g. define “helping” as the number of minutes a child spends assisting a friend with a problem.
4. Hypothesis:
A prediction about a specific event or set of events
5. Method of Induction:
making specific observations to draw general conclusions (small to big)
6. Problem of Induction:
We can never be 100% certain of a theory supported by induction, because a single disconfirming instance among millions of observations is enough to disprove it.
a. Validation:
Type of hypothesis testing aimed at CONFIRMING a hypothesis.
b. Falsification:
Type of hypothesis testing aimed at DISPROVING a hypothesis.
c. Qualification:
Type of hypothesis testing aimed at identifying the conditions under which a theory is and is not true.
9. Case studies:
A systematic analysis of the experiences of a particular person or group of people. Case studies often serve as inspiration for theories or experiments.
10. Sampling Error:
the likely discrepancy between the results one obtains in a specific survey sample and the results one would have been likely to have obtained from the entire population. This estimate of potential inaccuracy is also known as the margin of error. All else being equal, surveys from larger samples usually suffer from less sampling error.
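The margin of error mentioned above has a standard large-sample formula for a proportion. A minimal Python sketch (the helper name `margin_of_error` is illustrative, not from the cards):

```python
import math

def margin_of_error(p_hat, n, z=1.96):
    """Approximate margin of error for a sample proportion.

    p_hat: observed proportion in the sample (e.g. 0.5)
    n: sample size
    z: critical value (1.96 corresponds to a 95% confidence level)
    """
    return z * math.sqrt(p_hat * (1 - p_hat) / n)

# All else being equal, larger samples yield smaller sampling error:
moe_100 = margin_of_error(0.5, 100)    # roughly +/- 10 percentage points
moe_1000 = margin_of_error(0.5, 1000)  # roughly +/- 3 percentage points
```

This is why national polls of about 1,000 respondents typically report a margin of error near 3 percentage points.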
11. Correlational methods:
An approach to research in which researchers gather a set of observations about a group of people and test for associations (i.e. correlations) between different variables. Examples include questionnaire/interview research and archival research.
a. Questionnaire Research:
Investigators ask participants to respond to a standard set of given questions about their background, mood, attitudes, or experiences.
b. Archival Research:
Research in which investigators examine naturally existing public records to test a theory or hypothesis
12. Observational Research:
Research in which investigators record the behavior of people in their natural environments without influencing people’s behaviors. Observations are usually made secretly or unobtrusively.
a. Third-Variable problem:
The problem of confounds, especially as it applies to passive observational research (i.e. correlational methods). Correlation does not equal causation.
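The third-variable problem can be made concrete with a small simulation (names and numbers are illustrative, not from the cards): a hidden common cause drives two variables that never influence each other, yet they correlate strongly.

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

random.seed(0)
# Hot weather (the third variable) drives both ice-cream sales and
# swimming accidents; neither causes the other.
temperature = [random.gauss(0, 1) for _ in range(1000)]
ice_cream = [t + random.gauss(0, 0.5) for t in temperature]
accidents = [t + random.gauss(0, 0.5) for t in temperature]
r = pearson_r(ice_cream, accidents)  # strongly positive despite no causal link
```

An observed correlation between ice-cream sales and accidents here says nothing about causation; only the unmeasured temperature variable does the causal work.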
13. Longitudinal or Prospective design:
a non-experimental research design (often questionnaire, interview and/or observational techniques) in which researchers track participants over time (e.g. to track changes in children’s social development or adults’ personalities).
Used to address the problem of reverse causality!
What problem does the longitudinal/prospective experimental design address?
The problem of reverse causality
• Validity:
The approximate truth of measures, inferences, or conclusions
• Construct Validity:
Degree to which your operationalizations, measures or experiments capture (or generalize to) the concepts you’re hoping to study
• Most important kind of validity – without this, you’re not studying what you think you are studying!
• Face Validity:
Degree to which your operationalizations, measures or experiments, “on their face,” seem to capture the concepts you’re hoping to study
• Subjective criterion, but you can validate your sense by checking with others (experts)
• EX: Constructing a measure of emotional expressivity
o Do the items seem, on the face of them, to capture what we mean when we think about expressivity?
• Content Validity:
Provides a set of criteria against which to judge the validity of any measure; allows comparisons of different measures
• EX: does your expressivity questionnaire meet good criteria for measuring individual differences in expressive behaviors?
What is the most important type of Validity?
Construct Validity
• Predictive Validity:
Does your operationalized measure predict something it should theoretically be able to predict?
• EX: Does your measure of everyday emotional expressivity predict other indices of emotional response:
o Inversely correlate with a measure of emotion control?
• Concurrent Validity:
Does your operationalized measure predict a kind of group membership it should theoretically be able to predict?
• EX: Does your measure of everyday emotional expressivity predict whether you’re:
o Male or female?
• Convergent Validity:
Does your operationalized measure correlate with measures of the same or related constructs?
• EX: Does your measure of everyday emotional expressivity predict:
o How expressive your friends would judge you to be?
• Discriminant Validity:
Does your operationalized measure predict something different than what other theoretically un-related measures predict?
• EX: Do the subscales of your emotional expressivity measure predict different things?
o For example, responses to positive or negative film clips?
• External Validity:
Degree to which the conclusions in your study would hold for other persons in other places and at other times
• Sample from population you care about (not always possible)
• Reliability:
Consistency or replicability of measurements across different experiments and/or at different times.
o If a measure has no error, it is perfectly reliable.
o Reliability is, essentially, the “proportion of truth” in a measure.
• Test-retest reliability:
Administer the same measure again and correlate the two sets of scores.
• Higher with shorter intervals
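A minimal sketch of test-retest reliability, using a hand-rolled Pearson correlation and made-up scores for five participants at two sessions (all numbers are illustrative):

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical scores for five participants at two testing sessions.
time1 = [10, 14, 9, 16, 12]
time2 = [11, 13, 10, 15, 13]
test_retest_reliability = pearson_r(time1, time2)  # close to 1 = highly reliable
```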
• Internal consistency reliability:
For a given measure, how consistent are scores on different items measuring the same construct?
• Average inter-item correlation
• Split-half
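A sketch of the split-half approach with a hypothetical 4-item questionnaire: correlate odd-item and even-item totals, then apply the standard Spearman-Brown correction to estimate full-test reliability. The data are made up for illustration.

```python
def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Hypothetical 4-item questionnaire answered by six participants
# (rows = participants, columns = items).
responses = [
    [4, 5, 4, 5],
    [2, 1, 2, 2],
    [3, 3, 4, 3],
    [5, 4, 5, 5],
    [1, 2, 1, 1],
    [3, 4, 3, 4],
]

odd = [row[0] + row[2] for row in responses]   # items 1 and 3
even = [row[1] + row[3] for row in responses]  # items 2 and 4
r_half = pearson_r(odd, even)
# Spearman-Brown correction estimates full-test reliability from the half-test r.
split_half_reliability = 2 * r_half / (1 + r_half)
```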
• Inter-rater Reliability:
If you have something coded by raters, do they agree?
• EX: rating videotapes of people watching emotional videos
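A standard chance-corrected agreement index for two raters is Cohen's kappa (not named on the card, but widely used for exactly this situation). A sketch with two hypothetical raters coding ten video clips:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Agreement between two raters, corrected for chance agreement."""
    n = len(rater_a)
    observed = sum(x == y for x, y in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    # Chance agreement: probability both raters pick a category independently.
    expected = sum(counts_a[c] * counts_b[c] for c in counts_a) / n ** 2
    return (observed - expected) / (1 - expected)

# Two hypothetical raters coding the same ten clips as happy/sad/neutral.
a = ["happy", "sad", "happy", "neutral", "sad",
     "happy", "neutral", "sad", "happy", "sad"]
b = ["happy", "sad", "happy", "neutral", "happy",
     "happy", "neutral", "sad", "happy", "sad"]
kappa = cohens_kappa(a, b)  # 1.0 = perfect agreement, 0 = chance level
```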
o Nominal (categorical) Scales:
Simplest kind of measurement scale. Categorical: involves simple, potentially arbitrary, non-numerical names or categories, e.g. gender, favorite color, pet ownership.
o Interval scales:
Scale that uses real numbers designating amounts to reflect relative differences in magnitude. Can be negative, e.g. temperature, SAT scores, self-esteem scores, etc.
o Ordinal Scales:
Measurement scale that uses order or ranking, e.g. birth order, ranking in a foot race.
o Ratio Scales:
Measurement scales that use real numbers designating equal amounts to reflect relative differences in magnitude. Cannot be negative. Have the same properties as interval scales, except that ratio scales always have a true zero point (at which none of the quantity under consideration is present), e.g. David is 6.1 times as heavy as Melissa.
• Unipolar Scales:
assess a psychological dimension that is anchored at the low end and high end (e.g. from “not at all happy” to “extremely happy”; from 0 – 10)
• Bipolar Scales:
Anchored at the Low and high ends by two opposing concepts (e.g. disagree to agree; negative to positive) and for which there is a meaningful midpoint (e.g. neither disagree nor agree; neutral). Typically use numbering scales that range from negative to positive with 0 as the midpoint.
• Pilot Testing:
A preliminary or practice study conducted before the full-blown version of the study.
• Floor Effects:
A methodological problem when everyone in a sample responds at the same low level on a survey question or dependent measure.
• Ceiling Effects:
A methodological problem when everyone in a survey responds at a high level on a survey question or dependent measure
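Floor and ceiling effects can be screened for by checking how much of the sample piles up at the scale endpoints. A sketch (function name and data are illustrative, not from the cards):

```python
def endpoint_clustering(scores, scale_min, scale_max):
    """Fraction of responses at each endpoint of the scale. A large value
    at the minimum suggests a floor effect; at the maximum, a ceiling effect."""
    n = len(scores)
    at_floor = sum(s == scale_min for s in scores) / n
    at_ceiling = sum(s == scale_max for s in scores) / n
    return at_floor, at_ceiling

# Hypothetical 1-7 ratings where most respondents hit the top of the scale.
ratings = [7, 7, 6, 7, 7, 7, 5, 7, 7, 6]
at_floor, at_ceiling = endpoint_clustering(ratings, 1, 7)
# at_ceiling is 0.7: the item cannot discriminate among the high scorers.
```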
• Control Group:
a group used as a standard for comparison for assessing the effects of experimental manipulations or psychological treatment.
• Selection Bias:
Choosing research participants from a nonrepresentative sample by using imperfect or biased sampling techniques rather than true random sampling. This typically represents a threat to external validity.
• Nonresponse Bias:
The bias that occurs in research when a substantial proportion of those invited to take part in a study refuse to do so. If those who agree are different from those who refuse, the resulting bias is similar to selection bias and represents a threat to external validity
• History:
changes that occur over time in a very large group of people such as those living in a city, state or nation, or culture. When an investigator conducts a pre-test, post-test study in which all participants receive a treatment, changes due to history may masquerade as treatment effects. Thus history may be a threat to internal validity.
• Maturation:
changes that occur over time in a specific person or group of people due to normal development or experience (e.g. growth or learning). When an investigator conducts a pre-test, post-test study in which all participants receive a treatment, changes due to maturation may masquerade as treatment effects. Thus maturation may be a threat to internal validity.
• Testing effects:
the tendency for most people to perform better on a test or personality measure the second time they take the test or measure. When an investigator conducts a pre-test, post-test study in which all participants receive a treatment, changes due to testing effects may masquerade as treatment effects. Thus testing effects may be a threat to internal validity.
• Attrition (experimental mortality):
The failure of some of the participants to complete the study. In an experiment or quasi-experiment, this may include homogeneous attrition (when attrition rates are equal across experimental conditions) = a threat to external validity; or heterogeneous attrition (when attrition rates are different across experimental conditions) = a threat to internal validity.
homogeneous attrition:
(when attrition rates are equal across experimental conditions) = a threat to external validity
heterogeneous attrition:
(when attrition rates are different across experimental conditions) = a threat to internal validity
• Demand Characteristics:
Aspects of an experiment that subtly suggest how participants are expected to behave. They often contribute to the problem of participant expectancies.
• Participant Reaction Bias:
The bias that occurs when research participants realize they are being studied and behave in ways in which they normally would not behave. Most forms of participant reaction bias threaten the internal validity of an investigation.
o Participant Expectancies:
The form of participant reaction bias that occurs when participants consciously or unconsciously try to behave in ways they believe to be consistent with the experimenter’s hypothesis. This is a threat to internal validity.
o Participant Reactance:
The form of participant reaction bias that occurs when participants attempt to assert their sense of personal freedom by choosing to behave in ways that they believe to be in opposition to the experimenter’s expectations. This is a threat to internal validity.
o Evaluation Apprehension:
The form of participant reaction bias that occurs when participants attempt to behave in whatever way they think will portray them most favorably. This is a threat to internal validity.
• Cover Story:
A false story about the nature and purpose of a study. Researchers use a cover story to divert participants’ attention from the true purpose of the study when they are concerned about participant reaction bias (i.e. when they believe that participants would not behave naturally if they knew the study’s true purpose). Most forms of deception in research are part of a cover story.
• Experimenter bias:
The bias that occurs in research when the investigator’s expectations about participants lead to false support for those expectations. One form of experimenter bias occurs when experimenters interpret ambiguous behaviors in ways that are consistent with their expectations. A second form occurs when experimenters actually treat participants differently in different experimental conditions and thus influence participants’ real behavior in hypothesis-consistent ways.
• Double-Blind Procedure:
A method of controlling for both participant expectancies and experimenter bias by keeping both research participants and experimenters unaware of participants’ treatment conditions during an experiment.
• Artifact:
A variable that is held constant in a study but which influences the relation between the independent or predictor variable and the dependent variable. EX: a drug study that only includes male participants. If the drug being studied works for men but not for women, gender would be an artifact. This is a threat to external validity.
• Manipulation:
systematically varying the level of an independent variable in an experiment, with the goal of seeing whether doing so has any effect on the measured level of a dependent variable.
• Random Assignment:
A technique for assigning participants to conditions in an experiment such that every participant has an equal chance of being placed in any condition.
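A sketch of random assignment, assuming 20 participants and two conditions (the helper name and seed are illustrative): shuffling and then dealing round-robin gives every participant an equal chance of each condition and, in expectation, equates the groups on person confounds.

```python
import random

def randomly_assign(participants, conditions, seed=None):
    """Shuffle participants and deal them into conditions round-robin."""
    rng = random.Random(seed)
    shuffled = participants[:]
    rng.shuffle(shuffled)
    groups = {c: [] for c in conditions}
    for i, person in enumerate(shuffled):
        groups[conditions[i % len(conditions)]].append(person)
    return groups

# Twenty hypothetical participant IDs split into two equal groups.
groups = randomly_assign(list(range(1, 21)), ["treatment", "control"], seed=42)
```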
• Confounds:
A design problem in which some additional variable exists that may influence the dependent variable and which varies systematically along with the independent variable
o Person Confound:
A confound in nonexperimental research in which a variable (e.g. income level) seems to cause something because people who are high or low on this variable happen to be high or low on some individual-difference variable (e.g. education level) that is associated with the outcome variable of interest (e.g. scores on an IQ test). This can be limited by random assignment.
o Environmental Confound:
confound that occurs when a measured variable (e.g. depression) seems to cause something because people who are high or low on this variable also happen to be high or low on some contextual or situational variable (e.g. the recent loss of a loved one) that is associated with the outcome variable of interest (e.g. physical interest)
o Operational confound:
A confound that occurs when a measure designed to assess a specific construct (e.g. self-esteem) inadvertently measures something else as well.
• Noise:
an extraneous variable in an experiment that a) influences the dependent variable, but b) is evenly distributed across experimental conditions. This is not a threat to validity, but may decrease a researcher’s ability to detect the effect that they are interested in.
• Artificiality:
the lack of realism in experiments. The idea that experiments are “artificial”. Concerns external validity.