Subjects
Subjects: the name for people who are studied and participate in experimental research
stimulus/treatment
What the independent variable in experimental research is called
pretest/posttest
Pretest: the measurement of the dependent variable of an experiment prior to the treatment
Posttest: the measurement of the dependent variable of an experiment after the treatment
experimental/control groups
Experimental group: the group that receives the treatment
Control group: the group that does not get the treatment
double-blind
A type of experimental research in which neither the subjects nor the person who deals directly with the subjects on behalf of the experimenter knows the specifics of the experiment; used to eliminate experimenter expectancy (an internal validity threat)
placebo effect
Placebo effect: the psychological effect of a false treatment; although subjects receive a false treatment rather than the actual treatment, they still show signs of receiving the treatment.
confederate
Confederate: a person who pretends to be another subject or bystander but actually works for the researcher and deliberately misleads subjects
Solomon 4-group design
An experimental design in which subjects are randomly assigned to two control groups and two experimental groups. Only one experimental group and one control group receive a pretest. All four groups receive a posttest. This design is supposed to address the issue of pretest effects (pretests can sometimes lead to improved posttest scores)
cross-sectional and longitudinal designs
A cross-sectional study is done at one point in time; a longitudinal study follows a group over time (longitudinal is often better, but more difficult to conduct)
trend study
A longitudinal study that samples different people from the same population at several points in time to track change in the population
panel study
A longitudinal study that follows the exact same people across several points in time
cohort study
A longitudinal study that follows a category of people who share a common life event (e.g., everyone born in the same year) over time
Experimental design
Arranging the parts of an experiment and putting them together
logic
Experimental logic: The experimenter’s basic logic extends commonsense thinking. The three things researchers do in an experiment are (1) begin with a hypothesis, (2) modify something in a situation, and (3) compare outcomes with and without the modification. Compared to the other social research techniques, experimental research is the strongest for testing causal relationships because the three conditions for causality are best met in experimental designs.
notation
Shorthand system for symbolizing the parts of experimental design.

O = observation of the dependent variable; X = treatment (independent variable); R = random assignment; O1 = pretest, O2 = posttest. If there are multiple Xs, they are numbered with subscripts to distinguish them (X1, X2, etc.)
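For example, using this notation, the classic pretest-posttest control-group design and the Solomon four-group design described above can be written as:

Classic pretest-posttest control-group design:
    R  O1  X  O2
    R  O1     O2

Solomon four-group design (two of the groups skip the pretest):
    R  O1  X  O2
    R  O1     O2
    R      X  O2
    R         O2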
methods of assigning subjects to groups
randomization or matching
randomization
Randomization (aka random assignment): dividing subjects into groups at the beginning of experimental research using a random process, so the experimenter can treat the groups as equivalent. Random means a case has an exactly equal chance of ending up in one or the other group.
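A minimal sketch of random assignment in Python (the subject IDs are hypothetical; only the standard library is used):

    import random

    subjects = ["s01", "s02", "s03", "s04", "s05", "s06"]  # hypothetical subject IDs
    random.shuffle(subjects)            # every ordering is equally likely
    half = len(subjects) // 2
    experimental = subjects[:half]      # will receive the treatment (X)
    control = subjects[half:]           # will not receive the treatment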
matching
Matching: researchers match cases in groups on certain characteristics, such as age and sex. Matching is an alternative to random assignment, but it is infrequently used. Matching presents a problem: individual cases differ in thousands of ways, so how can the researcher decide which characteristics are relevant, and can exact matches even be located?
Equivalent groups.
Mandatory in an experiment; only achieved through random assignment
Isolating treatment effects and establishing causality.
A researcher wants to control all aspects of the experimental situation to isolate the effects of the treatment and eliminate alternative explanations. Researchers use deception (intentionally misleading subjects) to control the experimental setting. The use of pretests, posttests, manipulation checks, and control groups helps to establish causality.
Strengths and weaknesses of experimental method
Strengths and weaknesses of the experimental method: the real strength of experimental research is its control and logical rigor in establishing evidence for causality. In general, experiments tend to be easier to replicate, less expensive, and less time consuming than the other techniques. Experimental research also has its limitations: (1) some questions cannot be addressed using experimental methods because control and experimental manipulation are impossible; (2) experiments usually test one or a few hypotheses at a time, which fragments knowledge and makes it necessary to synthesize results across many research reports; and (3) external validity is another potential problem because many experiments rely on small nonrandom samples of college students.
laboratory and field experiments
Laboratory experiments have strong internal validity because situations are carefully controlled to isolate the effects of the treatment. However, findings may not apply outside the experimental setting (low external validity). Field experiments have external validity because they take place in real-world settings. However, internal validity is sacrificed because control and equivalent groups are often lacking.
Validity issues
include the Hawthorne effect, history, maturation, mortality, pretest effects, contamination, subject demoralization and rivalry, cover story, manipulation check, debriefing, and internal and external validity
Hawthorne effect
An effect of reactivity named after a famous case in which subjects reacted to the fact that they were in an experiment more than they reacted to the treatment
history
A threat to internal validity due to something that occurs and affects the dependent variable during an experiment, but which is unplanned and outside the control of the experimenter
maturation
A threat to internal validity due to natural processes of growth, boredom, and so on, that occur to subjects during the experiment and affect the dependent variable
mortality
A threat to internal validity due to subjects failing to participate through the entire experiment
pretest
Threatens internal validity because more than the treatment alone affects the dependent variable. The Solomon four-group design helps a researcher detect testing effects.
contamination
A threat to internal validity that occurs when the treatment “spills over” from the experimental group and control group subjects modify their behavior because they learn of the treatment
subject demoralization and rivalry
Subject demoralization: Dr. Marshall discussed demoralization in the “Laughter is the Best Medicine” example: the control group heard the laughter of the experimental group watching the funny movie and wondered why they weren’t having as much fun as the other group.
Rivalry: the control group changes its performance to spite the experimenters or the experimental group
cover story
Cover story: the deception told to subjects to create experimental reality
manipulation check
Manipulation check: a check of the validity of X; make sure the subject is interpreting the treatment in the way it was designed (e.g., in the Good Samaritan experiment, check that the subject saw the confederate and knew he/she was in need of help: when subjects arrive at the destination, they are asked if they saw anything unusual on the way over, which checks for X)
debriefing
Debriefing: ethical obligation to explain to subject the true meaning of the experiment after using deception
internal and external validity
Internal validity: the ability of experimenters to strengthen a causal explanation’s logical rigor by eliminating potential alternative explanations for an association between the treatment and the dependent variable through an experimental design

External validity: the ability to generalize from experimental research to settings or people that differ from the specific conditions of the study
Interview v. questionnaire
A questionnaire is a self-administered survey. In an interview there is interaction between the researcher and the respondent; an interview is more reactive and can include open-ended questions
Element
if you sample colleges, then sample students from those colleges, the students are the sampling elements
unit
units of analysis: for example, households or colleges, not always individuals
population
Population- the large pool of elements or cases; defined by the unit being sampled, the geographical location, and the temporal boundaries of the population.
sample
Sample- the group you actually study, chosen to be representative of the larger population you are observing
parameter
a characteristic of the entire population that is estimated from a sample.
statistic
a numerical estimate of a population parameter computed from a sample
sampling frame
a list of cases in a population, or the best approximation of it
weighting
Weighting- in an index, the researcher values or weights some items more than others.
Unless otherwise stated, assume that an index is unweighted. Likewise, unless you have a good theoretical reason for assigning different weights, use equal weights. Weighting changes the theoretical definition of the construct and can produce different index scores, but in most cases weighted and unweighted indexes yield similar results.
normal curve
Normal Curve- a bell-shaped curve; the shape the sampling distribution takes when there is a huge number of random samples. The midpoint of the curve approaches the population parameter as the number of samples increases.
unit of analysis
Unit of Analysis- the kind of empirical case or unit that a researcher observes, measures, and analyzes in a study.
sampling distribution
Sampling Distribution- the set of many random samples; a distribution that shows the frequency of different sample outcomes from many separate random samples. The pattern in the sampling distribution suggests that, over many separate samples, results near the true population parameter are more common than any other result. When many different random samples are plotted, the sampling distribution looks like a bell curve.
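A small simulation sketch in Python (hypothetical population; standard library only) showing that the statistics from many separate random samples pile up around the population parameter in a bell shape:

    import random
    import statistics

    population = [random.uniform(0, 100) for _ in range(10_000)]  # hypothetical population
    parameter = statistics.mean(population)                       # true population mean

    sample_means = [
        statistics.mean(random.sample(population, 50))  # one sample's statistic
        for _ in range(1_000)                           # many separate random samples
    ]

    # A histogram of sample_means would be roughly bell-shaped, centered
    # near the parameter (the sampling distribution).
    print(parameter, statistics.mean(sample_means))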
inferential statistics
Inferential Statistics- a branch of applied mathematics or statistics based on a random sample. It lets a researcher make precise statements about the level of confidence he or she has in the results of a sample being equal to the population parameter.
hidden population
Hidden Population- people who engage in clandestine, deviant, or concealed activities and who are difficult to locate and study. (e.g. illegal drug users, prostitutes, homosexuals, people with HIV/AIDS, homeless people and others)
Logic of sampling
- First, establish what you would like to talk about (the population to study; get a sampling frame)
- Then, carry out the sampling process
- Finally, observe what you actually get in the data (the sample)
Issues of representativeness
include size, heterogeneity v. homogeneity, probability v. nonprobability designs, and the logic and interpretation of standard error, confidence intervals, and confidence levels
size
- Sample size depends on the kind of data analysis the researcher plans, on how accurate the sample has to be for the researcher’s purposes, and on population characteristics.
- A large sample size alone doesn’t generate a representative sample.
- Samples should have a good sampling frame.
- Good samples for qualitative purposes can be very small.
heterogeneity v. homogeneity.
Heterogeneity- everything else being equal, larger samples are needed if one wants high accuracy, if the population has a great deal of variability, or if one wants to examine many variables in the data analysis simultaneously. The more heterogeneous the sample, the larger the sampling error.
Homogeneity- smaller samples are sufficient when less accuracy is acceptable, when the population is homogeneous, or when only a few variables are examined at a time.
Probability v. nonprobability designs
Probability: better with large samples and generally more accurate; higher external validity; allows for estimating error; must have an exhaustive sampling frame and use random selection.
Nonprobability: used when you don't have a sampling frame, when there is no time or money for probability sampling, for pretesting, for small samples, or for theoretical reasons.
Logic & interpretation of standard error
Logic & interpretation of standard error- also referred to as sampling error: the deviation between sample results and the population parameter due to random processes. It specifies the probable margin of error in the estimate.
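For example, assuming a sample standard deviation of 15 and a sample of 225 cases, the standard error of the mean would be 15 / √225 = 15 / 15 = 1, so sample means would typically deviate from the population mean by about 1 point.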
confidence interval
Confidence Interval- a range of values, usually a little higher and lower than a specific value found in a sample, within which a researcher has a specified and high degree of confidence that the population parameter lies. We select only one sample and do not know the population parameter, so we try to estimate it; because the sample was randomly selected, we can use probability theory to draw a band around the sample statistic.
confidence level
Confidence Level- the estimated odds that the parameter falls within the interval. We can estimate these odds because probability theory specifies that 68% of a sampling distribution will produce estimates falling within one standard error of the parameter. (This extends what we know about the normal curve, in which 68% of values fall within 1 standard deviation of the mean, 95% fall within 2 standard deviations, and 99.7% fall within 3 standard deviations.)
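A minimal sketch of computing a 95% confidence interval for a sample mean in Python (the data are hypothetical; 1.96 is the conventional z-value for 95% confidence):

    import math
    import statistics

    sample = [72, 68, 75, 71, 69, 74, 70, 73, 76, 67]       # hypothetical sample
    mean = statistics.mean(sample)
    se = statistics.stdev(sample) / math.sqrt(len(sample))  # standard error of the mean
    z = 1.96                                                 # z-value for 95% confidence
    low, high = mean - z * se, mean + z * se
    print(f"95% CI: {low:.2f} to {high:.2f}")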
simple random
prob sample; uses random numbers table
systematic random
prob sample; fixed interval; k is the interval size: k = sampling frame size / desired sample size
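For example, assuming a sampling frame of 10,000 names and a desired sample of 500, k = 10,000 / 500 = 20: pick a random start between 1 and 20, then select every 20th name.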
stratified random
prob sample; dividing into homogenous strata; good for sampling rare elements
area/multistage cluster
prob sample; use sampling units (colleges), calculate standard error, sample elements (students), calculate standard error, combine SEs to estimate confidence interval
convenience
non prob; survey people who walk by; susceptible to bias
quota
non prob; asks a filter question to match sample characteristics to the aggregate in proportion; limited by how much matching you can do
purposive
non prob; rejects EPSEM (equal probability of selection method) even though a sampling frame may exist; seeks out ideal types, key positions, etc.
snowball
non prob; sample of last resort; accumulates slowly, sample size may be 10-20; referrals give homogenous samples; good for hidden populations
Definitions of scale and index
Scale- A measure in which a researcher captures the intensity, direction, level, or potency of a variable construct. It arranges responses or observations on a continuum. A scale can use a single indicator or multiple indicators; most are at the ordinal level of measurement.
Index- A measure in which a researcher adds or combines several distinct indicators of a construct into a single score. This composite score is often a simple sum of the multiple indicators. It is used for content and convergent validity. Indexes are often measured at the interval or ratio level.
why use composite measures
Content validity- covers multidimensional variables (coverage over consensus).
Reliability- consistency of measurement; the use of multiple measures enhances reliability.
Measurement refinement.
intensity structure
The ordering of items in a scale from least to most intense (easiest to hardest to accept); a respondent who accepts a more intense item is expected to also accept the less intense ones (as in the Bogardus and Guttman scales)
scoring
Amalgamate (combine) individual item scores into a summary measure
rule of parsimony
In science, less is best: one score built from multiple indicators. (Redundant data violates the rule of parsimony.)
Index/scale similarities and differences
Scales and Indexes must be mutually exclusive, exhaustive, and unidimensional.

1.Mutually Exclusive- An individual or a case fits into one and only one attribute of a variable.

2.Exhaustive- All cases fit into one of the attributes of a variable.

3.Unidimensional- All the items in a scale or index fit together, or measure a single construct. Combine several specific pieces of information into a single score or measure, have all the pieces work together and measure the same thing.

Both indexes and scales produce ordinal or interval level measures of a variable
Both can be combined into one measure
Scales and indexes give a researcher more information about variables and make it possible to assess the quality of measurement. Both increase reliability and validity, and they simplify the information that is collected.
index and scale construction
Purpose- Creates an ordinal, interval, or ratio measure of a variable expressed as a numerical score. (For example, an index can help measure the most desirable place to live based on unemployment rate, commuting time, crime rate, recreation opportunities, weather, etc.)
Weighting (unless otherwise stated, assume that an index is unweighted; a short sketch contrasting the two appears after this card):
Unweighted index- Gives each item equal weight. It involves adding up the items without modification, as if each were multiplied by 1 (or by -1 for items that are negative).
Weighted index- The researcher values or weights some items more than others. The size of the weights can come from theoretical assumptions, the theoretical definition, or a statistical technique such as factor analysis. Weighting changes the theoretical definition of the construct.
Purpose of a scale- Creates an ordinal, interval, or ratio measure of a variable expressed as a numerical score. Scales are common when a researcher wants to measure how an individual feels or thinks about something; some call this the hardness or potency of feelings. Scales serve two purposes: they help in the conceptualization and operationalization processes by showing the fit between a set of indicators and a single construct, and they produce quantitative measures that can be used with other variables to test hypotheses.
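A minimal sketch contrasting the unweighted and weighted indexes described above (the item scores and weights are hypothetical; in practice the weights would come from theory or factor analysis):

    # Hypothetical indicator scores for one case (e.g., four quality-of-life items)
    items = [3, 4, 2, 5]

    # Unweighted index: each item effectively multiplied by 1, then summed
    unweighted = sum(items)                                # 14

    # Weighted index: hypothetical weights for illustration only
    weights = [2, 1, 1, 0.5]
    weighted = sum(w * x for w, x in zip(weights, items))  # 14.5

    print(unweighted, weighted)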
item analysis
does the item have to do with what you're measuring? does it belong in the scale or index?
internal and external validation
Internal Validity- Means that there are no errors internal to the design of the research project. Used primarily in experimental research to talk about possible errors or alternative explanations of results that arise despite attempts to institute controls.
**High Internal Validity- Means that there are few such errors
**Low Internal Validity- Means that such errors are likely.
External Validity- The ability to generalize findings from a specific setting and small group to a broad range of settings and people. It addresses the question: if something happens in a lab or among a particular group of subjects, can the findings be generalized to the ‘real’ world or to the general public?
** High External Validity- Means that the results can be generalized to many situations and many groups of people.
**Low External Validity- Means that the results apply only to a very specific setting.
Likert
Widely used and very common in survey research. Likert scales usually ask people to indicate whether they agree or disagree with a statement and require a minimum of two categories, such as “agree” and “disagree” (examples of Likert scale types appear in Table 5.4 on p. 131). Response choices must be evenly balanced. Likert scales are at the ordinal level of measurement, where responses only indicate a ranking. Response set (response bias or response style)- the tendency of some people to answer a large number of items in the same way (usually agreeing) out of laziness or a psychological predisposition.
Two limitations: different combinations of several scale items can result in the same overall score, and the response set is a potential danger.
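A minimal Likert scoring sketch in Python (the items, the 1-5 response format, and the reverse-coded negatively worded item are assumptions for illustration, not from the card):

    # Hypothetical responses: 1 = strongly disagree ... 5 = strongly agree
    responses = {"q1": 4, "q2": 5, "q3": 2}  # q3 is negatively worded
    reverse_coded = {"q3"}

    score = sum(
        (6 - v) if q in reverse_coded else v  # reverse: 5 -> 1, 4 -> 2, ...
        for q, v in responses.items()
    )
    print(score)  # note: different response patterns can yield the same total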
Index v. Scale
Indexes are easier to construct; You can combine several scale questions in a composite index if they are all measuring a single construct
Bogardus
Measures the social distance separating ethnic or other groups from each other. It is used with one group to determine how much distance it feels toward a target or ‘out-group’.
People respond to a series of ordered statements; those that are most threatening or most socially distant are at one end, and those that are least threatening or most socially intimate are at the other end. The logic of the scale assumes that a person who refuses contact or is uncomfortable with the socially distant items will also refuse the socially closer items.
Coding is different: 0 = yes, 1 = no. The scale has an intensity structure empirically; questions go from easiest to hardest.
Two limitations: first, the researcher needs to tailor the categories to a specific out-group and social setting; second, it is not easy to compare how a respondent feels toward several different groups unless the respondent completes a similar social distance scale for all out-groups at the same time.
Guttman scales
Differs from Bogardus and Likert scales because researchers use it to evaluate data after they are collected. Guttman scaling begins with measuring a set of indicator items. These can be questionnaire items, votes, or observed characteristics. Indicators are generally measured in a yes/no or present/absent fashion.
The logical relationship among items in Guttman Scaling is hierarchical.
Scalogram Analysis- Lets a researcher test whether a hierarchical relationship exists among the items.
Scalable- Capable of forming a Guttman Scale, if a hierarchical pattern exists.
The strength or degree to which items can be scaled is measured with statistics that assess whether the responses can be reproduced based on a hierarchical pattern.
Most range from zero to 100 percent: a score of zero indicates a random pattern (no hierarchical pattern), while a score of 100 indicates that all responses fit the hierarchical or scaled pattern.
coefficient of reproducibility
w/ Guttman scales; CR = 1 - (number of errors / number of guesses) = 1 - (nonscale types / (N cases × N items))
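A worked sketch under assumed numbers: with 100 cases answering 5 items each (500 total responses) and 25 responses that break the hierarchical pattern, CR = 1 - 25/500 = 0.95. In Python:

    n_cases = 100   # hypothetical number of respondents
    n_items = 5     # hypothetical number of Guttman scale items
    errors = 25     # responses that break the hierarchical pattern

    cr = 1 - errors / (n_cases * n_items)  # coefficient of reproducibility
    print(cr)  # 0.95; 1.0 would mean every response fits the scaled pattern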
Univariate analysis
describes one variable in isolation
frequency distribution
the easiest way to describe the numerical data of one variable. It is a table that shows the distribution of cases across the categories of one variable (e.g., the percentage of cases in each category)
table construction
Label and title the table, and say when the data were collected; list the categories (indicators) and the frequency of each; give N and the number of missing cases
relative v. absolute frequency
relative frequency refers to percentages; absolute frequency refers to the raw number of respondents
central tendency
Measures of the center of the frequency distribution that help summarize the information about one variable in a single number, e.g., the mean, median, and mode, which are often called averages.

All three will be equal if the frequency distribution follows a normal (bell-shaped) curve; if it is skewed, the three will not be equal.
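A minimal sketch computing the three averages in Python (the scores are hypothetical):

    import statistics

    scores = [2, 3, 3, 4, 5, 5, 5, 6, 7]   # hypothetical interval-level data
    print(statistics.mean(scores))          # arithmetic average (about 4.44)
    print(statistics.median(scores))        # middle score (5)
    print(statistics.mode(scores))          # most frequent score (5)
    # mean != median here, so this distribution is slightly skewed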
variation
Spread, dispersion, and variability around the center; no variation means everyone is the same.
- Variation can be measured in 3 ways: range, percentile, and standard deviation (see the sketch after this card).
- Range: the distance between the largest and smallest scores.
- Percentile: tells the score at a specific place within the distribution.
- Standard deviation: the most comprehensive and widely used measure.
- Requires an interval or ratio level of measurement.
- Tells us who is more similar: the lower the S.D., the more similar the subjects in the group are.
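A minimal sketch computing the three measures of variation in Python (the scores are hypothetical; statistics.quantiles requires Python 3.8+):

    import statistics

    scores = [55, 60, 62, 65, 70, 71, 75, 80, 90]  # hypothetical scores

    value_range = max(scores) - min(scores)         # range: largest minus smallest
    quartiles = statistics.quantiles(scores, n=4)   # 25th, 50th, 75th percentiles
    sd = statistics.stdev(scores)                   # sample standard deviation

    print(value_range, quartiles, sd)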
bell curve
A bell curve is formed from a frequency distribution. In a bell curve (or normal curve), the three measures of central tendency are equal