46 Cards in this Set

Purpose of Experimental Research
to control relationships among a set of variables in order to enable evaluation of causal relationships between specific variables of interest to the study.
Basic Issues of Experimental
In designing an experiment, decisions must be made with respect to four basic issues: the independent variable(s) to be manipulated, the dependent variable(s) to be selected and measured, the extraneous variables to be controlled, and the assignment of test units to the experimental treatments.
Manipulation of the Independent Variable
Experimental Treatment, Experimental Group, several experimental treatment levels, several independent variables, and control group
Selection and measurement of the dependent variables
based on the purpose of the research; considerations include measurement and latency
Test Units
persons or entities whose responses are to be measured—individuals, teams, departments, school systems, supermarkets, airports, etc.
Sample Selection Error
bias introduced by inappropriate selection of test units—MWC business students may not be representative of total student population
Random Sampling Error
statistical fluctuation resulting from chance in selecting test units for the sample—unavoidable
Randomization
the assignment of subjects to treatments on the basis of chance. Generally considered the best procedure for controlling extraneous variables; scatters or evens out the potential effects of extraneous variables
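A minimal Python sketch of chance-based assignment (the subject IDs, group names, and fixed seed here are hypothetical, chosen only for illustration):

```python
import random

def randomize(subjects, treatments, seed=0):
    """Randomly assign each subject to one treatment group of (near-)equal size."""
    rng = random.Random(seed)   # fixed seed only so the illustration is reproducible
    pool = list(subjects)
    rng.shuffle(pool)           # chance, not judgment, determines placement
    groups = {t: [] for t in treatments}
    for i, subject in enumerate(pool):
        groups[treatments[i % len(treatments)]].append(subject)
    return groups

groups = randomize(range(1, 21), ["treatment", "control"])
print({t: len(g) for t, g in groups.items()})  # two groups of 10
```

Because the shuffle is random, any extraneous subject characteristic (age, gender, income) is scattered across both groups rather than concentrated in one.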
Matching
assigning test units to experimental treatments so that groups are matched on extraneous variables that might distort results, such as age, gender, education, and income
Repeated Measures
subjecting the same subjects to all experimental treatments. Eliminates error due to subject differences but may introduce other problems such as “practice” or “history” effects.
Constant error
term usually used for “systematic error” when discussing experiments—error attributable to flaw in design or execution of the experiment.
Extraneous variables
variables that, if uncontrolled, distort results in a particular direction every time an experiment is repeated and mask the true state of affairs
• Example: consistently administering one experimental treatment in the morning and the other in the afternoon
Demand characteristics
experimental procedures that hint to subjects the nature of the hypothesis and suggest—“demand”—that they respond in a particular way.
• Experimenter bias – a constant error caused by the experimenter’s presence, actions, or attitudes—appearing eager, authoritarian, etc.
• Guinea pig effect – a constant error caused by subjects behaving abnormally in order to “cooperate” with or “please” the experimenter.
• Hawthorne effect – a constant error caused by subjects’ knowing they are participating in “research”—a kind of status/motivational effect.
Constancy of conditions
when extraneous variables can’t be eliminated, subjects in various treatments are exposed to identical conditions except for the treatment itself—assumes the effects of extraneous factors are spread evenly across all treatment conditions
Order of presentation bias
error caused by subjects’ accumulating experience or savvy in the course of responding to multiple experimental treatments.
Counterbalancing
technique to reduce error caused by order of presentation—varying the order of experimental treatments for different experimental groups.
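A small sketch of how varied orders might be generated, assuming (hypothetically) three treatments and one group per possible order:

```python
from itertools import permutations

def counterbalance(treatments):
    """Give each group a different presentation order so order effects cancel out."""
    return [list(p) for p in permutations(treatments)]

orders = counterbalance(["A", "B", "C"])
for group, order in enumerate(orders, start=1):
    print(f"Group {group}: {' -> '.join(order)}")
```

With three treatments there are six orders; each treatment appears in each serial position equally often, so any "practice" advantage is spread evenly across treatments.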
Blinding
technique to control subjects’ knowledge of whether or not they have been given a particular experimental treatment.
Double-blind design
neither the subjects nor the experimenter know which are the experimental and which are the control conditions.
Random assignment
if extraneous factors can’t be controlled, it is assumed that their effects will be equally present in all experimental conditions through random assignment.
Debriefing
giving subjects all pertinent facts about the nature and purpose of the experiment after its completion.
Basic Design
a single independent variable is manipulated to observe its effect on a single dependent variable
Factorial Designs
two or more independent variables are manipulated simultaneously to measure both main effects and the effect of their interaction on a dependent variable
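A worked toy example of main effects and interaction, using hypothetical cell means from an invented 2×2 experiment (price × advertising; the numbers are made up for illustration):

```python
# Cell means from a hypothetical 2x2 experiment: price (low/high) x ad (none/heavy)
cells = {("low", "none"): 10.0, ("low", "heavy"): 14.0,
         ("high", "none"): 8.0, ("high", "heavy"): 16.0}

def mean(xs):
    return sum(xs) / len(xs)

# Main effect of each factor: difference between its level averages
price_effect = (mean([cells[("high", a)] for a in ("none", "heavy")])
                - mean([cells[("low", a)] for a in ("none", "heavy")]))
ad_effect = (mean([cells[(p, "heavy")] for p in ("low", "high")])
             - mean([cells[(p, "none")] for p in ("low", "high")]))

# Interaction: does the ad effect differ across price levels?
interaction = ((cells[("high", "heavy")] - cells[("high", "none")])
               - (cells[("low", "heavy")] - cells[("low", "none")]))

print(price_effect, ad_effect, interaction)  # 0.0 6.0 4.0
```

Here price has no main effect on its own, advertising has a large one, and the nonzero interaction shows the advertising effect is stronger at the high price—exactly the kind of relationship a one-factor basic design could never detect.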
Laboratory Experiment
an experiment conducted in an artificial setting in which the experimenter has almost complete control over the research setting
Tachistoscope
a device that controls the amount of time a subject is exposed to a visual image.
Field Experiment
an experiment conducted in a natural setting—often for a long period of time—in which the experimenter has less control over extraneous variables.
Internal Validity
whether an experimental treatment was the sole cause of change in a dependent variable. Six common extraneous variables raise issues of internal validity: maturation effect, instrumentation effect, selection effect, testing effect, history effect, and mortality effect
Maturation Effect
caused by changes in the experimental subjects over time such as boredom, aging, attitude.
Instrumentation Effect
a change in the wording of a questionnaire, a change in interviewers, or a change in procedures used to measure the dependent variable
Selection Effect
sample bias resulting in differential selection of subjects for the various experimental treatments—such as age, gender, or educational level.
Testing Effect
in a before-and-after study, an effect of pretesting that sensitizes subjects when they are tested for the second time
History Effect
a specific event in the external environment occurring between the first and second measurements of the experiment—beyond control of the experimenter—that affects the validity of the experiment
Mortality Effect
sample attrition that occurs when some subjects withdraw from an experiment before it is completed
External Validity
the ability to generalize the results of an experiment to similar conditions outside the experiment itself—transferring the results to the real world
The Trade-off
the choice between a laboratory experiment and a field study always involves a trade-off. A field study may achieve higher external validity at the expense of internal validity—meaning the results may be specific to the real-world field situation only. Likewise, the results of a laboratory experiment may not be generalizable to the real world.
One Shot Design
Flawed quasi-experiment because it has no comparison group and no control over extraneous variables (e.g., offering a prize to boost sales)
One-Group Pretest-Posttest Design
Flawed quasi-experiment. Sources of error include the maturation effect, history effect, mortality effect, and instrumentation effect
Static Group Design
Flawed quasi-experiment. Its major weakness is that there may be systematic differences between the groups selected, e.g., volunteers used for the experimental treatment
Pretest-Posttest Control Group Design
It is assumed that the effects of any extraneous variables are balanced out by randomization, but a testing effect is still possible if subjects are sensitized by the pretest questions to the purpose of the research
Posttest Only Control Group Design
Used when: (1) pretest not possible, or (2) groups are known to be equal. Eliminates testing effect and assumes extraneous variables operate equally on both groups.
Solomon Four-Group Design
Separates out the maturation effect, testing effect, and treatment effect; rarely used in business because of the effort, time, and cost involved
Compromise Experimental Design
sometimes random assignment is not possible, e.g., when administering treatment to departments within an organization. In such cases compromise designs are unavoidable, e.g., assigning units as a whole to experimental treatments.
Time Series Design
repeated observations are taken over an extended period of time—e.g., as in political polls—enabling the researcher to evaluate both the immediate effect of an event or experimental treatment and the permanence of that effect. Time series designs are subject to the history effect and loss of control, and are therefore quasi-experimental designs
Completely Randomized Design
randomly assigns experimental units to treatments to investigate the effects of a single independent variable
Randomized Block Design
an extension of the completely randomized design in which each experimental treatment is administered to each value (block) of a particular extraneous variable.
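A sketch of building such a plan in Python, assuming (hypothetically) age range as the blocking variable and three treatments; each treatment is administered within each block, in a random order per block:

```python
import random

def randomized_blocks(blocks, treatments, seed=1):
    """Administer every treatment within every block, randomizing order per block."""
    rng = random.Random(seed)   # fixed seed only so the illustration is reproducible
    plan = {}
    for block in blocks:
        order = list(treatments)
        rng.shuffle(order)      # randomization happens within the block only
        plan[block] = order
    return plan

plan = randomized_blocks(["18-29", "30-49", "50+"], ["A", "B", "C"])
print(plan)
```

Because the blocking variable (here, age range) is held constant within each block, its effect can be separated from the treatment effect instead of inflating the error term.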
Factorial Design
allows for testing the effects of two or more treatments (factors) at various levels
Latin Square Design
attempts to control or block out the effect of two or more confounding extraneous factors. The major assumption—and drawback—of the Latin Square design is that interaction effects are expected to be minimal or nonexistent. A Latin Square design can have any number of treatments
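One common way to construct such a square is by cyclic shifts; a small sketch, with hypothetical treatments A–D and rows/columns standing in for the two blocked extraneous factors (e.g., time period and store):

```python
def latin_square(treatments):
    """Build an n x n Latin square by cyclic shifts: each treatment appears
    exactly once in every row and every column."""
    n = len(treatments)
    return [[treatments[(row + col) % n] for col in range(n)] for row in range(n)]

square = latin_square(["A", "B", "C", "D"])
for row in square:
    print(" ".join(row))
```

Each treatment occurs once per row and once per column, so both blocked factors are balanced across treatments—but, as the card notes, this only isolates the treatment effect if the factors do not interact.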