45 Cards in this Set
Correlation

Observed relationship between two variables in which the values of one variable vary predictably with the
values of the other, possibly (but not necessarily) reflecting a cause-and-effect relationship. Example: Homicide rates vary predictably with the phase of the moon, rising as the moon becomes fuller (allowing murderers to see better at night).

Correlation Coefficient

Numerical index of association of two variables – usually linear association – computed from
measured values of those two variables in the same population, showing the extent to which the values of one variable can be used to predict the values of the other. Examples: a) Pearson r, the most common index of linear correlation used in Psychology. b) Spearman rank-order correlation coefficient (rho). (Most correlation coefficients vary from –1.0, indicating a perfect inverse, linear association, through zero, indicating no linear association, to +1.0, indicating a perfect direct, linear association. See also causal inference; correlation; third variable.)
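
The Pearson r named in example (a) can be computed directly from paired measurements. Below is a minimal, standard-library Python sketch (the data are invented for illustration, not taken from the card):

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation of two paired samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# A perfect direct linear association gives +1.0; a perfect inverse
# linear association gives -1.0 (within floating-point error).
r_direct = pearson_r([1, 2, 3, 4], [2, 4, 6, 8])
r_inverse = pearson_r([1, 2, 3, 4], [8, 6, 4, 2])
```

Real data fall between these extremes, with values near zero indicating no linear association.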

Hypothesis

Tentative statement of predicted relationship among variables or differences among groups. Examples: a)
Playing violent video games promotes violent behavior. b) Increased illumination brings increased accuracy of performance.

Operational Definition

Specific procedures used to measure or manipulate a variable in an empirical study. Examples: a)
Using the Stanford-Binet IQ test as a measure of intelligence. b) Manipulating noise via 2 conditions: Noisy (intermittent, varied 90 dB blasts at least every 10 seconds) or Quiet (20 dB white noise).

Paradigm

Prevailing scientific approach, including theory and associated research methods accepted by most scientists in a
discipline. Example: Before Galileo's time, astronomers' paradigm consisted of the accepted theory that the sun revolved around the earth, tested by the method of unaided nighttime observation until the telescope was invented.

Replication

Repetition of an empirical study using essentially the same procedures as in the original study, in a different
setting or population, to confirm the original results. Example: Bandura and Walters' second experiment on children's imitation of a televised adult model hitting a large, inflatable doll, which used the same procedures as their first experiment.

Statistical Significance

In research, an observed difference between groups or relationship of variables that researchers
deem unlikely due to chance alone, which in Psychology has traditionally meant p < .05, or a probability of less than 5 in 100. Example: Average annual attendance rates at Kansas rotary clubs correlated .44 with club size, or number of dues-paying members per club, a relationship significant at p < .05 for the number of clubs studied. (See also Type I error. Testing statistical significance requires an inferential statistic.)
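
One way (among several) to estimate how likely a result is "due to chance alone" is a permutation test: the p-value is the fraction of random reshufflings of the data that produce a difference at least as large as the one observed. A sketch with invented group data:

```python
import random

def permutation_p_value(group_a, group_b, n_perm=10_000, seed=0):
    """Two-sided permutation test for a difference in group means."""
    rng = random.Random(seed)
    observed = abs(sum(group_a) / len(group_a) - sum(group_b) / len(group_b))
    pooled = group_a + group_b
    k = len(group_a)
    hits = 0
    for _ in range(n_perm):
        rng.shuffle(pooled)  # reassign group labels at random
        diff = abs(sum(pooled[:k]) / k - sum(pooled[k:]) / (len(pooled) - k))
        if diff >= observed:
            hits += 1
    return hits / n_perm

# Clearly separated (invented) groups: p should fall well below the
# traditional .05 cutoff, so the difference would be deemed significant.
p = permutation_p_value([10, 11, 12, 13, 14], [20, 21, 22, 23, 24])
```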

Theory

Collection of interrelated propositions concerning the causes of a phenomenon and how it operates, specifying
key variables of the phenomenon and their relationships. Examples: a) Kohlberg's theory of moral development. b) Freud's theory of unconscious motivation.

Variable

Property or characteristic that can take ≥2 values, representing either membership in a category or a point on a
continuum. Examples: a) gender; b) intelligence. (See also factor.) 

Control Procedures

In an experiment, techniques introduced to reduce or eliminate differences among participating
individuals or groups on extraneous variables. Examples: a) Holding a variable constant, for instance, keeping gender constant by only including female participants. b) Random assignment of participants to comparison groups, making them statistically comparable but leaving possible differences to chance. c) Balancing the distribution of a variable across comparison groups; for example, in a participant population consisting of 40% male and 60% female volunteers, randomly assigning them proportionately to comparison groups so each has a 40-60 mix of males and females.
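
Control procedure (c), balancing a variable across comparison groups, is often implemented as stratified random assignment. A sketch under invented labels (participants "M0"-"F5" and two hypothetical group names):

```python
import random

def stratified_assign(participants, stratum, groups=("treatment", "control"), seed=0):
    """Randomly assign participants to groups while preserving each
    stratum's (e.g. gender's) proportions in every group."""
    rng = random.Random(seed)
    assignment = {}
    strata = {}
    for p in participants:
        strata.setdefault(stratum(p), []).append(p)
    for members in strata.values():
        rng.shuffle(members)                         # random order within stratum
        for i, p in enumerate(members):
            assignment[p] = groups[i % len(groups)]  # deal out round-robin
    return assignment

# Hypothetical pool that is 40% male, 60% female, as in example (c).
pool = ["M0", "M1", "M2", "M3", "F0", "F1", "F2", "F3", "F4", "F5"]
result = stratified_assign(pool, stratum=lambda p: p[0])
# Each group ends up with 2 males and 3 females: a 40-60 mix in both.
```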

Dependent Variable

In an experiment, a property or characteristic measured to register effects of the manipulated,
independent variable. Example: In an experiment on the effects of meditation on blood pressure, dependent variables include systolic and diastolic blood pressure. (The name "dependent variable" reflects the idea that, as a measure of the effect in a supposed cause-and-effect relationship, its values are dependent on experimental treatments. See also causal inference.)

Experiment

Empirical investigation intended as a basis for causal inference, with 4 features: a) independent variable; b) a
dependent variable; c) control procedures to deal with extraneous variables, such as holding some factors constant; and d) random assignment of participants to conditions. Example: Latane & Darley told students they would take part via headset from a cubicle in an audio "discussion" with 1 or 4 other participants (independent variable, randomly assigned), staged an epileptic seizure in the "discussion," and noted whether the students left the cubicle to seek help (dependent variable) while holding other factors constant (control procedures), such as prerecorded, constant "discussion" and "seizure." 

External Validity

Generality of findings from an empirical study across populations, settings, methods of measurement, and
in experiments or quasi-experiments, operational definitions of treatments. Example: Extent to which findings of adverse effects of noise on performance in a laboratory also appear in workplaces. (See also internal validity; three-horned dilemma.)

Extraneous Variable

A factor other than those on which an empirical study focuses that could influence a variable measured
in a research project. Example: In an experiment on effects of meditation on blood pressure, salt intake, which also influences blood pressure. (Experimenters focus on extraneous variables that could prompt an alternative explanation of differences among treatment groups on a dependent variable. See also: confounding; control procedures; threats to internal validity.)

Field Experiment

Type of experiment conducted in a natural or real-life setting. Example: Study of New York City taxicabs,
some randomly given deceleration lights to reduce rear-end collisions, later compared on accident rates with identical taxicabs with no added lights. (See also laboratory experiment; quasi-experiment.)

Field Study

Empirical data collection in a natural or real-life setting using only measurement, no manipulated variables.
Example: Herzberg's interviews of employees to identify specific "satisfiers" and "dissatisfiers" in their jobs. (A field study yields only correlations, which by themselves do not support causal inference.) 

Independent Variable

Factor manipulated in an experiment by creating ≥ 2 comparable groups or conditions that differ only
on that factor, later compared on the dependent variable(s). Example: In a study on the effects of noise on proofreading errors, manipulating noise by assigning participants to 2 groups: quiet or intermittent bursts of 90 dB sound. (The name "independent variable" refers to its role in an experiment as the hypothesized cause of variation in the dependent variable, and to the fact that the experimenter manipulates the independent variable before assessing its effects, making it independent. Some researchers use "independent variable" incorrectly to refer to a measured trait with hypothesized "effects" on other, measured variables. For example, a researcher might erroneously label gender as an independent variable, and another factor like perceived empathy as a dependent variable, ignoring the fact that measured traits like gender and empathy aren't manipulated.)

Internal Validity

In an experiment or quasiexperiment, the extent to which the research design rules out, or at least renders
doubtful, any possible alternative explanations besides the manipulated variable for observed variation in a measured variable. Example: In a quasi-experiment on the effects of safety training on accidents at manufacturing facilities, the extent to which the research design rules out explanations for concurrent decreases in accident rates due to factors besides the training, such as employees' learning on the job at the same time as the training. (See also external validity; threats to internal validity.)

Laboratory Experiment

Experiment conducted in a controlled environment. Example: Carter's (1979) study in a military
laboratory of effects of the number of objects shown on a radar screen on participants' search time for specific objects. 

Random Assignment

Dividing participants among ≥2 experimental treatment conditions in a way that makes all participants
equally likely to receive a particular treatment, for instance, using a table of random numbers, a fair coin toss, or a fair lottery-like procedure. (See also experiment; quasi-experiment.) Example: For an experiment on the effects of number of listeners on bystander intervention in a staged emergency, researchers assign participants to 1 of 6 conditions by rolling a fair die.
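
The die-roll example amounts to giving every participant an equal chance at each of the 6 conditions. A minimal sketch (the seed and participant count below are arbitrary, chosen only for illustration):

```python
import random

rng = random.Random(42)  # seeded only so the sketch is reproducible

def assign_condition(n_conditions=6):
    """Assign one participant to 1 of n conditions, like rolling a fair die."""
    return rng.randint(1, n_conditions)

# Assign 600 hypothetical participants; counts per condition hover near 100,
# differing only by chance, which is exactly what random assignment promises.
conditions = [assign_condition() for _ in range(600)]
```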

Sample Survey

Empirical study of a sample deliberately selected to represent a larger population, involving collection of
data via interview and/or questionnaire. Example: Nielsen survey of a sample of U.S. television viewers, based on responses to phone interviews and written questionnaires. 

Simulation Study

Empirical study done in a laboratory or controlled setting with no manipulated variable, just one
treatment, and ≥1 measured variable(s), in an attempt to recreate key aspects of a real-life situation. Examples: a) Milgram's study of obedience of orders to give electrical shocks to a "learner." b) Zimbardo's study of college students role-playing prisoners & guards. (Also called a laboratory simulation.)

Third Variable

When two measured variables show a correlation, an extraneous variable that could potentially influence
both, and could provide an alternative explanation of why one of the correlated variables might not cause variation in the other. Example: Where population density correlates with crime rates, a third variable is poverty, which correlates with both crime rates and population density; poverty might foster both high-density living and high crime rates. (See also causal inference.)

Three-Horned Dilemma

Inevitable conflict in research in which choosing a method that maximizes any 1 of 3 desirable
features – a) precision & control; b) ecological integrity / system context; and c) external validity or generality across populations and settings – minimizes the other 2 desirable features. Examples: a) Laboratory experiments offer precision & control, and allow manipulation of variables, but use artificial settings and selective participant groups not necessarily representative of other settings or populations. b) Field studies in selected, natural settings leave ecology relatively undisturbed and preserve system context, but lack generality in relation to other settings or populations, and allow little precision or control. 

Content Validity

The extent to which a test, questionnaire, or other multi-item measure incorporates a representative subset
of the entire domain targeted for measurement. Example: A written test used in selecting candidates for the job of firefighter includes items corresponding with all major job tasks, like driving a fire engine, operating a fire hose, using an axe to open air vents in walls, etc. (A type of validity of measurement.)

Criterion-related Validity

Extent to which the values from a new measurement procedure correlate with values from
another, well-established measure (criterion) of the same variable for the same individuals. Examples: a) Establishing validity of a new measure of the trait Extroversion by collecting data on it and on an established measure of Extroversion, like the NEO-PI Extroversion scale, among the same individuals. b) Correlation of scores on an experimental measure of intelligence with scores by the same people on the Stanford-Binet IQ test of intelligence. (Criterion-related validity is a kind of validity of measurement.)

Interval Scale

Type of measurement that assigns values to a variable with an underlying continuum so increments of a
given size anywhere on the continuum correspond with the same increment of the attribute (equal intervals), but zero on the scale does not indicate complete absence of the attribute. Example: temperature in degrees Fahrenheit, °F. (See also nominal measurement, ordinal measurement, ratio scale. Because an interval scale has an arbitrary zero, ratios formed from values of this type of measurement are incorrect. For instance, 100°F is not twice as warm as 50°F.)
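
The parenthetical caution, that ratios of interval-scale values are incorrect, can be checked by converting the Fahrenheit example to Kelvin, a ratio scale with a true zero:

```python
def f_to_kelvin(f):
    """Convert degrees Fahrenheit (interval scale) to kelvins (ratio scale)."""
    return (f - 32) * 5 / 9 + 273.15

# On the Fahrenheit scale the ratio looks like 2:1 ...
naive_ratio = 100 / 50
# ... but on the ratio scale the true ratio is only about 1.10:
true_ratio = f_to_kelvin(100) / f_to_kelvin(50)
```

So 100°F is only about 10% "warmer" than 50°F in absolute terms, not twice as warm.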

Internal Consistency Reliability

Type of reliability of measurement that applies to the homogeneity of items in a multi-item
questionnaire intended to measure one attribute, as indicated by the correlation among individual responses to the items. Example: xx (See also Coefficient Alpha; split-half reliability coefficient.)
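
Coefficient Alpha (Cronbach's alpha), cross-referenced above, is the usual index of this homogeneity: alpha = (k / (k - 1)) * (1 - sum of item variances / variance of respondents' totals). A minimal sketch with invented item scores:

```python
def cronbach_alpha(item_scores):
    """Coefficient Alpha. item_scores[i][j] = respondent j's score on item i."""
    def var(xs):  # sample variance
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)
    k = len(item_scores)
    sum_item_var = sum(var(item) for item in item_scores)
    totals = [sum(col) for col in zip(*item_scores)]  # each respondent's total
    return (k / (k - 1)) * (1 - sum_item_var / var(totals))

# Three perfectly homogeneous (invented) items yield alpha near 1.0;
# unrelated items would drive alpha toward 0.
alpha = cronbach_alpha([[1, 2, 3, 4], [1, 2, 3, 4], [1, 2, 3, 4]])
```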

Measurement

Systematically assigning numbers or names to specific observations or events to represent values of a
variable, either membership in one of ≥2 categories or a point on a continuum. Examples: a) Classifying people by religious affiliation; b) Using scores on an IQ test to represent individual intelligence.

Nominal Measurement

Assigning categories as values of a variable that has no underlying continuum (or simply naming,
hence the name "nominal"), to indicate differences in kinds, but not degrees or amounts, of the attribute. Examples: a) Gender, male or female. b) Marital status: Single or married. (See also interval scale, ordinal measurement, ratio scale.) 

Ordinal Measurement

Assigning ranks as values of a variable that has an underlying continuum, indicating the relative
order from highest to lowest, and leaving unspecified the degree or amount of difference between ranks, which can vary. Examples: a) Finishing position in a race: first, second, third, etc. b) Ranking employees on performance. (See also interval scale, nominal measurement, ratio scale.) 

Psychological Test

Measurement of a carefully chosen example of an individual's behavior, including paper-and-pencil
questionnaires, work samples, and others. Example: NEO-PI, a copyrighted written test designed to assess the "Big Five" personality traits.

Ratio Scale

Type of measurement that assigns values to a variable with an underlying continuum so increments of a given
size anywhere on the continuum correspond with the same increment of the attribute (equal intervals), and zero indicates complete absence of the attribute. Examples: a) Temperature in kelvins, K. b) Elapsed time in seconds. (See also interval scale, nominal measurement, ordinal measurement. The name "ratio" means ratios can be accurately formed from this type of measurement. For instance, 6 seconds is three times as long as 2 seconds, and 10 K is half as warm as 20 K.)

Reliability of Measurement

Consistency or reproducibility of a measurement procedure. Examples: a) alternate forms
reliability; b) retest reliability; c) internal consistency reliability. (See also Coefficient Alpha; split-half reliability coefficient.)

Retest Reliability

Consistency in the results of 2 successive applications of exactly the same measurement procedure to the
same population, indicated by the correlation of the two sets of measurements (a retest reliability coefficient). Examples: a) Testing a group twice with the Stanford-Binet IQ test and assessing the correlation of the two sets of scores. b) Giving the Industrial Reading Test to 50 job applicants, re-administering it the next day, and calculating the correlation between the two sets of scores. (A type of reliability of measurement. See also alternate forms reliability; internal consistency reliability.)
