56 Cards in this Set

experimental designs
Three types:
Experimental designs: characterized by random assignment to experimental and control groups (include classical, posttest-only control group, and Solomon four-group studies).
Quasi-experiments: use precision matching to assign subjects (include time-series and counterbalanced designs).
Preexperimental designs: lack equivalence across groups; include one- and two-group ex post facto and one-group before/after designs. These designs address problems of invalidity to varying degrees.
the classic experimental design
Three key elements: equivalence, pretests and posttests, experimental and control groups.
Equivalence: necessary to ensure that experimental and control groups are alike; the researcher employs random assignment (where all subjects have an equal probability of being selected into a particular group).
Matching is an alternative way of achieving equivalence, with both groups being matched on all characteristics the researcher believes to be important.
Pretest: observation prior to treatment; posttest: observation after treatment.
Experimental group: group of subjects that receives the treatment; control group: group of subjects that does not receive the treatment.
Design eliminates threats from testing, instrumentation, statistical regression, history, and maturation (which would affect control group and experimental group alike). Equivalence ensures that there is no selection bias and that mortality is the same.
posttest-only experimental design
A weaker design: cannot control for most other rival hypotheses (history, etc). Advantages: eliminates testing effects and reactivity.
Solomon four-group experimental design
Weakness: expensive, difficult. Advantage: all of those associated with classic experiment, but can sort out testing effects.
E O1 X O2
C O3 O4
E X O5
C O6
preexperimental design: one-group ex post facto (aka one-shot case study)
Advantage: quick and easy to do, and often the only way one can collect data. Very common in CJ research.
Weaknesses: virtually all threats to internal validity.
preexperimental design: one-group before-after design
O1 X O2. Advantage: it is a longitudinal design, with the inclusion of a pretest.
Weakness: adding a premeasure introduces testing effects, and no possibility of comparison with a control group.
preexperimental design: two-group ex post facto design
Advantages: no risk of testing effects
Weakness: impossible to determine if groups are equivalent
cross-sectional
study of one (representative) sample of a group at one time.
longitudinal
study of one group over a period of time. The only way one can ascertain time order.
Time series: involve measuring a single variable at successive time points.
Interrupted time series: measurements are taken for equivalent lengths and amounts both before and after the treatment.
Trend studies: analyze different samples of the same general population over time.
Panel studies: examine a select group over time
Not true experiments because they do not employ random selection into groups
quasi-experiments: time-series design
O1 O2 X O3 O4
It is most desirable to have at least 10 pretests, but you need at least two. These pretests establish whether there are trends at work.
Why is this design better than a single before/after observation?
Multiple interrupted time-series designs use a comparison (not a "control") group.
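The pre/post comparison described above can be sketched numerically. The monthly counts below are made up for illustration, and a real analysis would model the trend rather than only compare means:

```python
# Hypothetical monthly crime counts around an intervention (numbers made up).
pre = [50, 52, 49, 51, 53, 50, 52, 51, 50, 52]   # ten pretests, per the card
post = [44, 43, 45, 42, 44]

# The multiple pretests show a flat pre-intervention series, so a drop in
# the post-intervention mean is harder to attribute to a pre-existing trend.
pre_mean = sum(pre) / len(pre)
post_mean = sum(post) / len(post)
print(pre_mean, post_mean, post_mean - pre_mean)
```

With only one pretest, the same drop could simply continue a downward trend that began before the treatment.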
quasi-experiments: counterbalanced designs
used to manage the problem of multiple-treatment interference, where X1 refers to one treatment, X2 to another, and so on.
E X1O1 X2O2 X3O3
E X2O1 X3O2 X1O3
Kansas City Preventative Patrol Experiment
Multiple groups (proactive, reactive, usual)
Observed: crime against people & businesses, perceptions of safety
Minneapolis Domestic Violence Experiment
Multiple groups (arrest, separate, mediate)
Observed: rearrests
advantages/disadvantages of experiments (e.g., Kansas City and Minneapolis)
Lots of variability: some occur in the laboratory (best internal validity; worst external validity) and some in the field (worst internal validity, but best external validity).
Advantages:
Offer the best control for factors that affect internal validity.
Experiments are relatively quick and inexpensive.
Experiments are easy to manage (because everything is defined in advance) and to replicate.
Experiments can also be applied to natural settings.
Disadvantages:
Artificiality: the controls imposed by the researcher are unlikely to reflect real life (especially in laboratory experiments).
Difficulty in manipulating variables of interest, owing to subject or implementer resistance.
Experimenter effects
sampling
a procedure used in research by which a select subunit of a population is studied in order to analyze an entire population. Sampling is much less expensive than surveying an entire population.
sampling frame
a complete list of the population or universe that one is interested in studying.
types of sampling
Simple Random
Stratified Random
Systematic (multistage)
probability sample
refers to samples that permit the estimation of the likelihood of each element of the population being selected into the sample.
simple random samples (SRS): probability sample
where each unit in the population has an equal probability of being selected.
Equal Probability of Selection Method (EPSEM): important in that most statistical procedures assume that the sample is drawn based on probability. Since there is no guarantee that our SRS is a perfect match of the population, such statistics allow us to generate "confidence intervals" that incorporate error.
Disadvantages: it is often impossible to get a complete list of elements; random sampling can be a cumbersome and expensive procedure; SRS does not guarantee a representative sample.
Advantage: well suited to statistical procedures.
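A minimal sketch of SRS and the confidence-interval idea, using only Python's standard library. The population here is fabricated for illustration:

```python
import random
import statistics

# Hypothetical population: ages of 1,000 subjects (values are fabricated).
random.seed(42)
population = [random.randint(18, 60) for _ in range(1000)]

# Simple random sample: every element has an equal chance of selection (EPSEM).
sample = random.sample(population, k=100)

# Because an SRS may not perfectly match the population, we report a
# confidence interval around the sample mean (normal approximation).
mean = statistics.mean(sample)
se = statistics.stdev(sample) / len(sample) ** 0.5
ci = (mean - 1.96 * se, mean + 1.96 * se)
print(f"sample mean = {mean:.1f}, 95% CI = ({ci[0]:.1f}, {ci[1]:.1f})")
```

The interval is the "error" the card mentions: it quantifies how far the sample mean plausibly sits from the population mean.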
stratified random samples, proportionate and disproportionate: probability sampling
Proportionate: where you divide the population into strata in order to ensure that the proportion of certain characteristics in the sample are identical to the proportion in the population.
Disproportionate: where you oversample certain groups in order to have enough cases for meaningful comparison with other groups.
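Proportionate allocation can be sketched as follows; the offense-type strata and counts are hypothetical:

```python
import random

random.seed(1)
# Hypothetical population of 1,000 cases tagged by offense type (strata made up).
offenses = ["violent"] * 2 + ["property"] * 5 + ["drug"] * 3
population = [{"id": i, "offense": random.choice(offenses)} for i in range(1000)]

def proportionate_stratified_sample(pop, key, n):
    """Draw an SRS within each stratum, sized so each stratum's share of
    the sample matches its share of the population."""
    strata = {}
    for unit in pop:
        strata.setdefault(unit[key], []).append(unit)
    sample = []
    for units in strata.values():
        k = round(n * len(units) / len(pop))  # proportionate allocation
        sample.extend(random.sample(units, k))
    return sample

sample = proportionate_stratified_sample(population, "offense", 100)
```

A disproportionate version would simply replace the proportional `k` with a fixed (larger) quota for the group being oversampled.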
cluster sampling: probability sampling
Used in field interview surveys, where subjects are widely dispersed. Goal is to reduce cost, since SRS isn’t practical.
Population is divided into clusters (e.g., city blocks, census tracts).
multistage sampling: probability sampling
Combine stratified sampling, cluster sampling, and SRS.
The researcher divides the geographic area into regions and randomly samples regions. Each sampled region is divided into smaller units, and the researcher randomly samples these units. The process continues until the researcher has obtained the desired elements.
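The region-to-unit-to-element process can be sketched as nested random draws; the sampling frame below (region/block/household names and counts) is entirely made up:

```python
import random

random.seed(7)
# Hypothetical nested frame: regions -> blocks -> households (all names made up).
frame = {
    f"region{r}": {
        f"block{b}": [f"hh-{r}-{b}-{h}" for h in range(20)] for b in range(10)
    }
    for r in range(5)
}

# Stage 1: randomly sample regions; stage 2: blocks within each sampled region;
# stage 3: an SRS of households within each sampled block.
households = []
for region in random.sample(list(frame), k=2):
    for block in random.sample(list(frame[region]), k=3):
        households.extend(random.sample(frame[region][block], k=5))

print(len(households))  # 2 regions x 3 blocks x 5 households = 30
```

Note that no stage ever needs a complete list of all households, only lists of the units sampled at the previous stage; that is the cost saving over SRS.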
non-probability samples
With these samples, you cannot generalize to a larger population.
quota sample: nonprobability sample
a nonprobability stratified sample. The researcher tries to make sure that the sample has the desired proportion of age, sex, etc. The researcher uses judgment, rather than probability, to select samples
accidental samples: nonprobability sample
no stratification; only researcher judgment.
purposive samples: nonprobability sample
represent the selection of subjects based on the researcher’s skill, judgment, and needs. Include “focus group” data collection, where volunteers are brought together to obtain in-depth qualitative data.
Mock Trials: designed to simulate a trial in all respects. Goal is to mimic a trial beforehand.
Criminal profiling: talk to offenders to learn more about them, for purposes of identifying potential future offenders.
snowball sampling: nonprobability sample
used in exploratory research (where little is known) and with hard-to-get samples. One gets one subject, who identifies other potential subjects, and so on.
surveys analytic/descriptive
Descriptive survey research: uses statistical probability to ask whether something that is true of the sample is true of the population.
Analytic survey research: attempts to address issues of cause and effect.
Note that surveys have weaknesses: surveys do not measure directly observed behavior, but rather attitudes or claimed behavior.
guidelines for questionnaires
First, have a clear research problem in mind, as well as an understanding of what kind of data you would need to address that problem!
Create a variables list, which lists the questions and identifies which concepts these questions seek to measure. This helps you to eliminate duplicate items, unwanted questions, or too much emphasis on certain topics.
questionnaire wording
Use language geared toward the target population.
Avoid jargon, unless your sample can be reasonably assumed to understand it.
If your sample speaks a foreign language, you’ll need to write the questionnaire in their language.
Be aware of who should answer the questions.

Avoid biased or leading questions
Avoid double-barreled questions
Avoid asking questions in an objectionable manner.
Avoid assuming prior information on the part of the respondent.
Avoid vague wording. Use language with common and clear meaning.
Avoid asking more than one needs to know. It just makes the survey longer.
Avoid “response set” patterns by using reversal questions.
E.g., “I am usually hot tempered” and “It takes a lot to make me angry” (Strongly agree to strongly disagree)
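When reversal questions are used, the reversed items must be re-coded before scale scores are summed. A sketch, assuming a 1-5 Likert scale; the item names are hypothetical:

```python
# Reverse-score Likert items before summing a scale.
# Assumes a 1-5 response scale; item names are hypothetical.
SCALE_MAX = 5
REVERSED_ITEMS = {"q2"}  # e.g., "It takes a lot to make me angry"

def scale_score(responses):
    """Sum item scores, flipping reversed items so that a high total
    consistently means 'hot tempered'."""
    total = 0
    for item, value in responses.items():
        total += (SCALE_MAX + 1 - value) if item in REVERSED_ITEMS else value
    return total

# A straight-lining respondent who marks 5 on everything: q2 flips to 1,
# so response-set agreement no longer inflates the total.
print(scale_score({"q1": 5, "q2": 5}))  # 6
```

A respondent who genuinely is hot tempered would agree with q1 and disagree with q2, scoring near the top of the scale; a straight-liner lands in the middle.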
threatening questions
Be casual: “Do you happen to have murdered your wife?”
Use a numbered card: “Would you please read off the number on this card which corresponds to what became of your wife?”
Natural death
I killed her
Other (Please explain)
The everybody approach: “As you know, many people have been killing their wives these days. Do you happen to have killed yours?”
The other people approach:
Do you know any people who have murdered their wives?
How about yourself?
Secret ballot approach
Open ended (unstructured questions)
Advantage: greater detail in answers
Disadvantage: hard to code, and irrelevant answers
Close ended (structured questions)
Advantage: easy to code, and easier for respondents
Disadvantage: lack of detail
Always pretest or pilot your survey! You need to know if the questions are clear or understandable.
questionnaire organization
Question order can influence responses as well as willingness to participate and answer questions!
A questionnaire is best that begins with an interesting question; avoid boring demographic stuff (but is this really true?).
Don’t get too interesting. Don’t ask super sensitive questions.
mail survey
The mail survey is popular among researchers since it is generally the least expensive and best for getting the largest samples.
Survey is great for all kinds of purposes and topics, including those that are sensitive.
Note that nonresponse is increasing as people are inundated with surveys (average response is around 48.4%). Why might nonresponse be a problem?
They afford wide geographical and perhaps more representative samples at a reasonable cost, time, and effort.
Surveys require no field staff (like in interviews).
No interviewer bias effects
Greater privacy
More time to think out answers (very useful if one needs to look up information, like previous year's income).
Disadvantages:
Relatively high nonresponse (20% response in first-wave surveys).
Lack of uniformity of responses
Slowness of responses to follow-up attempts.
Respondents can misinterpret questions.
Higher costs due to multiple follow-ups.
how to eliminate disadvantages of mail surveys
Two types of nonresponse: those who have yet to respond and those who refuse to cooperate. In the latter case, you do not want to pressure them to participate—instead create a “replacement pool.”
Other techniques:
Follow up: include renewed mailings of the questionnaire or a shortened questionnaire. Also, postcards and phone calls. (In anonymous surveys, respondents can send a postcard in to indicate that they have completed the survey, so they do not get any reminders.) You should follow up after the peak period of responses has passed.
Offered remuneration: rewards or incentives to participate. Some researchers enclose money (to promote guilt at not participating) or else promise payment on completion. Researchers also offer to share the results of the research.
Attractive format: have the survey look as professional as possible; however, cost can be prohibitive.
Sponsorship and endorsements: used to enhance legitimacy of the survey. Note prestigious sponsors (people not associated with prestigious institutions get lower responses). Also, endorsements from prominent individuals.
Personalization: includes handwritten notes, showy stamps, but not much is known about its effect.
Shortened format: used in case length of survey might be an issue.
Good timing: send the survey during periods when other surveys are not being sent; avoid holiday seasons.
self reports
Purpose is to ask respondents to admit to various behaviors. Original intent was to overcome limitations of UCR.
Most self-reports sample students of all ages.
Problems with self-reports:
Reported behavior might not equal actual behavior (e.g., lying, forgetfulness).
Poor or inconsistent instruments (lack time reference).
Poor research design
Poor samples (blacks are often underrepresented).
Reliability: in recent years, there is a high correspondence (82%) in answers given on retests.
Validity: researchers can check validity in a number of ways:
Use of other data: compare self-reports to arrests or school disciplinary data, for instance.
Use of other observers: e.g., teachers, parents, friends.
Known group validation: compare group outcomes to alternative sources (e.g., UCR)
Use of lie scales: ask questions that no one would honestly answer in a particular way (e.g., "I always tell the truth"; "I never feel sad"). If there are too many "incorrect" answers, then the researcher can throw out the responses.
Measures of internal consistency: looks at consistency of responses for items asked in opposite ways.
internet survey
A new means of collecting survey data
Email surveys
Web-based surveys
High access (at least to specialized populations)
Fast and cheap
Easy publication of results.
Disadvantages:
Access limited to those with web access or email accounts (42% of U.S. in ‘00).
Response rates tend to be low (email is easy to delete).
Lack of anonymity (due to cookies; hacking)
Serious bias in samples (lack of women, elderly, minorities).
error in research
Error is another term for invalidity, which is always present in research.
Sources of error: no a priori predictions (i.e., no theory), poorly defined variables, confounded variables, poor sampling.
Validity: asks “does my measuring instrument in fact measure what it claims to measure?”
Reliability: asks “do my measures yield the same results time and time again if I repeat the study?”
how to determine validity
Five types:
Face validity
Content validity
Construct validity
Pragmatic validity
Convergent-discriminant validity
face validity
asks “does the instrument measure, at face value, what I am attempting to measure?”
A very judgmental approach that does not necessarily have an empirical basis.
Only concerned with “does the measure look good?”
content validity
asks “does each item, or the content of the instrument, measure the concept in question?”
Focus is on a set of items within a domain of meaning (e.g., crime, happiness).
Another judgmental approach and usually nonempirical (though researchers sometimes use statistics to ascertain content validity).
One should eliminate questions that “do not fit” or contribute to predictive validity.
construct validity
refers to the fit between theoretical and operational definitions of a concept.
An instrument itself may be measuring different or even multiple theoretical concepts.
E.g., a methods test using lots of high-level language may measure literacy as well as knowledge of methods.
pragmatic validity
asks “does the measure work?” There are two types:
Concurrent validity: does the measure enhance the ability to gauge present characteristics of the item itself? That is, you would compare results to existing measures.
Predictive validity: do the measures accurately forecast outcomes they are supposed to?
Convergent-Discriminant validity/Triangulation
asks “do different methods or measures produce different (discriminant) or similar (convergent) results?”
Relies on triangulation, the use of multiple methods to see if obtained results are similar.
Validity can be ascertained with pretests of the instrument.
reliability
shown through stable and consistent replication of findings.
Stability of measurement: where a respondent will give the same answer to the same question upon a second testing.
Consistency of measurement: where a set of items used to measure the same phenomenon (or concept) are highly related to each other.
3 ways to show reliability
Test-retest
Multiple forms
Split-half techniques
test-retest
where the same instrument is administered twice to the same population. Identical results imply stability of measurement.
Be aware that repeated measurements can lead to testing effects. One may need comparison groups to untangle testing effects (one group gets no posttest).
multiple forms
a disguised test-retest that uses alternate forms that measure the same thing. Again similar results imply stability. Still subject to testing effects.
split half techniques
the most popular test of reliability, as it does not require repeated measures. Researcher randomly splits the questionnaire after administering it, and each half is treated as a separate instrument. The random halves are compared to see if they are similar or dissimilar.
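The split-half procedure can be sketched as below. The Spearman-Brown step-up at the end is a standard companion to split-half reliability (it corrects for each half being only half the test's length); the respondent data are simulated:

```python
import random

def pearson_r(xs, ys):
    """Pearson correlation between two score lists."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

def split_half_reliability(responses):
    """responses: one list of item scores per respondent.
    Randomly split the items into halves, score each half per respondent,
    and correlate the two half-scores. The Spearman-Brown step-up then
    estimates reliability at the full test length."""
    items = list(range(len(responses[0])))
    random.shuffle(items)
    half_a, half_b = items[: len(items) // 2], items[len(items) // 2 :]
    a = [sum(resp[i] for i in half_a) for resp in responses]
    b = [sum(resp[i] for i in half_b) for resp in responses]
    r = pearson_r(a, b)
    return 2 * r / (1 + r)  # Spearman-Brown correction

# Simulated data: 50 respondents, 10 related 1-5 items driven by one trait.
random.seed(3)
data = []
for _ in range(50):
    trait = random.gauss(3, 1)
    data.append([min(5, max(1, round(trait + random.gauss(0, 0.5)))) for _ in range(10)])
rel = split_half_reliability(data)
```

Similar half-scores (a high correlation) imply consistency of measurement; dissimilar halves suggest the items are not measuring one concept.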
advantages/disadvantages of interview survey
Advantages:
Interviews provide contact between the researcher and subject
Higher response rate
Clear up misunderstandings about questions
Interviewer can be an observer, too
Flexibility to use audiovisual aids
Interviewer discretion about question order
Disadvantages:
Time-consuming and costly
Interviewer effects/bias
Interviewer error
Management and supervision more difficult
Less anonymity
how to interview
Training and orientation of interviewers
Can last from a day to a week
Purpose is to get interviewer familiar with the instrument and purpose of study
Boost interviewer morale
Arrange the interview
Arrive at times when subjects are likely to be at home.
Furnish interviewers with official identification (to avoid stigma as salespeople).
What kind of clothing/demeanor would you use when interviewing?
demeanor of interviewer
Make sure interviewers dress in a similar style to those being interviewed (but comfortably).
Interviewers should also be a similar age, race, and sex to the subjects.
Language should be similar to subjects’
Must assure subjects that answers will be in strictest confidence
Be friendly, enthusiastic about the study, considerate, and outgoing (to build rapport). But also persistent.
administering the interview
Should be done in an easy, relaxed, informal way. It should not be an interrogation
With sensitive questions, the interviewer can try to convince the subject to answer by assuring them of confidence and that their answer is essential to the success of the project.
Probes: follow-up questions to focus, expand, or clarify a response given. Interviewers should have an idea what kind of response is needed in order to know when a probe is appropriate.
Beware of being baited into a long conversation; keep the interview on track and moving briskly.
Exiting: close with light conversation, thank the subject, and answer any final questions the respondent may have at the end.
A well-executed interview survey will achieve a completion rate of 80-85%.
advantages/disadvantages of telephone survey
Advantages:
No field staff (save $$$)
Easier to monitor interviewers (less bias)
Larger sample sizes due to speed of interviewing
Low nonresponse
Easy follow-up if subject is not home
Disadvantages:
Hard to get in-depth responses over the phone
Less access to the poor (who tend not to have phones) and the rich (who tend to be unlisted); solved by random digit dialing
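Random digit dialing can be sketched as drawing the final digits at random within an exchange, so unlisted numbers are reachable. The area code and prefix below are placeholders, not real exchanges:

```python
import random

def random_digit_numbers(area_code, prefix, n):
    """Draw n numbers with random final four digits within one exchange.
    Because the suffix is random, unlisted numbers are as likely to be
    reached as listed ones. (The area code and prefix are placeholders.)"""
    return [f"{area_code}-{prefix}-{random.randint(0, 9999):04d}" for _ in range(n)]

random.seed(0)
numbers = random_digit_numbers("555", "012", 5)
print(numbers)
```

In practice the drawn numbers are then dialed and non-working numbers discarded, which is why RDD samples are generated in large batches.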
victim surveys and problems
Large samples cost $$$
Why is large sample needed?
False reports
Mistaken reporting (inaccurate interpretation of what happened)
Poor memory
Sampling bias
Over- and under-reporting
Interviewer effects
Designed to get at the "dark figure of crime" (i.e., crimes that are not reported to the police).
Found that victimization occurs a lot more often than the police know about.
How are they done?
1/3rd by telephone, the rest are face to face
control for error in victim surveys
Bounding: designed to eliminate telescoping through panel design
Reverse record checks: check with police records to learn about crimes reported to police