Use LEFT and RIGHT arrow keys to navigate between flashcards;
Use UP and DOWN arrow keys to flip the card;
H to show hint;
A reads the text aloud (text-to-speech);
66 Cards in this Set
- Front
- Back
Seven threats to internal validity
|
History, maturation, testing, instrumentation changes, statistical regression to the mean, selection bias, ambiguity about the direction of causality
|
|
Pre-experimental designs
|
Have the worst internal validity; lack randomization; used for pilot or exploratory studies
|
|
One-shot design
|
Pre-experimental design - no pretest, one group, one observation after X; can establish correlation at best, not causation
|
|
One-group pretest-posttest design
|
Pre-experimental design - doesn't rule out external causes; diagrammed O1 X O2
|
|
Posttest-only design with nonequivalent groups
|
Pre-experimental design - X O / O; can't assume the groups are equivalent; can't rule out history, maturation, direction of causality, or selection bias
|
|
Threat to internal validity in quasi experimental designs
|
Still has the threat of selection bias; still suffers testing and instrumentation issues
|
|
Examples of quasi experimental designs
|
Nonequivalent comparison groups design; time series design
|
|
Time series designs
|
Have no control or comparison group; used in longitudinal studies; subject to the history threat; many observations, then the intervention, then more observations
|
|
pretest/posttest control group design
|
“classic experiment”, eliminates all threats to internal validity and meets all criteria for causality
|
|
posttest only control group design
|
good for everything but ambiguity about the direction of causality; use it when you can't afford a pretest, or when the threat of testing is too high
|
|
Solomon four-group design
|
expensive but good for detecting a testing threat -- 2 experimental and 2 control groups; one of each gets the full pretest/posttest format, and the other one of each gets only a posttest
|
|
alternative treatment design with posttest
|
basically, two experimental groups, and each gets one treatment
|
|
external validity
|
extent to which findings can be generalized; depends on sample approach and study design
|
|
threats to external validity
|
research reactivity, placebo effect (defeat each with replication)
|
|
placebo use in swk research
|
sugar pill obviously doesn’t work here -- use an intervention with a known effect
|
|
threats to experimental designs in practice
|
ethics of random assignment, contamination of the study, treatment fidelity, attrition
|
|
single case designs
|
when n = 1, can be one person or one group, is great for practice, use the individual as their own comparison group
|
|
advantages of single case designs
|
can give fast feedback, adds to the agency's body of knowledge, demonstrates usefulness to funders, can be integrated well into practice
|
|
steps for using experimental designs in practice settings
|
define the target problem, define the goals, define the intervention -- all of this is basically what you’re doing in practice anyway
|
|
principle of repeated measurement
|
using measures of DV over the course of treatment to determine effect of IV
|
|
measurement methods in practice
|
direct observation, count of behaviors or thoughts, client logs, scales or inventories, available records
|
|
overcoming measurement shortfalls
|
triangulation! measure early and measure often! perhaps change methodology
|
|
types of single case designs
|
AB, ABAB, Multiple components, multiple baselines
|
|
troubles with using ABAB designs
|
can you ethically withdraw the intervention for the second A? and will there not be a carryover effect from the first B?
|
|
trouble with multiple components designs
|
(1 baseline and more than one intervention) what about effects of order and carryover? it's tough to ensure incremental validity. but you can mix up the order upon replication - still aim for 5-10 measurements per phase
|
|
multiple baselines design
|
providing the same intervention to 2 or more people or settings; offset the interventions, look to see if the pattern repeats
|
|
how to analyze single-case data
|
visually -- look at the responses; statistically; clinically (swkr and client decide if it has helped); replicate!!
|
|
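(Study note, not a card: the "statistically" option above can be sketched with the two-standard-deviation band method, a common single-case heuristic. The data and function name here are hypothetical, just to show the idea.)

```python
from statistics import mean, stdev

def two_sd_band(baseline, intervention):
    """Two-standard-deviation band method: flag intervention-phase
    points falling outside mean +/- 2 SD of the baseline phase."""
    m, s = mean(baseline), stdev(baseline)
    lower, upper = m - 2 * s, m + 2 * s
    return [x for x in intervention if x < lower or x > upper]

# Hypothetical weekly symptom counts (lower is better)
baseline = [9, 8, 10, 9, 8]        # A phase
intervention = [7, 5, 4, 3, 3]     # B phase
print(two_sd_band(baseline, intervention))  # all 5 B-phase points fall below the band
```

If most intervention-phase points fall outside the band, that supports (but does not prove) an intervention effect; visual and clinical analysis still matter.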
qual v. quant: benefits of qual methods
|
depth of understanding, flexibility, subjectivity (can be good and bad)
|
|
weaknesses of qual methods
|
low generalizability, not easily replicable (though that doesn't bother people who do this kind of research)
|
|
addressing weaknesses in qualitative methods
|
triangulation -- use field research, interviewing, case studies, grounded theory, etc
|
|
three types of qual interviews
|
informal conversational; interview guide approach; standardized (interview schedule)
|
|
why use each kind of qual interview?
|
informal is informal -- flexibility!; the interview guide approach keeps the conversational style but makes sure the issues are covered; standardized minimizes interviewer effects to ensure consistency and efficiency
|
|
when to do field research?
|
for topics not easily reduced to numbers; where the information is best gathered and understood in natural settings; when studying processes over time
|
|
interviewing tools
|
DAR, i.e. digital audio recorder (often your phone); field journal or notebook (which you might not use until after the interview); also, do your notes in stages (make sketches at the time and flesh them out after)
|
|
emic v. etic perspectives
|
emic = insider's view ("in the midst"); etic = outsider's view ("ten-foot pole")
|
|
Life history interview
|
aka oral history -- how do the participants understand the events of their lives
|
|
Essay: What are some similarities between informal conversational interview and life history interview?
|
Both are unplanned, unanticipated interactions between the interviewer and respondent, and both are open-ended
|
|
benefits of focus groups
|
you can systematically and simultaneously question people - inexpensive - speedy - flexible - may get unique results
|
|
four roles of the qualitative observer
|
complete participant; participant as observer; observer as participant; complete observer
|
|
phenomenology
|
"getting into the skin of the client"; making the account first-person; understanding the client's subjective lived experience
|
|
Ethnography
|
similar to phenomenology; understand their beliefs; see world through their eyes
|
|
essay: advantages and disadvantages of case study format - example of a situation?
|
in-depth description, careful exploration, use it as a basis for future studies, but... low generalizability, limited scope; example: examining an intervention for low-achieving students through one particular student
|
|
grounded theory
|
begin with observations and look for patterns or common categories; you're not trying to confirm hypotheses; is kinda inductive and deductive at once; and it parallels swk practice
|
|
parallels btwn grounded theory and swk practice
|
focus on informant’s perceptions; understanding the case within its social context; is inductive and deductive; uses open-ended techniques; natural setting; balance between emic and etic
|
|
Data analysis for qual research
|
stuff like coding; important: it often happens at the same time as data collection, and informs the course of the research
|
|
grounded theory as data analysis technique
|
constant comparison method; use theoretical sampling (exhaust similar cases, then move on to deviant cases)
|
|
types of notes used in memoing
|
code notes (id’ing labels and meanings); theoretical notes (reflections on concepts and relationships between them); operational notes (practical issues about data collection, etc)
|
|
types of memoing
|
sorting memo (presenting key themes, trying to discover order in the data); integrating memo (ties together sorting memos); elemental memo (analyzes issues in the data)
|
|
four stages of constant comparison method
|
compare incidents and look for evidence in other cases; integrate categories when you notice relationships between concepts; delimit the theory when you determine some stuff is irrelevant; writing the theory for others
|
|
essay: what is coding and why is it useful when processing qualitative data?
|
coding is the process of standardizing, defining, and grouping the concepts found in textual data, so the researcher may analyze and compare patterns
|
|
steps and methods used in qualitative data analysis, and how are they related
|
coding; creating codes; memoing; discovering patterns; grounded theory; semiotics; conversation analysis
|
|
6 strategies for enhancing rigor of qual studies
|
prolonged engagement; triangulation; peer debriefing; negative case analysis; member checking; auditing
|
|
examples of the kinds of measures used in SC designs
|
triangulate with direct observation, frequency count, client logs, standardized measures, rating scales, available records
|
|
strategies for increasing likelihood of program evaluation findings being used
|
involve stakeholders in planning; share feedback throughout; be present whenever possible; tailor the eval to the practitioner's needs or interests
|
|
3 general purposes of program evaluation
|
assess success; assess problems in implementation; gather info for program planning and development
|
|
what are summative program evaluations? which general purpose do they fulfill?
|
assess success; often quantitative
|
|
what are formative evals? which general purpose do they fulfill?
|
obtaining info for planning and developing programs
|
|
goal attainment evals: what are they, and are they summative or formative?
|
summative; looking for causality -- has the program caused its goals to occur?
|
|
program implementation evals: summative, formative? which purpose do they fulfil?
|
formative -- assesses problems and gathers further info on improving the program
|
|
process evals? sum or form? which purpose?
|
formative -- assesses problems and gathers further info on improving the program
|
|
5 ethical guidelines to research participation
|
must be voluntary; must get informed consent; must do no harm; must avoid deception; must protect confidentiality / anonymity
|
|
four levels of measurement and how they differ
|
nominal (unordered categories), ordinal (ordered categories, unequal intervals), interval (equal intervals, no true zero), ratio (equal intervals with a true zero)
|
|
univariate statistics, examples?
|
describe single variables (e.g., frequency distributions, mean, median, mode); often categorical; not focused on relationships
|
|
inferential statistics?
|
rule out chance (e.g., t-tests, chi-square); speak to generalizability
|
|
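(Study note, not a card: one way to see "rule out chance" concretely is a permutation test on two groups' posttest scores. The data below are made up for illustration; a real analysis would typically use an established statistics package.)

```python
import random
from statistics import mean

def permutation_test(group_a, group_b, n_permutations=5000, seed=0):
    """Estimate the probability (two-sided) that the observed difference
    in group means could have arisen by chance alone."""
    rng = random.Random(seed)
    observed = abs(mean(group_a) - mean(group_b))
    pooled = list(group_a) + list(group_b)
    n_a = len(group_a)
    count = 0
    for _ in range(n_permutations):
        rng.shuffle(pooled)  # re-deal scores to the two groups at random
        diff = abs(mean(pooled[:n_a]) - mean(pooled[n_a:]))
        if diff >= observed:
            count += 1
    return count / n_permutations

# Hypothetical posttest scores for treatment and control groups
treatment = [24, 27, 25, 30, 28, 26]
control = [20, 19, 22, 21, 18, 23]
p = permutation_test(treatment, control)
# A small p suggests the group difference is unlikely to be chance alone.
```

A small p-value speaks to ruling out chance within the sample; generalizing beyond it still depends on how the sample was drawn.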
examples of political considerations when planning research
|
is it okay to withhold treatment from some? are we avoiding a group because talking about them is taboo?
|
|
why don’t findings get used in program evals?
|
parties unhappy with findings; if findings contradict deeply held beliefs; if the implications are unclear
|