117 Cards in this Set
- Front
- Back
research
|
knowledge building through logic and observation.
|
|
social work research
|
building knowledge for social work practice, compassionate, problem-solving and practical endeavor.
|
|
research process
|
develop hypotheses -> operationalize concepts -> design your research -> collect data -> process and analyze data -> interpret the results -> write it up.
|
|
ways of knowing
|
common sense, authority, popular media, personal experience, scientific inquiry, tradition.
|
|
scientific alternative
|
a way of thinking about and investigating assumptions that is most likely to help us do our jobs as social workers effectively.
|
|
features of scientific method
|
produces tentative, provisional knowledge; is empirical; strives for objectivity; employs certain rules, procedures, and techniques that are transparent.
|
|
flaws in scientific inquiry
|
inaccurate observations, overgeneralization, selective observation, ex post facto, and ego involvement.
|
|
characteristics of pseudoscience
|
extreme claims based on testimonials, overgeneralization, unusual and speculative explanations for its effectiveness, and concocted jargon.
|
|
types of scientific knowledge
|
descriptive, predictive, prescriptive.
|
|
philosophical paradigms of research
|
positivism and constructivism
|
|
research methods
|
quantitative, qualitative
|
|
EBP
|
a process in which practitioners consider the best scientific evidence available that is pertinent to a particular practice decision as an important part of their decision-making.
|
|
EBP process
|
questioning, search for evidence, critically appraise the evidence, choose the most appropriate one, use it, evaluate. QUESCC
|
|
how to address feasibility obstacles to EBP
|
planning out the scope, time, money and ethics.
|
|
voluntary participation
|
participation is voluntary, without coercion, intimidation, or promises of rewards.
|
|
informed consent
|
revealing all aspects of research that might influence a decision to participate. consent is a clear, usually written, agreement to participate.
|
|
anonymity
|
respondent may be considered anonymous when the researcher cannot identify a given response with a given respondent
|
|
confidentiality
|
researcher is able to identify a given person's responses but essentially promises not to do so publicly.
|
|
dual relationship
|
researcher and practitioner are the same. practitioner uses patients as study participants.
|
|
withholding treatment
|
not giving patients treatment to use them as a control group.
|
|
use of placebos
|
not giving patients actual treatment.
|
|
Institutional Review Board
|
mandatory for research in agencies receiving federal money. researchers can try to ensure that their studies are ethical by obtaining the approval of an independent panel of professionals. the board panelists review research proposals involving human subjects and rule on their ethics.
|
|
deductive process
|
theory -> hypothesis -> observation -> confirmation. THOC
|
|
inductive process
|
observation -> pattern -> hypothesis -> theory. OPHT
|
|
hypothesis
|
a tentative and testable statement about the relationship between variables, translated from a research question.
|
|
features of a good research question/hypothesis
|
value-free, narrow, specific, clear, has more than one possible answer, testable, purposeful in policy and practice.
|
|
attributes
|
characteristics of persons or things.
|
|
variables
|
logical groupings of attributes.
e.g., gender: its attributes are male and female. |
|
constants
|
aspects of an experiment that do not change.
|
|
independent variable
|
influence, cause, or affect the phenomenon being studied; precede the dependent variable in time; are not determined within the system under investigation; their causes lie outside the system (classic e.g., background characteristics of individuals, such as race/ethnicity, sex, age, region).
|
|
dependent variable
|
variables most interesting to researchers, influenced by independent variables, determined within the system under investigation, e.g. student achievement scores.
|
|
mediating variable
|
a variable that comes between IV and DV in the causal chain.
|
|
moderating variable
|
not influenced by IV but that can affect the strength or direction of the relationship between IV and DV.
|
|
confounding variable
|
an extraneous variable whose presence affects the variables being studied so that the results you get do not reflect the actual relationship between the variables under investigation.
|
|
control variable
|
a variable that is 'held constant' or partialled out; conclusions can be radically different when confounding or lurking variables aren't controlled.
|
|
types of relationships between variables
|
no relationship, correlational, causal, non-linear, linear
|
|
no relationship
|
indicates no relationship between two variables. a correlation coefficient of 0 indicates no correlation.
|
|
correlational
|
in both positive and negative correlation, there is no evidence or proof that changes in one variable cause changes in the other variable. A correlation simply indicates that there is a relationship between two variables.
|
|
causal
|
when one variable causes a change in another variable. these types of relationships are investigated by experimental research in order to determine if changes in one variable actually result in changes in another variable.
|
|
linear
|
proportional; positive or negative.
|
|
non-linear
|
curvilinear: a relationship in which the nature of the relationship changes at certain levels of the variables.
|
|
limitations of correlational analysis
|
truncated or restricted range, and bivariate outliers (outliers that throw off the correlation; if they aren't controlled for, the true correlation is obscured).
|
|
degrees of imposed control
|
experimental, quasi-experimental, pre-experimental, non-experimental
|
|
experimental
|
a research method that attempts to provide maximum control for threats to internal validity by 1. randomly assigning individuals to experimental and control groups. 2. introducing the independent variable (which typically is a program or intervention method) to the experimental group while withholding it from the control group. and 3. comparing the amount of experimental and control group change on the dependent variable.
|
|
quasi-experimental
|
designs that attempt to control for threats to internal validity, and thus permit causal inferences, but that are distinguished from true experiments primarily by the lack of random assignment of participants.
|
|
pre-experimental
|
pilot study designs for evaluating the effectiveness of interventions that do not control for threats to internal validity.
|
|
non-experimental design
|
observation only; no intervention, etc.
|
|
number of subjects in a study
|
group vs. single subject
|
|
nature of data
|
quantitative, qualitative, mixed
|
|
quantitative
|
things that are quantifiable (temperature, weight, etc).
|
|
qualitative
|
data that are not easily reduced to numbers; meanings (transcripts of focus groups, face-to-face in-depth interviews).
|
|
mixed
|
a design for collecting, analyzing, and mixing both quantitative and qualitative data in a single study or series of studies to understand a research problem. improves generalizability with deep and contextual understanding of the phenomenon of interest.
|
|
time dimension
|
cross-sectional and longitudinal
|
|
cross-sectional
|
examines a phenomenon by taking a cross section of it at one time and analyzing that cross section carefully.
|
|
longitudinal
|
describes processes occurring over time by conducting observations over an extended period. even if data are collected at one point in time, the study is still longitudinal if the data correspond to events that occur at different chronological points. better for single cases because it varies the IV.
|
|
major purpose of the study
|
exploratory, explanatory, descriptive, evaluative.
|
|
exploratory
|
to provide a beginning familiarity with the topic/area; typical when a researcher is examining a new interest or when the subject is relatively new and unstudied; usually with a small, non-representative sample (insufficient to provide conclusive answers to research questions).
|
|
descriptive
|
to describe situations or events.
|
|
explanatory
|
takes the description further and explains why.
|
|
evaluative
|
to evaluate social policies, programs, and interventions, could be done via exploration, description, and explanation.
|
|
construct an instrument
|
to develop and test measurement instruments that can be used by other researchers or by practitioners as part of the assessment or evaluation aspects of their practice; the question at hand is whether a measurement instrument is useful and valid enough to be applied in practice and research.
|
|
when to use single case evaluation design
|
the primary treatment goal is change in some client's behavior, attitude, or perception, and that change is something the practitioner can directly influence through work with the client. useful in evaluating interventions and programs and in monitoring client progress.
|
|
key characteristics of single case evaluation
|
a single case; time is how the IV varies.
|
|
steps in single case evaluation
|
define the IV and DV, decide how to measure the DV, conduct the baseline phase, conduct the intervention phase, graph and analyze the data.
|
|
triangulation
|
measuring more than one indicator of the target problem using multiple instruments (e.g., self-report corroborated by a significant other).
|
|
measurement and data collection issues
|
what, when, and how to measure, and who should measure.
|
|
stability and discontinuity
|
want stability within phases and discontinuity between (change).
|
|
types of single case design
|
AB, ABAB, multiple baseline, multiple component (ABCD).
|
|
AB
|
basic single-case design
|
|
ABAB
|
withdrawal/reversal design: taking away the intervention and reintroducing it.
|
|
multiple baseline
|
different people (disjointed interventions), different behaviors, different settings.
|
|
multiple component
|
attempt to determine which parts of an intervention package really account for the change in target behavior.
|
|
complicating factors
|
incomplete data, ethical issues, the client's awareness, carryover effects.
|
|
ways to enhance generalizability
|
direct replication, clinical replication, systematic replication.
|
|
direct replication:
|
repetition of intervention by same practitioners.
|
|
clinical replication
|
repeating an intervention package in the same setting with clients who have multiple problems that cluster together.
|
|
systematic replication
|
vary the setting, practitioners, problems or a combination.
|
|
visual analysis
|
level (a change in level is discontinuity), stability, and trends (improvement, deterioration, or no change).
|
|
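The level comparison described above can be sketched in a few lines. A minimal example (the AB-phase data are invented for illustration) comparing phase means to see the discontinuity between baseline and intervention:

```python
# Hypothetical AB-design data: weekly counts of a target behavior.
from statistics import mean

baseline = [7, 8, 7, 9, 8]      # A phase: stable, high level
intervention = [5, 4, 3, 3, 2]  # B phase: lower level, improving trend

# Change in level between phases; a clear drop is the discontinuity
# between phases that visual analysis looks for.
level_change = mean(baseline) - mean(intervention)
print(round(level_change, 1))  # 4.4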
strengths of single case design
|
inexpensive, little time, understandable, instantaneous feedback.
|
|
weaknesses of single case design
|
relies on the client's measurement; needs the support of supervisors and administration; findings are vulnerable to misuse by supervisors and administration.
|
|
purpose of program evaluation
|
to assess the success of programs, identify problems in program implementation, and obtain information needed for program planning and development; a practical purpose; depending on the purpose, summative or formative.
|
|
politics of program evaluation
|
people with vested interests may exert pressure that influences program evaluation research.
|
|
internal evaluators
|
greater access to program information, more feasible, involved staff could be more open, but could be less objective.
|
|
external evaluators
|
not always free of politics.
|
|
barriers to address pitfalls in PE
|
intervention infidelity, contamination of the control condition, general resistance of staff, resentment of the case management protocol, and client recruitment and retention.
|
|
how to address pitfalls of PE
|
involve agency staff, bring food, don't use jargon, keep it brief and simple, use graphs and illustrations; minimize interaction between control and experimental group members; use ongoing recruitment; reimburse participants; do a pilot study to assess intervention fidelity, potential problem areas, etc.
|
|
types of PE
|
goal attainment and monitoring program implementation
|
|
goal attainment
|
mostly quantitative; is the program effective?
|
|
monitoring program implementation
|
did they adhere to the implementation protocols, or how best to implement and maintain the program?
|
|
causality
|
what caused the effects? were the desired effects reached?
|
|
experimental research validity
|
can observed changes be attributed to your program or intervention and not to other possible causes?
|
|
requirements in a causal relationship
|
temporal precedence, co-variation of IV and DV, and no alternative explanations.
|
|
temporal precedence
|
the cause precedes the effect in time.
|
|
co-variation of IV and DV
|
IV and DV should correlate with each other.
|
|
internal validity threats
|
maturation, history, statistical regression, instrument change, testing, selection bias.
|
|
history
|
extraneous event that occurs during the course of the research which then influences the outcome.
|
|
maturation
|
normal growth between pretest and posttest; participants would have learned these concepts anyway, even without the program.
|
|
testing
|
the effect of taking a pretest on the posttest; pretesting may have primed participant answers.
|
|
instrument changes
|
any change in the test or measurement from pretest to posttest.
|
|
statistical regression
|
a statistical phenomenon that occurs whenever you have a nonrandom sample from a population and two measures that are imperfectly correlated; the pretest average of an extreme group will appear to improve even if no treatment is given.
|
|
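Regression to the mean, as defined above, can be simulated directly. A minimal sketch (all numbers are made up for illustration): select extreme pretest scorers on a noisy measure, give no treatment, and their average still "improves" on the posttest.

```python
# Simulation of statistical regression (regression to the mean).
import random
from statistics import mean

random.seed(0)
true_ability = [random.gauss(50, 10) for _ in range(1000)]
# Two imperfectly correlated measures of the same ability (independent noise)
pretest = [a + random.gauss(0, 10) for a in true_ability]
posttest = [a + random.gauss(0, 10) for a in true_ability]

# Select the lowest-scoring pretest cases: an extreme, nonrandom sample
worst = sorted(range(1000), key=lambda i: pretest[i])[:100]

pre_avg = mean(pretest[i] for i in worst)
post_avg = mean(posttest[i] for i in worst)
print(post_avg > pre_avg)  # True: scores rise with no treatment at all
```

The selected group scored low partly because of unlucky measurement noise; on retest the noise is fresh, so scores drift back toward the mean, mimicking a treatment effect.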
selection bias
|
any factor other than the program that leads to posttest differences between groups.
|
|
external validity
|
determines generalizability; refers to the extent to which the intervention could apply to a larger population.
|
|
types of experimental design-experimental family
|
pretest/posttest control group design, posttest-only control group design, Solomon four-group design, alternative treatment design with pretest, dismantling studies. SAPPD
|
|
quasi-experimental family tests
|
nonequivalent comparison groups design, time series design.
|
|
pre-experimental family tests
|
one-shot case study, one-group pretest-posttest design, posttest-only design with nonequivalent groups, pilot.
|
|
randomization
|
randomly assigning voluntary participants to groups; helps with internal validity.
|
|
random sampling
|
picking participants randomly; helps with external validity. not present in single-case designs.
|
|
measurement bias
|
the researcher/rater distorts the results of the experiment in favor of the hypothesis.
|
|
research reactivity
|
controls might learn about the treatment from treated people.
|
|
diffusion of treatment or imitation
|
a threat to the validity of an evaluation of an intervention's effectiveness when practitioners who are supposed to provide routine services to a comparison group implement aspects of the experimental group's intervention in ways that tend to diminish the planned differences in the interventions received by the groups being compared.
|
|
compensatory equalization
|
practitioners in the comparison (routine treatment) condition compensate for the differences in treatment between their group and the experimental group by providing enhanced services that go beyond the comparison treatment regimen for their clients.
|
|
compensatory rivalry
|
practitioners in the comparison group decide to compete with practitioners in the experimental group by putting in extra effort.
|
|
resentful demoralization
|
clients in the comparison group become resentful that they don't get to receive the treatment and drop out.
|
|
attrition
|
experimental mortality; non-random dropout (e.g., in an HIV study a patient dies of HIV; in an obesity study a client dies of a heart attack). |
|
trend
|
studies changes within some general population over time.
|
|
cohort studies
|
examine more specific subpopulations as they change over time.
|
|
panel studies
|
examines the same set of people each time.
|