64 Cards in this Set

  • Front
  • Back
PICO
A way to frame the clinical question: Population, Intervention, Comparison, Outcome.
Ex: kindergarteners with articulation disorders (P), individual pull-out (I) vs. group pull-out (C), ability to consistently produce /s/ (O).
EBP
-A process that aims to give clients and practitioners the information needed to choose the best procedures for a client's benefit.
Evidence-Based Practice: find systematic reviews and individual studies, assess the evidence, make a decision, and monitor day to day for new evidence.
1. Scientific research
2. Practitioner's clinical experience
3. Client's unique attributes
(Any study can be flawed, the highest levels of evidence do not exist for all clinical problems, and EBP is useful for identifying research needs.)
Hierarchy of EBP / Evidence Grading
"well designed"
Ia. Meta-analysis of more than one randomized controlled study
Ib. Randomized controlled study
IIa. Controlled study
IIb. Quasi-experimental study
III. Non-experimental study (correlational or case study)
IV. Expert committee report, consensus, clinical experience of respected authorities
(The strongest evidence comes from more than one randomized study; for low-incidence disorders, a case study may be the best available.)
Parts of a Research Article
I. Intro (problem, lit review, research question)
II. Method (participants, subject characteristics, setting, independent variable, instruments (dependent variable), design, procedures and how they were documented)
III. Results (comparisons, summary, statistical significance)
IV. Discussion (specific conclusions of researcher, limitations of study, conclusions)
Why Research? How do we Research?
-Provides us with truths: evidence on which we base our decisions
-Removes faulty truths
-Introduces theory
*Start with a question
*Develop a problem statement (including the variables of interest, the relationship between variables, and the types of subjects)
How do you solve a research Problem?
1. Gather information and tease out variables. Do a literature review; find support and conflicts. Start broad, end narrow.
Theories
*A set of interrelated constructs, definitions, and propositions that presents a systematic view of phenomena by specifying relations among variables, with the purpose of explaining and predicting phenomena.
*Does research support the theory?
*Theory thrives with scientific validation.
Hypothesis
*A conjectural statement of the relation between two or more variables.
-Generated based on knowledge, theory, and evidence in the literature.
Induction
-Particular to general (through observation and personal experience; the research question is generally arrived at inductively)
Deduction
-General to particular (start with general statements, take a broad thought and apply to personal experiences)
Research Triangle
*Problem statement ->Independent variable -> Dependent Variable
What are Variables?
-Narrowly defined aspects of an event
-Can be measured or manipulated
-Methodology designed to isolate and control variability sources
-Typically categorized as dependent and independent (also active, assigned, intervening)
Independent Variable
-Explains the dependent variable
-Often manipulated (to effect change)
-The "cause" in a study
-Predictors (e.g., GREs predict grad school success)
-Can be active or assigned (categorical / non-manipulated)
*It is important that the IV, or active variable, can be manipulated: treatment is introduced, withdrawn, or varied, in group or single-subject designs.
(Ex: first-year and second-year grad students (assigned / not manipulated); we want to study their success in the grad program based on GRE scores (active).)
Non-Manipulated IV
-Attribute IV (e.g., are males more likely to be diagnosed with autism?)
-Predictor variable (e.g., do scores on the PAT predict reading success in grades 1 and 2?)
Dependent Variables
*The effect.
*Can be measured, not directly manipulated.
*defined operationally
*SLP Research is designed to investigate causes of disorders and effect on behaviors associated with the disorder.
-Categorical-male vs female
-Continuous-based on a score from standardized test, weight (more sensitive indicator)
Experimental Research
Most stringent (only with randomization).
-One or more factors are active / manipulated
-Effects are measured
-Conducted under controlled conditions
-Uses randomized sampling
(Expensive and labor intensive; treatment bias = the Hawthorne effect, in which subjects improve because they know they are being studied.)
Randomized Controlled Study
Standard of treatment efficacy. Should be able to be replicated.
Meta-analysis
Take a group of similar studies and draw a conclusion.
Independent Variable vs Dependent Variable
IV (cause) -> DV (effect)
Predictor -> predicted
Treatment -> outcome
Classification -> criterion
Intervening / Confounding Variables
*Variables that exert or may exert influence on the DV
(Ex: the effect of 30 minutes of exercise per day on heart health. IV = 30 minutes of exercise; DV = heart health; confounding variables are anything that could influence the outcome if not controlled.)
Quantitative Research
-A formal, objective, systematic process in which numerical data are used as evidence to test hypotheses, refine theories, and advance knowledge, technique, and practice (Burns & Grove, 2009)
Group Designs
Large numbers of subjects are utilized to:
-classify or quantify info
-examine relationships between variables
-examine differences between groups on dependent measures to establish a cause and effect relationship.
(Numbers are often based on convenience; the minimum is about 15. Look at correlation and cause.)
Campbell and Stanley's Classification
R = randomization, O = observation / assessment, X = treatment.
Pretest/posttest randomized control group design:
R O X O (random, observe, treat, observe)
R O   O (control group)
Posttest-only randomized control group design (R X O / R O)
Alternating-treatment design (R O X1 O X2 O)
Factorial Research Design
-Two or more IVs
-Not a true experimental design; no manipulation or random sampling
-Multiple levels of the independent variables
(Ex: type of treatment (oral vs. total communication) and age at cochlear implantation; the DV is expressive language.)
Quasi Experimental Study
-Two or more groups (the IV cannot be manipulated easily)
-No random selection
-Extraneous variables cannot be easily controlled (e.g., children who are language impaired vs. typically developing)
-Match groups on relevant variables; nonequivalent control group design (O1 X O2 / O3 O4)
Descriptive Research (non experimental)
-Observe group differences
-Developmental trends
-Relationships among variables
-Observation of relations between attribute IVs and DVs
Ex. personality characteristics in female athletes with VCD.
Types of Descriptive Research
"correlation does not cause causation"
-Comparative
-Developmental (track speech over time, longitudinal or subsets)
-Correlational
-Survey
-Retrospective-ex post facto. past therapy. ex. success of social skills groups in columbia by looking at employment.
Non-Experimental Designs
-No control group
-Usually looking at something other than cause and effect
-Longitudinal (same subjects over a long period of time)
-Cross-sectional (several groups, each representing a different time period)
*Common types:
A. One-group pretest/posttest design: O1 X O2
B. Developmental designs
C. Surveys
D. Correlational designs
Sampling
participants -> treatment -> no treatment
-How one chooses their "population" is critical in confidently interpreting results.
-Sampling creates external validity of study, so the results can be generalized.
-Inferences are only as good as the method used to draw the sample.
First Step in Sampling
*Define a target population
-Characteristics of the population / appropriate selection criteria are identified
-Attempt to isolate only the important characteristics
-Hold constant other non-relevant characteristics
-Choose a representative sample
(Ex: the census every 10 years is used to get information on minorities; a sample is a subgroup of the whole.)
Considering Size and Power of a Sample
-Sample size is the number of participants
-Sample size is related to the "power" of the design
-Power is the ability of a research design to detect significant treatment effects (design sensitivity)
-The smallest sample should be 15-30
-A power of at least .80 (80%), with unbiased / equal selection, is the conventional target
Factors that Affect Design Sensitivity: IV
-strength of treatment (effect size)
-Control condition
-Treatment group integrity (fidelity of treatment)
Factors that Affect Design Sensitivity: Sample Size
-Sample size: increase the sample size; use power-analysis software to obtain 80% power (see the sketch below).
(A larger sample makes it easier to detect improvement over time.)
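
A minimal sketch of the kind of power analysis described above, assuming a hypothetical medium effect size (Cohen's d = 0.5); the numbers are illustrative, not from the cards:

```python
# Power analysis for a two-group comparison: how many participants per group
# are needed to reach 80% power at alpha = .05 for an assumed effect size?
from statsmodels.stats.power import TTestIndPower

analysis = TTestIndPower()
n_per_group = analysis.solve_power(
    effect_size=0.5,   # hypothetical treatment effect (Cohen's d)
    alpha=0.05,        # typical alpha level
    power=0.80,        # conventional 80% power target
    ratio=1.0,         # equal group sizes
)
print(f"Participants needed per group: {n_per_group:.0f}")  # about 64 for d = 0.5
```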
Factors that Affect Design Sensitivity: DV
-Precise unit of measurement (e.g. standardized tests)
-Consistency in measuring procedure
-Uniform response of participants to treatment.
Factors that Affect Design Sensitivity: Statistical Analyisis
-Larger alpha (.05 is typical)
-One-tailed, directional test of the hypothesis (see the sketch below)
-Interval or ratio data
-Control for variance (e.g., analysis of covariance)
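
A minimal sketch of the one-tailed (directional) test mentioned above, using invented treatment and control scores purely for illustration:

```python
# Directional hypothesis: treatment mean > control mean, tested at alpha = .05
from scipy import stats

treatment = [12, 15, 14, 16, 13, 17, 15, 14]   # hypothetical post-treatment scores
control   = [11, 12, 10, 13, 12, 11, 12, 13]   # hypothetical control-group scores

t_stat, p_value = stats.ttest_ind(treatment, control, alternative='greater')

alpha = 0.05
print(f"t = {t_stat:.2f}, one-tailed p = {p_value:.4f}, significant: {p_value < alpha}")
```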
Unbiased Sampling Techniques
-Random (sample vs. census)
-Simple random sample
-Systematic sampling
-Stratified random sample
-Cluster
(These are few and far between; they require a lot of time and money.)
-Leads to the highest level of design, also known as experimental designs (randomized controlled study)
-Ensures results are representative of the desired target group, including surveys
Biased Sampling Techniques
(Non-random)
-Samples of convenience (accidental samples)
-Volunteer
-Deliberate
-Matched
*Not as generalizable, but this is what we deal with most as SLPs.
Random vs. Randomization
-A group can be randomly chosen
-Or a group can be selected from an intact population and then randomly assigned (randomization of treatment)
Ex: foundations were randomly assigned .51 or .52.
-Randomization controls for differences within an intact population
Simple random sampling
-Identify and define the population
-Determine the sample size
-List all members of the population
-Assign all members a consecutive number
-No one sample will be exactly like the parent population, but some samples are better and have a smaller percentage of error (sampling error) than others.
Types: simple, systematic, stratified, cluster (simple random sampling is sketched below).
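
A minimal sketch of the simple random sampling steps above, with a hypothetical roster standing in for the defined population:

```python
# Simple random sampling: list the population, pick a sample size,
# then draw members at random without replacement.
import random

population = [f"participant_{i}" for i in range(1, 201)]  # list all members (numbered 1-200)
sample_size = 30                                          # determined sample size

random.seed(42)                                  # seed only to make the illustration repeatable
sample = random.sample(population, sample_size)  # draw without replacement
print(sample[:5])
```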
Systematic Sampling
-Not used often
-A type of random sampling if the list of subjects is randomly ordered (e.g., pick every 5th person), as sketched below
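
A minimal sketch of systematic sampling using the same hypothetical roster; it only approximates a random sample if the list order is effectively random:

```python
# Systematic sampling: take every 5th person from the roster.
population = [f"participant_{i}" for i in range(1, 201)]
every_fifth = population[4::5]   # start with the 5th person, then every 5th thereafter
print(len(every_fifth))          # 40 participants
```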
Stratified Sampling
-Looks at subgroups in a population (requires knowing the parent population)
-Subgroups must be represented in the same proportion that they exist in the population
-Subgroups or "strata" of particular interest (equal groups / proportional stratified groups)
-Divide on a characteristic you deem important; there is no rule requiring proportional subgroups (see the sketch below)
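
A minimal sketch of proportional stratified sampling, assuming one hypothetical stratifying characteristic (grade level) and a made-up roster:

```python
# Proportional stratified sampling: group by the stratifying variable,
# then sample the same fraction from each stratum.
import random
from collections import defaultdict

random.seed(0)
population = [(f"student_{i}", random.choice(["K", "1st", "2nd"])) for i in range(300)]
sample_fraction = 0.10   # sample 10% of each stratum

strata = defaultdict(list)
for name, grade in population:
    strata[grade].append(name)              # group members by grade level

stratified_sample = []
for grade, members in strata.items():
    k = round(len(members) * sample_fraction)             # keep each stratum's proportion
    stratified_sample.extend(random.sample(members, k))

print(len(stratified_sample), "sampled across", len(strata), "strata")
```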
Cluster Sampling
-Randomly selects groups, not individuals
-All members of the selected groups share similar characteristics (e.g., geographic areas)
-Useful with large populations of interest (see the sketch below)
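
A minimal sketch of cluster sampling, assuming hypothetical "schools" as the clusters:

```python
# Cluster sampling: randomly pick whole groups, then include every member
# of the chosen clusters.
import random

clusters = {f"school_{i}": [f"school_{i}_student_{j}" for j in range(25)] for i in range(12)}

random.seed(1)
chosen = random.sample(sorted(clusters), 3)                           # randomly select 3 schools
cluster_sample = [s for school in chosen for s in clusters[school]]   # all members of each cluster
print(len(cluster_sample), "participants from", chosen)
```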
Sample Size
-30 is a general guideline for correlational, causal-comparative, and experimental studies
-Descriptive studies sample 10-20% of the population
-Computer-generated power analyses can perform the sample-size calculation
Sampling / Subject Selection
-How are you choosing subjects?
-Are you trying to "match" them on possible influencing characteristics?
-Think about your study: list the characteristics of your subjects and possible confounding influences
-Goal: you want results to reflect differences due to the selected variable (IV), not confounding variables
Other Sampling Considerations
-Sampling error
-Sampling bias
Design Considerations
-Between subjects (separate groups of subjects are compared; consider the number of groups and total number of participants)
-Within subjects (repeated measures with the same group of subjects; one group)
Group Designs and Levels of Evidence
*Experimental
*Quasi-experimental
*Descriptive (correlational, survey, developmental, retrospective (after the fact), comparative)
Validity
1. validity of measurements (DV)
2. validity of design
*Effort to remove influence of any extraneous variable that might affect the DV.
*Uncontrolled extraneous variables are the threats to the validity of an experiment
Internal Validity
Concerned with threats: factors other than the IV that affect the DV. *Most important*
External Validity
Extent to which the results can be generalized
Campbell & Stanley's Classification System: MRS. SMITH
(internal validity factors)
M=maturation
R=regression
S=selection of subjects
S=selection by maturation
M=mortality
I= instrumentation
T=testing
H=history.
External Validity and Factors that Affect It
(Similar to the internal validity factors, but not all of them)
-Subject selection (narrowing down the sample)
-Reactive or interactive effects of pretesting (the subject becomes aware of the nature of the study and does what they think the administrator wants)
-Reactive arrangements (1. Environment / Hawthorne effect: patients know they are being watched; John Henry effect: a control group that isn't being treated specially works harder. 2. Diffusion: make sure the two groups aren't talking; is there treatment outside the study, or other therapy?)
-Multiple treatment interference
Fidelity of Treatment
*The intervention is introduced exactly the same way each session
*This is also a threat to the validity of a study, although not one of Campbell and Stanley's
Strategies for Assessing and Enhancing Fidelity
1. Videotape sessions
2. Do booster sessions (patient drift)
3. Use a training manual to train interventionists
4. Have practitioners and interventionists keep a log
5. Conduct interviews to see what was actually delivered to patients (recommended by ASHA)
Considerations in EBP
Current best evidence, clinical expertise (advisers), and client/patient values (therapy fits the needs of the patient)
10 Questions for Critical Appraisal of Research Studies (CARS)
1. (Intro) Are the purpose and goals stated clearly?
2. (Method) Is the sample size adequate and justified?
3. Are the subjects similar enough to your clients to transfer outcomes?
4. Were the measurements valid and reliable?
5. Was a significant change reported?
6. Was the change interpreted for clinical importance?
7. (Conclusion) Were the research questions answered?
8. Are the conclusions consistent with the results?
9. Are the limitations discussed?
10. Are alternative explanations offered?
Research Designs
Group (quantitative) - today's focus
Single Subject
Qualitative
Survey Research
*Follow the basic steps
*Not just a set of questions and answers
*The most difficult part is selecting valid questions for the survey (see the literature)
*Collects quantifiable information from members of a population
*Survey participants are drawn using appropriate sampling techniques
-A type of descriptive research (non-experimental)
-Often assesses attitudes, opinions, preferences, demographics, practices, and procedures
-Varied uses: public opinion, developmental follow-up
-Administered by mail, telephone, or computer
(LCC uses this often, as we are not a "1a" research-funded program; many other schools use lesser "non-experimental" designs.)
How to Construct a Survey: Do's
-Should be brief, attractive, and easy to respond to (1-2 pages)
-Developing subareas (based on informal surveys and experts in the literature) can help in development (focus group)
-Pretest the questionnaire for readability and to determine whether you are targeting valid areas
*Write questions so the information can be quantified:
-Closed questions with forced choices
-Likert scale (5-7 point) rankings
-Written in descending order
How to Construct a Survey: Don'ts
-Ask questions that you don't know the answers to
-Avoid slang, jargon, acronyms, or technical terms
-Avoid negative phrasing
-Avoid open-ended questions
Survey Data Collection Methods
Mail, phone, email, personal administration, interview (mail response rate is small)
Correlational Research
-Asks a relationship question; two or more variables are compared
-Correlation does not imply causation
-In a positive correlation, as one variable increases, the other increases as well
-Strength and direction are indicated by a positive or negative coefficient
-The correlation coefficient ("Pearson r") ranges from -1 to +1; values near ±1 are strong, around ±.35 are low, and around ±.65 are moderate
Ex: verbal GRE scores and the GPAs of grad students are strongly correlated (see the sketch below).
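
A minimal sketch of computing Pearson r for the GRE/GPA example; the scores below are invented for illustration only:

```python
# Pearson r measures the strength and direction of a linear relationship.
from scipy import stats

verbal_gre = [150, 155, 160, 148, 162, 158, 153, 165]  # hypothetical GRE verbal scores
grad_gpa   = [3.2, 3.4, 3.7, 3.1, 3.8, 3.6, 3.3, 3.9]  # hypothetical grad GPAs

r, p_value = stats.pearsonr(verbal_gre, grad_gpa)
print(f"Pearson r = {r:.2f} (p = {p_value:.4f})")  # near +1 here: a strong positive correlation
```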
Co-efficient tells us (scatterplot)
1. strength
2. direction
*The stronger the correlation, the better you can predict one variable from the other*
Correlation and Regression Analysis
-How well can one variable (or set of variables) predict another variable? (See the sketch below.)
-Predictor = IV
-Predicted = DV
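
A minimal sketch of simple linear regression, reusing the hypothetical GRE/GPA values to show how a predictor (IV) estimates a predicted value (DV):

```python
# Fit GPA = slope * GRE + intercept, then predict for a new (hypothetical) GRE score.
from scipy import stats

verbal_gre = [150, 155, 160, 148, 162, 158, 153, 165]
grad_gpa   = [3.2, 3.4, 3.7, 3.1, 3.8, 3.6, 3.3, 3.9]

result = stats.linregress(verbal_gre, grad_gpa)
predicted_gpa = result.slope * 157 + result.intercept   # predict for GRE = 157
print(f"slope = {result.slope:.3f}, r^2 = {result.rvalue**2:.2f}, predicted GPA = {predicted_gpa:.2f}")
```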
EBP Systematic Reviews
-Cochrane Database of Systematic Reviews
-ASHA EBP compendium
-ASHA practice policy docs