What are some ways that research adds credibility to the profession of communication disorders?
- Licensure and certification
- Continuing education requirements / keeping up with research
- Accreditation of training programs that exposes students to research
- A code of ethics that includes research ethics
- Professional meetings to present research
- Professional peer-reviewed journals that publish research
- The profession produces its own independent research
- A professional association that facilitates research
What’s the difference between a primary and a secondary source?
Primary sources are research articles

Secondary sources are textbooks and presentations.
What is a peer review?
Peers will look at your study, and if you are interpreting your results in an inappropriate way, they will let you know.

Peer review is the evaluation of work by one or more people of similar competence to the producers of the work (peers). It constitutes a form of self-regulation by qualified members of a profession within the relevant field. Peer review methods are employed to maintain standards of quality, improve performance, and provide credibility.
Level 1A: Systematic Reviews
A systematic review can be with or without a meta-analysis
Aggregate results from several well-designed studies
Helps to establish causality, which cannot be established with a single study
Provides an estimate of the degree of effectiveness of a treatment (effect size)
Level 1A: Systematic Reviews (continued)
Increases statistical power due to increased n
Helps determine future research needs
Good reviews should use unbiased search and retrieval processes and logical guidelines for including and excluding studies in the analysis
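As a rough illustration (not part of the original cards), here is a minimal sketch of the inverse-variance weighting a fixed-effect meta-analysis uses to pool effect sizes across studies; the study values are invented for demonstration.

```python
import math

# Hypothetical effect sizes (e.g., standardized mean differences) and
# variances from three well-designed studies; numbers are illustrative only.
studies = [
    {"effect": 0.40, "variance": 0.04},
    {"effect": 0.25, "variance": 0.09},
    {"effect": 0.55, "variance": 0.02},
]

# Fixed-effect pooling: weight each study by the inverse of its variance,
# so larger (more precise) studies contribute more to the pooled estimate.
weights = [1.0 / s["variance"] for s in studies]
pooled_effect = sum(w * s["effect"] for w, s in zip(weights, studies)) / sum(weights)
pooled_se = math.sqrt(1.0 / sum(weights))

print(f"Pooled effect size: {pooled_effect:.2f} (SE = {pooled_se:.2f})")
```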
Level I: Randomized Control Trials (RCTs)
Often required by the FDA for introduction of new drugs to the market
They require large n, double-blinding and randomization
They involve several steps, beginning with animal studies, moving to small-n studies with humans, and then to large-n studies
Problems that develop result in termination of the study
Level I: Randomized Control Trials (RCTs)
RCTs are rare in CSD because:
Sample sizes are often small
No FDA requirement prior to new treatments
Expense and expertise required to administer treatment (licensed SLPs)
Randomization often not possible
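A minimal sketch of the randomization step RCTs depend on, assuming a simple two-arm design; the participant IDs are hypothetical.

```python
import random

random.seed(7)

# Hypothetical participant IDs for a two-arm trial.
participants = [f"P{i:02d}" for i in range(1, 21)]

random.shuffle(participants)            # randomize the order
half = len(participants) // 2
treatment_group = participants[:half]   # first half receives the treatment
control_group = participants[half:]     # second half serves as the control

print("Treatment:", treatment_group)
print("Control:  ", control_group)
```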
Level 2: Nonrandomized Intervention Studies
Include quasi-experiments where intact groups are assigned to treatments
Most of the intervention research in CSD is at this level or lower and suffers from:
Lack of randomization
Use of intact groups
Subject selection bias
Small n
Lack of blinding
Level 3: Non-intervention Studies
Prospective Cohort Studies: A group of participants is followed over time. The group is typically at risk for some disorder (e.g., a language disorder) based on the presence of predictor variables (e.g., middle ear pathology)
The group is followed to see who develops the disorder
Can be used to follow the long-term effects of a disease or condition
Level 3: Non-intervention Studies (continued)
Retrospective Cohort Studies: A group of participants is recruited who have already demonstrated a condition and some predictor variable
These studies attempt to find links between the variables and the condition by comparing these participants with those who had the variable but not the condition
Level 3: Case Studies
These investigate participants in detail and then compare their profile to typical cases.
Level 4: Expert Opinion
Has been the basis for much of the information included in position statements, practice guidelines, and preferred practice patterns.
Now there is greater emphasis on use of research in creating these guidelines
Not all research is equal
What is the difference between a hypothesis and a theory?
hypothesis: A tentative explanation, testable with data

theory: A well-developed explanation using a framework of concepts, principles, and hypotheses.
Hypothesis
A tentative explanation, testable with data
Theory
A well-developed explanation using a framework of concepts, principles, and hypotheses.
Dependent Variable
The outcome of interest, which should change in response to some intervention.

The DV is the one that changes in response to the IV
Independent Variable
The intervention, or what is being manipulated.
The DV is the one that changes in response to the IV
Extraneous (Confounding) Variables
A variable other than an IV that might influence the DV.
Operationism
Defining the meaning of terms in accordance with the conditions of their measurement.
Variable
A variable is defined as anything that varies or changes in value. Variables take on two or more values.
Probability
The likelihood that a certain event will or won’t occur relative to other events (Norman & Streiner)
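For example, the probability of rolling a 3 with a fair six-sided die is 1/6 ≈ 0.17, judged relative to the five other equally likely outcomes.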
Detached role of the investigator
The investigator has little interaction with the participants and attempts to eliminate any bias.

This is important for maintaining the credibility and validity of the study
Sample
The participants used in the study
Population
The group to which the researcher would like to generalize the findings
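As a small sketch (the clinic roster and numbers are hypothetical), drawing a random sample from a population frame with Python's standard library:

```python
import random

random.seed(42)

# Hypothetical population frame: 500 clients on a clinic roster.
population = [f"client_{i:03d}" for i in range(500)]

# Draw a simple random sample of 30 participants for the study.
sample = random.sample(population, k=30)

print(f"Population size: {len(population)}, sample size: {len(sample)}")
print("First five sampled:", sample[:5])
```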
Reliability
The consistency of scores

The ability to produce the same score over repeated testing or across different raters

Reliability reflects the degree of variability between repeated administrations or observations; less variability means higher reliability
Interjudge Reliability
Agreement between two or more judges about the occurrence of an observable event. When selecting a dependent variable, a researcher should choose one that two people can observe and agree on the score for.
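As a rough sketch of quantifying interjudge agreement, the code below computes simple percent agreement between two hypothetical judges' codings; a chance-corrected index such as Cohen's kappa is often reported instead.

```python
# Two hypothetical judges coding whether a target behavior occurred
# in each of ten observation intervals (1 = occurred, 0 = did not).
judge_a = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]
judge_b = [1, 0, 1, 0, 0, 1, 0, 1, 1, 1]

agreements = sum(a == b for a, b in zip(judge_a, judge_b))
percent_agreement = agreements / len(judge_a)

print(f"Interjudge agreement: {percent_agreement:.0%}")  # 80% for these data
```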
Test-Retest Reliability
An index of the stability of a particular measurement. No one would think a dependent measure has value if it changes drastically every time you take it.

A measure should yield approximately the same score when given to the same person on two occasions.
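A minimal sketch of estimating this stability as the Pearson correlation between two administrations of the same measure; the scores are invented for illustration.

```python
import math

# Hypothetical scores from the same eight participants tested twice.
time1 = [88, 92, 75, 81, 95, 70, 84, 90]
time2 = [86, 94, 78, 80, 93, 72, 85, 88]

def pearson_r(x, y):
    """Pearson correlation coefficient between two score lists."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

print(f"Test-retest reliability (r): {pearson_r(time1, time2):.2f}")
```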
Validity
How accurately something measures what it is supposed to measure.
It allows us to draw conclusions.
Example: the PPVT-R
Truthfulness: you are measuring what you claim to measure.
Content Validity
Refers to the completeness of the test
Samples from the spectrum of skills
Example: an articulation test that samples only some phonemes would have weak content validity
Criterion Validity
Establishing validity by using an external criterion
Two types of Criterion Validity:
Concurrent Validity
Predictive Validity
Concurrent Validity
When a new measure is compared to a widely accepted standard.
Predictive Validity
A test’s ability to predict performance.

Ex: GRE, SAT, ACT
Internal Validity
The degree to which the differences in the DV are due to the experimental manipulation and not some extraneous variable.
External Validity
The degree to which the results are generalizable beyond the sample used in the study.
Threats to Internal Validity
1. History
An event that occurs outside of the study and affects the DV.
Control for it by…
Using a control group.

2. Maturation
A personal change as a result of growth or maturation.
Control for it by…
Using a control group.

3. Testing
Taking the pretest affects performance on the post-test (practice effect).
Control for it by…
Using a control group or increasing the length of time between tests.

4. Instrumentation
Instruments lack reliability, validity, or both.
Control for it by…
Using reliable, valid instruments; training observers.

5. Statistical Regression
The tendency of extreme scores to regress toward the average score (higher scores naturally drop; lower scores naturally increase).
Control for it by…
Using well-matched control groups.

6. Differential Selection of Subjects
Use of already-formed groups that might be different.
Control for it by…
Taking case histories (if possible) and pre-testing.

7. Mortality
Participants discontinue participation.
Control for it by…
Using a large number of subjects, pretesting and matching, communication, and incentives.
History
An event that occurs outside of the study and affects the DV. Control for it by using a control group
Maturation
A personal change as a result of growth or maturation; control for it by using a control group
Testing
Taking the pretest affects performance on the post-test (practice effect)
Control for it by…
Using a control group or increasing the length of time between tests
Instrumentation
Instruments lack reliability, validity, or both
Control for it by…
Using reliable, valid instruments; training observers
Statistical Regression
The tendency of extreme scores to regress toward the average score (higher scores naturally drop; lower scores naturally increase).
Control for it by…
Using well-matched control groups
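A small simulation (purely illustrative assumptions) of regression toward the mean: participants selected for extremely low pretest scores tend to score closer to the group average on retest, even with no intervention.

```python
import random

random.seed(1)

# Each participant's observed score = stable true ability + random error.
true_ability = [random.gauss(100, 10) for _ in range(500)]
pretest = [t + random.gauss(0, 10) for t in true_ability]
posttest = [t + random.gauss(0, 10) for t in true_ability]

# Select the lowest-scoring ~10% on the pretest, as a study might.
cutoff = sorted(pretest)[len(pretest) // 10]
selected = [i for i, p in enumerate(pretest) if p <= cutoff]

mean_pre = sum(pretest[i] for i in selected) / len(selected)
mean_post = sum(posttest[i] for i in selected) / len(selected)

print(f"Selected group pretest mean:  {mean_pre:.1f}")
print(f"Selected group posttest mean: {mean_post:.1f} (closer to 100)")
```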
Differential Selection of Subjects
Use of already-formed groups that might be different
Control for it by…
Taking case histories (if possible) and pre-testing
Mortality
Participants discontinue participation
Control for it by…
Using a large number of subjects, pretesting and matching, communication, and incentives
Threats to External Validity
1. Pretest-Treatment Interaction
The pretest influences the outcome.
Control for it by…
Using control groups that have and have not been pretested.

2. Multiple-Treatment Interaction
Which part of the treatment influenced the results?
Control for it by…
Establishing comparison/control groups that use different components.

3. Selection-Treatment Interaction
Groups are not randomly assigned or already exist.
Control for it by…
Pretesting, or using random selection.

4. Specificity of Variables
Too many specific variables limit generalization.
Control for it by…
Replicating studies and tweaking them.

5. Treatment Diffusion
Interaction between the treatment group and the control group.
Control for it by…
Keeping the groups as separate as possible.

6. Experimenter Effects
Researchers may unintentionally influence the outcome.
Control for it by…
Double-blind studies / researcher awareness.

7. Reactive Effects
Being part of a study can influence participants (e.g., the novelty effect, the Hawthorne effect).
Control for it by…
Providing information to the participants / extending the study to control for novelty.
Pretest-Treatment Interaction
The pretest influences the outcome
Control for it by…
Using control groups that have and have not been pretested
Multiple-Treatment Interaction
Which part of the treatment influenced the results?
Control for it by…
Establishing comparison/control groups that use different components
Selection-Treatment Interaction
Groups are not randomly assigned or already exist
Control for it by…
Pretesting, or using random selection
Specificity of Variables
Too many specific variables limit generalization
Control for it by…
Replicating studies and tweaking them
Treatment Diffusion
Interaction between treatment group and control group.
Control for it by…
Try to keep groups as separate as possible
Experimenter Effects
Researchers may unintentionally influence the outcome
Control for it by…
Double-blind studies / researcher awareness
Reactive Effects
Being part of a study can influence participants
Control for it by…
Providing information to the participants / extending the study to control for novelty

Examples: the novelty effect, the Hawthorne effect
Nominal Scales
Variables that are categorical and represent discretely separate groups or categories.
Gender, public/private, present/absent
Ordinal Scales
Categories in rank order (high to low; low to high)
Information is ranked, but cannot be multiplied or divided
Voice rating scale, attitudinal scale, Olympic medals.
Interval Scales
Categorize and rank data, and the distance between the scores is equal.
There is no “true” zero point
There is an arbitrary maximum point
Test scores
Ratio Scales
Variables with a true zero point and no arbitrarily set limit
Height, weight, time
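To recap the four levels of measurement, here is a small sketch pairing each scale with a hypothetical example and the operations it supports (mirroring the cards above):

```python
# Levels of measurement with illustrative examples and permitted operations.
scales = {
    "Nominal":  ("clinic type: public/private", "counting categories only"),
    "Ordinal":  ("voice severity rating 1-5",   "ranking; unequal intervals"),
    "Interval": ("standard test score",         "add/subtract; no true zero"),
    "Ratio":    ("speaking time in seconds",    "all arithmetic, incl. ratios"),
}

for name, (example, operations) in scales.items():
    print(f"{name:8s} | {example:28s} | {operations}")
```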