65 Cards in this Set

  • Front
  • Back
Science
Latin antecedent: from the word "scire" (to know). One means of obtaining knowledge. Science is best described as a process.
Belief
Conviction of the truth of some statement or the reality of some being or phenomenon, especially when based on examination of evidence; beliefs are potentially testable. "If you don't brush your teeth after every meal, you will get more cavities than if you do brush them."
Opinion
A view, judgement, or appraisal formed in the mind about a particular matter. "You should brush your teeth after every meal."
Advice
Recommendation regarding a decision or course of conduct. "Brush your teeth after every meal."
Intuition
The power or faculty of attaining knowledge or cognition without rational thought or inference. A hunch. Bypasses ordinary ways of reasoning.
Method of tenacity
Holding onto something. Beliefs that are held because they have always been known to be true. Beliefs may exist contrary to evidence. Repetition of these beliefs reinforces them.
Method of authority
Holding a belief because someone told you it is true.
Rationalism
Knowledge gained through reasoning based on assumptions. If the assumptions are wrong, the reasoning will not be correct.
Empiricism
Knowledge based on observation and sensory evidence. "Seeing is believing." Led to the scientific method.
Inductive reasoning
Isaac Newton (late 17th C).
Start with observations. Observations lead to formulation of inferences and theories to explain observations.
Deductive reasoning
Francis Bacon (early 17th C).
Starts with a hypothesis or idea, then deduces what we would expect to find based on the idea. A general premise is used to explain observations. E.g. Premise: gravity makes things fall. Observation: the apple hit me on the head. Conclusion: this was due to gravity.
Basic components of science
Data/observations: collect information.
Hypothesis: tentative explanation of observations.
Theory: explanation that has been thoroughly tested. Explanatory: explains WHY data and patterns exist. Phenomenological: describes a process (phenomenon) without explanation, e.g. laws: "every action has an equal and opposite reaction."
Scientific Process
Observations --> Hypothesis --> Predictions --> Experiment --> Data --> Formulate/revise theory --> Observations
Scientific process: shaping principles
Non-empirical factors involved in selecting a "good theory."
Examples: Law of parsimony (simplest explanation). Empirical adequacy (explains the greatest amount of data). Other factors: self consistency, ties in with other theories.
Audiologist and clinical practice
Old-school clinician: disconnect between clinic and research. "What I do in clinic doesn't have anything to do with research." Primary method of acquiring knowledge. No research collaborations.
New: Clinical scientist. Identify new research needs. Critical evaluation of research articles. Application of research to practice. Scientific contributor.
Evidence Based Practice
Framework of constructs and methods that originated in clinical medicine. "Evidence-based medicine is the integration of best research evidence with clinical expertise and patient values." "The conscientious, explicit and judicious use of current best evidence in making decisions about the care of individual patients. This involves the integration of clinical expertise with the best available external evidence."
Goals of EBP
Clinical expertise.
Best current evidence.
Client values and choices.
EBP Rationales
Substantiate what we do. Establish and maintain professional credibility. Convince our patients and 3rd-party payers that our service options provide benefits that justify the cost. Satisfy accrediting agencies (JCAHO). Avoid mismatch between best evidence and actual practice patterns.
EBP is a process
Constant process of inquiry: "Why am I doing things this way?" "Is there evidence that can guide me to a better outcome?"
EBP Steps
1. Formulate a question
2. Search for evidence
3. Critical appraisal of the evidence
4. Implementation (evidence, expertise, pt values)
5. Follow-up
EBP Step 1
Asking questions.
PICO
Patient - define patient characteristics of interest.
Intervention - define intervention (treatment).
Comparison - define comparison condition (optional).
Outcome - define outcome measure (must be specific).
Behavioral data
Test scores (e.g. % correct speech recognition).
Ratings (e.g. quality of sound, satisfaction, etc.).
Preference (e.g. prefer A over B).
Categorizing behavior - frequency of responses.
Response time.
Physiological data
Amplitudes, latencies/duration, histograms, threshold (physiological).
Other types of EBP questions
Diagnosis - test selection and interpretation.
Harm or etiology - harmful effects of a Tx? How can these be avoided?
Prognosis - what is the patient's likely course of disease, or how to screen for or reduce risk.
Prevention - how can the patient's risk factors be adjusted to help reduce the risk of disease?
Qualitative - helps to understand clinical phenomena with emphasis on understanding the experiences and values of patients.
Diagnosis - e.g. in patients with suspected depression, what is the accuracy of a two-question case-finding instrument for depression compared with six previously validated instruments?
Two types of quantitative research
Descriptive: subject characteristics, relationships, no manipulation.
Experimental: manipulation of variables.
Broad categories of research
Qualitative and quantitative
Variables: categorized according to how they are used
Research variables: independent and dependent.
Measurement variables (may be independent or dependent): qualitative (categorical) or quantitative (numerical).
Extraneous variables: exert an unwanted influence.
Research Variables
Dependent - variable being measured.
Independent - active variable (experimental): manipulated, must have two or more levels (e.g. HA type). Attribute variable (experimental or descriptive): not manipulated.
4 types of descriptive research
Comparative research
Naturalistic research
Survey research
Correlation research
Comparative research
Involves "comparison of subject characteristics, cultures or political systems." Two disadvantages: difficulty attributing causation; categorical comparisons lose information if categories are based on underlying numerical info (severity vs. PTA).
Naturalistic research
Involves the "observations of behavior and/or events that occur in a natural setting." Includes developmental research - very common in language acquisition studies. Can be qualitative or quantitative.
The role of naturalistic observation
Descriptive research provides a data base that can lead to subsequent, more highly controlled research. Two problems threaten the soundness of observations: 1) delimiting the choice of behaviors to observe, 2) reactivity. Controlling threats: unobtrusive observations and measurements (e.g. evidence of behavior: graffiti, tracks).
Advantages and disadvantages of naturalistic observation
Advantage: can help define the problem and raise questions for future controlled studies.
Disadvantages: does not allow us to assess relationships among events. More difficult to reproduce. Difficulty maintaining a descriptive rather than interpretive level of observation.
Developmental research
Measures changes in behavior or subject's characteristics over time. May be conducted in naturalistic or lab setting. Three types of developmental plans: 1) cross-sectional, 2) longitudinal, 3) semi-longitudinal.
Correlational research
To study the relationships among two or more variables. Two main questions: 1) How close is the relationship between the variables (and what is the nature of the relationship)? Answered by the correlation coefficient, which may be positive or negative. 2) How well can performance on one variable be predicted from knowledge of another variable? (Regression analysis.)
Advantages and disadvantages of correlational research
Advantage: Can make predictions if relationship is strong. Can't do this with comparative research.
Disadvantages: Does not imply causation. Hidden variables may exert an influence.
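The correlation coefficient mentioned above can be sketched in a few lines of Python. This is an illustrative example only: the function name `pearson_r` and the hearing-aid-use vs. satisfaction data are hypothetical, not from the cards.

```python
import statistics

def pearson_r(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    mx, my = statistics.mean(x), statistics.mean(y)
    num = sum((a - mx) * (b - my) for a, b in zip(x, y))
    den = (sum((a - mx) ** 2 for a in x) *
           sum((b - my) ** 2 for b in y)) ** 0.5
    return num / den

# Hypothetical data: daily hours of hearing-aid use vs. satisfaction rating.
use = [1, 2, 4, 6, 8]
rating = [2, 3, 5, 6, 9]
print(pearson_r(use, rating))  # close to +1: strong positive relationship
```

A value near +1 or -1 supports prediction via regression; remember that even a strong coefficient does not imply causation.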
Survey research
Assembling information about characteristics, practices and opinions. Broad categories: attitude surveys, demographic/lifestyle surveys.
Case study research
Designed to examine specific individual in depth. To illustrate important principles that might be overlooked in group data. To highlight unusual cases. To explore new therapy technique.
Retrospective research
Compiling data from past clinical records. Potential problems: reliability issues - how were the data collected and by whom? Data may be missing or difficult to quantify. Data may be collected differently by different people.
Two sources of variance
Error variance and systematic variance.
Error variance
Variability in measures caused by unsystematic random fluctuations (due to chance). Factors affecting error variance: 1) inconsistency of measurement, 2) inconsistency of subject responses, 3) inconsistency of test conditions (other than those planned).
Systematic variance
Variability in measures caused either by variables under study or some other extraneous variable.
In selecting a particular measurement for an experiment, these factors should be considered
1) Measurement utility
2) Reliability
3) Validity
Measurement utility
Test sensitivity - test is sensitive when it frequently identifies the presence of a disorder when it is truly there.
Test specificity - test is specific when it rarely identifies a person as having a disorder when the person truly does NOT have it (low false alarm rate).
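The two utility measures above reduce to simple proportions over a 2x2 outcome table. A minimal sketch in Python; the counts are hypothetical screening results, not from the cards.

```python
def sensitivity(true_pos, false_neg):
    """Proportion of people WITH the disorder whom the test identifies."""
    return true_pos / (true_pos + false_neg)

def specificity(true_neg, false_pos):
    """Proportion of people WITHOUT the disorder whom the test correctly
    clears (low false-alarm rate)."""
    return true_neg / (true_neg + false_pos)

# Hypothetical screening results: 90 true positives, 10 misses,
# 80 true negatives, 20 false alarms.
print(sensitivity(90, 10))   # 0.9
print(specificity(80, 20))   # 0.8
```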
Reliability
AKA: reproducibility.
Precision of measurement which can be assessed by examining the consistency or stability of a test or measure. The degree that test scores are free from errors of measurement.
4 types of reliability
1) Test-retest
2) Internal consistency
3) Equivalent forms
4) Observer
Test-retest reliability
"Consistency/stability of a test measure over time." Assessed by determining how well scores are correlated over time; high correlations = good reliability. Scores that differ on two occasions do not necessarily mean poor reliability. What does low test-retest reliability indicate?
Internal consistency
Consistency of items within a test. Assessed by evaluating the split-half reliability: scores on first half of test compared to the second half or odd items compared to even items.
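The split-half procedure above can be sketched as code: correlate the two half-test totals, then apply the Spearman-Brown correction to estimate reliability at full test length. The function name and per-subject scores are hypothetical illustrations.

```python
from statistics import mean

def split_half_reliability(odd_scores, even_scores):
    """Correlate odd-item and even-item half-test totals, then apply the
    Spearman-Brown correction to estimate full-length reliability."""
    mo, me = mean(odd_scores), mean(even_scores)
    num = sum((o - mo) * (e - me) for o, e in zip(odd_scores, even_scores))
    den = (sum((o - mo) ** 2 for o in odd_scores) *
           sum((e - me) ** 2 for e in even_scores)) ** 0.5
    r_half = num / den
    return (2 * r_half) / (1 + r_half)  # Spearman-Brown step-up

# Hypothetical per-subject totals on odd vs. even test items.
odd = [10, 12, 8, 15, 11]
even = [9, 13, 8, 14, 12]
print(split_half_reliability(odd, even))  # high value = consistent halves
```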
Equivalent forms
AKA: alternate forms reliability.
Is performance equivalent across two or more forms of the same test?
Observer
Intra-observer reliability: consistency of observer judgements within an observer.
Inter-observer reliability: consistency of judgements across observers.
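Inter-observer reliability is often reported most simply as percent agreement: the proportion of trials on which two observers' judgements match. A minimal sketch, with hypothetical ratings; more rigorous indices (e.g. chance-corrected agreement) exist but are not covered on this card.

```python
def percent_agreement(obs_a, obs_b):
    """Proportion of trials on which two observers' judgements match."""
    if len(obs_a) != len(obs_b):
        raise ValueError("observers must rate the same trials")
    matches = sum(a == b for a, b in zip(obs_a, obs_b))
    return matches / len(obs_a)

# Hypothetical: two observers judging presence/absence of a behavior.
rater_1 = ["yes", "no", "yes", "yes", "no"]
rater_2 = ["yes", "no", "no", "yes", "no"]
print(percent_agreement(rater_1, rater_2))  # 0.8 (4 of 5 trials agree)
```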
Validity
The degree to which a test measures what it purports to test.
A valid test must be reliable, but a reliable test is not necessarily valid.
Content validity
How well does the test sample the targeted characteristics? Ex. Interested in measuring child's use of irregular verbs. Do the questions elicit the use of irregular verbs?
Criterion validity
How well does the test correlate with a known indicator of the behavior or characteristic it is supposed to measure?
Predictive validity - How well does a test predict a particular outcome?
Concurrent validity - When a test and outside validating measure are done at the same time.
Construct validity
How well does the test reflect a theoretical construct (or explanation) of the characteristic/behavior being measured?
Convergent evidence - Is the new test supported by old measures?
Discriminant evidence - Does test have correlations you would not expect?
Functional validity
How the tests are used and interpreted.
Experimental research
Two purposes of experimental research design: 1) to answer the research question(s), 2) to minimize contamination of results by extraneous variables. Aims of experimental research: 1) manipulate independent variable(s), 2) hold other potential independent variables constant.
Descriptive research
Tends to be messier, more difficult to control extraneous variables.
Internal validity
The degree to which the design has accomplished what it was intended to accomplish. Experimental research: Need to be reasonably certain that change in dependent variables can be attributed to manipulation of independent variable(s) & not by extraneous variable(s). The fewer the alternative explanations, the greater the internal validity.
External validity
Concerns the question of generalizability. To what populations, settings, measurement variables can this effect be generalized?
Within-subject designs
A single group of subjects tested under more than one condition.
Between-subject designs
"Group design"
More than one group of subjects, each group tested under a single condition.
Mixed design
"Group design"
More than one group tested under different conditions.
Threat to internal validity: unmatched groups
Only applicable with 2+ groups.
Subject groups should be equal on important factors. In a single-group design, unmatched subject characteristics just add noise to the data and make the design less sensitive.
Threat to internal validity: attrition
AKA "experimental mortality"
Differential loss of subjects in different groups used in the study. A problem when the design requires carefully matched groups. Not a problem if different groups are not being compared. Can be serious when pairs of subjects are matched across groups.
Threat to internal validity: interactions among groups
Compensatory rivalry: rivalry between control and experimental groups. Experimental group may get "extras," which may affect behavior of controls (competitiveness). Resentful demoralization: may cause loss of motivation; opposite effect of compensatory rivalry. Solution: avoid exposing groups to one another if possible.