Purpose of PE (Program Evaluation)
To provide information on the effectiveness of programs in order to optimize:
- Outcomes
- Efficiency
- Quality of health care
Primary Function of PE
Provide data on the extent to which a program's objectives are achieved
Process Evaluation
Focus on a program's activities
Outcomes are difficult to measure because:
- Lack of consensus on definition
- Not enough time
--> Evaluators usually focus on the extent to which programs achieve more easily measured goals and objectives
Program Impact
- Scope of Effect
- Duration of outcomes
- Extent of influence
PE consists of the following activities:
- Posing questions
- Setting standards
- Designing the evaluation
- Selecting participants
- Collecting, managing, analyzing data
- Reporting results
Setting Standards
Deciding what information is necessary to provide convincing evidence of a program's effectiveness
(Merit is equated with effectiveness)
- Standards must be measurable and credible
Evaluation Design
The manner in which the evaluation environment is structured so that any documented effects can be credibly linked to the program rather than to other factors
Selecting Participants
- Inclusion and exclusion criteria
- Number of people
Reliability
Consistency of Data
Validity
Accuracy of data
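A minimal sketch of how reliability might be checked in practice: test-retest consistency estimated as the correlation between two administrations of the same measure. The scores are invented and the approach is only illustrative, not a method the deck prescribes (requires Python 3.10+ for statistics.correlation).

# Illustrative only: invented scores from two administrations of one measure.
from statistics import correlation

time1 = [12, 15, 9, 20, 17]   # first administration
time2 = [13, 14, 10, 19, 18]  # retest of the same subjects
# A correlation near 1 suggests consistent (reliable) data.
print(round(correlation(time1, time2), 2))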
Evaluation Report
Description of a program's characteristics and explanations/judgments of program's merits
(Created after data are collected; the report is then disseminated)
Baseline Data
Shows participants' conditions before the program begins
- Describes the characteristics of participants
- Used when interpreting effects of the program
Interim Data
Collected during the course of the program to show progress in meeting needs
Formative evaluation
Interim data collected after the start of the program but before its conclusion to describe progress
Process Evaluation
Extent to which planned activities are implemented
- Findings may be reported at any time
- Can show, for example, that proper protocol is not being followed
- Almost always useful
Qualitative Evaluations
Collect data through in-person interviews, observations, and written documents
- Personalized
- Add emotion
- Complement existing data
- Help define goals
Participatory Evaluation
Invites representatives of the community/organizations that will be affected by evaluation findings to join the team as partners
- Reduces health disparities!!
-- Improves research using local knowledge/history
-- Concerns are viewed ecologically
- Assists with resources
Ecologically
In political and social context
Evaluation Framework
Provides guidance for evaluators, helping them to be certain of their evaluation's overall design and to consider the origins and contexts of the programs they examine
PRECEDE-PROCEED Framework
Assessment of:
- Social
- Epidemiological
- Behavioral/environmental
- Educational/ecological
- Administrative/policy
- Implementation/evaluation of process, impact, and outcome
Social Assessment
Determine perceptions of people's needs and quality of life
Epidemiological Assessment
Identify the health problems that are most important
Behavioral/environmental
Uncover factors that might contribute to health problems
Educational/Ecological
Identify factors that might foster change in health behavior
Administrative/policy
Review policies/resources that facilitate or hinder implementations
Implementation/evaluation of process impacts and outcomes
Assess a program's activities and outcomes
RE-AIM
Reach, Efficacy, Adoption, Implementation, Maintenance
== Provides a score that can be applied to a program/evaluation (see the scoring sketch after the Maintenance card)
Reach
Percentage of potential participants exposed
Efficacy
Intended effects and possible unintended outcomes
Adoption
Participation rate of eligible subjects and how well the settings/people deliver the intervention as intended
Maintenance
Long-term effects and the extent to which the program is continued
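A hedged sketch of the RE-AIM scoring idea mentioned above, assuming each dimension is rated on a 0-1 scale and the overall score is their mean; the ratings and the averaging rule are illustrative assumptions, not the official RE-AIM procedure.

# Hypothetical ratings; RE-AIM publications define their own scoring rules.
dimensions = {
    "reach": 0.62,          # share of potential participants exposed
    "efficacy": 0.55,       # intended (and unintended) effects achieved
    "adoption": 0.48,       # uptake by eligible subjects/settings
    "implementation": 0.70, # delivered as intended
    "maintenance": 0.40,    # long-term continuation of effects
}
score = sum(dimensions.values()) / len(dimensions)
print(f"RE-AIM score: {score:.2f}")  # 0.55 with these invented ratings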
Purpose of Framework
- Provides guidance!
- Models predict behaviors/outcomes!
Institutional Review Board
Reviews the design of an evaluation to guarantee that its structure will protect participants' rights and privacy
Belmont Report
Ethical Principles for evaluation research. Includes:
- Respect for persons
- Beneficence: persons are treated in an ethical manner
- Justice: those who reap the benefits and those who bear the burdens are balanced
Quality Improvement evaluations
Aim to correct deficiencies in health care quality
Informed Consent
Ensures that all subjects are knowledgeable about the risks/benefits of participation and about the activities involved
HIPAA
Safeguards the health information of individuals obtaining health care
Evaluation Questions
Guide evaluators in gathering/analyzing data on the characteristics and merits of a program
Objective
Refers to specific goals of a program
Substantive
A nonmonetary outcome, e.g., years of life saved
Cost Benefit analysis
Relationship between cost and monetary outcomes
Cost effectiveness analysis
Relationship between cost and substantive outcomes
Sensitivity analysis
Re-examining a program's results while increasing funding incrementally, to test the sensitivity of effectiveness to changes in funding levels (illustrated in the sketch below)
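A small worked sketch of the three analyses above, with invented dollar figures and an assumed diminishing-returns response curve for the sensitivity analysis; none of the numbers come from the cards.

# Invented figures for two hypothetical programs, A and B.
cost_a, cost_b = 100_000.0, 150_000.0        # program costs ($)
benefit_a, benefit_b = 140_000.0, 160_000.0  # monetized outcomes ($)
years_a, years_b = 20.0, 25.0                # substantive outcome (years of life saved)

# Cost-benefit analysis: cost vs. monetary outcomes.
print("Net benefit A:", benefit_a - cost_a)  # 40000.0
print("Net benefit B:", benefit_b - cost_b)  # 10000.0

# Cost-effectiveness analysis: cost vs. substantive outcomes.
print("Cost per year saved, A:", cost_a / years_a)  # 5000.0
print("Cost per year saved, B:", cost_b / years_b)  # 6000.0

# Sensitivity analysis: vary A's funding incrementally and watch the ratio.
for funding in (80_000, 100_000, 120_000):
    effect = years_a * (funding / cost_a) ** 0.5  # assumed response curve
    print(funding, "->", round(funding / effect, 1), "per year saved")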
Measure effectiveness in terms of:
Program's structure, process or outcomes of health care
Structure of care
environment, setting and organization of care
Process of care
What is done to and for the patients (procedures, tests, prevention, treatment, etc)
Outcomes of Care
The results for the patient
- Morbidity and mortality
Questions asked when comparing two groups:
- Are the groups comparable?
- Is the magnitude of difference meaningful?
Pilot Study
Small-scale study to get estimates of effect sizes
- Useful when you cannot find prior research (an effect-size sketch follows)
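One common way to express the "magnitude of difference" between two groups, and the kind of effect-size estimate a pilot study might produce, is a standardized mean difference (Cohen's d). The scores here are invented, and the deck does not prescribe this particular statistic.

# Invented outcome scores for two comparable groups.
from math import sqrt
from statistics import mean, stdev

group_a = [72, 75, 70, 78, 74, 71]  # program group
group_b = [68, 66, 70, 65, 69, 67]  # comparison group
pooled_sd = sqrt((stdev(group_a) ** 2 + stdev(group_b) ** 2) / 2)
d = (mean(group_a) - mean(group_b)) / pooled_sd  # standardized difference
print(round(d, 2))  # larger |d| = more meaningful magnitude of difference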
Expert
Professional, consumer or representative likely to use results of an evaluation
Guidelines for Expert Panel
- Must specify evaluation questions
- Provide data to assist them
- Select experts based on knowledge/influence and how they will use findings
Published Data Sets
Provide benchmarks to measure effectiveness of new programs
- Must be TRULY applicable
Cost Effectiveness Standards
Program A is effective and is the lower-cost program
Cost Benefit
Program A has MERIT if its benefits equal or exceed its costs
Cost minimization
Programs A and B have equal benefits, but A costs less
Cost Utility
A produces "x" quality of adjusted life years are lower cost than B
Impact Evaluation
Examines proximal outcomes (the relationship between a randomized intervention and effectiveness end points)
Explanatory/predictor Variables
Independent variables that are present before the start of the program and are used to explain/predict outcomes
Outcome Variables
Dependent variables: the factors that evaluators expect to measure
- Ex. health status, knowledge, etc.
Patient Mix
Characteristics of patients that might affect outcomes
- Ex. SES, functional status scores, chronic disorders
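A minimal sketch of the explanatory-variable / outcome-variable relationship, assuming (hypothetically) that baseline severity is a patient-mix characteristic used to predict a post-program score. The data are invented; requires Python 3.10+ for statistics.linear_regression.

from statistics import linear_regression

baseline_severity = [2, 4, 5, 7, 8]   # explanatory/predictor variable (patient mix)
post_score = [85, 78, 74, 66, 60]     # outcome (dependent) variable
fit = linear_regression(baseline_severity, post_score)
# Negative slope: sicker patients at baseline tend toward lower outcomes,
# which is why patient mix must be accounted for when judging a program.
print(round(fit.slope, 2), round(fit.intercept, 2))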
Steps/Questions of ED (Evaluation Design)
1. Evaluation questions (EQ) and evaluation standards (ES)
2. Independent variables
3. Exclusion/inclusion criteria
4. Control group vs. no control group
5. When will measures be taken
6. How often will measures be taken
Minimum Design
Uses standards as the guide
Eligibility Criteria
Foundation for evaluators' inferences about the groups who are most likely to benefit from participation
---> Comes from the evaluation questions
Exclusion Criteria
Exclude potential participants whose inclusion is likely to impair functioning of the evaluation or skew its data
Group Requirement
The control group must start out demonstrably like the experimental group in composition, differing definably and measurably only in exposure to the program
Pre-measures
Help select groups to participate, check the need for the program, and ensure comparability of groups
Prospective Investigation
Data are collected for the specific purpose of the evaluation, commencing at the start of the evaluation
- Provides the most control
Summative evaluation
Retrospective studies and historical analyses
- Leave too much to chance
Randomized controlled trials/true experiments
Evaluations with concurrent controls in which participants are randomly ASSIGNED to groups
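A minimal sketch of random assignment to concurrent groups; the participant IDs and the 50/50 split are illustrative assumptions.

import random

participants = [f"p{i:02d}" for i in range(1, 21)]  # hypothetical IDs
random.seed(42)               # fixed seed so the illustration is reproducible
random.shuffle(participants)  # random order removes selection/membership bias
half = len(participants) // 2
intervention, control = participants[:half], participants[half:]
print(len(intervention), len(control))  # 10 10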
Nonrandomized controlled trials
Quasi Experiments
- Evaluations with concurrent controls in which participants are NOT randomly assigned to groups
Before and after designs
Evaluations with self controls
- Require pre and post measures
- The group's participation serves as its own comparison
--> Subject to biases such as motivation and historical events
--> Depends on the number and timing of measurements
--> Can be ineffective if data are collected before the desired outcome can appear
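A sketch of the self-control idea: each participant's pre-measure serves as their own comparison, so the analysis looks at paired pre-to-post change. The scores are invented.

from statistics import mean

pre = [54, 61, 48, 70, 66, 59]   # pre-measures (the self-controls)
post = [60, 65, 47, 78, 71, 64]  # post-measures after the program
diffs = [b - a for a, b in zip(pre, post)]  # each person vs. themselves
print("Mean pre-to-post change:", round(mean(diffs), 2))  # 4.5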
Membership Bias
Bias found in preexisting groups, because one or more of the characteristics that cause people to belong to the group are associated with the outcome being evaluated
Non-response bias
Participation in an evaluation study is voluntary
- Those who accept the invitation may be somewhat different from nonaccepters in their willingness to try new programs
Descriptive/observational study
Lack interventions
- Use surveys to guide evaluators in developing programs and setting standards for merit
Cross Sectional Design
Collect baseline information on experimental and control groups to guide program development and as a source of data regarding the program/environment
- Portrait of one group at ONE period of time
Cohort
Group with something in common that is followed over a period of time
- Prospective and longitudinal
- Determines the extent to which a program's effects have lasted
Factorial Design
- Experimental or descriptive
- Descriptive: lays out all combinations of several independent variables at one time
- Experimental: three or more factors (independent variables) are possible (a worked 2x2 sketch follows the Interaction Effects card)
Main Effect
Effect of each factor on the dependent variable
Interaction Effects
Effects on the dependent variable that occur as a result of the interaction between two or more independent variables
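A worked 2x2 factorial sketch with invented cell means, showing how a main effect (a marginal difference for one factor) differs from an interaction effect (a difference of differences).

# Invented cell means for factors A (a0/a1) and B (b0/b1).
means = {("a0", "b0"): 10, ("a0", "b1"): 14, ("a1", "b0"): 12, ("a1", "b1"): 22}

# Main effect of A: average outcome at a1 minus average at a0.
main_a = (means[("a1", "b0")] + means[("a1", "b1")]) / 2 \
       - (means[("a0", "b0")] + means[("a0", "b1")]) / 2   # 5.0
# Main effect of B, computed the same way across levels of A.
main_b = (means[("a0", "b1")] + means[("a1", "b1")]) / 2 \
       - (means[("a0", "b0")] + means[("a1", "b0")]) / 2   # 7.0
# Interaction: the effect of B depends on the level of A.
interaction = (means[("a1", "b1")] - means[("a1", "b0")]) \
            - (means[("a0", "b1")] - means[("a0", "b0")])  # 6.0
print(main_a, main_b, interaction)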
Internal Validity
The program is effective in a specific experimental instance
- The evaluator can tell whether a finding is due to the program or to some other factor/bias
- Risks: maturation, instrumentation, history, and attrition
External Validity
Evaluator can demonstrate that the program's results are applicable to participants in other places/at other times
Maturation
People may mature intellectually/emotionally, and this new maturity may be just as important as the program in producing change
Instrumentation
Unless the measures used to collect data are dependable, the evaluator cannot be sure that the findings are accurate
Attrition
Participants who remain in the evaluation sample are different from those who drop out
Hawthorne Effect
In an experimental evaluation, participants behave in an atypical way because they know they are in a special program