Objective 1, Unit 1: Explain the concept of research design, including 3 distinct sources of variance:
The goal of research design is to manage three sources of variance (a toy simulation appears after this list):

• Maximize experimental variance: the variance in the outcome variable that is attributable to the independent variable under study.

• Minimize extraneous variance: variance attributable to factors unrelated to the variable under study that can nonetheless interfere with, or have an effect on, the outcome variable you are measuring.

• Minimize error variance: variance in the outcome variable that is attributable to random fluctuations.
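A minimal sketch of those three sources, with made-up numbers (a true program effect of 5 points and a selection-bias shift of 3 points; the illustration is mine, not from the lecture):

    import random
    random.seed(0)

    def outcome(treated, extraneous=0.0, noise_sd=1.0):
        effect = 5.0 if treated else 0.0       # experimental variance: maximize it
        error = random.gauss(0.0, noise_sd)    # error variance: minimize it
        return 50.0 + effect + extraneous + error

    def mean(xs):
        return sum(xs) / len(xs)

    # With the extraneous factor controlled, the program effect stands out:
    control = [outcome(False) for _ in range(2000)]
    treated = [outcome(True) for _ in range(2000)]
    print(round(mean(treated) - mean(control), 2))   # ~5.0, the true effect

    # An uncontrolled extraneous factor (e.g. selection bias in the comparison
    # group) masquerades as part of the program effect:
    biased = [outcome(False, extraneous=3.0) for _ in range(2000)]
    print(round(mean(treated) - mean(biased), 2))    # ~2.0, a biased estimate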
How do you minimize extraneous variance?
Extraneous variance is attributable to factors unrelated to the variable under study.

• Minimize extraneous variance in the outcome variable by identifying and controlling threats to design validity (internal validity).

• This is largely done by adding a control group or a comparison group (from his summary slide).

• i.e., control for everything other than the intervention that could affect the outcome, the standard validity threats (selection bias, regression to the mean, maturation, etc.).
How do you minimize error variance?
Error variance is attributable to random fluctuations; it is sometimes called measurement error or unreliability.

• Impacts precision: Total variance = True variance + Error variance (see the sketch below).
• Minimize error variance by selecting appropriate measurement methods and applying them in an appropriate context.
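A minimal sketch of that decomposition, with hypothetical numbers (true-score SD of 15, error SD of 5; mine, not the lecture's). It also shows one way better measurement shrinks error variance, namely averaging repeated measurements:

    import random
    random.seed(1)

    n = 10_000
    true_scores = [random.gauss(100, 15) for _ in range(n)]   # true variance ~ 225
    observed = [t + random.gauss(0, 5) for t in true_scores]  # adds error variance ~ 25

    def var(xs):
        m = sum(xs) / len(xs)
        return sum((x - m) ** 2 for x in xs) / (len(xs) - 1)

    # Total variance = true variance + error variance:
    print(round(var(true_scores)), round(var(observed)))  # ~225 vs ~250

    # Averaging k repeated measurements divides the error variance by k (k = 4):
    averaged = [t + sum(random.gauss(0, 5) for _ in range(4)) / 4
                for t in true_scores]
    print(round(var(averaged)))  # ~231 = 225 + 25/4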
How do you maximize experimental variance?
Maximize experimental variance by:

• Making the groups as different as possible with respect to the variable under study (e.g., the program).

• Reducing the diffusion effect and contamination by keeping the control and study groups separate (from index cards).
Objective 2, Unit 1: Factors affecting outcomes: Describe 3 broad classes of factors influencing the outcomes of an evaluation study.
The outcomes we observe are influenced by:

1) Fidelity of implementation: how faithfully the program is carried out.

2) Design and analysis: are the design and analysis sound? Are the data being interpreted correctly?

3) Theory and program: was theory used to design the program?
Objective 3, Unit 1: Identify and describe 3 abuses of evaluation and give one example of each.
1. Ingratiating/ass-kissing: doing an evaluation with the intent of obtaining positive results.
Example: designing an instrument with the following rating scale: Excellent, Very Good, Good.

2. Eliminating/coercive/ass-kicking: doing an evaluation with the intent of obtaining negative results, or of finding out why a program, intervention, or treatment is not working very well.
Example: designing an instrument with the following rating scale: Extremely Poor, Very Poor, Poor, Fair.

3. Accountability/ass-covering/bureaucratic: doing an evaluation with the intent of accountability to the funding source.
Example: spending so much time and money on record keeping, approvals, and general bureaucracy that funds are diverted from the program's real purpose and it cannot be implemented properly, for example, devoting an extensive amount of time to tracking all activities of all staff all of the time.
What are 4 Categories of Evaluation?
1. Program Planning
2. Program Monitoring/Process Evaluation
3. Impact Assessment
4. Efficiency Evaluation
Objective 4, Unit 1: List three questions that might be addressed by the following category of evaluation:

Program planning
Program Planning Questions:

• What is the nature, scope, magnitude, severity, and urgency of a problem?
• Who is primarily affected by this problem?
• What are the alternative strategies that can be used to resolve the problem?
• What resources are required by the alternative strategies and are they available? (Michelle’s index cards)
• To what extent are they acceptable to the target population? (Michelle’s index cards)

These program planning questions tend to revolve around three issues:

1. Importance: How important is the problem?

What is the magnitude (how many people are affected by this problem)? What is the severity (how severe are the consequences)?

2. Feasibility: Is the problem feasible to address?

3. Desirability (or acceptability): How are you going to address the problem in a way that is acceptable to the audience you are trying to reach?
Objective 4, Unit 1: List 3 questions that might be addressed by the following category of evaluation:

Program Monitoring/(Process Evaluation)
Program monitoring questions: (basically anything related to the fidelity of implementation)
• Is the intended target population being reached?
• Is the program being implemented as planned?
• With what type of program personnel is the program most likely to be successful or unsuccessful?
o What are the qualities of the people who are most likely to be successful?
• What are the key elements of the program?
• What are the underlying theories upon which the program is based?
• What are the factors that are most significant to facilitating or hindering program implementation?
• What are the most appropriate strategies for promoting implementation?
• What is the mechanism by which the program is intended to achieve its desired outcomes?
• What type of personnel will help the program be successful?
Objective 4, Unit 1: List 3 questions that might be addressed by the following category of evaluation:

Impact/Outcome Assessment
Impact (Outcome) Assessment Questions:
• To what extent did the program achieve what it was intended to achieve?
• Were the objectives achieved? If so, did the achievement of the objectives result in the expected impact?
• How useful is the program and the theory upon which it is based?
• How many participants successfully completed the program? (Michelle's index cards)
Objective 4, Unit 1: List 3 questions that might be addressed by the following category of evaluation:

Efficiency Evaluation:
Efficiency Evaluation Questions:
• Are the benefits derived from the program worth the costs?

• Are there other programs that are run more efficiently?

• How could the program be run more efficiently?

• For the amount of money spent, did we maximize the impact? (from Michelle’s index cards)

Two general classes of efficiency evaluations:
1) Cost-benefit analysis: translate effects into monetary terms and ask how much the benefits are worth, in dollars, relative to the costs.

2) Cost-effectiveness analysis: effects are specified in substantive (non-monetary) terms. For example: how much does it cost to increase the colonoscopy screening rate from 60% to 65%? Price out those increments (dollars per percentage point gained) and compare across alternatives, as in the sketch below.
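A worked version of that colonoscopy example with hypothetical numbers (the program cost, population size, and averted-cost figure are all mine, not the course's):

    # Hypothetical: a $200,000 program raises the screening rate from 60% to 65%
    # in a population of 20,000 eligible adults.
    program_cost = 200_000
    population = 20_000
    extra_screened = int(population * (0.65 - 0.60))  # 1,000 additional screenings

    # Cost-effectiveness: cost per unit of the substantive outcome.
    print(f"${program_cost / extra_screened:.0f} per additional person screened")  # $200

    # Cost-benefit would monetize the outcome itself, e.g. if each added screening
    # averts an expected $350 in later treatment costs (also hypothetical):
    net_benefit = extra_screened * 350 - program_cost
    print(f"Net benefit: ${net_benefit:,}")  # Net benefit: $150,000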
Objective 5, Unit 1: List 5 conditions that help establish a causal relationship….Does A cause B? What is the evidence?
1. If A causes B, then A and B must co-vary; that is, there must be a relationship between A and B.

2. There is a dose-response relationship between A and B.

3. If A causes B, then A must precede B (temporality).

4. The relationship between A and B is shown consistently by different researchers studying different groups of people at different times.

5. By minimizing extraneous variables, rival hypotheses (other possible causes of B) are ruled out.

6. There is a theoretical or biological reason why A would cause B.
Objective 6: Unit 1: Describe 10 BARRIERS TO EVALUATIONS.
Describe 10 barriers to evaluations:

1. Lack of money (EXPENSIVE to evaluate)
2. Lack of time
3. Lack of administrative support (PEOPLE reluctant to be evaluated/concerned faults will be revealed)
4. Lack of technical assistance/qualified people
5. Lack of reliable/valid instruments to measure the constructs of interest
6. Ethical constraints/consent
7. Neglect to measure program implementation

- 3 specific issues:

1. Rigor vs. significance: tight controls vs. the ability to generalize.

2. Experimental vs. placebo effects.

3. Internal vs. external validity: efficacy vs. effectiveness; establishing causality vs. generalizing across places and times.
Objective 6: Unit 1: Describe 5 BARRIERS that make school-based program evaluations especially problematic.
1. Consent, cooperation, contamination, unit of analysis.
2. Long-term nature of objectives.
3. Controversy over what should be evaluated (educational objectives, student behavior...).
4. School health education is designed to influence health (rather than illness).
5. School health attempts to influence many different and related behaviors.
6. School health addresses large groups/youth only.
7. School educators may not be trained on evaluation theory/methods, behavior change theory and practice.
8. Teachers may feel threatened.
9. Difficulty observing health behavior in school.
10. Difficulty determining whether appropriate behaviors result from school health education.
Objective 7, Unit 1: Distinguish between formative and summative evaluation
Formative evaluation:
o Evaluating a program during its initial development and use, for the purpose of improving the final product.
o Closely tied to implementation-monitoring questions.

Summative evaluation:
o Evaluating a program to make judgments about its worth.
o Is the program able to do what it was intended to do? Mostly done once the program has ended.

• You want to complete a formative evaluation before a summative evaluation.
Objective 8, Unit 1: Describe 2 definitions for process analysis and give an example of the kind of data that would be collected for each.
If you are doing a process analysis or evaluation focused on implementation monitoring: What are the key aspects of implementing the program under study? “What really happened? What did and didn’t work? What does it mean to ‘implement’ the program?”
• Which learning activities and material were used?

VS.

If you are doing a process analysis about the mediational variables: What are the variables that mediate the relationship between the independent (e.g., program) and dependent (e.g., outcome) variables? What processes or pathways were involved in making the program work? (A toy simulation follows the example below.)
• Did students who learned more content information and skills perform better on the outcomes under study?
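A toy simulation of that mediational pathway, with made-up effect sizes (the program raises knowledge by 2 points and each knowledge point raises the outcome by 1.5; none of this comes from the course):

    import random
    random.seed(2)

    n = 5_000
    program = [random.random() < 0.5 for _ in range(n)]          # 0/1 program assignment
    knowledge = [2.0 * p + random.gauss(0, 1) for p in program]  # program raises knowledge
    outcome = [1.5 * k + random.gauss(0, 1) for k in knowledge]  # knowledge drives the outcome

    def mean(xs):
        return sum(xs) / len(xs)

    # Total program effect on the outcome (~3.0 = 2.0 * 1.5), all via the mediator:
    treated = [y for p, y in zip(program, outcome) if p]
    untreated = [y for p, y in zip(program, outcome) if not p]
    print(round(mean(treated) - mean(untreated), 2))

    # The mediating pathway itself: students who learned more perform better.
    cut = mean(knowledge)
    hi = [y for k, y in zip(knowledge, outcome) if k > cut]
    lo = [y for k, y in zip(knowledge, outcome) if k <= cut]
    print(round(mean(hi) - mean(lo), 2))  # clearly positive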
Objective 9, Unit 1: Describe two different positions regarding what should be evaluated in school health education.

(he did not provide this slide; from Michelle’s index cards).
The goal of school health should be to:
1) Help people make informed decisions about health.
2) Influence the behaviors and health status of people to be consistent with objectives established by health authorities.
Objective 10, Unit 1: List 10 preliminary questions to be considered when planning an evaluation study.
1. What are the questions addressed by the study?
o Is it an outcome evaluation, a formative evaluation, program planning, or an efficacy study?

2. Is the study necessary?

3. Will the study provide the information I need?

4. Can my study be incorporated into a larger project?
o Sometimes called ancillary studies.

5. How much is already known about my question?

6. What have previous researchers learned?

7. What have been the weaknesses of previous studies?

What is:
8. The target population?
9. The overall design?
10. The time and cost required?

What are:
11. The political and practical considerations?
12. Audiences you intend to serve?
13. The assumptions, theories, and limitations?
14. The possible (good and bad) side effects?
15. Exactly what can be measured?
16. Can this be achieved with a reasonable degree of reliability and validity?