87 Cards in this Set

When do you want to think about doing evaluations?

Ideally during planning and implementation, not just at the end.

What is meant by the methods of evaluation? And what are some examples?

The methods indicate the way in which data are collected as part of the evaluation. Examples include surveys, interviews, and observations (you can use one, all, or a combination).

What is the dependent variable?

The outcome variable. This is the same thing specified in your outcome objectives. The end goal.

What is reliability (consistency)?

The extent to which the data are free of errors. Is the instrument, tool, measure, or test used to collect the data flawed? Is there a reason respondents might vary from day to day? Think about how temporality (where, when, and how data are collected) can affect reliability.
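
One common way to check consistency is test-retest reliability: administer the same instrument twice and correlate the scores. A minimal sketch, with invented scores (statistics.correlation requires Python 3.10+):

    import statistics

    # Hypothetical scores from the same six respondents on two days.
    day1 = [12, 15, 14, 10, 18, 11]
    day2 = [13, 14, 15, 10, 17, 12]

    # A high correlation suggests the instrument yields consistent results.
    print(statistics.correlation(day1, day2))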

What is validity?

The degree to which the tool captures what it is intended to measure. Is it accurate?

Evaluation Design

The grand scheme that delineates when and from whom data are collected.

Retrospective designs

Gather data looking backward in time from the point of the intervention.

Prospective designs

Gather data forward in time; begins at a point prior to the start of the intervention.

Time Span of the Evaluation

The length of time encompassed by the evaluation. How close is the evaluation to the intervention (Ix)?

Time Span: Cross Sectional

Data collected at one point in time. A "snapshot."

Time Span: Longitudinal

Data collected over time

Time Span: Pre-test/post-test

Data collected before and after the Ix.

Attrition

The loss of participants over time. It can be due to death, moving, loss of interest, or inability to locate participants; some of these can be controlled for.

Group that participates in the Ix

Experimental/Intervention/Exposed Group

Group that did not participate in the Ix

Control/Comparison/Unexposed group

What does your eval design heavily depend on?

THE QUESTIONS YOU WANT ANSWERED

What are some common eval questions?

Does a particular program cause a particular change in knowledge, attitudes, or behavior? What aspects are essential to the change? Which is most influential? Will this work across communities?

Threats to Internal Validity: Instrumentation

The instrument could be a scale or a blood pressure (BP) cuff: do you see changes in how it works over time? This also applies to the people using the tool; instruments and observers must be consistent.

Threats to Internal Validity: Regression to the Mean

Over time, people at the extremes will drift back toward the mean. This is a natural phenomenon.
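
A minimal simulation of the phenomenon (all numbers are illustrative): people selected for extreme scores on a noisy first test score closer to the population mean on a retest, even with no intervention at all.

    import numpy as np

    rng = np.random.default_rng(0)
    true_score = rng.normal(50, 10, 10_000)        # stable underlying trait
    test1 = true_score + rng.normal(0, 5, 10_000)  # noisy first measurement
    test2 = true_score + rng.normal(0, 5, 10_000)  # independent retest

    top = test1 >= np.percentile(test1, 95)        # the "extremes" on test 1
    print(test1[top].mean())                       # inflated by lucky noise
    print(test2[top].mean())                       # drifts back toward 50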

Threats to Internal Validity: Maturation

Mental, cognitive, and physical changes that occur naturally, so observed changes could be attributed to these rather than the program. Also consider how much time is elapsing.

Threats to Internal Validity: Testing

Just by asking the questions, you may tip participants off to what you're researching. How and when you ask matter, as well as what you ask.

Threats to Internal Validity: History

Your observed program results can be due to other internal or external events, i.e., something other than your Ix causing your results.

Threats to Internal Validity: Selection

When you have a non-equivalent comparison group, how people got into the Ix is an alternative explanation for your results. Be alert for this bias when working with a non-equivalent comparison group.

Designs: One group, Post test only

Data are collected on the outcome from participants only, and only after completion of the program. No comparison group. Pros: fast, easy, cheap. Cons: history and maturation issues; less depth.

Designs: One group, pre-test/post-test

The evaluation collects outcome data only on participants, before and after the program. Pros: only one group is needed; provides a baseline assessment; observations happen more than once, so changes can be tracked. Cons: no comparison group; maturation, history, instrumentation, regression to the mean, and testing threats. You can't be sure whether changes result from those threats or from the Ix.

Designs: Comparison Group, Post-test only

Collects data from participants as they complete the program, as well as from nonparticipants in the target audience. Pro: can rule out history and maturation. Con: groups are not randomly assigned.

Designs: Ecological Design

Considers differences between groups as opposed to differences between individuals; comparison happens only at a population level. The ecological fallacy can arise. Con: only group-level data.

Designs: One group, Time series.

Collects data at multiple times across the Ix period. One of two designs that can be used if the entire population is the unit of analysis. Pros: more data; can control for maturation, testing, and regression to the mean; helps identify external factors.

Designs: Multiple group, Time series.

Data are collected from a targeted population and a comparison population at several time points before and after the program. Pros: repeated measures; two groups; enhanced external validity. Cons: seasonal or other variation may intrude.

Designs: Two group, Retrospective (Case Control)

Uses historical, existing data to compare those with and without the outcome, based on whether they were exposed to the program.

Designs: Patched-up Cycle

Allows the post-program data from participants in an initial cycle to become a comparison group for the pre-program data from the next cycle of participants. Pro: history and maturation threats are avoided.

Designs: Two Group, Pre-test/Post-test

Collects outcome data from participants and nonparticipants, both before and after the program. Participants and non-participants should be as similar as possible.

Designs: Two group, Pre-test/Post-test, with random assignment (Randomized Trial)

The highest-quality level of evidence. Random assignment is the process of determining at random who receives the health program and who does not (this maximizes comparability). Pros: controls for confounders; avoids ambiguity about temporal order, since exposure always comes first.
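
A minimal sketch of simple random assignment (the roster and group sizes are made up):

    import random

    participants = [f"id_{i:03d}" for i in range(40)]  # hypothetical roster
    random.shuffle(participants)                       # chance alone decides
    intervention = participants[:20]                   # receives the program
    control = participants[20:]                        # does not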

Considerations in choosing a design: Causality

Is the Ix the causal factor?

Considerations in choosing a design: Bias

Are there biases?

Considerations in choosing a design: Retrospective vs Prospective

Do you want to look back in time or forward in time?

Considerations in choosing a design: Time span

Time allotted for pre/post tests

Considerations in choosing a design: Groups

Comparison & participant group should be as similar as possible.

What is the importance of focusing your evaluation?

Having a focused set of things you want to evaluate will keep costs down, make the data manageable, and keep the program and evaluation focused on key outcomes. Realistically, you can't evaluate everything.

Three Characteristics of the "right" evaluation questions

1. Can relevant data be collected (can you get the information)? 2. Is more than one answer possible? (Perceptions, attitudes, and beliefs: is it subjective?) 3. Do stakeholders want the information this evaluation will produce?

Sampling: Random Sample

If each member of the population has a known chance of being chosen to participate, then the sample is random.
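
A minimal illustration with Python's standard library (the sampling frame is hypothetical); every member has an equal, known chance of selection:

    import random

    frame = list(range(1, 501))        # hypothetical frame of 500 members
    sample = random.sample(frame, 50)  # 50 members drawn purely by chance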

Sampling: Nonrandom sample

Sampling does not rely on chance to determine who is in the sample

Sampling, Hard to Reach Pop: Snowball Sampling

When respondents refer you to other potential respondents

Sampling, Hard to Reach Pop: Venue Based sampling

Sampling through locations that cater to a specific type of person. All about place! E.g. hospitals, bars, liquor stores

Sampling, Hard to Reach Pop: Respondent Driven Sampling

Combines snowball sampling with a mathematical model that weights the sample to compensate for the fact that the sample was collected in a non-random way
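
A rough sketch of the weighting idea, assuming a simple 1/degree scheme in the spirit of the RDS-II estimator (the data are made up; real RDS estimators are more involved). Respondents with larger networks are more likely to be recruited, so each is down-weighted by reported network size:

    # Hypothetical respondents: outcome of interest plus reported network size.
    respondents = [
        {"has_outcome": 1, "degree": 20},
        {"has_outcome": 0, "degree": 5},
        {"has_outcome": 1, "degree": 8},
    ]
    weights = [1 / r["degree"] for r in respondents]  # down-weight big networks
    estimate = (sum(w * r["has_outcome"] for w, r in zip(weights, respondents))
                / sum(weights))
    print(estimate)  # weighted prevalence estimate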

Sampling: Convenience Sample

Inviting everyone who is accessible to participate, e.g., asking your class.

Sampling: Purposive sample

Participants are chosen based on a specific characteristic, e.g., race/ethnicity, disease status, age, or enrollment in a school district.

Sampling: Quota sample

Participants are selected on a specific characteristic, but the number of participants is matched to their representation in the population at large, e.g., matching race to demographic representation in Buffalo.
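
A quick sketch of computing quotas from population shares (the shares and sample size are made up):

    # Match each group's sample count to its share of the population at large.
    population_shares = {"group_a": 0.47, "group_b": 0.37, "group_c": 0.16}
    sample_size = 200
    quotas = {g: round(s * sample_size) for g, s in population_shares.items()}
    print(quotas)  # {'group_a': 94, 'group_b': 74, 'group_c': 32}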

Sample Size: Power

The probability that the evaluation will find a statistically significant result if the program did in fact have an effect.

Sample Size: Effect Size

The degree of difference that is expected.
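
One widely used effect-size measure (an assumption here; the card doesn't name one) is Cohen's d, the difference in group means divided by the pooled standard deviation:

    import statistics

    def cohens_d(group1, group2):
        """Difference in means divided by the pooled standard deviation."""
        n1, n2 = len(group1), len(group2)
        v1, v2 = statistics.variance(group1), statistics.variance(group2)
        pooled_sd = (((n1 - 1) * v1 + (n2 - 1) * v2) / (n1 + n2 - 2)) ** 0.5
        return (statistics.mean(group1) - statistics.mean(group2)) / pooled_sd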

Sample Size: Power Analysis

The statistical process of analyzing the relationships among the power of a study, the type of statistical tests to be done, the number of variables, the effect size, and the sample size. Plug in the effect size and the power you want, and the analysis tells you the sample size you need. The bigger the sample size, the greater the ability to detect differences.
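
A minimal sketch using the statsmodels library, assuming a two-sample t-test; the effect size, alpha, and power values are illustrative:

    from statsmodels.stats.power import TTestIndPower

    # Per-group sample size needed to detect a medium effect (d = 0.5)
    # with 80% power at alpha = 0.05.
    n = TTestIndPower().solve_power(effect_size=0.5, alpha=0.05, power=0.8)
    print(round(n))  # roughly 64 per group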

Response Rate

The percentage of individuals who were invited to participate and who actually do.
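
The arithmetic, with made-up counts:

    invited, participated = 500, 310
    response_rate = 100 * participated / invited
    print(f"{response_rate:.1f}%")  # 62.0%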

Non-response bias

The extent to which there are systematic differences between those who participate and those who don't.

Incentives

Money, gift certificates, cash, prepaid return postage, etc., given to encourage participation.

Response Bias

The intentional or unconscious way that an individual selects their responses.

Social Desirability

Telling you what you want to hear. This decreases with open-ended questions, assuming the survey is anonymous. It can also be reduced by asking the same question in different ways, or by placing socially desirable options last.

Primary Data

The evaluator collects the data, e.g., surveys, focus groups, in-depth interviews.

Secondary Data

Existing data are used, e.g., vital records, medical records, national surveys.

Established measures

Take advantage of validated questions that already exist. Validated measures and scales should be used; don't just create something new.

Physical data

Biological samples, measurements (height, weight)

What do really good qualitative studies do?

Provide data that change how we think about a social process.

What is the aim of great qualitative research?

To understand an experience from the respondent's POV.

Step 1 to Qualitative Research

Learn as much as you can about the people, events, locations, stakeholders, and gatekeepers involved in the phenomenon you seek to study.

Getting In: Join Relevant Organizations

Sign up for online forums and listservs; attend meetings. E.g., support groups, advisory boards.

Getting In: Volunteer

Act as a volunteer at organizations relevant (even marginally) to your research interest. E.g. Homeless shelters

Getting In: Shadow

Contact individuals at organizations related to your interest and ask to shadow them. E.g. How clinics are currently using PrEP

What do Key Informant Interviews do?

Give you an opportunity to ask people for their perspective on constructs relevant to your specific research interests. May need IRB approval if you're doing several, so they can be used as data.

Qualitative Sampling: Convenience

Participants or cases that are accessible and willing

Qualitative Sampling: Critical Cases

Exemplar cases or participants, cases important in some unique way.

Qualitative Sampling: Deviant Cases

Highly unusual participants or cases

Qualitative Sampling: Maximum Variation

Participants or cases with differing experiences. You don't want everyone to be the same; this helps to get breadth.

Qualitative Sampling: Random Purposeful

Participants or cases randomly selected from a large sample pool.

Qualitative Sampling: Typical Cases

Usual or normal cases.

Qualitative Sampling: Theory based

Participants or cases with the theoretical construct you're interested in.

"Inconvenient Facts"

Qualitative researchers should try to identify these.

The Ethnographic Trial: Inconvenience Sampling

Are there people outside of the sample whose existence is likely to have an impact on the argument I'm making?

In Depth Interviews

Can be open-ended, structured (don't deviate from the script), or semi-structured. Conducted with individual participants. Data collected: the interviewer's field notes (including nonverbal cues), the audio-recorded interview, and relevant notes.

Observation: Non-participant Observation

Researcher acts as a "fly on the wall."

Observation: Participant Observation

Researcher engages in activity with respondents

Observation: To understand...

How events unfold, how people act and make decisions in realistic settings, how people interact, how processes change over time, and who the key players are.

Observation: Sample & Data

Sample: individuals, settings, or both. Data: researcher's field notes.

Ethnography

To learn and understand cultural phenomena that reflect the knowledge and system of meanings guiding the life of a cultural group. Combines interviews, observation, and participant observation. Sample: individuals, settings, or both. Data: field notes, interview transcripts.

Focus Groups

Assess group dynamics. Led by a moderator. Data: interview notes, audio recording, video recording.

General Strategies for Data Analysis

Transcribe data; read and re-read; code (for reliability and consistency); analyze themes or patterns; iterative, evolving process; data saturation.

IRB and Qualitative Research

If you're collecting data, you need IRB approval. Informed consent may take various forms depending on the nature of the data collection.

Nurturing lasting relationships

Respondents, informants, and gatekeepers may be people you will repeatedly ask to participate. Think about how you will nurture those relationships: handwritten thank-you notes, volunteering your own time, providing research updates.