The experimental design (randomized treatment group). This design involves assembling a set of individuals who are eligible and willing to participate in the program and placing them into a group that will receive the intervention (the treatment group). Our group will consist of 40 African American girls between the ages of 13 and 17.
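The selection step described above can be sketched in a few lines of Python. This is a minimal illustration, not part of the evaluation plan itself: the applicant IDs and the pool size of 100 are assumptions, and it simply shows a random, duplicate-free draw of the 40-member treatment group from a larger pool of eligible, willing applicants.

```python
import random

# Hypothetical pool of eligible, willing applicants (IDs are placeholders).
eligible_pool = [f"applicant_{i:03d}" for i in range(1, 101)]

random.seed(42)  # fixed seed so the random draw is reproducible

# Randomly draw the 40-member treatment group from the eligible pool.
# random.sample selects without replacement, so no applicant is chosen twice.
treatment_group = random.sample(eligible_pool, k=40)

print(len(treatment_group))       # 40
print(len(set(treatment_group)))  # 40 (no duplicates)
```

Fixing the seed matters in practice: it lets the evaluator document and reproduce exactly how the group was drawn.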
Determine whether the research method or design is right for each evaluation question, because one method may not work for all questions. To accomplish the goal of the design, the evaluation design must be precise and effective. The author from Poverty Reduction & Equity identifies three categories of methodologies for evaluation designs: 1) experimental; 2) quasi-experimental; and 3) non-experimental. The Government Accountability Office (GAO) Designing Evaluations handbook, however, identifies four: 1) process and outcome monitoring or evaluation; 2) quasi-experiments: single group; 3) quasi-experiments: comparison group; and 4) randomized experiments: control group.

Step 4: Establish data sources and collection methods to get credible information. When gathering data for your evaluation design, use a diversity of sources, such as program history, program staff and materials, earlier research on the program, and the participants themselves. Data collection occurs in the arena where the participants experienced the problem or issue. Also, focus on collecting data that will capture the outcomes and link to the observed …
The last step includes accurately reporting information from the literature review, summarizing or synthesizing that information, and organizing the data so the reader can understand how it relates to the evaluation question(s). Did the information gathered meet the needs of the program, and how can I turn everything gathered into something meaningful? After clarifying the evaluation design question(s), the next step is to address the five basic components:
• “The evaluation questions, goals, and scope;
• Information sources and measures, or what information will be required;
• Data collection methods, including any sampling rules, or how information or evidence will be acquired;
• An analysis plan, including evaluative criteria or comparisons, or how or on what basis program performance will be judged or evaluated;
• Evaluation of study limitations.”