Difference Between Monitoring And Evaluation

4.2.1 What is the difference between monitoring and evaluation?
The terms monitoring and evaluation are often used interchangeably, but they are two distinct sets of organizational activities that are related but not identical. Monitoring is the systematic collection and analysis of information during the lifetime of a project (Shapiro, n.d.). Its main aim is to improve the effectiveness and efficiency of an organization or project in terms of its activities. The benchmarks for monitoring are the targets set by the organization or project at the outset and the activities planned to be carried out at its various stages. Monitoring enables the organization to (1) remain on track and (2) identify when things are not going according to plan.
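As a minimal illustration of the comparison that monitoring performs, the sketch below checks hypothetical actual progress figures against planned targets and flags activities that fall behind; the activity names, numbers, and tolerance threshold are all assumptions made for the example, not part of the source.

```python
# Minimal sketch: compare planned targets with actual progress and flag
# activities that are off track. All names and figures are hypothetical.

planned_targets = {
    "boreholes drilled": 20,
    "farmers trained": 150,
    "seedlings distributed": 5000,
}

actual_progress = {
    "boreholes drilled": 12,
    "farmers trained": 148,
    "seedlings distributed": 3100,
}

def monitor(planned, actual, tolerance=0.10):
    """Flag any activity whose actual figure falls more than
    `tolerance` (10% by default) short of its planned target."""
    for activity, target in planned.items():
        achieved = actual.get(activity, 0)
        shortfall = (target - achieved) / target
        status = "off track" if shortfall > tolerance else "on track"
        print(f"{activity}: {achieved}/{target} ({status})")

monitor(planned_targets, actual_progress)
```

In practice the tolerance and the choice of indicators would come from the targets the organization set at the start of the project, as the paragraph above describes.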
[…]
The third category is extra-tester reliability, which means that the evaluator's conclusions should not be influenced by peripheral conditions; that is to say, factors external to the object being evaluated should have no bearing on the outcome of the evaluation.
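One way to picture this idea is to ask whether the same evaluation object receives similar scores under different peripheral conditions. The sketch below is a hedged illustration only: the projects, conditions, scores, and the spread threshold are hypothetical, and the standard deviation check is just one possible way to operationalize the notion.

```python
# Minimal sketch, assuming hypothetical score data: check whether the same
# evaluation object receives similar scores under different peripheral
# conditions (venue, time of day, etc.). A large spread suggests the
# conclusions are being influenced by conditions rather than the object.

from statistics import pstdev

# scores[object] = {condition: score}; all values are illustrative
scores = {
    "project A": {"morning session": 7.5, "afternoon session": 7.8, "off-site review": 7.4},
    "project B": {"morning session": 6.0, "afternoon session": 8.9, "off-site review": 4.5},
}

MAX_SPREAD = 1.0  # tolerated standard deviation across conditions (assumed)

for obj, by_condition in scores.items():
    spread = pstdev(by_condition.values())
    verdict = "stable" if spread <= MAX_SPREAD else "condition-sensitive"
    print(f"{obj}: spread={spread:.2f} -> {verdict}")
```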
Validity
According to Hughes and Niewenhuis (2005), validity is a measure of 'appropriateness' or 'fitness for purpose'. Just as with reliability, there are three categories of validity:
1. Face validity: implies a match between what is being evaluated and how it is being done. For example, if you are evaluating how well someone can bake a cake or drive a car, then you would probably want them to actually do it rather than write an essay about it (Hughes and Niewenhuis, 2005).
2. Content validity: This means that what you are evaluating is actually relevant, meaningful and appropriate and there is a match between what the project is setting out to do and what is being evaluated (Hughes and Niewenhuis, 2005).
3. Predictive validity: An evaluation system has predictive validity if the results are still likely to hold true even under conditions that are different from the test conditions (Hughes and Niewenhuis, 2005).
[…]
However, Hughes and Niewenhuis (2005) argue that some 'subjectivist' methodologies of evaluation would differ.
Transferability
Although each evaluation should be designed around a particular project, a good evaluation system is one that could be adapted for similar projects or extended easily to new activities of a project. That is, if the project progresses and changes over time in response to need, it would be useful if the project team did not have to rethink the whole evaluation system. Transferability is therefore about the shelf-life (robustness) of the evaluation and also about maximizing its usefulness (Hughes and Niewenhuis, 2005).
Credibility
The term credibility refers to the idea that people actually have to accept and believe in your evaluation system. The evaluation therefore ought to be authentic, honest, transparent and ethical. Hughes and Niewenhuis (2005) outlined three points that need to be adhered to in order to ensure credibility: none of your stakeholders should (1) question the rigour of the evaluation process, (2) doubt the results of the evaluation report, or (3) challenge its validity. According to Hughes and Niewenhuis (2005), if any of these points is breached, the evaluation system loses its credibility and is not worth using.
