Monitoring & Evaluation

1. Contextual Indicators

2. MEAL

2.1. Designing Logic Model

2.1.1. Theory of Change

2.1.1.1. Long-term change

2.1.1.1.1. Direct & Indirect Impact

2.1.1.2. Pre-condition & Pathway

2.1.1.2.1. Connection between changes

2.1.1.3. Assumptions

2.1.1.4. Elements

2.1.1.4.1. Final Goal

2.1.1.4.2. Activities

2.1.1.4.3. Intermediate outcome

2.1.1.4.4. Assumptions

2.1.1.4.5. Evidence

2.1.1.4.6. Enablers

2.1.1.5. Factors affecting the approach

2.1.1.5.1. Purpose of Theory of Change

2.1.1.5.2. Size & complexity of the project

2.1.1.5.3. Stage of development

2.1.2. Result Framework

2.1.2.1. Goal

2.1.2.2. Strategic objectives

2.1.2.3. Intermediate Result

2.1.2.4. Outputs

2.1.3. Logical Framework (LogFrame)

2.1.3.1. Objective Statement (Col 1)

2.1.3.2. Assumptions (Col 4)

2.1.3.3. Indicators (Col 2)

2.1.3.3.1. Quantitative

2.1.3.3.2. Qualitative
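
The LogFrame's column layout above can be sketched as a simple data structure. This is an illustrative sketch, not a donor template: the field names are invented, and Col 3 (means of verification), though not shown in the map, is included because it is the standard fourth element of the classic four-column logframe.

```python
from dataclasses import dataclass, field

@dataclass
class LogframeRow:
    """One level of a logical framework (goal, outcome, output, or activity).

    Fields mirror the classic 4-column logframe: objective statement (Col 1),
    indicators (Col 2), means of verification (Col 3), assumptions (Col 4).
    """
    level: str                                                  # e.g. "Goal", "Outcome", "Output"
    objective_statement: str                                    # Col 1
    indicators: list = field(default_factory=list)              # Col 2: quantitative or qualitative
    means_of_verification: list = field(default_factory=list)   # Col 3
    assumptions: list = field(default_factory=list)             # Col 4

# Illustrative example row (all content invented)
outcome = LogframeRow(
    level="Outcome",
    objective_statement="Improved household food security in target districts",
    indicators=["% of households with an acceptable Food Consumption Score"],
    means_of_verification=["Annual household survey"],
    assumptions=["No major drought during the project period"],
)
print(outcome.level, "-", len(outcome.indicators), "indicator(s)")
```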

2.1.4. Indicators

2.1.4.1. 1. Custom Indicator

2.1.4.2. 2. Standard Indicators

2.1.4.3. USAID Criteria for Selecting Indicators

2.1.4.3.1. 1. Direct, 2. Objective, 3. Useful for management, 4. Attributable, 5. Adequate, 6. Disaggregated as necessary

2.1.4.4. Performance Indicator's Process

2.2. Step 1: Develop a participatory process for identifying performance indicators. Step 2: Clarify the result. Step 3: Identify possible indicators. Step 4: Assess the best candidate indicators using the indicator criteria. Step 5: Select the best performance indicators. Step 6: Fine-tune when necessary.
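
Steps 3-5 of the process above can be sketched as a simple filter: score each candidate indicator against the selection criteria and keep the best performers. The candidate indicators, their criterion ratings, and the cut-off of 4 are all illustrative assumptions, not part of any official procedure.

```python
# Criteria from the USAID-style selection list (lowercased keys, illustrative)
CRITERIA = ["direct", "objective", "useful", "attributable", "adequate", "disaggregated"]

def score(indicator: dict) -> int:
    """Step 4: count how many selection criteria a candidate indicator meets."""
    return sum(1 for c in CRITERIA if indicator["meets"].get(c, False))

# Step 3: possible indicators, with invented criterion ratings
candidates = [
    {"name": "# of farmers trained",
     "meets": {"direct": True, "objective": True, "useful": True,
               "attributable": True, "adequate": False, "disaggregated": True}},
    {"name": "national GDP growth",
     "meets": {"direct": False, "objective": True, "useful": False,
               "attributable": False, "adequate": False, "disaggregated": False}},
]

# Step 5: select the best-scoring indicators above an (assumed) threshold of 4
selected = [c["name"] for c in sorted(candidates, key=score, reverse=True) if score(c) >= 4]
print(selected)  # only the direct, attributable indicator survives
```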

2.3. Planning Activities

2.3.1. Performance Management Plan(PMP)

2.3.2. Indicators Performance Tracking Table

2.3.3. Feedback & Response

2.3.4. Learning Plan
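
The Indicator Performance Tracking Table named above is essentially a grid of indicators by reporting periods, with a target and (once reported) an actual value in each cell. This minimal sketch assumes yearly periods; the indicator name and all figures are invented.

```python
from typing import Optional

# Minimal IPTT sketch: indicator -> baseline + per-period target/actual cells
iptt = {
    "% of children fully immunized": {
        "baseline": 42,
        "periods": {
            "Y1": {"target": 50, "actual": 48},
            "Y2": {"target": 60, "actual": None},  # not yet reported
        },
    },
}

def achievement(indicator: str, period: str) -> Optional[float]:
    """Actual as a share of target for one cell, or None if unreported."""
    cell = iptt[indicator]["periods"][period]
    if cell["actual"] is None:
        return None
    return cell["actual"] / cell["target"]

print(achievement("% of children fully immunized", "Y1"))  # 0.96
```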

2.4. Collecting Data

2.5. Analyzing Data

2.6. Using Data

3. Monitoring

3.1. Continual & systematic collection of data to provide information about the project

4. Evaluation

4.1. Focused on the design, implementation & results of ongoing & completed projects

4.1.1. Evaluation categories

4.1.1.1. External or independent Evaluation

4.1.1.2. Self Evaluation

4.1.1.3. Semi-independent (outsourced to a third party)

4.1.1.4. Joint Evaluation (Internal & External)

4.1.1.5. Peer evaluation

4.1.1.6. Participatory Evaluation

4.1.1.7. End of phase

4.1.1.8. Ex-post Evaluation

4.1.1.9. Real Time Evaluation(RTE)

4.1.2. Empowerment Evaluation principles

4.1.2.1. Improvement

4.1.2.2. Community Ownership

4.1.2.3. Inclusion

4.1.2.4. Democratic Participation

4.1.2.5. Social Justice

4.1.2.6. Community Knowledge

4.1.2.7. Evidence based strategy

4.1.2.8. Capacity building

4.1.2.9. Organizational learning

4.1.2.10. Accountability

4.1.3. Cross-Cutting Approaches

4.1.3.1. Gender responsive

4.1.3.2. Utilization Focused

4.1.3.3. Equity Focused

4.1.3.4. Empowerment

4.1.3.5. Participatory

5. Accountability

5.1. A commitment to respond to the needs of stakeholders in the project's activities

6. Learning

6.1. Adaptive Management

6.1.1. Change as we go

6.1.2. Pause and reflect on what works and what doesn’t work

6.1.3. Understand what the beneficiaries want/need/think in order to improve the program

6.1.4. Allow everyone to understand goals and objectives so they can work towards them and correct course when needed

6.1.5. Deliberately plan for how to meet external challenges or factors outside of control that affect the program

6.2. Constant improvement in implementation to achieve better results

6.3. Learning Plan /Agenda

6.3.1. Learning Question

6.3.2. What are the key learning questions to: • Explore, challenge, or validate the development hypotheses and underlying assumptions? • Fill gaps in our technical evidence base? • Develop scenarios and identify game changers?

6.3.3. Timing/Key Decision Point

6.3.3.1. At what key decision-making points will learning from answering these questions be relevant? How will we apply learning during design and implementation? If limited applicability / relevance to key decisions, reconsider whether this should be a priority learning question.

6.3.4. Learning Activity

6.3.4.1. What learning activities will we implement to answer these learning questions? When/how will they be implemented? When/how will we analyze and synthesize our learning? Before finalizing learning activities, determine whether anyone else has already investigated the learning question. If no one has, consider who else might have this question and how you might collaborate with them to answer it.

6.3.5. Resources

6.3.5.1. Who will be responsible for implementing learning activities? What additional resources do we need (event/activity budgets, etc.)? Identify which of the resources needed are already available and which would need to be obtained.

6.4. CLA

6.4.1. CLA in the Program Cycle

6.4.1.1. COLLABORATION

6.4.1.2. LEARNING

6.4.1.3. ADAPTING

6.4.2. Enabling Conditions

6.4.2.1. CULTURE

6.4.2.2. Processes

6.4.2.3. Resources

7. M&E system

7.1. Methodology

7.1.1. Define the scope & purpose

7.1.2. Perform a situational analysis

7.1.2.1. Mechanisms

7.1.2.1.1. A literature review

7.1.2.1.2. Site visits

7.1.2.1.3. Group discussions

7.1.2.1.4. Interviews with different groups

7.1.3. Consult with relevant stakeholders

7.1.4. Identify the key levels

7.1.5. Select key focus area

7.1.6. Fill in the grid

7.1.7. Work out the details

7.1.8. Integrate the M&E system (horizontally & vertically)

7.1.9. Roll out the system

7.2. Components

7.2.1. Organizational Structures with M&E Functions

7.2.2. Human Capacity for M&E

7.2.3. Partnerships for Planning

7.2.4. Frameworks/Logical Framework

7.2.5. Work Plan and costs

7.2.6. Communication, Advocacy and Culture

7.2.7. Routine Program Monitoring

7.2.8. Surveys and Surveillance

7.2.9. National and Sub-national databases

7.2.10. Supportive Supervision and Data Auditing

7.2.11. Evaluation and Research

7.2.12. Data Dissemination and Use