By Binod Chapagain and Erica Kuhlik
“Results-based management provides a set of principles, approaches and tools which can help us achieve… goals. By always trying to answer the ‘so what difference does our intervention make?’ question, we will keep our focus on how we can support real and sustainable improvements being made in the lives of those we serve.” –Helen Clark, Former Administrator, UNDP
For decades, development organizations such as Pact have worked to monitor and evaluate their programming. All legitimate development organizations now have Monitoring and Evaluation (M&E) experts doing this critical work. Yet few outside of M&E circles understand the topic, and some even try to avoid it. Here, we attempt to debunk popular myths about the art and science of impact measurement.
Myth #1: M&E comes at the end of the project cycle.
According to traditional project cycles, M&E professionals are expected to join only at the final stage of a project. But how can M&E professionals help improve project performance if their involvement comes at the end – if they do not have full information about a project’s scope and did not help develop the indicators that define success? If project objectives and indicators conflict, or are unclear or unmeasurable, it is difficult to measure project achievements.
M&E staff can effectively measure and document project results at different levels, such as processes, outputs, outcomes and impacts. But they must understand the project’s context, intricacies and history. If monitoring is to support management decisions, corrective action and learning, it should be integrated at every step of project cycle management, whereas evaluation typically occurs at midterm and at project close. Incorporating M&E at project inception is also critical to establishing a project’s baseline values, against which impact is measured at completion.
Myth #2: M&E is the sole responsibility of M&E staff.
Evidence shows that M&E increases ownership and improves performance, self-organization and outcomes when it is participatory and directly engages program staff, stakeholders, duty bearers and rights holders in the process.* To achieve this engagement, however, M&E staff need to make monitoring data more accessible, more user-friendly and less intimidating for non-M&E staff.
If monitoring is considered a stand-alone function, and program staff are not involved in data collection and quality control, there will likely be spotty information on true impact, making it hard to learn and adapt programming.
To make M&E more user-friendly, program managers and program and M&E staff should have a clear division of roles in M&E processes. The team should agree on data requirements for the program baseline, progress, outcomes and impacts, guided by the project’s results framework. Big Data trends, if not approached strategically, can make data collection, management and use complex, even intimidating.
An M&E system becomes unusable if it mixes different levels of data – i.e., inputs, processes, outputs, outcomes and impacts – in a single template. Information about inputs and outputs is collected per intervention, event or activity, while measuring outcomes and project impact is a long-term process that uses different tools. Understanding these stages helps determine the right data collection tools, frequency and processes. If we attempt to combine these data collection processes, the tools will be confusing and probably useless.
Finally, M&E may be daunting to non-M&E staff because data is often managed by M&E staff in complex databases and spreadsheets that program teams cannot easily access. However, if M&E is integrated with technology (including mobile technology) and uses interactive software to facilitate recording information and running system-generated reports and dashboards, program team participation in M&E will likely increase, along with relevant skills. This process brings M&E staff, program teams and leadership together in managing and using data for decision-making.
Myth #3: M&E is only about numbers.
In practice, M&E results are indeed usually captured as numbers recorded in a spreadsheet and presented in tables and graphics. It is important to have numbers, but they are meaningless without interpretation and application to program improvement. For example, numbers can report how many people received training, how many drinking water taps were installed or how many children were immunized. But numbers do not explain what changed or how these interventions affected people or the environment. Numbers alone are insufficient to learn the difference our interventions have made – the question central to results-based management. A robust M&E system will clarify what quantitative or qualitative data is needed, and for what purpose.
Myth #4: M&E is a boring job.
There is a general perception that M&E is a boring job, a function that exists to please donors. This may be true if M&E is siloed from program management and strategic decisions, or if M&E staff do not understand program nuances or participate in issue analysis and program development. But when properly integrated into programs, M&E supports evidence-based decision-making, which is central to organizational innovation… and that is far from boring.
Binod Chapagain is a monitoring and evaluation advisor in Pact’s Thailand office.
Erica Kuhlik is Pact’s results and measurement technical manager, based in Washington, D.C.
Lead photo: Community members in Thailand prepare a local resource map. Photo by Binod Chapagain.
*Hilhorst, T., & Guijt, I. (2006). Participatory Monitoring and Evaluation: A Process to Support Governance and Empowerment at the Local Level. Guidance Paper. Amsterdam: KIT.