Valuing and enhancing evaluation skills

October 14, 2014

Much has been written about why we evaluate development programs and the importance of rigorous evaluation. But that is not what this post is about; if you need to be convinced of the importance of evaluation, check out this Stanford Social Innovation Review blog entry.

In recent years, we at Pact have placed greater emphasis on rigorous evaluations and strong measurement. When possible, we have begun using quasi-experimental designs so that we can attribute outcomes to our programs. In Nepal, for example, we designed an evaluation for a governance program that measures both the ‘supply’ (local government) and ‘demand’ (citizen) sides of governance in project and control communities.

Program evaluation is not easy, particularly given the increased complexity of development programs: we are often implementing programs with several distinct yet integrated components while simultaneously working within multiple local systems, across various sectors, and in collaboration with stakeholders toward the same development goal. As this complexity has grown, evaluation approaches have evolved in turn, and evaluators now use new methods and techniques in their designs, such as rapid feedback mechanisms, integration of external datasets, and counterfactuals constructed from existing data sources.

Like other international nongovernmental organizations, Pact often hires independent evaluators. Part of the rationale is to avoid the inherent bias of evaluating our own programs, but it can also be because we lack the time. To be perfectly candid, in some cases we use independent evaluators because we lack the evaluation skills. While all of Pact’s M&E personnel are skilled in setting up and implementing robust monitoring and reporting systems, only some have meaningful experience designing and conducting rigorous evaluations.

But evaluation skills can’t be learned overnight, or even through a single course or workshop. As an organization that focuses on capacity development, Pact is committed to strengthening our staff’s competency in evaluation over the long term.

As a first step, we wanted to understand which specific evaluation skills were lacking among our M&E staff. We noticed that when consultants were hired to evaluate a program, they were sometimes given free rein over the design and overall direction of the evaluation, because they were seen as the “experts.” When the evaluation concluded months later, we would receive a report that was not useful, mainly because the right questions had not been examined (for more on this topic, read this Better Evaluation blog). Part of this was due to a misunderstanding about whether program staff may give guidance to an independent evaluation on what is evaluated or how it is evaluated. Another issue is that evaluation is often seen as an “audit” rather than as an opportunity to learn and improve services to beneficiaries.

To address these gaps, we have developed a new handbook, “Field Guide for Evaluation: How to Develop an Effective Terms of Reference.” It is a practical guide to managing and leading evaluation efforts, organized around developing an evaluation plan, or evaluation “Terms of Reference”: each chapter’s learning objectives and exercises relate to sections of an evaluation protocol. The handbook is only one resource among many for developing evaluation skills. We hope to continue providing our staff with useful resources and opportunities to deepen their evaluation practice and bring stronger skills to the programs serving the communities we care about.