On Monitoring, Evaluation and Learning of Capacity Building activities – methodologies
I have already written a post on lessons learned in promoting better links between research and policy in Latin America (Weyrauch 2013). In this second post, I will focus on the steps that lead to reliable estimates of the impact of a training activity – specifically, defining the expected impact, developing impact indicators and estimating the impact of training.
What is the expected impact of CB activities?
As I did previously, I will consider capacity as a multi-dimensional concept that can be defined at individual, organisational and systemic levels. The CB design should, from the outset, define the theory underpinning the expected change or impact over time. What contribution will our CB activity or programme make to that change? Change will occur in the knowledge and skills of individuals in the short term, in behaviour and organisational practice in the medium term, and in an overall better-functioning system in the long term. I think that measuring the impact of training activities on changes in capacity is one of the most fascinating challenges in our sector, and creative and rigorous methodological designs (indeed, rigorous designs can also be creative!) play an important role in succeeding at this task.
What are appropriate indicators to measure impact of CB activities?
The next step is developing the most appropriate performance indicators to measure impact. For example, how do we evaluate a change in participants' confidence in a particular topic addressed by a CB activity? One possible way is to ask participants directly whether they feel more confident after receiving the CB – that tells us something important, but still cannot be considered an objective proxy for confidence. This information could be triangulated by asking participants to select their perceived further CB needs from a list that also includes the topics already covered. We could reasonably expect that individuals who feel more confident will tend to select training needs other than the ones under evaluation. This is just an example; the point I would like to make is that, with all necessary caveats, it is crucial to invest intellectual and creative energy in defining how this and other information will be collected, so that meaningful analysis and assessment of CB activities becomes possible.
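To make the triangulation idea concrete, here is a minimal sketch of how the two signals could be combined. The survey records, field names and topic labels are all hypothetical, purely for illustration:

```python
# Hypothetical post-training survey records; field names are illustrative only.
responses = [
    {"id": 1, "feels_more_confident": True,  "further_needs": ["data_viz", "writing"]},
    {"id": 2, "feels_more_confident": True,  "further_needs": ["stats_basics"]},
    {"id": 3, "feels_more_confident": False, "further_needs": ["stats_basics", "writing"]},
]

TRAINED_TOPIC = "stats_basics"  # the topic the CB activity covered (hypothetical)

def triangulated_confidence(resp, trained_topic):
    """Count a participant as 'confident' only if they both report confidence
    AND no longer select the trained topic among their further CB needs."""
    return resp["feels_more_confident"] and trained_topic not in resp["further_needs"]

confident = [r for r in responses if triangulated_confidence(r, TRAINED_TOPIC)]
print(f"{len(confident)} of {len(responses)} participants confident on both measures")
```

The design choice here is deliberately conservative: a participant counts only when the self-report and the needs-selection proxy agree, which is one simple way to guard against over-optimistic self-assessment.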
What is the estimated impact of CB activities?
The next step in evaluating a CB activity is estimating its net impact. The essential question to answer is ‘What would have happened if beneficiaries had not participated in that activity?’ The first-best solution is to compare the outcomes for participants with those of a control group – a sample of similar (comparable!) individuals who did not participate in the CB activity. When collecting information from a control group is not possible, it is crucial to follow participants over time and trace changes in their knowledge, skills or behaviours that can be attributed to the CB activity they participated in. A significant variety of methodologies – such as experimental and quasi-experimental designs, observational designs, secondary studies, small-n impact evaluations and purely qualitative methodologies (DFID, 2012 and 2013) – can help to carry out rigorous assessments of CB activities. At INASP, for example, we are tackling the practical challenge of comparing target and control groups remotely by using assessment tools that estimate change in knowledge of core concepts by testing participants just before and immediately after the training activity is delivered.
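One common way to turn before/after scores for a treatment and control group into a net-impact estimate is a simple difference-in-differences calculation. The sketch below uses made-up test scores (not INASP data) to show the arithmetic:

```python
# Hypothetical pre/post test scores (0-100) for participants and a control group.
treatment_pre  = [45, 50, 55, 40, 60]
treatment_post = [65, 70, 72, 58, 75]
control_pre    = [48, 52, 47, 55, 50]
control_post   = [50, 55, 49, 57, 53]

def mean(xs):
    return sum(xs) / len(xs)

# Change observed within each group
treatment_change = mean(treatment_post) - mean(treatment_pre)  # 68.0 - 50.0 = 18.0
control_change   = mean(control_post) - mean(control_pre)      # 52.8 - 50.4 ≈ 2.4

# Net impact: the participants' change beyond what the control group experienced
net_impact = treatment_change - control_change
print(f"Estimated net impact: {net_impact:.1f} points")  # ≈ 15.6 points
```

The control group's change (here about 2.4 points) proxies for ‘what would have happened anyway’, so subtracting it from the participants' change approximates the counterfactual question posed above. Real designs would also need comparable groups and appropriate uncertainty estimates.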
I believe the ideal approach to evaluation design is to have a theory of change tested through mixed designs, using a combination of quantitative and qualitative methodologies both to obtain a reliable estimate of the change we want to track – taking into consideration issues such as selection bias, response bias and small sample sizes – and to explore the ways impact was achieved and change happened. I believe this attitude to rigorous evaluation is crucial for producing results that are not only useful for reporting to funders but, especially, for linking the evaluation to both internal and external learning processes. Moreover, as mentioned in the Lessons Learned post, linking reflection and learning to reporting is good practice, and makes something that is formally required also useful and desirable in itself.
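When sample sizes are small, standard significance tests can be unreliable; a permutation test on the observed group difference is one defensible quantitative option. The sketch below uses hypothetical scores and standard-library Python only:

```python
import random

# Hypothetical outcome scores for a small-n participant/control comparison.
participants = [68, 72, 75, 70]
controls     = [60, 63, 65]

def mean_diff(a, b):
    return sum(a) / len(a) - sum(b) / len(b)

observed = mean_diff(participants, controls)  # ≈ 8.58 points

# Permutation test: reshuffle the group labels many times and count how
# often a difference at least as large as the observed one arises by chance.
random.seed(0)  # fixed seed so the sketch is reproducible
pooled = participants + controls
n = len(participants)
trials = 10_000
count = 0
for _ in range(trials):
    random.shuffle(pooled)
    if mean_diff(pooled[:n], pooled[n:]) >= observed:
        count += 1

p_value = count / trials
print(f"Observed difference: {observed:.2f}, permutation p-value: {p_value:.3f}")
```

Because the test works from the data themselves rather than distributional assumptions, it remains interpretable even with a handful of observations, which fits the small-sample caveat above.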
To sum up, I think that an efficient monitoring, evaluation and learning (MEL) system for CB activities is one in which the programme objectives contribute to defining the expected impact on beneficiaries. Once impact has been defined, indicators and data collection methods are (creatively) designed to allow that impact to be measured. Finally, the system has to be supported by ad hoc, context-related evaluation designs that use mixed methodologies to estimate the net impact of the CB activities on beneficiaries at different levels.
In the following post, I will discuss some challenges and paradoxes that I believe we need to consider when carrying out MEL activities related to CB programmes.