Clara Richards

Lesson #16: Bring others into monitoring, evaluation and learning of capacity building activities

At Spaces for Engagement, our M&E approach was tied to the way the programme was developed: since we presented yearly plans of activities to be conducted, we mainly measured success annually by evaluating the impact of each planned capacity building (CB) activity. However, after reading Antonio Capillo's reflection in his previous post on this topic, I believe we could have begun every year by designing an approximate theory underpinning the expected change or impact over time, however short that time frame was. In other words, we could have defined the contribution that each CB activity was expected to make to the desired change.

In our case we mainly used traditional written or online evaluations by participants, conducted at the end of each course or conference. In some cases, we also ran a follow-up survey (via SurveyMonkey) several months later.

The evaluations mostly revealed a high degree of satisfaction with the trainings, but they did not allow us to detect whether new skills and capacities had actually been acquired. However, when participants were interviewed after courses, their testimonials frequently highlighted how the training had helped them not only to work better individually but also to do things differently with their teams. Of course, this is just self-assessment, so to improve the way we measure this type of result in the future we should combine personal evaluations with external methodologies that can corroborate it. In this respect, Capillo's idea of seeking ways to triangulate responses is a very good one.

However, we also need to look beyond newly developed capacity, since CB is not just an end in itself for us. In fact, in Learning purposefully in capacity development (2009), Peter Taylor and Alfredo Ortiz stress the importance of measuring how capacity development contributes to wider development processes and to sustainable capacity, in addition to measuring the quality of the CD process itself. This is clearly related to the objectives of CB and what we want to achieve through it, which is not a minor question at all.

In their analysis, time, as expected, plays a key role. They state that it may also be useful to gear CB more towards nurturing long-term, even unintended, outcomes. They propose the notion of standing capacity, which is useful for measuring capacity beyond pre-programmed, immediate performance. We tend to do better at the latter: in our case we have been very effective at evaluating each course and workshop at its end through written evaluations that assess issues like the degree of satisfaction with materials, tutors, content, facilitation, etc. This is very important for improving upcoming similar activities, but it reveals very little about whether and how new capacities have been developed.

This tension between evaluating ad hoc, short-term activities and assessing our contribution to mid-term outcomes surely resonates with what most of us observe in the field. In fact, one participant in an online course who was interviewed for this paper put it very graphically: “I am still taking the course.” He mentioned that he was still reading some of the additional materials we had recommended, applying some of the tools he had learned in his work, and re-organizing his ways of thinking about and approaching the issues that emerge in his work.

Taylor and Ortiz go on to argue that open systems thinking and learning approaches, through the use of storytelling, may prove more strategic and efficient than the instrumental approaches often used by donors. This became very clear to us when we heard participants' stories about why they decided to take the course and what had changed in them and in their work after taking it. However, we seldom have the time and resources to hold these kinds of conversations and then systematize what emerges so as to detect valuable lessons.

Another approach is the one suggested by Capillo: “the ideal approach to evaluation design is to have a theory of change tested through mixed designs, using a combination of quantitative and qualitative methodologies to obtain a reliable estimate of the change we want to track – taking in consideration issues like selection and response bias and small sample size – and explore the ways impact was reached and change happened.” I believe this could become a good way forward, though in developing countries we would seldom have enough resources to gather this type of data throughout a project. I would also add an emphasis on working jointly with beneficiaries, both in designing the theory and in checking how we could all use the results to learn and improve our work.

I insist on some sort of participatory methodology because I believe participants are also accountable for change (or the lack of it). In fact, it is very difficult to assess how much of what formal evaluations detect is due to what we have done and how much to the individual's own CB process: listening to participants, one could clearly see that much of what they talked about involved internal processes that had begun before they decided to take part in the CB activity. Thus, if we bring them into this practice we might do more justice to the results.
