Clara Richards

On Monitoring, Evaluation and Learning related to Capacity Building activities (II) – paradoxes and challenges

I have written here about the elements of an effective Monitoring, Evaluation and Learning (MEL) system for capacity building (CB) activities. However, we live in a complex world, one that is less boring (and more stressful?) precisely because of its unpredictability. In this post, I would like to reflect on uncertainty by discussing some paradoxes and challenges that we might encounter when developing and implementing MEL for CB activities, and suggest some ways of tackling them.


Effective approaches, negative impact

Sometimes, unintended dynamics resulting from the provision of CB activities are not accounted for when implementing MEL of CB activities. Organisations usually want their employees to receive training, and we can reasonably assume that employees themselves are usually willing to participate in the training activities they need. If this is the case, and participants gain enhanced knowledge, skills and abilities from participating in CB activities, we are in effect providing the tools that make them more desirable in the job market. The short-term unintended effect is that participants, now with a more competitive profile, become more likely to move to another organisation to progress in their career. As a result, the organisation is now worse off than if its employees had kept their original set of skills and not participated in any CB activity. In paradoxical terms, building capacity at the individual level may actually weaken capacity at the organisational level.

How do we account for this when planning MEL of CB activities? This is an open question without a single right answer. I believe instead that any MEL system should take this into consideration in a context-specific manner. Let us consider, for example, a CB programme for researchers whose objective is to improve participants’ writing skills in academic English. If the training succeeds in increasing participants’ skills, a side effect is that the researchers who benefit from it become more likely to publish higher-quality papers and hence more competitive in the job market. Ergo, they are also more likely to change jobs and leave their current position. If this happens, it will probably be perceived as positive from the participant’s point of view, but it could easily be argued that the original employer who sent them to the CB programme is now weakened as a result of one of its researchers being more skilled than before and leaving because of it. A MEL system that takes this possibility into account from the initial stage, in its theory of change (TOC), will probably be able to capture this phenomenon, which could be broken down into impacts with heterogeneous directions at three levels (capacity levels defined here), as the sketch after the list below illustrates:

  1. At the individual level, the training programme had a positive impact on participants’ skills

  2. At the organisational level, the training had a negative impact on the employer, who sent their researcher to participate in the CB activity and saw them leave the job as a result of enhanced individual skills

  3. At the system level, the training had a positive impact on the whole research system, with more researchers who are more able to write in academic English
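
To make this concrete, below is a minimal sketch in Python of how a TOC could record impacts by capacity level, direction, and whether they were intended, so that the evaluation reports heterogeneous effects rather than a single verdict. The data structure and the example entries are purely illustrative assumptions, not part of any specific MEL toolkit.

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    INDIVIDUAL = "individual"
    ORGANISATIONAL = "organisational"
    SYSTEM = "system"

@dataclass
class Impact:
    level: Level
    direction: str   # "+" or "-"
    intended: bool   # expected outcome vs. side effect
    description: str

# Hypothetical encoding of the academic-writing example above
toc_impacts = [
    Impact(Level.INDIVIDUAL, "+", True,
           "Participants' academic English writing skills improve"),
    Impact(Level.ORGANISATIONAL, "-", False,
           "Employer loses newly skilled researchers to other organisations"),
    Impact(Level.SYSTEM, "+", True,
           "Research system gains more researchers able to write in academic English"),
]

# A MEL report can then group findings by level instead of a single verdict
for impact in toc_impacts:
    flag = "intended" if impact.intended else "unintended"
    print(f"[{impact.level.value:>14}] {impact.direction} ({flag}) {impact.description}")
```

Grouping the findings this way keeps the paradox visible: the same training can be a success at one level and a loss at another.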

A recommendation that could come out of this hypothetical evaluation finding would be for organisations sending their employees to CB activities to secure participants’ prior formal commitment to transfer the newly learnt skills to colleagues in similar positions after the training.

On the reliability of self-perceptions – do you know what you don’t know?

A major challenge in evaluating the impact or effectiveness of CB activities is having to rely on individuals’ self-perceptions to determine whether the expected impact occurred. Even if relevant, self-perceptions tell us only part of the story and need to be supported by more objective measurements. I have mentioned here possible mitigations to make self-perceptions more useful for rigorous evaluation outcomes, and I would like to add one more argument that self-perceptions may be misleading indicators for CB, well expressed in Essentials of Utilisation-Focused Evaluation (Patton, 2012): “The Dunning-Kruger effect describes the phenomenon of people making erroneous judgements about their competence. Those with low competence tend to rate themselves as above average, while the highly skilled tend to underrate their abilities. This means that, ironically and perversely, less competent people rate their own ability higher than more competent people (Dunning & Kruger, 1999)”. This could be considered a contemporary version of the Socratic idea that knowledge is awareness of the limits of what we know.
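
To illustrate why triangulation matters, here is a minimal sketch in Python that pairs each participant’s self-rating with an objective test score and flags the calibration gap that the Dunning-Kruger effect predicts. The data, scales, and thresholds are invented for illustration only.

```python
# Hypothetical data: self-rated skill (1-10) vs. an objective test score (0-100)
participants = [
    {"id": "p1", "self_rating": 9, "test_score": 42},  # rates high, scores low
    {"id": "p2", "self_rating": 5, "test_score": 78},  # rates low, scores high
    {"id": "p3", "self_rating": 7, "test_score": 70},  # well calibrated
]

def calibration_gap(self_rating: float, test_score: float) -> float:
    """Self-perceived minus measured skill, both rescaled to 0-1."""
    return self_rating / 10 - test_score / 100

for p in participants:
    gap = calibration_gap(p["self_rating"], p["test_score"])
    label = "overrates" if gap > 0.1 else "underrates" if gap < -0.1 else "calibrated"
    print(f"{p['id']}: gap = {gap:+.2f} ({label})")
```

A MEL system that collects both measures can report where self-perceptions diverge from tested skill, rather than taking either in isolation.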

This is a complex world, we know that. And I can see two opposite extremes in dealing with complexity, especially in the evaluation debate. Some might argue that the world is so complex that it is reductive to try to measure its outcomes and base decisions on such uncertain information – therefore, it is better not even to try to measure it! Others strongly believe that any change we want to obtain cannot be considered a change if we are not able to measure it.

I stand somewhere in the middle between these two extremes. I think that recognising complexity makes evaluation challenging and exciting at the same time. And I believe that an honest and rigorous approach to MEL can help us recognise the limits, challenges (and paradoxes!) of any methodology we use, while at the same time incentivising us to solve them creatively and responsibly.
