By Clara Richards

Cry for help: monitoring, evaluating and learning in the South, how can we do it better?


[Image: "mundo", by kat m research at flickr.com under CC license]


Catherine Fisher's post on our Lessons learned paper, as well as Antonio Capillo's points regarding M&E of capacity building activities, have made me reflect more deeply on how we can better grapple with the way we "measure" what we achieve through capacity building in terms of strengthening the link between research and policy.

As Catherine rightly highlighted, in our paper we can say very little about how the Spaces for Engagement programme has contributed to any changes in the links between research and policy in Latin America, or indeed in the other continents to which this prolific programme extended. She continues: "the authors acknowledge that their evaluations mainly focused on the quality of the intervention rather than any subsequent changes that resulted, whether in terms of participants' knowledge, behaviours and attitudes or broader outcome changes. However, this limits the amount that can be said about the effectiveness of any of the capacity development activities described on promoting greater use of research in policy processes."

I totally agree that our traditional participant surveys cannot tell us whether our trainings can be linked to specific changes in their knowledge, behaviours, etc. Based on their feedback, and on what they revealed in interviews conducted six months or more after the training, we are informally confident that we have contributed to their understanding of how research can better interact with policy. Some have said that they now think differently when dealing with concrete project challenges in the field; others have discovered that their teams need to improve their strategic planning by using specific tools shared in the courses; and so on.

However, this does not suffice to make the case for our real contribution to promoting better use of research in policymaking. So why can't we have more systematised evidence? In the first place, and as argued in the paper, we did not design this programme knowing that it would last five years. On the contrary, we found out at the end of each year that we could propose how to continue for one more year. This is very common in developing-country contexts: most of us in this field devise and implement one- or two-year projects. In that sense, how much can be monitored and evaluated that is not focused on specific outputs?

Second, with projects being very short and generally scarce in resources, we can seldom invest time and energy in planning, monitoring and evaluation. When I think about other projects, like many supported by DFID, where teams can delve into a long inception phase, the realities are quite contrasting. Obviously, we may expect very different things from one group and the other in terms of strategically mapping out change expectations and tracking whether these are met or not.

Third, most teams in developing countries do not have an M&E expert who can make sure that this lens is properly developed and applied across our activities. Thus, we end up devising some specific and simple way of asking for external feedback on our activities, and we may also add some concrete stories of change or outcomes that every once in a while emerge naturally from these processes. They tend to sound anecdotal, though.

This is certainly not an excuse for not doing better at monitoring, evaluating and learning in the South. On the contrary, it is a cry for help: is there a way to come up with simple and inexpensive ways of doing this better? Could we, for example, team up with Northern experts in larger organisations who can mentor us through a tailor-made process? Or could Southern institutions join forces and pool scarce resources to devise mechanisms that help us collectively come up with new answers and our own methodologies?

I believe there is now a very good opportunity to start learning from others and also coming up with our own responses based on similar challenges and contextual realities. For example, a group within the very interesting programme The Think Tank Exchange is currently exploring how to jointly reflect on and develop new resources so that their institutions can start conducting internal assessments of their performance.

I have also very recently suggested to a member of IDS, who is organising an event in Lima where many think tanks will meet to learn and discuss M&E, that instead of having participants fully apply a fixed methodology or tool developed by others, they could develop their own tools by sharing existing ones. Trainings could become spaces where participants get to know how others do it (we need to build on others, of course!), interact with designed, pre-made methodologies, and then come up with their own resources that reflect how they usually work, what they feel is most relevant to their contexts, etc. Again, co-producing knowledge on how to do this better is something we at P&I would like to get engaged in. Is anyone else inspired by this idea?
