Clara Richards

Implementing an M&E system – a postcard from the trenches

‘Seriously?! As if we don’t have enough work already! Besides, we’re researchers, not bureaucrats.’ As jarring as it is to the ear of a monitoring and evaluation specialist, this reaction is not uncommon in think tank corridors upon the introduction of an M&E system. So what’s a believer to do? Well, though I certainly don’t have the answer, I can offer a glimpse into one programme’s M&E (mis)adventures. If you and you and you add your insights to the list, we might be off to a good start. The reflections that follow are from my experience as programme manager at the Governance of Africa’s Resources Programme – a policy research programme of the South African Institute of International Affairs (SAIIA).

The way I see it, two of the main goals of monitoring and evaluation are 1) accountability, and 2) learning. Accountability is crucial. Think tanks usually rely on taxpayers’ money. Unless we can prove that this is money well spent, think tanks might be among the first to fall victim to budget cuts. After all, we peddle the intangible. Claiming that “knowledge is power” – as most of us do – saddles us with the burden of proof. Moreover, M&E increasingly forms an integral part of civil society’s good governance framework. Many of us in civil society see our role, at least in part, as providing oversight, whether over government or the corporate sector. If we want to be taken seriously, we need to ensure that our own houses are in order.

As valid as the reasons above are, those in the trenches – researchers, programme officers – often perceive them as sticks. So what carrots can we offer? I would argue that most of the positive incentives relate to M&E as a learning opportunity. Asking “how can we do what we do better?” allows think tank staff to tap into deeper levels of motivation:

Firstly, most of us in the think tank world are there at least partly because we would like to make a difference. Again, the challenge is that we find ourselves dealing in the intangible. Though we believe in the power of ideas, most of us would admit that at some point we have wished we could point to 50 new boreholes or a newly built bridge before knocking off for Friday drinks. Equally, most researchers can relate to the feeling of spending months slaving over an article, only to send it into outer space with a vague hope that someone, somewhere will read it. Before the time of M&E, the most we could hope for was a sporadic email as evidence that someone “out there” engaged with our ideas. An M&E system helps us not only to be more targeted from the outset, but also to build a bridge between our day-to-day work and the bigger change we would like to see in the world.

Secondly, continuous learning contributes to ongoing improvement and eventual mastery. Those who work in think tanks are not the only ones who value excellence. In fact, in his bestselling book Drive, author Daniel Pink identifies three key elements of motivation: autonomy, purpose and mastery. The first could be the topic of another post; suffice it to say here that the need for autonomy is one that most researchers – and by extension most think tank staff – would recognise. The other two, purpose and mastery, are the subject of this post. Linking our work to a higher sense of purpose was discussed above. In the case of SAIIA, I can attest that its rating as the top think tank in sub-Saharan Africa for a number of consecutive years was a source of pride to its employees. I can also attest that the development and introduction of an M&E system saw a marked acceleration of both institutional and individual learning.

So what are some of the practical ways in which we integrated M&E and learning at SAIIA’s Governance of Africa’s Resources Programme? I will focus specifically on the programmatic (i.e. not institutional) level.

At the outset, it really helped to identify an M&E champion (or champions): someone who sees the value of M&E and who would ideally shoulder a significant proportion of the responsibility for integrating it into a programme’s day-to-day functioning. This person can also act as a buffer against M&E creep – the tendency for monitoring demands to keep expanding until they crowd out the actual work. That said, it is important that all staff are involved, each with clearly defined roles and responsibilities.

For instance, those responsible for periodic M&E analysis provided researchers with feedback and then engaged them in conversation about it. This could take the form of a 10-minute chat around a couple of Google Analytics graphs: “This publication was very popular – any idea why? What accounts for this big spike in downloads? Did the publication date coincide with a big news event? Was it the catchy title? Was there a special effort to disseminate it: a media briefing, a popular Twitter thread, a targeted dissemination drive? Let’s see how people came across the article… when they accessed it…”

We weren’t above some friendly competition either, and kept track of the publications that garnered the most downloads. How about a small awards ceremony as part of the year-end function? It only takes a little nudging and encouragement to convince researchers of the benefits of some good upfront planning, followed by a targeted dissemination strategy. Beyond harnessing people’s innate competitive streak, a little self-promotion also helps to build one’s own professional profile.
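(As an aside for the technically inclined: a download leaderboard like this needs no fancy tooling. Here is a minimal sketch in Python, assuming a CSV export of per-publication download figures – the file name and column headers are my own invented example, not the setup we actually used at SAIIA.)

```python
# downloads_leaderboard.py - minimal sketch of a publication download leaderboard.
# Assumes a CSV export (e.g. from a web analytics tool) with columns
# "publication" and "downloads"; these names are illustrative, not a real schema.
import csv
from collections import Counter

def top_publications(csv_path: str, n: int = 5) -> list[tuple[str, int]]:
    """Return the n publications with the highest total download counts."""
    totals = Counter()
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            totals[row["publication"]] += int(row["downloads"])
    return totals.most_common(n)

if __name__ == "__main__":
    # Print a simple year-end leaderboard.
    for title, count in top_publications("downloads_export.csv"):
        print(f"{count:>6}  {title}")
```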

Similarly, a 5-minute team debrief following an event could yield some valuable insights into what worked well and what didn’t.

This is all good, but tracking downloads or participants’ feedback still left us at the level of short-term outcomes. How does one involve staff in tracking higher-level outcomes? We found that a mid-term review workshop provided a valuable opportunity for more in-depth reflection. The use of “impact logs” also proved useful (see the sketch below). Throughout, staff were also encouraged to tell their stories. For instance, instead of one person picking the four big stories to be included on the programme’s page of the institution’s annual highlights document, researchers were encouraged to submit their own nominations. These had to be framed in narrative format, with supporting facts and figures. Finally, I would also suggest including a post-mortem of one or two things that didn’t go quite as planned. These would (of course!) not go in the highlights document, but could well prove some of the most valuable learning opportunities.
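(For readers wondering what an “impact log” might contain: the sketch below is purely illustrative – the fields are my guess at a sensible minimum, not SAIIA’s actual template. The point is simply to record outcomes as they happen, with enough supporting detail to reconstruct the story later.)

```python
# impact_log.py - illustrative sketch of an impact-log entry; the field names
# are hypothetical, not taken from any real SAIIA template.
from dataclasses import dataclass, field
from datetime import date

@dataclass
class ImpactLogEntry:
    when: date                 # when the outcome was observed
    output: str                # the publication, event or briefing involved
    what_happened: str         # the outcome itself, in narrative form
    evidence: list[str] = field(default_factory=list)  # links, emails, citations
    follow_up: str = ""        # next steps, if any

# Example entry (invented for illustration):
entry = ImpactLogEntry(
    when=date(2014, 3, 12),
    output="Policy briefing on resource governance",
    what_happened="A parliamentary researcher requested the underlying data after the briefing.",
    evidence=["email from committee secretariat, 2014-03-13"],
)
```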

Alright, enough from me. I’d love to hear some of your stories.
