
Defining capacity, identifying challenges and other thoughts on ‘Lessons Learned’

[Editor’s note: This is the first of two posts by Antonio Capillo, who is M&E Officer, Evaluation, Learning and Communication at INASP. Antonio’s role includes designing and implementing rigorous mixed methodologies to monitor and evaluate INASP’s activities and overall strategy.]

What is capacity and what are the key challenges for capacity building organisations?

Capacity is a multidimensional concept, and one possible way to define it distinguishes three levels. First, capacity is the ability of individuals to fulfil their roles and to adapt their responses to changing conditions and contexts. At a higher level, the capacity of organisations – such as civil society organisations, universities, and even ministries and other political bodies – lies in the efficient allocation (and re-allocation) of human, financial and infrastructural resources in an ever-dynamic environment, where individuals move continuously, and their capacity moves with them. Finally, when individuals and organisations are capable of responding to continuous (and often unpredictable) changes, the whole system becomes capable of generating capacity at any level in a sustainable and durable way.

In order to build the capacity of individuals, organisations and systems, it is paramount for capacity building organisations (CBOs) to have in place effective Monitoring, Evaluation and Learning (MEL) that supports adaptation of the strategy through ongoing measurement and assessment of current approaches to capacity building (CB). The paper by Vanesa Weyrauch (2013), Lessons learned on promoting better links between research and policy in Latin America, leads the reader in this direction from its inspiring opening: “How often do we stop to reflect on how we are doing what we are doing?” The 20 lessons collected in this publication are both an intellectual contribution to the general debate on CB and a practical tool to inform best practice in the future, drawing on rigorous analysis and consultation with other experts in the sector – I am glad that INASP, the organisation I work for, is among them.

INASP’s core programme Strengthening Research and Knowledge Systems (SRKS) is designed to support research production and access through a multidimensional approach to CB for institutions and individuals from over 20 partner countries in Africa, Latin America and Asia. One core approach in our strategy is training, which can be in person (through workshops and mentoring), online, or embedded (a mix of in-person and online training). I would like to recall some of the lessons identified by Vanesa and her team (on capacity building objectives and selection processes), discuss how they may apply more generally, share my experience, and suggest new directions for discussion and future analysis, with a focus on MEL approaches to training.

On establishing CB objectives

I think that Vanesa touches on a crucial issue – a training workshop is not an event per se; it should be designed considering the heterogeneous and exogenous factors that influence the quality, effectiveness and long-term impact of CB designs. I agree with Horton (2002) that strategy planning linked to the definition of specific objectives – a blueprint approach – is not sufficient to ensure success in CB activities and should not be implemented just “for the sake” of monitoring and evaluation (M&E) processes. That is why M&E should support a process of continuous learning and consequent adaptation of our training strategy.

The way we approached the design of our training strategy at INASP for the next five years (2013-2017) was by discussing, reflecting on and assessing what we did in the past and how our experience could inform the design of new, effective training strategies for the future. This was achieved by defining from the outset the short-, medium- and long-term objectives of our training activities, by carrying out a risk assessment, and by conceiving mitigation strategies to reduce the expected impact of the identified risks should they come true. The process has benefitted, from its initial stage, from the input of the cross-cutting Evaluation, Learning and Communication team (my team at INASP), and we expect this will result in an effective system of MEL and in the adaptation of our strategy to respond to an ever-changing environment.

On the selection of participants

Selection of participants contributes – together with factors like the training delivery technique, the definition of objectives and the planning of follow-up actions for participants – to ensuring high-quality training. The selection process, when thoughtfully designed and implemented, can bring multiple benefits. For example, as stated by Julie Brittain and reported in the paper, participants will commit to the training objectives from the beginning if a competitive selection process is designed (Brittain in Weyrauch, 2013). Secondly, when trainers are directly involved in the selection process, they will have a clear picture of the profile and expectations of the participants and will be able to adapt the training design and delivery to the participants’ needs. Also, the organisations sending their employees to receive training can be sure that the programme will be designed and delivered to concretely benefit the participants.

In a reality of economic and time constraints, choices have to be made in order to allocate resources optimally and to ensure that training reaches its expected objectives. Effective MEL systems have a crucial role to play in informing these choices by answering key questions, such as:

  1. What are the benefits of using competitive processes to select participants, compared with defining an ideal participant profile and asking organisations to nominate candidates based on well-defined criteria?

  2. Is it better to select participants with different backgrounds, levels of understanding and perspectives, or instead to have a more homogeneous group?

  3. Do alternative selection processes fit differently according to organisational cultures, participants’ expectations or the type of training design?

  4. Would well-selected participants have reached the same objectives if they had not participated in the training activities (a question strictly related to impact evaluation design and counterfactual analysis)?

  5. Which is more important: a good trainer or well-selected participants?

I think that one of the main objectives (and challenges!) for the Evaluation, Learning and Communication team at INASP will be formulating the right questions to answer in relation to the selection process: how it influences the quality of training, and whether it results in effective training and positive externalities over the long term.

Reflections, challenges, and paradoxes

I have already mentioned how important it is, from the planning stage of any CB activity, to assess risk and design ex ante strategies to mitigate it. On the other hand, it is also important to find the right balance between the time spent planning and the space left for implementation. The challenge (and mission!) for MEL teams in CBOs is then to facilitate the planning stage and to establish MEL as an essential part of implementation. The successful outcome will be easy-to-use and useful M&E tools integrated into the implementation of the strategy.

There is general agreement on the importance of selection processes. There is less agreement, I think, on who the most suitable individual to benefit from CB is. For example, imagine we want to organise a training course on Evidence-Informed Policy Making (EIPM) for policy makers, with the main objective of incentivising demand for research evidence in the Italian Ministry of Agriculture (I have chosen Italy, my home country, which definitely needs more EIPM!). We have one place left in our workshop and we have to decide between a high-level public servant who interacts directly with the minister but has no experience at all of science and the scientific method, and a junior staff member in the same ministry who has a strong scientific background but has been relegated by one of the ministry’s offices to a canvassing role.

I think that our selection choice should depend significantly on our objectives. Do we want to change the ministry’s operational mode in the short term, or to have a long-term impact by changing the culture on EIPM entirely? Do we want to promote a capable individual, expecting this will set an example for the entire organisation in the future, or to change the organisational culture from the top? Answering these questions on the CB objectives will help us understand who we will select in the end, but it will still be important to monitor, evaluate and learn whether we made the right choice to reach our objectives, or even whether the objectives were not well defined from the beginning. I like expressing things in paradoxes, and I think the case described falls nicely into one: ‘The wrong individual may actually be the right one to participate in training activities.’

That was my first attempt to reflect on Lessons learned on promoting better links between research and policy in Latin America (Weyrauch, 2013), a very well written and useful paper which I think has the potential to become a fundamental starting point for general discussion among CBOs, and for sharing learning on what works and under what conditions, to avoid duplication of efforts and mistakes and to build the foundations for new, evidence-based approaches to CB. Thanks to Politics and Ideas for inviting me to be part of this constructive process. In a second post, I will focus my reflections on the part of the paper about evaluating the impact of CB and training activities and reporting for learning.
