Clara Richards

Nine challenges for think tank evaluation in Latin America

[Editor’s Note: This post was written by Natalia Aquilino, Director of the Influence, Monitoring and Evaluation Programme at CIPPEC, in collaboration with Leandro Echt.]

Consensus is growing among Latin American think tanks on the relevance of evaluation as a transparency and accountability strategy. Evaluation initiatives are being triggered both by programmes with a strong regional presence, such as the Think Tank Initiative, and by a legitimate bottom-up need from think tanks (TTs) themselves, as in the cases of CIPPEC and FUSADES.

But the road to solid evaluation parameters in our institutions is still winding, fuzzy and full of challenges.

We believe there are two main scenarios for TT accountability in our region. On the one hand, there are organizations that receive funds from just a few donors, where predictability is a key factor. On the other hand, there are TTs that run on a multi-donor scheme, which offers less funding sustainability.

In the first case, the accountability frameworks adopted may come from donors that have their own established mechanisms, indicators and methodologies for monitoring and evaluation (M&E). In the second case, most mechanisms are set at the project level, where reporting happens on a per-deliverable basis and the focus is on individual actions rather than on institutional influence. This is where an adequate M&E strategy starts to make sense: it can enhance control over the institutional agenda, the availability of information, organizational alignment and the capacity to communicate results.

Focusing on our own aggregated influence results rather than on unitary actions can help to better capture a think tank’s impact.

If we agree that evaluation provides short-, medium- and long-term institutional benefits, what are the main challenges to consolidating an evaluation culture in TTs?

1. Overcoming an adverse political and institutional context and leading the change. In many of our countries, a culture of evaluation has yet to take root. This can make us hesitate when undertaking such processes in our institutions.

2. Building internal consensus at management and staff levels. Evaluation takes time and needs resources allocated to its implementation. If staff members and teams are convinced of its relevance and value, the process is more likely to flow and be sustainable.

3. Addressing think tanks’ evaluability conditions. This involves analyzing whether the institution can be evaluated in a credible and reliable way, as well as looking critically at planning and implementation processes. It also entails examining whether the institutional programme design includes evaluation elements, whether it can deliver reliable information on results, whether it takes multiple actors’ views into account, and whether sufficient resources are devoted to examining whether the organization met its planned objectives.

4. Clarifying and agreeing on what is to be evaluated. Are we thinking about a particular project, the work of a specific area, general institutional impact, or the impact of results? Agreeing with Weyrauch and Mendizabal, we believe that working on multiple fronts may be excessive, but addressing a few strategic issues in a timely manner can help build confidence in the process and improve it when appropriate.

5. Establishing who will assess whom. Will evaluation be a function of senior management or a self-managed process led by members of the institution? Will it combine elements of both models? It is also worth asking whether to proceed internally with staff or to engage external experts who can provide an outsider’s view of the organization.

6. Pondering how to evaluate. That is, assigning responsibilities to managers and staff for the various roles within the process: who will design and plan the evaluation, who will collect the information, how the different actors will be involved, and who will make decisions based on the results.

7. Planning how to manage emerging recommendations. This is perhaps one of the most important and least visible elements of evaluation processes. It involves acting on the findings by creating specific plans for potential areas for improvement.

8. Providing effective funding for evaluation activities. Multiple options exist to solve this issue and give the initiative institutional sustainability. One option is to provide central resources; another is to develop a fund fed by a certain percentage of project resources. Unfortunately, we still do not see global interest in financing such activities through external sources.

9. Defining what to do with the information generated by the processes. Finally, once the organization has the information, think tanks must agree on what to do with it: targeting audiences, customizing communication messages, shaping formats and defining timeframes.

Linked to challenge 9, and as Vanesa Weyrauch has proposed, it is important to start thinking about how we introduce learning into the M&E process: how do we use the information that M&E systems generate for critical thinking and learning and, in consequence, for change?

Moreover, as Vanesa also highlights, it is important that an organization acknowledges from the very beginning what the main driver for developing an M&E system is, because “the real intentions and purpose behind its development will have a clear impact in how this system is received and used later on”. Making this explicit will help dispel staff fears and align the entire organization behind a common objective.

Overall, evaluation is a road worth traveling. Institutional assessment reflects back an image of ourselves that is essential to help us think about and understand how we influence, but also how we manage and align resources to deliver change more effectively and strategically.
