Clara Richards

The Topic Guide on Politics and Ideas: Capacity building


Capacity building

In this section:


Approaches to developing research capacity & evidence literacy/use of evidence in policymaking

The literature on capacity building to strengthen research and policy linkages refers to a number of different types of capacity: for instance, research generation (e.g. universities, education systems, the institutional weakness of think tanks), research use (e.g. low demand for research amongst policymakers), and understanding of research (e.g. general public discourse, the level of policy debate). Capacity building interventions and the discussion surrounding them therefore address varying entry points in accordance with the identified problem, targeting academic, public sector and NGO actors, as well as designing interventions that act at the individual, organizational and public discursive levels.

Assessing capacity to generate and use evidence

There is wide agreement across the literature that research capacity (particularly in the social sciences) is weak (Krishna & Krishna, 2010), that this weakness hinders policymaking, and that more rigorous diagnostic tools need to be formulated before capacity-building initiatives are undertaken (Nath, 2011).

What affects capacity to undertake, use, and understand research? The literature points to a number of exogenous and endogenous factors affecting, for instance, the strength of the higher education sector and investments in particular areas of research. In their comparative report on the state of the social sciences in Asia, Krishna & Krishna (2010) argue that India's relative success in this regard is due to its vast technological and institutional infrastructure, which has facilitated the growth of research centres and university departments, as well as ensuring sufficient publication networks for researchers to both disseminate their work and access information. In contrast to science and technology, the social sciences are thought to have been overlooked in terms of research funding, to the detriment of public policy design and implementation (PASGR, 2010; British Academy & Association of Commonwealth Universities, 2009).

Broadbent (2012) points to cultural factors relating to the pursuit of knowledge in African contexts, including the existence of disincentives which discourage the use of ‘evidence’ for fear of seeming ‘un-African’, as well as the way in which the ill-defined discourse governing international development discussions effectively ‘deskills’ participants in developing countries by providing a set of concepts, ideas, and narratives that can often be used without critique. Further, according to a 2006 UNESCO report assessing research capacity in higher education, Africa has the lowest research capacity in the world in terms of personnel engaged in research. The report notes that Africa’s reliance on external funding from donors has become an impediment to building research capacity, recommending that additional investment be made by national governments and, potentially, the private sector (Sanyal & Varghese, 2006).

Given that research capacity is an ill-defined area which could refer to different skills amongst different actors, there are no blueprints for assessing research capacity across these different contexts. There are, however, examples of attempts to measure research capacity in particular areas. One approach is to assess higher education: as indicated, UNESCO focuses upon research personnel in higher education establishments as well as enrolment numbers (Sanyal & Varghese, 2006), and a similar UNESCO attempt to develop a framework for assessing higher education capacity is presented in Meek & van der Lee (2005) as a way of offering guidance in the development of research capacity benchmarks in Asia and the Pacific. Other approaches consider the capacity of policymakers to use and understand research, for instance by developing and testing an ‘evidence literacy’ diagnostic tool (Nath, 2011) or by using a questionnaire to ascertain how policymakers use research (Uneke et al, 2010).

Approaches to capacity building

Capacity building can take place at different levels, though there is an increasing emphasis on the need to adopt a systemic, organization-level approach (Buldioski, 2012). Pound and Adolph (2005) present a general framework for building capacity at the organizational level, including steps to monitor the external environment; review the organization’s strategy; identify capacity needs and plan for the development of this capacity; negotiate external support; implement the capacity development process; and monitor and evaluate the progress made in building capacity. In recent years there have been two key shifts in approach: i) from a focus on building the capacity of research producers to a focus on building the capacity of research users; and ii) from the targeting of individuals to a more holistic, organizational approach.

However, approaches to building research capacity in terms of generation, understanding and use will differ in accordance with the political context, the resources available, and the specific nature of an intervention’s objective, with approaches often based on what is thought to be an issue relating to the supply of research, the demand for it, or the sharing or brokering of knowledge. Entry points identified in the literature include:

  1. Stimulating ‘demand’ for research amongst policymakers (Newman et al, 2012) and improving ‘evidence literacy’ (Nath, 2011).

  2. Improving research quality through investments in higher education (British Academy & Association of Commonwealth Universities, 2009; Mendizabal, 2013; Broadbent, 2012), the provision of grants to researchers (Buldioski, 2012), as well as a number of other approaches including mentoring, learning alliances, write-shops, exchange programmes and scholarships, north-south twinning arrangements, and south-south partnerships (Pound & Adolph, 2005).

  3. Developing knowledge-sharing networks across a variety of research actors (Marjanovic et al, 2012; Jones et al, 2007).

A further set of skills to be developed is concerned with research uptake – how research is communicated, and with what impact.

Lessons learned in capacity building

Lesson-learning in this area is constrained by a lack of evidence with regard to ‘what works’ across the various entry points identified, with monitoring and evaluation systems being weak or undocumented (Jones et al, 2007; Newman et al, 2012). When they do exist, monitoring and evaluation processes tend to focus on short-term outputs and clearly visible changes rather than subtle, more nuanced improvements in capacity (Ortiz & Taylor, 2010). In an ODI assessment of research capacity strengthening in Africa, the authors concluded that improved monitoring and evaluation was essential if donors and national governments were to make a concerted – and harmonised – attempt to build research capacity in Africa and elsewhere. The report does, however, note a number of positive impacts: stronger North-South partnerships had been forged; research papers had been widely disseminated; enrolment rates in local MA and PhD programmes had increased; research administration and research management capacities had improved; and research quality and researcher skills had improved.

The shift to organization-level approaches reflects the realisation that capacity building initiatives often avoid tackling problems relating to weak or absent knowledge systems and institutional weakness, focusing instead on less effective workshops and the training of individuals (Jones et al, 2007; Broadbent, 2012; Buldioski, 2012). Further, capacity support can only be provided by donors if organizations are both willing to change and understand why their capacity needs to be improved (Buldioski, 2012).

RAPID’s reflections on its own capacity-building work also indicate the need to ensure that organizations receiving support are well targeted: often the group followed ‘demand’ and thus ended up working with organizations with very little research capacity or interest. In ‘donor-rich’ contexts where there is little competition for funds, organizations found the kinds of competencies and skills offered by RAPID less appealing than did those working in more competitive funding environments such as Latin America and Southeast Asia. Donors therefore need to be careful that demand for capacity building does not come only from themselves (Mendizabal et al, 2011).

Research use amongst donors

Concerns over the capacity of donor personnel to use and understand research are also present, with evidence from DFID and the World Bank indicating that the use of knowledge within donor organizations needs to be prioritised as much as that of their partners and stakeholders. In the case of DFID, Jones and Mendizabal (2010) recommend that human resourcing undergo a rethink due to the need to ‘raise the bar’ in terms of the capacity of advisors to understand and use research to inform their decisions; while a World Bank study on demand for research concludes that building ‘capacity’ to demand research entails creating stronger incentives for learning and more relevant and accessible research products (Ravallion, 2011).

Resources

Research capacity

Krishna, V., and Krishna, U. (2010) ‘Social Sciences in South Asia’. 2010 World Social Science Report – Knowledge Divides, Background Paper. Paris: UNESCO.

Addressing the social sciences, this background paper examines South Asia’s research capacity. While the post-war period has witnessed a moderate growth in the number of universities, specialized research institutions, private bodies, and governmental and non-governmental organizations conducting Social Science Research (SSR) in South Asia, the social sciences’ expansion has followed a different trajectory in each country in the region. The authors argue that these variations can be explained by a number of factors, including the size of the country, the historical context of the colonial and post-colonial eras shaping the emergence and development of these countries, the nature of their political regimes, and differences in their other socio-economic, religious and cultural factors. Currently, India dominates given its vast infrastructure and large pool of intellectual capital, while there are serious concerns that in Pakistan, Bangladesh and Sri Lanka social scientists only teach and do not undertake any research. The paper warns of an ‘intellectual and institutional crisis’ in South Asian social sciences, in contrast to the investments being made in science and technology.

Sanyal, B.C.,  and Varghese, N.V. (2006). ‘Research Capacity of the Higher Education Sector in Developing Countries’.  Paris: UNESCO.

Based on the available sources of information, this paper argues that the knowledge divide is deep and heavily tilted in favor of developed countries. Developing countries suffer from a lack of both financial and human resources in R&D and need to improve their capacity to produce knowledge domestically as well as to absorb knowledge produced elsewhere. In particular, there is a need to revive and strengthen the university system in developing countries in order to strengthen their research capacities. This change should be reflected in resource allocation to higher education and research, and in the provision of opportunities to expand graduate programs and improve female participation rates. Currently, there is a huge disparity in the number of researchers per country: Finland, with 7,992 researchers per million inhabitants, stands at one end of the spectrum, while Burkina Faso has 17 and the Republic of Congo 30. Overall the data shows that in terms of personnel engaged in research, Africa has the lowest research capacity and North America and Europe the highest. Further, data on enrolment in institutions of tertiary education indicates that regional enrolment rates vary widely: the world average participation in higher education was 24 per cent in 2004, with the highest rates in North America and Western Europe (70 per cent) and rates as low as 1 per cent in African countries such as Burkina Faso and Tanzania. Instead of relying on bilateral and multilateral aid to fund research, as countries such as Uganda do, investments must be made in higher education and research, the paper argues, with the private sector having a potentially pivotal role to play in this regard.

Broadbent, E. (2012) ‘The Politics of Research Uptake: Synthesis of Case Study Findings’. London: EBPDN/Mwananchi

In this paper the author argues that a more comprehensive attempt at assessing current capacity to undertake, understand, and use research in Africa is required if capacity is to be built. Based on the premise that policy debates provide a ‘window’ into the way a society thinks and speaks, the findings of this four-country study indicate that levels of capacity are often not thought to be low because of a lack of understanding of what it means to possess ‘capacity’ in the area of evidence and/or research. Further, the author argues that capacity building initiatives must focus upon the philosophy of science, critical thinking as a methodological approach, and research methodologies within higher-level curricula. Importantly, there is a wide gap between reported capacity to use evidence – particularly amongst policymakers – and actual capacity. Further, incentives not to build one’s capacity exist in Africa: the ‘instrumentalization’ of a lack of capacity can be extremely beneficial, particularly for those resisting reform, for it can stall a policy discussion. Cultural labels relating to what is ‘African’ and what is ‘white’ can also create an incentive to eschew research, evidence, and the written word.

Assessing capacity

Nath, C.  (2011). ‘Use of scientific and technological evidence within the Parliament of Uganda’. London: INASP/Parliamentary Office for Science and Technology.

In 2008 the UK Parliamentary Office of Science and Technology (POST) and the Parliament of Uganda began collaborating in order to strengthen the Parliament of Uganda’s handling of Science, Technology and Innovation (STI). Participants decided prior to the programme’s commencement that a baseline study was needed in order to gather information on how effectively the Parliament of Uganda currently handles STI. The study’s approach has forged a path into the study of ‘evidence literacy’ in capacity building programmes. Overall, the study found that MPs have low levels of scientific literacy, although the majority still consider themselves ‘well informed’. Further, MPs find it difficult to distinguish reliable scientific evidence from unreliable evidence, and knowledge sharing is weak. In terms of the Parliamentary Research Service, staff have limited access to information on science and technology and the Parliamentary Library stocks limited resources. Researchers also have poor links with the STI community and need to improve their information literacy skills. The study recommended that training be conducted for MPs on information literacy and scientific method; for clerks on effective report writing; and for research staff on information literacy, summarising skills and science communication.

Uneke, C.J., Ezeoha, A.E.,  Ndukwe, C.D.,  Oyibo, P.G., Onwe, F., Igbinedion, E.B., and Chukwu, P.N. (2010). ‘Operational Manual for Strengthening Institutional Capacity to Employ Evidence in Health Policymaking for Developing Countries: The Nigeria Experience’. Geneva: World Health Organization.

As part of the WHO’s Health Policy and Systems Research Project, which aimed to form a baseline on the use of health policy and systems evidence in terms of both the challenges associated with its use and current capacity issues, this paper presents an operational manual based on Nigeria’s experience of implementing the project. The project’s intervention areas include individual capacity strengthening, the strengthening of skills to develop research initiatives, and the building of an enabling environment for evidence use, including the use of incentives. In order to assess capacity levels the project adopted the following methodology: i) a structured, pre-tested questionnaire, including questions on the level of health policy and systems research (HPSR) the health ministry is engaged in; formal/official and informal/unofficial collaboration; the key activities/objectives of the health ministry in terms of HPSR and the use of evidence in policymaking; the ministry’s key strengths, weaknesses, opportunities, and threats in HPSR and evidence use; the level of staff awareness of HPSR and evidence use; and computer literacy and skills in the use of information technology pertaining to health policy and evidence use; ii) key informant interviews with a selected number of individuals in the target group, conducted face-to-face or by telephone using an interview guide; and iii) focus group discussions of 9-12 persons from the target group, conducted with a moderator or two co-moderators using a discussion guide centred on HPSR and the use of evidence in policymaking. The manual contains a number of templates to be used for capacity assessment in this area.

Lynn Meek, V. and van der Lee, J.J. (2005) ‘Performance Indicators for Assessing and Benchmarking Research Capacities in Universities’. Background Paper prepared for the Global University Network for Innovation – Asia and the Pacific. Bangkok: UNESCO.

This paper is a step towards producing a regional benchmarking policy for university research capacity building in Asia and the Pacific. The paper was commissioned specifically to: i) assess the quality of research programmes in a university (including the impact, sustainability, importance, and potential of research); and ii) enable a university to benchmark its research capacity against other universities in the country/region for the purpose of ranking/rating universities in terms of their research capacity/performance. The paper therefore offers general guidance in developing a benchmarking framework, though it does not provide a set of benchmarking indicators due to the varying contexts in which they will be applied.

Ortiz, A. and Taylor, P. (2010) ‘Learning Purposefully in Capacity Development: Why, What and When to Measure?’. Brighton: Institute of Development Studies.

Many capacity development processes aim at long-term sustainable change, which depends on seeing many smaller changes in what are often intangible fields (rules, incentives, behaviours, power, coordination etc.). Yet M&E processes of capacity development tend to focus on short-term outputs and clearly visible changes. This paper offers some ideas on how to deal with this paradox by delving into what capacity development elements M&E can and should be able to measure. This, the authors suggest, depends on whether capacity development is considered a means or an end. The paper explores predominant ways of thinking about M&E, and discusses whether these are able to grasp the unique nature of M&E of capacity development. M&E should be able to measure how capacity development contributes to wider development processes and to sustainable capacity, in addition to measuring the quality of the capacity development process itself. It may also be useful to gear capacity development more towards nurturing long-term, even unintended, outcomes. The authors further draw a number of lessons linked to current evaluation dilemmas: evaluating the fit between capacity development efforts and ongoing development processes; taking into consideration that capacity development is not always a linear process; and understanding that the difficulty of attributing changes to specific capacity development efforts is an important clue as to how capacity development works.

Capacity-building approaches

Buldioski, G. (2012) ‘Capacity building for think tanks’. Goran’s Musings blogpost, 26th November 2012.

In this blogpost the author sets out the basis for a comprehensive capacity building initiative for think tanks. Starting with the approach’s principles, it is argued that donors should only fund training for think tanks that demand it – anything else is ‘futile’. Further, mentoring, on-the-job training and learning, and peer-to-peer and expert exchanges should take precedence over one-off and short training activities, and these need to be done with a mid-to-long-term horizon. Capacity building efforts also must enjoy wider organizational buy-in, even though the focus may only be on one person. The author also suggests that donors should encourage a healthy competition for training places. Capacity-building efforts can be directed at three tiers: basic (junior researchers); intermediate (researchers and senior researchers); and advanced. Lastly, capacity-building efforts can take a number of forms, including: core or project grants; mentoring, coaching, peer-to-peer exchanges, collective learning grants; and training events/activity series.

Marjanovic, S., Hanlin, R., Diepeveen, S., and Chataway, J. (2012) ‘Research Capacity Building In Africa: Networks, Institutions And Local Ownership’.  Innogen Working Paper No. 106. London/Cambridge: ESRC/RAND Europe.

This Working Paper has been produced as part of the Wellcome Trust’s African Institutions Initiative, one aim of which is to strengthen health research capacity in Africa. The authors argue that the key priorities in this field are supporting networked models to enhance the impact and efficiency of investments in health research capacity-building in Africa, ensuring stronger local ownership of initiatives, and building sustainable research institutions. However, despite the importance of research capacity-building for improving health outcomes, the evidence base on ‘what works’ and what does not in research capacity-building in African contexts remains fragmented. A key problem identified is that the existing literature on research capacity-building tends to discuss policy-relevant issues at a relatively high level, with less insight into the nuances of implementing research capacity-building models and policy choices in everyday practice, or into potential solutions to capacity-building challenges. In this paper the authors address this gap through an analysis of how multi-partner networks are built and how their success depends on institutional-level capacity strengthening within partner institutions, with reference to the Initiative, which funds 7 interdisciplinary health research capacity-building consortia incorporating 51 institutions in 18 African countries, and 17 partners across Europe, the United States, Australia and Malaysia.

Jones, N., Bailey, M., and Lyytikainen, M. (2007) ‘Research Capacity Strengthening in Africa: Trends, Gaps and Opportunities’. A scoping study commissioned by DFID on behalf of IFORD. London: Overseas Development Institute.

This report provides an overview of donor support for research capacity building in Africa to inform the Department for International Development (DFID)’s thinking about the role it can play in the region, either as an individual institution or in partnership with other donors. The study consisted of a desktop/web review of grey and published literature, a systematic review of existing evaluation documents, and key informant interviews with donors, intermediary organizations and a number of African institutions that receive support. By reviewing what is meant by capacity building in the area of research, and identifying a number of comparative entry points to assess how capacity is supported by donors, the paper concludes that a number of positive impacts follow capacity building support, including the creation of networks and partnerships, greater dissemination of research, increased enrolment in MA and PhD programmes, and improved research administration and research management capacities. However, challenges include the limited demonstrable impact of research on policy, limited demand for research, a lack of quality assurance in research production, inadequate monitoring and evaluation mechanisms, and limited inroads into institutional strengthening.

Kellerman, R., Klipstein-Grobusch, K., Weiner, R., Wayling, S., and Fonn, S. (2012) ‘Investing in African research training institutions creates sustainable capacity for Africa: the case of the University of the Witwatersrand School of Public Health masters programme in epidemiology and biostatistics’. Health Research Policy and Systems, Vol. 10, No. 11.

This article presents the findings of a review of a 3-year investment by the Special Programme for Research and Training in Tropical Diseases (TDR) in research training at the School of Public Health, University of the Witwatersrand, South Africa. The review found that investing in African institutions to improve research training capacity resulted in the retention of graduates in Africa in research positions and thus greater research output. However, challenges remain where funding for student bursaries is not available.

British Academy & Association of Commonwealth Universities (2009) ‘The Nairobi Report: Frameworks for Africa-UK Research Collaboration in the Social Sciences and Humanities’. London: British Academy & Association of Commonwealth Universities.

This report is the culmination of a two-year process of reflection and discussion among African and UK scholars, initiated by the British Academy and the Association of Commonwealth Universities. It provides practical steps for strengthening humanities and social science research in Africa, aimed at African universities and governments; national and international funders; and UK universities, the UK government and UK agencies involved in higher education and international collaboration. The report argues that humanities and social sciences research is in urgent need of support across the continent, with universities and researchers facing many challenges as a result of declining funding despite a huge increase in enrolments. Infrastructure and facilities are insufficient and incomes have fallen, and many academics have been forced to sacrifice research in the face of impossible teaching and administrative commitments, and as they are compelled to find alternative sources of income. The report identifies three central areas for intervention in higher education: improving institutional foundations by focusing on structures, systems and governance; strengthening collaboration with and within Africa through communities and networks; and investing in individual early-career researchers in the humanities and social sciences, including training in proposal writing and project management.

Partnership for African Social and Governance Research (PASGR) (2010) ‘The Proposed Research Programme and Collaborative Graduate Programme’. Consultation Paper. Nairobi: PASGR.

The Partnership for African Social and Governance Research (PASGR) was initiated by the UK’s Department for International Development (DFID) to contribute to stronger evidence-based political and social research and analysis in Africa. The initiative consists of a Research Programme and a Collaborative Graduate Programme (CGP). The Research Programme features four modalities: Policy Research Grants to finance multi-year, multi-faceted research by organizations; Partnership and Network Research Grants, through which PASGR would work with other organizations on regional and international research initiatives; Commissioned Studies to allow PASGR to respond flexibly to subjects of special interest; and Special Research Awards to accommodate unsolicited proposals and small grants for such purposes as doctoral thesis or post-doctoral research, the preparation of case studies, and the application of skills obtained through a PASGR-supported training activity. The CGP involves PASGR establishing formal partnerships with a number of universities. These partners would benefit from a professional development facility primarily for teaching staff from CGP universities; master’s level “building block” courses that will eventually form the core of a future collaborative master’s programme and that can also be used by CGP universities to strengthen existing disciplinary programmes at the master’s level; and a collaborative doctoral-level programme that would build on the master’s programme. The PASGR programme is based on a survey of higher education in Africa undertaken by the Association of Commonwealth Universities. The findings of the survey indicated that master’s programmes tend to have a weak focus on research methods in the social sciences, with enrolments in ‘traditional’ social science disciplines in decline in favour of thematic subjects such as Conflict Studies. When research methods are taught, there is little opportunity to put this learning into practice. These limitations have important implications for graduates who go on to public sector jobs (especially involving policy development), who may not be equipped to understand research, to distinguish “good” research from “bad”, or to appreciate the benefits or limitations of applying various research methods to different kinds of policy problems.

Newman, K., Fisher, C., and Shaxson, L. (2012) ‘Stimulating demand for research evidence: what role for capacity-building?’ IDS Bulletin Vol. 43, Issue 5.

The authors argue that while supporting evidence-based policy has tended to focus on the supply side of research (for instance, helping researchers communicate and package their findings), it is just as important to focus on the demand side amongst users of research. Reframing the discussion in terms of ‘evidence-informed policy’, the authors describe this as policy that has considered a ‘broad range’ of research evidence, including citizen knowledge and the current realities of policy debates. Acknowledging that research-based evidence does not improve policy per se, it is argued that “where the will to develop policies which benefit society exists, better policies can be achieved when research is systematically considered as one factor in decision-making” (p. 18). Research demand encompasses both the capacity and the motivation to demand research. A number of approaches to building capacity to demand research are identified: using diagnostic tools, for instance to assess ‘evidence literacy’; training, though short-term fixes are not always sufficient; mentoring, for instance the mentoring scheme for parliamentary researchers in the Parliament of Uganda supported by the UK Parliamentary Office of Science and Technology (POST) as part of its POST Africa programme; linking schemes; organizational policies; and societal interventions, such as the Development Policy Research Month (DPRM) in the Philippines, which is designed to raise awareness. However, there is actually very little evidence on what type of intervention works best; future capacity-building attempts therefore need to be evaluated.

Mendizabal, E. (2013). ‘An alternative to the supply, demand and intermediary model: competencies for all’. Onthinktanks blogpost. January 13th 2013.

In this blog article the author argues that the separation between those who supply research, those who demand it, and those who act as ‘intermediaries’ is artificial and a limiting factor in the research-to-policy discussion. However, there is a creeping recognition (e.g. from AusAID) that these categories are in fact porous. Referring back to his previous articulation of ‘boundary workers’, the author emphasises the need for organizations across the research and policy constellation to possess a set of basic competencies that ensure they are: a) active and respected members of the various communities they seek to bring together; and b) able to add value to that interaction by undertaking research, analysis, and/or reflection, and/or the application of ideas into practical actions. In practice, this means, for instance, that policymakers must have policy analysis skills in order to assess the credibility of evidence. It also means that organizations across the board need to be able to think critically, and to evaluate and reflect upon their experience. The focus on the competencies of the ‘suppliers’ of research (policy research institutes, NGOs, think tanks) in programmes led by the ODI, DFID, and the Think Tank Initiative needs to be widened out across the ‘sector’.

Lessons learned

Mendizabal, E., Datta, A., and Young, J. (2011). ‘Developing capacities for better research uptake: the experience of ODI’s Research and Policy in Development programme.’ Background Paper. London: Overseas Development Institute.

The past decade has seen an increasing focus on the capacities of policymakers, researchers and donors to generate and use evidence. This paper provides an assessment of RAPID’s experiences in this area. Tracing the programme’s development and approach, the following themes are identified within its capacity building work: 1) Policy entrepreneurship; 2) Research communications; 3) Knowledge management and learning; 4) Outcome Mapping; 5) Monitoring and evaluation; 6) Network development and facilitation; 7) Organizational and project management. The lessons which emerge from RAPID’s reflections include:

  1. Contrary to prior assumption, research capacity itself is very limited in some contexts, and especially capacity to research the interface between research and policy.

  2. There is less interest in studying the research-to-policy interface than assumed, with CSOs tending to work in specific policy areas.

  3. Capacity-building work was not always well-targeted, leading to RAPID working with organizations with very little research capacity due to ODI’s consultancy-style funding model.

  4. Organizations operating in donor-rich contexts such as parts of sub-Saharan Africa (with little resulting competition over funds) have tended to find the competencies and skills offered by RAPID and ebpdn less appealing than those working in more competitive funding environments such as Latin America and Southeast Asia. Instead, demand for RAPID-type work in sub-Saharan Africa has tended to come from donors (like GDN or IDRC) or from NGOs wishing to develop their evidence-based advocacy capabilities.

  5. Longer-term capacity building activities, in which there is space to reflect on tools and in which sustainable communities of practice are built, are more beneficial than workshops.

Mizumoto, A. (2010). ‘Evaluation of “Strengthening ICTD Research Capacity in Asia” (SIRCA) Programme’. Commissioned by the Singapore Internet Research Centre (SiRC) and International Development Research Centre (IDRC).

This report presents the findings from an evaluation of the “Strengthening ICTD Research Capacity in Asia” (SIRCA) Programme, which was established with the support of the International Development Research Centre (IDRC) under the “Developing Evaluation Capacity in ICTD” (DECI) project. DECI aims to build evaluation capacity in ICTD among its research partners in Asia by providing technical assistance to researchers to enhance evaluation knowledge and skills. An external consultant evaluated the period from the Programme’s inception in March 2008 to July 2010. The SIRCA programme aimed to do the following: enhance research capacity in the region, demonstrated by the increased quality and reach of strong, methodologically rigorous, theoretically sound research findings; create space for discussions and knowledge sharing on ICTD social science research issues in Asia; create linkages among emerging ICTD researchers in Asia, and among established and emerging researchers through the mentorship program; and heighten awareness of ICTD research published by Asian-based researchers through dissemination of findings in international peer-reviewed publications and conferences. The evaluation found that the programme had led to a number of positive outcomes, including peer-reviewed publications and participation in international ICTD conferences. The report recommends that the programme take advantage of opportunities to strengthen itself by clarifying whether its mission is to fund emerging social scientists or well-established ICTD researchers, and by working towards refining and developing more monitoring indicators to chart progress.

Pound, B. and Adolph, B. (2005) ‘Developing the Capacity of Research Systems in Developing Countries: Lessons Learnt and Guidelines for Future Initiatives’. Study commissioned by DFID Central Research Department. London: DFID.

According to this commissioned report, capacity is defined as: ‘the ability of individuals, organizations and systems to perform and utilise research effectively, efficiently and sustainably in addressing local, national and regional priorities that will contribute to poverty reduction and the achievement of the Millennium Development Goals, and to continuously learn and adapt to new challenges’. In the context of research for development, research capacity therefore involves: the systems, facilities and resources to work with relevant stakeholders to identify and define relevant researchable problem areas; develop and maintain research partnerships and networks; plan and implement research tasks; participate in and utilise international research; evaluate, select and adapt research findings; and publish, disseminate and apply research findings. The document offers a wider understanding of capacity development than traditional approaches, which focus on the enhancement of knowledge and skills of individuals, by looking at the total internal organizational environment (systems, structures, incentives, values, facilities, infrastructure and resources), the national innovation environment (policies, laws, economic trends, resource base, markets and international relations) and linkages with local, national and international stakeholders. Experiences from health research suggest a four-phased approach to capacity development: awareness creation; planning and implementation; expansion, and consolidation. In terms of assessing capacity development programmes, the paper offers generic monitoring indicators which fall under four capacity development “outputs”: High research quality outputs relevant to developmental needs; Effective and efficient organizational systems; Implementation of a multi-stakeholder research approach; Creative, dynamic and sustainable research organizations. At the centre of this approach, then, stands the development of organizational capacity in a locally-embedded system.

Research use amongst donors

Ravallion, M. (2011) ‘Knowledgeable bankers? The demand for research in World Bank operations’. Background paper for 2011 Report on the World Bank Research Program. Washington D.C: World Bank.

This study evaluates demand for research within the World Bank, finding that staff working on poverty, human development and economic policy tend to value and use research more than staff in the traditional sectors of Bank lending, agriculture and rural development; the latter sectors account for 45 percent of lending but only 15 percent of staff highly familiar with Bank research. Building ‘capacity’ here entails creating stronger incentives for learning and more relevant and accessible research products, argues the author. However the study suggests, citing previous research, that the Bank’s internal research capacity may lead to less ‘demand’ for new research, thus stifling innovation. The study also indicates that familiarity with World Bank-generated research is lower in Country Offices than at Headquarters, suggesting that investments in communications and technology (e.g. the intranet) are not yet being realized at all levels.

Jones, H. and Mendizabal, E. (2010). ‘Strengthening learning from research and evaluation: going with the grain: Final Report’. London: Overseas Development Institute.

How does DFID currently learn lessons from commissioned evaluations and research findings, and how can this be improved? Based on the findings of this study – which finds that there is little use of intermediaries and more of a direct relationship between the ‘user’ and ‘producer’ of knowledge – the authors offer a number of recommendations on how DFID could improve its lesson-learning capacity. These include: establishing more formal and long-term relationships with key think tanks and research centres, in the UK and globally, to provide high quality short- and long-term research and evaluation-based lessons to DFID and DFID staff; strengthening the research and research uptake teams to act more as experts or matchmakers between researchers and policymakers rather than focusing their attention on synthesis, dissemination and the support of intermediary portals and projects; better targeting research funding and providing clearer signals to researchers (outside DFID) about the specific current and future information needs of the organization at the global, regional and national levels; focusing efforts and resources on improving the communication of research outputs and findings through mechanisms that promote and strengthen professional relationships between researchers and policymakers (e.g. regular seminars and events); and continuing to promote the use of funds available for quick policy research within policy teams. The paper also emphasizes the need for DFID to review its human resourcing approaches and attempt to ‘raise the bar’ in terms of the capacity of its advisors to understand and use evidence.


Developing policy influence capacity

As Section 2.3 demonstrates, policy influence has become an integral part of the research process. Yet even where the capacity to undertake, understand and use research exists, the translation and dissemination of this research requires a different set of skills. This is arguably why knowledge brokers and translators have found themselves at the forefront of research-to-policy discussions of late, and why researchers are encouraged to develop their communication as well as their research skills. Following from Section 5.1, this section asks how the capacity of those wanting to influence policy can be built.

According to Court et al (2006), lack of capacity is the main reason why civil society organizations do not influence policy in the way they would like, a symptom of which is a lack of ownership over the development process (Keijzer et al, 2011). However, this thinking suggests that influence is directly proportional to capacity; as Section 2.4 demonstrates, a number of other factors also shape the level of influence an organization has. While not sufficient, a degree of capacity is nevertheless regarded as necessary to influence policy.

Policy influence is not an easily defined outcome, and what ‘policy influence’ looks like will depend on an organization’s strategy (see Policy Influence and Monitoring and Evaluation). Yet formulating the kinds of strategies needed to influence policy is a real challenge for many organizations, with many lacking the capacity to undertake an appropriate analysis of their influencing context (Simpson et al, 2006). Further difficulties relating to a lack of awareness of the policy process, the ability to use evidence, and networking challenges are also identified in the literature. To address this, an ODI paper by Start & Hovland (2004) presents a number of tools to help organizations assess context, as well as plan how best to approach policy influence in accordance with their position and resources; Jones & Hearn (2009) discuss Outcome Mapping as a ‘realistic alternative’ for influencing strategies; and Weyrauch & Echt (2012) offer guidance on how to develop a policy influence plan.

In terms of guidance, the literature agrees that there is no blueprint or ‘one size fits all’ approach to building capacity for influence (Simpson et al, 2006); however, there is a trend away from an individual-based approach focused on leadership, management, and operation towards a more holistic ‘systems’ perspective. In a review of capacity building best practices, Court et al (2006) argue that capacity building initiatives work best when there is broad-based participation and a locally-driven agenda; when they build on local capacities; when they take a long-term approach with continuous learning and adaptation; and when they integrate activities at different levels to address often complex problems. Mendizabal and Zeuthen (2012) emphasise the need to target capacity-building activities at the right people and organizations, as well as to recognise the limitations of web-based – as opposed to face-to-face – approaches.

Interestingly, the literature on capacity for policy influence tends to assume that the use of evidence is an integral part of any capacity-building initiative and policy influence strategy. As discussed in Sections 2.1, 2.2, and 2.4, this is not necessarily the case: successful policy influence does not always involve the use of evidence (Chowdhury et al, 2006). As Young & Mendizabal (2009) argue, facts are not always “enough”, so policy arguments need to be presented in a way that coheres with existing ideas, beliefs, and values. Other approaches identified include networking with other organizations (Court et al, 2006).

Policy networks are also viewed as an effective way of both influencing policy and promoting the sharing of research (Selvood & Weyrauch, 2007); and as the discussions in Sections 2.4 and 2.2 demonstrated, while using ‘stories’ to present research findings has caused some concern regarding the simplification of complex findings, Weiss’ ‘Enlightenment’ model of research utilization suggests that this is not entirely at odds with evidence-based policy decisions. The literature also includes guidance on how to engage in research-based policy advocacy (Blagescu & Young, 2006), with other important insights coming from the more general literature on successful advocacy campaigns (Stalker & Sandberg, 2011; Save the Children, 2007), including the need to learn across advocacy coalitions and foster good relationships with other advocates.

Importantly, there is also an increasing awareness of the need for donors to improve their own influencing capacity when engaging in national policy processes in recipient countries (Jones, 2011; Maetz & Balié, 2008). These discussions emphasise the need for donors to draw on contextual knowledge and to use evidence when engaging in policy discussions with policymakers in developing countries.


Resources

How to influence

Start, D. and Hovland, I.  (2004) ‘Tools for Policy Impact: A Handbook for Researchers’. London: ODI.

How can researchers ensure that their findings have an impact upon policy? As civil society organizations increasingly recognise the need to influence policy and decision-making processes more effectively, the importance of basing policies on sound research and evidence is clear. This paper offers guidance to ‘evidence-based civil society organizations’ (or ‘think tanks’), which might include organizations more used to interest-group campaigning and advocacy who have a rich source of knowledge on an issue, as well as research institutes and universities. The authors present and discuss a number of tools for civil society organizations to influence policy, including: The Four Types of Policy Entrepreneurship; the Boston Box; Networking; Policy papers; A lobbyist’s hierarchy of needs; Getting to Yes; the 4 Ps of being influential; Engaging public participation; and Campaigning alliances.

Court, J., Mendizabal, E., Osborne, D., and Young, J. (2006) ‘Policy Engagement: How Civil Society can be More Effective’. London: Overseas Development Institute.

In order to influence policy in a way that benefits the poor, CSOs in developing countries need to engage in government policy processes more effectively. This paper argues that with increased democratisation and enhanced communication technology there is huge potential for partnerships between civil society organizations and policymakers, yet civil society organizations are not making the best use of their resources. They are often marginalised, their legitimacy questioned, and their evidence base challenged by researchers. The authors assert that civil society organizations must make better use of evidence and engage better in the policy process at its various stages. Currently, lack of capacity is the main reason why civil society organizations fail to influence policy in the way they want. Largely, this lack of capacity concerns contextual analysis, awareness of the policy process, the ability to use evidence, and networking challenges. It is argued that capacity building approaches need to adopt a systems perspective rather than focusing on improving the leadership, management, and operation of an organization. The authors argue that capacity building initiatives work best when there is broad-based participation and a locally-driven agenda; when they build on local capacities; when they take a long-term approach with continuous learning and adaptation; and when they integrate activities at different levels to address often complex problems.

Young, J., and Mendizabal, E. (2009) ‘Helping Researchers become Policy Entrepreneurs’. ODI Briefing paper 53. London: Overseas Development Institute.

In this influential briefing paper Young and Mendizabal suggest that ‘facts alone – no matter how authoritative – may not be enough’ when attempting to influence policy. Based on the Research and Policy in Development (RAPID) group’s work over the last ten years the authors describe policy processes as complex and rarely logical, with policy being only weakly informed by evidence due to the power of personal ideas, values, and beliefs. It is recommended that ‘policy entrepreneurs’ who wish to influence policy using evidence need to possess an in-depth understanding of the context in which they operate, by understanding political dynamics and key players. They also need to be good storytellers able to synthesise simple compelling stories from the results of the research; good networkers to work effectively with all the other stakeholders; and good engineers, building a programme that pulls all of this together. However turning a researcher into a policy entrepreneur involves a number of sometimes difficult steps: a fundamental re-orientation towards policy engagement rather than academic achievement; greater engagement with the policy community; developing a research agenda focusing on policy issues rather than academic interests; acquiring new skills or building multidisciplinary teams; establishing new internal systems and incentives; spending much more on communications; producing a different range of outputs; working more in partnerships and networks; and potentially working with a different funding model.

Jones, H., and Hearn, S. (2009) ‘Outcome Mapping: a realistic alternative for planning, monitoring and evaluation’. ODI Background Note. London: Overseas Development Institute.

Outcome Mapping (OM) is a planning, monitoring and evaluation tool which enables researchers and practitioners to measure influence in terms of changes in the behaviour, actions, and relationships of individuals. Being actor-centric, OM is an ideal tool for projects in which capacity building is an objective. OM also builds the capacity of those who use the tool, by allowing adequate space for political analysis and actor mapping. Yet employing OM itself requires a degree of capacity, and can be especially challenging when staff are familiar with other tools such as the Logical Framework Approach. It is therefore recommended that use of the tool be led by an experienced practitioner. OM is particularly helpful when tackling complex problems where a number of inter-connected issues are involved, ‘progress’ relies on the interactions of many different actors, and plans may require revision in accordance with real-time changes.

Save the Children (2007) ‘Advocacy Matters: Helping children change their world. An International Save the Children Alliance guide to advocacy’. London: Save the Children

This guide is for practitioners who are involved in advocacy. It can be used to help people running an advocacy workshop, or as a general advocacy resource. The training material consists of a mixture of practical exercises and theory so that participants learn about advocacy in a way that is relevant to their specific needs and context, and come out of the workshop with a draft advocacy strategy. Through the guide, participants will: gain a deeper understanding, and develop a working definition, of experience- and evidence-based advocacy as it applies to children’s needs and rights; understand the basic elements of advocacy, its role in Save the Children, and how it is integrated into programme work to achieve real and lasting results for children; learn a set of steps to plan for strategic advocacy and begin to develop an advocacy plan related to their work; strengthen personal relationships with fellow advocates, learning from each other’s experience and working towards building a community of advocacy practitioners; and develop a plan to share the workshop’s learning with colleagues, allies and constituents.

Simpson, A., Cass, S., and Tomlinson, B. (2006) Chapter 6: “Building Knowledge and Capacity for Policy Influence”, in Building Skills and Capacity for Policy Influence. Ottawa: Canadian Council for International Co-operation (CCIC).

In terms of capacity building, this chapter argues that there is no ‘one size fits all’ approach for policy organizations looking to build their capacity for policy influence, and the focus must therefore be upon assessing context. Reflecting on their experience in the Building Knowledge and Capacity project, the authors advise that organizations may also need to re-evaluate the importance placed on field experience when recruiting and hiring staff, since the competencies and mindsets required for good programming work “on the ground” are often the antithesis of those required for good policy work. Thus, in order to build policy capacity, it is critical to hire individuals who have good policy skills and understand how policy is developed in government circles, and to balance these qualifications with those required to meet the organization’s programme needs in the field. It is further recommended that organizations make a clear commitment of a percentage of their revenue to staffing for policy work. However, capacity for policy work is often built most effectively by learning by doing as part of a larger community of practice.

Weyrauch, V., and Echt, L.  (2012)  ‘How to design a policy influence plan?’ Buenos Aires: CIPPEC.

The policymaking process exists within a changing and highly volatile context: it is complex, and different players intervene, each with their own interests and motivations. Considering this complexity and the reigning chaos in the public policy arena (a product, among other things, of the numerous players seeking to influence it, the political environment itself, and unexpected events), the ten guides in the series ‘How to design a policy influence plan?’ address the various components of a public policy influence plan. These consist of the definition of a series of components which help specify and define: opportunities for the organization; objectives; actors and alliances; a concrete proposal; strategies; messages and channels for communication; necessary resources; and a system for M&E of policy influence. The series is also available in Spanish.

LSE Public Policy Group (2011) ‘Maximizing the impacts of your research: a handbook for social scientists’. London: London School of Economics.

This handbook seeks to open the door to researchers achieving a more professional and focused approach to their research from the outset. It provides a large menu of sound and evidence-based advice and guidance on how to ensure that their work achieves its maximum visibility and influence with both academic and external audiences. It provides information on what constitutes good practice in expanding the impact of social science research, and also surveys a wide range of new developments, new tools and new techniques that can help make sense of a rapidly changing field.

Mendizabal, E. and Zeuthen, M. (2012). ‘Developing research communications capacity: Lessons from recent experience.’ London: Mendizabal Limited/Integrity Research.

In 2011, both the Overseas Development Institute (ODI) and International Network for the Availability of Scientific Publications (INASP) designed and delivered a capacity development project to improve the research communications capacity of several African research grantees of the International Development Research Centre (IDRC). In this paper the initiatives are reviewed and key lessons identified. The authors present the following key messages to take away from the review:

  1. Limits of planning: In both cases, as well as in other cases consulted, the interventions did not go as planned. There was little that ODI or INASP, as the service providers, could control, and several grantees faced conflicting demands, lost interest, or were simply not able to take advantage of the services offered by either organization.

  2. Lack of interest from grantees: Despite expressing interest in being involved in the projects, several grantees did not engage with the learning aspect of the project and did not change their research communications strategy. In short, their participation was driven more by an interest in being part of such an initiative to satisfy donor demands than by the initiative itself.

  3. Researchers have other interests and pressures besides communications: Not only are most researchers often more interested in researching than in communicating, but the business models of their organizations often demand that they spend a significant amount of time seeking and delivering new projects. As a consequence, any activities that are not seen to directly support their core business are unlikely to be given the priority they need to be effective.

  4. Face-to-face is better than virtual, but the web is a good alternative: ODI’s original proposal had been to host the grantees for a few weeks to give them a chance to meet the team in charge of research communications and even participate in some activities. The idea was rejected and webinars were introduced as an alternative. Though the webinars worked well, INASP’s event worked more effectively, ensuring attendance and discouraging other distractions.

  5. If it is not done at the beginning, then it is probably too late: In all cases the researchers had finished or were about to finish their research. The project was therefore the final activity for the grantees, an ‘add-on’ with only months to go. Furthermore, while the support provided was intended to lead to a communications strategy, there were no additional funds to implement such a strategy, so there was little incentive to invest time in research communications work.

  6. The right people matter: Both ODI and INASP intended to develop the capacity of the networks or organizations involved and not just that of the individuals who participated in the capacity development projects. The ambition was for the people receiving the support to then go on and train or mentor other members of their networks or organizations. However, the participants were not always the ‘right’ people for this objective. While senior researchers, network coordinators, and even communicators may be excellent candidates to make use of any skills learned during the webinars or workshops, this does not necessarily make them the most appropriate ‘trainers of trainers’.

  7. Local or regional facilitators and mentors: INASP’s approach involved using regionally based facilitators and mentors, and this had a positive effect on the project. Conversely, ODI was able to connect with the grantees it was supporting only after visiting their offices, and concerns were raised about the consultants’ lack of familiarity with the grantees’ context.

  8. No one is starting from scratch: It is important to remember that all the grantees, to different degrees, have some research communications capacity; in some cases, their personal and professional networks ensure greater levels of impact than any formal research communication strategy could ever promise. Furthermore, many communication tactics and channels that are common in developed countries such as the United Kingdom, and with which ODI and INASP are more familiar, may not be appropriate for the grantees’ contexts.

Influence and donor relations

Jones, H. (2011) ‘Donor Engagement in Policy Dialogue: Navigating the Interface between Knowledge and Power’. Thinkpiece for AusAID. London: ODI.

This thinkpiece, produced for AusAID, is designed to guide donor organizations that are increasingly engaging in ‘policy dialogue’ in the countries where they work. The author aims to support donors in influencing policy successfully for maximum development impact. Recent evidence from DFID suggests that, for relatively low costs, large changes can be made when donor representatives engage directly with policymakers in developing countries, particularly when ‘sector-specific’ advisors are deployed. Thinking about how to approach policy dialogue for policy influence is therefore an important way of ensuring that aid funds are used effectively, as well as of working towards greater donor harmonisation. However, Jones is careful to point out that engaging in policy dialogue in this way confronts real issues of power, and donors must be aware that engaging in the policy process is rarely neutral. A thorough contextual understanding of the policy environment is therefore required from donor representatives.

Maetz, M., and Balié, J. (2008) ‘Influencing policy processes: Lessons from experience’. Rome: FAO.

This paper identifies lessons on the basis of the FAO’s experience of trying to affect policy change in developing countries. A key finding of this study is that influencing the policy process requires a focus not only on technical skills (e.g. economics, agriculture, forestry, trade, rural development, etc.), but also on “soft” skills such as sociology, political science, negotiation, facilitation, consensus-building and conflict resolution. Facilitation requires neutrality in cases where the conditions are favourable to change, and in less favourable conditions, advocacy and alliance-building may be needed. The lessons identified by the authors indicate that when donors try to influence national policy processes, stakeholders with the power to block progress must be identified and brought on board.

Keijzer, N., Spierings, E., and Heirman, J. (2011) ‘Research for development? The role of Southern research organizations in promoting democratic ownership: A literature review’. Maastricht: ECDPM.

Research organizations can promote ownership of the development process, argues this paper. The paper – drafted to support the OECD DAC Working Party on Aid Effectiveness – explores how researchers in the global South can be supported to build their capacity to influence the development agenda from a locally-owned perspective. Citing the importance of political economy analysis for understanding the context in which researchers work, the authors recommend that donors, in turn, make more effort to understand the ways in which information is understood and used in developing countries. The paper also highlights the need for capacity strengthening of Southern research organizations, and the need for donors to prioritise investments in this area.

How important is evidence?

Pollard, A., and Court, J.  (2005) ‘How civil society uses evidence to influence policy – A literature review’. ODI Working Paper 249. London: ODI.

This Working Paper is predicated upon the belief that the use of evidence is central to both influencing policy and ensuring policy is effective. Internally, the authors argue, the use of research helps civil society actors become ‘better organizations’. However, civil society organizations – which are ‘reservoirs of research’ – do not always make the best use of the evidence they possess. The paper is therefore oriented towards helping civil society organizations to better understand how evidence can be used to influence the policy process at different stages: agenda-setting; formulation; implementation; and monitoring and evaluation. The authors argue that evidence is influential because it enhances the legitimacy of a civil society organization in technical, legal, and moral terms; it enhances the effectiveness of projects and allows for the gathering of lessons learned which can be shared with others; it aids the integration of practical reality (for instance, in delivering services) with the rest of the policy process; it translates communal knowledge into legitimate evidence, thus ensuring that the policy process is participatory; it allows for greater access to policymakers and inclusion in policy debates; and it helps facilitate ongoing and interactive discussion between civil society and policymakers. On this basis, the paper emphasizes the need to support civil society organizations to better use evidence for influencing purposes.

Kornsweig, J., Osborne, D., Hovland, I., and Court, J. (2006) ‘CSOs, Policy Influence, and Evidence Use: A Short Survey’. London: Overseas Development Institute.

What support do CSOs in the developing world need to influence policy? This paper presents the findings of a survey of staff members from 130 civil society organizations across 33 countries. The research demonstrated that policy influencing was a high priority for all types of civil society organization, with Governance/Accountability and Rural Livelihoods/Agriculture being the most frequently cited areas of focus. However, the authors highlight that policy influencing concerns were usually directed at cross-sectoral issues. The majority of respondents reported that their principal way of influencing policy was to network with other organizations, while three of the four lowest-scoring responses – ‘work on projects commissioned by policymakers’, ‘newsletter to policymakers’, and ‘insider lobbying’ – were all activities most directly related to working with policymakers. Importantly, respondents favoured indirect activities. In terms of evidence, respondents preferred case studies as a means of influence, though nearly a third said that academic papers were considered highly effective. The ways in which civil society organizations could be supported to improve their policy impact included: more financial support; creation of space for civic engagement in policy discussions, public dialogue and dissemination bodies; cooperation of legislative bodies; monitoring and evaluation of policies and policymakers; and building capacity and training professionals in research and policy development, including lobbying and influencing skills and the creation of research units within civil society organizations.

Matondi, P.B., and Rukuni, M. (2010) ‘Rebuilding Capacity for Policy Analysis and ‘Pro-poor’ Policy Making in Africa’. Harare: Ruzivo Trust.

This paper deals with Africa-wide capacity for policy engagement on land issues and agricultural investment in the context of Africa’s infamous ‘land grabs’. The authors argue that African institutions are weak in speaking out on the issue due to under-funding and the lack of a research base with which to work. The paper suggests that more ‘action research’ is required in order to generate relevant and timely information while also implementing initiatives to practically improve the lives of poor people. The Ruzivo Trust endeavours to do the following in this area: trends analysis of existing practices; examining implications for governance; impact and outcome assessments; and analysis of agrarian change and structural reform. African research networks are encouraged as a means of sharing knowledge and building a critical mass of evidence for policy influence.

Chowdhury, N., Finlay-Notman, C., and Hovland, I. (2006) ‘CSO Capacity for Policy Engagement: Lessons Learned from the CSPP Consultations in Africa, Asia and Latin America’. Working Paper 272. London: Overseas Development Institute.

The findings from ODI’s consultations with partners in Africa, Asia and Latin America proceeded from the belief that there is low interest in research amongst policymakers. While in almost every country there are research institutions that are either wholly funded by government (e.g. the Bangladesh Institute of Development Studies) or partially funded by government (e.g. the Centre for Social and Economic Studies in Indonesia), in reality both seem to have only limited impact on advancing policymakers’ interest in research. In the African consultations, the problem of the ‘politics of participation’ was a key discussion point. Here it seemed that capacity was not the biggest problem: CSOs may be invited to join the agenda-setting ‘debate’, but only after the government has made a decision, meaning that civil society is sought for legitimation rather than for informing purposes. Moreover, policies are often short term and formed as a reaction to a crisis, such as the Malawian food crisis, which heralded new discussion on food policy. There is therefore little time or appetite for sustained research, and even where research may already have been conducted, policy is often formed irrespective of it. The implication is that reform of the entire political system is necessary – in other words, changing the process, not just the policy – and the ‘capacity’ question thus appears less relevant. Nevertheless, there are actionable steps civil society organizations can take to maximise their (evidence-based) influence on policy: establish credibility with policymakers, for instance by not appearing too aligned with donors in some contexts; have (and use) access to solid, appropriate research that produces accurate, usable evidence, as much to affirm their credibility with policymakers and help form good relationships as to actually use in the policymaking process itself; and document evidence taken from the grassroots. Yet the African consultations also raised the importance of oral communication, implying that technical reports (and building the capacity of civil society organizations to produce them) might not be the most effective way of influencing policy.

Policy advocacy

Blagescu, M., and Young, J.  (2006) ‘Capacity Development for Policy Advocacy: Current thinking and approaches among agencies supporting Civil Society Organizations’. ODI Working Paper 260. London: Overseas Development Institute.

This Working Paper is part of ODI’s Civil Society Partnerships Programme (CSPP), designed to help civil society organizations in developing countries engage in development policy using evidence, and presents a summary of current thinking on capacity building for Northern and Southern organizations involved in using research-based evidence in policy processes. To date, however, there has been a lack of systematic monitoring and evaluation of capacity building initiatives, meaning that approaches lack coherence. Based on ODI’s own experience of capacity building initiatives, the review draws out two lessons: (i) that a broader range of approaches is necessary to respond to the complexities of the current context; and (ii) that no approach can be imposed on sceptical individuals, organizations or communities. The paper subsequently examines the approaches taken by eleven different organizations, including the African Capacity Building Foundation (ACBF), the Japan International Cooperation Agency (JICA), and the INTRAC Praxis Programme.

Stalker, C., with Sandberg, D. (2011) ‘Capacity building for advocacy’. Praxis Paper 25. Oxford: INTRAC.

This guidance note suggests that before an intervention takes place it is important to identify the ‘under-capacity’ problems and best solutions to address them. However, assessing advocacy capacity building interventions is a complex business and dimensions of success and change are not always clear. According to the authors, these complexities again highlight the importance of a sound diagnosis and setting clear, plausible objectives against which to measure change. This paper shows that for a capacity building intervention to be effective, civil society organizations must be aware of the complex nature of advocacy and follow a strategic, diagnostic approach.

Networks

Selvood, I., and Weyrauch, V. (2007) ‘Weaving global networks. Handbook for policy influence’. Buenos Aires: Fundación CIPPEC.

Noting that global networks increasingly involve the use of research to influence policy, CIPPEC presents four case studies on global or regional networks that concretely illustrate their diverse challenges and how these networks have, or have not, been able to face them. The selected case studies are: GCAP (Global Call Against Poverty); IFRTD (International Forum for Rural Transport and Development); TILAC and the CICC (Inter-American Convention Against Corruption); and TKN (Trade Knowledge Network). It is suggested that networks function to support capacity across their members to share lessons and increase their impact upon the policy process. Networks offer civil society organizations a number of benefits: they provide shared mechanisms to facilitate the transfer of knowledge between members and to policymakers; they draw attention to new issues in the global agenda; they offer a mechanism to bridge diverging problem assessments and political constellations, and thus promote a unified stance on a policy issue; and they highlight issues of accountability within the policy process.

Case studies

Macedo de Jesus, A. (2010) ‘Policymaking and Interest Groups: How do Local Government Associations Influence Policy Outcome in Brazil and the Netherlands?’ Brazilian Political Science Review, Vol. 4, No. 1, pp. 69-101.

This study considers policy influence at the local level in both Brazil and the Netherlands in order to identify the factors that make policy influence more likely. The author concludes that groups representing ideas and messages supported by the wider populace are more successful than those that do not, and that financial resources are essential for a group to possess the capacity to influence at the local level. It is also important for a group wishing to influence local policy to be aware of formal and informal relationships between local government and the Executive and Parliament at national level, indicating the centrality of adequate mapping and analysis prior to influencing activities.

Kibua, T.N., and Oyugi, L.N. (2006) ‘Chapter Twelve: Influencing development policies through research: the Kenyan experience’. In Elias T. Ayuk and Mohamed Ali Marouani (eds). The Policy Paradox in Africa – Strengthening Links between Economic Research and Policymaking. Ottawa: IDRC.

This chapter examines the policy-making process in Kenya and the divergence between theory and practice in policy-making, offering a case study of how the Institute of Policy Analysis and Research (IPAR) has influenced policy. In order to influence policy, the authors highlight the importance of recruiting highly qualified and credible researchers, possessing adequate financial means, and building strong networks and relationships with stakeholders that do not compromise independence. Indeed, IPAR’s success in influencing policy hinges on stakeholders’ perceptions of its credibility. While there is no single index for measuring capacity for policy influence, it is possible to identify successful examples of policy influence. The authors offer two examples from IPAR: a paper produced by one of its research associates on the privatization of security in Kenya; and a series of studies on the public transport sector produced following demands from the government.
