By Clara Richards

The Topic Guide on Politics and Ideas: The monitoring and evaluation of research for influence and impact

The monitoring and evaluation of research influence and impact


In the last ten years there has been an increasing emphasis on ensuring that research has an influence or ‘impact’ on policy. Research impact and research influence have therefore become key concerns for researchers and policy researchers engaged in the translation of research findings. They are also major concerns for funders of research, who increasingly require a demonstration of value for money (VFM) in order to communicate research worth to taxpayers or contributors. To assess influence or impact, monitoring and evaluation mechanisms need to be in place to deliver real-time feedback on the uptake of research findings and, subsequently, on the effect this has had on the policy process.

The literature tends not to distinguish clearly between research influence and research impact, and the two terms are often used interchangeably. This presents a significant problem, for in theory the two describe different things: traditionally, impact refers to a one-off event, while influence denotes a less tangible, continuous process. Often, research impact is understood in terms of the influence it has upon, for instance, policy or the academic community. In practice, therefore, it is difficult to isolate influence from impact when discussing research and policy.

ODI’s seminal guide to monitoring and evaluating policy research projects as an overall process identifies five areas of consideration (Hovland, 2007):

  1. Strategy and direction: The basic plan followed in order to reach intended goals – was the plan for a piece of communications work the right one?

  2. Management: The systems and processes in place to ensure that the strategy can succeed – did the communications work go out on time and to the right people?

  3. Outputs: The tangible goods and services produced – is the work appropriate and of high quality?

  4. Uptake: Direct responses to the work – was the work shared or passed on to others?

  5. Outcomes and impacts: Use of communications to make a change to behavior, knowledge, policy or practice – did communications work contribute to this change and how?

The following discussion reviews the monitoring and evaluation of research influence and/or impact by looking at both research uptake, which is closely associated with assessing research communications and on which there is a burgeoning literature, and research outcomes and impacts.

Policy influence

The literature on research influence and/or impact is part of a wider discussion concerning policy influence and its measurement (see Section 2.3; 4.2). There is a level of debate over whether influence can be demonstrated and different influences compared, though there have been attempts to guide researchers, civil society organizations, and governments in how to plan, monitor, and evaluate influencing strategies (Jones, 2011; DFID, 2010; Weyrauch et al., 2007). In discussing both how policy influence and research influence and/or impact can be measured, the literature identifies a number of problems:

  1. Nature of policy: Policy change is a highly complex, non-linear process, the results of which are uncertain due to the multitude of forces and actors involved in the policy process at any given time. This means it is extremely hard to both plan a set of activities based on a likely chain of events, and very difficult to ‘trace’ influences when reflecting back on policy change.

  2. The attribution problem: Determining the links between policy influencing activities and outputs, and changes in policy (variously defined) is complicated. Often, attribution will only be partial and changes will be the result of a number of factors.

  3. Defining success: ‘Outright success’, in terms of achieving the specific changes that were sought, is rare, and some objectives are modified or abandoned along the way. Objectives formulated at the outset of influencing work may therefore not be the best yardstick against which to judge its progress if the logframe and baseline are constantly being re-worked.

  4. Integration: The monitoring and evaluation of influence requires monitoring and evaluation processes to be integrated within the life cycle of the project, resulting in additional work and the risk of complicating what is thought to be a ‘simple’ project.

  5. Time frame: Policy changes occur over long timeframes that may not sit easily within the ‘usual’ rhythms of projects and evaluations in aid agencies.

  6. Resourcing: Monitoring and evaluation is also time and resource intensive, and staff may lack the required know-how to plan and undertake such activities in conjunction with projects and programs, leading to objectives that are not readily monitored or evaluated.

Research uptake

Previous sections have drawn attention to the ‘communications turn’ in research as researchers and funders seek new and innovative ways of presenting and disseminating research findings in order to achieve maximum influence by ensuring there is an adequate ‘uptake’ of research. Though this will also depend on research quality, organizational capacity, and financial resourcing, research uptake is here presented in terms of research communications.

While practitioners, policymakers, and researchers are adamant that research communication is central to research reaching policymakers and having an influence on policy, there are considerable doubts over current levels of capacity to monitor and evaluate current research communication efforts, and therefore limited opportunities for learning how to improve communications strategies. In order to ‘scale up’ research impact, researchers need to be able to answer whether communicating their research is ‘making a difference’ (Perkins, 2006). The literature offers the following advice to researchers and research organizations engaged in research communications:

  1. Given growing donor concerns with how widely research is communicated (often as a proxy indicator of impact) it is worth ensuring that the communications ‘reach’ of research can be demonstrated through continuous, real-time monitoring against a plan;

  2. Space must be built within the project cycle to ensure that there is room for reflection and learning throughout the research communications process, with a degree of flexibility to allow for greater responsiveness (Butcher & Yaron, 2006);

  3. Technological tools offer a good opportunity to assess research communications in a way that considers the active engagement of the audience with the research being communicated, for instance the innovative use of Twitter (Scott, 2012; LSE, 2012);

  4. Other methods include Impact Logs, New areas for citation analysis, and User Surveys (Hovland, 2007);

  5. There must be clarity over which audiences research is being communicated to, in order to measure uptake against planned targets (Perkins, 2006);

  6. Awareness that communication is often reduced to vertical information delivery or public relations, rather than part of a process of meaningful engagement in development processes, and thus approaches to measuring uptake can be limited in terms of assessing against overall strategic objectives (Lennie & Tacchi, 2011);

  7. Further, sharing knowledge through research communications does not occur through the uploading and downloading of documents; what matters is how knowledge is used and whether its application leads to concrete development results (Clark & Cummings, 2011).

Evaluating research influence and/or impact

Following insights garnered from the literature on research communication, the literature on research influence and/or impact is clear that evaluating research must move beyond considering issues relating to ‘uptake’ (e.g. citations). This type of influence and/or impact is often referred to in terms of ‘academic impact’ when uptake is limited to academic circles, and distinguished from what is thought to be a more nuanced consideration of ‘external impacts’, i.e. how research influences non-academic actors (LSE, 2011a). However, this demarcation is complicated by the seeming dichotomy it creates between ‘academic research uptake’ and ‘non-academic policy influence’. In fact, research uptake may be high among policymakers, yet have little influence or impact on policy.

A number of perspectives and approaches on monitoring and evaluating research influence and/or impact can be identified in the literature, largely offering general guidance (related also to research communication) and possible tools. Key insights are as follows:

  1. There is no one “right” approach to advocacy evaluation: some options fit certain advocacy efforts better than others, and different evaluation users will make different choices (Coffman, 2009);

  2. The same challenges associated with research communications apply, including the complexity of social change; difficulties in integrating a monitoring and evaluation system into a project cycle; the need for longitudinal studies rather than after-project evaluations, which may not capture long-term influence or impacts (Hovland, 2007); and the importance of ‘moving’ logframes;

  3. The need to define what research influence and/or impact looks like: usually this would entail the identification of objectives, but also includes an awareness of different types of policy change (see Section 2.3);

  4. Helpful tools to measure influence and/or impact include Outcome Mapping; RAPID Outcome Assessment; Most Significant Change; Innovation Histories; and Episode Studies (Hovland, 2007Jones & Hearn, 2010);

  5. Case studies can also be an effective way of obtaining rich and complex information (Davies & Nutley, 2005); as are Participatory Rural Appraisals and Rural Appraisals, Developmental Evaluation, rights-based approach methodologies, contribution assessment, and Appreciative Inquiry (Lennie & Tacchi, 2011);

  6. Monitoring and evaluation systems need to become more participatory, flexible and holistic in order to provide space for different types of knowledge and ongoing organizational and stakeholder learning and reflection. This does, however, go against dominant trends towards demonstrating impact rather than demonstrating learning (Lennie & Tacchi, 2011).

Value for Money

Funders of research also have to consider research influence and/or impact and how this fits into their own strategic framework and objectives. Increasingly, funders seek a level of certainty that research they fund will be influential, leading to a perceived lack of innovation and funding of new partners, particularly in developing countries where the intellectual ‘pool’ is limited. Research funding is therefore hard to obtain, and would-be grantees are keen to propose research projects likely to gain funding, often at the expense of the kinds of research a country needs.

There is also an increased demand from funders for researchers to evaluate the influence and/or impact of their work in order to help public and private sector funding organizations, such as the IDRC, DFID, and the William and Flora Hewlett Foundation, both assess their own contribution to change and justify their expenditures (Lindquist, 2001; McGann, 2006; DFID, 2005). Interestingly, South Africa is thought to have ‘escaped’ the need to demonstrate value for money to donors due to its national government’s substantial investment in nationally-led research processes (Nakabugo, 2012).

The Value for Money agenda has received widespread criticism for monetizing and instrumentalizing an endeavor that has traditionally been viewed as above financial weighting. In practice, calculating the value of research has proved problematic as approaches have tended to confuse value with impact rather than quality. However, the attempt to ensure that grantees are more accountable to their funders, and funding bodies more accountable to their contributors, has also been welcomed as a necessary, democratic development (Antinoja et al., 2011).


Hovland, I. (2007). ‘The M&E of Policy Research’. Working Paper 281. London: ODI.

This paper aims to advance understanding of how to monitor and evaluate policy research. Conventional academic research is usually evaluated using two approaches: academic peer review, and the number of citations in peer-reviewed publications. For policy research programs, however, these evaluation tools have proven limited and do not capture the broader aims of policy research, such as policy impact, changes in behavior, or the building of relationships. Policy research programs need new monitoring and evaluation (M&E) approaches in order to know whether they are making a difference, not only in the academic world but also in the world outside academia. In this review of approaches to monitoring and evaluating research impact (with a particular focus on research communication), Hovland distinguishes five cumulative levels of consideration:

  1. Strategy and direction: The basic plan followed in order to reach intended goals – was the plan for a piece of communications work the right one?

  2. Management: The systems and processes in place to ensure that the strategy can succeed – did the communications work go out on time and to the right people?

  3. Outputs: The tangible goods and services produced – is the work appropriate and of high quality?

  4. Uptake: Direct responses to the work – was the work shared or passed on to others?

  5. Outcomes and impacts: Use of communications to make a change to behavior, knowledge, policy or practice – did communications work contribute to this change and how?

For each level a number of tools to help plan, monitor and evaluate each aspect of policy research are explained. In terms of uptake, the author suggests adopting the following approaches: Impact Logs; New areas for citation analysis; and User Surveys. In terms of outcome and impact the following tools are offered: Outcome Mapping; RAPID Outcome Assessment; Most Significant Change; Innovation Histories; and Episode Studies.

Policy influence

Jones, H. (2011) ‘A guide to monitoring and evaluating policy influence’. ODI Background Note. London: ODI.

Given that influencing policy is a central part of much international development work, including for donor agencies, there is a need to ensure that influencing work is properly monitored and evaluated in order to ensure continued lesson-learning and maximum effectiveness.

In the field of policy research this is a particular concern, and there is an increasing recognition that researchers need to engage with policy-makers and influence policy if their research is to be deemed of public worth. However, monitoring policy change is not straightforward – it is a highly complex process shaped by various interacting forces and actors. In terms of specific challenges, it is difficult to determine the links between policy influencing activities and outputs, and changes in policy (variously defined). This ‘attribution problem’ is well-known within monitoring and evaluation theory and practice. Further, the nature of policy influencing work presents additional challenges to more traditional M&E approaches, with ‘outright success’ in terms of achieving the specific changes that were sought being rare, and some objectives modified or jettisoned along the way. This means that objectives formulated at the outset of influencing work may not be the best yardstick against which to judge its progress. In addition, policy changes occur over long timeframes that may not be suited to measurement within the usual rhythms of projects and evaluations in aid agencies. Monitoring and evaluation is also time and resource intensive; and staff may lack the required know-how to plan and undertake such activities in conjunction with projects and programs, leading to objectives that are not readily monitored or evaluated. Jones highlights the importance of formulating a Theory of Change prior to an intervention in order to set a benchmark against which influencing activities can be monitored and their impact evaluated.

DFID (2010) ‘HTN on How to plan an influencing approach to multilateral organizations’. London: DFID.

This How to Note for DFID staff, developed using the RAPID Outcome Mapping Approach (ROMA), heralded a concerted effort by DFID not only to take its influencing activities seriously, but also to improve how they are monitored and evaluated through better planning. The step-by-step guide is briefly summarized as follows:

  1. Define the objective of your influencing

  2. Understand the policy context: policy spaces

  3. Identify who you want to influence

  4. Develop a theory of change

  5. Analyze the power and influence of key actors

  6. Map external relationships and team skills

  7. Develop an activity plan

  8. Apply monitoring and learning tools

In terms of Step 8, the most relevant to this discussion, the guidance note advises that the success of an influencing approach can be monitored in different ways, using the type of influencing activity being undertaken as the basis for what information to collect and review: evidence- and advice-based activities (trying to influence a multilateral through scientific evidence and advisory support); lobbying- and negotiation-based activities (trying to influence a multilateral through attending meetings and diplomatic methods); and public campaign- and advocacy-based activities. Another way is to use progress markers, which are particularly useful for lobbying- and negotiation-type influencing activities. For both types of monitoring, it is important to agree how often progress will be reviewed, and to consider whether monitoring will fit into existing structures – for example weekly team meetings – or whether it would be appropriate to organize regular review sessions to assess progress more thoroughly.

Weyrauch, V., D’Agostino, J., & Richards (2011). ‘Learners, practitioners, and teachers: handbook on monitoring, evaluating and managing knowledge for policy’. Buenos Aires: Fundación CIPPEC.

(In Spanish)

Based on reflections, work methodologies and practical tools, the aim of the handbook is to guide an organization from monitoring its practices to using knowledge to improve its performance. After identifying internal and external challenges and opportunities for Latin American (but also other developing-country) think tanks in monitoring and evaluating their policy influence, the authors address why and how a monitoring and evaluation (M&E) plan should be developed, and then offer a guide to creating a knowledge management (KM) system resulting from M&E practices. Throughout these steps, communication is considered a fundamental strategy to reach consensus with those who could affect or be affected by the process of developing these practices. Finally, the handbook shares the experience of some organizations trying to take this road, a road that implies important organizational changes. One of the main assumptions of the handbook is that M&E needs to be considered as an opportunity to promote and appreciate learning through the experience of the organization’s members.

Lardone, M. and Roggero, M. ‘Study on monitoring and evaluation of the research impact in the public policy of Policy Research Institutes (PRIs) in the region’. CIPPEC and GDNet.


The general aim of the study is to analyze the current state of policy research institutes’ capacities to monitor and evaluate (M&E) their actions of impact on public policy, and also to identify the M&E impact mechanisms currently available. The authors hope to identify those factors which facilitate or obstruct the capacity of PRIs to monitor and evaluate their influence on public policy. To do so, they pose a series of questions: How do PRIs monitor and evaluate the impact of the research they carry out on public policy? How do they measure the influence of the knowledge and evidence they produce in shaping and implementing public policy? Which relevant methodologies does this type of organization have at its disposal to follow up and evaluate impact on public policy? What are the current capacities of the PRIs involved in the field?

Coffman, J. (2009). ‘Overview of current evaluation practice’. Center for Evaluation Innovation.

There is no one “right” approach to advocacy evaluation. Some options fit certain advocacy efforts better than others, and different evaluation users will make different choices. This brief offers an overview of current practice in the rapidly growing field of advocacy evaluation. It highlights the kinds of approaches being used, offers specific examples of how they are being used and who is using them, and identifies the advantages and disadvantages of each approach. The brief addresses four key evaluation design questions and then offers common advocacy evaluation responses to those questions: 1) Who will do the evaluation? 2) What will the evaluation measure? 3) When will the evaluation take place? 4) What methodology will the evaluation use?

Coffman, J. (2009). ‘A user’s guide to advocacy evaluation planning’. Harvard Family Research Project. Cambridge, MA: Harvard University.

The guide was developed for advocates, evaluators, and funders who want guidance on how to evaluate advocacy and policy change efforts. This tool takes users through four basic steps that generate the core elements of an advocacy evaluation plan, including what will be measured and how. The tool helps users to: 1) identify how the evaluation will be used and who will use it to ensure the evaluation delivers the right kind of information when it is needed, 2) map the strategy being evaluated to illustrate how activities lead to policy-related outcomes, 3) prioritize the components that are most essential for the evaluation to make sure the evaluation is resource-efficient and manageable and 4) identify measures and methods that signal whether advocacy strategy elements have been successfully implemented or achieved.  Because most users want help determining which outcomes and methods are most relevant or appropriate in an advocacy and policy context, the tool includes a comprehensive list of outcomes, measures, and methods that users can choose from when developing their own evaluation plans.

Organizational Research Services (2007). ‘A guide to measuring advocacy and policy’. Seattle: Organizational Research Services.

The overall purpose of this guide is twofold. To help grantmakers think about and talk about measurement of advocacy and policy, this guide puts forth a framework for naming outcomes associated with advocacy and policy work as well as directions for evaluation design. The framework is intended to provide a common way to identify and talk about outcomes, providing philanthropic and non-profit audiences an opportunity to react to, refine and adopt the outcome categories presented. In addition, grantmakers can consider some key directions for evaluation design that include a broad range of methodologies, intensities, audiences, timeframes and purposes.

Riesman, J., Gienapp, A., and Stachowiak, S. (2007). ‘A handbook of data collection tools: companion to “A guide to measuring advocacy and policy”’. Seattle: Organizational Research Services

This handbook is dedicated to providing examples of practical tools and processes for collecting useful information from policy and advocacy efforts.  These examples are actual or modified tools used for evaluating existing campaigns or related efforts. Authors aimed to identify a wide range of data collection methods rather than rely primarily on traditional pre/post surveys and wide opinion polling. When possible, they included innovative applications of tools or methods to provide a broad range of options for grantees and funders. They primarily identified sample tools to measure the core outcome areas related to social change or policy change. For each outcome area, readers will find several data collection options as well as relevant methodological notes on ways to implement or adapt particular methods. In addition, the handbook offers examples of tools and methods related to other types of evaluation design.

Devlin-Foltz, D., Fagen, M.C., Reed, E., Medina, R., & Neiger, B.L. (2012) ‘Advocacy Evaluation: Challenges and Emerging Trends’. Health Promotion Practice, Vol. 13, No. 5, pp. 581-586.

Devising, promoting, and implementing changes in policies and regulations are important components of population-level health promotion. Whether advocating for changes in school meal nutrition standards or restrictions on secondhand smoke, policy change can create environments conducive to healthier choices. Such policy changes often result from complex advocacy efforts that do not lend themselves to traditional evaluation approaches. In a challenging fiscal environment, allocating scarce resources to policy advocacy may be particularly difficult. A well-designed evaluation that moves beyond inventorying advocacy activities can help make the case for funding advocacy and policy change efforts. Although it is one thing to catalog meetings held, position papers drafted, and pamphlets distributed, it is quite another to demonstrate that these outputs resulted in useful policy change outcomes. This is where the emerging field of advocacy evaluation fits in by assessing (among other things) strategic learning, capacity building, and community organizing. Based on recent developments, this article highlights several challenges advocacy evaluators are currently facing and provides new resources for addressing them.

Research uptake

Mendizabal, E. (2013). ‘Research Uptake: what is it and can it be measured?’. Onthinktanks blogpost. January 21st 2013.

What is meant by ‘research uptake’, and how is it measured? Reflecting on the author’s experiences and involvement in current discussions, the article argues that too much emphasis is being placed on ‘uptake’ rather than research quality. While making a number of important points, the author also describes research funders as concurrently not taking research impact seriously enough: the rate of return on aid-funded research is largely thought about after the research has taken place and the money has been spent. Such large investments, Mendizabal argues, would be subject to unparalleled levels of scrutiny and information-gathering prior to the release of funds in a private sector context. In addition, the article problematizes the concept of ‘uptake’, suggesting that: a) research uptake could be better described as ‘sidetake’ in many cases, where instead of flowing ‘up’ to policymakers research flows to other researchers; and b) the notion of ‘downtake’ is also applicable when describing research aimed not at policymakers but at the public. Here the author assumes that ‘uptake’ refers to the transferal and use of knowledge from researchers to policymakers. Uptake is opportunistic and often a matter of luck, and thus not the best way to assess the ‘rate of return’ on a research investment. Quality, the author argues, needs to be given pride of place in the evaluation of research influence and impact.

Butcher, C. and Yaron, G. (2006), ‘Scoping study: Monitoring and Evaluation of Research Communications’. Brighton: Institute of Development Studies

In this study key approaches and methods used in research communication are shared, highlighting the role of a variety of face-to-face and technology-based ways of gleaning information about what works with regard to evaluating programmes and then disseminating research – and why. One element concerns the centrality, within recent approaches to M&E, of the participatory development of indicators – either quantitative or qualitative. Also, based on the findings, the authors suggest that implementers of research communication projects or programmes collect better baseline data, carry out regular monitoring as well as evaluation, undertake more strenuous identification of audiences and pathways for the communication of research, and build space for reflection and learning throughout the project cycle. Finally, the authors emphasize the importance of establishing a relationship with those that the research is intended to benefit, focusing on the following elements: potential users are more likely to use the research if the research (and source of research) is trusted; research is more likely to be assimilated if it comes through routes that people are familiar with; and the influence of research findings is likely to be cumulative and needs to be built up over time.

Perkins, N. (2006), ‘Proving our worth: developing capacity for the monitoring and evaluation of communicating research in development’, Research Communication Monitoring and Evaluation Group Programme Summary Report.

This programme report presents the results of a workshop in which an informal network composed of representatives from a number of UK organizations concerned about the impact of research on the reality of poverty discussed how to explore and analyze the different models for monitoring and evaluating research communication. Supported by DFID, the workshop aimed to find ways to integrate monitoring and evaluation processes into the lifespan of a research project. The discussion was a result of the Central Research Department at DFID having prioritized communication as a cross-cutting theme within the institution’s research and policy divisions. ‘Scaling up’ the impact of research, however, is tempered by significant gaps in the capacity to develop, monitor and assess communication strategies for research. Herein lies a challenge for the research community – ‘to know if it’s making a difference’. Participants in the workshop emphasized five critical areas which need to be considered in the future: the collection of better baseline data; engaging in dialogues that inform strategies; regular collection of monitoring data, not just evaluations; a greater identification of audiences and pathways for the communication of research; and the need to build space for reflection and learning throughout the project cycle.

Lennie, J., and Tacchi, J. (2011). ‘United Nations Inter-agency Resource Pack on Research, Monitoring and Evaluation in Communication for Development’. New York: United Nations.

This report highlights a number of trends, challenges and approaches associated with researching, monitoring and evaluating Communication for Development within the UN context. Regarding the approaches, methodologies and methods, findings highlight the need for a more flexible approach in selecting and using R, M&E approaches, methodologies and methods; the value and importance of a participatory approach, the benefits of a mixed methods approach; and the importance of using approaches and methodologies that consider the wider context and structural issues.  The authors identify a number of challenges in the monitoring and evaluation of Communication for Development, including the concern that communication, as understood by decision-makers, is often reduced to vertical information delivery or public relations, rather than part of a process of meaningful engagement in development processes, despite communications being heralded as a major pillar of participatory development.

Clark, L. and Cummings, S. (2011). ‘Is it actually possible to measure knowledge sharing?’ Knowledge Management for Development Journal, Vol. 6, No. 3, pp. 238-247

This document describes a discussion that took place within the KM4Dev community, which covered a wide range of topics: knowledge sharing and behavioural change, complexity theory, subjectivity and possible indicators, and the nature and value of scientific exploration. One of the main conclusions of the discussion was that sharing knowledge does not occur through the uploading and downloading of documents; rather, what matters is how knowledge is used and whether its application leads to concrete development results. The article also goes through the dilemma of understanding knowledge as a product or output that we can count, versus the behaviour change resulting from the application of new knowledge (which is a process and therefore much harder to measure). Moreover, it argues that it is not just about knowing more but about using knowledge to do things differently, and that we not only need to think about the data we need to collect to prove our contribution to behaviour change, but also about what we plan to do with the data and how we aggregate indicators to turn data into compelling evidence. Finally, the authors argue that we need to accept that what works in one context or situation is not necessarily suitable or advisable in another, so we need to embrace both the complex and context-specific nature of our work and be prepared to adapt to our circumstances.

Scott, N. (2012) ‘A pragmatic guide to monitoring and evaluating research communications using digital tools’. Onthinktanks blogpost, January 6th 2012.

Measuring the success of an individual or organization is difficult, as there are a number of conceptual, technical and practical challenges to finding evidence. So how can the dissemination of research be measured? Nick Scott has created a monitoring and evaluation (M&E) dashboard to track how outputs fare (for the author’s organization, the Overseas Development Institute). The dashboard assesses success in reaching and influencing audiences and identifies which actors lead to that success. Lessons learned during the process are that organizations should only measure what they can measure, and should not let measuring get in the way of a communications strategy. For example, counting website visits might not be a reliable way to evaluate research impact.
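The dashboard idea can be illustrated with a minimal sketch that aggregates per-output digital metrics into a single ranked summary. The output names, metric fields and figures below are hypothetical illustrations, not ODI’s actual dashboard:

```python
# Minimal sketch of an M&E dashboard summary: combine per-output
# digital metrics into one 'reach' figure and rank outputs by it.
# All outputs, fields and numbers are hypothetical.

def summarise(outputs):
    """Return (title, total_reach) pairs, sorted by reach, highest first."""
    rows = []
    for o in outputs:
        reach = o["page_views"] + o["downloads"] + o["media_mentions"]
        rows.append((o["title"], reach))
    return sorted(rows, key=lambda r: r[1], reverse=True)

outputs = [
    {"title": "Working paper A", "page_views": 1200, "downloads": 340, "media_mentions": 3},
    {"title": "Policy brief B", "page_views": 2500, "downloads": 90, "media_mentions": 12},
]

for title, reach in summarise(outputs):
    print(f"{title}: {reach}")
```

As the entry cautions, a high figure here proves dissemination, not impact: the ranking says nothing about whether anyone acted on the research.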

Brown, A. (2012) ‘Proving dissemination is only one half of your impact story: Twitter provides proof of real-time engagement with the public’, Blog Impact of Social Sciences. London: London School of Economics

The article explains how to use Twitter to monitor responses in real time, in order to assess the effect on a public community. Twitter is extremely valuable as a way of assessing individual responses to broadcast media, and real-time monitoring seems to be the best way to harness such reactions.

DFID (2007). ‘Lessons Learnt in Research Communication: Monitoring and Evaluation and Capacity Development’. Report of a lesson-learning workshop. London: DFID.

This is a report of a research communications lesson sharing workshop, organised by DFID’s Central Research. General discussions around research communication raised two critical points: the need to develop a framework that integrates research, communication and development values, to allow a common purpose; and the need for a shared vision of purpose between researchers and those whose primary role is communication, when developing and implementing a communication strategy.

Brody, T. (2006). ‘Evaluating Research Impact through Open Access to Scholarly Communication’. Southampton: University of Southampton.

This thesis concludes that open access provides wider dissemination and distribution possibilities for research articles posted in open-access repositories, and maximizes impact because more people can read them sooner. Such articles are also cited more frequently, and downloads can be used as a new web metric that measures impact.
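The download-based metric the thesis points to rests on the relationship between early downloads and later citations. A small sketch of that comparison, using only a hand-rolled Pearson correlation and entirely hypothetical figures:

```python
# Sketch: correlate early download counts with later citation counts
# for a set of articles. All figures are hypothetical.
from statistics import mean

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    mx, my = mean(xs), mean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    vx = sum((x - mx) ** 2 for x in xs)
    vy = sum((y - my) ** 2 for y in ys)
    return cov / (vx * vy) ** 0.5

# Hypothetical per-article data: downloads in the first months,
# citations accrued some years later.
downloads = [120, 300, 80, 450, 200]
citations = [4, 11, 2, 15, 7]

print(f"download-citation correlation: {pearson(downloads, citations):.2f}")
```

A strong positive correlation on real data is what would justify treating downloads as an early proxy for citation impact.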

Research impact

London School of Economics (2011a) ‘A beginner’s guide to the different types of impact: why the traditional ‘bean-counting’ approach is no longer useful in the digital era’, Blog Impact of Social Sciences. London: London School of Economics.

How do we distinguish one type of impact from another? The article takes a closer look at the differences between academic impact and external impact, a step away from the traditional passive approach to making impact and towards a digital-era solution. Far from ‘maximal views’ of impact, which tend to see it as ‘the demonstrable contribution that excellent research makes to society and the economy’ (Research Councils UK), the article proposes a ‘minimal view’ of impact as recorded or otherwise auditable occasions of influence from academic research on another actor or organization. While impact is usually demonstrated by pointing to a record of the active consultation, consideration, citation, discussion, referencing or use of a piece of research, in the modern period this is most easily and widely captured as some form of ‘digital footprint’. The author also differentiates between ‘academic impact’ (when the influence is upon another researcher, university organization or actor) and ‘external impact’ (when an auditable influence is achieved upon a non-academic organization or actor outside the university sector itself).

London School of Economics (2011b) ‘Impact is a strong weapon for making an evidence-based case for enhanced research support but a state-of-the-art approach to measurement is needed’, Blog Impact of Social Sciences. London: London School of Economics.

What, precisely, is research ‘impact’? What is the best way to demonstrate it? And what does ‘impact’ entail for the social sciences? The article presents two approaches or frameworks for assessing research impact: the ‘Payback Framework’ and the ‘SIAMPI’ approach. The author argues that ‘impact’ need not be conceived of in purely economic terms: it can embody broader public value in the form of social, cultural, environmental and economic benefits. Metrics-only approaches are behind the times: robust ‘state of the art’ evaluations combine narratives with relevant qualitative and quantitative indicators. Rather than being the height of philistinism, ‘impact’ is a strong weapon for making an evidence-based case to governments for enhanced research support. The ‘state of the art’ is suited to the characteristics of all research fields on their own terms, and there should be no disincentive for conducting ‘basic’ or curiosity-driven research: assessment should be optional and rewarded from a sizeable tranche of money separate from quality-related funds.

Davies, H., Nutley, S., and Walter, I. (2005) ‘Approaches to assessing the non-academic impact of social science research’. Report of the ESRC symposium on assessing the non-academic impact of research, 12th/13th May 2005.

This report summarizes the findings of an ESRC symposium on assessing the non-academic impact of research. It lays out the reasons why we might want to examine the difference that research can make. It then explores different ways of approaching this problem, outlining the core issues and choices that arise when seeking to assess research impact. The report suggests that impact assessment may be undertaken for one or more of the following main purposes: accountability, value for money, learning, or auditing evidence-based policy and practice. Three key messages from the symposium were that: (1) approaches to assessing impact need to be purposeful, pragmatic and cognizant of the complexities involved; (2) comprehensive impact assessment of an entire portfolio of research is often impractical for timing and cost reasons; (3) impact assessment may need to focus on some key case studies and use an expert panel to provide an informed opinion about the overall impact of a research programme or funding agency. Finally, the authors conclude that no single model or approach to assessing non-academic research impact is likely to suffice. Instead, the appropriateness of the impact assessment approach will be a function of many factors including, inter alia: the purpose of the assessment; the nature of the research; the context of the setting; and the types of impact of key interest.

Beardon, H. and Newman, K. (2009). ‘How wide are the ripples?’ IKM Emergent Working Paper No. 7. Bonn: Information and Knowledge Management (IKM) Emergent Research Programme, European Association of Development Research and Training Institutes (EADI).

Based on different case studies, this paper explores how widely the information generated through participatory processes, especially at grassroots level, is recognized and used. The authors found a distinct lack of actual policies and procedures for strengthening and broadening the use of such information in international development organizations. Moreover, they found that some fundamental questions (What could this type of information be used for? Who should be using it, or paying attention to it? How could it be stored, packaged or disseminated in order to have more influence?) were in practice rarely being asked, let alone answered.

Donovan, C. and Hanney, S., 2011, ‘The ‘Payback Framework’ explained’, Research Evaluation, Vol. 20, No. 3, pp. 181-183

This article explains the Payback Framework, originally developed to examine the ‘impact’ or ‘payback’ of health services research. The Payback Framework is a research tool used to facilitate data collection and cross-case analysis by providing a common structure, ensuring that cognate information is recorded. It consists of a logic-model representation of the complete research process and a series of categories to classify the individual paybacks from research. Its multi-dimensional categorization of benefits starts with the more traditional academic benefits of knowledge production and research capacity building, and then extends to wider benefits to society.

Jones, H., and Hearn, S. (2009) ‘Outcome Mapping: A realistic alternative for planning, monitoring and evaluation’. ODI Background Note. London: ODI

Outcome Mapping (OM) is an approach to planning, monitoring, and evaluating social change initiatives developed by the International Development Research Centre (IDRC) in Canada. At a practical level, OM is a set of tools and guidelines that steer project or programme teams through an iterative process to identify their desired change and to work collaboratively to bring it about. Results are measured by the changes in behavior, actions and relationships of those individuals, groups or organizations with whom the initiative is working directly and seeking to influence. This paper reviews OM principles to guide donors considering support for projects using OM, and other decision-makers seeking methods to improve the effectiveness of aid policies and practice. It asks: 1. What makes OM unique and of value? 2. For which programs, projects, contexts and change processes is it most useful? 3. How can donors facilitate its use, and what are the potential barriers?

Pellini, A., Anderson, J. H., Thi Lan Tran, H., and Irvine, R. (2012) ‘Assessing the policy influence of research: A case study of governance research in Viet Nam’. ODI Background Note. London: ODI.

This Background Note describes a case study of one attempt to assess the impact of a knowledge product: the Vietnam Development Report 2010 – Modern Institutions (VDR 2010). The authors note that assessing the policy influence of research products requires familiarity with different definitions of policy change and different approaches to policy influence, as well as knowledge of the processes and tools used to assess research outputs, the process for producing them, and their policy outcomes and impact. Assessments could also include capacity development of the local research institute as part of the assessment project. The authors share some specific lessons: a policy influence assessment should be part of the plan for producing and communicating research outputs; citation analysis requires time and has to be seen as a complement to the qualitative analysis of uptake and influence; external assessments should complement internal self-assessment; it is difficult to develop stories of change from a single research output, especially after short periods of time; and a longer time frame is needed to assess the impact of such studies on institutional change. To conclude, the authors argue that the experience with the assessment of the VDR 2010 shows that policy influencing programs have to take a long view, one that sees policy influence assessments as part of the policy influencing process.

Assessing value for money in research

Lindquist, E.A. (2001) ‘Discerning Policy Influence: Framework for a Strategic Evaluation of IDRC-Supported Research’. Ottawa: IDRC.

This document was produced to help IDRC assess how the research it supports has influenced policy processes. In terms of how policy influence is understood, the paper proposes three key approaches: (i) expanding policy capacities: research can have influence by improving knowledge, supporting the development of innovative ideas, lines of thought and questions, and supporting actors to communicate ideas; (ii) broadening policy horizons: research can provide opportunities for networking and learning, frame debates with new concepts, stimulate public debate and quiet dialogue among policymakers, and help researchers adopt a broader understanding of issues; and (iii) affecting policy regimes: research can contribute to the modification or redesign of existing policies and programs.

McGann, J. (2006). ‘Best Practices for Funding and Evaluating Think Tanks’. Prepared for the William and Flora Hewlett Foundation. Ambler: McGann Associates.

The study documents and analyzes the existing pre-grant assessment criteria, methods of grant monitoring and evaluation, and effective funding mechanisms for think tanks in developing and transitional countries. It reveals that pre-grant assessment is most effective when three levels of evaluation are employed: (1) Country Assessment; (2) Policy and NGO Assessment; and (3) Institutional Assessment. Further, grant monitoring and evaluation is sufficiently comprehensive when two levels of assessment are utilized: (1) Output Evaluation (involving activities and dissemination, and quality and policy relevance); and (2) Impact Evaluation (involving organizational impact evaluation, and policy impact evaluation). Moreover, the report identifies recommended best practices in pre-grant assessment and grant monitoring and evaluation.

Antinoja, E., Eskiocak, O., Kjennerud, M., Rozenkopf, I., and Schatz, F. (2011). ‘Value for Money: Current Approaches and Evolving Debates’. London: London School of Economics.

Who are non-governmental organizations providing value to – donors, taxpayers, or beneficiaries? Who defines value? And who should define value? This report argues that while stakeholders’ definitions of Value for Money differ, a combination of economy, efficiency and effectiveness seems to be at its core, complemented by good business practices, option appraisal and participation. To a large extent, Value for Money is about the long-standing ambitions of improving existing systems, making optimal use of resources, and continuous capacity building and learning. It also notes that current discussions on Value for Money seem to focus strongly on increasing accountability to donors rather than beneficiaries, but from both an effectiveness and an ethical perspective, the participation of beneficiaries needs to play a role in both defining and measuring Value for Money. The report identifies a number of important dimensions and proposes a simplified framework for assessing potential techniques for measuring Value for Money. According to this framework, measurement techniques differ mainly in their ability to measure what matters, to measure comparably, and to measure contribution.
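The economy, efficiency and effectiveness core of Value for Money can be sketched as simple ratios. All figures below are hypothetical, and real VFM frameworks (including the one this report proposes) are considerably richer than three numbers:

```python
# Sketch of the 'three Es' at the core of Value for Money,
# expressed as simple ratios. All figures are hypothetical.

budgeted_input_cost = 100_000   # planned spend on inputs
actual_input_cost = 90_000      # what the inputs actually cost
outputs_produced = 30           # e.g. training sessions delivered
outcomes_achieved = 12          # e.g. communities adopting a practice
outcomes_targeted = 15

economy = budgeted_input_cost / actual_input_cost        # buying inputs cheaply (>1 is under budget)
efficiency = actual_input_cost / outputs_produced        # cost per output
effectiveness = outcomes_achieved / outcomes_targeted    # outcomes achieved vs targeted

print(f"economy ratio: {economy:.2f}")
print(f"cost per output: {efficiency:.0f}")
print(f"effectiveness: {effectiveness:.0%}")
```

Note that none of these ratios captures the report’s concern about whose value counts: a programme can score well on all three while remaining accountable only to donors.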

DFID (2005) ‘Rate of return of research: A literature review and critique’. London: DFID.

The study investigates what is known about rates of return to research and assesses key evidence that has been presented on agricultural and health research in particular. General findings suggest that: there is a robust positive relationship between spending on research and development and economic growth; the social return is significantly higher than the private return, suggesting that research and development will be under-funded if left to the market and highlighting the role of the public sector in this area; and though research and development predominantly occurs in advanced market economies, there are significant spillovers from developed countries to developing countries via international trade. Finally, the study shares recommendations for organizations, governments and donors involved in agriculture and health research.

Nakabugo, M. G., (2012) ‘Donors and ‘Value for Money’ Impositions: South Africa’s Exceptionalism in Research Development and International Cooperation in Higher Education’, in NORRAG NEWS, Value for Money in International Education: A New World of Results, Impacts and Outcomes, No.47, April 2012, pp. 102-103

This article argues that South Africa’s massive investment in research development puts it in a position to decide on its own research agenda and Value for Money measures without much external interference from donors. It appears to nurture this same spirit of ownership as an emerging donor in its recent international cooperation programs with other universities and institutions in Sub-Saharan Africa.
