Methodological caution when presenting evidence
This post appeared originally at On Think Tanks. It comments on a post by Michael Bassey on the LSE British Politics and Policy blog about the UK Cabinet Office’s announcement of a new initiative that would build on existing evidence-based policy making to guide decision making on £200 billion of public spending.
Academics and scholars should be happy that a government is actively trying to include evidence in its policymaking process.
Nevertheless, all of this excitement comes with a side of methodological caution. Bassey refers to a paper he wrote in 2001 for the Oxford Review of Education, titled “A Solution to the Problem of Generalisation in Educational Research: fuzzy prediction”. There he argued that because social science research is social, it involves a multitude of variables, which makes generalisation impossible. What is possible is to invoke the principle of “fuzziness” and so develop the idea of fuzzy generalisation: the social scientist can then say “x in y circumstances may result in z”.
Because of this fuzzy generalisation, researchers working with policy makers have to tell them what may work instead of what will work. Bassey calls this way of informing the “best-estimate-of-trustworthiness”, or BET.
Others take this further: Andrew Pollard, assistant director at the Institute of Education, University of London, believes that policy should not be evidence-based but evidence-informed. Bassey adds a final note of caution: while fuzziness may apply to a research conclusion, it can never apply to the research methodology. This should always be made very clear to policy makers.
Bassey’s view on evidence-based policy is an interesting addition to existing critiques of the process of research uptake. For instance, Andries du Toit’s paper on the politics of research looks critically at the assumptions researchers make about the link between research and policy, and suggests that evidence is not always as well received as researchers like to think. More importantly, researchers’ zeal to have evidence used in policy making can be harmful: the idea that policy should be about what “works”, leaving everything else aside, can lead to the elimination of political debate. The concepts of fuzziness and BET can help guard against this risk.
Additionally, ideas are not always easy to convey: presenting evidence to an audience, whether politicians, policy makers or a broader public, might not always have the impact researchers expect. How one approaches and uses evidence can be influenced by quite unscientific factors, as Emma Broadbent’s paper on the political economy of research uptake in Africa points out: objectives, expectations, understandings, motivations and commitment regarding evidence also come into play.