Clara Richards

Stocktaking on Research Quality at the Think Tank Initiative Exchange


Two weeks ago TTIX2015 took place in Istanbul, and some initial reactions have already emerged. Enrique Mendizabal went looking for “the elephants in the rooms”, Vanesa Weyrauch reflected on the exchange itself and what it means for innovation among think tanks and policymakers, and Richard Darlington wrote postcards, bringing an outsider's perspective on the TTIX. I want to do something a bit different.

When I found out that TTIX would focus on research quality, I was quite excited (maybe you have seen my previous posts on these issues here and at On Think Tanks). I wasn't worried that we were doing too much “research quality” talk; in fact, the frustrating part for me was that the conversations often got diluted into other topics, so it was hard work to summarize key takeaways on the main issue of the discussion. I took a couple of days after the conference to reflect, and here are mine:

Research Quality is taken for granted

The first session of the conference started with a debate on the definition of research quality. As others have already reported, this conversation probably led to more questions than answers, as we struggled as a group to define research quality and determine its indicators. Is it about publishing in journals? Is it about influencing policy? We had no clear answers, and this points to my main concern: we overlook research quality.

Many, many, many times I hear statements that begin with “Well, first you do good research… and then you communicate, influence, have impact…”, with the latter concepts getting much more attention than the quality of the research itself. Quality is taken for granted, but it shouldn't be. The perfect example is the discussion of poor research that has impacted policy. Do you really want to be the researcher who makes an impact with poor quality research? I am guessing not.

This was also exemplified by the fact that some speakers used the expression “research value chain”, depicting a linear process: setting questions, selecting methods, collecting data, analysing, communicating, influencing, impacting… But I do not think this is a linear process. Especially when research and analysis are planned to influence policy, one shouldn't wait until the end of a project to situate it in the context and the debate.

The insider's and outsider's perspectives on quality (and impact) are different

When discussing quality there are two approaches. As a researcher, research team or centre, one can ask: “What can I (we) do better to improve our research?” As a donor, policymaker or user, the question may be: “How do I know that what they do is good enough, trustworthy or credible?” These debates were intertwined throughout the conference, but I think they are slightly different questions, and the conference offered different takeaways for each. The key debate centred on whether policy impact should be a marker of research quality.

So, as a researcher trying to do better research, should you aim at influencing policy above all? I do not think so. In fact, doing so underestimates the complexity of the policy process and the different actors involved in it. Does this mean that researchers shouldn't take policy influence into account? Not that either. I think that as researchers trying to be part of the policy debate we must be very clear about this objective, but we shouldn't judge our own work on whether a policymaker takes up the recommendations at the end of the day. Influencing policymaking might take a long time; it is rare that we can attribute it to a single piece of work, and it may happen in many forms other than a linear uptake of a set of recommendations.

For me, striking this balance means building into the research process analytical tools to understand the context, the politics and the policy processes, so as to reinforce the interaction between the problem to be solved and the type of analysis, recommendations or implications that research can contribute. We need to develop concrete tactics to do this alongside the traditional research process, not just as an afterthought once the research is finished. Some researchers have this implicit knowledge and are very good at defining relevant questions, finding windows of opportunity, and ultimately producing research that is better fit for purpose. We could all benefit from codifying this knowledge and these tools. We have said “context is important” enough by now; now we need to figure out what to do about it in practice (if interested, please read these first ideas here).

I think this is what we can honestly do. The alternatives, such as telling policymakers what they want to hear, or modifying our results or findings, shouldn't be options.

From an outsider's perspective, is influencing policy a marker of quality? Not really. As we have said, impact is a complex business, and although ideally good research gets to influence policy, this is not always the case. Sometimes good research doesn't get the attention it deserves or its time hasn't come yet; maybe the audience is biased against it or its author. On the other hand, there are cases where bad research gets a lot of attention (think of spurious climate change research). I have no answer for these questions, except to note that assessing research quality is risky business for reviewers, donors and policymakers.

From the policymakers' panels we got a glimpse of their methods for assessing research (as imperfect as they can be). So if they are not listening to you, it is not always that they do not understand or do not care. It is not that they are dumb or cannot read. It might be that they do not consider your research worth their time.

Who is responsible for research quality?

This question sparked some attention in the debates, and there were takeaways at each level: while some alluded to the responsibility of the think tank and of the broader society, it was not forgotten that, at the end of the day, being a good researcher is also a personal decision. Here are some takeaways on these three levels of responsibility:

Individual – I like to stress this point to researchers all the time: it is our name on that paper at the end of the day! My main takeaway at the individual level was the intellectual integrity that some speakers alluded to. It is key to be honest about our own beliefs, our capacities, level of expertise, experience and objectives. This sounds easy, but in practice it might be difficult and costly. Furthermore, institutional demands might push researchers in other directions.

Institutional – It was emphasized constantly that institutional commitment is key. If a centre is committed to high quality, it may be able to put incentives or review mechanisms in place to promote higher quality. But these, of course, have a limit, as the wider research ecosystem of a country will have a strong impact on quality, on salaries for researchers, and on how competitive think tanks are as places to work. The panel on self-assessment based on the OCB Book and the panel on peer review systems shed some light on these institutional challenges.

Social – Finally, though, research quality is a wider issue. Is it possible to do high quality research in settings where research is not valued? Can there be ‘islands’ of excellence in a non-conducive environment? As much as my idealist self would hope for this possibility, in practice it seems that this is not the case. The interventions at the conference pointed to the limits of our efforts to sustain quality if there is not a functioning ecosystem where good research is praised, bad research is denounced, and support for producing good research is catalysed.

The life of think tanks is many-sided, juggling research, communications, influence and management. The conference, though, confirmed one of my concerns: we need just as much work on the research dimension as on the others. We cannot just assume that we are doing good quality research.

What was missing at the TTIX

As said before, I would have liked the discussion to be more focused and centred on research quality. To accomplish this, it would have been useful to have preliminary research and think pieces to guide the conversations. This could have helped keep us on track and allowed us to leave the conference with a much more concrete outcome. One such outcome could have been a more formal document to guide us forward, such as a ‘Declaration for Research Quality for Think Tanks’ that could move the think tank community to the next level in the debate. Since in a couple more years we will have a third (and maybe last) TTIX, perhaps each time we can learn to have more focused discussions and more concrete outputs.
