Should it matter if Research findings are surprising/new?

Had an interesting exchange recently during a launch webinar for the new IDS report, Navigating Civic Space in a Time of Covid. The headline finding is:

‘The pandemic brought the suspension of many fundamental freedoms in the name of the public good, providing cover for a deepening of authoritarian tendencies but also spurring widespread civic activism on issues suddenly all the more important, ranging from emergency relief to economic impacts. Research partners in the Action for Empowerment and Accountability (A4EA)‘s Navigating Civic Space in a Time of Covid project have explored these dynamics through real-time research embedded in civil society in Mozambique, Nigeria, and Pakistan, grounded in a close review of global trends.’

I’d skimmed the report and listened to the presentations. The broad findings were: states used Covid to squeeze civic space even further; the pandemic aggravated/highlighted existing inequalities; civic organizations fought back; everything moved online.

As far as I could tell, there was nothing new or surprising about these findings, so I asked the annoying question in Q&A – what’s new here?

Rosie McGee argued in reply that the case studies (from Mozambique, Pakistan and Nigeria) constituted a ‘concrete illustration and provided granularity and corroboration of what we were seeing.’ She and others tried to identify some surprises, e.g. the identikit similarities in how different governments cracked down on civic action, the stigmatization of communities/individuals, or the emergence of new actors such as medics in Nigeria. But to be honest, I don’t think they really found any of those that surprising.

I just mentioned this blog to the IDS’ John Gaventa in an early morning email exchange on something else, and got this off the cuff response:

‘The question is always: New to whom? Surprising to whom? You see this stuff every day. Many people do not. And also, repetition can build awareness of patterns. Three independent studies, in separate contexts, found largely similar points.  This helps us see the larger picture, and that’s important.’

I’ve been chewing on that ever since. A few thoughts.

If a piece of research confirms that what we thought would happen actually happened, why is that a problem? As my colleague Irene Guijt pointed out, ‘Sometimes confirmation is the finding. It’s part of helping to shape the discourse, multiple people seeing and saying the same thing. Surprise can be overrated’. Research should be about knowledge, not necessarily novelty.

But there is no question that a lack of surprises/novelty makes it much harder to get anyone’s interest in your work: journalists, other academics, NGOs are all more likely to click through if there’s a hint of novelty or man-bites-dog rather than just confirmation of what they already think.

And my experience is that you can almost always find something to say: a new angle, something that, while not totally novel, is at least a bit striking.

If you don’t do that, is it because it’s not there or because you weren’t looking hard enough? If our assumption is that context specificity really matters, then if you see the same thing everywhere, does that say more about your priors than reality? That you see what you want/expect to see unless something new really bangs you on the head?

Or is it because of some other factor – maybe specialists in a given issue (such as civic space) are more content to confirm their ideas, rather than disrupt, whereas flighty generalists like me get bored easily and always need a bit of stimulus? There are downsides to both approaches, clearly.

And maybe Covid has had a chilling effect on this. In the past, I’ve found new ideas come from side conversations, often on the margins of work, or in the bar during a conference or on a trip. I often use that scene from The Wire where Bunk advises rookie detective Kima to keep ‘soft eyes’ that pick up unexpected clues in their lateral vision. And of course, all that has come to an end with Covid – we sit on zoom all day, in formal conversations about what is going on, so it is hardly surprising that accidental insights are in short supply. Maybe we should try and recreate some space that allows for that kind of conversation, but I’ve got a feeling it could very easily get very cringe-worthy!

Thoughts?

And here’s the Soft Eyes scene. God, I love The Wire.


Comments

7 Responses to “Should it matter if Research findings are surprising/new?”
  1. I don’t get it; so you are suggesting that we only embark upon research if it will give surprising answers? Or we don’t publish it if it’s not surprising?

    Quite a lot of recent climate research and publishing, for instance, has been to confirm the ‘bleedin obvious’ because of the malevolent influence of the Kochs and Murdochs of this world. Was that wasted time?

    What about the Covid lab-leak? Do we have to decide in advance what would be surprising and then figure out whether any proposed research would thrill us?

    But yes, it’s fun to have a go at research, here’s an example:
    https://www.cnsr.ictas.vt.edu/mentoring/Dictionary%20of%20Useful%20Research%20Phrases.htm

    I now look forward to your revised, surprising and more infrequent FP2P blog …

  2. Iain Smith

Thank you for the piece. John’s point about confirming trends is important. However, I think it also works ‘downwards’ as well. Confirming that a general principle holds true across different contexts (e.g. Mozambique, Pakistan and Nigeria) is really important for local actors and CSOs who are trying to make sense of complexity. Having this evidence can make all the difference for compelling advocacy or funding applications. I guess this is the joy and tragedy of research: we can never know who is going to need what information at what time. To me, the bigger crime is not doing more to make relevant findings available and accessible to those who need them. The divide between academia and practice is too broad, with too few platforms (like this) trying to close the gap.

  3. Kim

Research is not about surprise or not – it’s about truth. If something surprises you, it could just be because you’re not paying attention, you have biases, etc. If something doesn’t surprise you, then we are creating evidence for what was long suspected. Take for example the recent “surprising” (horrifically tragic) research that found an unmarked grave holding the remains of over 200 Indigenous children at a residential school in BC. People all over Canada and the world expressed shock. But it’s not surprising if you’ve been paying attention, because survivors have said this for decades: there are missing children who were whispered to have been murdered, but said to have run away, or simply never acknowledged. Quite frankly, nothing about the horrors of residential schools should be surprising or shocking anymore. We should expect every new finding. But we need to uncover those findings for the sake of human decency, reconciliation, etc., regardless of whether they are surprising.

    I also find it unfortunate that you only value research that is surprising because it sets up a research agenda (and funding regime) as problematic as the one we currently have – we only look for the research that is a silver bullet, rather than that which expands our knowledge.

And as a last point, if we don’t do research that confirms our expected results, then we land in a huge knowledge gap, one where anything goes. Everyone then gets to fly by on what they perceive is true, rather than what is. McGee’s research is a case in point. Indeed, that’s ripe for a Wild West of fake news. If people don’t want to hear what they suspect, then they are bad analysts. That’s not the fault of researchers and their outcomes. You don’t convert research agendas away from good research for the benefit of a weakening social framework.

  4. Cheyanne Scharbatke-Church

We often see this reaction in evaluation as well – when our evaluative conclusions reinforce what a switched-on program manager already knows, they express frustration/disappointment and sometimes a sense that the evaluation effort wasn’t worthy of the investment. While I understand this to a degree, as evaluations are often spurred because people want to learn something new, at the same time external confirmation that your rapid scans and intuition are correct does have value IF you choose to capitalize on it. In the context of evaluation, it also raises the question of how evaluation questions ended up in a TOR/SOW that internal actors already confidently knew the answer to, but that is the subject of another blog post.

  5. Dan

    Funny how the assessment of whether results are “surprising” is always made post hoc. You read a report, follow the logic, nothing leaps out as challenging, and so you say “oh yes, I could have predicted this”. But DID you predict it?
Did you, at the start of the research, take the time to map out your assumptions/existing knowledge & evidence and what you expected the findings to be? Were you able to take the research questions and, without doing the research, articulate a logical and convincing answer that matches with what the research has now found? Did you post the questions onto https://socialscienceprediction.org before the research and get predictions from others in the field, or did you yourself go onto there and provide predictions for research being done by others?
    Even if you weren’t involved in the research, as in the case in the article, you could still read the questions in the intro and then sketch out a few bullets on what you expect to find before reading the rest of the report. That would also help you to answer the question on “what is new for me?” without needing to ask time-filler questions in Q&As. Maybe next time a piece of research comes along in an area you know something about, you could publish a blog BEFORE you see the findings, setting out what you expect to have been found (and why), and then in a follow-up blog we can share in your surprise or lack thereof.
    Without that, a purely post hoc assessment of the “surprise factor” of a piece of research in an area you already know something about is pretty much doomed to become merely an assessment of how well-written and logical the report is. A good report will take you along a series of little steps, all of which you “already knew” (or which “make unarguable sense”, aka things you can’t possibly admit to not having already known), and what matters is less about whether any individual step is surprising than whether you as the reader had previously thought to put them together in that order.
