Why we can’t trust academic journals to tell the scientific truth

Academic journals don’t select the research they publish on scientific rigour alone. So why aren’t academics taking to the streets about this?

Hundreds of thousands of scientists took to the streets around the world in April. “We need science because science tells the truth. We are those who can fight the fake news,” a friend who participated in one of the March for Science rallies told me. I really wish this were true. Sadly, much evidence suggests otherwise.

The idea that the same experiment will always produce the same result, no matter who performs it, is one of the cornerstones of science’s claim to truth. However, more than 70% of the researchers (pdf) who took part in a recent study published in Nature have tried and failed to replicate another scientist’s experiment. Another study found that at least 50% of life science research cannot be replicated. The same holds for 51% of economics papers (pdf).

The findings of these studies resonate with the gut feeling of many in contemporary academia – that a lot of published research findings may be false. Just like any other information source, academic journals may contain fake news.

Some of those who participate in the March for Science movement idealise science. Yet science is in a major crisis. And we need to talk about this instead of claiming that scientists are armed with the truth.

Ninety-seven per cent of March for Science participants (pdf) want policymakers to consider scholarly evidence when drafting their policies. I, too, used to think that influencing policymakers was part of a modern academic’s job description. Now I am less confident in the validity of many research findings.

There are multiple reasons for the replication crisis in academia – from accidental statistical mistakes to sloppy peer review. However, many scholars agree (pdf) that the main reason for the spread of fake news in scientific journals is the tremendous pressure in the academic system to publish in high-impact journals.

These high-impact journals demand novel and surprising results. Unsuccessful replications are generally considered dull, even though they make important contributions to scientific understanding. Indeed, 44% of scientists (pdf) who have carried out an unsuccessful replication have been unable to publish it.

I have personal experience of this: my unsuccessful replication of a highly cited study has just been rejected by a high-impact journal. This is problematic for my career, since my contract as an assistant professor details exactly how many papers I need to publish per year and what kind of journals to target. If I meet these performance indicators, my career advances. If I fail to meet them, my contract will be terminated 19 months from now.

This up-or-out policy encourages scientific misconduct. Fourteen per cent of scientists (pdf) claim to know a scientist who has fabricated entire datasets, and 72% say they know one who has indulged in other questionable research practices such as dropping selected data points to sharpen their results.

The solution to this crisis is not to abandon performance indicators such as the number of papers published in high-impact journals. Universities are large and complex organisations, and they need such indicators to manage themselves. Yet an overreliance on performance indicators overlooks the fact that scientific discovery is the result not only of academic competence but also of pure chance.

Scientists usually prefer to attribute discoveries to their academic competence. One study estimated, though, that up to 50% of all scientific discoveries may be the result of chance. Consider that the pacemaker, safety glass, artificial sweetener and plastic were all discovered by chance.

We need a system of rewarding academics that acknowledges that good research doesn’t always produce the best discoveries. Those who fail to produce surprising results need to be able to publish the dull but worthwhile ones in high-impact journals. And academic careers also need to advance when someone excels in teaching but fails to hit a winning streak in research for a few years.

Slowly, this is being recognised within academia. For instance, the online platform Preclinical Reproducibility and Robustness specialises in the publication of replication studies in the life sciences. In the Netherlands, where I work, several universities also offer career tracks towards a professorship for academics who are outstanding lecturers. But many more of these initiatives are needed.

Scholars complain these days that trust in science is in decline. But there are good reasons for that decline. If we scientists take to the streets to claim we are armed with the truth, only further disillusionment will follow. When it comes to the future of science, we may need to do less marching and more talking among ourselves about how to improve our academic processes.