Why we can’t trust academic journals to tell the scientific truth

By Julian Kirchner.

The idea that the same experiment will always produce the same result, no matter who performs it, is one of the cornerstones of science’s claim to truth. However, more than 70% of researchers who took part in a recent study published in Nature have tried and failed to replicate another scientist’s experiment. Another study found that at least 50% of life science research cannot be replicated. The same holds for 51% of economics papers.

The findings of these studies resonate with the gut feeling of many in contemporary academia: that a lot of published research findings may be false. …

There are multiple reasons for the replication crisis in academia, from accidental statistical mistakes to sloppy peer review. However, many scholars agree that the main reason for the spread of fake news in scientific journals is the tremendous pressure within the academic system to publish in high-impact journals.

These high-impact journals demand novel and surprising results. Unsuccessful replications are generally considered dull, even though they make important contributions to scientific understanding. Indeed, 44% of scientists who carried out an unsuccessful replication were unable to publish it.

I have personal experience of this: my unsuccessful replication of a highly cited study has just been rejected by a high-impact journal. This is problematic for my career, since my contract as an assistant professor specifies exactly how many papers I need to publish each year and which kinds of journals to target. If I meet these performance indicators, my career advances. If I fail to meet them, my contract will be terminated 19 months from now.

This up-or-out policy encourages scientific misconduct. Fourteen per cent of scientists claim to know a scientist who has fabricated entire datasets, and 72 per cent say they know one who has indulged in other questionable research practices, such as dropping selected data points to sharpen their results.