Retractions in scientific literature


This post reproduces a write-up from the Guardian newspaper of the UK. It was written by Adam Marcus, the managing editor of Gastroenterology and Endoscopy News, and Ivan Oransky, the global editorial director of MedPage Today. They are the co-founders of Retraction Watch.

News that Peter Chen, an engineering researcher in Taiwan, managed to game the peer review system and sneak at least 60 publications into print in a single journal is certain to raise serious questions about the integrity of the process by which scientific publishers vet papers. Those doubts only get stronger when you consider that this wasn’t the first time a scientist attempted such a scheme.

In 2012, a Korean plant chemist was caught cheating the peer review process and was forced to retract 28 articles. (He had already retracted seven others for different reasons, making a total of 35.) The publishing giant Elsevier retracted 11 papers the same year after what it called a “hack” of its editorial publishing system. The publisher Springer has also had at least two cases of retraction after it was discovered that the papers had been peer-reviewed by one of the authors.

That’s a total of more than 100 retractions for bogus peer review as a result of vulnerabilities in publishers’ editorial systems. To be fair, this represents only a tiny fraction of the roughly 1.4m articles published by science journals each year. But retractions for all reasons, from honest error to plagiarism to the outright faking of data, are on the rise.

The number of retractions in the first decade of the 21st century was 10 times larger than that at the end of the 20th. And that doesn’t include a few recent, extremely high-volume recidivists: Chen; the Dutch social psychologist Diederik Stapel, with 54 retracted papers; and Yoshitaka Fujii, whose 183 or so retractions make him the worst known offender.

What drives scientists to commit fraud? The common theme of many of these stories is that researchers felt great pressure to publish papers, and get them cited, because those are the currency of tenure and grants. It’s unclear, however, as a 2013 study in the journal PLOS Medicine noted, whether the growth in retractions “reflects an increase in publication of flawed articles or an increase in the rate at which flawed articles are withdrawn”.

Not all of these take-backs could have been prevented with better peer review or stricter scrutiny from editors. But some of them could, whether by plagiarism screening programs such as CrossCheck, or even Google, which in our experience proves pretty useful as a first-pass system for identifying misused text.
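To make the idea concrete, here is a minimal sketch of the kind of first-pass overlap check such screening performs: counting verbatim word n-grams (“shingles”) shared between a submission and an already published text. The function names, the shingle length and the scoring are our own illustrative choices, not how CrossCheck actually works; production tools match against vast corpora with far more sophisticated methods.

```python
# Illustrative first-pass text-overlap check (not CrossCheck's actual
# algorithm): count word n-grams ("shingles") that a submission shares
# verbatim with a previously published text.

def shingles(text: str, n: int = 8) -> set:
    """All overlapping n-word sequences in a text, lowercased."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, published: str, n: int = 8) -> float:
    """Fraction of the submission's shingles found verbatim in the
    published text; a high score flags the pair for human review."""
    sub, pub = shingles(submission, n), shingles(published, n)
    return len(sub & pub) / len(sub) if sub else 0.0

if __name__ == "__main__":
    a = "the results demonstrate a significant increase in protein expression under heat stress"
    b = "our data show that the results demonstrate a significant increase in protein expression"
    print(f"overlap: {overlap_score(a, b, n=4):.2f}")  # ~0.67, worth a human look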

The problem is knottier when it comes to finding evidence of falsified results or doctored figures and images. Although the human eye isn’t particularly good at catching dodgy images, emerging software can pick up signs of image manipulation such as reversal, rotation, duplication and other common tricks of the trade. These programs aren’t quite ready for widespread use, but the time is coming.
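As a hedged illustration of the simplest of those checks, the sketch below compares two figure panels under the eight flip-and-rotation variants and flags near-duplicates. The function names and the correlation threshold are illustrative assumptions on our part; real forensic software goes well beyond whole-panel reuse, into splicing, cloning and compression artefacts.

```python
# Toy duplicate-panel detector: flag a figure panel that matches another
# under any of the eight rotation/reflection variants. Names and the
# 0.98 threshold are illustrative, not from any production tool.

import numpy as np

def normalized_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Pearson correlation between two equally sized grayscale panels."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def duplicate_under_symmetry(a: np.ndarray, b: np.ndarray,
                             threshold: float = 0.98) -> bool:
    """True if panel b matches panel a after some rotation/reflection."""
    variants = [np.rot90(b, k) for k in range(4)]      # 4 rotations
    variants += [np.fliplr(v) for v in variants]       # plus their mirrors
    return any(v.shape == a.shape and
               normalized_correlation(a, v) >= threshold
               for v in variants)

if __name__ == "__main__":
    panel = np.random.rand(64, 64)
    reused = np.rot90(panel, 2)                        # simulated reused panel
    print(duplicate_under_symmetry(panel, reused))     # True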

So what’s to be done? No journal, either in print or online, has any excuse not to be using plagiarism detection tools on every manuscript it receives. The same should go for today’s fledgling image-detection systems once they mature.

Another encouraging development is the rise of post-publication peer review, which has been made possible in recent years by the availability of papers online. Contributors to PubPeer, for example, have found signs of flawed or falsified results, leading to papers being retracted.

Some critics of PubPeer – which allows anonymous posting – and related sites have argued that they are little better than nests of libel. But PubPeer is in fact carefully moderated, and the results are hard to argue with. In one case from 2013, an article in the prestigious Journal of Biological Chemistry was pulled after a commenter on PubPeer raised questions about the images in the paper. And last month, the authors of a paper in Current Biology retracted their article in the wake of a flood of comments on PubPeer, and a university committee ruling that there had been image manipulation.

The high-profile case of the two recently retracted stem cell papers from Nature, arguably the world’s most prestigious science journal, demonstrated what may become a common narrative – and also a common attitude among editors. Nature tried to argue that although post-publication peer review had indeed uncovered serious problems with the paper, those problems weren’t the ones that forced the retraction. The latter, they said, required a university investigation. On the surface, that’s true. But there would have been no investigation had post-publication peer reviewers not caught the errors that Nature’s peer reviewers had missed.

Post-publication peer review needn’t replace traditional peer review. Both are important. But if publishers don’t begin to acknowledge the limitations of peer review, and fully embrace post-publication peer review – which, after all, better reflects the constant self-correction for which science wants to be known – you may be reading far more headlines about analyses in PubPeer than you do about papers in Nature or Science.
