On 4 October 2013, Science published a special issue on scientific communication containing the ‘open access sting’ article that went on to cause huge controversy worldwide. For the study, John Bohannon deliberately submitted a flawed paper to various open access journals. Of the 304 journals that received it, 157 accepted it. The study also revealed that a journal’s editors, publishers and bank accounts are often based on different continents. The findings attracted enormous attention worldwide, and the article remains one of the most downloaded from the Science website each month.
Within days of publication, the article was criticised in the mainstream media for targeting only open access journals. The study is titled ‘Who’s afraid of peer review?’, and many argued that it highlights weaknesses in the current peer review system more than anything else. Many of the journals in the study were drawn from ‘Beall’s list’, a catalogue of predatory open access publishers compiled by Jeffrey Beall, so sceptics claimed it was bound to produce the results it did. In the paper, Bohannon acknowledged that “open access is a good thing,” but felt it required further analysis to make it work. However, the study also included journals listed in the Directory of Open Access Journals (DOAJ), and 45% of those accepted the paper after their ‘peer review’ process. Lars Bjørnshauge, an advocate of the open access publishing model at the Scholarly Publishing and Academic Resources Coalition (SPARC), was astounded by the results, acknowledging that “the number of predatory publishers and predatory journals has continued to escalate at a rapid pace.”
The study clearly highlighted that action must be taken soon, and many publishers are now reviewing their peer review models. One option is post-publication peer review, in which articles are published first and critiqued afterwards, with the aim of gathering constructive criticism from as many different scientists as possible. However, this model raises many questions about how the comments, and the quality of the reviewers, would be monitored. Another problem is how long a flawed paper would remain on a website before feedback led to its removal, and whether that version of the paper would be citable. The papers removed first would probably be those addressing the most sensitive or controversial issues; but what about the less sensitive, yet equally incorrect, papers?
Another option is open peer review, where the reviewers’ names are included on the peer review reports, and all versions of the manuscript, together with all the reviewers’ comments and the authors’ responses, are made available online when the article is published. Some publishers, such as BioMed Central, already offer this service, called the pre-publication history. The double-blind peer review system is another option: here, the reviewers are unaware of the identity of the authors and vice versa. The idea is to ensure complete anonymity throughout the whole process, so that any bias is excluded.
Organisations such as PubPeer give readers the opportunity to comment on a paper anonymously, thereby inviting more objective comments. Rubriq is an organisation offering an independent peer review service. A recent study conducted by Rubriq found that each paper is handled by 2.3 reviewers on average and consumes approximately 11.5 hours of reviewers’ time, and that approximately 15 million hours of reviewers’ time is wasted every year. That is more time than the Human Genome Project took, which was spread across 200 labs over a period of 13 years. With 39% of all papers rejected after peer review, a great deal of time is clearly being wasted, especially when rejected papers are resubmitted to different journals and re-reviewed.
Some publishers now offer to pass reviewers’ comments on to other journals for a small charge, to stop duplication of work. This is essential for journals such as Genome Biology, which passes 40% of the scientifically sound content it rejects to its sister journals. Although Rubriq recognises that each journal has its own requirements, it believes that every journal evaluates research against the same core elements. If this duplication of work were avoided, a great deal of time could be recovered and invested back into scientific research. Many papers are also ripped apart on Twitter within days of publication, and as social media has become such a powerful tool in recent years, it too is being considered as a contributor to a reformed reviewing process.
It is clear that the current peer review process contains many flaws, some of which were demonstrated by John Bohannon’s sting article. Publishers and other organisations are testing new models and ideas; these must be analysed critically before any paradigm shift away from the current model is made in the coming years.
IMAGE: AJ Cann