Open Peer Review

From P2P Foundation

Open Peer Review is a form of Peer Review in which readers have the right to consult the commentaries made by peers during the scientific validation process.


Open peer review consists of signed reviews that can be posted on the Internet. This transparency aims to resolve some of the drawbacks of anonymous reviewers in the normal peer review process.

Peer Commentary refers to the additional possibility for readers to add comments of their own.

Here's a comment by the Open Access expert Peter Suber:

"Open review and open peer review are new terms for evolving phenomena. They don't have precise or technical definitions. No matter how they're defined, there's a large area of overlap between them.

If there's ever a difference, some kinds of open review accept evaluative comments from any readers, even anonymous readers, while other kinds try to limit evaluative comments to those from "peers" with expertise or credentials in the relevant field. But neither kind of review has a special name, and I think each could fairly be called "open review" or "open peer review"." (email correspondence, June 23, 2007)


Stacy Konkiel, "Shades of Open Peer Review":

"In recent years, scientists have increasingly called for an Open alternative to traditional peer review. This has manifested in journals adopting Open Peer Review (OPR), researchers taking to their blogs to review already-published work, and the proliferation of Open and Post-publication Peer Review sites like Faculty of 1000, PubPeer, and Publons.

Each shade of OPR has its advantages and disadvantages. Let’s take a closer look.

Open Peer Review for journals

Here’s how Open Peer Reviews work, more or less: reviewers are assigned to a paper, and they know the author’s identity. They review the paper and sign their name. The reviews are then submitted to the editor and author (who now knows their reviewers’ identities, thanks to the signed reviews). When the paper is published, the signed reviews are published alongside it.

Journals including BMJ and PeerJ require or allow Open Peer Reviews.

Participating in journal-based OPR can be a good way to experiment with OPR as it’s officially sanctioned by the author, journal, and reviewer alike.

One drawback to this type of Open Peer Review is that journals sometimes do not provide permanent identifiers for the reviews themselves, making it difficult to track the reach and impact of your review, as opposed to that of the journal article you’ve reviewed. Luckily, PeerJ is working to change that: they’re now issuing DOIs for Open peer reviews, which comprise 40% of their reviews.

Third-party Open and Post-publication Peer Review sites

In the past few years, a number of standalone, independent peer review sites have emerged: PubPeer, Publons, and Faculty of 1000 are among the many. These sites allow you to review both published and under-review papers on their platform, and in the case of Publons, export your reviews to journals for use.

These sites also allow you to submit your reviews as Open Peer Reviews, and to create profiles showcasing your peer reviews. Some sites like Publons also issue DOIs for reviews, making them citable research objects.


Blogging as Open Post-publication Peer Review

In this type of Open Peer Review, academics take to their blogs to share their thoughts on a recently published paper or preprint. These reviews can run the gamut from highly-technical reviews oriented towards other scientists (a good example is this post on Rosie Redfield’s blog) to reviews written for a more general audience (like Mike Eisen’s post on the same study).

A major advantage to blogging your Open Peer Reviews is that you don’t have to have permission to do it; you can just fire up your blog and start reviewing. But a downside is that the review isn’t formally sanctioned by the journal, and so can carry less weight than formal reviews.

No matter what type of Open Peer Review you opt for, if it’s got your name attached to it and is available for all to read, you can use it to showcase your expertise in your area of research."



The Nature Experiment


The journal Nature undertook an experiment in open commentary in 2006, but concluded that it was not successful:

"There was a significant level of expressed interest in open peer review… A small majority of those authors who did participate received comments, but typically very few, despite significant web traffic. Most comments were not technically substantive. Feedback suggests that there is a marked reluctance among researchers to offer open comments."

Other examples, by Michael Nielsen:

"The Nature trial is just one of many attempts at comment sites for scientists. The earliest example I’m aware of is the Quick Reviews site, built in 1997, and discontinued in 1998. Physics Comments was built a few years later, and discontinued in 2006. A more recent site, Science Advisor, is still active, but has more members (1139) than reviews (1008). It seems that people want to read reviews of scientific papers, but not write them."


The blog-based open peer review of a book, four lessons:

Status Report

Trends in 2006, summarized by Peter Suber:

"Experiments combining Open Access with new forms of peer review were not new in 2006 but burst into scholarly consciousness almost as if they were new. The main cause was a series of well-publicized initiatives, from the open-review experiment at Nature to Biowizard, Philica, and PLoS ONE. It got help from the US Patent Office's venture into open patent review and Grigory (Grisha) Perelman's decision to disseminate his award-winning work on arXiv and dispense with publication in a peer-reviewed journal. Whenever I covered these stories in my blog or newsletter, I was afraid to give the impression that OA intrinsically favored one kind of peer review or that we had to wait for consensus on the best method of review before proceeding to implement OA. This misunderstanding did occur in 2006, as in the past, but much less often than another that I didn't expect. A surprising number of journalists, even science journalists, mistook open review for non-review. This is a fallacy even for those who think open review is a step backwards. It reminds me of the early days of the OA movement, when journalists and publishers couldn't hear a description of OA, no matter how clear and detailed, without leaping to the conclusion that the idea was to bypass peer review and violate copyright. The fallacious leap should decline over time, as open review becomes more familiar and debate turns to specific differences between open and conventional review and even different flavors of open review. But for now we're still stuck in the period when even small suggestions for reform trigger defensive panic."


John Moor:

"The term 'peer review' is often equated with 'gold standard'. Hence, the politically motivated, lazy or unscrupulous can use the peer-reviewed literature selectively, to make arguments that are seriously flawed, or even damaging to public policy.

This kind of fiasco might be avoided if the public had better access to the peer-reviewed literature, and if bona fide scientists were willing to give the public more assistance in interpreting it properly."

Why open review has been a failure so far

Michael Nielsen:

"The problem all these sites have is that while thoughtful commentary on scientific papers is certainly useful for other scientists, there are few incentives for people to write such comments. Why write a comment when you could be doing something more “useful”, like writing a paper or a grant? Furthermore, if you publicly criticize someone’s paper, there’s a chance that that person may be an anonymous referee in a position to scuttle your next paper or grant application.

To grasp the mindset here, you need to understand the monklike intensity that ambitious young scientists bring to the pursuit of scientific publications and grants. To get a position at a major University the most important thing is an impressive record of scientific papers. These papers will bring in the research grants and letters of recommendation necessary to be hired. Competition for positions is so fierce that 80 hour plus work weeks are common. The pace relaxes after tenure, but continued grant support still requires a strong work ethic. It’s no wonder people have little inclination to contribute to the online comment sites."

More Information

Debate on peer review in Nature.

On the relative failure of Nature's own Open Peer Review experiment.

Call for Open Peer Review



  • Quantitative measures of your reviews’ quality, such as Peerage of Science’s Peerage Essay Quality scores, Publons’ merit scores, and other quantitative indicators of peer-review quality.


Open Peer Review and Peer Commentary.

Critique of classic peer review.

Opinion piece on the merits of open peer review by João Pedro de Magalhães.