Peer Review
In Peer Review, scientific articles are vetted by scientific colleagues.
Vetting in peer production, i.e. Communal Validation, is a different process, based on Anti-Credentialism.
See also our entry on the emerging trend of Open Peer Review.
Discussion
Peer review as a feudal knowledge exchange system
See the arguments here:
- Reinventing academic publishing online. Part I: Rigor, relevance and practice. by Brian Whitworth and Rob Friedman. First Monday, Volume 14, Number 8 - 3 August 2009 [1]
Michael Nielsen: Three Myths about Peer Review
Excerpted from http://michaelnielsen.org/blog/?p=531:
Myth number 1: Scientists have always used peer review
The myth that scientists adopted peer review broadly and early in the history of science is surprisingly widely believed, despite being false. It’s true that peer review has been used for a long time - a process recognizably similar to the modern system was in use as early as 1731, in the Royal Society of Edinburgh’s Medical Essays and Observations (ref). But in most scientific journals, peer review wasn’t routine until the middle of the twentieth century, a fact documented in historical papers by Burnham, Kronick, and Spier.
In the days before peer review became widespread, the common practice was for decisions about what to publish and what to reject to be made by journal editors, often acting largely on their own. These decisions were often made rapidly, with papers appearing days or weeks after submission, after a cursory review by the editor. Rejection rates at most journals were low, with only obviously inappropriate or unsound material being rejected; indeed, for some Society journals, Society members even asserted a “right” to publication, which occasionally caused friction with unhappy editors (ref).
What caused the change to the modern system of near-ubiquitous peer review? There were three main factors. The first was the increasing specialization of science (ref). As science became more specialized in the early 20th century, editors gradually found it harder to make informed decisions about what was worth publishing, even by the relatively relaxed standards common in many journals at the time.
The second factor in the move to peer review was the enormous increase in the number of scientific papers being published (ref). In the 1800s and early 1900s, journals often had too few submissions. Journal editors would actively round up submissions to make sure their journals remained active. The role of many editorial boards was to make sure enough papers were being submitted; if the journal came up short, members of the editorial board would be asked to submit papers themselves. As late as 1938, the editor-in-chief of the prestigious journal Science relied on personal solicitations for most articles (ref).
The twentieth century saw a massive increase in the number of scientists, a much easier process for writing papers, due to technologies such as typewriters, photocopiers, and computers, and a gradually increasing emphasis on publication in decisions about jobs, tenure, grants and prizes. These factors greatly increased the number of papers being written, and added pressure for filtering mechanisms, such as peer review.
The third factor in the move to peer review (ref) was the introduction of technologies for copying papers. It’s just plain editorially difficult to implement peer review if you can’t easily make copies of papers. The first step along this road was the introduction of typewriters and carbon paper in the 1890s, followed by the commercial introduction of photocopiers in 1959. Both technologies made peer review much easier to implement.
Nowadays, of course, the single biggest factor preserving the peer review system is probably social inertia: in most fields of science, a journal that’s not peer-reviewed isn’t regarded as serious, and so new journals invariably promote the fact that they are peer reviewed. But it wasn’t always that way.
Myth number 2: peer review is reliable
Every scientist has a story (or ten) about how they were poorly treated by peer review - the important paper that was unfairly rejected, or the silly editor who ignored their sage advice as a referee. Despite this, many strongly presume that the system works “pretty well”, overall.
There’s not much systematic evidence for that presumption. In 2002 Jefferson et al (ref) surveyed published studies of biomedical peer review. After an extensive search, they found just 19 studies which made some attempt to eliminate obvious confounding factors. Of those, just two addressed the impact of peer review on quality, and just one addressed the impact of peer review on validity; most of the rest of the studies were concerned with questions like the effect of double-blind reviewing. Furthermore, for the three studies that addressed quality and validity, Jefferson et al concluded that there were other problems with the studies which meant the results were of limited general interest; as they put it, “Editorial peer review, although widely used, is largely untested and its effects are uncertain”.
In short, at least in biomedicine, there’s not much we know for sure about the reliability of peer review. My searches of the literature suggest that we don’t know much more in other areas of science. If anything, biomedicine seems to be unusually well served, in large part because several biomedical journals (perhaps most notably the Journal of the American Medical Association) have over the last 20 years put a lot of effort into building a community of people studying the effects of peer review; Jefferson et al’s study is one of the outcomes from that effort.
In the absence of compelling systematic studies, is there anything we can say about the reliability of peer review?
The question of reliability should, in my opinion, really be broken up into three questions. First, does peer review help verify the validity of scientific studies; second, does peer review help us filter scientific studies, making the higher quality ones easier to find, because they get into the “best” journals, i.e., the ones with the most stringent peer review; third, to what extent does peer review suppress innovation?
As regards validity and quality, you don’t have to look far to find striking examples suggesting that peer review is at best partially reliable as a check of validity and a filter of quality.
What about the suppression of innovation? Every scientist knows of major discoveries that ran into trouble with peer review. David Horrobin has a remarkable paper (ref) where he documents some of the discoveries almost suppressed by peer review; as he points out, he can’t list the discoveries that were in fact suppressed by peer review, because we don’t know what those were. His list makes horrifying reading.
Here are just a few instances that I find striking, drawn in part from his list. Note that I’m restricting myself to suppression of papers by peer review; I believe peer review of grants and job applications probably has a much greater effect in suppressing innovation.
- George Zweig’s paper announcing the discovery of quarks, one of the fundamental building blocks of matter, was rejected by Physical Review Letters. It was eventually issued as a CERN report.
- Berson and Yalow’s work on radioimmunoassay, which led to a Nobel Prize, was rejected by both Science and the Journal of Clinical Investigation. It was eventually published in the Journal of Clinical Investigation.
- Krebs’ work on the citric acid cycle, which led to a Nobel Prize, was rejected by Nature. It was published in Experientia.
- Wiesner’s paper introducing quantum cryptography was initially rejected, finally appearing well over a decade after it was written.
To sum up: there is very little reliable evidence about the effect of peer review available from systematic studies; peer review is at best an imperfect filter for validity and quality; and peer review sometimes has a chilling effect, suppressing important scientific discoveries.
At this point I expect most readers will have concluded that I don’t much like the current peer review system. Actually, that’s not true, a point that will become evident in my post about the future of peer review. There’s a great deal that’s good about the current peer review system, and that’s worth preserving. However, I do believe that many people, both scientists and non-scientists, have a falsely exalted view of how well the current peer review system functions. What I’m trying to do in this post is to establish a more realistic view, and that means understanding some of the faults of the current system.
Myth number 3: Peer review is the way we determine what’s right and wrong in science
By now, it should be clear that the peer review system must play only a partial role in determining what scientists think of as established science. There’s no sign, for example, that the lack of peer review in the 19th and early 20th century meant that scientists then were more confused than now about what results should be regarded as well established, and what should not. Nor does it appear that the unreliability of the peer review process leaves us in any great danger of collectively coming to believe, over the long run, things that are false.
In practice, of course, nearly all scientists understand that peer review is only part of a much more complex process by which we evaluate and refine scientific knowledge, gradually coming to (provisionally) accept some findings as well established, and discarding the rest. So, in that sense, this third myth isn’t one that’s widely believed within the scientific community. But in many scientists’ shorthand accounts of how science progresses, peer review is given a falsely exaggerated role, and this is reflected in the understanding many people in the general public have of how science works. Many times I’ve had non-scientists mention to me that a paper has been “peer-reviewed!”, as though that somehow establishes that it is correct, or high quality. I’ve encountered this, for example, in some very good journalists, and it’s a concern, for peer review is only a small part of a much more complex and much more reliable system by which we determine what scientific discoveries are worth taking further, and what should be discarded." (http://michaelnielsen.org/blog/?p=531)
Difference between Communal Validation and peer review
Peer production is based on equipotential participation (see Equipotentiality), i.e. the a priori self-selection of participants, and on the communal vetting of the quality of their work in the process of production itself. Peer review is based on credentialism; peer production vetting is based on Anti-Credentialism. Peer review is part of an elaborate process of institutional and prior validation of what constitutes valid knowledge; peer production vetting is a posteriori vetting by the community of participants.
A quote on the difference between peer to peer processes and academic peer review:
“One of the early precedents of open source intelligence is the process of academic peer review. As academia established a long time ago, in the absence of fixed and absolute authorities, knowledge has to be established through the tentative process of consensus building. At the core of this process is peer review, the practice of peers evaluating each other's work, rather than relying on external judges. The specifics of the reviewing process are variable, depending on the discipline, but the basic principle is universal. Consensus cannot be imposed, it has to be reached. Dissenting voices cannot be silenced, except through the arduous process of social stigmatization. Of course, not all peers are really equal, not all voices carry the same weight. The opinions of those people to whom high reputation has been assigned by their peers carry more weight. Since reputation must be accumulated over time, these authoritative voices tend to come from established members of the group. This gives the practice of peer review an inherently conservative tendency, particularly when access to the peer group is strictly policed, as it is the case in academia, where diplomas and appointments are necessary to enter the elite circle. The point is that the authority held by some members of the group - which can, at times, distort the consensus-building process - is attributed to them by the group, therefore it cannot be maintained against the will of the other group members." (Felix Stalder in: http://news.openflows.org/article.pl?sid=02/04/23/1518208)
Peer Review is not Obsolete
Ward Cunningham at http://www.re-public.gr/en/?p=141
"Does the proliferation of wikis mark the eventual end of peer review? How is this development changing the nature of scientific communities?
W.C.: Wiki does not threaten peer review. Science needs peer review and it will get it. I do not see knowledge produced through wikis as being on the same ground with scientific knowledge. Wiki is best seen as a way of reporting, sharing, coordinating, problem framing and agenda setting. A wiki works best where you’re trying to answer a question that you can’t easily pose, where there’s not a natural structure that’s known in advance to what you need to know.
Science is based on repeatable experiment. The peer review is a means of assessing the quality of the experiments, not voting on the preference for a particular result. But we should not forget that what you get as a wiki reader is access to people who had no voice before. The people to whom we are giving voice are aware of what it’s like to write, and ship, a computer program.
If you want to contribute to a scientific journal you should be peer reviewed. Part of peer review is that you’re familiar with all the other literature. And the other literature somehow has spiralled off into irrelevance. What was being written about programming didn’t match what practicing programmers felt. With wiki, practicing programmers who don’t have time to master the literature and get a column in a journal that’s going to be read have a place where they could say things that are important to them. The wiki provides a different view. In fact you can tell when someone is writing on wiki from their personal experience versus when they are quoting what they last read."
(http://www.re-public.gr/en/?p=141)
Research reveals weaknesses of peer review processes
Summary of the research from http://jp.senescence.info/thoughts/peer_review.html
"Although peer review is plagued by elitism, bias, and abuse, it is deemed by many as essential to the scientific process. Ironically, peer review has no valid scientific base. The few studies done on peer review suggest that reviewers -- also known as referees -- vary markedly in their opinions. Moreover, peer review does not prevent scientific fraud, hardly detects errors, and only modestly improves scientific quality (Smith, 1999; van Rooyen et al., 1999; Rothwell and Martyn, 2000). Importantly, peer review continues to be a conservative process that smothers innovative and unconventional ideas." (http://jp.senescence.info/thoughts/peer_review.html)
Limitations of the present regime of peer review
By M. Guédon at http://scholarlypublishing.blogspot.com/2007/07/scholarly-communication-open-access-and.html
"the present system is too rigid, too unwieldy to permit such small-scale, yet potentially crucial interventions. To make the proper corrections, one would have to republish and perhaps even go through the publisher if it is in print. The communication process is therefore limited or blocked.
There is a second type of difficulty: the present system of scholarly publishing relies more on a credential system and a co-operative system rather than on the intrinsic quality of individual intelligence and the excellence of the submitted text. One does not enter scientific or scholarly territories without showing the right kinds of references - diplomas, titles, names of institutions, etc. As a result, the scientific and scholarly enterprises work as a two-tier system where the authorized write and read and the others do not write and often cannot read because of economic barriers (such as high subscription prices and lack of affiliation to the right library).
To address these obstacles, M. Guédon touches on the granularity issue. The article is not the only possible model to contribute to scholarly or scientific research. This is even truer of the monograph in the humanities and, in fact, the article has superseded the monograph in most disciplines. He suggests that knowledge should be regarded as a conversation. People should freely be able to contribute to it. In the scientific community for example, moving closer to a wikipedia model could be the way of the future as knowledge would be made available to everyone; it can be created together, modified on a global scale, improved upon, and so forth. However, the argument of quality comes to mind. He counters that the present criteria for quality inherently rest on a hierarchical vision of society. When excellence is sought, the greater the number of minds involved, the greater the quality of the work done: the case of free software and some recent analyses of Wikipedia confirm this general rule. The greater the numbers of people involved in an issue, the better the answers are crafted. Consequently, the lines that separate the experts from the rest of society should be erased. We will always have experts in various fields, but to limit contributions to knowledge as a whole to experts only is to deprive all of humanity of its enormous potential for distributed intelligence." (http://scholarlypublishing.blogspot.com/2007/07/scholarly-communication-open-access-and.html)
Christopher Turner: Peer Review is open to Fraud
Summarized from http://www.convergence-cpt.com/FraudPeerReview.html.
The writer is the author of the book Convergence at http://www.convergence-cpt.com/index.html
The book is now available through Kindle as well as other e-reader platforms; the original platform was not user friendly, hence the move to Kindle [2]
Christopher Turner, neuroscientist:
"Scientific fraud and career development
Fraud, by whatever means, can give an individual a huge advantage over those playing by the rules. Whereas the more blatant forms of fraud (such as data fabrication) can cause considerable harm, the more subtle forms of scientific deception (false claims of authorship, intellectual theft and shameless self-promotion, for example) can be just as damaging.
The community will self-correct, right?
That the community will self-regulate is sadly true only for the most blatant of cases and has validity only if the perpetrator is both caught and the deception openly acknowledged (as well as dealt with effectively). Relying on self-regulation to police fraudulent behavior is dangerous, as not only will most fraud thus go unaccounted for but the damage to the victim(s) never gets acknowledged.
Hidden cost of fraud
Even when fraud is somewhat transparent (as in the case of Scott Reuben) it may take a great deal of time before the deception is revealed (12 years in this case, and it's anyone's guess just how many other careers were adversely affected in that time). Once such fraudulent people are embedded as "solid citizens" of the science community, there’s no telling what they will do when they are elevated to positions of influence and power. What is more likely, that they will switch to more ethically-driven behavior after they’ve made it or continue to use the same tactics that got them there in the first place?
Peer Review: is it fatally flawed?
How often have you said (or heard a colleague say) “How on earth did that get past the reviewers?”. How often have you been a reviewer of a manuscript and spotted serious conceptual flaws, alarming methodological problems or addled-mindedness about the design, only for the editor to completely over-ride your recommendation (to significantly revise or reject the paper) and instead accept the manuscript essentially unchanged? When editorial discretion borders on nepotism, the peer-review process becomes nothing more than a checked box in an online survey.
Does the current “Peer Review” system invite deception?
As long as publications and grant support are used as critical benchmarks for career progress, the peer review system will remain vulnerable to fraud. There is growing dissatisfaction with this system, and the emergence of online “open access” models suggests something better is on the horizon, though for some it may not arrive soon enough. At the moment it appears we are stuck with the classic peer-review system because, much like democracy, it may not always be pretty but what else is there?
Did Reuben fail Science or did Science fail itself?
In a recent article by Adam Marcus, Josephine Johnston (an attorney specializing in research integrity) was asked her opinion about the Reuben fraud: “It’s usually just one article, not a body of work,” Ms. Johnston said. “What’s particularly surprising, given the dimensions of the case, is that Dr. Reuben’s research managed to raise no alarms among reviewers”. She added, “.....the peer review system can only do so much. Trust is a major component of the academic world. It’s backed up by the implication that your reputation will be destroyed if you violate that trust.” Expressing surprise that no alarms were raised illustrates why the peer-review system needs to change: we trust it too much, yet that trust is violated far more often than most are aware of or would care to admit.
Fraud of the highest order - should a doctoral degree ever be rescinded?
In one of the more spectacular cases of fraud (see Jan Hendrik Schön), malfeasance was discovered by scientists doing what they are trained to do: be skeptical. By necessity these days, most research is conducted as a team but, by mere proximity alone, careers can be compromised or ruined because a given research group did not realize they were working with a less-than-honorable colleague.
Closing Statement
Career development in the sciences is decided by fine margins these days, and most of us are generally as good as each other. So to stand out you need an edge, and fraud (subtle or blatant) can give unscrupulous people a huge strategic advantage. We CAN fix the problem of professional malfeasance, but, much like Wall Street, we may have to suffer a catastrophic collapse before we realize we should have done something years ago.
Is peer review a failure for scientific validation?
Adam Mastroianni:
"the hypothesis seemed so obviously true: science will be better off if we have someone check every paper and reject the ones that don’t pass muster. They called it “peer review.”
This was a massive change. From antiquity to modernity, scientists wrote letters and circulated monographs, and the main barriers stopping them from communicating their findings were the cost of paper, postage, or a printing press, or on rare occasions, the cost of a visit from the Catholic Church. Scientific journals appeared in the 1600s, but they operated more like magazines or newsletters, and their processes of picking articles ranged from “we print whatever we get” to “the editor asks his friend what he thinks” to “the whole society votes.” Sometimes journals couldn’t get enough papers to publish, so editors had to go around begging their friends to submit manuscripts, or fill the space themselves. Scientific publishing remained a hodgepodge for centuries. (Only one of Einstein’s papers was ever peer-reviewed, by the way, and he was so surprised and upset that he published his paper in a different journal instead.)
That all changed after World War II. Governments poured funding into research, and they convened “peer reviewers” to ensure they weren’t wasting their money on foolish proposals. That funding turned into a deluge of papers, and journals that previously struggled to fill their pages now struggled to pick which articles to print. Reviewing papers before publication, which was “quite rare” until the 1960s, became much more common. Then it became universal. Now pretty much every journal uses outside experts to vet papers, and papers that don’t please reviewers get rejected. You can still write to your friends about your findings, but hiring committees and grant agencies act as if the only science that exists is the stuff published in peer-reviewed journals. This is the grand experiment we’ve been running for six decades.
The results are in. It failed."
(https://experimentalhistory.substack.com/p/the-rise-and-fall-of-peer-review)
More Discussion
Three-part critique by Samir Chopra and Scott Dexter:
"we go on to talk about open, non-anonymous peer review as a particular solution, and about free software's methods of peer review and its value as an ideal for the practice of computer science at large. In the second post, I want to talk a bit about how badly, it seems to me, peer review is busted in the sciences. This will be anecdotal, insofar as I will be reliant upon my own experiences and observations. Still, considered as a report from the trenches, it might have some value for the reader. I should also qualify my comments by saying that while peer review seems to work reasonably well in journal article review, it is undeniably broke in conference article and grant proposal review, two fairly large and important parts of the practice of science today. We can then return to the solutions mentioned above." (http://decodingliberation.blogspot.com/2008/02/problems-with-peer-review-part-one.html)
Read: Part One; Part Two; Part Three
Critique by George Siemens:
" I’m dissatisfied, and growing more so, with the process for the following reasons:
1. The process takes a long time (anywhere from about eight months to several years – depending on the field). By the time an article is finally in print format, it’s often partly obsolete, especially in the educational technology field.
2. The process is not about quality. I’ll get into this a bit more later in this post, but from my experience, many, many good articles are poorly reviewed simply because the reviewer is not well informed in the area. I frequently turn down review requests when I feel I am not capable of serving the process well. I’m not convinced this is often the case. At several recent conferences, I was exploring the poster sessions (often comprised of articles that are “downgraded” to poster sessions at research-focused conferences). I was surprised at the exceptional quality of several posters. Inexplicably, excellent research-based papers were not receiving the attention they deserved (especially when accepted papers were of noticeably poorer quality). I can only conclude that reviewers failed to understand the research they were reviewing.
3. The process is not developmental. With few exceptions, journals and conferences run on tight time lines. A paper that shows promise is often not given time to be rewritten due to time constraints. Peer review should be a developmental process (I threw out a few ideas on this process in Scholarship in an Age of Participation). Journals should not be knowledge declaration spaces. Journals should be concerned with knowledge growth as a process in service of a field of inquiry." (http://www.connectivism.ca/?p=160)
More Information
- Stevan Harnad on why we need peer review, at http://cogprints.org/1646/
- Grazia Ietto-Gillies: Replacing Peer Review by an ex-post, bottom-up peer comments system
- Lessons from the History and Philosophy of Science regarding the Research Assessment Exercise, at http://www.ucl.ac.uk/sts/gillies/ This article by Prof. Donald Gillies shows examples of why an excessive reliance on peer review can impede scientific progress, as major advances were in their time rejected by their peers.
- History and current practice of peer review: overview article at http://www.textjournal.com.au/april08/johnston_krauth.htm
- Peer Review: The View from Social Studies.
Cases
Electronic Transactions on Artificial Intelligence
ETAI "has a two-stage process, with a three-month open review stage followed by a speedy up-or-down refereeing stage (with some time for revisions, if desired, inbetween). This process, the editors acknowledge, has produced some complications in the notion of “publication,” as the texts in the open review stage are already freely available online; in some sense, the journal itself has become a vehicle for re-publishing selected articles.
ETAI’s dual-stage process highlights a bifurcation in the purpose of peer review: first, fostering discussion and feedback amongst scholars, with the aim of strengthening the work that they produce; second, providing a mechanism through which that work may be filtered for quality, such that only the best is selected for final “publication.” Moreover, by foregrounding the open stage of peer review — by considering an article “published” during the three months of its open review, but then only “refereed” once anonymous scientists have held their up-or-down vote, a vote that comes only after the article has been read, discussed, and revised — such a dual-stage process promises to return the center of gravity in peer review to communication amongst peers.
ETAI’s process thus highlights the relatively conservative move that Nature made with its open peer review trial. First, the journal was at great pains to reassure authors and readers that traditional, anonymous peer review would still take place alongside open discussion. There was, moreover, a relative lack of communication between the two forms of review: open review took place at the same time as anonymous review, rather than as a preliminary phase, preventing authors from putting the public comments they received to use in revision. And though the open review was on some level expected to serve as a parallel to the closed review process — thus Miller’s disappointment that the comments weren’t as thorough as traditional peer reviews — they weren’t really allowed to serve a parallel function: while the editors “read” all such public comments, it was decided from the beginning that only the anonymous reviews would be considered in determining whether any given article was published." (http://mediacommons.futureofthebook.org/mcpress/plannedobsolescence/one/the-future-of-peer-review/)
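To make the dual-stage mechanics concrete, here is a minimal sketch of such a workflow in Python. It is a sketch under stated assumptions: the stage names, the 90-day open window, and the class layout are illustrative inventions for this entry, not ETAI's actual software.

```python
# A minimal sketch of a dual-stage review workflow of the kind ETAI
# describes: an open review stage in which the article is already public
# and gathering comments, then a speedy anonymous up-or-down decision.
# All names and the 90-day constant are illustrative assumptions.

from __future__ import annotations

from dataclasses import dataclass, field
from datetime import date, timedelta
from enum import Enum, auto


class Stage(Enum):
    SUBMITTED = auto()
    OPEN_REVIEW = auto()   # freely available online; in effect already "published"
    REFEREEING = auto()    # anonymous up-or-down vote
    REFEREED = auto()      # accepted into the journal proper
    NOT_REFEREED = auto()  # stays online, but without the journal's stamp


@dataclass
class Article:
    title: str
    stage: Stage = Stage.SUBMITTED
    comments: list[str] = field(default_factory=list)
    open_review_ends: date | None = None

    def start_open_review(self, today: date) -> None:
        # Stage 1: the text goes online immediately, so discussion and
        # feedback happen before, not after, the formal decision.
        self.stage = Stage.OPEN_REVIEW
        self.open_review_ends = today + timedelta(days=90)  # three months

    def add_comment(self, comment: str) -> None:
        if self.stage is not Stage.OPEN_REVIEW:
            raise ValueError("comments belong to the open review stage")
        self.comments.append(comment)

    def referee(self, accept: bool) -> None:
        # Stage 2: filtering for quality, separated from the discussion.
        self.stage = Stage.REFEREED if accept else Stage.NOT_REFEREED
```

The sketch tries to capture the bifurcation noted above: add_comment serves the discussion-and-feedback function, referee serves the filtering function, and the article remains readable throughout.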
arXiv's author verification system
"Such papers are very often submitted to arXiv before they are submitted to journals – sometimes because the authors want feedback, and sometimes simply to get an idea out into circulation as quickly as possible. However, a growing number of influential papers have only been published on the arXiv server, and some have suggested that arXiv has in effect replaced journal publication as the primary mode of scholarly communication within certain specialties in physics. As Paul Ginsparg indicates, arXiv has had great success as a scholarly resource despite employing only a modicum of review:
From the outset, a variety of heuristic screening mechanisms have been in place to ensure insofar as possible that submissions are at least of refereeable quality. That means they satisfy the minimal criterion that they would not be peremptorily rejected by any competent journal editor as nutty, offensive, or otherwise manifestly inappropriate, and would instead at least in principle be suitable for review (i.e., without the risk of alienating or wasting the time of a referee, that essential unaccounted resource). These mechanisms are an important – if not essential – component of why readers find the site so useful: though the most recently submitted articles have not yet necessarily undergone formal review, the vast majority of the articles can, would, or do eventually satisfy editorial requirements somewhere. (Ginsparg 12, emphasis in original)
In 2004, however, arXiv added a layer of author verification to its system by implementing an endorsement process that requires new authors to be vouched for by established authors before submitting their first paper to any subject area on the site. The site is at great pains to indicate that the endorsement process “is not peer review,” but it is a process for the review of peers, and as such bears a direct relationship to the site administrators’ desire to maintain the consistently high quality of submissions to the site, a means of verifying that “arXiv contributors belong [to] the scientific community” (“The arXiv endorsement system”).[1.19] The site administrators do note, however, that “Endorsement is a necessary but not sufficient condition to have papers accepted in arXiv; arXiv reserves the right to reject or reclassify any submission,” suggesting that the open server is nonetheless subject to a degree of editorial control, if not in the form of traditional peer review." (http://mediacommons.futureofthebook.org/mcpress/plannedobsolescence/one/the-future-of-peer-review/)
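As a rough illustration of the endorsement gate described above (new authors must be vouched for by an established author before a first submission to a subject area, and endorsement is necessary but not sufficient), here is a minimal sketch in Python. The data structures, function names, and the moderation flag are assumptions made for this entry, not arXiv's implementation.

```python
# A minimal sketch of an endorsement gate of the kind arXiv describes.
# Everything here is an illustrative assumption, not arXiv's code.

endorsements: dict[tuple[str, str], str] = {}  # (author, subject) -> endorser
established: set[tuple[str, str]] = set()      # authors with prior papers in a subject


def endorse(endorser: str, author: str, subject: str) -> None:
    # Only authors already established in a subject area may vouch for newcomers.
    if (endorser, subject) not in established:
        raise PermissionError(f"{endorser} is not established in {subject}")
    endorsements[(author, subject)] = endorser


def may_submit(author: str, subject: str) -> bool:
    # Established authors submit directly; new authors need an endorsement.
    return (author, subject) in established or (author, subject) in endorsements


def submit(author: str, subject: str, passes_moderation: bool) -> bool:
    # Endorsement is necessary but not sufficient: the archive still
    # reserves the right to reject or reclassify any submission.
    if not may_submit(author, subject):
        return False
    if passes_moderation:
        established.add((author, subject))  # later papers need no new endorsement
        return True
    return False
```

Under these assumptions, an author with a prior paper in a subject area can endorse a newcomer, whose submission then still passes through the screening step; this is "not peer review," merely a review of peers followed by a light editorial check.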