The politics of science: Peer review

Blog Post by Dr. S Nassir Ghaemi, Tufts University School of Medicine, Boston, Massachusetts

“In my journal, anyone can make a fool of himself.” (Rudolf Virchow)

Perhaps the most important thing to know about scientific publication is that the “best” scientific journals do not publish the most important articles.  This will be surprising to some readers, and probably annoying to others (often editorial members of prestigious journals).  I could be wrong; this statement reflects my personal experience and my reading of the history of medicine, but if I am correct, the implication for the average clinician is important: it will not be enough to read the largest and most famous journals.  For new ideas, one must look elsewhere.

Peer review

The process of publishing scientific articles is a black box to most clinicians and to the public.  Unless one engages in research, one would not know all the human foibles that are involved.  It is a quite fallible process, but one that seems to have some merit nonetheless.

The key feature is “peer review.”  The merits of peer review are debatable; indeed, its key feature of anonymity can bring out the worst of what has been called the psychopathology of academe.  Let us see how this works.

The process begins when the researcher sends an article to the editor of a scientific journal.  The editor then chooses a few (usually 2-4) other researchers, usually authorities on the topic, to serve as peer reviewers; they are anonymous, and the researcher does not know who they are.  Each reviewer then writes 1-3 pages of review, detailing specific changes they would like to see in the manuscript.  If, in their view, the paper is not accurate, has too many errors, or involves mistaken interpretations, the reviewers can recommend that it be rejected.  The paper would then not be published by that journal, though the researcher could send it to a different journal and go through the same process again.  If the changes requested seem feasible to the editor, the paper is sent back to the researcher with the specific changes requested by the peer reviewers.  The researcher can then revise the manuscript and resubmit it; if all or most of the changes are made, the paper is typically accepted for publication.  Very rarely, reviewers recommend acceptance from the start, with no or only very minor changes.

This is the process.  It may seem rational, but the problem is that human beings are involved, and human beings are not, generally, rational.  In fact, the whole scientific peer review process is, in my view, quite akin to Winston Churchill’s definition of democracy:  It is the worst system imaginable, except for all the others. 

Perhaps the main problem is what one might call academic road rage.  As is well known, anonymity is thought to be a major factor in road rage among drivers of automobiles.  When I do not know who the other driver is, I tend to assume the worst about him; and when he cannot see my face, nor I his, I can afford to be socially inappropriate and aggressive, because facial and other physical cues do not impede me.  I think the same factors are in play in scientific peer review: routinely, one reads frustrated and angry comments from peer reviewers; exclamation points abound; inferences about one’s intentions as an author are made on pure speculation; one’s integrity and research competence are not infrequently questioned.  Sometimes the content that leads to such exasperation is justifiable; legitimate scientific and statistical questions can be raised; it is the emotion and tone that seem excessive.

Four interpretations of peer review

Peer review has become a matter of explicit discussion among medical editors, especially in special issues of the Journal of the American Medical Association (JAMA).  The result of this public debate has been summarized as follows:

Four differing perceptions of the current refereeing process have been identified: ‘the sieve (peer review screens worthy from unworthy submissions), the switch (a persistent author can eventually get anything published, but peer review determines where), the smithy (papers are pounded into new and better shapes between the hammer of peer review and the anvil of editorial standards), and the shot in the dark (peer review is essentially unpredictable and unreproducible and hence, in effect, random).’  It is remarkable that there is little more than opinion to support these characterizations of the gate-keeping process which plays such a critical role in the operation of today’s huge medical research enterprise (‘peer review is the linchpin of science’).  (W. Silverman, quoted in Ghaemi 2009)

I tend to subscribe to the “switch” and “smithy” interpretations.  I do not think that peer review is the wonderful sieve of the worthy from the unworthy that so many assume, nor is it simply random.  It is humanly irrational, however, and thus a troublesome “linchpin” for our science.

It is these human weaknesses that trouble me.  For instance, peer reviewers often know authors, personally or professionally, and may have a personal dislike for an author; or, if not, they may dislike the author’s ideas in a visceral and emotional way.  (For all we know, some may also have economic motivations, as some critics of the pharmaceutical industry suggest.)  How can we remove the biases inherent in anonymous peer review?  One approach would be to remove anonymity and force peer reviewers to identify themselves.  But since all authors serve as peer reviewers for others, and all peer reviewers write their own papers as authors, editors worry that signed reviews would not be complete and direct: reviewers might fear retribution from the authors they critique.  Not just paper publication but grant funding – money, the life blood of a person’s employment in medical research – is subject to anonymous peer review, and thus grudges expressed in later peer review could lead to lost funding and consequent economic hardship.

Who reviews the reviewers?

We see how far we have come from the neutral objective ideals of science.  The scientific peer review process involves human beings of flesh and blood, who like and dislike each other, and the dollar bill, here as elsewhere, has a pre-eminent role.

How good or bad is this anonymous peer review process?  I have described the matter qualitatively; are there any statistical studies of it?  There are; one study, for example, decided to “review the reviewers” (Baxt et al., 1998, Ann Emerg Med, 32: 310-7).  All reviewers for the Annals of Emergency Medicine received for review a fictitious manuscript, a purported placebo-controlled RCT of a treatment for migraine, in which 10 major and 13 minor statistical and scientific errors were deliberately placed.  (Major errors included no definition of migraine, absence of any inclusion or exclusion criteria, and use of a rating scale that had never been validated or previously reported.  Also, the p-value reported for the main outcome was made up and did not follow in any way from the actual data presented.  The data demonstrated no difference between drug and placebo, but the authors concluded that there was a difference.)  Of 203 reviewers, 15 recommended acceptance of the manuscript, 117 rejection, and 67 revision.  So over half of the reviewers appropriately realized that the manuscript had numerous flaws, beyond what would usually allow for appropriate revision.  Further, 68% of reviewers did not realize that the conclusions written by the manuscript authors did not follow from the results of the study.
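The arithmetic behind those counts can be checked directly.  The short sketch below uses only the numbers quoted above from the study; note that the three recommendations sum to 199 of the 203 reviewers, so a few reviewers presumably gave no clear recommendation.

```python
# Reviewer decisions as reported in the text's summary of
# Baxt et al. (1998), Ann Emerg Med 32:310-7.
total_reviewers = 203
decisions = {"accept": 15, "reject": 117, "revise": 67}

# Share of all reviewers making each recommendation.
for decision, n in decisions.items():
    print(f"{decision}: {n} ({100 * n / total_reviewers:.0f}%)")

# The counts sum to 199, not 203: a handful of reviewers
# evidently returned no clear recommendation.
print("sum of decisions:", sum(decisions.values()))
```

Rejection thus accounts for roughly 58% of all reviewers, which is the basis for the “over half” figure in the text.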

If this is the status of scientific peer review, then one has to be concerned that many studies are poorly vetted, and that at least some of the published literature is inaccurate either in its exposition or in its interpretation, even by standard, accepted statistical criteria.

Mediocrity rewarded

Beyond the publication of papers that should not be published, the peer review process has the problem of not publishing papers that should be published.  In my experience both as an author and as an occasional guest editor for scientific journals, when multiple peer reviews bring up different concerns, it is impossible for authors to respond adequately to a wide range of critiques, and thus difficult for editors to publish.  In such cases, the problem, perhaps, is not so much the content of the paper, but rather the topic itself. It may be too controversial, or too new, and thus difficult for several peer reviewers to agree that it merits publication.  

In my own writing, I have noticed that, at times, the most rejected papers are the most enduring.  My rule of thumb is that if a paper is rejected more than five times, then it is either completely useless or utterly prescient.  In my view, scientific peer review ousts poor papers, but also great ones; the middling and comfortably predictable tend to get published.

This brings us back to the claim at the beginning of this article, that the most prestigious journals usually do not publish the most original or novel articles; this is because the peer review process is inherently conservative. I do not claim that there is any better system, but I think the weaknesses of our current system need to be honestly acknowledged.

One weakness is that scientific innovation is rarely welcomed, and new ideas are always at a disadvantage against the old and staid.  Again, non-researchers may have harbored a more favorable illusion about science: that it encourages progress and new ideas, and that it is consciously self-critical.  That is how it should be; but here is how it is, in the words of the great statistician Ronald Fisher:

A scientific career is peculiar in some ways.  Its raison d’etre is the increase of natural knowledge.  Occasionally, therefore, an increase of natural knowledge occurs.  But this is tactless, and feelings are hurt.  For in some small degree it is inevitable that views previously expounded are shown to be either obsolete or false.  Most people, I think, can recognize this and take it in good part if what they have been teaching for ten years or so comes to need a little revision; but some undoubtedly take it hard, as a blow to their amour propre, or even as an invasion of the territory they have come to think of as exclusively their own, and they must react with the same ferocity as we can see in the robins and chaffinches these spring days when they resent an intrusion into their little territories.  I do not think anything can be done about it.  It is inherent in the nature of our profession; but a young scientist may be warned and advised that when he has a jewel to offer for the enrichment of mankind some certainly will wish to turn and rend him.

So this is part of the politics of science – how papers get published.   It is another aspect of statistics where we see numbers give way to human emotions, where scientific law is replaced by human arbitrariness.  Even with all these limitations, we somehow manage to see a scientific literature that produces useful knowledge.  The wise clinician will use that knowledge where possible, while aware of the limitations of the process. 

Dr. S. Nassir Ghaemi is the author of A Clinician’s Guide to Statistics and Epidemiology in Mental Health: Measuring Truth and Uncertainty.

