On Journal Referees

I have been an editor of enough ecological journals to know the problems of referees first hand. The first problem is finding referees for a particular paper. I would guess that more than two-thirds of the scientists now asked to referee a paper say they do not have time. This leads to another question of why no one has any time to do anything, but that is a digression. If one is fortunate, one gets two or three good referees.

The next problem comes when the reviews of the paper come back. Besides dealing with how long the reviews take to arrive, there are four rules which ought to be enforced on all referees. First, review the paper as it is; do not write a review saying what the authors should have written instead, for that is not your job. Second, if the paper is good enough, be positive in making suggestions for improvement. If in your opinion it is not good enough, try to say so politely and suggest alternative journals; perhaps the authors are aiming for a journal that is too prestigious. Third, do not say, in so many words, that the author should cite the following four papers of mine…. And fourth, do not make ad hominem attacks on the authors. If you do not like people from Texas, this is not the place to take it out on the particular authors who happen to live there.

Given the reviews, the managing editor for the paper ought to make a judgment. Some reviews do not follow the four rules above; a good editor discards these and puts a black mark against that particular reviewer's file. I would not pass a referee's review on to the authors if it violated any of the four rules. I have known and respected editors who operated this way in the past.

The difficulty now is that ecological journals are overrun. This is driven in part by the desire to maximize the number of papers one publishes in order to get a job, and in part by journals not wanting to publish longer papers. Journals have neither the funding nor the desire to grow in relation to the number of submissions they receive. This typically means that papers are sent out for review with a note attached saying that we have to reject 80% or so of papers regardless of how good they are, a rather depressing order from above. Once this level of automatic rejection is reached, the editor-in-chief has the power to reject whatever kinds of papers are not in favour at the moment: "I like models, so let's publish lots of model papers." Or, "I like data, so let's publish only a few model papers."

One reason journals are overrun is that many of the papers published in our best ecology journals are discussions of what we ought to be doing. They may be well written, but they add nothing to the wisdom of our age if they simply repeat what has been in standard textbooks for the last 30 years. In days gone by, I think many of these papers might have been given as review seminars, possibly at a meeting, but no one would have thought them worthy of publication. Clearly the editors of some of our journals think it is more important to talk about what to do rather than to do it.

I think, though without any empirical data, that the quality of manuscript reviews has deteriorated as the number of papers published has increased. I often have to translate reviews for young scientists who are devastated by some casual remark in a review: "Forget that nonsense; deal with this point, as it is important; ignore this insult to your supervisor; go have a nice glass of red wine and relax…" One learns how to deal with poor reviews.

I have been reading Bertram Murray's book "What Were They Thinking? Is Population Ecology a Science?" (2011), unfortunately published only after he died in 2010. It is a long diatribe about reviews of some of his papers, and it would be instructive for any young ecologist to read it. You can appreciate why Murray had trouble with some editors just from the subtitle of his book, "Is Population Ecology a Science?" It illustrates very well that even established ecologists have difficulty dealing with reviews they think are unfair. In Murray's defence, he was able to get many of his papers published, and he cites these in the book. One will not come away from this reading with much respect for ornithology journals.

I think if you can get one good, thoughtful review of your manuscript you should be delighted. And if you are rejected from your favourite journal, try another one. The walls of academia could be papered with letters of rejection for our most eminent ecologists, so you are in the company of good people.

Meanwhile, if you are asked to referee a paper, do a good job and try to obey the four rules. Truth and justice do not always win out when you are trying to get a paper published, in this as in any other endeavour. At least as a referee you can try to avoid these problems yourself.

Barto, E. Kathryn, and Matthias C. Rillig. 2012. “Dissemination biases in ecology: effect sizes matter more than quality.” Oikos 121 (2):228-235. doi: 10.1111/j.1600-0706.2011.19401.x.

Ioannidis, John P. A. 2005. “Why most published research findings are false.” PLoS Medicine 2 (8):e124.

Medawar, P.B. 1963. “Is the scientific paper a fraud?” In The Threat and the Glory, edited by P.B. Medawar, 228-233. New York: Harper Collins.

Merrill, E. 2014. “Should we be publishing more null results?” Journal of Wildlife Management 78 (4):569-570. doi: 10.1002/jwmg.715.

Murray, Bertram G., Jr. 2011. What Were They Thinking? Is Population Ecology a Science? Infinity Publishing. 310 pp. ISBN 9780741463937.


3 thoughts on "On Journal Referees"

  1. ELS

    I agree very strongly with your statement, “Clearly the editors of some of our journals think it is more important to talk about what to do rather than to do it.” When I was a student, only Dan Janzen could get away with such papers, because he actually had novel insights. I guess this means that I’m now old enough to be called a curmudgeon, but I sincerely hope that we can go back to valuing papers that actually present data or, at the very least, novel insights. I’m so tired of being forced (by reviewers who violate rule 3, above) to cite papers that add absolutely no value to the discussion while rehashing issues (probably in a graduate seminar) that have already been resolved.

  2. Michael McBain

    Apart from the avalanche of papers making rather small contributions to our corpus of knowledge, here in Australia [a place Charlie is well familiar with], the pressure is now on to publish in ‘prestigious’ or ‘Tier 1’ or ‘A-star’ journals, as though ‘Tier 2’ or ‘A’, ‘B’, or [God forbid] ‘C’ journals are unworthy of our patronage. There was even a list of the ‘acceptable’ journals, whose editors must now endure a tsunami of journal-inappropriate submissions from all over the place. By ‘journal-inappropriate’ I mean that the authors have not thought about whether their paper fits into the rubric or ethos of that journal. The ‘prestige’ of these journals is often based on citation data, and the people making the decisions about relative standing obviously don’t concern themselves with journals that aren’t in SCI. In my field [GIS/remote sensing/environmental change], there are about 20 journals, each with its own specialist or preferred areas, but only about 5 of them are in SCI. A particular paper may not be appropriate to any of the five ‘prestigious’ ones, so why should I waste everybody’s time submitting it when I know it would fit better with one of the other 15 ‘non-prestigious’ journals? The fashions of academic politics first blow east, then they blow west.
