Citation Analysis Gone Crazy

Perhaps we should stop and look at the evils of citation analysis in science. Citation analysis began some 15 or 20 years ago with the useful thought that it would be nice to know whether one’s scientific papers were being read and used by others working in the same area. But it has since morphed into a Godzilla that has the potential to run our lives. I think the current situation rests on three principles:

  1. Your scientific ability can be measured by the number of citations you receive. This is patent nonsense.
  2. The importance of your research is determined by which journals accept your papers. More nonsense.
  3. Your long-term contribution to ecological science can be measured precisely by your h-score or some variant.

These principles appeal greatly to the administrators of science and to many of the people who dish out the money for scientific research: you can justify your decisions with numbers. What an excellent way to make the research enterprise quantitative. The contrary view, which I would hope is held by many scientists, rests on three different principles:

  1. Your scientific ability is difficult to measure and can only be approximately evaluated by another scientist working in your field. Science is a human enterprise not unlike music.
  2. The importance of your research is impossible to determine in the short term of a few years, and in a subject like ecology probably will not be recognized for decades after it is published.
  3. Your long-term contribution to ecological science will have little to do with how many citations you accumulate.

It will take a good historian to evaluate these alternative views of our science.

This whole issue would not matter except for the fact that it is eroding science hiring and science funding. The latest I have heard is that Norwegian universities are now given a large amount of money by the government if they publish a paper in SCIENCE or NATURE, and a very small amount of money if they publish the same results in the CANADIAN JOURNAL OF ZOOLOGY or – God forbid – the CANADIAN FIELD NATURALIST (or equivalent ‘lower class’ journals). I am not sure how many other universities will adopt this kind of reward-based publication scoring. All of this is done, I think, because we do not wish to involve human judgment in decision making. I suppose you could argue that this is a grand experiment like climate change (with no controls): use these scores for 30 years and then see if they worked better than the old system based on human judgment. But how would one evaluate such an experiment?

NSERC (the Natural Sciences and Engineering Research Council) in Canada has been trending in that direction over the last several years. In the good old days scientists read research proposals and made judgments about the problem, the approach, and the likelihood of success of a research program. They took time to discuss at least some of the issues. But we are now moving to quantitative scores that replace human judgment, which I believe is a very large mistake.

I view ecological research and practice much as I think medical research and medical practice operate. We do not know in advance how well certain studies and experiments will work, any more than a surgeon knows exactly whether a particular technique or treatment will work, or whether a particular young doctor will become a good surgeon; we gain this knowledge by experience, in a mostly non-quantitative manner. Meanwhile we should encourage young scientists to try new ideas and studies, and give them opportunities based on judgment rather than on counts of papers or citations. Currently we want to rank everyone and every university like sporting teams and find out the winner. This is a destructive paradigm for science. It works for tennis but not for ecology.


On publishing in SCIENCE and NATURE

We are having an ongoing discussion at the University of Canberra Institute for Applied Ecology about the need to obtain a measure of our strength in research. We have entered the age of quantification of all things, even those that cannot be quantified, and so each of us must get our ranking from citation rates, h-scores, or journal impact factors. Institutes rise and fall, along with our research grants, on the basis of these numbers. All of this seems to be necessary but is quite silly for two reasons. First, the importance of any particular paper or idea can only be judged in the long term, so deciding whether someone should have a job on the basis of a citation rate is a cop-out. Second, this quantification undermines the judgment of scientists and administrators as adjudicators of the relative merits of specific research and specific scientists. The problem is that as a young scientist in particular you are caught in a web of nonsense, and you have to play the game.
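For anyone who has not met it, the h-score at the centre of this ranking game is simple arithmetic: it is the largest h such that you have h papers each cited at least h times. A minimal sketch in Python, with the citation counts invented purely for illustration:

    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank   # this paper still clears the bar
            else:
                break      # every later paper has fewer citations than its rank
        return h

    # Invented citation counts for one researcher's papers.
    papers = [48, 33, 20, 15, 9, 7, 7, 4, 2, 0]
    print(h_index(papers))  # -> 7: seven papers with at least 7 citations each

Note what that single number ignores: what the papers said, who used them, and whether any of them changed the field.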

The name of the game is to get a paper into SCIENCE or NATURE. To do this you must shorten the presentation so much that it is nearly unintelligible, violating the staid assumption that a scientific paper must contain enough detail that someone else can repeat the study and test its conclusions. Those details are typically relegated to supplementary materials that one downloads separately from the published paper. So these papers become like newspaper headlines, giving a grand conclusion with little detail on how it was reached. But such a publication is the hallmark of success, so one must try. The only rule I can suggest is to have a Plan B for publication, since about 99% of papers submitted to SCIENCE and NATURE are rejected.

There is a demography at work here that we must keep in mind. If scientific output is doubling roughly every 7 years while the number of slots in these journals stays essentially fixed, then getting a paper into SCIENCE or NATURE now is twice as hard as it was 7 years ago, on a totally random model of acceptance. So when your supervisor tells you that he or she got a paper into SCIENCE xx years ago, and so should you now, you might point out the demographic momentum of science.
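To make that arithmetic explicit, here is a toy calculation, assuming fixed journal capacity and submissions that double every 7 years; every number in it is invented for illustration:

    # Toy model: fixed journal capacity, submissions doubling every 7 years.
    DOUBLING_TIME = 7.0      # years for scientific output to double (assumed)
    SLOTS = 800              # papers the journal accepts per year (invented)
    SUBMISSIONS_NOW = 10000  # submissions per year today (invented)

    def acceptance_odds(years_ago):
        """Acceptance probability under purely random acceptance."""
        submissions = SUBMISSIONS_NOW / 2 ** (years_ago / DOUBLING_TIME)
        return SLOTS / submissions

    for t in (14, 7, 0):
        print(f"{t:2d} years ago: {acceptance_odds(t):.1%}")
    # 14 years ago: 32.0%
    #  7 years ago: 16.0%
    #  0 years ago: 8.0%

Only the ratios matter: on a random model the odds halve with every doubling of output, whatever the true numbers are.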

Editors of any journal, and especially of SCIENCE and NATURE, are under great pressure, and anyone who thinks their decisions are completely unbiased probably also thinks the earth is flat. All of us think some parts of our science are more important than others, and editorial decisions are far from perfect. The important message for young scientists is not to get discouraged when rejection slips appear. Any senior scientist could paper the hallways with letters of rejection from various journals. The important thing is to do good research, test hypotheses, make interesting speculations that can be tested, and move on, with or without a paper in SCIENCE or NATURE.

Finally, if you want an interesting project, you might trace the history of papers that have appeared in SCIENCE and NATURE over the last 50 years and see how many of them have been significant contributions to the ecological science we recognize now. Perhaps someone has done this already and it has been rejected by SCIENCE and is sitting in a filing cabinet somewhere…