Citation Analysis Gone Crazy

Perhaps we should stop and look at the evils of citation analysis in science. Citation analysis began some 15 or 20 years ago with a useful thought that it might be nice to know if one’s scientific papers were being read and used by others working in the same area. But now it has morphed into a Godzilla that has the potential to run our lives. I think the current situation rests on three principles:

  1. Your scientific ability can be measured by the number of citations you receive. This is patent nonsense.
  2. The importance of your research is determined by which journals accept your papers. More nonsense.
  3. Your long-term contribution to ecological science can be measured precisely by your h-score or some variant.
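
Part of the administrative appeal of the h-score (h-index) is that it is trivial to compute: a researcher has index h if h of their papers have at least h citations each. A minimal sketch in Python, with invented citation counts for illustration:

```python
def h_index(citations):
    """Return the largest h such that h papers have at least h citations each."""
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:
            h = rank
        else:
            break
    return h

# Invented citation counts for one researcher's papers.
print(h_index([10, 8, 5, 4, 3, 2]))  # 4 papers have at least 4 citations each -> 4
```

A single number this easy to calculate is exactly the kind of score administrators reach for, which is the point at issue.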

These principles appeal greatly to the administrators of science and to many people who dish out the money for scientific research. You can justify your decisions with numbers. An excellent way to make the research enterprise quantitative. The contrary view, which I hope is held by many scientists, rests on three different principles:

  1. Your scientific ability is difficult to measure and can only be approximately evaluated by another scientist working in your field. Science is a human enterprise not unlike music.
  2. The importance of your research is impossible to determine in the short term of a few years, and in a subject like ecology probably will not be recognized for decades after it is published.
  3. Your long-term contribution to ecological science will have little to do with how many citations you accumulate.

It will take a good historian to evaluate these alternative views of our science.

This whole issue would not matter except for the fact that it is eroding science hiring and science funding. The latest I have heard is that Norwegian universities are now given a large amount of money by the government if they publish a paper in SCIENCE or NATURE, and a very small amount of money if they publish the same results in the CANADIAN JOURNAL OF ZOOLOGY or – God forbid – the CANADIAN FIELD NATURALIST (or equivalent ‘lower class’ journals). I am not sure how many other universities will fall under this kind of reward-based publication scoring. All of this is done, I think, because we do not wish to involve the human judgment factor in decision making. I suppose you could argue that this is a grand experiment like climate change (with no controls) – use these scores for 30 years and then see if they work better than the old system based on human judgment. But how does one evaluate such an experiment?

NSERC (Natural Sciences and Engineering Research Council) in Canada has been trending in that direction in the last several years. In the good old days scientists read research proposals and made judgments about the problem, the approach, and the likelihood of success of a research program. They took time to discuss at least some of the issues. But we are now moving toward quantitative scores that replace human judgment, which I believe is a very large mistake.

I view ecological research and practice much as I think medical research and medical practice operate. We do not know in advance how well certain studies and experiments will work, any more than a surgeon knows exactly whether a particular technique or treatment will work, or whether a particular young doctor will become a good surgeon; in both fields we gain by experience in a mostly non-quantitative manner. Meanwhile we should encourage young scientists to try new ideas and studies, and give them opportunities based on judgments rather than on counts of papers or citations. Currently we want to rank everyone and every university like sporting teams and find out the winner. This is a destructive paradigm for science. It works for tennis but not for ecology.


Back to p-Values

Alas, ecology has slipped lower on the totem pole of serious sciences thanks to an article that has captured the attention of the media:

Low-Décarie, E., Chivers, C., and Granados, M. 2014. Rising complexity and falling explanatory power in ecology. Frontiers in Ecology and the Environment 12(7): 412-418. doi: 10.1890/130230.

There is much that is positive in this paper, so you should read it, if only to decide whether or not to use it in a graduate seminar in statistics or in ecology. Much of what is concluded is certainly true: there are more p-values in papers now than there were some years ago. The question then comes down to what these kinds of statistics mean, whether they justify the conclusion captured by the media that explanatory power in ecology is declining over time, and what, if anything, should be done about it. Since, as far as I can see, most statisticians today seem to believe that p-values are close to meaningless (e.g. Ioannidis 2005), one wonders what the value of showing this trend is. A second item that most statisticians agree about is that R2 values are a poor measure of anything other than the fit to the items in a particular data set. Any ecological paper that contains data to be analysed and reported summarizes many tests providing p-values and R2 values, of which only some are reported. It would be interesting to do a comparison with what is recognized as a mature science (like physics or genetics) by asking whether past revolutions in understanding and predictive power in those sciences corresponded with increasing numbers of p-values or R2 values.
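
The complaint that R2 measures only the items in a particular data set can be made concrete: in-sample R2 rises mechanically as predictors are added, even when the response and the predictors are all pure noise. A small simulation sketch (invented data, numpy assumed available):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
y = rng.normal(size=n)  # the "response" is pure noise: nothing real to explain

def r_squared(X, y):
    """In-sample R^2 of an ordinary least-squares fit with an intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    tss = (y - y.mean()) @ (y - y.mean())
    return 1 - (resid @ resid) / tss

# R^2 climbs as meaningless predictors are added to the same meaningless y.
for k in (1, 5, 15, 25):
    X = rng.normal(size=(n, k))
    print(f"{k:2d} noise predictors: R^2 = {r_squared(X, y):.2f}")
```

With enough noise predictors the in-sample fit becomes nearly perfect, which says nothing at all about explanatory power beyond this one data set.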

To ask these questions is to ask: what is the metric of scientific progress? At the present time we confuse progress with indicators that may have little to do with scientific advancement. Journal editors race to increase their journal's impact factor, which is interpreted as a measure of importance. For appointments to university positions we ask how many citations a person has and how many papers they have produced. We confuse scientific value with numbers that, ironically, might have a very low R2 value as predictors of progress in a science. These numbers make sense as metrics to tell publishing houses how influential their journals are, or to tell Department Heads how fantastic their job choices are, but we fool ourselves if we accept them as indicators of value to science.

If you wish to judge scientific progress you might wish to look at books that have gathered together the most important papers of the time, and examine a sequence of these from the 1950s to the present time. What is striking is that papers that seemed critically important in the 1960s or 1970s are now thought to be concerned with relatively uninteresting side issues, and conversely papers that were ignored earlier are now thought to be critical to understanding. A list of these changes might be a useful accessory to anyone asking about how to judge importance or progress in a science.

A final comment would be to look at the reasons why a relatively mature science like geology has completely failed to predict earthquakes in advance, and even to specify the locations of some earthquakes (Stein et al. 2012; Uyeda 2013). Progress in understanding does not of necessity bring progress in prediction. And we ought to be wary of confusing progress with p- and R2 values.

Ioannidis, J.P.A. 2005. Why most published research findings are false. PLoS Medicine 2(8): e124.

Stein, S., Geller, R.J., and Liu, M. 2012. Why earthquake hazard maps often fail and what to do about it. Tectonophysics 562-563: 1-24. doi: 10.1016/j.tecto.2012.06.047.

Uyeda, S. 2013. On earthquake prediction in Japan. Proceedings of the Japan Academy, Series B 89(9): 391-400. doi: 10.2183/pjab.89.391.

On publishing in SCIENCE and NATURE

We are having an ongoing discussion at the University of Canberra Institute for Applied Ecology about the need to obtain a measure of our strength in research. We have entered the age of quantification of all things, even those that cannot be quantified, and so each of us must get our ranking from our citation rates, h-scores, or journal impact factors. And institutes rise and fall, along with our research grants, on the basis of these numbers. All of this seems to be necessary but is quite silly, for two reasons. First, the importance of any particular paper or idea can only be judged in the long term, so trying to decide whether someone should have a job because of their citation rate is a cop-out. Second, this quantification undermines the importance of the judgment of scientists and administrators as adjudicators of the relative merits of specific research and specific scientists. The problem is that, as a young scientist in particular, you are caught in a web of nonsense and you have to play the game.

The name of the game is to get a paper in SCIENCE or NATURE. To do this you must shorten the presentation so much that it is nearly unintelligible and violates the long-standing assumption that a scientific paper must contain enough detail that someone else can repeat the study and test its conclusions. These details are typically relegated to supplementary materials that must be downloaded separately from the published paper. So these papers become like headlines in a newspaper, giving a grand conclusion with few of the details of how it was reached. But such a publication is the hallmark of success, so one must try. The only rule I can suggest is to have a Plan B for publication, since about 99% of papers submitted to SCIENCE and NATURE are rejected.

There is a demography at work here that we must keep in mind. If scientific output is doubling approximately every 7 years, then getting a paper into SCIENCE or NATURE now is twice as hard as it was 7 years ago, on a totally random model of acceptance. So when your supervisor tells you that he or she got a paper in SCIENCE xx years ago, and so should you now, you might point out the demographic momentum of science.
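
The arithmetic behind that claim is worth making explicit: if a journal's page budget stays roughly fixed while submissions double every 7 years, then the odds of acceptance under a purely random model halve every 7 years. A toy calculation, with all numbers invented for illustration:

```python
slots = 800            # hypothetical papers the journal can print per year (fixed)
submissions = 10_000   # hypothetical submissions in year 0
doubling_time = 7      # assumed years for scientific output to double

for year in (0, 7, 14, 21):
    subs = submissions * 2 ** (year / doubling_time)
    print(f"year {year:2d}: random-acceptance odds = {slots / subs:.1%}")
```

With these invented numbers the odds fall from 8% in year 0 to 1% in year 21, which is the demographic momentum in question.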

Editors of any journal, especially SCIENCE and NATURE, are under great pressure, and if anyone thinks that their decisions are completely unbiased, they probably think that the earth is flat. All of us think some parts of our science are more important than others, and editorial decisions are far from perfect. The important message for young scientists is not to get discouraged when rejection slips appear. Any senior scientist could paper the hallways with letters of rejection from various journals. The important thing is to do good research, test hypotheses, make interesting speculations that can be tested, and move on, with or without a paper in SCIENCE or NATURE.

Finally, if someone wants an interesting project, you might trace the history of papers that have appeared in SCIENCE and NATURE over the last 50 years and see how many of them have been significant contributions to the ecological science we recognize now. Perhaps someone has done this already and it has been rejected by SCIENCE and is sitting in a filing cabinet somewhere…