Tag Archives: research quality

On What to Read in the Ecological Literature

Postgraduate students in ecology face a wall of literature that they must come to grips with in their career. Time is limited and, unlike the French naturalist Comte de Buffon who produced 36 volumes of Histoire Naturelle from 1749 to 1788, most of us do not have the luxury of several assistants reading the current literature to us during all waking hours (even during meals). So, there are three options available now if you wish to become a scientist. First, you can decide that there was nothing serious written before a specified date like 2008, and then concentrate on the recent literature only. Second, you can decide that all the current wisdom in ecology is summarized in a few books and read them. This option has the danger that your choice of books to read may give you a distorted orientation to ecological science. Third, you may decide that your thesis supervisor is a concentrated source of ecological wisdom and simply do what he or she says. This is certainly the most parsimonious way to proceed, but the risk is that you may find later, when looking for a job, that your supervisor was considered a fringe player rather than at the cutting edge of future ecological science.

Whatever your decision, you will still face a large pile of scientific papers. So, the skill you need to sharpen is how to cull the literature. If you wish to study cone production in Pinus banksiana, you can search for all the literature with this Latin name in the search terms of the Web of Science or a similar source program. Given all that, you can now (I am told) get AI to write your thesis automatically. This is of course nonsense, since any specific set of ecological literature will have many contradictory papers, some papers that are outright incorrect because of statistics or experimental design, and others that are speculation rather than data-rich. So, you will have to read a great deal to fix on a specific problem within this specified field that you can address with your thesis work. The key question, as always, is: What next? New ideas, new insights, and new speculation are the keys at this point.

Perhaps the most important insight here is that there are many thousands of unanswered questions in science, and ecology may be particularly challenging in having many critical issues that have simply been dropped because they are too difficult. But what was too difficult 10 years ago may be easy to measure now, so advances in understanding are possible. Here you must pick a problem that is solvable; there are many problems floating around in the ecological literature that are impossible to solve, and others that, if solved, will be of little use for the critical issues that are now visible. There is no simple guidance here for new scientists. We can see in textbooks and reviews the problems of the past clearly stated and investigated, but the problems of the past that AI or your library can highlight may not be the problems that are most important for the future of our science. Bravery here is desirable but dangerous.

There are other issues that I think are worth noting for young ecologists. Read widely. There are many good ecological journals, and do not assume that all you need to read are British ones, or American ones, or Science and Nature. With all due respect, there is much nonsense published in Science and Nature, not to mention less renowned journals. Do not assume that only English-language papers present ecological wisdom. Read sceptically and ask what the evidence is for any conclusion and how good it is. However, a word of caution to postgraduate students is in order here: be careful not to apply these rules to your thesis supervisor’s research. Some things in science are sacred.

Andrew (2020), Fox (2021) and Fox et al. (2023) discuss some of the reasons ecological journals do not reach perfection, and their analyses may help relieve your anxiety if your recent paper has been rejected by your favourite journal.

Andrew, N. R. (2020). Design flaws and poor language: Two key reasons why manuscripts get rejected from Austral Ecology across all countries between 2017 and 2020. Austral Ecology, 45, 505–509. doi: 10.1111/aec.12908.

Fox, C. W. (2021). Which peer reviewers voluntarily reveal their identity to authors? Insights into the consequences of open-identities peer review. Proceedings of the Royal Society B: Biological Sciences, 288(1961), 20211399. doi: 10.1098/rspb.2021.1399.

Fox, C.W., Meyer, J. & Aime, E. (2023) Double-blind peer review affects reviewer ratings and editor decisions at an ecology journal. Functional Ecology, 37, 1144-1157. doi: 10.1111/1365-2435.14259.

The Meaninglessness of Random Sampling

Statisticians tell us that random sampling is necessary for making general inferences from the particular to the general. If field ecologists accept this dictum, we can only conclude that it is very difficult, if not nearly impossible, to reach generality. We can reach conclusions about specific local areas, and that is valuable, but much of our current ecological wisdom on populations and communities rests on the statistically faulty foundation of non-random sampling. We rarely try to define the statistical ‘population’ that we are studying and attempting to make inferences about with our data. Some examples might be useful to illustrate this problem.

Marine ecologists are mostly agreed that sea surface temperature rise is destroying coral reef ecosystems. This is certainly true, but it camouflages the fact that very few square kilometres of coral reefs like the Great Barrier Reef have been comprehensively studied with a proper sampling design (e.g. Green 1979, Lewis 2004). When we analyse the details of coral reef declines, we find that many species are affected by rising sea temperatures, but some are not, and it is possible that some species will adapt by natural selection to the higher temperatures. So we quite rightly raise the alarm about the future of coral reefs. But in doing so we neglect in many cases to specify the statistical ‘population’ to which our conclusions apply.

Most people would agree that worrying about this is tantamount to asking how many angels can dance on the head of a pin, and that in practice we can ignore the problem and generalize from the studied reefs to all reefs. And scientists would point out that physics and chemistry seek generality and ignore this problem, because one can do chemistry in Zurich or in Toronto and use the same laws that do not change with time or place. But the ecosystems of today are not going to be the ecosystems of tomorrow, so generality in time cannot be guaranteed, as paleoecologists pointed out long ago.

It is the spatial problem of field studies that collides most strongly with the statistical rule to sample at random. Consider a hypothetical example of a large national park that has recently been burned by this year’s fires in the Northern Hemisphere. If we wish to measure the recovery process of the vegetation, we need to set out plots to resample. We have two choices: (1) lay out a set of permanent plots and resample them for several years to track recovery, or (2) lay out plots at random each year, never repeating exactly the same areas, to satisfy the statisticians’ specification of a ‘random sample’ of recovery in the park. We would typically do (1) for two reasons. Setting up new plots each year as per (2) would greatly increase the initial field work of locating the random plots and would probably mean that travel time between the plots would be greatly increased. Using approach (1) we would probably set out plots with relatively easy access from roads or trails to minimize the costs of sampling. We ignore the advice of statisticians because of our real-world constraints of time and money, and we hope to answer the initial questions about recovery with this simpler design.
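
A minimal simulation sketch, with entirely hypothetical numbers, of the risk that worries statisticians about choice (1): if vegetation recovers faster near roads than in the park interior, plots placed at random estimate the park-wide mean recovery without bias, while permanent plots concentrated near roads for easy access do not. (The extra travel cost of the random design, which drives the real-world decision, is not modelled.)

import numpy as np

rng = np.random.default_rng(1)
n_cells = 10_000                              # hypothetical grid cells in the burned park
near_road = rng.random(n_cells) < 0.2         # 20% of cells lie near roads or trails
recovery = np.where(near_road,                # % vegetation cover after three years
                    rng.normal(40, 5, n_cells),   # faster recovery near roads (invented)
                    rng.normal(25, 5, n_cells))   # slower recovery in the interior (invented)

# Design (2): 50 plots placed at random across the whole park.
random_plots = rng.choice(n_cells, 50, replace=False)

# Design (1): 50 permanent plots, 40 of them near roads for easy access.
roadside, interior = np.flatnonzero(near_road), np.flatnonzero(~near_road)
convenient_plots = np.concatenate([rng.choice(roadside, 40, replace=False),
                                   rng.choice(interior, 10, replace=False)])

print(f"true park-wide mean recovery:  {recovery.mean():.1f}%")
print(f"random-plot estimate:          {recovery[random_plots].mean():.1f}%")
print(f"convenient-plot estimate:      {recovery[convenient_plots].mean():.1f}%")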

I could find few papers in the ecological literature that discuss this general problem of inference from the particular to the general (Ives 2018, Hauss 2018) and only one that deals with a real-world situation (Ducatez 2019). I would be glad to be sent more references on this problem by readers.

The bottom line is that if your supervisor or research coordinator criticizes your field work because your study areas are not randomly placed or your replicate sites were not chosen at random, tell him or her politely that virtually no ecological research in the field is done by truly random sampling. Does this make our research less useful for achieving ecological understanding? Probably not. And we might note that medical science works in exactly the same way field ecologists do: do what you can with the money and time you have. The law that scientific knowledge requires random sampling is often a pseudo-problem, in my opinion.

Ducatez, S. (2019) Which sharks attract research? Analyses of the distribution of research effort in sharks reveal significant non-random knowledge biases. Reviews in Fish Biology and Fisheries, 29, 355-367. doi: 10.1007/s11160-019-09556-0.

Green, R.H. (1979) Sampling Design and Statistical Methods for Environmental Biologists. Wiley, New York. 257 pp.

Hauss, K. (2018) Statistical Inference from Non-Random Samples. Problems in Application and Possible Solutions in Evaluation Research. Zeitschrift für Evaluation, 17, 219-240.

Ives, A.R. (2018) Informative Irreproducibility and the Use of Experiments in Ecology. BioScience, 68, 746-747. doi: 10.1093/biosci/biy090.

Lewis, J. (2004) Has random sampling been neglected in coral reef faunal surveys? Coral Reefs, 23, 192-194. doi: 10.1007/s00338-004-0377-y.

On Assumptions in Ecology Papers

What can we do as ecologists to improve the publishing standards of ecology papers? I suggest one simple but bold request. We should require at the end of every published paper an annotated list of the assumptions made in the analysis reported in the paper. A tabular format could be devised, with columns for the assumption, the perceived support of and tests for the assumption, and references for this support or lack thereof. I can hear the screaming already, so this table could be put in the Supplementary Material, which most people do not read. We could then add, to the end matter of each paper where there are already statements of who did the writing and who provided the money, either a reference to this assumptions table in the Supplementary Material or a statement that no assumptions were made in reaching these conclusions.

The first response I can detect to this recommendation is that many ecologists will differ in what they consider to be the assumptions behind their analysis and conclusions. As an example, in wildlife studies we commonly make the assumption that an individual animal carrying a radio collar will behave and survive just like another animal with no collar. In analyses of avian population dynamics, we might commonly assume that our visits to nests do not affect nest survival. We make many such assumptions about random or non-random sampling. My question then is whether there is any value in listing these kinds of assumptions. My response is that listing what the authors think they are assuming should alert reviewers to the elephants in the room that have not been listed.

My attention was called to this general issue by the recent paper of Ginzburg and Damuth (2022) in which they contrasted the assumptions of two general theories of functional responses of predators to prey – “prey dependence” versus “ratio dependence”. We have in ecology many such either-or discussions that never seem to end. Consider the long-standing discussion of whether populations can be regulated by factors that are “density dependent” or “density independent”, a much-debated issue that is still with us even though it was incisively analyzed many years ago.  

Experimental ecology is not exempt from assumptions, as outlined in Kimmel et al. (2021) who provide an incisive review of cause and effect in ecological experiments. Pringle and Hutchinson (2020) discuss the failure of assumptions in food web analysis and how these might be resolved with new techniques of analysis. Drake et al. (2021) consider the role of connectivity in arriving at conservation evaluations of patch dynamics, and the importance of demographic contributions to connectivity via dispersal. The key point is that, as ecology progresses, the role of assumptions must be continually questioned in relation to our conclusions about population and community dynamics in relation to conservation and landscape management.

Long ago Peters (1991) wrote an extended critique of how ecology should operate to avoid some of these issues, but his 1991 book is not easily available to students (currently available on Amazon for about $90). To encourage more discussion of these questions, from the older to the more current literature, I have copied Peters’ Chapter 4 to the bottom of my web page at https://www.zoology.ubc.ca/~krebs/books.html for students to download if they wish to discuss these issues in more detail.

Perhaps the message in all this is that ecology has always wished to be “physics-in-miniature”, with grand generalizations like the laws we teach in the physical sciences. Over the last 60 years the battle in the ecology literature has been between this model of physics and the view that every population and community differs, and that everything continues to change under the climate emergency, so that we can have little general theory in ecology. There are certainly many current generalizations, but they are relatively useless for moving from the general to the particular in the development of a predictive science. The consequence is that we now bounce from individual study to individual study, typically starting from different assumptions, with very limited predictability that is empirically testable. The central issue for ecological science is how we can move from the present fragmentation in our knowledge to a more unified science. Perhaps starting to examine the assumptions of our current publications would be a start in this direction.

Drake, J., Lambin, X., and Sutherland, C. (2021). The value of considering demographic contributions to connectivity: a review. Ecography 44, 1-18. doi: 10.1111/ecog.05552.

Ginzburg, L.R. and Damuth, J. (2022). The Issue Isn’t Which Model of Consumer Interference Is Right, but Which One Is Least Wrong. Frontiers in Ecology and Evolution 10, 860542. doi: 10.3389/fevo.2022.860542.

Kimmel, K., Dee, L.E., Avolio, M.L., and Ferraro, P.J. (2021). Causal assumptions and causal inference in ecological experiments. Trends in Ecology & Evolution 36, 1141-1152. doi: 10.1016/j.tree.2021.08.008.

Peters, R.H. (1991) ‘A Critique for Ecology.’ (Cambridge University Press: Cambridge, England.) ISBN:0521400171 (Chapter 4 pdf available at https://www.zoology.ubc.ca/~krebs/books.html)

Pringle, R.M. and Hutchinson, M.C. (2020). Resolving Food-Web Structure. Annual Review of Ecology, Evolution, and Systematics 51, 55-80. doi: 10.1146/annurev-ecolsys-110218-024908.

On Replication in Ecology

All statistics books recommend replication in scientific studies. I suggest that this recommendation has been carried to an extreme in current ecological studies. In approximately 50% of the ecological papers I read in our best journals (a biased sample to be sure), the results of the study are not new and have been replicated many times in the past, often in papers not cited in these ‘new’ papers. There is no harm in this happening, but it does not lead to progress in our understanding of populations, communities or ecosystems, or lead to new ecological theory. We do need replication examining the major ideas in ecology, and this is good. On the other hand, we do not need more and more studies of what we might call ecological truths. An analogy would be to test the Flat Earth Hypothesis in 2022 to examine its predictions. It is time to move on.

There is an extensive literature on hypothesis testing which can be crudely summarized as follows: “Observations of X” can be explained by hypotheses A, B, or C, each of which has unique predictions associated with it. A series of experiments is carried out to test these predictions, and the most strongly supported hypothesis, call it B*, is accepted as current knowledge. Explanation B* is useful scientifically only if it leads to a new set of predictions D, E, and F which are then tested. This chain of explanation is never simple. There can be much disagreement, which may mean sharpening the hypotheses following from Explanation B*. At the same time there will be some scientists who, despite all the accumulated data, still accept the Flat Earth Hypothesis. If you think this is nonsense, you have not been reading the news about the Covid epidemic.

Further complications arise from two streams of thought. The first is that the way forward is via simple mathematical models to represent the system. There is much literature on modelling in ecology which is most useful when it is based on good field data, but for too many ecological problems the model is believed more than the data, and the assumptions of the models are not stated or tested. If you think that models lead directly to progress, examine again the Covid modelling situation in the past 2 years. The second stream of thought that complicates ecological science is that of descriptive ecology. Many of the papers in the current literature describe a current set of data or events with no hypothesis in mind. The major offenders are the biodiversity scientists and the ‘measure everything’ scientists. The basis of this approach seems to be that all our data will be of major use in 50, 100 or whatever years, so we must collect major archives of ecological data. Biodiversity is the bandwagon of the present time, and it is a most useful endeavour to classify and categorise species. As such it leads to much natural history that is interesting and important for many non-scientists. And almost everyone would agree that we should protect biodiversity. But while biodiversity studies are a necessary background to ecological studies, they do not lead to progress in the scientific understanding of the ecosphere.

Conservation biology is closely associated with biodiversity science, but it suffers even more from the problems outlined above. Conservation is important for everyone, but the current cascade of papers in conservation biology is too often of little use. We do not need opinion pieces; we need clear thinking and concrete data to solve conservation issues. This is not easy, since once a species is endangered there are typically too few individuals left to study properly. And like the rest of ecological science, funding is so poor that reliable data cannot be obtained, and we are left with more unvalidated indices or opinions on species changes. Climate change puts an enormous kink in any conservation recommendations, but on the other hand it serves as a panchrestron, a universal explanation for every possible change that occurs in ecosystems, and thus can be used to justify every research agenda, good or poor, with spurious correlations.

We could advance our ecological understanding more rapidly by demanding a coherent theoretical framework for all proposed programs of research. Grace (2019) argues that plant ecology has made much progress during the last 80 years, in contrast to the less positive overview of Peters (1991) or my observations outlined above. Prosser (2020) provides a critique for microbial ecology that echoes what Peters argued in 1991. All these divergences of opinion would be worthy of a graduate seminar discussion.

If you think all my observations are nonsense, then you should read the perceptive book by Peters (1991) written 30 years ago on the state of ecological science as well as the insightful evaluation of this book by Grace (2019) and the excellent overview of these questions in Currie (2019).  I suggest that many of the issues Peters (1991) raised are with us in 2022, and his general conclusion that ecology is a weak science rather than a strong one still stands. We should celebrate the increases in ecological understanding that have been achieved, but we could advance the science more rapidly by demanding more rigor in what we publish.

Currie, D.J. (2019). Where Newton might have taken ecology. Global Ecology and Biogeography 28, 18-27. doi: 10.1111/geb.12842.

Grace, J. (2019). Has ecology grown up? Plant Ecology & Diversity 12, 387-405. doi: 10.1080/17550874.2019.1638464.

Peters, R.H. (1991) ‘A Critique for Ecology.’ (Cambridge University Press: Cambridge, England.). 366 pages. ISBN: 0521400171

Prosser, J.I. (2020). Putting science back into microbial ecology: a question of approach. Philosophical Transactions of the Royal Society. Biological sciences 375, 20190240. doi: 10.1098/rstb.2019.0240.

On Questionable Research Practices

Ecologists and evolutionary biologists are tarred and feathered along with many scientists who are guilty of questionable research practices. So says this article in “The Conversation” on the web:
https://theconversation.com/our-survey-found-questionable-research-practices-by-ecologists-and-biologists-heres-what-that-means-94421?utm_source=twitter&utm_medium=twitterbutton

Read this article if you have time but here is the essence of what they state:

“Cherry picking or hiding results, excluding data to meet statistical thresholds and presenting unexpected findings as though they were predicted all along – these are just some of the “questionable research practices” implicated in the replication crisis psychology and medicine have faced over the last half a decade or so.

“We recently surveyed more than 800 ecologists and evolutionary biologists and found high rates of many of these practices. We believe this to be first documentation of these behaviours in these fields of science.

“Our pre-print results have certain shock value, and their release attracted a lot of attention on social media.

  • 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking)
  • 42% had collected more data after inspecting whether results were statistically significant (a form of “p hacking”)
  • 51% reported an unexpected finding as though it had been hypothesised from the start (known as “HARKing”, or Hypothesising After Results are Known).”

It is worth looking at these claims a bit more analytically. First, the fact that more than 800 ecologists and evolutionary biologists were surveyed tells you nothing about the precision of these results unless you can be convinced this is a random sample. Most surveys are non-random and yet are reported as though they are a random, reliable sample.

Failing to report results is common in science for a variety of reasons that have nothing to do with questionable research practices. Many graduate theses contain results that are never published. Does this mean their data are being hidden? Many results are not reported because they did not find an expected result. This sounds awful until you realize that journals often turn down papers because they are not exciting enough, even though the results are completely reliable. Other results are not reported because the investigator realized, once the study was complete, that it had not been carried on long enough, and the money had run out to do more research. One would have to have considerable detail about each study to know whether or not these 64% of researchers were “cherry picking”.

Alas, the next problem is more serious. The 42% who are accused of “p-hacking” were possibly just using sequential sampling or using a pilot study to get the statistical parameters to conduct a power analysis. Any study which uses replication in time, a highly desirable attribute of an ecological study, would be vilified by this rule. This complaint echoes the statistical advice not to use p-values at all (Ioannidis 2005, Bruns and Ioannidis 2016) and refers back to complaints about inappropriate uses of statistical inference (Amrhein et al. 2017, Forstmeier et al. 2017). The appropriate solution to this problem is to have a defined experimental design with specified hypotheses and predictions rather than an open-ended observational study.
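
For readers unsure why “collecting more data after inspecting whether results were statistically significant” alarms statisticians, here is a minimal simulation sketch (all parameters invented) of the naive version of that practice: adding observations and re-testing until p < 0.05 inflates the false-positive rate well above the nominal 5% even when no real effect exists. A pre-planned sequential design with adjusted thresholds, or a pilot study feeding a power analysis, does not carry this penalty.

import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

def false_positive_rate(peeking, n_sim=2000, alpha=0.05):
    """Share of simulated studies declaring 'significance' when the two groups are identical."""
    hits = 0
    for _ in range(n_sim):
        a = list(rng.normal(0, 1, 20))        # both groups drawn from the SAME distribution
        b = list(rng.normal(0, 1, 20))
        p = stats.ttest_ind(a, b).pvalue
        if peeking:
            # naive p-hacking: add 10 observations per group and re-test,
            # stopping as soon as p < alpha or 100 observations per group is reached
            while p >= alpha and len(a) < 100:
                a.extend(rng.normal(0, 1, 10))
                b.extend(rng.normal(0, 1, 10))
                p = stats.ttest_ind(a, b).pvalue
        hits += p < alpha
    return hits / n_sim

print("fixed-n design:        ", false_positive_rate(peeking=False))   # close to 0.05
print("peek-and-extend design:", false_positive_rate(peeking=True))    # well above 0.05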

The third problem, about unexpected findings, hits at an important aspect of science, the uncovering of interesting and important new results. It is an important point and was warned about long ago by Medawar (1963) and emphasized recently by Forstmeier et al. (2017). The general solution should be that novel results in science must be considered tentative until they can be replicated, so that science becomes a self-correcting process. But the temptation to emphasize a new result is hard to restrain in the era of difficult job searches and media attention to novelty. Perhaps the message is that you should read any “unexpected findings” in Science and Nature with a degree of skepticism.

The cited article published in “The Conversation” goes on to discuss some possible interpretations of what these survey results mean. And the authors lean over backwards to indicate that these survey results do not mean that we should not trust the conclusions of science, which unfortunately is exactly what some aspects of the public media have emphasized. Distrust of science can be a justification for rejecting climate change data and rejecting the value of immunizations against diseases. In an era of declining trust in science, these kinds of trivial surveys have shock value but are of little use to scientists trying to sort out the details about how ecological and evolutionary systems operate.

A significant source of these concerns flows from the literature that focuses on medical fads and ‘breakthroughs’ that are announced every day by the media searching for ‘news’ (e.g. “eat butter”, “do not eat butter”). The result is almost a comical model of how good scientists really operate. An essential assumption of science is that scientific results are not written in stone but are always subject to additional testing and modification or rejection. But one consequence is that we get a parody of science that says “you can’t trust anything you read” (e.g. Ashcroft 2017). Perhaps we just need to remind ourselves to be critical, that good science is evidence-based, and then remember George Bernard Shaw’s comment:

Success does not consist in never making mistakes but in never making the same one a second time.

Amrhein, V., Korner-Nievergelt, F., and Roth, T. 2017. The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research. PeerJ  5: e3544. doi: 10.7717/peerj.3544.

Ashcroft, A. 2017. The politics of research-Or why you can’t trust anything you read, including this article! Psychotherapy and Politics International 15(3): e1425. doi: 10.1002/ppi.1425.

Bruns, S.B., and Ioannidis, J.P.A. 2016. p-Curve and p-Hacking in observational research. PLoS ONE 11(2): e0149144. doi: 10.1371/journal.pone.0149144.

Forstmeier, W., Wagenmakers, E.-J., and Parker, T.H. 2017. Detecting and avoiding likely false-positive findings – a practical guide. Biological Reviews 92(4): 1941-1968. doi: 10.1111/brv.12315.

Ioannidis, J.P.A. 2005. Why most published research findings are false. PLOS Medicine 2(8): e124. doi: 10.1371/journal.pmed.0020124.

Medawar, P.B. 1963. Is the scientific paper a fraud? Pp. 228-233 in The Threat and the Glory. Edited by P.B. Medawar. Harper Collins, New York. ISBN 978-0-06-039112-6

A Modest Proposal for a New Ecology Journal

I read the occasional ecology paper and ask myself how this particular paper ever got published when it is full of elementary mistakes and shows no understanding of the literature. But alas, we can rarely do anything about this as individuals. If you object to what a particular paper has concluded because of its methods or analysis, it is usually impossible to submit a critique that the relevant journal will publish. After all, which editor would like to admit that he or she let a hopeless paper through the publication screen? There are some exceptions to this rule, and I list two examples below in the papers by Barraquand (2014) and Clarke (2014). But if you search the Web of Science you will find few such critiques of published ecology papers.

One solution jumped to mind for this dilemma: start a new ecology journal, perhaps entitled Misleading Ecology Papers: Critical Commentary Unfurled. Papers submitted to this new journal would be restricted to a total of 5 pages and 10 references, and all polemics and personal attacks would be forbidden. The key for submissions would be to state a critique succinctly and to suggest a better way to construct the experiment or study, a new method of analysis that is more rigorous, or key papers that were missed because they were published before 2000. These rules would potentially leave a large gap for some very poor papers to avoid criticism, papers that would require a critique longer than the original paper. Perhaps one very long critique could be distinguished each year as a Review of the Year paper. Alternatively, some long critiques could be published in book form (Peters 1991), and not require this new journal. The Editor of the journal would require all critiques to be signed by the authors, but would permit, in exceptional circumstances, the authors to remain anonymous to prevent job losses or, in more extreme cases, execution by the Mafia. Critiques of earlier critiques would be permitted in the new journal, but an infinite regress would be discouraged. Book reviews could also be the subject of a critique, and the great shortage of critical book reviews in the current publication blitz is another gap in the current journals. This new journal would of course be electronic, so there would be no page charges, and all articles would be open access. All the major bibliographic databases like the Web of Science would be encouraged to catalog the publications, and a DOI would be assigned to each paper through CrossRef.

If this new journal became highly successful, it would no doubt be purchased by Wiley-Blackwell or Springer for several million dollars, and if this occurred, the profits would accrue proportionally to all the authors who had published papers to make this journal popular. The sale of course would be contingent on the purchaser guaranteeing not to cancel the entire journal to prevent any criticism of their own published papers.

At the moment, criticism of ecological science does not appear until several years after a poor paper is published, and by that time the Donald Rumsfeld Effect has taken hold and the conclusions of this poor work have acquired the status of truth. For one example, most of the papers critiqued by Clarke (2014) were more than 10 years old. By making the feedback loop much tighter, certainly within one year of a poor paper appearing, budding ecologists could be intercepted before being led off course.

This journal would not be popular with everyone. Older ecologists often strive mightily to prevent any criticism of their prior conclusions, and some young ecologists make their careers by pointing out how misleading some of the papers of the older generation are. This new journal would assist in creating a more egalitarian ecological world by producing humility in older ecologists and more feelings of achievement in young ecologists who must build up their status in the science. Finally, the new journal would be a focal point for graduate seminars in ecology by bringing together and identifying the worst of the current crop of poor papers in ecology. Progress would be achieved.

 

Barraquand, F. 2014. Functional responses and predator–prey models: a critique of ratio dependence. Theoretical Ecology 7(1): 3-20. doi: 10.1007/s12080-013-0201-9.

Clarke, P.J. 2014. Seeking global generality: a critique for mangrove modellers. Marine and Freshwater Research 65(10): 930-933. doi: 10.1071/MF13326.

Peters, R.H. 1991. A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp. ISBN:0521400171

 

On Tipping Points and Regime Shifts in Ecosystems

A new important paper raises red flags about our preoccupation with tipping points, alternative stable states and regime shifts (I’ll call them collectively sharp transitions) in ecosystems (Capon et al. 2015). I do not usually call attention to papers but this paper and a previous review (Mac Nally et al. 2014) seem to me to be critical for how we think about ecosystem changes in both aquatic and terrestrial ecosystems.

Consider an oversimplified example of how a sharp transition might work. Suppose we dumped fertilizer into a temperate clear-water lake. The clear water soon turns into pea soup with a new batch of algal species, a clear shift in the ecosystem, and this change is not good for many of the invertebrates or fish that were living there. Now suppose we stop dumping fertilizer into the lake. In time, and this could be a few years, the lake can either go back to its original state of clear water or it can remain a pea-soup lake for a very long time even though the pressure of added fertilizer has stopped. This second outcome would be a sharp transition (“you cannot go back from here”), and the question for ecologists is how often this happens. Clearly the answer is of great interest to natural resource managers and restoration ecologists.
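
A toy sketch of this “cannot go back” behaviour, using a standard textbook form for algal biomass x under nutrient loading a (my own illustration, not taken from the papers discussed here): dx/dt = a - bx + rx^p/(x^p + h^p). Ramping the loading up and then back down does not retrace the same path, and that hysteresis is the signature of alternative stable states.

import numpy as np

def equilibrium_biomass(a, x0, b=1.0, r=1.0, h=1.0, p=8, dt=0.01, steps=20_000):
    """Integrate dx/dt = a - b*x + r*x**p/(x**p + h**p) to (near) equilibrium from x0."""
    x = x0
    for _ in range(steps):
        x += dt * (a - b * x + r * x**p / (x**p + h**p))
    return x

loadings = np.linspace(0.1, 1.0, 19)      # hypothetical nutrient-loading levels

up, down = [], []
x = 0.1                                   # start as a clear-water lake
for a in loadings:                        # ramp the loading up
    x = equilibrium_biomass(a, x)
    up.append(x)
for a in loadings[::-1]:                  # then ramp it back down
    x = equilibrium_biomass(a, x)
    down.append(x)

for a, x_up, x_down in zip(loadings, up, down[::-1]):
    print(f"loading {a:.2f}: biomass {x_up:.2f} on the way up, {x_down:.2f} on the way down")

At intermediate loadings (roughly 0.4 to 0.65 with these invented parameters) the two columns differ sharply: the lake that has turned turbid stays turbid even after the loading that created it has been removed.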

The history of this idea for me goes back to the 1970s at UBC, when Buzz Holling and Carl Walters were modelling the spruce budworm outbreak problem in eastern Canadian coniferous forests. They produced a model with a manifold surface that tipped the budworm from a regime of high abundance to one of low abundance (Holling 1973). We were all suitably amazed and began to wonder if this kind of thinking might be helpful in understanding snowshoe hare population cycles and lemming cycles. The evidence was very thin for the spruce budworm, but the model was fascinating. Then by the 1980s the bandwagon started to roll, and alternative stable states and regime change seemed to be everywhere. Many ideas about ecosystem change got entangled with sharp transitions, and the following two reviews help to unravel them.

Of the 135 papers reviewed by Capon et al. (2015), very few showed good evidence of alternative stable states in freshwater ecosystems. They highlighted the use and potential misuse of ecological theory by managers trying to predict future ecosystem trajectories, and emphasized the need for a detailed analysis of the mechanisms causing ecosystem change. In a similar paper for estuaries and near-inshore marine ecosystems, Mac Nally et al. (2014) showed that of 376 papers that suggested sharp transitions, only 8 seemed to have sufficient data to satisfy the criteria needed to conclude that a transition had occurred and was linkable to an identifiable pressure. Most of the changes described in these studies are examples of gradual ecosystem change rather than a dramatic shift; indeed, the timescale against which changes are assessed is critical. As always, the devil is in the details.

All of this is to recognize that strong ecosystem changes do occur in response to human actions, but as far as we can tell now they are not often sharp transitions closely linked to an identifiable pressure. The general message is clearly to increase rigor in our ecological publications, and to carry out the long-term studies that provide a background of natural variation in ecosystems, so that we have a ruler against which to measure human-induced changes. Reviews such as these two papers go a long way toward helping ecologists lift our game.

Perhaps it is best to end with part of the abstract in Capon et al. (2015):

“We found limited understanding of the subtleties of the relevant theoretical concepts and encountered few mechanistic studies that investigated or identified cause-and-effect relationships between ecological responses and nominal pressures. Our results mirror those of reviews for estuarine, nearshore and marine aquatic ecosystems, demonstrating that although the concepts of regime shifts and alternative stable states have become prominent in the scientific and management literature, their empirical underpinning is weak outside of a specific environmental setting. The application of these concepts in future research and management applications should include evidence on the mechanistic links between pressures and consequent ecological change. Explicit consideration should also be given to whether observed temporal dynamics represent variation along a continuum rather than categorically different states.”

 

Capon, S.J., Lynch, A.J.J., Bond, N., Chessman, B.C., Davis, J., Davidson, N., Finlayson, M., Gell, P.A., Hohnberg, D., Humphrey, C., Kingsford, R.T., Nielsen, D., Thomson, J.R., Ward, K., and Mac Nally, R. 2015. Regime shifts, thresholds and multiple stable states in freshwater ecosystems; a critical appraisal of the evidence. Science of The Total Environment 517(0): in press. doi:10.1016/j.scitotenv.2015.02.045.

Holling, C.S. 1973. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics 4: 1-23. doi:10.1146/annurev.es.04.110173.000245.

Mac Nally, R., Albano, C., and Fleishman, E. 2014. A scrutiny of the evidence for pressure-induced state shifts in estuarine and nearshore ecosystems. Austral Ecology 39: 898-906. doi:10.1111/aec.12162.

On Indices of Population Abundance

I am often surprised at ecological meetings by how many ecological studies rely on indices rather than direct measures. The most obvious cases involve population abundance. Two common criteria for declaring a species as endangered are that its population has declined more than 70% in the last ten years (or three generations) or that its population size is less than 2500 mature individuals. The criteria are many and every attempt is made to make them quantitative. But too often the methods used to estimate changes in population abundance are based on an index of population size, and all too rarely is the index calibrated against known abundances. If an index increases by 2-fold, e.g. from 20 to 40 counts, it is not at all clear that this means the population size has increased 2-fold. I think many ecologists begin their career thinking that indices are useful and reliable and end their career wondering if they are providing us with a correct picture of population changes.

The subject of indices has been discussed many times in ecology, particularly among applied ecologists. Anderson (2001) challenged wildlife ecologists to remember that indices include an unmeasured term, detectability. Anderson (2001, p. 1295) wrote:

“While common sense might suggest that one should estimate parameters of interest (e.g., population density or abundance), many investigators have settled for only a crude index value (e.g., “relative abundance”), usually a raw count. Conceptually, such an index value (c) is the product of the parameter of interest (N) and a detection or encounter probability (p): then c = pN.”

He noted that many indices used by ecologists make a large assumption that the probability of encounter is a constant over time and space and individual observers. Much of the discussion of detectability flowed from these early papers (Williams, Nichols & Conroy 2002; Southwell, Paxton & Borchers 2008). There is an interesting exchange over Anderson’s (2001) paper by Engeman (2003) followed by a retort by Anderson (2003) that ended with this blast at small mammal ecologists:

“Engeman (2003) notes that McKelvey and Pearson (2001) found that 98% of the small-mammal studies reviewed resulted in too little data for valid mark-recapture estimation. This finding, to me, reflects a substantial failure of survey design if these studies were conducted to estimate population size. ……..O’Connor (2000) should not wonder “why ecology lags behind biology” when investigators of small-mammal communities commonly (i.e., over 700 cases) achieve sample sizes <10. These are empirical methods; they cannot be expected to perform well without data.” (page 290)

Take that you small mammal trappers!

The warnings are clear about index data. In some cases they may be useful but they should never be used as population abundance estimates without careful validation. Even by small mammal trappers like me.
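
To make the arithmetic of Anderson’s relation concrete, here is a minimal sketch with hypothetical numbers: the same true abundance can produce a doubled index simply because detection probability changed.

def index_count(N, p):
    """Expected raw count c = p * N for true abundance N and detection probability p."""
    return p * N

# Year 1: 200 animals with 10% detectability; Year 2: still 200 animals, but 20% detectability.
print(index_count(200, 0.10))   # 20.0
print(index_count(200, 0.20))   # 40.0
# The index doubles (20 -> 40) although the true population has not changed at all;
# without calibrating p, a 2-fold rise in an index says little about abundance.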

Anderson, D.R. (2001) The need to get the basics right in wildlife field studies. Wildlife Society Bulletin, 29, 1294-1297.

Anderson, D.R. (2003) Index values rarely constitute reliable information. Wildlife Society Bulletin, 31, 288-291.

Engeman, R.M. (2003) More on the need to get the basics right: population indices. Wildlife Society Bulletin, 31, 286-287.

McKelvey, K.S. & Pearson, D.E. (2001) Population estimation with sparse data: the role of estimators versus indices revisited. Canadian Journal of Zoology, 79, 1754-1765.

O’Connor, R.J. (2000) Why ecology lags behind biology. The Scientist, 14, 35.

Southwell, C., Paxton, C.G.M. & Borchers, D.L. (2008) Detectability of penguins in aerial surveys over the pack-ice off Antarctica. Wildlife Research, 35, 349-357.

Williams, B.K., Nichols, J.D. & Conroy, M.J. (2002) Analysis and Management of Animal Populations. Academic Press, New York.

Citation Analysis Gone Crazy

Perhaps we should stop and look at the evils of citation analysis in science. Citation analysis began some 15 or 20 years ago with a useful thought that it might be nice to know if one’s scientific papers were being read and used by others working in the same area. But now it has morphed into a Godzilla that has the potential to run our lives. I think the current situation rests on three principles:

  1. Your scientific ability can be measured by the number of citations you receive. This is patent nonsense.
  2. The importance of your research is determined by which journals accept your papers. More nonsense.
  3. Your long-term contribution to ecological science can be measured precisely by your h-index or some variant.

These principles appeal greatly to the administrators of science and to many of the people who dish out the money for scientific research. You can justify your decisions with numbers. Excellent job to make the research enterprise quantitative. The contrary view, which I might hope is held by many scientists, rests on three different principles:

  1. Your scientific ability is difficult to measure and can only be approximately evaluated by another scientist working in your field. Science is a human enterprise not unlike music.
  2. The importance of your research is impossible to determine in the short term of a few years, and in a subject like ecology probably will not be recognized for decades after it is published.
  3. Your long-term contribution to ecological science will have little to do with how many citations you accumulate.

It will take a good historian to evaluate these alternative views of our science.

This whole issue would not matter except for the fact that it is eroding science hiring and science funding. The latest I have heard is that Norwegian universities are now given a large amount of money by the government if they publish a paper in SCIENCE or NATURE, and a very small amount of money if they publish the same results in the CANADIAN JOURNAL OF ZOOLOGY or – God forbid – the CANADIAN FIELD NATURALIST (or equivalent ‘lower class’ journals). I am not sure how many other universities will fall under this kind of reward-based publication scoring. All of this is done, I think, because we do not wish to involve the human judgment factor in decision making. I suppose you could argue that this is a grand experiment like climate change (with no controls) – use these scores for 30 years and then see if they worked better than the old system based on human judgment. How does one evaluate such experiments?

NSERC (Natural Sciences and Engineering Research Council) in Canada has been trending in that direction in the last several years. In the eternal good old days scientists read research proposals and made judgments about the problem, the approach, and the likelihood of success of a research program. They took time to discuss at least some of the issues. But we move now into quantitative scores that replace human judgment, which I believe to be a very large mistake.

I view ecological research and practice much as I think medical research and medical practice operate. We do not know how well certain studies and experiments will work, any more than a surgeon knows exactly whether a particular technique or treatment will work, or whether a particular young doctor will become a good surgeon; we gain this knowledge by experience in a mostly non-quantitative manner. Meanwhile we should encourage young scientists to try new ideas and studies, and give them opportunities based on judgments rather than on counts of papers or citations. Currently we want to rank everyone and every university like sporting teams and find out the winner. This is a destructive paradigm for science. It works for tennis but not for ecology.

Back to p-Values

Alas, ecology has slipped lower on the totem pole of serious sciences because of an article that has captured the attention of the media:

Low-Décarie, E., Chivers, C., and Granados, M. 2014. Rising complexity and falling explanatory power in ecology. Frontiers in Ecology and the Environment 12(7): 412-418. doi: 10.1890/130230.

There is much that is positive in this paper, so you should read it, if only to decide whether or not to use it in a graduate seminar in statistics or in ecology. Much of what is concluded is certainly true: there are more p-values in papers now than there were some years ago. The question then comes down to what these kinds of statistics mean, how they would justify the conclusion captured by the media that explanatory power in ecology is declining over time, and what to do about falling explanatory power. Since, as far as I can see, most statisticians today seem to believe that p-values are meaningless (e.g. Ioannidis 2005), one wonders what the value of showing this trend is. A second item that most statisticians agree about is that R2 values are a poor measure of anything other than the items in a particular data set. Any ecological paper that analyses and reports data runs many tests providing p-values and R2 values, of which only some are reported. It would be interesting to do a comparison with what is recognized as a mature science (like physics or genetics) by asking whether the past revolutions in understanding and predictive power in those sciences corresponded with increasing numbers of p-values or R2 values.
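
A minimal simulated sketch (not data from the paper) of why counting p-values says little about explanatory power: with a large sample, a trivially weak relationship is “highly significant” even though its R2 is close to zero.

import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 10_000
x = rng.normal(size=n)
y = 0.05 * x + rng.normal(size=n)         # true effect explains only ~0.25% of the variance

fit = stats.linregress(x, y)
print(f"p-value = {fit.pvalue:.1e}")       # far below 0.05
print(f"R2      = {fit.rvalue**2:.4f}")    # roughly 0.002-0.003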

To ask these questions is to ask what the metric of scientific progress is. At the present time we confuse progress with some indicators that may have little to do with scientific advancement. Journal editors race to increase their impact factor, which is interpreted as a measure of importance. For appointments to university positions we ask how many citations a person has and how many papers they have produced. We confuse scientific value with some numbers which, ironically, might have a very low R2 value as predictors of potential progress in a science. These numbers make sense as metrics to tell publication houses how influential their journals are, or to tell Department Heads how fantastic their job choices are, but we fool ourselves if we accept them as indicators of value to science.

If you wish to judge scientific progress you might wish to look at books that have gathered together the most important papers of the time, and examine a sequence of these from the 1950s to the present time. What is striking is that papers that seemed critically important in the 1960s or 1970s are now thought to be concerned with relatively uninteresting side issues, and conversely papers that were ignored earlier are now thought to be critical to understanding. A list of these changes might be a useful accessory to anyone asking about how to judge importance or progress in a science.

A final comment would be to look at the reasons why a relatively mature science like geology has completely failed to be able to predict earthquakes in advance, and even to specify the locations of some earthquakes (Stein et al. 2012; Uyeda 2013). Progress in understanding does not of necessity produce progress in prediction. And we ought to be wary of confusing progress with p-values and R2 values.

Ioannidis, J.P.A. 2005. Why most published research findings are false. PLoS Medicine 2(8): e124. doi: 10.1371/journal.pmed.0020124.

Stein, S., Geller, R.J., and Liu, M. 2012. Why earthquake hazard maps often fail and what to do about it. Tectonophysics 562-563: 1-24. doi: 10.1016/j.tecto.2012.06.047.

Uyeda, S. 2013. On earthquake prediction in Japan. Proceedings of the Japan Academy, Series B 89(9): 391-400. doi: 10.2183/pjab.89.391.