Category Archives: Evaluating Research Quality

But It is Complicated in Ecology

Consider two young ecologists both applying for the same position in a university or an NGO. To avoid a legal challenge, I will call one Ecologist C (as short for “conservative”), and the second candidate Ecologist L (as short for “liberal”). Both have just published reviews of conservation ecology. Person L has stated very clearly that the biological world is in rapid, catastrophic collapse with much unrecoverable extinction on the immediate calendar, and that this calls for emergency large-scale funding and action. Person C has reviewed similar parts of the biological world and concluded that some groups of animals and plants are of great concern, but that many other groups show no strong signals of collapse or that the existing data are inadequate to decide if populations are declining or not. Which person will get the job and why?

There is no answer to this hypothetical question, but it is worth pondering the potential reasons for these rather different perceptions of the conservation biology world. First, it is clear that candidate L’s catastrophic statements will be on the front page of the New York Times tomorrow, while much less publicity will accrue to candidate C’s statements. This is a natural response to the “This Is It!” approach so much admired by thrill seekers, in contrast to the “Maybe Yes, Maybe No” and “It Is Complicated” approaches. But rather than get into a discussion of personality types, it may be useful to dig a bit deeper into what this question reveals about contemporary conservation ecology.

Good scientists attempting to resolve this dichotomy of opinion in conservation ecology would seek data on several questions.
(1) Are there sufficient data available to reach a conclusion on this important topic?
(2) If there are not sufficient data, should we err on the side of caution in our conclusions and risk “crying wolf”?
(3) Can we agree on what types of data are needed and admissible in this discussion?

On all these simple questions ecologists will argue very strongly. For question (1), some might assume that a 20-year study of a dominant species is sufficient to determine a trend (e.g. Plaza and Lambertucci 2020). Others will be happy with 5 years of data on several species. Can we substitute space for time? Can we simply use genetic data to answer all conservation questions (Hoffmann et al. 2017)? If the habitat we are studying contains 75 species of plants or invertebrates, for how many species must we have accurate data to support Ecologist L? Or do we need any data at all if we are convinced about climate change? Alfonzetti et al. (2020) and Wang et al. (2020) give two good examples of data problems with plants and butterflies with respect to conservation status.

For question (2) there will be much more disagreement, because this is not about the science involved but is a personal judgement about the future consequences of projected trends in species numbers. These judgements are typically based loosely on past observations of similar ecological populations or communities, some of which have declined in abundance and disappeared (the Passenger Pigeon Paradigm), while others have recovered from minimal abundance to become common again (the Kirtland’s Warbler Paradigm). The problem circles back to the question of what are ‘sufficient data’ to decide conservation policies.

Fortunately, most policy-oriented NGO conservation groups concentrate on the larger conservation issues of finding and protecting large areas of habitat from development and pushing strongly for policies that rein in climate change and reduce pollution produced by poor business and government practices.

In the current political and social climate, I suspect Ecologist L would get the job rather than Ecologist C. I can think of only one university hiring decision in my career that was sealed by a very self-assured candidate like person L, who said to the departmental head and the search committee, “Hire me and I will put this university on the MAP!” We decided in this case that we did not want to be on that particular MAP.

At present you can see all these questions are common in any science dealing with an urgent problem, as illustrated by the Covid-19 pandemic discussions, although much more money is being thrown at that disease issue than we ever expect to see for conservation or ecological science in general. It really is complicated in all science that is important to us.

Alfonzetti, M., et al. (2020). Shortfalls in extinction risk assessments for plants. Australian Journal of Botany 68, 466-471. doi: 10.1071/BT20106.

Hoffmann, A.A., Sgro, C.M., and Kristensen, T.N. (2017). Revisiting adaptive potential, population size, and conservation. Trends in Ecology & Evolution 32, 506-517. doi: 10.1016/j.tree.2017.03.012.

Plaza, P.I. and Lambertucci, S.A. (2020). Ecology and conservation of a rare species: What do we know and what may we do to preserve Andean condors? Biological Conservation 251, 108782. doi: 10.1016/j.biocon.2020.108782.

Wang, W.-L., Suman, D.O., Zhang, H.-H., Xu, Z.-B., Ma, F.-Z., and Hu, S.-J. (2020). Butterfly conservation in China: From science to action. Insects (Basel, Switzerland) 11, 661. doi: 10.3390/insects11100661.

On Citations and Scientific Research in Ecology

Begin with a few common assumptions in science.
(1) Higher citation rates define more valuable science.
(2) Recent references are more valuable than older references.
(3) Retracted scientific research is rapidly recognized and dropped from discussion.
(4) The vast majority of scientific research reported in papers is read by other scientists.
(5) Results cited in scientific papers are cited correctly in subsequent references.

The number of publications in ecological science is growing rapidly world-wide, and a corollary of this must be that the total number of citations is growing even more rapidly (e.g. Westgate et al. 2020). It is well recognized that citations are spread unevenly among published papers, and the claim that nearly 50% of published papers never receive any citations at all is commonly repeated. I have not been able to validate this for papers in the ecological sciences. The more important question is whether the most highly cited papers are the most significant for progress in ecological understanding. If that were the case, you could simply ignore the vast majority of the published literature and save reading time. But this seems unlikely to be correct for ecological science.

The issue of scientific importance is a time bomb partly because ‘importance’ may be redefined over time as sciences mature, and this redefinition may occur in years or tens of years. A classic example is the citation history of Charles Elton’s (1958) book on invasions (Richardson and Pyšek 2008). Published in 1958, this book had almost no citations until the 1990s. Citations have become more and more important in the ranking of individual scholars as well as university departments during the last 20 years (Keville et al. 2017). This has occurred despite continuous warnings that citations are not valid for comparing individuals of different age or departments in different academic fields (Patience et al. 2017). If you publish in Covid-19 research this year, you are likely to get more citations than the person working in earthworm taxonomy.

Most published papers conform to the general belief that citing the most recent papers is better than citing older ones. If this belief were true, it would simplify the education of graduate students and facilitate teaching: one could read only the recent literature. But the simple fact is that in ecology older papers often (but not always) offer better perspectives than more recent papers, or reveal paths of research that have failed to lead to ecological wisdom.

Newspapers revel in stories of retracted research, if only to show that scientists are human. Of some interest are studies showing that retracted research continues to be cited. Hagberg (2020) cites a case in which a paper was retracted but continued to be cited as much after retraction as before. Fortunately, retracted research is rare in the ecological sciences, though not absent, and the conflicting ways in which scientific journals deal with papers whose results are found to be fraudulent after publication leave much to be desired.

A final comment on references is a warning to anyone reading the discussion or conclusions of a paper. Smith and Cumberledge (2020) examined a random sample of 250 citations from the five most highly cited scientific publications of today and found a 25% rate of ‘quotation errors’. Quotation errors are distinct from ‘citation errors’, which are minor mistakes in the year of publication, page numbers, or names given in citations. A quotation error arises when the original paper says XX but the citing paper reports YY, a contradiction of what was originally reported. About 33% of these errors could be called ‘unsubstantiated’, and about 50% of the remaining quotation errors fell into an ‘impossible to substantiate’ category. Their study reinforced earlier work by Todd et al. (2007) and pointed out to readers a weakness in the current use of references in scientific writing that is often missed by reviewers.

On a more positive note about how to increase your citation rate, Murphy et al. (2019) surveyed the titles of 3562 papers from four ecology and entomology journals and their subsequent citation rates. They found that papers that did not include the Latin name of a species in the title were cited 47% more often than papers with Latin names in the title. The number of words in the title had almost no effect on citation rates. They were unable to determine whether the injection of humor in the title had any effect on citation rates because too few papers attempted humor in the title.

Elton, C.S. (1958) ‘The Ecology of Invasions by Animals and Plants.’ (Methuen: London.) ISBN: 978-3-030-34721-5

Hagberg, J.M. (2020). The unfortunately long life of some retracted biomedical research publications. Journal of Applied Physiology 128, 1381-1391. doi: 10.1152/japplphysiol.00003.2020.

Keville, M.P., Nelson, C.R., and Hauer, F.R. (2017). Academic productivity in the field of ecology. Ecosphere 8, e01620. doi: 10.1002/ecs2.1620.

Murphy, S.M., Vidal, M.C., Hallagan, C.J., Broder, E.D., and Barnes, E.E. (2019). Does this title bug (Hemiptera) you? How to write a title that increases your citations. Ecological Entomology 44, 593-600. doi: 10.1111/een.12740.

Patience, G.S., Patience, C.A., Blais, B., and Bertrand, F. (2017). Citation analysis of scientific categories. Heliyon 3, e00300. doi: https://doi.org/10.1016/j.heliyon.2017.e00300.

Richardson, D.M. and Pyšek, P. (2008). Fifty years of invasion ecology – the legacy of Charles Elton. Diversity and Distributions 14, 161-168. doi: 10.1111/j.1472-4642.2007.00464.x.

Smith, N. and Cumberledge, A. (2020). Quotation errors in general science journals. Proceedings of the Royal Society. A, 476, 20200538. doi: 10.1098/rspa.2020.0538.

Todd, P.A., Yeo, D.C.J., Li, D., and Ladle, R.J. (2007). Citing practices in ecology: can we believe our own words? Oikos 116, 1599-1601. doi: 10.1111/j.2007.0030-1299.15992.x

Westgate, M.J., Barton, P.S., Lindenmayer, D.B., and Andrew, N.R. (2020). Quantifying shifts in topic popularity over 44 years of Austral Ecology. Austral Ecology 45, 663-671. doi: 10.1111/aec.12938.

On the Use of Statistics in Ecological Research

There is an ever-deepening cascade of statistical methods and if you are going to be up to date you will have to use and cite some of them in your research reports or thesis. But before you jump into these methods, you might consider a few tidbits of advice. I suggest three rules and a few simple guidelines:

Rule 1. For descriptive papers keep to descriptive statistics. Every good basic statistics book has advice on when to use means to describe “average values” and when to use medians or percentiles. Follow their advice, and do not generate hypotheses in your report except in the discussion. And follow the simple advice of statisticians not to generate and then test a hypothesis with the same set of data. Descriptive papers are most valuable. They can lead us to speculations and suggest hypotheses and explanations, but they do not lead us to strong inference.

Rule 2. For explanatory papers, the statistical rules become more complicated. For scientific explanation you need two or more alternative hypotheses that make different, non-overlapping predictions. The predictions must involve biological or physical mechanisms. Correlations alone are not mechanisms. They may help lead you to a mechanism, but the key is that the mechanism must involve a cause and an effect. A correlation of a decline in whale numbers with a decline in sunspot numbers may be interesting, but only if you can tie this correlation to an actual mechanism that affects the birth or death rates of the whales.

Rule 3. For experimental papers you have access to a large variety of books and papers on experimental design. You must have a control (unmanipulated) group or, for a comparative experiment, a group A with treatment X and a group B with treatment Y. The literature on experimental design offers many rules that give good guidance (e.g. Anderson 2008; Eberhardt 2003; Johnson 2002; Shadish et al. 2002; Underwood 1990).

For all these ecology papers, consider the best of the recent statistical admonitions. Use statistics to enlighten the reader, not to obfuscate. Use graphics to illustrate major results. Avoid p-values (Anderson et al. 2000; Ioannidis 2019a, 2019b). Measure effect sizes for different treatments (Nakagawa and Cuthill 2007). Add to these general admonitions the conventional rules of paper or report submission: do not argue with the editor, argue a small amount with the reviewers (none are perfect), and put your main messages in the abstract. And remember that it is possible there was some interesting research done before the year 2000.
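
To make the last two admonitions concrete, here is a minimal Python sketch of my own (not part of the original advice) that reports a standardized effect size with a bootstrap confidence interval instead of a bare p-value; the data and the two group labels are invented.

```python
# A minimal sketch, assuming invented data: report an effect size with a
# confidence interval, in the spirit of Nakagawa and Cuthill (2007).
import numpy as np

rng = np.random.default_rng(42)
control = rng.normal(10.0, 2.0, size=30)   # hypothetical biomass, untreated plots
treated = rng.normal(11.5, 2.0, size=30)   # hypothetical biomass, fertilized plots

def cohens_d(a, b):
    """Standardized mean difference (Cohen's d) between two samples."""
    n1, n2 = len(a), len(b)
    pooled_sd = np.sqrt(((n1 - 1) * np.var(a, ddof=1) +
                         (n2 - 1) * np.var(b, ddof=1)) / (n1 + n2 - 2))
    return (np.mean(b) - np.mean(a)) / pooled_sd

# Percentile bootstrap for a 95% confidence interval on the effect size.
boot = [cohens_d(rng.choice(control, len(control)),
                 rng.choice(treated, len(treated)))
        for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"Cohen's d = {cohens_d(control, treated):.2f} (95% CI {lo:.2f} to {hi:.2f})")
```

Reporting the interval makes both the size and the uncertainty of the treatment effect explicit, which a bare p-value does not.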

Anderson, D.R. (2008) ‘Model Based Inference in the Life Sciences: A Primer on Evidence.’ (Springer: New York.). 184 pp.

Anderson, D.R., Burnham, K.P., and Thompson, W.L. (2000). Null hypothesis testing: problems, prevalence, and an alternative. Journal of Wildlife Management 64, 912-923.

Eberhardt, L.L. (2003). What should we do about hypothesis testing? Journal of Wildlife Management 67, 241-247.

Ioannidis, J.P.A. (2019a). Options for publishing research without any P-values. European Heart Journal 40, 2555-2556. doi: 10.1093/eurheartj/ehz556.

Ioannidis, J. P. A. (2019b). What have we (not) learnt from millions of scientific papers with P values? American Statistician 73, 20-25. doi: 10.1080/00031305.2018.1447512.

Johnson, D.H. (2002). The importance of replication in wildlife research. Journal of Wildlife Management 66, 919-932.

Nakagawa, S. and Cuthill, I.C. (2007). Effect size, confidence interval and statistical significance: a practical guide for biologists. Biological Reviews 82, 591-605. doi: 10.1111/j.1469-185X.2007.00027.x.

Shadish, W.R, Cook, T.D., and Campbell, D.T. (2002) ‘Experimental and Quasi-Experimental Designs for Generalized Causal Inference.’ (Houghton Mifflin Company: New York.)

Underwood, A. J. (1990). Experiments in ecology and management: Their logics, functions and interpretations. Australian Journal of Ecology 15, 365-389.

On Three Kinds of Ecology Papers

There are many possible types of papers that discuss ecology; here I want to deal only with empirical studies of terrestrial and aquatic populations, communities, or ecosystems. I will not discuss theoretical or modelling studies. I suggest it is possible to classify papers in ecological science journals that deal with field studies into three categories, which I will call Descriptive Ecology, Explanatory Ecology, and Experimental Ecology. Papers in all these categories deal with a description of some aspect of the ecological world and how it works, but they differ in their scientific impact.

Descriptive Ecology publications are essential to ecological science because they present details of the natural history of an ecological population or community that are vital to our growing understanding of the biota of the Earth. There is much literature in this group, and ecologists all have piles of books on the local natural history of birds, moths, turtles, and large mammals, to mention only a few. Fauna and flora compilations pull much of this information together to guide beginning students and the interested public toward better knowledge of local fauna and flora. These publications are extremely valuable because they form the natural history basis of our science, and they greatly outnumber the other two categories of papers. The importance of this information has been a continuous message of ecologists over many years (e.g. Bartholomew 1986; Dayton 2003; Travis 2020).

The scientific journals that professional ecologists read are mostly concerned with papers that can be classified as Explanatory Ecology and Experimental Ecology. In a broad sense these two categories can be described as providing a good story to tie together and thus explain the known facts of natural history, or alternatively as defining a set of hypotheses that provide alternative explanations for these facts and then testing these hypotheses experimentally. Rigorous ecology, like all good science, proceeds from the explanatory phase to the experimental phase. Good natural history provides several possible explanations for ecological events but does not stop there. If a particular bird population is declining, we need first to make a guess from natural history about whether this decline might be due to disease, habitat loss, or predation. But to proceed to successful management of this conservation problem, we need studies that distinguish the cause(s) of our ecological problems, as recognized by Caughley (1994) and emphasized by Hone et al. (2018). Consequently, the flow in all the sciences is from descriptive studies to explanatory ideas to experimental validation. Without experimental validation, ‘ecological ideas’ can transform into ‘ecological opinions’ to the detriment of our science. This is not a new view of scientific method (Popper 1963), but it does need to be repeated (Betini et al. 2017).

If you think I repeat this point too often, I suggest you do a survey of how often ecological papers in your favourite journal are published without ever using the word ‘hypothesis’ or ‘experiment’. A historical survey of these or similar words would be a worthwhile endeavour for an honours or M.Sc. student in any one of the ecological subdisciplines. The favourite explanation offered in many current papers is climate change, a particularly difficult hypothesis to test because, if it is specified vaguely enough, it is impossible to test experimentally. Telling interesting stories should not be confused with rigorous experimental ecology.
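
If you want to attempt such a survey, here is a tiny illustrative sketch of my own; the folder name “abstracts” and the plain-text file format are assumptions, and a real survey would parse Web of Science or journal exports instead.

```python
# A toy sketch, assuming a folder of plain-text abstracts (path is hypothetical):
# count how many ever mention "hypothesis" or "experiment".
from pathlib import Path

keywords = ("hypothesis", "hypotheses", "experiment")
files = list(Path("abstracts").glob("*.txt"))

n_hits = sum(
    any(k in f.read_text(encoding="utf-8", errors="ignore").lower() for k in keywords)
    for f in files
)
if files:
    print(f"{n_hits} of {len(files)} abstracts "
          f"({100 * n_hits / len(files):.0f}%) mention hypothesis/experiment")
```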

Bartholomew, G.A. (1986). The role of natural history in contemporary biology. BioScience 36, 324-329. doi: 10.2307/1310237

Betini, G.S., Avgar, T., and Fryxell, J.M. (2017). Why are we not evaluating multiple competing hypotheses in ecology and evolution? Royal Society Open Science 4, 160756. doi: 10.1098/rsos.160756.

Caughley, G. (1994). Directions in conservation biology. Journal of Animal Ecology 63, 215-244. doi: 10.2307/5542

Dayton, P.K. (2003). The importance of the natural sciences to conservation. American Naturalist 162, 1-13. doi: 10.1086/376572

Hone, J., Drake, A., and Krebs, C.J. (2018). Evaluating wildlife management by using principles of applied ecology: case studies and implications. Wildlife Research 45, 436-445. doi: 10.1071/WR18006.

Popper, K. R. (1963) ‘Conjectures and Refutations: The Growth of Scientific Knowledge.’ (Routledge and Kegan Paul: London.)

Travis, J. (2020). Where is natural history in ecological, evolutionary, and behavioral science? American Naturalist 196, 1-8. doi: 10.1086/708765.

On Declining Bird Populations

The conservation literature and the media are alive with cries of declining bird populations around the world (Rosenberg et al. 2019). Birds are well liked by people and an important part of our environment, so they garner a lot of attention when the cry goes out that all is not well. The problem from a scientific perspective is what evidence is required to “cry wolf”. There are many different opinions on what data provide reliable evidence. There is a splendid critique of the Rosenberg et al. paper by Brian McGill that you should read:
https://dynamicecology.wordpress.com/2019/09/20/did-north-america-really-lose-3-billion-birds-what-does-it-mean/

My object here is to add a comment from the viewpoint of population ecology. It might be useful for bird ecologists to have a brief overview of what ecological evidence is required to decide that a bird population or a bird species or a whole group of birds is threatened or endangered. One simple way to make this decision is with a verbal flow chart and I offer here one example of how to proceed.

  1. Get accurate and precise data on the populations of interest. If you claim a population is declining or endangered, you need to define the population and know its abundance over a reasonable time period.

Note that this is already a nearly impossible demand. For birds that are continuously resident it is possible to census them well. Let me guess that continuous residency occurs in perhaps 5% or fewer of the birds of the world. The other birds we would like to protect are global or local migrants, or move unpredictably in search of food resources, so it is difficult to define a population and determine whether the population as a whole is rising or falling. Compounding all this are the truly rare bird species, which, like all rare species, are difficult to census. Dorey and Walker (2018) examine these concerns for Canada.

The next problem is what constitutes a reasonable time period for the census data. The Committee on the Status of Endangered Wildlife in Canada (COSEWIC) gives 10 years or 3 generations, whichever is longer (see the web link below). So now we need to know the generation time of the species of concern. We can make a guess at generation time, but let us stick with 10 years for the moment. For how many bird species in Canada do we have 10 years of accurate population estimates? (A minimal sketch of estimating a trend from such a 10-year series is given at the end of this flow chart.)

  2. Next, we need to determine the causes of the decline if we wish to instigate management actions. Populations decline because of a falling reproductive rate, increasing death rate, or higher emigration rates. There are very few birds for which we have 10 years of diagnosis for the causes of changes in these vital rates. Strong conclusions should not rest on weak data.

The absence of many of these required data forces conservation biologists to guess about what is driving numbers down, knowing only that population numbers are falling. Typically, many things are happening over the 10 years of assessment – climate is changing, habitats are being lost or gained, invasive species are spreading, new toxic chemicals are being used for pest control, diseases are appearing; the list is long. We have little time or money to determine the critical limiting factors. We can only make a guess.

  3. At this stage we must specify an action plan to recommend management actions for the recovery of the declining bird population. Management actions are limited. We cannot in the short term alter climate. Regulating toxic chemical use in agriculture takes years. In a few cases we can set aside more habitat as a generalized solution for all declining birds. We have difficulty controlling invasive species, and some invasive species might be native species expanding their geographic range (e.g. Bodine and Capaldi 2017, Thibault et al. 2018).

Conservation ecologists are now up against the wall because all management actions that are recommended will cost money and will face potential opposition from some people. Success is not guaranteed because most of the data available are inadequate. Medical doctors face the same problem with rare diseases and uncertain treatments when deciding how to treat patients with no certainty of success.
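
Here is the trend sketch promised above: a minimal, illustrative calculation of a 10-year population trend from annual counts. The counts and the log-linear model are invented, and the actual COSEWIC quantitative criteria (linked at the end of this post) should be consulted before labelling any real decline.

```python
# A minimal sketch (not from the post) of estimating a 10-year population trend
# from annual counts. The counts and the log-linear model are invented for
# illustration only.
import numpy as np

years = np.arange(2010, 2020)                       # 10 annual surveys
counts = np.array([620, 640, 598, 575, 560, 544, 530, 541, 500, 488])

# Fit log(count) = a + r * t: r is the instantaneous annual rate of change.
t = years - years[0]
r, a = np.polyfit(t, np.log(counts), 1)

# Residual bootstrap for a rough 95% confidence interval on r.
rng = np.random.default_rng(1)
resid = np.log(counts) - (a + r * t)
boot_r = [np.polyfit(t, a + r * t + rng.choice(resid, len(resid), replace=True), 1)[0]
          for _ in range(5000)]
lo, hi = np.percentile(boot_r, [2.5, 97.5])

ten_year_change = 100 * (np.exp(r * 10) - 1)        # percent change over 10 years
print(f"annual rate r = {r:.3f}  (95% CI {lo:.3f} to {hi:.3f})")
print(f"implied change over 10 years = {ten_year_change:.0f}%")
```

Note that such a calculation only quantifies whether numbers are falling; it says nothing about which vital rates are responsible or which management action would help.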

In my opinion the data on which the present concern over bird losses is based are too poor to justify the hyper-publicity about declining birds. I realize most conservation biologists will disagree, but that is why I think we need to lift our game by adopting a more rigorous set of data rules for the categories of concern in conservation. A more balanced tone of concern may be more useful in gathering public support for management efforts. Stanton et al. (2018) provide a good example for farmland birds. Overuse of the word ‘extinction’ is counterproductive in my opinion. Providing better data is highly desirable, so that conservation papers do not always end with the statement ‘but detailed mechanistic studies are lacking’. Pleas for declining populations ought to be balanced by recommendations for solutions to the problem. Local solutions are most useful; global solutions are critical in the long run but, given current global governance, remain largely fairy tales.

Bodine, E.N. and Capaldi, A. (2017). Can culling Barred Owls save a declining Northern Spotted Owl population? Natural Resource Modeling 30, e12131. doi: 10.1111/nrm.12131.

Dorey, K. and Walker, T.R. (2018). Limitations of threatened species lists in Canada: A federal and provincial perspective. Biological Conservation 217, 259-268. doi: 10.1016/j.biocon.2017.11.018.

Rosenberg, K.V., et al. (2019). Decline of the North American avifauna. Science 366, 120-124. doi: 10.1126/science.aaw1313.

Stanton, R.L., Morrissey, C.A., and Clark, R.G. (2018). Analysis of trends and agricultural drivers of farmland bird declines in North America: A review. Agriculture, Ecosystems & Environment 254, 244-254. doi: 10.1016/j.agee.2017.11.028.

Thibault, M., et al. (2018). The invasive Red-vented bulbul (Pycnonotus cafer) outcompetes native birds in a tropical biodiversity hotspot. PLoS ONE 13, e0192249. doi: 10.1371/journal.pone.0192249.

http://cosewic.ca/index.php/en-ca/assessment-process/wildlife-species-assessment-process-categories-guidelines/quantitative-criteria

On Progress in Ecology

In ecology we are continually discussing what progress we are making in answering the central questions of our science. For this reason, it is sometimes interesting to compare our situation with that of economics, the queen of the social sciences, where the same argument also continues. A review by David Graeber (2019) in the New York Review of Books contains some comments about the ‘theoretical war’ in economics that might apply to some ecology subdisciplines. In it he discusses the arguments in social science between two divergent views of economics, that of the Keynesian school and that of the now dominant Neoclassical school led by Friedrich Hayek and later by Milton Friedman and many others of the Chicago School. John Maynard Keynes threw down a challenge illustrated in this quote from Graeber (2019):

“In other words, ‘(Keynes)’ assumed that the ground was always shifting under the analysts’ feet; the object of any social science was inherently unstable. Max Weber, for similar reasons, argued that it would never be possible for social scientists to come up with anything remotely like the laws of physics, because by the time they had come anywhere near to gathering enough information, society itself, and what analysts felt was important to know about it, would have changed so much that the information would be irrelevant. (p. 57)”

The Chicago School, in rebutting Keynes, argued that simplified economic models could provide precise quantitative predictions. Graeber (2019) comments:

“Surely there’s nothing wrong with creating simplified models. Arguably, this is how any science of human affairs has to proceed. But an empirical science then goes on to test those models against what people actually do, and adjust them accordingly. This is precisely what economists did not do. Instead, they discovered that, if one encased those models in mathematical formulae completely impenetrable to the noninitiate, it would be possible to create a universe in which those premises could never be refuted. (“All actors are engaged in the maximization of utility. What is utility? Whatever it is that an actor appears to be maximizing.”) The mathematical equations allowed economists to plausibly claim theirs was the only branch of social theory that had advanced to anything like a predictive science.  (p. 57)”

In ecology the divergence between schools of thought about progress has never been quite this distinct. Shades of complaint are evident in the writings of Peters (1991), and the burst of comment that followed ranged from optimism (e.g. Bibby 2003) to further support for Peters’ critique (Underwood et al. 2000, Graham and Dayton 2002). Interest at this time seems to have waned in favour of very specific topics for review. If you check the Web of Science for the last 5 years for “progress” and “ecology” you will find reviews of root microbes, remote sensing of the carbon cycle, reintroduction of fishes in Canada, and a host of very important reviews of small parts of the broad nature of ecology. As Kingsland (2004, 2005) recognized, ecology is an integrating science that brings together data from diverse fields of study. If this is correct, it is not surprising that ecologists differ in answering questions about progress in ecology. We should stick to small specific problems on which we can make detailed studies, measurements, and experiments to increase understanding of the causes of the original problem.

One of the most thoughtful papers on progress in ecology was that of Graham and Dayton (2002) who made an important point about progress in ecology:

“We believe that many consequences of ecological advancement will be obstacles to future progress. Here we briefly discuss just a few: (1) ecological specialization; (2) erasure of history; and (3) expansion of the literature. These problems are interconnected and have the potential to divert researchers and hinder ecological breakthroughs.” (p. 1486)

My question to all ecologists is whether or not we agree with this ‘prediction’ from 2002. There is no question in my judgement that ecology is much more specialized now, that history is erased in spite of search engines like the Web of Science, and that the ecology literature is booming so rapidly that it feeds back into further specialization. There is no clear solution to these problems. The fact that ecology is integrative has fostered a belief that anyone with a little training in ecological science can call themselves an ecologist and pontificate about the problems of our day. This element of ‘fake news’ is not confined to ecology, and we can counter it only by calling out errors propagated by politicians and others who continue to confuse truth in science with their uneducated beliefs.

Bibby, C.J. (2003). Fifty years of Bird Study. Bird Study 50, 194-210. doi: 10.1080/00063650309461314.

Graham, M.H. and Dayton, P.K. (2002). On the evolution of ecological ideas: paradigms and scientific progress. Ecology 83, 1481-1489. doi: 10.1890/0012-9658(2002)083[1481:OTEOEI]2.0.CO;2.

Graeber, D. (2019). Against Economics. New York Review of Books 66, 52-58. December 5, 2019.

Kingsland, S. (2004). Conveying the intellectual challenge of ecology: an historical perspective. Frontiers in Ecology and the Environment 2, 367-374. doi: 10.1890/1540-9295(2004)002[0367:CTICOE]2.0.CO;2.

Kingsland, S.E. (2005) The Evolution of American Ecology, 1890-2000. Johns Hopkins University Press: Baltimore. 313 pp. ISBN 0801881714

Peters, R.H. (1991). A Critique for Ecology. Cambridge University Press: Cambridge, England. 366 pp. ISBN:0521400171

Underwood, A.J., Chapman, M.G., and Connell, S.D. (2000). Observations in ecology: you can’t make progress on processes without understanding the patterns. Journal of Experimental Marine Biology and Ecology 250, 97-115. doi: 10.1016/S0022-0981(00)00181-7.

On Christmas Holiday Wishes

We are all supposed to make some wishes over the Holiday Season, no matter what our age or occupation. So, this blog is in that holiday spirit with the constraint that I will write about ecology, rather than the whole world, to keep it short and specific. So, here are my 12 wishes for improving the science of ecology in 2020:

  1. When you start your thesis or study, write down in 50 words or less what is the problem, what are the possible solutions to this problem, and what can we do about it.
  2. Take this statement and convert it to a 7 second sound bite that points out clearly for the person on the street or the head of the Research Council why this is an important use of the foundation’s or taxpayers’ money.
  3. Read the literature that is available on your topic of study even if it was published in the last century.
  4. When writing your report, thesis, or paper on your research, prepare an abstract or summary that follows the old rules of stating clearly WHO, WHAT, WHEN, WHERE, WHY, and HOW. Spend much time on this step, since many of your readers will read only this far.
  5. Make tables and graphs that are clear and to the point. Define the points or histograms on a graph.
  6. Define all three- and four-letter acronyms. Not everyone will know what RSE or SECR means.
  7. Remember the cardinal rule of data presentation: if your data are an estimate of some value, provide the confidence limits or credible intervals for that estimate (a minimal sketch of one way to do this follows this list).
  8. Above all be truthful and modest in your conclusions. If your evidence points in one direction but is weak, say so. If the support of your evidence is strong, say so. But do not say that this is the first time anyone has ever suggested your conclusions.
  9. In the discussion of your results, give some space to suggesting what limits apply to your conclusions. Do your statements apply only to brown trout, or to all trout, or to all freshwater fish? Are your conclusions limited to one biogeographic zone, or one plant community, or to one small national park?  
  10. The key point at the end of your report should be what next? You or others will take up your challenges, and since you have worked hard and thought much about the ecological problems you have faced, you should be the best person to suggest some future directions for research.
  11. Once you have completed your report or paper, go back and read again all the literature that is available on your topic of study and review it critically.
  12. Finish your report or paper, keeping in mind the old adage, the perfect is the enemy of the good. It is quite impossible in science to be perfect. Better good than perfect.
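
As promised under wish 7, here is a minimal sketch of attaching an uncertainty interval to a reported estimate; the data are invented, and a t-based interval is only one of several reasonable choices (a bootstrap or a Bayesian credible interval would serve the same purpose).

```python
# A minimal sketch with invented data: report an estimate together with its
# 95% confidence interval rather than the estimate alone.
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
clutch_sizes = rng.poisson(4.2, size=40)      # hypothetical field sample

mean = clutch_sizes.mean()
sem = stats.sem(clutch_sizes)                 # standard error of the mean
lo, hi = stats.t.interval(0.95, len(clutch_sizes) - 1, loc=mean, scale=sem)
print(f"mean clutch size = {mean:.2f} (95% CI {lo:.2f} to {hi:.2f})")
```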

And as you dive into any kind of biological research, it is useful to read about some of the controversies that you may run into as you write your papers or reports, particularly in the statistical treatment of biological data (Hardwicke and Ioannidis 2019, Ioannidis 2019). The statistical controversy over p-values has been a hot issue for several years and you will likely run into it sooner or later (Ioannidis 2019a, Siontis and Ioannidis 2018). The important point you should remember is that ecologists are scientists and our view of the value of our research work is the antithesis of Shakespeare’s Macbeth:

“Life’s but a walking shadow, a poor player that struts and frets his hour upon the stage, and then is heard no more. It is a tale told by an idiot, full of sound and fury, signifying nothing.”
(Act 5, Scene 5)

This is because our scientific work is valuable for conserving life on Earth, and so it must be carried out to a high and improving standard. It will be there as a contribution to knowledge and available for a long time. It may be useful now, or in one year, or perhaps in 10 or 100 years as an important contribution to solving ecological problems. So, we should strive for the best.

Hardwicke, T.E. and Ioannidis, J.P.A. (2019). Petitions in scientific argumentation: Dissecting the request to retire statistical significance. European Journal of Clinical Investigation 49, e13162.  doi: 10.1111/eci.13162.

Ioannidis, J.P.A. (2019). Options for publishing research without any P-values. European Heart Journal 40, 2555-2556. doi: 10.1093/eurheartj/ehz556.

Ioannidis, J.P.A. (2019a). What have we (not) learnt from millions of scientific papers with P values? American Statistician 73, 20-25. doi: 10.1080/00031305.2018.1447512.

Siontis, K.C. and Ioannidis, J.P.A. (2018). Replication, duplication, and waste in a quarter million systematic reviews and meta-analyses. Circulation: Cardiovascular Quality and Outcomes 11, e005212. doi: 10.1161/CIRCOUTCOMES.118.005212.

Is Conservation Ecology Destroying Ecology?

Ecology became a serious science some 100 years ago when the problems that it sought to understand were clear and simple: the reasons for the distribution and abundance of organisms on Earth. It subdivided fairly early into three parts, population, community, and ecosystem ecology. It was widely understood that to understand population ecology you needed to know a great deal about physiology and behaviour in relation to the environment, and to understand community ecology you had to know a great deal about population dynamics. Ecosystem ecology then moved into community ecology plus all the physical and chemical interactions with the whole environment. But the sciences are not static, and ecology in the past 60 years has come to include nearly everything from chemistry and geography to meteorological sciences, so if you tell someone you are an ‘ecologist’ now, they have only a vague idea of what you do.

The latest invader into the ecology sphere has been conservation biology, which in the last 20 years has become a dominant driver of ecological concerns. This has brought ecology to the forefront of publicity and into the resulting political controversies, which is not necessarily bad but does have scientific consequences. ‘Bandwagons’ are for the most part good in science because they attract good students and professors and bring public support on side. Bandwagons are detrimental when they draw too much of the available scientific funding away from critical basic research and champion scientific fads.

The question I wish to raise is whether conservation ecology has become the latest fad in the broad science of ecology and whether this has derailed important background research. Conservation science begins with the broad and desirable goal of preserving all life on Earth and thus thwarting extinctions. This is an impossible goal, and the question then becomes: how can we trim it down to an achievable scientific aim? We could argue that the most important goal is to describe all the species on Earth, so that we would then know what “money” we have in the “bank”. But if we look at the insects alone, we see that this is not an achievable goal in the short term. And the key to many of these issues is what we mean by “the short term”. If we are talking 10 years, we may have very specific goals; if 100 years, we may redesign the goal posts; and if 1000 years, again our views might change.

This is a key point. As humans we design our goals in the time frames of months and a few years, not in general in geological time. Because of climate change we are now being forced to view many things in a shorter and shorter time frame. If you live in Miami, you should do something about sea level rise now. If you grow wheat in Australia, you should worry about decreasing annual rainfall. But science in general does not have a time frame. Technology does, and we need a new phone every year, but the understanding of cancer or the ecology of tropical rain forests does not have a deadline.

But conservation biology has a ticking clock called extinction. Now we can compound our concerns about climate change and conservation to capture more of the funding for biological research in order to prevent extinctions of rare and endangered species. 

Ecological science over the past 40 years has been progressing slowly through population ecology into community and ecosystem ecology, while learning that the details of populations are critical to understanding community function and that knowing how communities operate is necessary for understanding ecosystem change. None of this has been linear progress but rather a halting progression with many deviations and false leads. To push this agenda forward, more funding has clearly been required, because teams of researchers are needed to understand a community and even more people to study an ecosystem. At the same time the value of long-term studies has become evident and equipment has become more expensive.

We have now moved into the Anthropocene, in which in my opinion the focus has shifted completely from trying to answer the primary problems of ecological science to the conservation of organisms. In practice this has too often resulted in research that could only be called poor population ecology: poor in the sense that immediate, short-term answers are demanded for declining species populations with no proper understanding of the underlying problem. We are faced with calls for funding that are ‘crying wolf’ with inadequate data but heartfelt opinions. Recovery plans for single species or closely related groups rest on a set of unstudied opinions that may well be correct, but testing these ideas in a reliable scientific manner would take years. Triage on a large scale is practiced without discussing the issue, and money is thrown at problems based on the publicity generated. Populations of threatened species continue to decline in what can only be described as failed management. Blame is spread in all directions to developers or farmers or foresters or chemical companies. I do not think these are the signs of a good science, which above all ought to work from the strength of evidence and prepare recovery plans based on empirical science.

Part of the problem I think lies in the modern need to ‘do something’, ‘do anything’ to show that you care about a particular problem. ‘We have now no time for slow-moving conventional science, we need immediate results now’. Fortunately, many ecologists are critical of these undesirable trends in our science and carry on (e.g. Amos et al. 2013). You will not likely read tweets about these people or read about them in your daily newspapers. Evidence-based science is rarely quick, and complaints like those that I give here are not new (Sutherland et al. 2004, Likens 2010, Nichols 2012).  

Amos, J.N., Balasubramaniam, S., Grootendorst, L. et al. (2013). Little evidence that condition, stress indicators, sex ratio, or homozygosity are related to landscape or habitat attributes in declining woodland birds. Journal of Avian Biology 44, 45-54. doi: 10.1111/j.1600-048X.2012.05746.x

Likens, G.E. (2010). The role of science in decision making: does evidence-based science drive environmental policy? Frontiers in Ecology and the Environment 8, e1-e9. doi: 10.1890/090132

Nichols, J.D. (2012). Evidence, models, conservation programs and limits to management. Animal Conservation 15, 331-333. doi: 10.1111/j.1469-1795.2012.00574.x

Sutherland, W.J., Pullin, A.S., Dolman, P.M., Knight, T.M. (2004). The need for evidence-based conservation. Trends in Ecology and Evolution 19, 305-308. doi: 10.1016/j.tree.2004.03.018

On Questionable Research Practices

Ecologists and evolutionary biologists are tarred and feathered along with many scientists who are guilty of questionable research practices. So says this article in “The Conversation” on the web:
https://theconversation.com/our-survey-found-questionable-research-practices-by-ecologists-and-biologists-heres-what-that-means-94421?utm_source=twitter&utm_medium=twitterbutton

Read this article if you have time but here is the essence of what they state:

“Cherry picking or hiding results, excluding data to meet statistical thresholds and presenting unexpected findings as though they were predicted all along – these are just some of the “questionable research practices” implicated in the replication crisis psychology and medicine have faced over the last half a decade or so.

“We recently surveyed more than 800 ecologists and evolutionary biologists and found high rates of many of these practices. We believe this to be first documentation of these behaviours in these fields of science.

“Our pre-print results have certain shock value, and their release attracted a lot of attention on social media.

  • 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking)
  • 42% had collected more data after inspecting whether results were statistically significant (a form of “p hacking”)
  • 51% reported an unexpected finding as though it had been hypothesised from the start (known as “HARKing”, or Hypothesising After Results are Known).”

It is worth looking at these claims a bit more analytically. First, the fact that more than 800 ecologists and evolutionary biologists were surveyed tells you nothing about the precision of these results unless you can be convinced this is a random sample. Most surveys are non-random and yet are reported as though they are a random, reliable sample.

Failing to report results is common in science for a variety of reasons that have nothing to do with questionable research practices. Many graduate theses contain results that are never published. Does this mean their data are being hidden? Many results are not reported because they did not find an expected result. This sounds awful until you realize that journals often turn down papers because they are not exciting enough, even though the results are completely reliable. Other results are not reported because the investigator realized once the study was complete that it had not been carried on long enough, and the money had run out to do more research. One would have to have considerable detail about each study to know whether or not these 64% of researchers were “cherry picking”.

Alas, the next problem is more serious. The 42% who are accused of “p-hacking” were possibly just using sequential sampling, or using a pilot study to obtain the statistical parameters needed to conduct a power analysis. Any study that uses replication in time, a highly desirable attribute of an ecological study, would be vilified by this rule. This complaint echoes the statistical advice not to use p-values at all (Ioannidis 2005, Bruns and Ioannidis 2016) and refers back to complaints about inappropriate uses of statistical inference (Amrhein et al. 2017, Forstmeier et al. 2017). The appropriate solution to this problem is to have a defined experimental design with specified hypotheses and predictions rather than an open-ended observational study.
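
To make the distinction concrete, here is a small simulation of my own (not from the survey or the article) showing why unplanned ‘peeking’ followed by collecting more data inflates the false-positive rate, whereas a single test at a pre-specified sample size does not; all sample sizes and batch sizes are arbitrary.

```python
# A small simulation, assuming arbitrary sample sizes: both groups come from the
# same distribution, so every "significant" result is a false positive.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n_experiments, n_start, n_max, step = 2000, 10, 50, 10
false_pos_fixed = false_pos_peeking = 0

for _ in range(n_experiments):
    a = rng.normal(0, 1, n_max)
    b = rng.normal(0, 1, n_max)

    # Fixed design: one test at the planned final sample size.
    if stats.ttest_ind(a, b).pvalue < 0.05:
        false_pos_fixed += 1

    # Peeking: test after every extra batch and stop as soon as p < 0.05.
    for n in range(n_start, n_max + 1, step):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < 0.05:
            false_pos_peeking += 1
            break

print(f"false-positive rate, fixed design: {false_pos_fixed / n_experiments:.3f}")
print(f"false-positive rate, with peeking: {false_pos_peeking / n_experiments:.3f}")
```

A genuine sequential design controls this inflation by specifying the stopping rule and adjusting the significance threshold in advance, which is exactly what distinguishes it from p-hacking.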

The third problem, about unexpected findings, hits at an important aspect of science: the uncovering of interesting and important new results. It was warned about long ago by Medawar (1963) and emphasized recently by Forstmeier et al. (2017). The general solution should be that novel results in science must be considered tentative until they can be replicated, so that science becomes a self-correcting process. But the temptation to emphasize a new result is hard to restrain in the era of difficult job searches and media attention to novelty. Perhaps the message is that you should read any “unexpected findings” in Science and Nature with a degree of skepticism.

The cited article published in “The Conversation” goes on to discuss some possible interpretations of what these survey results mean. And the authors lean over backwards to indicate that these survey results do not mean that we should not trust the conclusions of science, which unfortunately is exactly what some aspects of the public media have emphasized. Distrust of science can be a justification for rejecting climate change data and rejecting the value of immunizations against diseases. In an era of declining trust in science, these kinds of trivial surveys have shock value but are of little use to scientists trying to sort out the details about how ecological and evolutionary systems operate.

A significant source of these concerns flows from the literature that focuses on medical fads and ‘breakthroughs’ that are announced every day by the media searching for ‘news’ (e.g. “eat butter”, “do not eat butter”). The result is almost a comical model of how good scientists really operate. An essential assumption of science is that scientific results are not written in stone but are always subject to additional testing and modification or rejection. But one result is that we get a parody of science that says “you can’t trust anything you read” (e.g. Ashcroft 2017). Perhaps we just need to repeat to ourselves to be critical, that good science is evidence-based, and then remember George Bernard Shaw’s comment:

Success does not consist in never making mistakes but in never making the same one a second time.

Amrhein, V., Korner-Nievergelt, F., and Roth, T. 2017. The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research. PeerJ  5: e3544. doi: 10.7717/peerj.3544.

Ashcroft, A. 2017. The politics of research-Or why you can’t trust anything you read, including this article! Psychotherapy and Politics International 15(3): e1425. doi: 10.1002/ppi.1425.

Bruns, S.B., and Ioannidis, J.P.A. 2016. p-Curve and p-Hacking in observational research. PLoS ONE 11(2): e0149144. doi: 10.1371/journal.pone.0149144.

Forstmeier, W., Wagenmakers, E.-J., and Parker, T.H. 2017. Detecting and avoiding likely false-positive findings – a practical guide. Biological Reviews 92(4): 1941-1968. doi: 10.1111/brv.12315.

Ioannidis, J.P.A. 2005. Why most published research findings are false. PLOS Medicine 2(8): e124. doi: 10.1371/journal.pmed.0020124.

Medawar, P.B. 1963. Is the scientific paper a fraud? Pp. 228-233 in The Threat and the Glory. Edited by P.B. Medawar. Harper Collins, New York. ISBN 978-0-06-039112-6

On Mauna Loa and Long-Term Studies

If there is one important element missing in many of our current ecological paradigms, it is long-term studies. This problem boils down to the lack of proper controls for our observations. If we do not know the background of our data sets, we lack the critical perspective needed to interpret short-term studies. We should have learned this from paleoecologists, whose many studies of plant pollen profiles and other time series from the geological record show that the models of stability that occupy most of the superstructure of ecological theory are not very useful for understanding what is happening in the real world today.

All of this got me wondering what it might have been like for Charles Keeling when he began to measure CO2 levels on Mauna Loa in Hawaii in 1958. Let us do a thought experiment and suggest that he was at that time a typical postgraduate student, told by his professors to get his research done in 4 or at most 5 years and write his thesis. Consider the basic data he would have had if he had been restricted to this framework.

Keeling would have had an interesting seasonal pattern of change that could be discussed and could lead to the recommendation of having more CO2 monitoring stations around the world. And he might have thought that CO2 levels were increasing slightly, but this trend would not be statistically significant, especially if he had been cut off after 4 years of work. In fact the US government closed the Mauna Loa observatory in 1964 to save money, but fortunately Keeling’s program was rescued after a few months of closure (Harris 2010).
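
As a rough illustration of what record length does to a trend estimate, here is a small simulation of my own with synthetic monthly CO2-like data; the baseline, trend, seasonal amplitude, and noise level are invented rather than taken from Keeling’s measurements, and the only point is how the uncertainty of the fitted trend shrinks as the record lengthens.

```python
# A thought-experiment in code, assuming invented parameters: synthetic monthly
# CO2-like data with a seasonal cycle, a slow trend, and noise.
import numpy as np

def synthetic_co2(n_years, trend=0.7, amplitude=3.0, noise=0.3, seed=1958):
    """Monthly series: baseline + linear trend (ppm/yr) + annual cycle + noise."""
    rng = np.random.default_rng(seed)
    t = np.arange(n_years * 12) / 12.0               # time in years
    co2 = (315.0 + trend * t
           + amplitude * np.sin(2 * np.pi * t)
           + rng.normal(0, noise, len(t)))
    return t, co2

def trend_and_ci(t, y):
    """Least-squares fit of trend plus annual cycle; 95% CI on the trend."""
    X = np.column_stack([np.ones_like(t), t,
                         np.sin(2 * np.pi * t), np.cos(2 * np.pi * t)])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    sigma2 = resid @ resid / (len(t) - X.shape[1])
    se = np.sqrt(sigma2 * np.linalg.inv(X.T @ X)[1, 1])
    return beta[1], 1.96 * se

for n_years in (4, 20):
    t, co2 = synthetic_co2(n_years)
    slope, half_width = trend_and_ci(t, co2)
    print(f"{n_years:2d} years: trend = {slope:.2f} ± {half_width:.2f} ppm per year")
```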

Charles Keeling could in fact be a “patron saint” for aspiring ecology graduate students. In 1957, as a postdoc, he worked on developing the best way to measure CO2 in the air with an infrared gas analyzer, and in 1958 he had one of these instruments installed at the top of Mauna Loa in Hawaii (3394 m, 11,135 ft) to measure pristine air. By that time he had 3 published papers (Marx et al. 2017). By 1970, at age 42, his publication list had increased to a total of 22 papers with an accumulated total of about 50 citations to his research. It was not until 1995 that his citation rate began to exceed 100 citations per year, and after 1995, at age 67, his citation rate increased dramatically. So, to continue the thought experiment, with such a record he could scarcely have competed for a postdoctoral fellowship in the modern era, much less a permanent job. Marx et al. (2017) have an interesting discussion of why Keeling was undercited and unappreciated for so long on what is now considered one of the world’s most critical environmental issues.

What is the message for mere mortals? For postgraduate students, do not judge the importance of your research by its citation rate. Worry about your measurement methods. Do not conclude too much from short-term studies. For professors, let your bright students loose with guidance but without being a dictator. For granting committees and appointment committees, do not be fooled into thinking that citation rates are a sure metric of excellence. For theoretical ecologists, be concerned about the precision and accuracy of the data you build models about. And for everyone, be aware that good science was carried out before the year 2000.

And CO2 levels yesterday were 407 ppm while Nero is still fiddling.

Harris, D.C. (2010) Charles David Keeling and the story of atmospheric CO2 measurements. Analytical Chemistry, 82, 7865-7870. doi: 10.1021/ac1001492

Marx, W., Haunschild, R., French, B. & Bornmann, L. (2017) Slow reception and under-citedness in climate change research: A case study of Charles David Keeling, discoverer of the risk of global warming. Scientometrics, 112, 1079-1092. doi: 10.1007/s11192-017-2405-z