Tag Archives: critical hypotheses

Have we moved on from Hypotheses into the New Age of Ecology?

For the last 60 years a group of Stone Age scientists like myself have preached to ecology students that one needs hypotheses to do proper science. Now it has always been clear that not all ecologists follow this precept, and a recent review hammers the point home (Betts et al. 2021). I have always asked my students to read the papers from the Stone Age about scientific progress – Popper (1959), Platt (1964), Peters (1991), and even back to the Pre-Stone Age, Chamberlin (1897). Much has been said about this issue, and the recent Betts et al. (2021) paper pulls much of it together by reviewing papers from 1991 to 2015. Their conclusion is dismal if you think ecological science should make progress in gathering evidence: no change from 1991 to 2015. Multiple alternative hypotheses appeared in 6% of papers, mechanistic hypotheses in 25%, descriptive hypotheses in 12%, and no hypotheses at all in 75% of papers. Why should this be, after years of recommending the gold standard of multiple alternative hypotheses? Can we call ecology a science with these kinds of scores?

The simplest reason is the belief that in the era of Big Data we do not need any hypotheses to understand populations, communities, and ecosystems: we have computers, and that is enough. I think this is a rather silly view, but one would have to interview believers to find out what they consider progress from big data in the absence of hypotheses. The second excuse might be that we cannot be bothered with hypotheses until we have a complete description of life on earth – food webs, interaction webs, diets, competitors, etc. Once we achieve that, we will be able to put together mechanistic hypotheses rapidly. An alternative statement of this view is that we need a great deal of natural history to make any progress in ecology; this is the era of descriptive natural history, which is why 75% of papers do not use the word hypothesis.

But this is all nonsense of course, and try this view on a medical scientist, a physicist, an aeronautical engineer, or a farmer. The fundamental principle of science is cause and effect, or the simple view that we would like to see how things work and why they often do not. Have your students read Romesburg (1981) for an easy introduction, and then the much more analytical book by Pearl and Mackenzie (2018), to gain an understanding of the complexity hidden in the simple view that there is a cause and it produces an effect. Hone et al. (2023) discuss these specific problems with respect to improving our approach to wildlife management.

What can be done about the dismal situation described by Betts et al. (2021)? One useful recommendation for editors and reviewers would be to require every submitted paper to state clearly the hypothesis being tested, and ideally the alternative hypotheses. There could also be ecology journals specifically for natural history where the opposite gateway is set: no use of ‘hypothesis’ in this journal. This would not solve all the problems Betts et al. raise, because some ecology papers are based on the experimental design of ‘do something’ and then later ‘try to invent some way to support a hypothesis’ – after-the-fact science. One problem with this type of literature survey, as Betts et al. recognized, is that papers could be testing hypotheses without using this exact word. Words like ‘proposition’, ‘thesis’, or ‘conjecture’ could camouflage thinking about alternative explanations without the actual word ‘hypothesis’.

One other suggestion for dealing with this situation might be for journal editors to disallow all papers with hypotheses that are completely untestable. This type of rejection could be instructive, pushing authors to rewrite their papers to be more specific about alternative hypotheses. A clear causal set of predictions that a particular species will go extinct in 100 years, for example, is better described as a ‘possible future scenario’ guided by some specified mechanisms. And if your hypothesis is that ‘climate change will affect species’ geographical ranges’, you are offering a very vague inference that is difficult to test without being more specific about mechanisms, particularly if the species involved is rare.

There is a general problem with null hypotheses which state that there is “no effect”. In a few cases these null hypotheses are useful, but for the most part they are very weak, and relying on them should signal that you have not thought enough about alternative hypotheses.

So read Platt (1964), or at least its first page, the first chapter of Popper (1959), and the Betts et al. (2021) paper, and in your own research try to avoid the dilemmas they discuss, and thus help move our science forward lest it become a repository of ‘stamp collecting’.

Betts, M.G., Hadley, A.S., Frey, D.W., Frey, S.J.K., Gannon, D., Harris, S.H., et al. (2021) When are hypotheses useful in ecology and evolution? Ecology and Evolution, 11, 5762-5776. doi: 10.1002/ece3.7365.

Chamberlin, T.C. (1897) The method of multiple working hypotheses. Journal of Geology, 5, 837-848 (reprinted in Science 148: 754-759 in 1965). doi: 10.1126/science.148.3671.754.

Hone, J., Drake, A. & Krebs, C.J. (2023) Evaluation options for wildlife management and strengthening of causal inference. BioScience, 73, 48-58. doi: 10.1093/biosci/biac105.

Pearl, J. & Mackenzie, D. (2018) The Book of Why: The New Science of Cause and Effect. Penguin, London, U.K. 432 pp. ISBN: 978-1541698963.

Peters, R.H. (1991) A Critique for Ecology. Cambridge University Press, Cambridge, England. ISBN: 0521400171.

Platt, J.R. (1964) Strong inference. Science, 146, 347-353. doi: 10.1126/science.146.3642.347.

Popper, K.R. (1959) The Logic of Scientific Discovery. Hutchinson & Co., London. ISBN: 978-0-415-27844-7.

Romesburg, H.C. (1981) Wildlife science: gaining reliable knowledge. Journal of Wildlife Management, 45, 293-313. doi:10.2307/3807913.

Is Ecology Becoming a Correlation Science?

One of the first lessons in Logic 101 is classically called “Post hoc, ergo propter hoc”, or in plain English, “After that, therefore because of that”. The simplest example of the many you can see in the newspapers might be: “The ocean is warming up, salmon populations are going down, it must be another effect of climate change.” There is a great deal of literature on the problems associated with these kinds of simple inferences, going back to classics like Romesburg (1981), Cox and Wermuth (2004), Sugihara et al. (2012), and Nichols et al. (2019). My purpose here is only to remind you to examine cause and effect when you draw ecological conclusions.

My concern is partly related to news articles on ecological problems. A recent example is the collapse of the snow crab fishery in the Bering Sea, which in the last 5 years has gone from a very large and profitable fishery drawing on a very large crab population to, at present, a closed fishery with very few snow crabs. What has happened? Where did the snow crabs go? No one really knows, but there are perhaps half a dozen ideas put forward to explain the collapse. Meanwhile the fishery and the local economy are in chaos. Without many critical data on this oceanic ecosystem we can list several factors that might be involved – warming of the Bering Sea with climate change, predators, overfishing, diseases, habitat disturbance from bottom trawling, natural cycles – while recognizing that we have no simple way of deciding among these causes and therefore of making management choices.

The simplest solution is to say that many interacting factors are involved, and many papers document the complexity of populations, communities, and ecosystems (e.g. Lidicker 1991; Holmes 1995; Howarth et al. 2014). Everyone would agree with the general idea that “the world is complex”, but the argument has always been over how we should investigate ecological processes and solve ecological problems given this complexity. The search for generality has led mostly to replications in which ‘identical’ populations or communities behave very differently. How can we resolve this problem? The easy way out is to fall back on the correlation coefficient and sidestep the complexity.

Having some idea of what is driving changes in ecological systems is certainly better than having no idea, but it is a problem when only one explanation is pushed without a careful consideration of alternative possibilities. The media and particularly the social media are encumbered with oversimplified views of the causes of ecological problems which receive wide approbation with little detailed consideration of alternative views. Perhaps we will always be exposed to these oversimplified views of complex problems but as scientists we should not follow in these footsteps without hard data.

What kind of data do we need in science? We must embrace the rules of causal inference, and a good start might be the books of Popper (1963) and Pearl and Mackenzie (2018) and for ecologists in particular the review of the use of surrogate variables in ecology by Barton et al. (2015). Ecologists are not going to win public respect for their science until they can avoid weak inference, minimize hand waving, and follow the accepted rules of causal inference. We cannot build a science on the simple hypothesis that the world is complicated or by listing multiple possible causes for changes. Correlation coefficients can be a start to unravelling complexity but only a weak one. We need better methods for resolving complex issues in ecology.
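To make the point concrete, here is a minimal sketch (in Python, with invented series names) of how a hidden common driver manufactures a strong correlation between two variables that never interact:

```python
import math
import random

random.seed(42)

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# A hidden driver (imagine sea-surface temperature) pushes two
# otherwise unrelated responses in the same direction.
n = 200
driver = [random.gauss(0, 1) for _ in range(n)]
crab_index = [d + random.gauss(0, 0.5) for d in driver]     # responds to driver
seabird_index = [d + random.gauss(0, 0.5) for d in driver]  # also responds to driver

# The two responses never interact, yet they correlate strongly.
r = pearson(crab_index, seabird_index)
print(f"correlation between the two responses: r = {r:.2f}")
```

Nothing in the correlation between the two response series tells you that the hidden driver, rather than one response, is the cause; only a mechanistic model or an experiment can separate the possibilities.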

Barton, P.S., Pierson, J.C., Westgate, M.J., Lane, P.W. & Lindenmayer, D.B. (2015) Learning from clinical medicine to improve the use of surrogates in ecology. Oikos, 124, 391-398. doi: 10.1111/oik.02007.

Cox, D.R. & Wermuth, N. (2004) Causality: a statistical view. International Statistical Review, 72, 285-305.

Holmes, J.C. (1995) Population regulation: a dynamic complex of interactions. Wildlife Research, 22, 11-19.

Howarth, L.M., Roberts, C.M., Thurstan, R.H. & Stewart, B.D. (2014) The unintended consequences of simplifying the sea: making the case for complexity. Fish and Fisheries, 15, 690-711. doi: 10.1111/faf.12041.

Lidicker, W.Z., Jr. (1991) In defense of a multifactor perspective in population ecology. Journal of Mammalogy, 72, 631-635.

Nichols, J.D., Kendall, W.L. & Boomer, G.S. (2019) Accumulating evidence in ecology: Once is not enough. Ecology and Evolution, 9, 13991-14004. doi: 10.1002/ece3.5836.

Pearl, J. & Mackenzie, D. (2018) The Book of Why: The New Science of Cause and Effect. Penguin, London, U.K. 432 pp. ISBN: 978-1541698963.

Popper, K.R. (1963) Conjectures and Refutations: The Growth of Scientific Knowledge. Routledge and Kegan Paul, London. 608 pp.

Romesburg, H.C. (1981) Wildlife science: gaining reliable knowledge. Journal of Wildlife Management, 45, 293-313. doi: 10.2307/3807913.

Sugihara, G., et al. (2012) Detecting causality in complex ecosystems. Science, 338, 496-500. doi: 10.1126/science.1227079.

On the Meaning of ‘Food Limitation’ in Population Ecology

There are many different ecological constraints that are collected in the literature under the umbrella of ‘food limitation’ when ecologists try to explain the causes of population changes or conservation problems. ‘Sockeye salmon in British Columbia are declining in abundance because of food limitation in the ocean.’ ‘Jackrabbits in some states of the western US are increasing because climate change has increased plant growth and thus removed the limitation of their plant food supplies.’ ‘Moose numbers in western Canada are declining because their food plants have shifted their chemistry to cope with the changing climate, and the moose now suffer food limitation.’ My suggestion here is that ecologists should be careful in defining the meaning of ‘limitation’ when discussing these kinds of population changes in both rare and abundant species.

Perhaps the first principle is that food is always limiting – it is almost part of the definition of life – and one does not need an experiment to demonstrate this truism. So to start we must agree that modern agriculture is built on the foundation that food supply can be improved, and that this trivial form of ‘food limitation’ is not what ecologists interested in population changes in the real world are trying to test. The key to explaining population differences must come from resource differences in the broad sense: not food alone but a host of other ecological causal factors that may produce changes in the birth and death rates of populations.

‘Limitation’ can be used in a spatial or a temporal context. The average density of deer mice can differ between two forest types, and this spatial problem would have to be investigated as a search for the several possible mechanisms that could lie behind the observation. Too often this is passed off by saying that “resources” are limiting in the poorer habitat, a statement that takes us no closer to knowing what the exact food resources are. If carefully defined food resources are limiting density in the ‘poorer’ habitat, this would be a good example of food limitation in a spatial sense. By contrast, if a single population increases in one year and declines in the next, this could be an example of food limitation in a temporal sense.

The more difficult issue now becomes what evidence you have that food is limiting in either time or space. Growth in body size in vertebrates is one clear indirect indicator, but we need to know exactly which food resources are limiting. The temptation is to use feeding experiments to test for food limitation (reviewed in Boutin 1990). Feeding experiments are simple in the lab but far from simple in the field. Feeding an open population can lead to immigration, and if your response variable is population density, you then have an indirect effect of feeding. If animals in the experimentally fed area grow faster or have a higher reproductive output, you have evidence of a positive effect of the feeding treatment, and you can claim ‘food limitation’ for these specific variables. If population density increases on your fed area relative to unfed controls, you can also claim ‘food limitation of density’. The problems come when you consider the temporal dimension of seasonal or annual effects. If population density falls while you are still feeding in season 2 or year 2, then food limitation of density is absent, and the change must have been produced by higher mortality or higher emigration in season 2.
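The logic of a fed-versus-control comparison can be sketched as follows; the body-mass gains are invented numbers, and a permutation test is just one reasonable way to judge whether the observed difference exceeds chance:

```python
import random

random.seed(1)

# Hypothetical body-mass gains (g) of animals on fed vs. unfed control
# plots; all numbers are invented purely for illustration.
fed     = [12.1, 14.3, 11.8, 15.0, 13.6, 12.9, 14.8, 13.2]
control = [10.2, 11.5,  9.8, 12.0, 10.9, 11.1, 10.4, 11.7]

observed = sum(fed) / len(fed) - sum(control) / len(control)

# Permutation test: shuffle the group labels to ask how often chance
# alone produces a mean difference at least this large.
pooled = fed + control
count = 0
n_perm = 10000
for _ in range(n_perm):
    random.shuffle(pooled)
    diff = sum(pooled[:8]) / 8 - sum(pooled[8:]) / 8
    if diff >= observed:
        count += 1
p_value = count / n_perm

print(f"mean difference = {observed:.2f} g, one-sided p = {p_value:.4f}")
```

A clear treatment effect of this kind licenses a claim of food limitation only for the measured variable (here, growth), not for density, and not for other seasons or years.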

Food resources could also be limiting because of predator avoidance (Brown and Kotler 2007). The ecology of fear has blossomed into a very large literature exploring the non-consumptive effects of predators on prey foraging, effects that can lead to food limitation even when food resources are not in short supply (e.g. Peers et al. 2018; Allen et al. 2022).

All of this may seem terribly obvious, but the key point is that when you examine the literature about “food limitation” you should look at the evidence and the experimental design. Ecologists, like medical doctors, at times have a long list of explanations designed to soothe the soul without providing good evidence of exactly which mechanism is operating. Economists are near the top with this distinguished approach, exceeded only by politicians, who have an even greater art of explaining changes after the fact with limited evidence.

As a footnote to defining this problem of food limitation, you should read Boutin (1990). I have also raved on about this topic in Chapter 8 of my 2013 book on rodent populations (Krebs 2013) if you wish more details.

Allen, M.C., Clinchy, M. & Zanette, L.Y. (2022) Fear of predators in free-living wildlife reduces population growth over generations. Proceedings of the National Academy of Sciences (PNAS), 119, e2112404119. doi: 10.1073/pnas.2112404119.

Boutin, S. (1990). Food supplementation experiments with terrestrial vertebrates: patterns, problems, and the future. Canadian Journal of Zoology 68(2): 203-220. doi: 10.1139/z90-031.

Brown, J.S. & Kotler, B.P. (2007) Foraging and the ecology of fear. In Foraging: Behaviour and Ecology (eds. D.W. Stephens, J.S. Brown & R.C. Ydenberg), pp. 437-448. University of Chicago Press, Chicago. ISBN: 9780226772646.

Krebs, C.J. (2013) Chapter 8, The Food Hypothesis. In Population Fluctuations in Rodents. University of Chicago Press, Chicago. ISBN: 978-0-226-01035-9

On Post-hoc Ecology

Back in the Stone Age, when science students took philosophy courses, a logic course was a common choice for students majoring in science. Among the many logical fallacies, one of the most common was the Post Hoc Fallacy, or in full “Post hoc, ergo propter hoc”: “After this, therefore because of this.” The Post Hoc Fallacy has the following general form:

  1. A occurs before B.
  2. Therefore A is the cause of B.

Many examples of this fallacy are given in the newspapers every day. “I lost my pencil this morning and an earthquake occurred in California this afternoon.” Therefore….. Of course, we are certain that this sort of error could never occur in the 21st century, but I would like to suggest to the contrary that its frequency is probably on the rise in ecology and evolutionary biology, and the culprit (A) is most often climate change.

Hilborn and Stearns (1982) pointed out many years ago that most ecological and evolutionary changes have multiple causes, and thus we must learn to deal with multiple causation in which a variety of factors combine and interact to produce an observed outcome. This point of view places an immediate dichotomy between the two extremes of ecological thinking – single factor experiments to determine causation cleanly versus the “many factors are involved” world view. There are a variety of intermediate views of ecological causality between these two extremes, leading in part to the flow chart syndrome of boxes and arrows aptly described by my CSIRO colleague Kent Williams as “horrendograms”. If you are a natural resource manager you will prefer the simple end of the spectrum to answer the management question of ‘what can I possibly manipulate to change an undesirable outcome for this population or community?’

Many ecological changes are going on today in the world, populations are declining or increasing, species are disappearing, geographical distributions are moving toward the poles or to higher altitudes, and novel diseases are appearing in populations of plants and animals. The simplest explanation of all these changes is that climate change is the major cause because in every part of the Earth some aspect of winter or summer climate is changing. This might be correct, or it might be an example of the Post Hoc Fallacy. How can we determine which explanation is correct?

First, for any ecological change it is important to identify a mechanism of change. Climate, or more properly weather, is itself a complex factor combining temperature, humidity, and rainfall, and for climate to be considered a proper cause you must advance some information on physiology, behaviour, or genetics that would link a specific climate parameter to the changes observed. Information on possible mechanisms makes the potential explanation more plausible. A second step is to make specific predictions that can be tested either by experiments or by further observational data. Berteaux et al. (2006) provided a careful list of suggestions on how to proceed in this manner, and Tavecchia et al. (2016) have illustrated how one traditional approach to studying the impact of climate change on population dynamics can lead to forecasting errors.

Another critical focus must be on long-term studies of the population or community of interest. In particular, the 3-4 year studies common in Ph.D. theses must assume that their results are a random sample of annual ecological changes. Often this is not the case, and that can be recognized only when longer-term studies are completed, or more directly when an experimental manipulation of the proposed mechanisms can be carried out.

The retort to these complaints about ecological and evolutionary inference is that all investigated problems are complex and multifactorial, so that after much investigation one can conclude only that “many factors are involved”. The application of AIC analysis attempts to blunt this criticism by asking: given the data (the evidence), which hypothesis is best supported? Hobbs and Hilborn (2006) provide a guide to the different methods of inference that can improve on the standard statistical approach. But the AIC approach must always carry with it the awareness that the correct hypothesis may not be present in the list being evaluated, or that some combination of relevant factors cannot be tested because the available data do not cover a wide enough range of variation. Burnham et al. (2011) provide an excellent checklist for the use of AIC measures to discriminate among hypotheses. Guthery et al. (2005) and Stephens et al. (2005) carry the discussion further in interesting ways, and Cade (2015) dissects a case in which inappropriate AIC methods led to questionable conclusions about habitat preferences and use by sage-grouse in Colorado.
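As a toy illustration of the AIC logic (not the analysis of any paper cited here), the following sketch compares an intercept-only “no effect” model against a simple linear regression on simulated data; AIC rewards fit while penalizing extra parameters:

```python
import math
import random

random.seed(7)

# Simulated responses with a genuine linear trend plus noise.
n = 50
x = [i / 10 for i in range(n)]
y = [2.0 + 1.5 * xi + random.gauss(0, 1.0) for xi in x]

def aic_least_squares(rss, n_obs, k):
    # AIC for Gaussian least squares, dropping additive constants:
    # n*ln(RSS/n) + 2k, with k counting fitted parameters (incl. variance).
    return n_obs * math.log(rss / n_obs) + 2 * k

# Model 1: intercept only (the "no effect of x" hypothesis).
mean_y = sum(y) / n
rss1 = sum((yi - mean_y) ** 2 for yi in y)

# Model 2: simple linear regression, closed-form slope and intercept.
mean_x = sum(x) / n
sxx = sum((xi - mean_x) ** 2 for xi in x)
sxy = sum((xi - mean_x) * (yi - mean_y) for xi, yi in zip(x, y))
slope = sxy / sxx
intercept = mean_y - slope * mean_x
rss2 = sum((yi - (intercept + slope * xi)) ** 2 for xi, yi in zip(x, y))

aic1 = aic_least_squares(rss1, n, k=2)  # mean + error variance
aic2 = aic_least_squares(rss2, n, k=3)  # slope + intercept + error variance
print(f"AIC intercept-only: {aic1:.1f}   AIC linear: {aic2:.1f}")
# The lower AIC is better supported -- but only among the models on the list.
```

Note that the comparison says nothing about hypotheses that were never placed on the candidate list, which is exactly the caveat raised above.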

If there is a simple message in all this, it is to think very carefully about what the problem is in any investigation, what the possible solutions or hypotheses are that could explain it, and then to use the best statistical methods to answer that question. Older statistical methods are not necessarily bad, and newer statistical methods are not automatically better for solving problems. The key lies in good data that are relevant to the problem being investigated. And if you are a beginning investigator, read some of these papers.

Berteaux, D., et al. 2006. Constraints to projecting the effects of climate change on mammals. Climate Research 32(2): 151-158. doi: 10.3354/cr032151.

Burnham, K.P., Anderson, D.R., and Huyvaert, K.P. 2011. AIC model selection and multimodel inference in behavioral ecology: some background, observations, and comparisons. Behavioral Ecology and Sociobiology 65(1): 23-35. doi: 10.1007/s00265-010-1029-6.

Guthery, F.S., Brennan, L.A., Peterson, M.J., and Lusk, J.J. 2005. Information theory in wildlife science: Critique and viewpoint. Journal of Wildlife Management 69(2): 457-465.

Hilborn, R., and Stearns, S.C. 1982. On inference in ecology and evolutionary biology: the problem of multiple causes. Acta Biotheoretica 31: 145-164. doi: 10.1007/BF01857238

Hobbs, N.T., and Hilborn, R. 2006. Alternatives to statistical hypothesis testing in ecology: a guide to self teaching. Ecological Applications 16(1): 5-19. doi: 10.1890/04-0645

Stephens, P.A., Buskirk, S.W., Hayward, G.D., and Del Rio, C.M. 2005. Information theory and hypothesis testing: a call for pluralism. Journal of Applied Ecology 42(1): 4-12. doi: 10.1111/j.1365-2664.2005.01002.x

Tavecchia, G., et al. 2016. Climate-driven vital rates do not always mean climate-driven population. Global Change Biology 22(12): 3960-3966. doi: 10.1111/gcb.13330.

A Modest Proposal for a New Ecology Journal

I read the occasional ecology paper and ask myself how this particular paper ever got published when it is full of elementary mistakes and shows no understanding of the literature. But alas we can rarely do anything about this as individuals. If you object to what a particular paper has concluded because of its methods or analysis, it is usually impossible to submit a critique that the relevant journal will publish. After all, what editor would like to admit that he or she let a hopeless paper through the publication screen? There are some exceptions to this rule, and I list two examples below in the papers by Barraquand (2014) and Clarke (2014). But if you search the Web of Science you will find few such critiques of published ecology papers.

One solution jumped to mind for this dilemma: start a new ecology journal, perhaps entitled Misleading Ecology Papers: Critical Commentary Unfurled. Papers submitted to this new journal would be restricted to a total of 5 pages and 10 references, and all polemics and personal attacks would be forbidden. The key for submissions would be to state a critique succinctly and to suggest a better way to construct the experiment or study, a more rigorous method of analysis, or key papers that were missed because they were published before 2000. These rules would admittedly leave a large gap for some very poor papers to escape criticism, namely those that would require a critique longer than the original paper. Perhaps one very long critique could be distinguished each year as the Review of the Year. Alternatively, some long critiques could be published in book form (Peters 1991) and would not require this new journal. The Editor would require all critiques to be signed by their authors, but in exceptional circumstances would permit authors to remain anonymous, to prevent job losses or, in more extreme cases, execution by the Mafia. Critiques of earlier critiques would be permitted in the new journal, but an infinite regress would be discouraged. Book reviews could also be the subject of a critique; the great shortage of critical book reviews in the current publication blitz is another aspect of ecological science that the existing journals largely ignore. This new journal would of course be electronic, so there would be no page charges, and all articles would be open access. All the major bibliographic databases like the Web of Science would be encouraged to catalog the publications, and a DOI would be assigned to each paper through CrossRef.

If this new journal became highly successful, it would no doubt be purchased by Wiley-Blackwell or Springer for several million dollars, and if this occurred, the profits would accrue proportionally to all the authors who had published papers to make this journal popular. The sale of course would be contingent on the purchaser guaranteeing not to cancel the entire journal to prevent any criticism of their own published papers.

At the moment criticism of ecological science does not occur until several years after a poor paper is published, and by that time the Donald Rumsfeld Effect will have operated to confer the aura of truth on the conclusions of the poor work. For one example, most of the papers critiqued by Clarke (2014) were more than 10 years old. By making the feedback loop much tighter – certainly within one year of a poor paper appearing – budding ecologists could be intercepted before being led off course.

This journal would not be popular with everyone. Older ecologists often strive mightily to prevent any criticism of their prior conclusions, and some young ecologists make their career by pointing out how misleading some of the papers of the older generation are. This new journal would assist in creating a more egalitarian ecological world by producing humility in older ecologists and more feelings of achievements in young ecologists who must build up their status in the science. Finally, the new journal would be a focal point for graduate seminars in ecology by bringing together and identifying the worst of the current crop of poor papers in ecology. Progress would be achieved.

 

Barraquand, F. 2014. Functional responses and predator–prey models: a critique of ratio dependence. Theoretical Ecology 7(1): 3-20. doi: 10.1007/s12080-013-0201-9.

Clarke, P.J. 2014. Seeking global generality: a critique for mangrove modellers. Marine and Freshwater Research 65(10): 930-933. doi: 10.1071/MF13326.

Peters, R.H. 1991. A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp. ISBN:0521400171

 

Climate Change and Ecological Science

One dominant paradigm of the ecological literature at present is what I would like to call the Climate Change Paradigm. Stated in its clearest form: all temporal ecological changes now observed are explicable by climate change. The test of this hypothesis is typically a correlation between some event – a population decline, the invasion of a new species into a community, the outbreak of a pest species – and some measure of climate. Given clever statistics and a sufficient search of many climatic measurements with and without time lags, these correlations are often sanctified by p < 0.05. Should we consider this progress in ecological understanding?

An early confusion in relating climate fluctuations to population changes began with labelling climate a density-independent factor within the density-dependent model of population dynamics. Fortunately, this massive confusion was sorted out by Enright (1976), but alas I still see the error repeated in recent papers about population change. I think much of the early confusion about climatic impacts on populations was due to classifying all of them as density-independent factors.

One’s first response might be that many of the changes we see in populations and communities are indeed related to climate change. But the key is to validate this conclusion, and to do so we need to identify the mechanisms by which climate change is acting on our particular species or species group. The search for these mechanisms is much more difficult than the demonstration of a correlation. To be more convincing one might predict that the observed correlation will continue for the next 5 (10, 20?) years and then gather the data to validate the correlation. Many published correlations are so weak as to preclude any possibility of validation within the lifetime of a research scientist. So the gold standard must be the deciphering of the mechanisms involved.

And a major concern is that many of the validations of the climate change paradigm on short time scales are likely to be spurious correlations. Those who need a good laugh over the issue of spurious correlation should look at Vigen (2015), a book which illustrates all too well the fun of looking for silly correlations. Climate is a very complex variable and a nearly infinite number of measurements can be concocted with temperature (mean, minimum, maximum), rainfall, snowfall, or wind, analyzed over any number of time periods throughout the year. We are always warned about data dredging, but it is often difficult to know exactly what authors of any particular paper have done. The most extreme examples are possible to spot, and my favorite is this quotation from a paper a few years ago:

“A total of 864 correlations in 72 calendar weather periods were examined; 71 (eight percent) were significant at the p < 0.05 level. … There were 12 negative correlations, p < 0.05, between the number of days with (precipitation) and (a demographic measure). A total of 45 positive correlations, p < 0.05, between temperatures and (the same demographic measure) were disclosed…”
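A quick simulation shows that a harvest of “significant” results on roughly this scale is close to what pure noise delivers from a search of this size; the 30-year series length and the large-sample significance threshold are assumptions for illustration:

```python
import math
import random

random.seed(0)

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# 864 correlations between a pure-noise "weather" series and a pure-noise
# "demographic" series, mirroring the scale of the quoted search.
n_years = 30
n_tests = 864
# Large-sample approximation: |r| > 1.96/sqrt(n) is "significant" at p < 0.05.
critical_r = 1.96 / math.sqrt(n_years)

hits = 0
for _ in range(n_tests):
    weather = [random.gauss(0, 1) for _ in range(n_years)]
    demog = [random.gauss(0, 1) for _ in range(n_years)]
    if abs(pearson(weather, demog)) > critical_r:
        hits += 1

print(f"{hits} of {n_tests} pure-noise correlations came out 'significant' "
      f"({100 * hits / n_tests:.1f}%)")
# Roughly 5% by chance alone -- in the neighbourhood of the quoted figure.
```

In other words, the quoted study found about what a shotgun search of noise would find, which is the whole problem.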

The climate change paradigm is well established in biogeography, and the major shifts in vegetation that have occurred in geological time are well correlated with climatic changes. But it is a large leap of faith to scale this well-established framework down to a local spatial scale and a short-term time scale. There is no question that local short-term climate changes can explain many changes in populations and communities, but any analysis of these kinds of effects must consider alternative hypotheses and mechanisms of change. Berteaux et al. (2006) pointed out the differences between forecasting and prediction in climate models. We need predictive models if we are to improve ecological understanding, and Berteaux et al. (2006) suggested that predictive models are successful if they follow three rules:

(1) Initial conditions of the system are well described (inherent noise is small);

(2) No important variable is excluded from the model (boundary conditions are defined adequately);

(3) Variables used to build the model are related to each other in the proper way (aggregation/representation is adequate).

Like most rules for models, whether these conditions are met is rarely known when the model is published, and we need subsequent data from the real world to see if the predictions are correct.

I am much less convinced that forecasting models are useful in climate research. Forecasting models describe an ecological situation based on correlations among the available measurements, with no clear mechanistic model of the ecological interactions involved. My concern was highlighted by Myers (1998), who examined the success of published correlations between juvenile fish recruitment and environmental factors (typically temperature) and found that very few forecasting models remained reliable when tested against additional data obtained after publication. It would be useful for someone to carry out a similar analysis for bird and mammal population models.

Small mammals show some promise for predictive models in some ecosystems. The analysis by Kausrud et al. (2008) illustrates a good approach to incorporating climate into predictive explanations of population change in Norwegian lemmings, explanations that involve interactions between climate and predation. The best way to evaluate these kinds of explanations, once formulated into models, is to determine how the model performs against additional data obtained in the years following publication.

The bottom line is to avoid spurious climatic correlations by describing and evaluating mechanistic models based on observable biological factors, and then to make predictions that can be tested in a realistic time frame. If we cannot do this, we risk publishing fairy tales rather than science.

Berteaux, D., et al. (2006) Constraints to projecting the effects of climate change on mammals. Climate Research, 32, 151-158. doi: 10.3354/cr032151

Enright, J. T. (1976) Climate and population regulation: the biogeographer’s dilemma. Oecologia, 24, 295-310.

Kausrud, K. L., et al. (2008) Linking climate change to lemming cycles. Nature, 456, 93-97. doi: 10.1038/nature07442

Myers, R. A. (1998) When do environment-recruitment correlations work? Reviews in Fish Biology and Fisheries, 8, 285-305. doi: 10.1023/A:1008828730759

Vigen, T. (2015) Spurious Correlations, Hyperion, New York City. ISBN: 978-031-633-9438

On Critical Questions in Biodiversity and Conservation Ecology

Biodiversity can be a vague concept, with so many measurement variants that one wonders what it is exactly, and how to incorporate ideas about biodiversity into scientific hypotheses. Even if we take the simplest concept of species richness as the operational measure, many questions arise about the importance of the rare species that make up most of the biodiversity but so little of the biomass. How can we proceed to a better understanding of this nebulous ecological concept that we continually put before the public as needing their attention?

Biodiversity conservation relies on community and ecosystem ecology for guidance on how to advance scientific understanding. A recent paper by Turkington and Harrower (2016) articulates this very clearly by laying out 7 general questions for analyzing community structure for conservation of biodiversity. As such these questions are a general model for community and ecosystem ecology approaches that are needed in this century. Thus it would pay to look at these 7 questions more closely and to read this new paper. Here is the list of 7 questions from the paper:

  1. How are natural communities structured?
  2. How does biodiversity determine the function of ecosystems?
  3. How does the loss of biodiversity alter the stability of ecosystems?
  4. How does the loss of biodiversity alter the integrity of ecosystems?
  5. Diversity and species composition
  6. How does the loss of species determine the ability of ecosystems to respond to disturbances?
  7. How does food web complexity and productivity influence the relative strength of trophic interactions and how do changes in trophic structure influence ecosystem function?

Turkington and Harrower (2016) note that each of these 7 questions can be asked in at least 5 different contexts in the biodiversity hotspots of China:

  1. How do the observed responses change across the 28 vegetation types in China?
  2. How do the observed responses change from the low productivity grasslands of the Qinghai Plateau to higher productivity grasslands in other parts of China?
  3. How do the observed responses change along a gradient in the intensity of human use or degradation?
  4. How long should an experiment be conducted given that the immediate results are seldom indicative of longer-term outcomes?
  5. How does the scale of the experiment influence treatment responses?

There are major problems in all of this, as Turkington and Harrower (2016) and Bruelheide et al. (2014) have discussed. The first problem is to determine what the community is, or what the bounds of an ecosystem are. This is a trivial issue according to community and ecosystem ecologists: one simply draws a circle around the particular area of interest for the study. But two points remain. First, populations, communities, and ecosystems are open systems with no clear boundaries. In population ecology we can master this problem by analyzing the movements and dispersal of individuals; on a short time scale plants in communities are fixed in position, while their associated animals move on species-specific scales. Second, communities and ecosystems are not units but vary continuously in space and time, making their analysis difficult. The species present on 50 m² are not the same as those on another plot 100 m or 1000 m away, even if the vegetation types are labeled the same. So we replicate plots within what we define to be our community. If you are studying plant dynamics, you can experimentally place all selected plant species in defined plots in a pre-arranged configuration for your planting experiments, but you cannot do this with animals except in microcosms. All experiments are place specific, and if you consider climate change on a 100-year time scale, they are also time specific. We can hope that generality is strong and that our conclusions will apply in 100 years, but we do not know this now.

But we can do manipulative experiments, as these authors strongly recommend, and that brings a whole new set of problems, outlined for example in Bruelheide et al. (2014, Table 1, page 78) for a forestry experiment in southern China. Decisions about how many tree species to manipulate, in what size of plots, and at what planting density are all potentially critical to the conclusions we reach. But it is the time frame of hypothesis testing that is the great unknown. All these studies must be long-term, but whether this means 10 years or 50 years can only be found out in retrospect. Is it better to have, for example, forestry experiments around the world carried out with identical protocols, or to adopt a laissez-faire approach with different designs, since we have no idea yet which design is best for answering these broad questions?

I suspect that the outline of broad questions given in Turkington and Harrower (2016) is at least a 100-year agenda, and we need to be concerned about how we can carry it forward in a world where funding of research questions has a 3- or 5-year time frame. The only possible way forward, until we win the lottery, is for all researchers to carry out short-term experiments on very specific hypotheses within this framework. So every graduate student thesis in experimental community and ecosystem ecology is important to achieving the goals outlined in these papers. Even if this 100-year time frame is optimistic, we can progress on a shorter time scale through a series of detailed experiments on small parts of the community or ecosystem at hand. I note that some of the broad questions listed above have been around for more than 50 years without being answered. If we redefine our objectives more precisely and do the kinds of experiments that these authors suggest, we can move forward, not so much with the solution of grand ideas as with detailed experimental data on very precise questions about our chosen community. In this way we keep the long-range goal posts in view but concentrate on short-term manipulative experiments that are place and time specific.

This will not be easy. Birds are probably the best-studied group of animals on Earth, and we now have many species that are changing dramatically in abundance over large spatial scales (e.g. http://www.stateofcanadasbirds.org/). I am sobered by asking avian ecologists why a particular species is declining or dramatically increasing. I never get a good answer, typically only a generally plausible idea: a hand-waving explanation based on correlations that are not measured or well understood. Species recovery plans are often based on hunches rather than good data, with few of the key experiments of the type requested by Turkington and Harrower (2016). At the moment the world is changing rather faster than our understanding of the ecological interactions that tie species together in communities and ecosystems. We are walking when we need to be running, and even the Red Queen is not keeping up.

Bruelheide, H. et al. 2014. Designing forest biodiversity experiments: general considerations illustrated by a new large experiment in subtropical China. Methods in Ecology and Evolution, 5, 74-89. doi: 10.1111/2041-210X.12126

Turkington, R. & Harrower, W.L. 2016. An experimental approach to addressing ecological questions related to the conservation of plant biodiversity in China. Plant Diversity, 38, 1-10. Available at: http://journal.kib.ac.cn/EN/volumn/current.shtml

Hypothesis testing using field data and experiments is definitely NOT a waste of time

At the ESA meeting in 2014, Greg Dwyer (University of Chicago) gave a talk titled “Trying to understand ecological data without mechanistic models is a waste of time.” This theme has recently been reiterated on Dynamic Ecology, the blog of Jeremy Fox, Brian McGill and Meghan Duffy (25 January 2016, https://dynamicecology.wordpress.com/2016/01/25/trying-to-understand-ecological-data-without-mechanistic-models-is-a-waste-of-time/). Some immediate responses to this post raised questions such as “What is a mechanistic model?”, “What about the use of inappropriate statistics to fit mechanistic models?”, and “prediction vs. description from mechanistic models”. All of these are relevant and interesting issues in interpreting the value of mechanistic models.

The biggest fallacy in this blog post, or at least in its title, is the implication that field ecological data are collected in a vacuum. Hypotheses are models, conceptual models, and it is only in the absence of hypotheses that trying to understand ecological data is a “waste of time”. Research proposals that fund field work demand testable hypotheses, and testing hypotheses advances science. Research using mechanistic models should also develop testable hypotheses, but mechanistic models are certainly not the only route to hypothesis creation or testing.

Unfortunately, mechanistic models rarely identify how the robustness and generality of the model output could be tested with ecological data, and they often fail to describe properly the many assumptions made in constructing the model. In fact, they are often presented as complete descriptions of the ecological relationships in question, and methods for model validation are not discussed. Sometimes modelling papers include blatantly unrealistic functions to simplify ecological processes, without exploring the sensitivity of the results to those functions.

I can refer to my own area of research expertise, population cycles, for an example here. It is not enough to generate a pattern of ups and downs with a 10-year periodicity to claim that a model is an acceptable representation of the cyclic population dynamics of, for example, a forest lepidopteran or snowshoe hares. There are many ways to get cyclic dynamics in modeled systems. Scientific progress and understanding can only be made if the outcome of conceptual, mechanistic or statistical models defines the hypotheses that could be tested and the experiments that could be conducted to support the acceptance, rejection or modification of the model, and thus informs understanding of natural systems.

How helpful are mechanistic models – the gypsy moth story

Given the implication of Dwyer’s blog post (or at least its title) that mechanistic models are the only way to ecological understanding, it is useful to look at models of gypsy moth dynamics, one of Greg’s areas of modeling expertise, with a view toward evaluating whether the model assumptions are compatible with real-world data (Dwyer et al. 2004, http://www.nature.com/nature/journal/v430/n6997/abs/nature02569.html).

Although there has been considerable excellent work on gypsy moth over the years, long-term population data are lacking. Population dynamics are therefore estimated from annual estimates of defoliation carried out by the US Forest Service in New England starting in 1924. These data show periods of non-cyclicity, two ten-year cycles (peaks in 1981 and 1991, which Dwyer uses for comparison to the modeled dynamics of a number of his mechanistic models), and harmonic 4-5 year cycles between 1943 and 1979 and since the 1991 outbreak. Based on these data, 10-year cycles are the exception, not the rule, for introduced populations of gypsy moth. Point 1. Many of the Dwyer mechanistic models were tested using the two outbreak periods and ignored over 20 years of subsequent defoliation data lacking 10-year cycles. Thus the results are limited in their generality.

As a further example, a recent paper, Elderd et al. (2013) (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3773759/), explored the relationship between alternating long and short cycles of gypsy moth in oak-dominated forests by speculating that inducible tannins in oaks modify the interactions between gypsy moth larvae and viral infection. Although previous field experiments (D’Amico et al. 1998, http://onlinelibrary.wiley.com/doi/10.1890/0012-9658(1998)079%5b1104:FDDNAW%5d2.0.CO%3b2/abstract) concluded that gypsy moth defoliation does not affect tannin levels sufficiently to influence viral infection, Elderd et al. (2013) proposed that induced tannins in red oak foliage reduce variation in viral infection levels and promote shorter cycles. In this study, an experiment was conducted using jasmonic acid sprays to induce oak foliage. Point 2. This mechanistic model is based on experiments using artificially induced tannins as a mimic of the plant defenses induced by insect damage. However, earlier fieldwork showed that foliage damage does not influence virus transmission, and thus does not support the relevance of this mechanism.

In this model Elderd et al. (2013) use a linear relationship for viral transmission (transmission of infection as a function of baculovirus density) based on two data points and a zero intercept. In past mechanistic models, and in a number of other systems, the relationship between viral transmission and host density is nonlinear (D’Amico et al. 2005, http://onlinelibrary.wiley.com/doi/10.1111/j.0307-6946.2005.00697.x/abstract;jsessionid=D93D281ACD3F94AA86185EFF95AC5119.f02t02?userIsAuthenticated=false&deniedAccessCustomisedMessage=; Fenton et al. 2002, http://onlinelibrary.wiley.com/doi/10.1046/j.1365-2656.2002.00656.x/full). Point 3. Data are insufficient to accurately describe the viral transmission relationship used in the model.
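The two-data-point problem is easy to illustrate. The numbers below are invented for illustration (the actual transmission data are in Elderd et al. 2013), but they show why two observations cannot discriminate a straight line through the origin from a saturating, nonlinear transmission curve: both are consistent with the data, yet they diverge sharply outside the observed densities, which is exactly where the model output depends on the choice.

```python
# Hypothetical (host density, transmission rate) observations; stand-ins
# for the kind of sparse data criticized above.
pts = [(1.0, 0.10), (2.0, 0.18)]
(x1, y1), (x2, y2) = pts

# Model 1: linear through the origin, fit by least squares.
slope = sum(x * y for x, y in pts) / sum(x * x for x, y in pts)

def linear(x):
    return slope * x

# Model 2: saturating curve v(x) = a*x / (1 + b*x), solved exactly so it
# passes through both points.
b = (y2 * x1 - y1 * x2) / (x1 * x2 * (y1 - y2))
a = y1 * (1 + b * x1) / x1

def saturating(x):
    return a * x / (1 + b * x)

# Both curves describe the two observations...
for x, y in pts:
    print(f"x={x}: observed {y}, linear {linear(x):.3f}, saturating {saturating(x):.3f}")

# ...but they diverge outside the data, where the dynamics are decided.
print(f"x=4.0: linear {linear(4.0):.3f}, saturating {saturating(4.0):.3f}")
```

With two points and a fixed intercept there are no degrees of freedom left to detect curvature, so the linearity of the fitted relationship is an assumption, not a result.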

Finally, the Elderd et al. (2013) model considers two types of gypsy moth habitat: one composed of 43% oaks, which are inducible, and the other of 15% oaks, with the remainder of the forest composed of adjacent blocks of non-inducible pines. Data show that gypsy moth outbreaks are limited to areas with high frequencies of oaks. In mixed forests, pines are fed on only by later-instar moth larvae, and only when the oaks are defoliated. The pines would be interspersed amongst the oaks, not arranged in separate blocks as in the modeled population. Point 4. Patterns of forest composition in the models that are crucial to the result are unrealistic, and this makes the interpretation of the results impossible.

Point 5 and conclusion. Because it can be very difficult to critically review someone else’s mechanistic model, since model assumptions are often hidden in supplementary material and hard to interpret, and because relationships used in models are often arbitrarily chosen and not based on available data, it could be easy to conclude that “mechanistic models are misleading and a waste of time”. But of course that wouldn’t be productive. So my final point is that closer collaboration between modelers and data collectors would be the best way to ensure that models are reasonable and accurate representations of the data. In this way understanding and realistic predictions would be advanced. Unfortunately, the great push to publish high-profile papers works against such collaboration, and manuscripts of mechanistic models are rarely seen by data-savvy referees.

D’Amico, V., J. S. Elkinton, G. Dwyer, R. B. Willis, and M. E. Montgomery. 1998. Foliage damage does not affect within-season transmission of an insect virus. Ecology 79:1104-1110.

D’Amico, V., J. S. Elkinton, J. D. Podgwaite, J. P. Buonaccorsi, and G. Dwyer. 2005. Pathogen clumping: an explanation for non-linear transmission of an insect virus. Ecological Entomology 30:383-390.

Dwyer, G., J. Dushoff, and S. H. Yee. 2004. The combined effects of pathogens and predators on insect outbreaks. Nature 430:341-345.

Elderd, B. D., B. J. Rehill, K. J. Haynes, and G. Dwyer. 2013. Induced plant defenses, host–pathogen interactions, and forest insect outbreaks. Proceedings of the National Academy of Sciences 110:14978-14983.

Fenton, A., J. P. Fairbairn, R. Norman, and P. J. Hudson. 2002. Parasite transmission: reconciling theory and reality. Journal of Animal Ecology 71:893-905.

A Survey of Strong Inference in Ecology Papers: Platt’s Test and Medawar’s Fraud Model

In 1897 Chamberlin wrote an article in the Journal of Geology on the method of multiple working hypotheses as a way of experimentally testing scientific ideas (Chamberlin 1897, reprinted in Science). Ecology was scarcely invented at that time, and this stimulated my quest here to see whether current ecology journals subscribe to Chamberlin’s approach to science. Platt (1964) formalized this approach as “strong inference” and argued that it was the best way for science to progress rapidly. If this is the case (and some do not agree that this approach is suitable for ecology), then we might use this model to check now and then on the state of ecology via published papers.

I did a very small survey in the Journal of Animal Ecology for 2015. Most ecologists, I hope, would classify this as one of our leading journals. I asked the simple question of whether the Introduction to each paper stated explicit hypotheses and explicit alternative hypotheses, and categorized each paper as ‘yes’ or ‘no’. There is certainly a problem here, in that many papers stated a hypothesis or idea they wanted to investigate but never discussed what the alternative was, or indeed whether there was an alternative hypothesis. As a potential set of covariates, I tallied how many times the word ‘hypothesis’ or ‘hypotheses’ occurred in each paper, as well as the words ‘test’, ‘prediction’, and ‘model’. Most uses of ‘model’ and ‘test’ were in the context of statistical models or statistical tests of significance. Singular and plural forms of these words were all counted.

This is not a publication, and I did not want to spend the rest of my life looking at all the other ecology journals across many issues, so I concentrated on the Journal of Animal Ecology, volume 84, issues 1 and 2, 2015. I obtained the following results for the 51 articles in these two issues (word counts are the number of times each word appeared per article):

Explicit hypothesis and alternative hypotheses stated: Yes = 22%, No = 78% (51 articles)

Word counts per article:

                   Mean    Median    Range
“Hypothesis”        3.1       1      0-23
“Test”              7.9       6      0-37
“Prediction”        6.5       4      0-27
“Model”            32.5      20      0-163
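For anyone wanting to repeat this tally on other journals, the counting itself is simple enough to automate. A sketch (the article strings below are placeholders for full paper texts; the regular expressions count singular and plural forms together, as in the survey above):

```python
import re
from statistics import mean, median

# Placeholder texts; in an actual survey these would be the full text
# of each article in the journal issues examined.
articles = [
    "We test the hypothesis that ... our model predicts ... a t-test showed ...",
    "No explicit hypotheses were stated; a statistical model was fitted.",
    "Predictions from the model were tested against field data.",
]

# Singular and plural forms are counted together, case-insensitively.
TERMS = {
    "hypothesis": r"\bhypothes[ei]s\b",
    "test": r"\btests?\b",
    "prediction": r"\bpredictions?\b",
    "model": r"\bmodels?\b",
}

def tally(texts):
    """Per-term mean, median, and range of occurrences per article."""
    out = {}
    for name, pattern in TERMS.items():
        counts = [len(re.findall(pattern, t, flags=re.IGNORECASE)) for t in texts]
        out[name] = {"mean": mean(counts), "median": median(counts),
                     "range": (min(counts), max(counts))}
    return out

for term, stats in tally(articles).items():
    print(term, stats)
```

The word-boundary patterns deliberately match ‘t-test’ (as the survey did) but not inflections such as ‘tested’ or ‘predicts’; a more sophisticated analysis would have to decide how to treat those.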

There are lots of problems with a simple analysis like this, and perhaps its utility lies in stimulating a more sophisticated analysis of a wider variety of journals. It is certainly not a random sample of the ecology literature. But maybe it gives us a few insights into ecology in 2015.

I found the results quite surprising: many papers failed Platt’s test of strong inference. Many papers stated hypotheses but failed to state alternative hypotheses. In some cases the implied alternative was the now-discredited null hypothesis (Johnson 2002). One possible reason for the failure to state hypotheses clearly was discussed by Medawar many years ago (Howitt and Wilson 2014; Medawar 1963). He pointed out that most scientific papers are written backwards: the data are analysed, the conclusions determined, and only then is the introduction written, with the results already known. A significant number of papers in the issues I looked at here seem to have been written following Medawar’s “fraud model”.

But make of such data what you will, and I appreciate that many people write papers in a less formal style than Medawar or Platt would prefer. Many authors have alternative hypotheses in mind but do not write them down clearly. And perhaps many referees do not think we should be restricted to the hypothetico-deductive approach to science. All of these points of view should be discussed rather than ignored. I note that some ecological journals now turn back papers that have no clear statement of a hypothesis in the introduction of the submitted paper.

The word ‘model’ was the most common of the words tallied, typically referring to a statistical model evaluated by AIC-type statistics, and the word ‘test’ was most commonly used for statistical tests (‘t-test’). Indeed, virtually all of these papers overflow with statistical estimates of various kinds. Few, however, come back in the conclusions to state exactly what progress has been made by the paper, and even fewer make statements about what should be done next. From this small survey there is considerable room for improvement in ecological publications.

Chamberlin, T.C. 1897. The method of multiple working hypotheses. Journal of Geology 5: 837-848 (reprinted in Science 148: 754-759 in 1965). doi:10.1126/science.148.3671.754

Howitt, S.M., and Wilson, A.N. 2014. Revisiting “Is the scientific paper a fraud?”. EMBO reports 15(5): 481-484. doi:10.1002/embr.201338302

Johnson, D.H. (2002) The role of hypothesis testing in wildlife science. Journal of Wildlife Management 66(2): 272-276. doi: 10.2307/3803159

Medawar, P.B. 1963. Is the scientific paper a fraud? In “The Threat and the Glory”. Edited by P.B. Medawar. Harper Collins, New York. pp. 228-233. (Reprinted by Harper Collins in 1990. ISBN: 9780060391126.)

Platt, J.R. 1964. Strong inference. Science 146: 347-353. doi:10.1126/science.146.3642.347

Is Ecology like Economics?

One statement in Thomas Piketty’s book on economics struck me as a possible description of ecology’s development. On page 32 he states:

“To put it bluntly, the discipline of economics has yet to get over its childish passion for mathematics and for purely theoretical and often highly ideological speculation at the expense of historical research and collaboration with the other social sciences. Economists are all too often preoccupied with petty mathematical problems of interest only to themselves. This obsession with mathematics is an easy way of acquiring the appearance of scientificity without having to answer the far more complex questions posed by the world we live in.”

If this is at least a partially correct summary of ecology’s history, we could argue that finally in the last 20 years ecology has begun to analyze the far more complex questions posed by the ecological world. But it does so with a background of oversimplified models, whether verbal or mathematical, that we are continually trying to fit our data into. Square pegs into round holes.

Part of this problem arises from the hierarchy of science in which physics and in particular mathematics are ranked as the ideals of science to which we should all strive. It is another verbal model of the science world constructed after the fact with little attention to the details of how physics and the other hard sciences have actually progressed over the past three centuries.

Sciences also rank high in the public mind when they provide humans with more gadgets and better cars and airplanes, so that technology and science are always confused. Physics led to engineering, which led to all our modern gadgets and progress. Biology has assisted medicine in continually improving human health, and natural history has enriched our lives by raising our appreciation of biodiversity. But ecology has provided a less clearly articulated vision for humans, with a new list of commandments that seem to inhibit economic ‘progress’. Much of what we find in conservation biology and wildlife management simply states the obvious: humans have made a terrible mess of life on Earth through extinctions, overharvesting, pollution of lakes and the ocean, and invasive weeds, among other things. In some sense ecologists are like the priests of old, warning us that God or some spiritual force will punish us if we violate some commandments or regulations. In our case it is the Earth that suffers from poorly thought-out human alterations, and, in a nutshell, CO2 is the new god that will indeed guarantee that the end is near. No one really wants to hear or believe this, if we accept the polls taken in North America.

So the bottom line for ecologists should be to concentrate on the complex questions posed by the biological world, and try first to understand the problems and second to suggest some way to solve them. Much easier said than done, as we can see from the current economic mess in what might be a sister science.

Piketty, T. 2014. Capital in the Twenty-First Century. Belknap Press, Harvard University, Boston. 696 pp. ISBN 9780674430006