Tag Archives: progress in ecology

On Questionable Research Practices

Ecologists and evolutionary biologists are being tarred and feathered along with many other scientists accused of questionable research practices. So says this article in “The Conversation” on the web:
https://theconversation.com/our-survey-found-questionable-research-practices-by-ecologists-and-biologists-heres-what-that-means-94421?utm_source=twitter&utm_medium=twitterbutton

Read this article if you have time but here is the essence of what they state:

“Cherry picking or hiding results, excluding data to meet statistical thresholds and presenting unexpected findings as though they were predicted all along – these are just some of the “questionable research practices” implicated in the replication crisis psychology and medicine have faced over the last half a decade or so.

“We recently surveyed more than 800 ecologists and evolutionary biologists and found high rates of many of these practices. We believe this to be first documentation of these behaviours in these fields of science.

“Our pre-print results have certain shock value, and their release attracted a lot of attention on social media.

  • 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking)
  • 42% had collected more data after inspecting whether results were statistically significant (a form of “p hacking”)
  • 51% reported an unexpected finding as though it had been hypothesised from the start (known as “HARKing”, or Hypothesising After Results are Known).”

It is worth looking at these claims a bit more analytically. First, the fact that more than 800 ecologists and evolutionary biologists were surveyed tells you little about the reliability of these results unless you can be convinced that this is a random sample. Most surveys are non-random, yet they are reported as though they were random, reliable samples.

Failing to report results is common in science for a variety of reasons that have nothing to do with questionable research practices. Many graduate theses contain results that are never published. Does this mean their data are being hidden? Many results are not reported because the study did not find the expected result. This sounds awful until you realize that journals often turn down papers because they are not exciting enough, even though the results are completely reliable. Other results are not reported because the investigator realized, once the study was complete, that it had not been carried on long enough, and the money had run out to do more research. One would have to have considerable detail about each study to know whether or not these 64% of researchers were “cherry picking”.

Alas, the next problem is more serious. The 42% who are accused of “p-hacking” were possibly just using sequential sampling or using a pilot study to estimate the statistical parameters needed for a power analysis. Any study that uses replication in time, a highly desirable attribute of an ecological study, would be vilified by this rule. This complaint echoes the statistical advice not to use p-values at all (Ioannidis 2005, Bruns and Ioannidis 2016) and refers back to complaints about inappropriate uses of statistical inference (Amrhein et al. 2017, Forstmeier et al. 2017). The appropriate solution to this problem is to have a defined experimental design with specified hypotheses and predictions rather than an open-ended observational study.
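To see why this distinction matters, here is a minimal simulation sketch (in Python, with invented sample sizes) of the specific behaviour the survey labels p-hacking: testing, peeking at the p-value, and then collecting more data only when the first test is not significant. Under a true null effect this optional stopping inflates the false-positive rate well above the nominal 5%, whereas a pre-registered design or a genuine power analysis keeps it controlled.

```python
# Sketch: why "collect more data after peeking" inflates false positives.
# All numbers (sample sizes, number of simulations) are illustrative only.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_sims, n1, n_extra, alpha = 5000, 20, 20, 0.05

fixed_hits = 0    # significant results with a fixed, pre-planned sample size
peeking_hits = 0  # significant results when extra data are added after peeking

for _ in range(n_sims):
    # Two groups drawn from the SAME distribution: the null hypothesis is true.
    a, b = rng.normal(size=n1), rng.normal(size=n1)
    p = stats.ttest_ind(a, b).pvalue
    fixed_hits += p < alpha

    if p >= alpha:  # not significant? collect more data and test again
        a = np.concatenate([a, rng.normal(size=n_extra)])
        b = np.concatenate([b, rng.normal(size=n_extra)])
        p = stats.ttest_ind(a, b).pvalue
    peeking_hits += p < alpha

print(f"False-positive rate, fixed design:  {fixed_hits / n_sims:.3f}")   # about 0.05
print(f"False-positive rate, after peeking: {peeking_hits / n_sims:.3f}") # noticeably higher
```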

The third problem, about unexpected findings, hits at an important aspect of science, the uncovering of interesting and important new results. It is an important point and was warned about long ago by Medawar (1963) and emphasized recently by Forstmeier et al. (2017). The general solution should be that novel results in science must be considered tentative until they can be replicated, so that science becomes a self-correcting process. But the temptation to emphasize a new result is hard to restrain in an era of difficult job searches and media attention to novelty. Perhaps the message is that you should read any “unexpected findings” in Science and Nature with a degree of skepticism.

The cited article published in “The Conversation” goes on to discuss some possible interpretations of what these survey results mean. The authors bend over backwards to indicate that these survey results do not mean we should distrust the conclusions of science, which unfortunately is exactly the message some parts of the public media have emphasized. Distrust of science can be a justification for rejecting climate change data and rejecting the value of immunizations against diseases. In an era of declining trust in science, these kinds of trivial surveys have shock value but are of little use to scientists trying to sort out the details of how ecological and evolutionary systems operate.

A significant source of these concerns flows from the literature on medical fads and ‘breakthroughs’ announced every day by media searching for ‘news’ (e.g. “eat butter”, “do not eat butter”). The result is an almost comical caricature of how good scientists really operate. An essential assumption of science is that scientific results are not written in stone but are always subject to additional testing and modification or rejection. But one consequence is that we get a parody of science that says “you can’t trust anything you read” (e.g. Ashcroft 2017). Perhaps we just need to remind ourselves to be critical, that good science is evidence-based, and then remember George Bernard Shaw’s comment:

Success does not consist in never making mistakes but in never making the same one a second time.

Amrhein, V., Korner-Nievergelt, F., and Roth, T. 2017. The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research. PeerJ  5: e3544. doi: 10.7717/peerj.3544.

Ashcroft, A. 2017. The politics of research-Or why you can’t trust anything you read, including this article! Psychotherapy and Politics International 15(3): e1425. doi: 10.1002/ppi.1425.

Bruns, S.B., and Ioannidis, J.P.A. 2016. p-Curve and p-Hacking in observational research. PLoS ONE 11(2): e0149144. doi: 10.1371/journal.pone.0149144.

Forstmeier, W., Wagenmakers, E.-J., and Parker, T.H. 2017. Detecting and avoiding likely false-positive findings – a practical guide. Biological Reviews 92(4): 1941-1968. doi: 10.1111/brv.12315.

Ioannidis, J.P.A. 2005. Why most published research findings are false. PLOS Medicine 2(8): e124. doi: 10.1371/journal.pmed.0020124.

Medawar, P.B. 1963. Is the scientific paper a fraud? Pp. 228-233 in The Threat and the Glory. Edited by P.B. Medawar. HarperCollins, New York. ISBN 978-0-06-039112-6

Three Approaches to Ecology

I ask here why ecology is not appreciated as a science at a time when it is critical to the survival of the world as we know it. The first question we need to answer is whether this premise is correct. I offer only one example. A university zoology department has recently produced a discussion paper on its plans for faculty recruitment over the next 15 years. This document does not include the word “ecology” anywhere in its forward planning. It is probably not unusual for biology or zoology departments in major universities to downplay ecology when there is so much excitement in molecular biology, but it is an indicator that ecology is not seen as a good place to put your money and reputation while you await a Nobel Prize. So if we accept the initial premise that ecology is not appreciated, we might ask why this situation exists, a point raised long ago by O’Connor (2000). Here are a few thoughts on the matter.

There are three broad approaches to the science of ecology – theoretical ecology, empirical ecology, and applied ecology. These three areas of ecology rarely talk to each other, although one might hope that they could in future evolve into a seamless thread of science.

Theoretical ecology deals with the mathematical world that has too often only a tangential concern with ecological problems. It has its own journals and a whole set of elegant discussions that have few connections to the real world. It is most useful for exploring what might be if we make certain mathematical assumptions. It is without question the most prestigious part of the broad science of ecology, partly because it involves elegant mathematics and partly because it does not get involved in all the complexities of real-world ecological systems. It is the physics of ecology. As such it carries on in its own world and tends to be ignored by most of those working in the other two broad areas of ecology.

Empirical ecology has set itself the task of understanding how the natural world works at the level of individuals, populations, communities and ecosystems. In its pure form it does not care about solving practical ecological or environmental problems, but its practitioners assume, probably correctly, that the information they provide will in fact be useful now or in the future. It seeks generality but rarely finds it, because all individuals and species differ in how they play the ecological game of survival. If it has a mantra, it is “the devil is in the details”. The problem is that the details of empirical ecology are boring to politicians, business people, and much of the television generation now operating with a 7-second or 140-character limit on concentration.

Applied ecology is where the action is now, and if you wish to be relevant and topical you should be an applied ecologist, whether a conservation biologist, a forester, or an agricultural scientist. The mantra of applied ecologists is to do no harm to the environment while solving real-world problems. Applied ecologists are forced to put the human imprint into empirical ecology, so they are very much concerned with declining populations and extinctions of plants and animals. The main, but not the sole, impact of humans is climate change, so much of applied ecology traces back to the impacts of climate change on ecosystems, compounded by an increasing human population with rising expectations. But applied ecologists are always behind the environmental problems of the day, because the issues multiply faster than possible solutions can be evaluated. This ought to make for high employment among applied ecologists, but in fact the opposite seems to be happening because governments too often avoid long-term problems that extend beyond their 4-year mandate. If you do not agree, think climate change.

So the consequence is that we have three independent worlds out there. Applied ecologists are too busy to apply the successful paradigms of empirical ecology to their problems because they are under strict time limits set by line managers who need immediate action on problems. They must therefore fire off solutions like golf balls in all directions, hoping that some might actually help solve problems. Empirical ecologists may not be much help to applied ecologists if they are overwhelmed by the details of their particular study system and constrained by the ‘publish or perish’ mentality of the granting agencies.

Finally, we lay on top of all this a lack of funding in the environmental sciences for investigating and solving both immediate and long-term ecological problems. And I am back to my favourite quote in the ecological literature:

“Humans, including ecologists, have a peculiar fascination with attempting to correct one ecological mistake with another, rather than removing the source of the problem.” (Schindler 1997).

What can we do about this? Three things. Pressure our politicians to increase funding for long-term environmental problems; this will provide the person-power to find and test solutions to our known problems. Vote with your ballot and your feet to improve sustainability. And, whether you are young or old, strive to do no harm to the Earth. If all this is too difficult, take some practical advice: do not buy a house in Miami Beach, or any house near the beach. Do something for the environment every day.

 

O’Connor, R.J. (2000) Why ecology lags behind biology. The Scientist 14(20):35. (October 16, 2000).

Schindler, D.W. (1997) Liming to restore acidified lakes and streams: a typical approach to restoring damaged ecosystems? Restoration Ecology 5:1-6

 

On Ecological Predictions

The gold standard of ecological studies is the understanding of a particular ecological issue or system and the ability to predict the operation of that system in the future. A simple example is the masting of trees (Pearse et al. 2016). Mast seeding is synchronous and highly variable seed production among years by a population of perennial plants. One ecological question is what environmental drivers cause these masting years and what factors can be used to predict mast years. Weather cues and plant resource states presumably interact to determine mast years. The question I wish to raise here, given this widely observed natural history event, is how good our predictive models can be on a spatial and temporal scale.

On a spatial scale masting events can be widespread or localized, and this provides some clues to the weather variables that might be important. Assuming we can derive weather models for prediction, we face two often unknown constraints – space and time. If we can derive a weather model for trees in New Zealand, will it also apply to trees in Australia or California? Or, on a more constrained geographical view, if it applies on the South Island of New Zealand, will it also apply on the North Island? At the other extreme, must we derive models for every population of a particular plant in different areas, so that predictability is spatially limited? We hope not, and we work on the assumption of more spatial generality than we can actually measure on our particular small study areas.

The temporal stability of our explanations is now particularly worrisome because of climate change. If we have a good model of masting for a particular tree species in 2017, will it still work in 2030, 2050 or 2100? A physicist would never ask such a question, since a “scientific law” is independent of time. But biology in general, and ecology in particular, is not time-independent, both because of evolution and now especially because of a changing climate. We have not faced up to whether we must check our “ecological laws” over and over again as the environment changes, and, if we must, what the time scale of rechecking should be. Perhaps this question can be answered by determining the speed of potential evolutionary change in species groups. If viruses can evolve on a scale of months or years, we must be eternally vigilant in asking whether the flu virus of 2017 is the same as that of 2016. We should not stop virus research and declare that we have sorted out some universal model that is the equivalent of a law of physics.

The consequences of these simple observations are not simple. One consequence is that monitoring is an essential ecological activity. But in most ecological funding agencies monitoring is thought to be unscientific, not leading to progress, and mere stamp collecting. So we have to establish, in the same way that every country supports a Weather Bureau, an equivalent ecological monitoring bureau. We do have such bureaus for some ecological systems that make money, like marine fisheries, but most other ecosystems are left in limbo with little or no funding, on the generalized assumption that “mother or father nature will take care of itself”, or, as expressed more elegantly by a cabinet minister who must remain nameless, “there is no need for more forestry research, as we know everything we need to know already”. The political urge to cut research funding falls all too heavily on environmental research.

But ecologists are not just ‘stamp collectors’ as some might think. We need to develop generality, but at a time scale and a spatial scale that are reliable and useful for resolving the problem that gave rise to the research. Typically for ecological issues this time scale would be 10-25 years, and a rule of thumb might be 10 generations of the organisms being studied. For many of our questions an annual scale might be most useful, but for long-lived plants and animals we must be thinking of decades or even centuries. Some practical examples from Pacifici et al. (2013): if you study field voles (Microtus spp.), you can typically complete a study of 10 generations in about 3.5 years. If you study red squirrels (Tamiasciurus hudsonicus), the same 10 generations will cost you 39 years, and for red foxes (Vulpes vulpes) 58 years. For wildebeest (Connochaetes taurinus) in the Serengeti, 10 generations will take you 80 years, and if you prefer red kangaroos (Macropus rufus) it will take about 90 years. All these estimates are very approximate, but they give you an idea of what the time scale of a long-term study might be. Except for the rodent example, all these study durations are nearly impossible to achieve, and the question for ecologists is this: should we be concerned about these time scales, or should we scale everything to the human research time scale?
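The arithmetic behind these figures is simply the number of generations multiplied by generation length. A minimal sketch, with generation lengths back-calculated from the durations quoted above rather than taken directly from the Pacifici et al. (2013) database:

```python
# Rough study durations for a 10-generation study.
# Generation lengths (years) are back-calculated from the figures quoted in the
# text above, not looked up in Pacifici et al. (2013).
generation_length_years = {
    "field vole (Microtus spp.)": 0.35,
    "red squirrel (Tamiasciurus hudsonicus)": 3.9,
    "red fox (Vulpes vulpes)": 5.8,
    "wildebeest (Connochaetes taurinus)": 8.0,
    "red kangaroo (Macropus rufus)": 9.0,
}

n_generations = 10
for species, gen_length in generation_length_years.items():
    duration = n_generations * gen_length
    print(f"{species}: ~{duration:.0f} years for {n_generations} generations")
```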

The spatial scale has expanded greatly for ecologists with the advent of radio transmitters and the possibility of satellite tracking. These technological advances allow many conservation questions regarding bird migration to be investigated (e.g. Oppel et al. 2015). But no matter what the spatial scale of interest in a research or management program, variation among individuals and sites must be analyzed by means of the replication of measurements or manipulations at several sites. The spatial scale is dictated by the question under investigation, and the issue of fragmentation has focused attention on the importance of spatial movements both for ecological and evolutionary questions (Betts et al. 2014).

And the major question remains: can we construct an adequate theory of ecology from a series of short-term, small area or small container studies?

Betts, M.G., Fahrig, L., Hadley, A.S., Halstead, K.E., Bowman, J., Robinson, W.D., Wiens, J.A. & Lindenmayer, D.B. (2014) A species-centered approach for uncovering generalities in organism responses to habitat loss and fragmentation. Ecography, 37, 517-527. doi: 10.1111/ecog.00740

Oppel, S., Dobrev, V., Arkumarev, V., Saravia, V., Bounas, A., Kret, E., Velevski, M., Stoychev, S. & Nikolov, S.C. (2015) High juvenile mortality during migration in a declining population of a long-distance migratory raptor. Ibis, 157, 545-557. doi: 10.1111/ibi.12258

Pacifici, M., Santini, L., Di Marco, M., Baisero, D., Francucci, L., Grottolo Marasini, G., Visconti, P. & Rondinini, C. (2013) Database on generation length of mammals. Nature Conservation, 5, 87-94. doi: 10.3897/natureconservation.5.5734

Pearse, I.S., Koenig, W.D. & Kelly, D. (2016) Mechanisms of mast seeding: resources, weather, cues, and selection. New Phytologist, 212 (3), 546-562. doi: 10.1111/nph.14114

Ecological Alternative Facts

It has become necessary to revise my recent ecological thinking about the principles of ecology along the lines now required in the New World Order. I list here the thirteen cardinal principles of the new ecology 2017:

  1. Population growth is unlimited and is no longer subject to regulation.
  2. Communities undergo succession to the final equilibrium state of the 1%.
  3. Communities and ecosystems are resilient to any and all disturbances and operate best when challenged most strongly, for example with oil spills.
  4. Resources are never limiting under any conditions for the 1% and heavy exploitation helps them to trickle down readily to assist the other 99%.
  5. Overexploiting populations is good for the global ecosystem because it gets rid of the species that are wimps.
  6. Mixing of faunas and floras has been shown over the last 300 years to contribute to the increasing ecological health of Earth.
  7. Recycling is unnecessary in view of recent advances in mining technology.
  8. Carbon dioxide is a valuable resource for plants and we must increase its contribution to atmospheric chemistry.
  9. Climate change is common and advantageous since it occurs from night to day, and has always been with us for many millions of years.
  10. Evolution maximizes wisdom and foresight, especially in mammals.
  11. Conservation of less fit species is an affront to alternative natural laws that were recognized during the 18th century and are now mathematically defined in the new synthetic theory of economic and ecological fitness.
  12. Scientific experiments are no longer necessary because we have computers and technological superiority.
  13. Truth in science is no longer necessary and must be balanced against equally valid post-truth beliefs.

The old ecology, now superseded, was illustrated in Krebs (2016), and is already out of date. Recommendations for other alternative ecological facts will be welcome. Please use the comments.

Krebs, C.J. (2016) Why Ecology Matters. University of Chicago Press, Chicago. 208 pp.

Technology Can Lead Us Astray

Our iPhones teach us very subtly to have great faith in technology. This leads the public at large to think that technology will solve large issues like greenhouse gases and climate change. But as scientists we should remember that technology must be looked at very carefully when it promises a shortcut to ecological measurement and understanding. For the past 35 years satellite data have been available to calculate an index of greening for vegetation over large landscapes. The available index is called NDVI, the normalized difference vegetation index, calculated as the difference between the near-infrared and red light reflected from the vegetation being surveyed, divided by their sum. I am suspicious that NDVI measurements tell ecologists anything that is useful for understanding vegetation dynamics and ecosystem stability. Probably this is because I am focused on local-scale events and landscapes of hundreds of km², and in particular on what is happening in the forest understory. The key to one’s evaluation of these satellite technologies most certainly lies in the questions under investigation.
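For readers who have not met the index, the calculation itself is trivial: NDVI is the near-infrared reflectance minus the red reflectance, divided by their sum, so values run from −1 to +1 with dense green vegetation near the upper end. A minimal sketch (the band values are invented):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized Difference Vegetation Index: (NIR - Red) / (NIR + Red)."""
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Invented surface-reflectance values for three pixels:
# dense forest, sparse grassland, bare soil.
nir = np.array([0.50, 0.30, 0.25])
red = np.array([0.08, 0.15, 0.22])
print(ndvi(nir, red))  # roughly [0.72, 0.33, 0.06]
```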

A whole array of different satellites has been used to measure NDVI, and since the more recent satellites have different precision and slightly different physical characteristics, there is some difficulty in comparing results from different satellites in different years if one wishes to study long-term trends (Guay et al. 2014). It is assumed that NDVI measurements can be translated into aboveground net primary production and so can be used to begin answering ecological questions about seasonal and annual changes in primary production and to address general issues about the impact of rising CO2 levels on ecosystems.

All inferences about changes in primary production on a broad scale hinge on the reliability of NDVI as an accurate measure of net primary production. Much has been written about the use of NDVI measures and the need for ground truthing. Community ecologists may be concerned about specific components of the vegetation rather than an overall green index, and the question arises whether NDVI measures in a forest community are able to capture changes in both the trees and the understory, or for that matter in the ground vegetation. For overall carbon capture estimates, a greenness index may be accurate enough, but if one wishes to determine whether deciduous trees are replacing evergreen trees, NDVI may not be very useful.

How can we best validate satellite-based estimates of primary productivity? To do this on a landscape scale we need large areas with ground truthing. Field crops are one potential source of such data. Kang et al. (2016) used crops to quantify the relationship between remotely sensed leaf-area index and other satellite measures such as NDVI. The relationships are clear in a broad sense but highly variable in detail, so that the ability to predict crop yields from satellite data at the local level is subject to considerable error. Johnson (2016, Fig. 6, p. 75) found the same problem with crops such as barley and cotton (see the sample data set below). So there is good news and bad news from these kinds of analyses. The good news is that we can have extensive global coverage of trends in vegetation parameters and crop production; the bad news is that at the local level this information may not be helpful for studies that require high precision, for example in local values of net primary production. Simply assuming that satellite measures are accurate measures of ecological variables like net aboveground primary production is too optimistic at present, and work continues on possible improvements.

Many of the critical questions about community changes associated with climate change cannot, in my opinion, be answered by remote sensing unless much more ground-based research is carried out concurrently with the satellite imagery. We must look critically at the available data. Blanco et al. (2016), for example, compared NDVI estimates from MODIS satellite data with primary production monitored on the ground in harvested plots in western Argentina. The regression between NDVI and estimated primary production had R² values of 0.35 for the overall annual values and 0.54 for the data restricted to the peak of annual growth. Whether this is a satisfactory statistical association is up to plant ecologists to decide. I think it is not, and substituting p-values for the practical utility of such relationships is poor ecology. Many more of these kinds of studies need to be carried out.
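The kind of validation at issue can be stated in a few lines: regress the harvested, ground-truth production on NDVI and ask not only how much variance is explained but how large the prediction error is for the use you have in mind. A minimal sketch with fabricated numbers, chosen only to show the calculation and not to represent the Blanco et al. (2016) data:

```python
import numpy as np

# Fabricated paired measurements: satellite NDVI vs. harvested ANPP (g/m^2/yr).
ndvi = np.array([0.21, 0.28, 0.33, 0.35, 0.41, 0.47, 0.52, 0.60])
anpp = np.array([  90,  160,  130,  210,  180,  300,  240,  330])

slope, intercept = np.polyfit(ndvi, anpp, 1)   # simple linear regression
predicted = slope * ndvi + intercept

ss_res = np.sum((anpp - predicted) ** 2)
ss_tot = np.sum((anpp - anpp.mean()) ** 2)
r_squared = 1 - ss_res / ss_tot
rmse = np.sqrt(np.mean((anpp - predicted) ** 2))

print(f"R^2  = {r_squared:.2f}")
print(f"RMSE = {rmse:.0f} g/m^2/yr")  # the error that matters for local prediction
```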

The advent of using drones for very detailed spectral data on local study areas will open new opportunities to derive estimates of primary production. For the present I think we should be aware that NDVI and its associated measures of ‘greenness’ from satellites may not be a very reliable measure for local or landscape values of net primary production. Perhaps it is time to move back to the field and away from the computer to find out what is happening to global plant growth.

Blanco, L.J., Paruelo, J.M., Oesterheld, M., and Biurrun, F.N. 2016. Spatial and temporal patterns of herbaceous primary production in semi-arid shrublands: a remote sensing approach. Journal of Vegetation Science 27(4): 716-727. doi: 10.1111/jvs.12398.

Guay, K.C., Beck, P.S.A., Berner, L.T., Goetz, S.J., Baccini, A., and Buermann, W. 2014. Vegetation productivity patterns at high northern latitudes: a multi-sensor satellite data assessment. Global Change Biology 20(10): 3147-3158. doi: 10.1111/gcb.12647.

Johnson, D.M. 2016. A comprehensive assessment of the correlations between field crop yields and commonly used MODIS products. International Journal of Applied Earth Observation and Geoinformation 52(1): 65-81. doi: 10.1016/j.jag.2016.05.010.

Kang, Y., Ozdogan, M., Zipper, S.C., Roman, M.O., and Walker, J. 2016. How universal is the relationship between remotely sensed vegetation indices and crop leaf area index? A global assessment. Remote Sensing 8(7): 597. doi: 10.3390/rs8070597.

[Figure: Cotton yield vs. NDVI index (sample data set)]

A Modest Proposal for a New Ecology Journal

I read the occasional ecology paper and ask myself how this particular paper ever got published when it is full of elementary mistakes and shows no understanding of the literature. But alas we can rarely do anything about this as individuals. If you object to what a particular paper has concluded because of its methods or analysis, it is usually impossible to submit a critique that the relevant journal will publish. After all, which editor would like to admit that he or she let a hopeless paper through the publication screen? There are some exceptions to this rule, and I list two examples below in the papers by Barraquand (2014) and Clarke (2014). But if you search the Web of Science you will find few such critiques of published ecology papers.

One solution jumped to mind for this dilemma: start a new ecology journal, perhaps entitled Misleading Ecology Papers: Critical Commentary Unfurled. Papers submitted to this new journal would be restricted to a total of 5 pages and 10 references, and all polemics and personal attacks would be forbidden. The key for submissions would be to state a critique succinctly and to suggest a better way to construct the experiment or study, a more rigorous method of analysis, or key papers that were missed because they were published before 2000. These rules would potentially leave a large gap for some very poor papers to escape criticism, papers that would require a critique longer than the original paper. Perhaps one very long critique could be distinguished as a Review of the Year paper. Alternatively, some long critiques could be published in book form (Peters 1991) and would not require this new journal. The Editor of the journal would require all critiques to be signed by their authors, but would permit, in exceptional circumstances, authors to remain anonymous to prevent job losses or, in more extreme cases, execution by the Mafia. Critiques of earlier critiques would be permitted in the new journal, but an infinite regress would be discouraged. Book reviews could also be the subject of a critique; the great shortage of critical book reviews in the current publication blitz is another gap in ecological science that existing journals largely fail to fill. This new journal would of course be electronic, so there would be no page charges, and all articles would be open access. All the major bibliographic databases like the Web of Science would be encouraged to index the publications, and a DOI would be assigned to each paper through CrossRef.

If this new journal became highly successful, it would no doubt be purchased by Wiley-Blackwell or Springer for several million dollars, and if this occurred, the profits would accrue proportionally to all the authors who had published papers to make this journal popular. The sale of course would be contingent on the purchaser guaranteeing not to cancel the entire journal to prevent any criticism of their own published papers.

At the moment criticism of ecological science does not appear until several years after a poor paper is published, and by that time the Donald Rumsfeld Effect will have occurred, applying the label of truth to the conclusions of this poor work. For one example, most of the papers critiqued by Clarke (2014) were more than 10 years old. By making the feedback loop much tighter, certainly within one year of a poor paper appearing, budding ecologists could be intercepted before being led off course.

This journal would not be popular with everyone. Older ecologists often strive mightily to prevent any criticism of their prior conclusions, and some young ecologists make their careers by pointing out how misleading some of the papers of the older generation are. This new journal would assist in creating a more egalitarian ecological world by producing humility in older ecologists and a greater sense of achievement in young ecologists who must build up their status in the science. Finally, the new journal would be a focal point for graduate seminars in ecology by bringing together and identifying the worst of the current crop of poor papers in ecology. Progress would be achieved.

 

Barraquand, F. 2014. Functional responses and predator–prey models: a critique of ratio dependence. Theoretical Ecology 7(1): 3-20. doi: 10.1007/s12080-013-0201-9.

Clarke, P.J. 2014. Seeking global generality: a critique for mangrove modellers. Marine and Freshwater Research 65(10): 930-933. doi: 10.1071/MF13326.

Peters, R.H. 1991. A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp. ISBN:0521400171

 

Climate Change and Ecological Science

One dominant paradigm of the ecological literature at the present time is what I would like to call the Climate Change Paradigm. In its clearest form, it states that all temporal ecological changes now observed are explicable by climate change. The test of this hypothesis is typically a correlation between some event (a population decline, the invasion of a new species into a community, or the outbreak of a pest species) and some measure of climate. Given clever statistics and sufficient searching of many climatic measurements, with and without time lags, these correlations are often sanctified by p < 0.05. Should we consider this progress in ecological understanding?

An early confusion in relating climate fluctuations to population changes arose from labelling climate a density-independent factor within the density-dependent model of population dynamics. Fortunately, this massive confusion was sorted out by Enright (1976), but alas I still see the error repeated in recent papers about population changes. I think much of the early confusion over climatic impacts on populations was due to classifying all climatic impacts as density-independent factors.

One’s first response might be that many of the changes we see in populations and communities are indeed related to climate change. But the key here is to validate this conclusion, and to do this we need to identify the mechanisms by which climate change is acting on our particular species or species group. The search for these mechanisms is much more difficult than the demonstration of a correlation. To be more convincing one might predict that the observed correlation will continue for the next 5 (10, 20?) years and then gather the data to validate the correlation. Many of these published correlations are so weak as to preclude any possibility of validation within the lifetime of a research scientist. So the gold standard must be the deciphering of the mechanisms involved.

And a major concern is that many of the validations of the climate change paradigm on short time scales are likely to be spurious correlations. Those who need a good laugh over the issue of spurious correlation should look at Vigen (2015), a book which illustrates all too well the fun of looking for silly correlations. Climate is a very complex variable and a nearly infinite number of measurements can be concocted with temperature (mean, minimum, maximum), rainfall, snowfall, or wind, analyzed over any number of time periods throughout the year. We are always warned about data dredging, but it is often difficult to know exactly what authors of any particular paper have done. The most extreme examples are possible to spot, and my favorite is this quotation from a paper a few years ago:

“A total of 864 correlations in 72 calendar weather periods were examined; 71 (eight percent) were significant at the p < 0.05 level. … There were 12 negative correlations, p < 0.05, between the number of days with (precipitation) and (a demographic measure). A total of 45 positive correlations, p < 0.05, between temperatures and (the same demographic measure) were disclosed…”
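The arithmetic behind the skepticism is simple: if none of those 864 weather variables had any real effect, a 5% significance threshold would still hand you roughly 43 “significant” correlations by chance alone, so more than half of the 71 reported are about what chance would supply even if nothing real were going on. A minimal sketch of that calculation, assuming (unrealistically) that the tests are independent:

```python
n_tests, alpha = 864, 0.05
observed_significant = 71

# Expected number of "significant" correlations if every null hypothesis were true.
expected_by_chance = n_tests * alpha
print(f"Expected by chance alone: {expected_by_chance:.0f}")          # about 43
print(f"Reported as significant:  {observed_significant}")
print(f"Excess over chance:       {observed_significant - expected_by_chance:.0f}")
```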

The climate change paradigm is well established in biogeography and the major shifts in vegetation that have occurred in geological time are well correlated with climatic changes. But it is a large leap of faith to scale this well established framework down to the local scale of space and a short-term time scale. There is no question that local short term climate changes can explain many changes in populations and communities, but any analysis of these kinds of effects must consider alternative hypotheses and mechanisms of change. Berteaux et al. (2006) pointed out the differences between forecasting and prediction in climate models. We desire predictive models if we are to improve ecological understanding, and Berteaux et al. (2006) suggested that predictive models are successful if they follow three rules:

(1) Initial conditions of the system are well described (inherent noise is small);

(2) No important variable is excluded from the model (boundary conditions are defined adequately);

(3) Variables used to build the model are related to each other in the proper way (aggregation/representation is adequate).

Like most rules for models, whether these conditions are met is rarely known when the model is published, and we need subsequent data from the real world to see if the predictions are correct.

I am much less convinced that forecasting models are useful in climate research. Forecasting models describe an ecological situation based on correlations among the available measurements, with no clear mechanistic model of the ecological interactions involved. My concern was highlighted by Myers (1998), who investigated, for fish populations, the success of published correlations between juvenile recruitment and environmental factors (typically temperature) and found that very few forecasting models were reliable when tested against additional data obtained after publication. It would be useful for someone to carry out a similar analysis for bird and mammal population models.
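Myers’ test is easy to mimic: fit the correlation on the data available at “publication”, then score its predictions against the years that follow. A minimal sketch with simulated data in which temperature has, by construction, no effect on recruitment, so any fitted relationship is spurious and fails out of sample (all numbers invented):

```python
import numpy as np

rng = np.random.default_rng(7)

# Simulated annual data: temperature has NO real effect on recruitment here,
# so any fitted recruitment-temperature relationship is spurious.
years = 40
temperature = rng.normal(10.0, 1.5, size=years)
recruitment = rng.lognormal(mean=2.0, sigma=0.5, size=years)

train, test = slice(0, 20), slice(20, 40)  # pre- vs post-"publication" years

slope, intercept = np.polyfit(temperature[train], recruitment[train], 1)

def r_squared(obs, pred):
    ss_res = np.sum((obs - pred) ** 2)
    ss_tot = np.sum((obs - obs.mean()) ** 2)
    return 1 - ss_res / ss_tot

r2_train = r_squared(recruitment[train], slope * temperature[train] + intercept)
r2_test = r_squared(recruitment[test], slope * temperature[test] + intercept)
print(f"R^2 on the data used to fit the model:   {r2_train:.2f}")
print(f"R^2 on data collected after publication: {r2_test:.2f}")  # near zero or negative
```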

Small mammals show some promise for predictive models in some ecosystems. The analysis by Kausrud et al. (2008) illustrates a good approach to incorporating climate into predictive explanations of population change in Norwegian lemmings that involve interactions between climate and predation. The best approach in developing these kinds of explanations and formulating them into models is to determine how the model performs when additional data are obtained in the years to follow publication.

The bottom line is to avoid spurious climatic correlations by describing and evaluating mechanistic models that are based on observable biological factors. And then make predictions that can be tested in a realistic time frame. If we cannot do this, we risk publishing fairy tales rather than science.

Berteaux, D., et al. (2006) Constraints to projecting the effects of climate change on mammals. Climate Research, 32, 151-158. doi: 10.3354/cr032151

Enright, J. T. (1976) Climate and population regulation: the biogeographer’s dilemma. Oecologia, 24, 295-310.

Kausrud, K. L., et al. (2008) Linking climate change to lemming cycles. Nature, 456, 93-97. doi: 10.1038/nature07442

Myers, R. A. (1998) When do environment-recruitment correlations work? Reviews in Fish Biology and Fisheries, 8, 285-305. doi: 10.1023/A:1008828730759

Vigen, T. (2015) Spurious Correlations, Hyperion, New York City. ISBN: 978-031-633-9438

Fishery Models and Ecological Understanding

Anyone interested in population dynamics, fisheries management, or ecological understanding in general will be interested to read the exchanges in Science, 23 April 2016, on the problem of understanding stock changes in the Atlantic cod (Gadus morhua) fishery in the Gulf of Maine. I think this exchange is important to read because it illustrates two general problems with ecological science – how to understand ecological changes with incomplete data, and how to extrapolate what is happening into taking some management action.

What we have here are sets of experts promoting a management view and others contradicting the suggested view. There is no question that ecologists have made much progress in understanding both marine and freshwater fisheries. Probably the total number of person-years of research on marine fishes like cod would dwarf that on all other ecological studies combined. Yet we are still arguing about fundamental processes in major marine fisheries. You will remember that the northern cod was one of the largest fisheries in the world when it began to be exploited in the 16th century, and that by the 1990s it had been driven to about 1% of its former abundance, almost to the status of a threatened species.

Pershing et al. (2015) suggested, based on data on a rise in sea surface temperature in the Gulf of Maine, that cod mortality had increased with temperature and this was causing the fishery management model to overestimate the allowable catch. Palmer et al. (2016) and Swain et al. (2016) disputed their conclusions, and Pershing et al. (2016) responded. The details are in these papers and I do not pretend to know whose views are closest to be correct.

But I’m interested in two facts. First, Science clearly thought this controversy was important and worth publishing, even in the face of a 99% rejection rate for all submissions to that journal. Second, it illustrates that ecology faces a lot of questions when it makes conclusions that natural resource managers should act upon. Perhaps it is akin to medicine in being controversial, even though it is all supposed to be evidence-based. It is hard to imagine physical scientists or engineers arguing so publicly over the design of a bridge or a hydroelectric dam. Why is it that ecologists so often spend time arguing with one another over this or that theory or research finding? If we admit that our conclusions about the world’s ecosystems are so meager and uncertain, does it mean we have a very long way to go before we can claim to be a hard science? We would hope not, but what is the evidence?

One problem so well illustrated here in these papers is the difficulty of measuring the parameters of change in marine fish populations and then tying these estimates to models that are predictive of changes required for management actions. The combination of less than precise data and models that are overly precise in their assumptions could be a deadly combination in the ecological management of natural resources.

Palmer, M.C., Deroba, J.J., Legault, C.M., and Brooks, E.N. 2016. Comment on “Slow adaptation in the face of rapid warming leads to collapse of the Gulf of Maine cod fishery”. Science 352(6284): 423-423. doi:10.1126/science.aad9674.

Pershing, A.J., Alexander, M.A., Hernandez, C.M., Kerr, L.A., Le Bris, A., Mills, K.E., Nye, J.A., Record, N.R., Scannell, H.A., Scott, J.D., Sherwood, G.D., and Thomas, A.C. 2016. Response to Comments on “Slow adaptation in the face of rapid warming leads to collapse of the Gulf of Maine cod fishery”. Science 352(6284): 423-423. doi:10.1126/science.aae0463.

Pershing, A.J., Alexander, M.A., Hernandez, C.M., Kerr, L.A., Le Bris, A., Mills, K.E., Nye, J.A., Record, N.R., Scannell, H.A., Scott, J.D., Sherwood, G.D., and Thomas, A.C. 2015. Slow adaptation in the face of rapid warming leads to collapse of the Gulf of Maine cod fishery. Science 350(6262): 809-812. doi:10.1126/science.aac9819.

Swain, D.P., Benoît, H.P., Cox, S.P., and Cadigan, N.G. 2016. Comment on “Slow adaptation in the face of rapid warming leads to collapse of the Gulf of Maine cod fishery”. Science 352(6284): 423-423. doi:10.1126/science.aad9346.

On Statistical Progress in Ecology

There is a general belief that science progresses over time, and given that the number of scientists is increasing, this is a reasonable first approximation. The history of statistics in ecology has been one of ever-increasing improvement in methods of analysis, accompanied by bandwagons. It is one of these bandwagons that I want to discuss here by raising a general question:

Has the introduction of new methods of analysis in biological statistics led to advances in ecological understanding?

This is a very general question and could be discussed at many levels, but I want to concentrate on the top level of statistical inference: old-style frequentist statistics, Bayesian methods, and information-theoretic methods. I am prompted to ask this question by my reviewing of many papers submitted to ecological journals in which the data are so buried by the statistical analysis that the reader is left confused about whether any progress has been made. Being amazed by the methodology is not the same as being impressed by the advance in ecological understanding.

Old-style frequentist statistics (read the Sokal and Rohlf textbook) has been criticized for concentrating on null hypothesis testing when everyone knows the null hypothesis is not correct. This has led to refinements in methods of inference that rely on effect size and predictive power, which are now standard in new statistical texts. Information-theoretic methods came in to fill the gap by making the data primary (rather than the null hypothesis) and asking which of several hypotheses best fits the data (Anderson et al. 2000). The key here was to recognize that one should have prior expectations, or several alternative hypotheses, in any investigation, as recommended in 1897 by Chamberlin. Bayesian analysis furthered the discussion not only by allowing several alternative hypotheses but also by making it possible to use prior information in the analysis (McCarthy and Masters 2005). Implicit in both information-theoretic and Bayesian analysis is the recognition that all of the alternative hypotheses might be incorrect, and that the hypothesis selected as ‘best’ might have very low predictive power.
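To make the information-theoretic step concrete: each candidate model receives an AIC score (for a least-squares fit, AIC = n·ln(RSS/n) + 2k up to a constant), and the AIC differences are converted into Akaike weights that express relative, not absolute, support, so the ‘best’ model can still predict poorly. A minimal sketch with invented data and two hypothetical candidate models:

```python
import numpy as np

rng = np.random.default_rng(3)

# Invented data: population density as a function of rainfall, with noise.
rainfall = np.linspace(200, 800, 30)
density = 5 + 0.02 * rainfall + rng.normal(0, 3, size=rainfall.size)

def aic_least_squares(y, y_hat, k):
    """AIC for a least-squares fit with k estimated parameters.
    (Counting the error variance as an extra parameter shifts every AIC by the
    same amount, so it does not change the comparison.)"""
    n = y.size
    rss = np.sum((y - y_hat) ** 2)
    return n * np.log(rss / n) + 2 * k

# Candidate 1: intercept only (no rainfall effect), 1 parameter.
aic_null = aic_least_squares(density, np.full_like(density, density.mean()), k=1)

# Candidate 2: linear rainfall effect, 2 parameters.
b1, b0 = np.polyfit(rainfall, density, 1)
aic_linear = aic_least_squares(density, b0 + b1 * rainfall, k=2)

aics = np.array([aic_null, aic_linear])
delta = aics - aics.min()
weights = np.exp(-0.5 * delta) / np.exp(-0.5 * delta).sum()
print(f"AIC intercept-only: {aic_null:.1f}   AIC linear: {aic_linear:.1f}")
print(f"Akaike weights: {weights.round(3)}")  # relative support among these two models only
```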

Two problems have arisen as a result of this change of focus in model selection. The first is the problem of testability. There is an implicit disregard for the old idea that models or conclusions from an analysis should be tested with further data, preferably data obtained independently of the original data used to find the ‘best’ model. The assumption seems to be that if we get further data, we should add it to the prior data and update the model so that it somehow begins to approach the ‘perfect’ model. This was the original definition of passive adaptive management (Walters 1986), which is now considered a poor model for natural resource management. The second problem is that the model selected as ‘best’ may be of little use for natural resource management because it has little predictive power. In management issues for the conservation or exploitation of wildlife there may be many variables that affect population changes, and it may not be possible to conduct active adaptive management for all of these variables.

The take-home message is that, whatever statistical methods we use, the conclusions of our papers need to include some measure of progress in ecological insight. The significance of our research will not be measured by the number of p-values, AIC values, BIC values, or complicated tables. The key question must be: what new ecological insights have been achieved by these methods?

Anderson, D.R., Burnham, K.P., and Thompson, W.L. 2000. Null hypothesis testing: problems, prevalence, and an alternative. Journal of Wildlife Management 64(4): 912-923.

Chamberlin, T.C. 1897. The method of multiple working hypotheses. Journal of Geology 5: 837-848 (reprinted in Science 148: 754-759 in 1965). doi:10.1126/science.148.3671.754.

McCarthy, M.A., and Masters, P.I.P. 2005. Profiting from prior information in Bayesian analyses of ecological data. Journal of Applied Ecology 42(6): 1012-1019. doi:10.1111/j.1365-2664.2005.01101.x.

Walters, C. 1986. Adaptive Management of Renewable Resources. Macmillan, New York.

 

On Critical Questions in Biodiversity and Conservation Ecology

Biodiversity can be a vague concept, with so many measurement variants that one wonders what exactly it is and how to incorporate ideas about biodiversity into scientific hypotheses. Even if we take the simplest concept, species richness, as the operational measure, many questions arise about the importance of the rare species that make up most of the biodiversity but so little of the biomass. How can we proceed to a better understanding of this nebulous ecological concept that we continually put before the public as needing their attention?

Biodiversity conservation relies on community and ecosystem ecology for guidance on how to advance scientific understanding. A recent paper by Turkington and Harrower (2016) articulates this very clearly by laying out 7 general questions for analyzing community structure for conservation of biodiversity. As such these questions are a general model for community and ecosystem ecology approaches that are needed in this century. Thus it would pay to look at these 7 questions more closely and to read this new paper. Here is the list of 7 questions from the paper:

  1. How are natural communities structured?
  2. How does biodiversity determine the function of ecosystems?
  3. How does the loss of biodiversity alter the stability of ecosystems?
  4. How does the loss of biodiversity alter the integrity of ecosystems?
  5. Diversity and species composition
  6. How does the loss of species determine the ability of ecosystems to respond to disturbances?
  7. How does food web complexity and productivity influence the relative strength of trophic interactions and how do changes in trophic structure influence ecosystem function?

Turkington and Harrower (2016) note that each of these 7 questions can be asked in at least 5 different contexts in the biodiversity hotspots of China:

  1. How do the observed responses change across the 28 vegetation types in China?
  2. How do the observed responses change from the low productivity grasslands of the Qinghai Plateau to higher productivity grasslands in other parts of China?
  3. How do the observed responses change along a gradient in the intensity of human use or degradation?
  4. How long should an experiment be conducted given that the immediate results are seldom indicative of longer-term outcomes?
  5. How does the scale of the experiment influence treatment responses?

There are major problems in all of this, as Turkington and Harrower (2016) and Bruelheide et al. (2014) have discussed. The first problem is to determine what the community is, or what the bounds of an ecosystem are. Community and ecosystem ecologists treat this as a trivial issue: one simply draws a circle around the particular area of interest for the study. But two points remain. Populations, communities, and ecosystems are open systems with no clear boundaries. In population ecology we can master this problem by analyzing the movements and dispersal of individuals. On a short time scale plants in communities are fixed in position, while their associated animals move on species-specific scales. Communities and ecosystems are not units but vary continuously in space and time, making their analysis difficult. The species present on 50 m² are not the same as those on another plot 100 m or 1000 m away, even if the vegetation types are labeled the same. So we replicate plots within what we define to be our community. If you are studying plant dynamics, you can experimentally place all the selected plant species in defined plots in a pre-arranged configuration for your planting experiments, but you cannot do this with animals except in microcosms. All experiments are place-specific, and if you consider climate change on a 100-year time scale, they are also time-specific. We can hope that generality is strong and our conclusions will apply in 100 years, but we do not know this now.

But we can do manipulative experiments, as these authors strongly recommend, and that brings a whole new set of problems, outlined for example in Bruelheide et al. (2014, Table 1, page 78) for a forestry experiment in southern China. Decisions about how many tree species to manipulate, in what size of plots, and at what planting density are all potentially critical to the conclusions we reach. But it is the time frame of hypothesis testing that is the great unknown. All these studies must be long term, but whether this means 10 years or 50 years can only be found out in retrospect. Is it better to have, for example, forestry experiments around the world carried out with identical protocols, or to adopt a laissez-faire approach with different designs, since we have no idea yet which design is best for answering these broad questions?

I suspect that the outline of broad questions given in Turkington and Harrower (2016) is at least a 100-year agenda, and we need to be concerned about how we can carry it forward in a world where the funding of research questions has a 3- or 5-year time frame. The only possible way forward, until we win the lottery, is for all researchers to carry out short-term experiments on very specific hypotheses within this framework. So every graduate student thesis in experimental community and ecosystem ecology is important to achieving the goals outlined in these papers. Even if this 100-year time frame is optimistic but achievable, we can progress on a shorter time scale by a series of detailed experiments on small parts of the community or ecosystem at hand. I note that some of the broad questions listed above have been around for more than 50 years without being answered. If we define our objectives more precisely and do the kinds of experiments that these authors suggest, we can move forward, not so much with the solution of grand ideas as with detailed experimental data on very precise questions about our chosen community. In this way we keep the long-range goal posts in view but concentrate on short-term manipulative experiments that are place and time specific.

This will not be easy. Birds are probably the best-studied group of animals on Earth, and we now have many species that are changing in abundance dramatically over large spatial scales (e.g. http://www.stateofcanadasbirds.org/ ). I am sobered by asking avian ecologists why a particular species is declining or dramatically increasing. I never get a good answer, typically only a generally plausible idea, a hand-waving explanation based on correlations that are not measured or well understood. Species recovery plans are often based on hunches rather than good data, with few of the key experiments of the type requested by Turkington and Harrower (2016). At the moment the world is changing rather faster than our understanding of the ecological interactions that tie species together in communities and ecosystems. We are walking when we need to be running, and even the Red Queen is not keeping up.

Bruelheide, H. et al. 2014. Designing forest biodiversity experiments: general considerations illustrated by a new large experiment in subtropical China. Methods in Ecology and Evolution, 5, 74-89. doi: 10.1111/2041-210X.12126

Turkington, R. & Harrower, W.L. 2016. An experimental approach to addressing ecological questions related to the conservation of plant biodiversity in China. Plant Diversity, 38, 1-10. Available at: http://journal.kib.ac.cn/EN/volumn/current.shtml