Category Archives: Experimental Design in Ecology

On Research Questions in Ecology

I have done considerable research in arctic Canada on questions of population and community ecology, and perhaps because of this I get emails about new proposals. This one just arrived from a NASA program called ABoVE that is now starting up.

“Climate change in the Arctic and Boreal region is unfolding faster than anywhere else on Earth, resulting in reduced Arctic sea ice, thawing of permafrost soils, decomposition of long-frozen organic matter, widespread changes to lakes, rivers, coastlines, and alterations of ecosystem structure and function. NASA’s Terrestrial Ecology Program is in the process of planning a major field campaign, the Arctic-Boreal Vulnerability Experiment (ABoVE), which will take place in Alaska and western Canada during the next 5 to 8 years.”

“The focus of this solicitation is the initial research to begin the Arctic-Boreal Vulnerability Experiment (ABoVE) field campaign — a large-scale study of ecosystem responses to environmental change in western North America’s Arctic and boreal region and the implications for social-ecological systems. The Overarching Science Question for ABoVE is: ‘How vulnerable or resilient are ecosystems and society to environmental change in the Arctic and boreal region of western North America?’”

I begin by noting that Peters (1991) wrote at length about the problems with these kinds of ‘how’ questions. First, note that this is not a scientific question: there is no conceivable way to answer it. It is a set of words that are meaningless to an ecologist interested in testing alternative hypotheses.

One might object that this is not a research question but a broad-brush agenda for more detailed proposals that will be phrased in such a way as to become scientific questions. Yet it boggles the mind to ask how vulnerable ecosystems are to anything unless one is very specific. One has to define an ecosystem (difficult if it is an open system), then define what ‘vulnerable’ means operationally, and then define what types of environmental change should be addressed – temperature, rainfall, pollution, CO2. And all of that over the broad expanse of arctic and boreal western North America, a sampling problem on a gigantic scale. Yet an administrator or politician could reasonably ask at the end of this program, ‘Well, what is the answer to this question?’ The answer might be ‘quite vulnerable’, and then we could go on endlessly with meaningless questions and answers that might pass for science on Fox News but not, I would hope, at the ESA. We can in fact measure how primary production changes over time and how much CO2 is sequestered in or released from the soils of the arctic and boreal zone, but how do we translate this into resilience, another empirically undefined ecological concept?

We could attack the question retrospectively by asking, for example: how resilient have arctic ecosystems been to the environmental changes of the past 30 years? We can document that shrubs have increased in abundance and biomass in some areas of the arctic and boreal zone (Myers-Smith et al. 2011), but what does that mean for the ecosystem, or for society in particular? We could also note that there are almost no data on these questions because funding for northern science has been pitiful. And that raises a further issue: if the changes we are asking about occur on a time scale of 30 or 50 years, how will we ever keep monitoring them when research funding is doled out in 3- and 5-year blocks?

The problem of tying together ecosystems and society is that they operate on different time scales of change. Ecosystem changes in terrestrial environments of the North are slow, societal changes are fast and driven by far more obvious pressures than ecosystem changes. The interaction of slow and fast variables is hard enough to decipher scientifically without having many external inputs.

So perhaps in the end this Arctic-Boreal Vulnerability Experiment (another misuse of the word ‘experiment’) will simply describe a long-term monitoring program and provide the funding for much clever ecological research, asking specific questions about exactly which parts of which ecosystems are changing and what the mechanisms of change involve. Every food web in the North is a complex network of direct and indirect interactions, and I do not know anyone whose understanding is reliable enough to predict how vulnerable any single element of a food web is to climate change. Like medieval scholars we talk much about changes of state, regime shifts, or tipping points, with a model of how the world should work but little long-term data with which to even begin to answer these kinds of political questions.

My hope is that this and other programs will generate some funding that will allow ecologists to do some good science. We may be fiddling while Rome is burning, but at any rate we could perhaps understand why it is burning. That also raises the issue of whether or not understanding is a stimulus for action on items that humans can control.

Myers-Smith, I.H., et al. (2011) Expansion of canopy-forming willows over the 20th century on Herschel Island, Yukon Territory, Canada. Ambio, 40, 610-623.

Peters, R.H. (1991) A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp.

On Indices of Population Abundance

I am often surprised at ecological meetings by how many ecological studies rely on indices rather than direct measures. The most obvious cases involve population abundance. Two common criteria for declaring a species as endangered are that its population has declined more than 70% in the last ten years (or three generations) or that its population size is less than 2500 mature individuals. The criteria are many and every attempt is made to make them quantitative. But too often the methods used to estimate changes in population abundance are based on an index of population size, and all too rarely is the index calibrated against known abundances. If an index increases by 2-fold, e.g. from 20 to 40 counts, it is not at all clear that this means the population size has increased 2-fold. I think many ecologists begin their career thinking that indices are useful and reliable and end their career wondering if they are providing us with a correct picture of population changes.
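The two abundance criteria just mentioned are easy to state in code. Below is a minimal sketch; the function name and the simplified pass/fail logic are mine, and the real IUCN rules involve several further categories and qualifiers:

```python
def is_endangered(n_then, n_now, mature_now,
                  decline_threshold=0.70, size_threshold=2500):
    """Simplified sketch of the two abundance criteria in the text:
    a decline of more than 70% over ten years (or three generations),
    or fewer than 2500 mature individuals. Not the full IUCN rules."""
    decline = (n_then - n_now) / n_then
    return decline > decline_threshold or mature_now < size_threshold
```

A population that fell from 10,000 to 2,400 (a 76% decline) meets both criteria; one that fell from 10,000 to 9,000 meets neither. The catch, as the rest of this post argues, is that if the abundance figures come from an uncalibrated index, the decline term may be badly wrong.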

The subject of indices has been discussed many times in ecology, particularly among applied ecologists. Anderson (2001) challenged wildlife ecologists to remember that indices include an unmeasured term, detectability. He wrote (2001, p. 1295):

“While common sense might suggest that one should estimate parameters of interest (e.g., population density or abundance), many investigators have settled for only a crude index value (e.g., “relative abundance”), usually a raw count. Conceptually, such an index value (c) is the product of the parameter of interest (N) and a detection or encounter probability (p): c = pN.”

He noted that many indices used by ecologists make a large assumption that the probability of encounter is a constant over time and space and individual observers. Much of the discussion of detectability flowed from these early papers (Williams, Nichols & Conroy 2002; Southwell, Paxton & Borchers 2008). There is an interesting exchange over Anderson’s (2001) paper by Engeman (2003) followed by a retort by Anderson (2003) that ended with this blast at small mammal ecologists:
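Anderson’s point – that a raw count c = pN confounds abundance with detectability – is easy to demonstrate with a simulation. The numbers below are invented for illustration: a population held constant at 500 animals appears to ‘double’ each survey simply because the detection probability doubles.

```python
import random

def simulate_counts(true_n, detect_p, seed=42):
    """Raw index counts c ~ Binomial(N, p): each of N animals is
    detected independently with probability p."""
    rng = random.Random(seed)
    return [sum(1 for _ in range(n) if rng.random() < p)
            for n, p in zip(true_n, detect_p)]

true_n = [500, 500, 500]       # the population never changes
detect_p = [0.10, 0.20, 0.40]  # but detectability doubles each survey
counts = simulate_counts(true_n, detect_p)
```

The counts come out near 50, 100, and 200: an apparently booming population that is in fact perfectly stable. Without an independent estimate of p, the trend in the index is uninterpretable.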

“Engeman (2003) notes that McKelvey and Pearson (2001) found that 98% of the small-mammal studies reviewed resulted in too little data for valid mark-recapture estimation. This finding, to me, reflects a substantial failure of survey design if these studies were conducted to estimate population size. … O’Connor (2000) should not wonder “why ecology lags behind biology” when investigators of small-mammal communities commonly (i.e., over 700 cases) achieve sample sizes <10. These are empirical methods; they cannot be expected to perform well without data.” (page 290)

Take that you small mammal trappers!

The warnings are clear about index data. In some cases they may be useful but they should never be used as population abundance estimates without careful validation. Even by small mammal trappers like me.

Anderson, D.R. (2001) The need to get the basics right in wildlife field studies. Wildlife Society Bulletin, 29, 1294-1297.

Anderson, D.R. (2003) Index values rarely constitute reliable information. Wildlife Society Bulletin, 31, 288-291.

Engeman, R.M. (2003) More on the need to get the basics right: population indices. Wildlife Society Bulletin, 31, 286-287.

McKelvey, K.S. & Pearson, D.E. (2001) Population estimation with sparse data: the role of estimators versus indices revisited. Canadian Journal of Zoology, 79, 1754-1765.

O’Connor, R.J. (2000) Why ecology lags behind biology. The Scientist, 14, 35.

Southwell, C., Paxton, C.G.M. & Borchers, D.L. (2008) Detectability of penguins in aerial surveys over the pack-ice off Antarctica. Wildlife Research, 35, 349-357.

Williams, B.K., Nichols, J.D. & Conroy, M.J. (2002) Analysis and Management of Animal Populations. Academic Press, New York.

Back to p-Values

Alas, ecology has slipped lower on the totem pole of serious sciences thanks to an article that has captured the attention of the media:

Low-Décarie, E., Chivers, C., and Granados, M. 2014. Rising complexity and falling explanatory power in ecology. Frontiers in Ecology and the Environment 12(7): 412-418. doi: 10.1890/130230.

There is much that is positive in this paper, so you should read it, if only to decide whether or not to use it in a graduate seminar in statistics or in ecology. Much of what it concludes is certainly true: there are more p-values in papers now than there were some years ago. The question then comes down to what these kinds of statistics mean, how they would justify the conclusion, captured by the media, that explanatory power in ecology is declining over time, and what to do about falling R2 values. Since, as far as I can see, most statisticians today seem to believe that p-values are meaningless (e.g. Ioannidis 2005), one wonders what the value of showing this trend is. A second item most statisticians agree on is that R2 values are a poor measure of anything other than the items in a particular data set. Any ecological paper that reports data analyses summarizes many tests providing p-values and R2 values, of which only some are reported. It would be interesting to make a comparison with a recognizably mature science (like physics or genetics) by asking whether past revolutions in understanding and predictive power in those sciences corresponded with increasing numbers of p-values or R2 values.
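The point that R2 is tied to a particular data set can be made concrete. In the sketch below (all numbers invented), the in-sample R2 of an ordinary least-squares fit can only rise as predictors are added, even when the additions are pure noise – which is why a high R2 by itself says nothing about explanatory power:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 30
x = rng.normal(size=n)
y = 2.0 * x + rng.normal(scale=2.0, size=n)   # one real predictor plus noise

def r_squared(X, y):
    """In-sample R^2 of an ordinary least-squares fit with intercept."""
    X = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    resid = y - X @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) @ (y - y.mean()))

r2_real = r_squared(x.reshape(-1, 1), y)
# Ten extra columns of pure noise: the in-sample R^2 cannot decrease.
junk = rng.normal(size=(n, 10))
r2_junk = r_squared(np.column_stack([x, junk]), y)
```

Here r2_junk is mathematically guaranteed to be at least r2_real, yet the junk-laden model will predict new data worse. Adjusted R2 or out-of-sample validation are the standard correctives.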

To ask these questions is to ask: what is the metric of scientific progress? At present we confuse progress with indicators that may have little to do with scientific advancement. As journal editors we race to increase our journal’s impact factor, which is interpreted as a measure of importance. For appointments to university positions we ask how many citations a person has and how many papers they have produced. We confuse scientific value with numbers that, ironically, might have a very low R2 as predictors of progress in a science. These numbers make sense as metrics to tell publishing houses how influential their journals are, or to tell Department Heads how fantastic their job choices are, but we fool ourselves if we accept them as indicators of value to science.

If you wish to judge scientific progress you might wish to look at books that have gathered together the most important papers of the time, and examine a sequence of these from the 1950s to the present time. What is striking is that papers that seemed critically important in the 1960s or 1970s are now thought to be concerned with relatively uninteresting side issues, and conversely papers that were ignored earlier are now thought to be critical to understanding. A list of these changes might be a useful accessory to anyone asking about how to judge importance or progress in a science.

A final comment would be to look at the reasons why a relatively mature science like geology has completely failed to predict earthquakes in advance, or even to specify the locations of some earthquakes (Stein et al. 2012; Uyeda 2013). Progress in understanding does not of necessity dictate progress in prediction. And we ought to be wary of confusing progress with p- and R2 values.

Ioannidis, J.P.A. 2005. Why most published research findings are false. PLoS Medicine 2(8): e124.

Stein, S., Geller, R.J., and Liu, M. 2012. Why earthquake hazard maps often fail and what to do about it. Tectonophysics 562-563: 1-24. doi: 10.1016/j.tecto.2012.06.047.

Uyeda, S. 2013. On earthquake prediction in Japan. Proceedings of the Japan Academy, Series B 89(9): 391-400. doi: 10.2183/pjab.89.391.

Models need testable predictions to be useful

It has happened again. I have just been to a seminar on genetic models – something about adaptation of species at the edges of their ranges. Yes, this is an interesting topic, relevant to interpreting species’ responses to changing environments. But it ended with the speaker saying something like, “It would be a lot of work to test this in the field.” How much more useful my hour would have been if the talk had ended with “Although it would be difficult to do, this model makes the following predictions that could be tested in the field,” or “The following results would reject the hypothesis on which this model is based.”

Now it is likely that some found these theoretical machinations interesting and satisfying in some mathematical way, but I feel it is irresponsible not even to consider how a model could be tested, and the possibility (a likely possibility at that) that it does not apply to nature and tells us nothing helpful about, for example, what is going to happen to willow or birch shrubs at the edge of their ranges in the warming arctic.

Recommendation – no paper on models should be published or presented unless it makes specific, testable predictions and states how those predictions could be tested.

Experimental Model Systems in Ecology

Ecology progresses slowly when we have to study natural populations or communities. It is expensive to manipulate large units of habitat, and two approaches suggest themselves to alleviate this problem. The first is to study small areas that can be analysed and manipulated by one or two persons. This can be a useful approach, depending on your question and hypotheses, and I do not discuss it here. The second approach is through experimental model systems. Typically this means taking the question or problem into a semi-laboratory system. For aquatic studies it may mean putting large cylinders in a lake (Carpenter 1996). For rodent studies it may mean putting populations into small fenced enclosures. For the sake of clarity I will discuss this latter example, with which I am familiar.

The key question for all experimental model systems in ecology is to know at what spatial and temporal scale the system works. To gain precision we typically want to conduct our studies within an enclosure of some small size. That is, we wish to study an open system with more precision by converting it to a closed system of some much smaller size. But what size allows the system to operate as an open natural population, in this example of rodents? In a sense we wish to know the shape of this generalized curve:

[Figure: a generalized curve relating enclosure size to how closely the closed experimental system reproduces the outcome observed in an open, natural population]

Assume there is some natural outcome known for the particular study. In the case of small rodents this might be that the population fluctuates in periodic ‘cycles’. The question then is what size of enclosure is needed to observe this same population trend. One simple way of looking at this is to ask what size of island allows a closed population to fluctuate in ‘cycles’. For this particular problem we know that you cannot observe ‘cycles’ in small laboratory rooms or even in 1-ha field enclosures.

Many other examples of this type of question can be given in ecology. For example, we may know that infanticide in a particular species is rare in natural populations. But if we raise the same species in small cages in the laboratory, we may observe infanticide very commonly. We would conclude that this is not the natural state of the system, and thus that we could not draw conclusions about the frequency of infanticide by studying it in small cages.

The critical judgement is whether any experimental model system we design will mimic the natural processes that occur in open, real-world populations or communities. All too often in ecological studies we assume that the size of the enclosure or study area we are using is “natural” and that the conclusions will represent what happens in natural populations or communities. In an ideal world we would examine a series of enclosure sizes to find the one that best mimics natural outcomes. But this cannot always be done, for reasons of time and money. In some cases we have no idea what the natural situation is, and then it is most difficult to know whether our model-system results bear any relationship to reality.

This whole issue is another way of looking at the problem of habitat fragmentation – how small a piece of habitat can we get by with to conserve species X or community Y? These types of conservation questions always involve a temporal as well as a spatial dimension, given the problem of extinction debts (Krauss et al. 2010). In the extreme case we can argue that we can conserve at least some species in zoos, but this is a way of avoiding the main goal of conserving natural environments and processes.

The bottom line is to ask yourself, as you set up a study using an experimental model system, whether the process you are investigating can be observed at the spatial and temporal scale you have available. Alternatively, it may be worth trying to construct the curve shown above for the system of interest. This question is important because some previous studies in any ecological system may have reached invalid conclusions because the model system was at a faulty spatial scale.

Carpenter, S. R. 1996. Microcosm experiments have limited relevance for community and ecosystem ecology. Ecology 77:677-680.

Krauss, J., R. Bommarco, M. Guardiola, R. K. Heikkinen, A. Helm, M. Kuussaari, R. Lindborg, E. Öckinger, M. Pärtel, J. Pino, J. Pöyry, K. M. Raatikainen, A. Sang, C. Stefanescu, T. Teder, M. Zobel, and I. Steffan-Dewenter. 2010. Habitat fragmentation causes immediate and time-delayed biodiversity loss at different trophic levels. Ecology Letters 13:597-605.