Monthly Archives: December 2014

Is Community Ecology Impossible?

John Lawton writing in 1999 about general laws in ecological studies stated:

“…. ecological patterns and the laws, rules and mechanisms that underpin them are contingent on the organisms involved, and their environment…. The contingency [due to different species’ attributes] becomes overwhelmingly complicated at intermediate scales, characteristic of community ecology, where there are a large number of case histories, and very little other than weak, fuzzy generalizations….. To discover general patterns, laws and rules in nature, ecology may need to pay less attention to the ‘middle ground’ of community ecology, relying less on reductionism and experimental manipulation, but increasing research efforts into macroecology.” (Lawton 1999, page 177)

There are two generalizations here to consider: first, that macroecology is the way forward, and second, that community ecology is a difficult area that can lead only to fuzzy generalizations. I will leave the macroecology issue for later and concentrate on the idea that community ecology can never develop general laws.

The last 15 years of ecological research have partly justified Lawton’s skepticism, because progress in community ecology has largely rested on local studies and local generalizations. One illustration of the difficulty of devising generalities is the controversy over the intermediate disturbance hypothesis (Schwilk, Keeley & Bond 1997; Wilkinson 1999; Fox 2013a; Fox 2013b; Kershaw & Mallik 2013; Sheil & Burslem 2013). In their recent review Kershaw and Mallik (2013) concluded that confirmation of the intermediate disturbance hypothesis across all studies was around 20%; for terrestrial ecosystems only, support was about 50%. What should we do with hypotheses that fail as often as they succeed? That is perhaps a key question in community ecology. Kershaw and Mallik (2013) respond by restricting the hypothesis, arguing that it applies only to grassland communities of moderate productivity. The details here are not important; the strategy of limiting a supposedly general hypothesis to a small set of communities is what matters, and it brings us back to the issue of generality. It is certainly progress to set limits on particular hypotheses, but it does leave land managers hanging: Kershaw and Mallik (2013) state that the rationale for current forest harvesting models in the boreal forest relies on the intermediate disturbance hypothesis being correct for this ecosystem. Does this matter or not? I am not sure.

Prins and Gordon (2014) evaluated a whole series of hypotheses that represented the conventional wisdom in community ecology and concluded that much of what is accepted as well supported community ecological theory has only limited support. If this is accepted (and Simberloff (2014) does not accept it) we are left in an era of chaos in which practical ecosystem management has few clear models for how to proceed unless studies are available at the local level.

Should we conclude that community ecology is impossible? Certainly not, but it may be much more difficult than our simple models suggest, and the results of studies may be more local in application than our current general overarching theories like the intermediate disturbance hypothesis.

The devil is in the details again, and the most successful community ecological studies have essentially been population ecology studies writ large for the major species in the community. Evolution rears its ugly head to confound generalization. There is not, for example, a generalized large mammal predator in every community, and the species of predators that have evolved on different continents do not all follow the same ecological rules. Ecology may be more local than we would like to believe. Perhaps Lawton (1999) was right about community ecology.

Fox, J.W. (2013a) The intermediate disturbance hypothesis is broadly defined, substantive issues are key: a reply to Sheil and Burslem. Trends in Ecology & Evolution, 28, 572-573.

Fox, J.W. (2013b) The intermediate disturbance hypothesis should be abandoned. Trends in Ecology & Evolution, 28, 86-92.

Kershaw, H.M. & Mallik, A.U. (2013) Predicting plant diversity response to disturbance: Applicability of the Intermediate Disturbance Hypothesis and Mass Ratio Hypothesis. Critical Reviews in Plant Sciences, 32, 383-395.

Lawton, J.H. (1999) Are there general laws in ecology? Oikos, 84, 177-192.

Prins, H.H.T. & Gordon, I.J. (eds.) (2014) Invasion Biology and Ecological Theory: Insights from a Continent in Transformation. Cambridge University Press, Cambridge. 540 pp.

Schwilk, D.W., Keeley, J.E. & Bond, W.J. (1997) The intermediate disturbance hypothesis does not explain fire and diversity pattern in fynbos. Plant Ecology, 132, 77-84.

Sheil, D. & Burslem, D.F.R.P. (2013) Defining and defending Connell’s intermediate disturbance hypothesis: a response to Fox. Trends in Ecology & Evolution, 28, 571-572.

Simberloff, D. (2014) Book Review: Herbert H. T. Prins and Iain J. Gordon (eds.): Invasion biology and ecological theory. Insights from a continent in transformation. Biological Invasions, 16, 2757-2759.

Wilkinson, D.M. (1999) The disturbing history of intermediate disturbance. Oikos, 84, 145-147.

On Adaptive Management

I was fortunate to be on the sidelines at UBC in the 1970s when Carl Walters, Ray Hilborn, and Buzz Holling developed and refined the ideas of adaptive management. Working mostly in a fisheries context in which management is both possible and essential, they developed a new paradigm of how to proceed in the management of natural resources to reduce or avoid the mistakes of the past (Walters & Hilborn 1978). Somehow it was one of those times in science where everything worked, because these three ecologists were a near-perfect fit for one another, full of new ideas and inspired guesses about how to put their ideas into action. Many other scientists joined in, and Holling (1978) drew this collaboration together in a book that can still be downloaded from the website of the International Institute for Applied Systems Analysis (IIASA) in Vienna.

Adaptive management became the new paradigm, now taken up with gusto by many natural resources and conservation agencies (Westgate, Likens & Lindenmayer 2013). It can be carried out in two different ways. Passive adaptive management involves having a single model of the system being managed and manipulating the system in a series of ways that improve the model’s fit over time. Active adaptive management takes several alternative models and uses different management manipulations to decide which model best describes how the system operates. Both approaches aim to reduce uncertainty about how the system works and so define the limits of management options.
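The active approach can be made concrete with a toy sketch (my own illustration, not taken from Walters and Hilborn; all model names and numbers are invented): assign each alternative model a degree of belief, carry out a manipulation, observe the outcome, and update the beliefs with Bayes’ rule.

```python
from math import exp

def normal_like(x, mean, sd):
    """Unnormalised Gaussian likelihood of observing x under a model."""
    return exp(-0.5 * ((x - mean) / sd) ** 2)

# Two rival (hypothetical) models of how a population responds to a 20% harvest,
# each given an equal prior degree of belief.
models = {
    "compensatory": {"belief": 0.5, "predicts": 95.0},  # losses mostly offset
    "additive":     {"belief": 0.5, "predicts": 80.0},  # losses add directly
}

observed = 84.0   # hypothetical post-harvest survey result
survey_sd = 8.0   # assumed survey error

# Bayes' rule: posterior belief is proportional to prior belief x likelihood.
total = sum(m["belief"] * normal_like(observed, m["predicts"], survey_sd)
            for m in models.values())
for name, m in models.items():
    m["belief"] *= normal_like(observed, m["predicts"], survey_sd) / total
    print(name, round(m["belief"], 2))

# The observation sits closer to the additive prediction, so belief shifts
# toward that model; further manipulations would sharpen the contrast.
```

In the full active scheme each manipulation would also be chosen to maximize what the outcome teaches us about which model is right, not only for its immediate management payoff.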

The message was (as they argued) nothing more than common sense: learn by doing. But common sense is uncommonly used, as we see too often even in the 21st century. Adaptive management became very popular in the 1990s, but while many took up its banner, relatively few cases have been carried through to completion (Walters 2007; Westgate, Likens & Lindenmayer 2013). There are many reasons for this (discussed well in those two papers), not the least of which is the communication gap between research scientists and resource managers. Research scientists typically wish to test an ecological hypothesis with a management manipulation, but the resource manager may not be able to use that particular manipulation in practice because it costs too much. To be useful in the real world any management experiment needs careful, long-term monitoring to map its outcome, and management agencies rarely have the opportunity to carry out extensive monitoring. The underlying cause is thus mainly financial: resource agencies rarely have an adequate budget to cover the important wildlife and fisheries issues they are supposed to manage.

If anything, reading this ‘old’ literature should remind ecologists that the problems discussed are inherent in management and will not go away as we move into the era of climate change. Let me stop with a few of the guideposts from Holling’s book:

Treat assessment as an ongoing process…
Remember that uncertainties are inherent…
Involve decision makers early in the analysis…
Establish a degree of belief for each of your alternative models…
Avoid facile and narcotic compression of indicators such as cost/benefit ratios that are generally inappropriate for environmental problems….

And probably remind yourself that there can be wisdom in the elders….

The take-home message for me in re-reading these older papers on adaptive management is that it is similar to the problem we have with models in ecology. We can produce simple models or in this case solutions to management problems on paper, but getting them to work properly in the real world where social viewpoints, political power, and scientific information collide is extremely difficult. This is no reason to stop doing the best science and to try to weld it into management agencies. But it is easier said than done.

Holling, C.S. (1978) Adaptive Environmental Assessment and Management. John Wiley and Sons, Chichester, UK.

Walters, C.J. (2007) Is adaptive management helping to solve fisheries problems? Ambio, 36, 304-307.

Walters, C.J. & Hilborn, R. (1978) Ecological optimization and adaptive management. Annual Review of Ecology and Systematics, 9, 157-188.

Westgate, M.J., Likens, G.E. & Lindenmayer, D.B. (2013) Adaptive management of biological systems: A review. Biological Conservation, 158, 128-139.

On Repeatability in Ecology

One of the elementary lessons of statistics is that every measurement must be repeatable so that differences or changes in some ecological variable can be interpreted with respect to some ecological or environmental mechanism. So if we count 40 elephants in one year and count 80 in the following year, we know that population abundance has changed and we do not have to consider the possibility that the repeatability of our counting method is so poor that 40 and 80 could refer to the same population size. Both precision and bias come into the discussion at this point. Much of the elaboration of ecological methods involves the attempt to improve the precision of methods such as those for estimating abundance or species richness. There is less discussion of the problem of bias.

The repeatability that is most crucial in forging a solid science is that associated with experiments. We should not simply do an important experiment in a single place and then assume the results apply world-wide. Of course we do this, but we should always remember that it is a gigantic leap of faith. Ecologists are often unwilling to repeat critical experiments, in contrast to scientists in chemistry or molecular biology. Part of this reluctance is understandable because the costs associated with many important field experiments are large, and funding committees must then judge whether to repeat the old or fund the new. But if we do not repeat the old, we can never discover the limits to our hypotheses or generalizations. Given a limited amount of money, experimental designs often limit the potential generality of the conclusions. Should you have 2 or 4 or 6 replicates? Should you have more replicates and fewer treatment sites or levels of manipulation? When we can, we try one way and then another to see if we get similar results.
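The replicate question can at least be explored on paper before committing field money. A minimal simulation (a sketch with invented effect sizes, variances, and a deliberately rough critical value; not a substitute for a proper power analysis) asks how often a simple two-sample comparison detects a fixed treatment effect with 2 versus 6 replicate plots:

```python
import random
from math import sqrt
from statistics import mean, stdev

random.seed(42)

def detects_effect(n_reps, effect=10.0, sd=8.0, crit=2.0):
    """One simulated experiment: n_reps control and n_reps treated plots.

    Returns True when a t-like statistic exceeds a rough cutoff; a real
    analysis would use the proper t critical value for the degrees of freedom.
    """
    control = [random.gauss(50.0, sd) for _ in range(n_reps)]
    treated = [random.gauss(50.0 + effect, sd) for _ in range(n_reps)]
    se = sqrt(stdev(control) ** 2 / n_reps + stdev(treated) ** 2 / n_reps)
    return abs(mean(treated) - mean(control)) / se > crit

# Approximate power: the fraction of 2000 simulated experiments that
# detect the treatment effect at each level of replication.
for n_reps in (2, 6):
    power = mean(detects_effect(n_reps) for _ in range(2000))
    print(n_reps, "replicates: effect detected in", round(100 * power), "% of runs")
```

Even this crude sketch makes the trade-off visible: with two replicates per treatment, a real effect is missed most of the time, which is exactly the funding-committee dilemma described above.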

A looming issue now is climate change, which means that the ecosystem studied in 1980 is possibly rather different from the one you now study in 2014, or that the place someone manipulated in 1970 is not the same community you manipulated this year. The worst-case scenario would be finding that you have to repeat the same experiment every ten years to check whether the whole response system has changed. Impossible with current funding levels. How can we develop a robust set of generalizations or ‘theories’ in ecology if the world is changing so fast that the food webs we so carefully described have now broken down? I am not sure what the answers are to these difficult questions.

And then you pile evolution into this mix and wonder if organisms can change like Donelson et al.’s (2012) tropical reef fish, so that climate changes might be less significant than we currently think, at least for some species. The frustration that ecologists now face over these issues with respect to ecosystem management boils over in many verbal discussions like those on “novel ecosystems” (Hobbs et al. 2014, Aronson et al. 2014) that can be viewed as critical decisions about how to think about environmental change or a discussion about angels on pinheads.

Underlying all of this is the global issue of repeatability, and whether our current perceptions of how to manage ecosystems are sufficiently reliable to sidestep the adaptive management scenarios that seem so useful in theory (Conroy et al. 2011) but are at present rare in practice (Keith et al. 2011). The need for action in conservation biology seems to trump the need for repeatability to test the generalizations on which we base our management recommendations. This is apparent in all our sciences that affect humans directly. In agriculture we release new crop varieties with minimal long-term study of their effects on the ecosystem, or we introduce new methods such as no-till agriculture without adequate study of their impacts on soil structure and pest species. This kind of hubris does guarantee long-term employment in mitigating adverse consequences, but it is perhaps not an optimal way to proceed in environmental management. We cannot follow the Hippocratic Oath in applied ecology because all our management actions create winners and losers, and ‘harm’ then becomes an opinion about how we designate ‘winners’ and ‘losers’. Using social science is one way out of this dilemma, but history gives sparse support for the idea that ‘expert’ opinion produces good environmental action.

Aronson, J., Murcia, C., Kattan, G.H., Moreno-Mateos, D., Dixon, K. & Simberloff, D. (2014) The road to confusion is paved with novel ecosystem labels: a reply to Hobbs et al. Trends in Ecology & Evolution, 29, 646-647.

Conroy, M.J., Runge, M.C., Nichols, J.D., Stodola, K.W. & Cooper, R.J. (2011) Conservation in the face of climate change: The roles of alternative models, monitoring, and adaptation in confronting and reducing uncertainty. Biological Conservation, 144, 1204-1213.

Donelson, J.M., Munday, P.L., McCormick, M.I. & Pitcher, C.R. (2012) Rapid transgenerational acclimation of a tropical reef fish to climate change. Nature Climate Change, 2, 30-32.

Hobbs, R.J., Higgs, E.S. & Harris, J.A. (2014) Novel ecosystems: concept or inconvenient reality? A response to Murcia et al. Trends in Ecology & Evolution, 29, 645-646.

Keith, D.A., Martin, T.G., McDonald-Madden, E. & Walters, C. (2011) Uncertainty and adaptive management for biodiversity conservation. Biological Conservation, 144, 1175-1178.

On Research Questions in Ecology

I have done considerable research in arctic Canada on questions of population and community ecology, and perhaps because of this I get emails about new proposals. This one just arrived from a NASA program called ABoVE that is just now starting up.

“Climate change in the Arctic and Boreal region is unfolding faster than anywhere else on Earth, resulting in reduced Arctic sea ice, thawing of permafrost soils, decomposition of long-frozen organic matter, widespread changes to lakes, rivers, coastlines, and alterations of ecosystem structure and function. NASA’s Terrestrial Ecology Program is in the process of planning a major field campaign, the Arctic-Boreal Vulnerability Experiment (ABoVE), which will take place in Alaska and western Canada during the next 5 to 8 years.”

“The focus of this solicitation is the initial research to begin the Arctic-Boreal Vulnerability Experiment (ABoVE) field campaign — a large-scale study of ecosystem responses to environmental change in western North America’s Arctic and boreal region and the implications for social-ecological systems. The Overarching Science Question for ABoVE is: ‘How vulnerable or resilient are ecosystems and society to environmental change in the Arctic and boreal region of western North America?’”

I begin by noting that Peters (1991) wrote at length about the problems with these kinds of ‘how’ questions. First of all, note that this is not a scientific question: there is no conceivable way to answer it. To an ecologist interested in testing alternative hypotheses it is a string of undefined words.

One might object that this is not a research question but a broad-brush agenda for more detailed proposals that will be phrased in such a way as to become scientific questions. Yet it boggles the mind to ask how vulnerable ecosystems are to anything unless one is very specific. One has to define an ecosystem (difficult if it is an open system), then define what vulnerable means operationally, and then define what types of environmental changes should be addressed: temperature, rainfall, pollution, CO2. And all of that over the broad expanse of arctic and boreal western North America, a sampling problem on a gigantic scale. Yet an administrator or politician could reasonably ask at the end of this program, ‘Well, what is the answer to this question?’ That might be ‘quite vulnerable’, and then we could go on endlessly with meaningless questions and answers that might pass for science on Fox News but not, I would hope, at the ESA. We can in fact measure how primary production changes over time, and how much CO2 is sequestered or released from the soils of the arctic and boreal zone, but how do we translate this into resilience, another empirically undefined ecological concept?

We could attack the question retrospectively by asking, for example: how resilient have arctic ecosystems been to the environmental changes of the past 30 years? We can document that shrubs have increased in abundance and biomass in some areas of the arctic and boreal zone (Myers-Smith et al. 2011), but what does that mean for the ecosystem, or for society in particular? We could note that there are almost no data on these questions because funding for northern science has been pitiful. And that raises a further issue: if the changes we are asking about occur on a time scale of 30 or 50 years, how will we ever keep monitoring them when research funds are doled out in 3- and 5-year blocks?

The problem of tying together ecosystems and society is that they operate on different time scales of change. Ecosystem changes in terrestrial environments of the North are slow, societal changes are fast and driven by far more obvious pressures than ecosystem changes. The interaction of slow and fast variables is hard enough to decipher scientifically without having many external inputs.

So perhaps in the end this Arctic-Boreal Vulnerability Experiment (another misuse of the word ‘experiment’) will just describe a long-term monitoring program and provide the funding for much clever ecological research, asking specific questions about exactly what parts of what ecosystems are changing and what the mechanisms of change involve. Every food web in the North is a complex network of direct and indirect interactions, and I do not know anyone who has a reliable enough understanding to predict how vulnerable any single element of the food web is to climate change. Like medieval scholars we talk much about changes of state or regime shifts, or tipping points with a model of how the world should work, but with little long term data to even begin to answer these kinds of political questions.

My hope is that this and other programs will generate some funding that will allow ecologists to do some good science. We may be fiddling while Rome is burning, but at any rate we could perhaps understand why it is burning. That also raises the issue of whether or not understanding is a stimulus for action on items that humans can control.

Myers-Smith, I.H., et al. (2011) Expansion of canopy-forming willows over the 20th century on Herschel Island, Yukon Territory, Canada. Ambio, 40, 610-623.

Peters, R.H. (1991) A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp.

On Indices of Population Abundance

I am often surprised at ecological meetings by how many studies rely on indices rather than direct measures. The most obvious cases involve population abundance. Two common criteria for declaring a species endangered are that its population has declined by more than 70% in the last ten years (or three generations) or that its population size is less than 2500 mature individuals. The criteria are many, and every attempt is made to make them quantitative. But too often the methods used to estimate changes in population abundance are based on an index of population size, and all too rarely is the index calibrated against known abundances. If an index increases 2-fold, e.g. from 20 to 40 counts, it is not at all clear that the population size has increased 2-fold. I think many ecologists begin their career thinking that indices are useful and reliable, and end their career wondering whether indices provide a correct picture of population changes.

The subject of indices has been discussed many times in ecology, particularly among applied ecologists. Anderson (2001) challenged wildlife ecologists to remember that indices include an unmeasured term, detectability. Anderson (2001, p. 1295) wrote:

“While common sense might suggest that one should estimate parameters of interest (e.g., population density or abundance), many investigators have settled for only a crude index value (e.g., “relative abundance”), usually a raw count. Conceptually, such an index value (c) is the product of the parameter of interest (N) and a detection or encounter probability (p): then c = pN.”

He noted that many indices used by ecologists make the large assumption that the probability of encounter is constant over time, space, and observers. Much of the later discussion of detectability flowed from these early papers (Williams, Nichols & Conroy 2002; Southwell, Paxton & Borchers 2008). There is an interesting exchange over Anderson’s (2001) paper by Engeman (2003), followed by a retort by Anderson (2003) that ended with this blast at small-mammal ecologists:

“Engeman (2003) notes that McKelvey and Pearson (2001) found that 98% of the small-mammal studies reviewed resulted in too little data for valid mark-recapture estimation. This finding, to me, reflects a substantial failure of survey design if these studies were conducted to estimate population size. … O’Connor (2000) should not wonder “why ecology lags behind biology” when investigators of small-mammal communities commonly (i.e., over 700 cases) achieve sample sizes <10. These are empirical methods; they cannot be expected to perform well without data.” (page 290)

Take that you small mammal trappers!

The warnings are clear about index data. In some cases they may be useful but they should never be used as population abundance estimates without careful validation. Even by small mammal trappers like me.
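Anderson’s warning is easy to demonstrate with a minimal simulation (my own sketch, with invented numbers): if the encounter probability p drifts between surveys, the raw count c = pN can double even though the true abundance N never changes.

```python
import random

random.seed(1)

def index_count(n_true, p_detect):
    """Raw index count c: each of the n_true animals is seen with
    probability p_detect, so on average c = p_detect * n_true."""
    return sum(1 for _ in range(n_true) if random.random() < p_detect)

N = 200  # true abundance, identical in both survey years

c_year1 = index_count(N, p_detect=0.20)  # e.g. poor sighting conditions
c_year2 = index_count(N, p_detect=0.40)  # e.g. good sighting conditions

# On average the index doubles (40 vs 80 expected) although N is constant.
print("year 1 count:", c_year1, " year 2 count:", c_year2)
```

Calibrating the index means estimating p directly, for example by mark-recapture, distance sampling, or double counts, so that a count c can be converted back to an abundance N.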

Anderson, D.R. (2001) The need to get the basics right in wildlife field studies. Wildlife Society Bulletin, 29, 1294-1297.

Anderson, D.R. (2003) Index values rarely constitute reliable information. Wildlife Society Bulletin, 31, 288-291.

Engeman, R.M. (2003) More on the need to get the basics right: population indices. Wildlife Society Bulletin, 31, 286-287.

McKelvey, K.S. & Pearson, D.E. (2001) Population estimation with sparse data: the role of estimators versus indices revisited. Canadian Journal of Zoology, 79, 1754-1765.

O’Connor, R.J. (2000) Why ecology lags behind biology. The Scientist, 14, 35.

Southwell, C., Paxton, C.G.M. & Borchers, D.L. (2008) Detectability of penguins in aerial surveys over the pack-ice off Antarctica. Wildlife Research, 35, 349-357.

Williams, B.K., Nichols, J.D. & Conroy, M.J. (2002) Analysis and Management of Animal Populations. Academic Press, New York.

On Political Ecology

When I give a general lecture now, I typically have to inform the audience that I am talking about scientific ecology, not political ecology. What is the difference? Scientific ecology is classical, boring science: stating hypotheses, doing experiments or making observations to gather the data, testing the idea, and accepting or rejecting it, as outlined clearly in many papers (Platt 1964; Wolff & Krebs 2008).

Scientific ecology is clearly out-of-date, and no longer ‘cool’ when compared to the new political ecology.

Political ecology is a curious mix of traditional ecology and advocacy for protecting biodiversity. It is aimed at convincing society in general and politicians in particular to protect the Earth’s biodiversity. This is a noble cause, and my complaint is only that when we use scientific ecology in pursuit of a political agenda we should remain scientifically rigorous. Yet much of biodiversity science is a mix of belief and evidence, with unsuitable evidence used in support of what is a noble belief. If we believed that the end justifies the means, we would be happy with this. But I am not.

One example will illustrate my frustration with political ecology. Dirzo et al. (2014), in a recent Science paper, give an illustration of the effects of removing large animals from an ecosystem. In their Figure 4 (page 404), a set of four graphs purports to show experimentally what happens when you remove large wildlife species in Kenya, based on the Kenya Long-term Exclosure Experiment (Young et al. 1997). But this experiment is hopelessly flawed for that purpose, being carried out on a set of plots of 4 ha, a postage stamp of habitat relative to large-mammal movements and ecosystem processes. Yet the fact that this particular experiment was not properly designed for the questions it is now being used to address is not a problem if this is political ecology rather than scientific ecology. The overall goal of the Dirzo et al. (2014) paper is admirable, but it is achieved by quoting a whole series of questionable extrapolations given in other papers. The counter-argument in conservation biology has always been that we do not have time to do proper research and must act now. The consequence is the elevation of expert opinion in conservation science to the realm of truth without going through the proper scientific process.

We are left with this prediction from Dirzo et al. (2014):

“Cumulatively, systematic defaunation clearly threatens to fundamentally alter basic ecological functions and is contributing to push us toward global-scale “tipping points” from which we may not be able to return. … If unchecked, Anthropocene defaunation will become not only a characteristic of the planet’s sixth mass extinction, but also a driver of fundamental global transformations in ecosystem functioning.”

I fear that statements like this are more akin to something like a religion of conservation fundamentalism, while we proclaim to be scientists.

Dirzo, R., Young, H.S., Galetti, M., Ceballos, G., Isaac, N.J.B. & Collen, B. (2014) Defaunation in the Anthropocene. Science, 345, 401-406.

Platt, J.R. (1964) Strong inference. Science, 146, 347-353.

Wolff, J.O. & Krebs, C.J. (2008) Hypothesis testing and the scientific method revisited. Acta Zoologica Sinica, 54, 383-386.

Young, T.P., Okello, B.D., Kinyua, D. & Palmer, T.M. (1997) KLEE: A long‐term multi‐species herbivore exclusion experiment in Laikipia, Kenya. African Journal of Range & Forage Science, 14, 94-102.