Category Archives: Experimental Design in Ecology

On Caribou and Hypothesis Testing

Mountain caribou populations in western Canada have been declining for the past 10-20 years, and concern has mounted to the point where the extinction of many populations could be imminent; the Canadian federal government is asking why this has occurred. This conservation issue has supported a host of field studies to determine what the threatening processes are and what we can do about them. A recent excellent summary of experimental studies in British Columbia (Serrouya et al. 2017) has stimulated me to examine this caribou crisis as an illustration of the art of hypothesis testing in field ecology. We teach all our students to specify hypotheses and alternative hypotheses as the first step in solving problems in population ecology, so here is a good example to start with.

From the abstract of this paper, here is a statement of the problem and the major hypothesis:

“The expansion of moose into southern British Columbia caused the decline and extirpation of woodland caribou due to their shared predators, a process commonly referred to as apparent competition. Using an adaptive management experiment, we tested the hypothesis that reducing moose to historic levels would reduce apparent competition and therefore recover caribou populations.”

So the first observation we might make is that much is left out of this approach to the problem. Populations can decline because of habitat loss, food shortage, excessive hunting, predation, parasitism, disease, severe weather, or inbreeding depression. In this case, much background research has narrowed the field to focus on predation as a major limitation, so we can begin our search by concentrating on the predation factor (reviewed in Boutin and Merrill 2016). In particular, Serrouya et al. (2017) focused their studies on the nexus of moose, wolves, and caribou, and on the supposition that wolves feed preferentially on moose and only secondarily on caribou, so that if moose numbers are lower, wolf numbers will be lower and incidental kills of caribou will be reduced. So they proposed two very specific hypotheses: that wolves are limited by moose abundance, and that caribou are limited by wolf predation. The experiment proposed and carried out was relatively simple in concept: kill moose by allowing more hunting in certain areas and measure the changes in wolf and caribou numbers.

The experimental area contained 3 small herds of caribou (50 to 150 animals) and the unmanipulated area contained 2 herds (20 and 120 animals) when the study began in 2003. The extended hunting worked well, and moose in the experimental area were reduced from about 1600 animals to about 500 over the period from 2003 to 2014. Wolf numbers in the experimental area declined by about half over the experimental period because of dispersal out of the area and some starvation within it. So the two necessary conditions of the experiment were satisfied: moose numbers declined by about two-thirds from the additional hunting, and wolf numbers declined by about half on the experimental area. But the caribou populations on the experimental area showed mixed results, with one population showing a slight increase in numbers and the other two a slight loss. On the unmanipulated area both caribou populations continued a slow decline. On the positive side, the survival rate of adult caribou was higher on the experimental area, suggesting that the treatment hypothesis was correct.

From the viewpoint of caribou conservation, the experiment failed to change the caribou trajectory from continuous slow decline to the rapid increase needed to recover these populations to their former abundance. At best it could be argued that this particular experiment slowed the rate of caribou decline. Why might this be? We can make a list of possibilities:

  1. Moose numbers on the experimental area were not reduced enough (to 300, say, rather than the 500 achieved). Lower moose numbers would have meant much lower wolf numbers.
  2. Small caribou populations are nearly impossible to recover because of chance events that affect small numbers. A few wolves or bears or cougars could be making all the difference to populations numbering 10-20 individuals.
  3. The experimental area and the unmanipulated area were not assigned treatments at random. This would mean to a pure statistician that you cannot make statistical comparisons between these two areas.
  4. The general hypothesis being tested is wrong, and predation by wolves is not the major limiting factor for mountain caribou populations. Many factors are involved in caribou declines, and we cannot determine what they are because they change from area to area and year to year.
  5. These landscape experiments cannot be replicated properly because, at this spatial scale, it is impossible to find 2 or more areas that can be considered replicates.
  6. The experimental manipulation was not carried out long enough. Ten years of manipulation is not long for caribou, which have a generation time of 15-25 years.

Let us evaluate these 6 points.

#1 is fair enough; it would be hard to achieve a moose population this low, but it might be possible in a second experiment.

#2 is a worry because it is difficult to deal experimentally with small populations, but we have to take the populations as a given at the time we do a manipulation.

#3 is true if you are a purist but is silly in the real world where treatments can never be assigned at random in landscape experiments.

#4 is a concern, and it would be nice to include bears and other predators in the studies, but there is a limit to people and money. Almost all previous studies of mountain caribou declines have pointed the finger at wolves, so it is only reasonable to start with this idea. The multiple-factor idea is hopeless to investigate without infinite time and resources.

#5 is like #3 and it is an impossible constraint on field studies. It is a common statistical fallacy to assume that replicates must be identical in every conceivable way. If this were true, no one could do any science, lab or field.

#6 is correct but was impossible to avoid in this case because the management agencies forced this study to end in 2014 so that they could conduct a different experiment. There is always a problem in deciding how long a study is sufficient, and the universal difficulty is that the scientists or (more likely) the money and the landscape managers run out of energy once the time exceeds about 10 years. The result is that one must qualify the conclusions to state that this is what happened in the 10 years available for study.

This study involved a heroic amount of field work over 10 years, and it is a landmark in showing what needs to be done and the scale involved. It is a far cry from sitting at a computer designing the perfect field experiment on a theoretical landscape to actually carrying out the field work to get the data summarized in this paper. The next step is to continue to monitor some of these small caribou populations, along with the wolves and moose, to determine how this food chain continues to adjust to changes in prey levels. The next experiment needed is not yet clear, and the eternal problem is to find the high levels of funding needed to study both predators and prey in any ecosystem in the detail needed to understand why prey numbers change. Perhaps a study of all the major predators in this system – wolves, bears, cougars – should be next. We now have radio-telemetry advances that allow satellite locations, activity levels, timing of mortality, proximity sensors for when predators are near their prey, and even video and sound recording, so that more details of predation events can be documented. But all this costs money that is not yet available, because governments and people have other priorities and value the natural world rather less than we ecologists would prefer. There is not yet a Nobel Prize for ecological field research, and yet here is a study on an iconic Canadian species that would be high up in the running.

What would I add to this paper? My curiosity would be satisfied by knowing the number of person-years and the budget needed to collect and analyze these results. These statistics should be on every scientific paper. And perhaps a discussion of what to do next. In much of ecology these kinds of discussions are held informally over coffee, and students who want to know how science works would benefit from listening to how these informal discussions evolve. Ecology is far from simple. Physics and chemistry are simple, genetics is simple, and ecology is really a difficult science.

Boutin, S. and Merrill, E. 2016. A review of population-based management of Southern Mountain caribou in BC. Unpublished review. Available at: http://cmiae.org/wp-content/uploads/Mountain-Caribou-review-final.pdf

Serrouya, R., McLellan, B.N., van Oort, H., Mowat, G., and Boutin, S. 2017. Experimental moose reduction lowers wolf density and stops decline of endangered caribou. PeerJ 5: e3736. doi: 10.7717/peerj.3736.

 

On Defining a Statistical Population

The more I do “field ecology” the more I wonder about our standard statistical advice to young ecologists to “random sample your statistical population”. Go to the literature and look for papers on “random environmental fluctuations”, “non-random processes”, or “random mating” and you will be overwhelmed with references and with biology’s preoccupation with randomness. Perhaps we should start with the opposite paradigm, that nothing in the biological world is random in space or time, and then add the corollary that if your data show a random pattern, random mating, or randomness of any other kind, it means you have not done enough research and your inferences are weak.

Since virtually all modern statistical inference rests on a foundation of random sampling, every statistician will be outraged by the suggestion that random sampling is possible only in situations that are scientifically uninteresting. It is nearly impossible to find an ecological paper about anything in the real world that even mentions what its statistical “population” is – what it is trying to draw inferences about. And there is a very good reason for this: it is quite impossible to define any statistical population except those of trivial interest. Suppose we wish to measure the heights of the male 12-year-olds who go to school in Minneapolis in 2017. You can certainly do this, and select a random sample, as all statisticians would recommend. And if you continued to do this for 50 years, you would have a lot of data but no understanding of any growth changes in 12-year-old male humans, because the children of 2067 in Minneapolis would be different in many ways from those of today. And so it is like the daily report of the stock market: lots of numbers with no understanding of processes.
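The mechanics of random sampling are indeed the easy part once a population can be enumerated. A minimal sketch (the Gaussian “population” of heights here is entirely invented for illustration) shows that the statistics are trivial; it is the definition of the population, and hence the scope of the inference, that is hard:

```python
import random
import statistics

random.seed(42)

# Hypothetical, enumerable population: heights (cm) of 12-year-old boys
# in one city in one year. The numbers are invented for illustration.
population = [random.gauss(150.0, 7.0) for _ in range(20000)]

# Drawing the random sample is the easy part...
sample = random.sample(population, 100)
estimate = statistics.mean(sample)

# ...but the estimate applies only to this population in this year;
# nothing here tells us anything about the children of 2067.
print(round(estimate, 1), round(statistics.mean(population), 1))
```

The sample mean lands close to the population mean, exactly as the theory promises, yet the inference is confined to the one enumerated population.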

Despite all these ‘philosophical’ issues, ecologists carry on and try to get around the problem by sampling a small area that is considered homogeneous (to the human eye at least) and then arm-waving that their conclusions will apply across the world to similar small areas of some ill-defined habitat (Krebs 2010). Climate change may of course disrupt our conclusions, but perhaps this is all we can do.

Alternatively, we can retreat to the minimalist position and argue that we are drawing no general conclusions but only describing the state of this small piece of real estate in 2017. But alas, this is not what science is supposed to be about. We are supposed to reach general conclusions and even general laws with some predictive power. Should biologists just give up pretending they are scientists? That would not be good for our image, but on the other hand to say that the laws of ecology have changed because the climate is changing is not comforting to our political masters. Imagine the outcry if the laws of physics changed over time, so that, for example, in 25 years CO2 might not be a greenhouse gas. Impossible.

These considerations should make ecologists and other biologists very humble, but in fact this cannot be because the media would not approve and money for research would never flow into biology. Humility is a lost virtue in many western cultures, and particularly in ecology we leap from bandwagon to bandwagon to avoid the judgement that our research is limited in application to undefined statistical populations.

One solution to the dilemma of the impossibility of random sampling is simply to ignore the requirement, and this seems to be the most common solution implicit in ecology papers. Rabe et al. (2002) surveyed the methods used by management agencies to survey populations of large mammals and found that, even when it was possible to use randomized counts on survey areas, most states used non-random sampling, which leads to possible bias in estimates even in aerial surveys. They pointed out that ground surveys of big game were even more likely to provide data based on non-random sampling, simply because most of the survey area is very difficult to access on foot. The general problem is that inference is limited in all these wildlife surveys, and we do not know the ‘population’ to which the derived numbers apply.
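The bias that worries Rabe et al. (2002) is easy to demonstrate in a toy simulation. Here I assume, purely for illustration, that counts are higher in survey units that are hard to reach on foot, so a convenience sample of accessible units underestimates abundance while a random sample of the same size does not:

```python
import random

random.seed(1)

# 1000 hypothetical survey units; the first 300 are accessible on foot.
# Counts are invented: remote units hold more animals on average.
counts = []
for i in range(1000):
    accessible = i < 300
    mean = 2.0 if accessible else 5.0
    counts.append(max(0.0, random.gauss(mean, 1.5)))

true_mean = sum(counts) / len(counts)

# Convenience sample: only the accessible units.
convenience = sum(counts[:300]) / 300

# Random sample of the same size drawn over all units.
rand_sample = random.sample(counts, 300)
randomized = sum(rand_sample) / 300

print(round(true_mean, 2), round(convenience, 2), round(randomized, 2))
```

The convenience estimate is biased low by design here; the point is only that identical sample sizes can give very different inferences depending on how the units were chosen, and that the direction of the bias depends on biology we may not know.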

In an interesting paper that could apply directly to ecology, Williamson (2003) analyzed research papers in a nursing journal to ask whether random sampling was used, in contrast to convenience sampling. He found that only 32% of the 89 studies he reviewed used random sampling. I suspect that this kind of result would apply to much of medical research now, and it might be useful to repeat his kind of analysis on a current ecology journal. He did not consider the even more difficult issue of exactly what statistical population is specified in particular medical studies.

I would recommend that you should put a red flag up when you read “random” in an ecology paper and try to determine how exactly the term is used. But carry on with your research because:

Errors using inadequate data are much less than those using no data at all.

Charles Babbage (1792–1871)

Krebs, C.J. 2010. Case studies and ecological understanding. In: Billick, I. and Price, M.V. (eds.) The Ecology of Place: Contributions of Place-Based Research to Ecological Understanding. University of Chicago Press, Chicago, pp. 283-302. ISBN: 9780226050430.

Rabe, M.J., Rosenstock, S.S., and deVos, J.C. 2002. Review of big-game survey methods used by wildlife agencies of the western United States. Wildlife Society Bulletin 30: 46-52.

Williamson, G.R. 2003. Misrepresenting random sampling? A systematic review of research papers in the Journal of Advanced Nursing. Journal of Advanced Nursing 44: 278-288. doi: 10.1046/j.1365-2648.2003.02803.x.

 

On Indices of Population Abundance

A discussion with Murray Efford last week stimulated me to raise again the issue of using indices to measure population changes. One could argue that this issue has already been fully aired by Anderson (2003) and Engeman (2003), and I discussed it briefly in a blog post about 2 years ago. The general agreement appears to be that mark-recapture estimation of population size is highly desirable if the capture procedure is clearly understood in relation to the assumptions of the model of estimation. McKelvey and Pearson (2001) made this point with some elegant simulations. The best procedure then, if one wishes to replace mark-recapture methods with some index of abundance (track counts, songs, fecal pellets, etc.), is to calibrate the index against absolute abundance information of some type and show that the index and absolute abundance are very highly correlated. This calibration is difficult because there are few natural populations for which we know absolute abundance with high accuracy. We are left hanging with no clear path forward, particularly for monitoring programs that have little time or money to do extensive counting of any one species.

McKelvey and Pearson (2001) laid out a good guide for the use of indices in small mammal trapping, and showed that for many sampling programs the use of the number of unique individuals caught in a sampling session was a good index of population abundance, even though it is negatively biased. The key variable in all these discussions of mark-recapture models is the probability of capture of an individual animal living on the trapping area per session. Many years ago Leslie et al. (1953) considered this issue and the practical result was the recommendation that all subsequent work with small rodents should aim for a maximum probability of capture of individuals. The simplest way to do this was with highly efficient traps and large numbers of traps (single catch traps) so that there was always an excess of traps available for the population being censused. Krebs and Boonstra (1984) presented an analysis of trappability for several Microtus populations in which these recommendations were typically followed (Longworth traps in excess), and they found that the average per session detection probability ranged from about 0.6 to 0.9 for the four Microtus species studied. In all these studies live traps were present year round in the field, locked open when not in use, so the traps became part of the local environment for the voles. Clean live traps were much less likely to catch Microtus townsendii than dirty traps soiled with urine and feces (Boonstra and Krebs 1976). It is clear that minor behavioural quirks of the species under study may have significant effects on the capture data obtained. Individual heterogeneity in the probability of capture is a major problem in all mark-recapture work. But in the end natural history is as important as statistics.
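The interplay between capture probability, the unique-individuals index, and a mark-recapture estimate can be sketched in a small simulation. The true population size and capture probability below are invented (p is chosen inside the 0.6-0.9 range reported above), and the estimator shown is the classic two-session Lincoln-Petersen, used here only as the simplest illustration:

```python
import random

random.seed(7)

N = 100    # true population size (known only because this is a simulation)
p = 0.7    # per-session probability of capture for each individual

def trap_session():
    """Individuals caught in one session, each independently with probability p."""
    return {i for i in range(N) if random.random() < p}

s1, s2 = trap_session(), trap_session()

# Index: number of unique individuals caught across the two sessions.
# It is negatively biased, as McKelvey and Pearson note, because some
# animals are never trapped; the expected value is N * (1 - (1 - p)**2).
unique_caught = len(s1 | s2)

# Lincoln-Petersen estimate from the same two sessions.
recaptures = len(s1 & s2)
petersen = len(s1) * len(s2) / recaptures

print(unique_caught, round(petersen))
```

With capture probabilities this high the index sits close to N; as p falls, the gap between index and truth widens rapidly, which is exactly why calibration against absolute abundance matters before an index is trusted.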

There are at least two take-home messages that can come from all these considerations. First, there are many statistical decisions that have to be made before population size can be estimated from mark-recapture data or any kind of quadrat-based data. Second, there is also much biological information that must be well known before starting out with some kind of sampling design. Detectability may vary greatly with observers, with the types of traps used, and with observer skill, so that again the devil is in the details. A third take-home message, given to me by someone who must remain nameless, is that mark-recapture is hopeless as an ecological method because, even after much work, the elusive population size that one wishes to know is lost in a pile of assumptions. But we cannot accept such a negative view without trying very hard to overcome the problems of sampling and estimation.

One way out of the box we find ourselves in (if we want to estimate population size) is to use an index of abundance and recognize its limitations. We cannot use quantitative population modelling on indices but we may find that indices are the best we can do for now. In particular, monitoring with little money must rely on indices of many populations of both plants and animals. Some data are better than no data for the management of populations and communities.

For the present, spatially explicit capture-recapture (SECR) methods have provided a most useful approach to estimating density (Efford et al. 2009; Efford and Fewster 2013), and much future work will be needed to tell us how useful this relatively new approach is for accurately estimating population density (Broekhuis and Gopalaswamy 2016).

And a final reminder: even if you study community or ecosystem ecology, you must rely on measures of abundance for many quantitative models of system performance. So methods that provide accurate population sizes are just as essential for the vast array of ecological studies.

Anderson, D.R. 2003. Index values rarely constitute reliable information. Wildlife Society Bulletin 31(1): 288-291.

Boonstra, R. and Krebs, C.J. 1976. The effect of odour on trap response in Microtus townsendii. Journal of Zoology (London) 180(4): 467-476. doi: 10.1111/j.1469-7998.1976.tb04692.x.

Broekhuis, F. and Gopalaswamy, A.M. 2016. Counting cats: Spatially explicit population estimates of cheetah (Acinonyx jubatus) using unstructured sampling data. PLoS ONE 11(5): e0153875. doi: 10.1371/journal.pone.0153875.

Efford, M.G. and Fewster, R.M. 2013. Estimating population size by spatially explicit capture–recapture. Oikos 122(6): 918-928. doi: 10.1111/j.1600-0706.2012.20440.x.

Efford, M.G., Dawson, D.K., and Borchers, D.L. 2009. Population density estimated from locations of individuals on a passive detector array. Ecology 90(10): 2676-2682. doi: 10.1890/08-1735.1.

Engeman, R.M. 2003. More on the need to get the basics right: population indices. Wildlife Society Bulletin 31(1): 286-287.

Krebs, C.J. and Boonstra, R. 1984. Trappability estimates for mark-recapture data. Canadian Journal of Zoology 62(12): 2440-2444. doi: 10.1139/z84-360.

Leslie, P.H., Chitty, D., and Chitty, H. 1953. The estimation of population parameters from data obtained by means of the capture-recapture method. III. An example of the practical applications of the method. Biometrika 40(1-2): 137-169. doi: 10.1093/biomet/40.1-2.137.

McKelvey, K.S. and Pearson, D.E. 2001. Population estimation with sparse data: the role of estimators versus indices revisited. Canadian Journal of Zoology 79(10): 1754-1765. doi: 10.1139/cjz-79-10-1754.

On Critical Questions in Biodiversity and Conservation Ecology

Biodiversity can be a vague concept, with so many measurement variants that one may wonder what exactly it is and how to incorporate ideas about biodiversity into scientific hypotheses. Even if we take the simplest concept of species richness as the operational measure, many questions arise about the importance of the rare species that make up most of the biodiversity but so little of the biomass. How can we proceed to a better understanding of this nebulous ecological concept that we continually put before the public as needing their attention?
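The point about rare species can be made concrete with a toy community (the abundances are invented for illustration): most of the species richness is carried by species that contribute almost none of the individuals, and a diversity index such as Shannon's weights them very differently from a simple species count.

```python
import math

# Hypothetical community: 3 dominant species plus 40 rare ones.
abundances = [500, 300, 100] + [2] * 40

total = sum(abundances)
richness = len(abundances)                 # 43 species
shannon = -sum((n / total) * math.log(n / total) for n in abundances)

rare_fraction_of_species = 40 / richness       # ~93% of the species...
rare_fraction_of_individuals = 80 / total      # ...but ~8% of the individuals

print(richness, round(shannon, 2))
```

Whether "biodiversity" went up or down after a disturbance can therefore depend entirely on which of these measurement variants was chosen, which is the vagueness complained of above.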

Biodiversity conservation relies on community and ecosystem ecology for guidance on how to advance scientific understanding. A recent paper by Turkington and Harrower (2016) articulates this very clearly by laying out 7 general questions for analyzing community structure for the conservation of biodiversity. As such, these questions are a general model for the community and ecosystem ecology approaches that are needed in this century. Thus it would pay to look at these 7 questions more closely and to read this new paper. Here is the list of 7 questions from the paper:

  1. How are natural communities structured?
  2. How does biodiversity determine the function of ecosystems?
  3. How does the loss of biodiversity alter the stability of ecosystems?
  4. How does the loss of biodiversity alter the integrity of ecosystems?
  5. Diversity and species composition
  6. How does the loss of species determine the ability of ecosystems to respond to disturbances?
  7. How does food web complexity and productivity influence the relative strength of trophic interactions and how do changes in trophic structure influence ecosystem function?

Turkington and Harrower (2016) note that each of these 7 questions can be asked in at least 5 different contexts in the biodiversity hotspots of China:

  1. How do the observed responses change across the 28 vegetation types in China?
  2. How do the observed responses change from the low productivity grasslands of the Qinghai Plateau to higher productivity grasslands in other parts of China?
  3. How do the observed responses change along a gradient in the intensity of human use or degradation?
  4. How long should an experiment be conducted given that the immediate results are seldom indicative of longer-term outcomes?
  5. How does the scale of the experiment influence treatment responses?

There are major problems in all of this, as Turkington and Harrower (2016) and Bruelheide et al. (2014) have discussed. The first problem is to determine what the community is, or what the bounds of an ecosystem are. This is a trivial issue according to community and ecosystem ecologists: one simply draws a circle around the particular area of interest for the study. But two points remain. Populations, communities, and ecosystems are open systems with no clear boundaries. In population ecology we can master this problem by analyses of the movements and dispersal of individuals. On a short time scale, plants in communities are fixed in position while their associated animals move on species-specific scales. Communities and ecosystems are not units but vary continuously in space and time, making their analysis difficult. The species present on 50 m² are not the same as those on another plot 100 m or 1000 m away, even if the vegetation types are labeled the same. So we replicate plots within what we define to be our community. If you are studying plant dynamics, you can experimentally place all the selected plant species in defined plots in a pre-arranged configuration for your planting experiments, but you cannot do this with animals except in microcosms. All experiments are place-specific, and if you consider climate change on a 100-year time scale, they are also time-specific. We can hope that generality is strong and our conclusions will apply in 100 years, but we do not know this now.

But we can do manipulative experiments, as these authors strongly recommend, and that brings a whole new set of problems, outlined for example in Bruelheide et al. (2014, Table 1, page 78) for a forestry experiment in southern China. Decisions about how many tree species to manipulate, in what size of plots, and at what planting density are all potentially critical to the conclusions we reach. But it is the time frame of hypothesis testing that is the great unknown. All these studies must be long-term, but whether this means 10 years or 50 years can only be found out in retrospect. Is it better to have, for example, forestry experiments around the world carried out with identical protocols, or to adopt a laissez-faire approach with different designs, since we have no idea yet which design is best for answering these broad questions?

I suspect that this outline of broad questions given in Turkington and Harrower (2016) is at least a 100-year agenda, and we need to be concerned about how we can carry it forward in a world where the funding of research questions has a 3- or 5-year time frame. The only possible way forward, until we win the lottery, is for all researchers to carry out short-term experiments on very specific hypotheses within this framework. So every graduate student thesis in experimental community and ecosystem ecology is important to achieving the goals outlined in these papers. Even if this 100-year time frame proves optimistic, we can progress on a shorter time scale through a series of detailed experiments on small parts of the community or ecosystem at hand. I note that some of the broad questions listed above have been around for more than 50 years without being answered. If we redefine our objectives more precisely and do the kinds of experiments that these authors suggest, we can move forward, not with the solution of grand ideas so much as with detailed experimental data on very precise questions about our chosen community. In this way we keep the long-range goal posts in view but concentrate on short-term manipulative experiments that are place- and time-specific.

This will not be easy. Birds are probably the best-studied group of animals on Earth, and we now have many species that are changing dramatically in abundance over large spatial scales (e.g. http://www.stateofcanadasbirds.org/). I am sobered by asking avian ecologists why a particular species is declining or dramatically increasing. I never get a good answer, typically only a generally plausible idea, a hand-waving explanation based on correlations that are not measured or well understood. Species recovery plans are often based on hunches rather than good data, with few of the key experiments of the type requested by Turkington and Harrower (2016). At the moment the world is changing rather faster than our understanding of the ecological interactions that tie species together in communities and ecosystems. We are walking when we need to be running, and even the Red Queen is not keeping up.

Bruelheide, H. et al. 2014. Designing forest biodiversity experiments: general considerations illustrated by a new large experiment in subtropical China. Methods in Ecology and Evolution, 5, 74-89. doi: 10.1111/2041-210X.12126

Turkington, R. & Harrower, W.L. 2016. An experimental approach to addressing ecological questions related to the conservation of plant biodiversity in China. Plant Diversity, 38, 1-10. Available at: http://journal.kib.ac.cn/EN/volumn/current.shtml

Hypothesis testing using field data and experiments is definitely NOT a waste of time

At the ESA meeting in 2014, Greg Dwyer (University of Chicago) gave a talk titled “Trying to understand ecological data without mechanistic models is a waste of time.” This theme has recently been reiterated on Dynamic Ecology, the blog of Jeremy Fox, Brian McGill and Megan Duffy (25 January 2016, https://dynamicecology.wordpress.com/2016/01/25/trying-to-understand-ecological-data-without-mechanistic-models-is-a-waste-of-time/). Some immediate responses to the post have raised such questions as “What is a mechanistic model?”, “What about the use of inappropriate statistics to fit mechanistic models?”, and “prediction vs. description from mechanistic models”. All of these are relevant and interesting issues in interpreting the value of mechanistic models.

The biggest fallacy in this blog post, however (or at least in its title), is the implication that field ecological data are collected in a vacuum. Hypotheses are models, conceptual models, and it is only in the absence of hypotheses that trying to understand ecological data is a “waste of time”. Research proposals that fund field work demand testable hypotheses, and testing hypotheses advances science. Research using mechanistic models should also develop testable hypotheses, but mechanistic models are certainly not the only route to hypothesis creation or testing.

Unfortunately, mechanistic models rarely identify how the robustness and generality of the model output could be tested against ecological data, and they often fail to describe comprehensively the many assumptions made in constructing the model. In fact, they are often presented as complete descriptions of the ecological relationships in question, and methods for model validation are not discussed. Sometimes modelling papers include blatantly unrealistic functions to simplify ecological processes, without exploring the sensitivity of the results to those functions.

I can refer to my own area of research expertise, population cycles, for an example here. It is not enough, for instance, to have a pattern of ups and downs with a 10-year periodicity to claim that a model is an acceptable representation of the cyclic population dynamics of a forest lepidopteran or of snowshoe hares. There are many ways to get cyclic dynamics in modeled systems. Scientific progress and understanding can only be made if the outcomes of conceptual, mechanistic, or statistical models define the hypotheses that could be tested and the experiments that could be conducted to support the acceptance, rejection, or modification of the model, and thus to inform understanding of natural systems.
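The claim that there are many ways to get cyclic dynamics is easy to illustrate. A Ricker model with a one-step delay in density dependence (parameters invented, no biology attached) produces sustained multi-year quasi-cycles with no predation, disease, or plant defence in it at all, which is exactly why a cyclic model output by itself validates nothing:

```python
import math

# Ricker growth with delayed density dependence:
# N[t+1] = N[t] * exp(r * (1 - N[t-1] / K))
# r and K are illustrative values, not estimates for any real species.
r, K = 1.2, 100.0

N = [50.0, 60.0]
for t in range(1, 300):
    N.append(N[t] * math.exp(r * (1 - N[t - 1] / K)))

# Find local peaks after a burn-in period and measure their spacing.
peaks = [t for t in range(100, 299) if N[t] > N[t - 1] and N[t] > N[t + 1]]
spacing = [b - a for a, b in zip(peaks, peaks[1:])]
mean_period = sum(spacing) / len(spacing)

print(round(mean_period, 1))
```

Tuning r and K (or adding a longer delay) shifts the period at will, so matching an observed 10-year periodicity is a weak test of any particular mechanism.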

How helpful are mechanistic models – the gypsy moth story

Given the implication of Dwyer’s blog post (or at least its title) that mechanistic models are the only way to ecological understanding, it is useful to look at models of gypsy moth dynamics, one of Greg’s areas of modeling expertise, with a view toward evaluating whether the model assumptions are compatible with real-world data (Dwyer et al. 2004; http://www.nature.com/nature/journal/v430/n6997/abs/nature02569.html).

Although there has been considerable excellent work on gypsy moth over the years, long-term population data are lacking. Population dynamics therefore are estimated from annual estimates of defoliation carried out by the US Forest Service in New England starting in 1924. These data show periods of non-cyclicity, two ten-year cycles (peaks in 1981 and 1991, which Dwyer uses for comparison to modeled dynamics in a number of his mechanistic models), and harmonic 4-5 year cycles between 1943 and 1979 and since the 1991 outbreak. Based on these data, 10-year cycles are the exception, not the rule, for introduced populations of gypsy moth. Point 1. Many of the Dwyer mechanistic models were tested using the two outbreak periods and ignored over 20 years of subsequent defoliation data lacking 10-year cycles. Thus his results are limited in their generality.

As a further example, a recent paper, Elderd et al. (2013) (http://www.ncbi.nlm.nih.gov/pmc/articles/PMC3773759/), explored the relationship between alternating long and short cycles of gypsy moth in oak-dominated forests by speculating that inducible tannins in oaks modify the interactions between gypsy moth larvae and viral infection. Although previous field experiments (D’Amico et al. 1998, http://onlinelibrary.wiley.com/doi/10.1890/0012-9658(1998)079%5b1104:FDDNAW%5d2.0.CO%3b2/abstract) concluded that gypsy moth defoliation does not affect tannin levels sufficiently to influence viral infection, Elderd et al. (2013) proposed that induced tannins in red oak foliage reduce variation in viral infection levels and promote shorter cycles. In this study, an experiment was conducted using jasmonic acid sprays to induce oak foliage. Point 2. This mechanistic model is based on experiments using artificially induced tannins as a mimic of insect damage inducing plant defenses. However, earlier fieldwork showed that foliage damage does not influence virus transmission and thus does not support the relevance of this mechanism.

In this model Elderd et al. (2013) use a linear relationship for viral transmission (transmission of infection as a function of baculovirus density) based on only two data points and a zero intercept. In past mechanistic models, and in a number of other systems, the relationship between viral transmission and host density is nonlinear (D’Amico et al. 2005, http://onlinelibrary.wiley.com/doi/10.1111/j.0307-6946.2005.00697.x/abstract; Fenton et al. 2002, http://onlinelibrary.wiley.com/doi/10.1046/j.1365-2656.2002.00656.x/full). Point 3. Data are insufficient to accurately describe the viral transmission relationship used in the model.
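For readers unfamiliar with this distinction, the contrast between a linear (mass-action) transmission term and the nonlinear alternatives reported in these other systems can be written generically; the symbols below are illustrative, not taken from any of the cited papers:

```latex
% Linear (mass-action) transmission: the rate at which susceptible
% hosts S become infected is proportional to pathogen density P
\frac{dS}{dt} = -\nu S P

% A common nonlinear alternative (power-law transmission), where
% exponents p, q differing from 1 capture host heterogeneity in
% susceptibility or spatial clumping of the pathogen
\frac{dS}{dt} = -\nu S^{p} P^{q}
```

Two data points plus a zero intercept can only ever fit the linear form; distinguishing it from a power-law form would require transmission estimates across a range of baculovirus densities.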

Finally, the Elderd et al. (2013) model considers two types of gypsy moth habitat: one composed of 43% oaks that are inducible, and the other of 15% oaks with the remainder of the forest composition in adjacent blocks of non-inducible pines. Data show that gypsy moth outbreaks are limited to areas with high frequencies of oaks. In mixed forests, pines are fed on only by later instars of moth larvae when oaks are defoliated. The pines would be interspersed amongst the oaks, not in separate blocks as in the modeled population. Point 4. Patterns of forest composition that are crucial to the model’s results are unrealistic, and this makes interpretation of the results impossible.

Point 5 and conclusion. Because it can be very difficult to critically review someone else’s mechanistic model, as model assumptions are often hidden in supplementary material and hard to interpret, and because relationships used in models are often arbitrarily chosen and not based on available data, it would be easy to conclude that “mechanistic models are misleading and a waste of time”. But of course that wouldn’t be productive. So my final point is that closer collaboration between modelers and data collectors would be the best way to ensure that models are reasonable and accurate representations of the data. In this way understanding and realistic predictions would be advanced. Unfortunately the great push to publish high-profile papers works against this collaboration, and manuscripts of mechanistic models rarely include data-savvy referees.

D’Amico, V., J. S. Elkinton, G. Dwyer, R. B. Willis, and M. E. Montgomery. 1998. Foliage damage does not affect within-season transmission of an insect virus. Ecology 79:1104-1110.

D’Amico, V., J. S. Elkinton, J. D. Podgwaite, J. P. Buonaccorsi, and G. Dwyer. 2005. Pathogen clumping: an explanation for non-linear transmission of an insect virus. Ecological Entomology 30:383-390.

Dwyer, G., J. Dushoff, and S. H. Yee. 2004. The combined effects of pathogens and predators on insect outbreaks. Nature 430:341-345.

Elderd, B. D., B. J. Rehill, K. J. Haynes, and G. Dwyer. 2013. Induced plant defenses, host–pathogen interactions, and forest insect outbreaks. Proceedings of the National Academy of Sciences 110:14978-14983.

Fenton, A., J. P. Fairbairn, R. Norman, and P. J. Hudson. 2002. Parasite transmission: reconciling theory and reality. Journal of Animal Ecology 71:893-905.

On Tipping Points and Regime Shifts in Ecosystems

A new important paper raises red flags about our preoccupation with tipping points, alternative stable states and regime shifts (I’ll call them collectively sharp transitions) in ecosystems (Capon et al. 2015). I do not usually call attention to papers but this paper and a previous review (Mac Nally et al. 2014) seem to me to be critical for how we think about ecosystem changes in both aquatic and terrestrial ecosystems.

Consider an oversimplified example of how a sharp transition might work. Suppose we dumped fertilizer into a temperate clear-water lake. The clear water soon turns into pea soup with a new batch of algal species, a clear shift in the ecosystem, and this change is not good for many of the invertebrates or fish that were living there. Now suppose we stop dumping fertilizer into the lake. In time, and this could be a few years, the lake can either go back to its original state of clear water or it could remain as a pea soup lake for a very long time even though the pressure of added fertilizer was stopped. This second outcome would be a sharp transition, “you cannot go back from here” and the question for ecologists is how often does this happen? Clearly the answer is of great interest to natural resource managers and restoration ecologists.

The history of this idea for me was from the 1970s at UBC when Buzz Holling and Carl Walters were modelling the spruce budworm outbreak problem in eastern Canadian coniferous forests. They produced a model with a manifold surface that tipped the budworm from a regime of high abundance to one of low abundance (Holling 1973). We were all suitably amazed and began to wonder if this kind of thinking might be helpful in understanding snowshoe hare population cycles and lemming cycles. The evidence was very thin for the spruce budworm, but the model was fascinating. Then by the 1980s the bandwagon started to roll, and alternative stable states and regime change seemed to be everywhere. Many ideas about ecosystem change got entangled with sharp transition, and the following two reviews help to unravel them.

Of the 135 papers reviewed by Capon et al. (2015) very few showed good evidence of alternative stable states in freshwater ecosystems. They highlighted the use and potential misuse of ecological theory in trying to predict future ecosystem trajectories by managers, and emphasized the need of a detailed analysis of the mechanisms causing ecosystem change. In a similar paper for estuaries and near inshore marine ecosystems, Mac Nally et al. (2014) showed that of 376 papers that suggested sharp transitions, only 8 seemed to have sufficient data to satisfy the criteria needed to conclude that a transition had occurred and was linkable to an identifiable pressure. Most of the changes described in these studies are examples of gradual ecosystem changes rather than a dramatic shift; indeed, the timescale against which changes are assessed is critical. As always the devil is in the details.

All of this is to recognize that strong ecosystem changes do occur in response to human actions but they are not often sharp transitions that are closely linked to human actions, as far as we can tell now. And the general message is clearly to increase rigor in our ecological publications, and to carry out the long-term studies that provide a background of natural variation in ecosystems so that we have a ruler to measure human induced changes. Reviews such as these two papers go a long way to helping ecologists lift our game.

Perhaps it is best to end with part of the abstract in Capon et al. (2015):

“We found limited understanding of the subtleties of the relevant theoretical concepts and encountered few mechanistic studies that investigated or identified cause-and-effect relationships between ecological responses and nominal pressures. Our results mirror those of reviews for estuarine, nearshore and marine aquatic ecosystems, demonstrating that although the concepts of regime shifts and alternative stable states have become prominent in the scientific and management literature, their empirical underpinning is weak outside of a specific environmental setting. The application of these concepts in future research and management applications should include evidence on the mechanistic links between pressures and consequent ecological change. Explicit consideration should also be given to whether observed temporal dynamics represent variation along a continuum rather than categorically different states.”

 

Capon, S.J., Lynch, A.J.J., Bond, N., Chessman, B.C., Davis, J., Davidson, N., Finlayson, M., Gell, P.A., Hohnberg, D., Humphrey, C., Kingsford, R.T., Nielsen, D., Thomson, J.R., Ward, K., and Mac Nally, R. 2015. Regime shifts, thresholds and multiple stable states in freshwater ecosystems; a critical appraisal of the evidence. Science of the Total Environment: in press. doi:10.1016/j.scitotenv.2015.02.045.

Holling, C.S. 1973. Resilience and stability of ecological systems. Annual Review of Ecology and Systematics 4: 1-23. doi:10.1146/annurev.es.04.110173.000245.

Mac Nally, R., Albano, C., and Fleishman, E. 2014. A scrutiny of the evidence for pressure-induced state shifts in estuarine and nearshore ecosystems. Austral Ecology 39: 898-906. doi:10.1111/aec.12162.

The Anatomy of an Ecological Controversy – Dingos and Conservation in Australia

Conservation is a most contentious discipline, partly because it is ecology plus a moral stance. As such you might compare it to discussions about religious truths in the last several centuries but it is a discussion among scientists who accept the priority of scientific evidence. In Australia for the past few years there has been much discussion of the role of the dingo in protecting biodiversity via mesopredator release of foxes and cats (Allen et al. 2013; Colman et al. 2014; Hayward and Marlow 2014; Letnic et al. 2011, and many more papers). I do not propose here to declare a winner in this controversy but I want to dissect it as an example of an ecological issue with so many dimensions it could continue for a long time.

Dingos in Australia are viewed like wolves in North America – the ultimate enemy that must be reduced or eradicated if possible. When in doubt about what to do, killing dingos or wolves has become the first commandment of wildlife management and conservation. The ecologist would like to know, given this socially determined goal, what are the ecological consequences of reduction or eradication of dingos or wolves. How do we determine that?

The experimentalist suggests doing a removal experiment (or conversely a re-introduction experiment) so we have ecosystems with and without dingos (Newsome et al. 2015). This would have to be carried out on a large scale, dependent on the home range size of the dingo, and for a number of years, so that the benefits or costs of the removal would be clear. Here is the first hurdle: this kind of experiment cannot be done, and only a quasi-experiment is possible, by finding areas that have dingos and others that do not have any (or have a reduced population) and comparing ecosystems. This decision immediately introduces five problems:

  1. The areas with and without the dingo are not comparable in many respects. Areas with dingos, for example, may be national parks placed in the mountains or in areas that humans cannot use for agriculture, while areas with dingo control are in fertile agricultural landscapes with farming subsidies.
  2. Even given areas with and without dingos there is the problem of validating the usual dingo reduction carried out by poison baits or shooting. This is an important methodological issue.
  3. One has to census the mesopredators, in Australia foxes and cats, with further methodological issues of how to achieve that with accuracy.
  4. In addition one has to census the smaller vertebrates presumed to be possibly affected by the mesopredator offtake.
  5. Finally one has to do this for several years, possibly 5-10 years, particularly in variable environments, and in several pairs of areas chosen to represent the range of ecosystems of interest.

All in all this is a formidable research program, and one that has been carried out in part by the researchers working on dingos. And we owe them our congratulations for their hard work. The major part of the current controversy has been how one measures population abundance of all the species involved. The larger the organism, paradoxically the more difficult and expensive the methods of estimating abundance. Indirect measures, often from predator tracks in sand plots, are forced on researchers because of a lack of funding and the landscape scale of the problem. The essence of the problem is that tracks in sand or mud measure both abundance and activity. If movements increase in the breeding season, tracks may indicate activity more than abundance. If old roads are the main sampling sites, the measurements are not a random sample of the landscape.

This monumental sampling headache can be eliminated by the bold stroke of concluding with Nimmo et al. (2015) and Stephens et al. (2015) that indirect measures of abundance are sufficient for guiding actions in conservation management. They may be, they may not be, and we fall back into the ecological dilemma that different ecosystems may give different answers. And the background question is what level of accuracy do you need in your study? We are all in a hurry now and want action for conservation. If you need to know only whether you have “few” or “many” dingos or tigers in your area, indirect methods may well serve the purpose. We are rushing now into the “Era of the Camera” in wildlife management because the cost is low and the volume of data is large. Camera ecology may be sufficient for occupancy questions, but may not be enough for demographic analysis without detailed studies.

The moral issue that emerges from this particular dingo controversy is similar to the one that bedevils wolf control in North America and Eurasia – should we remove large predators from ecosystems? The ecologist’s job is to determine the biodiversity costs and benefits of such actions. But in the end we are moral beings as well as ecologists, and for the record, not the scientific record but the moral one, I think it is poor policy to remove dingos, wolves, and all large predators from ecosystems. Society however seems to disagree.

 

Allen, B.L., Allen, L.R., Engeman, R.M., and Leung, L.K.P. 2013. Intraguild relationships between sympatric predators exposed to lethal control: predator manipulation experiments. Frontiers in Zoology 10(39): 1-18. doi:10.1186/1742-9994-10-39.

Colman, N.J., Gordon, C.E., Crowther, M.S., and Letnic, M. 2014. Lethal control of an apex predator has unintended cascading effects on forest mammal assemblages. Proceedings of the Royal Society of London, Series B 281(1803): 20133094. doi:10.1098/rspb.2013.3094.

Hayward, M.W., and Marlow, N. 2014. Will dingoes really conserve wildlife and can our methods tell? Journal of Applied Ecology 51(4): 835-838. doi:10.1111/1365-2664.12250.

Letnic, M., Greenville, A., Denny, E., Dickman, C.R., Tischler, M., Gordon, C., and Koch, F. 2011. Does a top predator suppress the abundance of an invasive mesopredator at a continental scale? Global Ecology and Biogeography 20(2): 343-353. doi:10.1111/j.1466-8238.2010.00600.x.

Newsome, T.M., et al. (2015) Resolving the value of the dingo in ecological restoration. Restoration Ecology, 23 (in press). doi: 10.1111/rec.12186

Nimmo, D.G., Watson, S.J., Forsyth, D.M., and Bradshaw, C.J.A. 2015. Dingoes can help conserve wildlife and our methods can tell. Journal of Applied Ecology 52. (in press, 27 Jan. 2015). doi:10.1111/1365-2664.12369.

Stephens, P.A., Pettorelli, N., Barlow, J., Whittingham, M.J., and Cadotte, M.W. 2015. Management by proxy? The use of indices in applied ecology. Journal of Applied Ecology 52(1): 1-6. doi:10.1111/1365-2664.12383.

A Survey of Strong Inference in Ecology Papers: Platt’s Test and Medawar’s Fraud Model

In 1897 Chamberlin wrote an article in the Journal of Geology on the method of multiple working hypotheses as a way of experimentally testing scientific ideas (Chamberlin 1897 reprinted in Science). Ecology was scarcely invented at that time and this has stimulated my quest here to see if current ecology journals subscribe to Chamberlin’s approach to science. Platt (1964) formalized this approach as “strong inference” and argued that it was the best way for science to progress rapidly. If this is the case (and some do not agree that this approach is suitable for ecology) then we might use this model to check now and then on the state of ecology via published papers.

I did a very small survey in the Journal of Animal Ecology for 2015. Most ecologists I hope would classify this as one of our leading journals. I asked the simple question of whether in the Introduction to each paper there were explicit hypotheses stated and explicit alternative hypotheses, and categorized each paper as ‘yes’ or ‘no’. There is certainly a problem here in that many papers stated a hypothesis or idea they wanted to investigate but never discussed what the alternative was, or indeed if there was an alternative hypothesis. As a potential set of covariates, I tallied how many times the word ‘hypothesis’ or ‘hypotheses’ occurred in each paper, as well as the word ‘test’, ‘prediction’, and ‘model’. Most ‘model’ and ‘test’ words were used in the context of statistical models or statistical tests of significance. Singular and plural forms of these words were all counted.

This is not a publication and I did not want to spend the rest of my life looking at all the other ecology journals and many issues, so I concentrated on the Journal of Animal Ecology, volume 84, issues 1 and 2 in 2015. I obtained these results for the 51 articles in these two issues: (number of times the word appeared per article, averaged over all articles)

Explicit hypothesis and alternative hypotheses stated:  Yes 22%, No 78%  (51 articles)

Word frequency per article:

            “Hypothesis”   “Test”   “Prediction”   “Model”
  Mean          3.1          7.9        6.5          32.5
  Median        1            6          4            20
  Range         0-23         0-37       0-27         0-163
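The tallying procedure described above is straightforward to automate. Here is a minimal sketch of one way to do it; the function names and the toy articles are my own illustrations, not part of the survey:

```python
import re
from statistics import mean, median

def count_word(text, word):
    """Count whole-word occurrences of a word and its plural, case-insensitively."""
    irregular = {"hypothesis": "hypotheses"}  # irregular plural form
    forms = [word, irregular.get(word, word + "s")]
    pattern = r"\b(?:" + "|".join(re.escape(f) for f in forms) + r")\b"
    return len(re.findall(pattern, text, flags=re.IGNORECASE))

def summarize(articles, word):
    """Mean, median, and range of per-article counts for one word."""
    counts = [count_word(text, word) for text in articles]
    return {"mean": mean(counts), "median": median(counts),
            "range": (min(counts), max(counts))}

# Toy example: three short "articles"
articles = [
    "We tested the hypothesis that predation limits abundance.",
    "Alternative hypotheses were compared; the model fit was assessed.",
    "A purely descriptive study with no stated expectations.",
]
print(summarize(articles, "hypothesis"))  # per-article counts: 1, 1, 0
```

Note that the word-boundary pattern counts “test” inside “t-test” as an occurrence, consistent with the observation below that most uses of ‘test’ were in the context of statistical tests.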

There are lots of problems with a simple analysis like this and perhaps its utility may lie in stimulating a more sophisticated analysis of a wider variety of journals. It is certainly not a random sample of the ecology literature. But maybe it gives us a few insights into ecology 2015.

I found the results quite surprising in that many papers failed Platt’s Test for strong inference. Many papers stated hypotheses but failed to state alternative hypotheses. In some cases the implied alternative hypothesis is the now-discredited null hypothesis (Johnson 2002). One possible reason for the failure to state hypotheses clearly was discussed by Medawar many years ago (Howitt and Wilson 2014; Medawar 1963). He pointed out that most scientific papers were written backwards: analysing the data, finding out what they showed, and then writing the introduction to the paper knowing the results to follow. A significant number of papers in the issues I have looked at here seem to have been written following Medawar’s “fraud model”.

But make of such data as you will, and I appreciate that many people write papers in a less formal style than Medawar or Platt would prefer. And many have alternative hypotheses in mind but do not write them down clearly. And perhaps many referees do not think we should be restricted to using the hypothetical deductive approach to science. All of these points of view should be discussed rather than ignored. I note that some ecological journals now turn back papers that have no clear statement of a hypothesis in the introduction to the submitted paper.

The word ‘model’ is the most common word to appear in this analysis, typically in the case of a statistical model evaluated by AIC kinds of statistics. And the word ‘test’ was most commonly used for statistical tests (‘t-test’) in a paper. Indeed virtually all of these papers overflow with statistical estimates of various kinds. Few however come back in the conclusions to state exactly what progress has been made by their paper, and even fewer make statements about what should be done next. From this small survey there is considerable room for improvement in ecological publications.

Chamberlin, T.C. 1897. The method of multiple working hypotheses. Journal of Geology 5: 837-848 (reprinted in Science 148: 754-759 in 1965). doi:10.1126/science.148.3671.754

Howitt, S.M., and Wilson, A.N. 2014. Revisiting “Is the scientific paper a fraud?”. EMBO reports 15(5): 481-484. doi:10.1002/embr.201338302

Johnson, D.H. (2002) The role of hypothesis testing in wildlife science. Journal of Wildlife Management 66(2): 272-276. doi: 10.2307/3803159

Medawar, P.B. 1963. Is the scientific paper a fraud? In “The Threat and the Glory”. Edited by P.B. Medawar. Harper Collins, New York. pp. 228-233. (Reprinted by Harper Collins in 1990. ISBN: 9780060391126.)

Platt, J.R. 1964. Strong inference. Science 146: 347-353. doi:10.1126/science.146.3642.347

On Repeatability in Ecology

One of the elementary lessons of statistics is that every measurement must be repeatable so that differences or changes in some ecological variable can be interpreted with respect to some ecological or environmental mechanism. So if we count 40 elephants in one year and count 80 in the following year, we know that population abundance has changed and we do not have to consider the possibility that the repeatability of our counting method is so poor that 40 and 80 could refer to the same population size. Both precision and bias come into the discussion at this point. Much of the elaboration of ecological methods involves the attempt to improve the precision of methods such as those for estimating abundance or species richness. There is less discussion of the problem of bias.

The repeatability that is most crucial in forging a solid science is that associated with experiments. We should not simply do an important experiment in a single place and then assume the results apply world-wide. Of course we do this, but we should always remember that this is a gigantic leap of faith. Ecologists are often not willing to repeat critical experiments, in contrast to scientists in chemistry or molecular biology. Part of this reluctance is understandable because the costs associated with many important field experiments are large, and funding committees must then judge whether to repeat the old or fund the new. But if we do not repeat the old, we never can discover the limits to our hypotheses or generalizations. Given a limited amount of money, experimental designs often limit the potential generality of the conclusions. Should you have 2 or 4 or 6 replicates? Should you have more replicates and fewer treatment sites or levels of manipulation? When we can, we try one way and then another to see if we get similar results.

A looming issue now is climate change which means that the ecosystem studied in 1980 is possibly rather different than the one you now study in 2014, or the place someone manipulated in 1970 is not the same community you manipulated this year. The worst case scenario would be to find out that you have to do the same experiment every ten years to check if the whole response system has changed. Impossible with current funding levels. How can we develop a robust set of generalizations or ‘theories’ in ecology if the world is changing so that the food webs we so carefully described have now broken down? I am not sure what the answers are to these difficult questions.

And then you pile evolution into this mix and wonder if organisms can change like Donelson et al.’s (2012) tropical reef fish, so that climate changes might be less significant than we currently think, at least for some species. The frustration that ecologists now face over these issues with respect to ecosystem management boils over in many verbal discussions like those on “novel ecosystems” (Hobbs et al. 2014, Aronson et al. 2014) that can be viewed as critical decisions about how to think about environmental change or a discussion about angels on pinheads.

Underlying all of this is the global issue of repeatability, and whether our current perceptions of how to manage ecosystems are sufficiently reliable to sidestep the adaptive management scenarios that seem so useful in theory (Conroy et al. 2011) but are at present rare in practice (Keith et al. 2011). The need for action in conservation biology seems to trump the need for repeatability to test the generalizations on which we base our management recommendations. This need is apparent in all our sciences that affect humans directly. In agriculture we release new varieties of crops with minimal long-term studies of their effects on the ecosystem, or we introduce new methods such as no-till agriculture without adequate studies of its impacts on soil structure and pest species. This kind of hubris does guarantee long-term employment in mitigating adverse consequences, but is perhaps not an optimal way to proceed in environmental management. We cannot follow the Hippocratic Oath in applied ecology because all our management actions create winners and losers, and ‘harm’ then becomes an opinion about how we designate ‘winners’ and ‘losers’. Using social science is one way out of this dilemma, but history gives sparse support for the idea of ‘expert’ opinion as a guide to good environmental action.

Aronson, J., Murcia, C., Kattan, G.H., Moreno-Mateos, D., Dixon, K. & Simberloff, D. (2014) The road to confusion is paved with novel ecosystem labels: a reply to Hobbs et al. Trends in Ecology & Evolution, 29, 646-647.

Conroy, M.J., Runge, M.C., Nichols, J.D., Stodola, K.W. & Cooper, R.J. (2011) Conservation in the face of climate change: The roles of alternative models, monitoring, and adaptation in confronting and reducing uncertainty. Biological Conservation, 144, 1204-1213.

Donelson, J.M., Munday, P.L., McCormick, M.I. & Pitcher, C.R. (2012) Rapid transgenerational acclimation of a tropical reef fish to climate change. Nature Climate Change, 2, 30-32.

Hobbs, R.J., Higgs, E.S. & Harris, J.A. (2014) Novel ecosystems: concept or inconvenient reality? A response to Murcia et al. Trends in Ecology & Evolution, 29, 645-646.

Keith, D.A., Martin, T.G., McDonald-Madden, E. & Walters, C. (2011) Uncertainty and adaptive management for biodiversity conservation. Biological Conservation, 144, 1175-1178.

On Research Questions in Ecology

I have done considerable research in arctic Canada on questions of population and community ecology, and perhaps because of this I get e-mails about new proposals. This one just arrived from a NASA program called ABoVE that is just now starting up.

“Climate change in the Arctic and Boreal region is unfolding faster than anywhere else on Earth, resulting in reduced Arctic sea ice, thawing of permafrost soils, decomposition of long- frozen organic matter, widespread changes to lakes, rivers, coastlines, and alterations of ecosystem structure and function. NASA’s Terrestrial Ecology Program is in the process of planning a major field campaign, the Arctic-Boreal Vulnerability Experiment (ABoVE), which will take place in Alaska and western Canada during the next 5 to 8 years.“

“The focus of this solicitation is the initial research to begin the Arctic-Boreal Vulnerability Experiment (ABoVE) field campaign — a large-scale study of ecosystem responses to environmental change in western North America’s Arctic and boreal region and the implications for social-ecological systems. The Overarching Science Question for ABoVE is: “How vulnerable or resilient are ecosystems and society to environmental change in the Arctic and boreal region of western North America?””

I begin by noting that Peters (1991) wrote very much about the problems with these kinds of ‘how’ questions. First of all note that this is not a scientific question. There is no conceivable way to answer this question. It contains a set of meaningless words to an ecologist who is interested in testing alternative hypotheses.

One might object that this is not a research question but a broad brush agenda for more detailed proposals that will be phrased in such a way to become scientific questions. Yet it boggles the mind to ask how vulnerable ecosystems are to anything unless one is very specific. One has to define an ecosystem, difficult if it is an open system, and then define what vulnerable means operationally, and then define what types of environmental changes should be addressed – temperature, rainfall, pollution, CO2. And all of that over the broad expanse of arctic and boreal western North America, a sampling problem on a gigantic scale. Yet an administrator or politician could reasonably ask at the end of this program, ‘Well, what is the answer to this question?’ That might be ‘quite vulnerable’, and then we could go on endlessly with meaningless questions and answers that might pass for science on Fox News but not I would hope at the ESA. We can in fact measure how primary production changes over time, how much CO2 is sequestered or released from the soils of the arctic and boreal zone, but how do we translate this into resilience, another completely undefined empirical ecological concept?

We could attack the question retrospectively by asking for example: How resilient have arctic ecosystems been to the environmental changes of the past 30 years? We can document that shrubs have increased in abundance and biomass in some areas of the arctic and boreal zone (Myers-Smith et al. 2011), but what does that mean for the ecosystem or society in particular? We could note that there are almost no data on these questions because funding for northern science has been pitiful, and that raises the issue that if these changes we are asking about occur on a time scale of 30 or 50 years, how will we ever keep monitoring them over this time frame when research is doled out in 3 and 5 year blocks?

The problem of tying together ecosystems and society is that they operate on different time scales of change. Ecosystem changes in terrestrial environments of the North are slow, societal changes are fast and driven by far more obvious pressures than ecosystem changes. The interaction of slow and fast variables is hard enough to decipher scientifically without having many external inputs.

So perhaps in the end this Arctic-Boreal Vulnerability Experiment (another misuse of the word ‘experiment’) will just describe a long-term monitoring program and provide the funding for much clever ecological research, asking specific questions about exactly what parts of what ecosystems are changing and what the mechanisms of change involve. Every food web in the North is a complex network of direct and indirect interactions, and I do not know anyone who has a reliable enough understanding to predict how vulnerable any single element of the food web is to climate change. Like medieval scholars we talk much about changes of state or regime shifts, or tipping points with a model of how the world should work, but with little long term data to even begin to answer these kinds of political questions.

My hope is that this and other programs will generate some funding that will allow ecologists to do some good science. We may be fiddling while Rome is burning, but at any rate we could perhaps understand why it is burning. That also raises the issue of whether or not understanding is a stimulus for action on items that humans can control.

Myers-Smith, I.H., et al. (2011) Expansion of canopy-forming willows over the 20th century on Herschel Island, Yukon Territory, Canada. Ambio, 40, 610-623.

Peters, R.H. (1991) A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp.