Tag Archives: hypothesis testing

Belief vs. Evidence

There is an interesting game you can play if you classify the statements you hear or read in the media or in ecological papers. The initial dichotomy is whether a statement is a BELIEF or is EVIDENCE-BASED. There is a continuum between these polar opposites, so there can easily be disagreements based on a person’s background. If I say “I believe that the earth is round”, you will recognize that this is not a simple belief but a physical fact that is evidence-based. Consequently we use the word ‘belief’ in many different ways. If I say that “Aliens from outer space are firing ray guns to cause flooding in California and Australia”, it is unlikely that you will be convinced, because there is no evidence of how this process could work.

If you listen to the media or read the news, you will hear many statements that I or we ‘believe’ that speed limits on streets should be reduced, or that certain types of firearms should be prohibited. The natural response of a scientist to such statements is to ask what evidence is available that such actions will solve the problem; if there is no evidence, we are dealing only with opinions or beliefs. If you had lived several hundred years ago, you would have been told that “malaria” was a disease caused by “bad air” coming from swamps and rivers, since there was no evidence at the time about microorganisms causing disease. So in a broad sense historical progress was made by people looking for ‘evidence’ to temper and test ‘beliefs’.

How does all this relate to ecological science? I would add the requirement that papers stating conclusions in ecology journals also state the beliefs those conclusions rely on, in addition to stating clear hypotheses and alternative hypotheses. Consider the simple case of random sampling, a basic requirement of all statistical methods. Almost no paper states what statistical population is being sampled, and when one does, the study plots are often not placed randomly. The standard excuse is that our results apply to a large biome and it is not physically possible to sample randomly, or that we get the same results whether we sample randomly or not. Whatever the excuse, we need to recognize this as a belief, or an assumption, a less damning scientific term. And if this assumption is not accepted, it is possible to sample other areas or use other methods to test whether the evidence validates the assumption. Evidence can always be improved with enough funding, and this replication is exactly what many scientists are doing daily.
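
As a toy illustration of why non-random plot placement is an assumption worth testing rather than a harmless shortcut, here is a minimal sketch in Python (my own invented numbers, not from any study): if plots are placed for convenience near roads, and biomass happens to increase with distance from roads, the convenience sample systematically misestimates the landscape mean while a random sample does not.

```python
import random

random.seed(1)

# Hypothetical landscape of 10,000 cells; biomass increases with distance from roads.
landscape = []
for _ in range(10_000):
    dist_to_road = random.uniform(0, 10)                   # km, invented scale
    biomass = 50 + 8 * dist_to_road + random.gauss(0, 10)  # invented relationship
    landscape.append((dist_to_road, biomass))

true_mean = sum(b for _, b in landscape) / len(landscape)

# Random sampling: 50 plots chosen uniformly from the whole landscape.
random_plots = random.sample(landscape, 50)
random_mean = sum(b for _, b in random_plots) / len(random_plots)

# Convenience sampling: 50 plots, all within 2 km of a road.
accessible = [cell for cell in landscape if cell[0] < 2]
convenience_plots = random.sample(accessible, 50)
convenience_mean = sum(b for _, b in convenience_plots) / len(convenience_plots)

print(f"true mean biomass         : {true_mean:6.1f}")
print(f"random-sample estimate    : {random_mean:6.1f}")       # close to the truth
print(f"convenience-plot estimate : {convenience_mean:6.1f}")  # biased low
```

The particular numbers are irrelevant; the point is that the size and direction of the bias are unknowable unless the assumption of representativeness is actually checked against a random or independent sample.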

Until recently most scientists believed that CO2 was good for plants, and so the more CO2 the better. But the evidence provided was based on simple theory and short-term lab experiments. Reich et al. (2018) and Zhu et al. (2018) showed that this was not correct when long-term studies were done on C3 plants like rice. So this is a good illustration of the progress of science from belief to evidence. And over the past 50 years it has become very clear that increased CO2 increases atmospheric temperature, with drastic climatic and biodiversity consequences (Ripple et al. 2021). The result of these scientific advances is that there is now an extensive body of research giving empirical evidence of climate change and of CO2 effects on plants and animals. Most people agree with these broad conclusions, but there are people in large corporations and governments around the world who deny them because they believe that climate change is either not happening or of little consequence to biodiversity or to daily life.

It is quite possible to ignore all the scientific literature about the consequences of climate change, CO2 increase, and biodiversity loss, but the end result of passing over these problems now will fall heavily on your children and grandchildren. The biosphere is screaming the message that ignorance will not necessarily lead to bliss.

Reich, P.B., Hobbie, S.E., Lee, T.D. & Pastore, M.A. (2018) Unexpected reversal of C3 versus C4 grass response to elevated CO2 during a 20-year field experiment. Science, 360, 317-320. doi: 10.1126/science.aas9313.

Ripple, W.J., Wolf, C., Newsome, T.M., Gregg, J.W., Lenton, T.M., Palomo, I., Eikelboom, J.A.J., Law, B.E., Huq, S., Duffy, P.B. & Rockström, J. (2021) World Scientists’ Warning of a Climate Emergency 2021. BioScience, 71, 894-898. doi: 10.1093/biosci/biab079.

Shivanna, K.R. (2022) Climate change and its impact on biodiversity and human welfare. Proceedings of the Indian National Science Academy, 88, 160-171. doi: 10.1007/s43538-022-00073-6.

Watson, R., Kundzewicz, Z.W. & Borrell-Damián, L. (2022) Covid-19, and the climate change and biodiversity emergencies. Science of The Total Environment, 844, 157188. doi: 10.1016/j.scitotenv.2022.157188.

Williams, S.E., Williams, S.E. & de la Fuente, A. (2021) Long-term changes in populations of rainforest birds in the Australia Wet Tropics bioregion: A climate-driven biodiversity emergency. PLoS ONE, 16. doi: 10.1371/journal.pone.0254307.

Zhu, C., Kobayashi, K., Loladze, I., Zhu, J. & Jiang, Q. (2018) Carbon dioxide (CO2) levels this century will alter the protein, micronutrients, and vitamin content of rice grains with potential health consequences for the poorest rice-dependent countries. Science Advances, 4, eaaq1012. doi: 10.1126/sciadv.aaq1012.

Have we moved on from Hypotheses into the New Age of Ecology?

For the last 60 years a group of Stone Age scientists like myself have preached to ecology students that one needs hypotheses to do proper science. It has always been clear that not all ecologists follow this precept, and a recent review hammers the point home (Betts et al. 2021). I have always asked my students to read the papers from the Stone Age about scientific progress – Popper (1959), Platt (1964), Peters (1991) and, even back in the Pre-Stone Age, Chamberlin (1897). Much has been said about this issue, and the Betts et al. (2021) paper pulls much of it together by reviewing papers from 1991 to 2015. Their conclusion is dismal if you think ecological science should make progress in gathering evidence: no change from 1991 to 2015, with multiple alternative hypotheses in 6% of papers, mechanistic hypotheses in 25%, descriptive hypotheses in 12%, and no hypotheses at all in 75% of papers. Why should this be after years of recommending the gold standard of multiple alternative hypotheses? Can we call ecology a science with these kinds of scores?

The simplest reason is that in the era of Big Data we do not need any hypotheses to understand populations, communities, and ecosystems: we have computers, and that is enough. I think this is a rather silly view, but one would have to interview believers to find out what they see as progress from big data in the absence of hypotheses. The second excuse might be that we cannot be bothered with hypotheses until we have a complete description of life on earth – food webs, interaction webs, diets, competitors, and so on – and that once we achieve that we will be able to put together mechanistic hypotheses rapidly. An alternative statement of this view is that we need a great deal more natural history to make any progress in ecology, that this is the era of descriptive natural history, and that this is why 75% of papers do not use the word hypothesis.

But this is all nonsense of course; try this view on a medical scientist, a physicist, an aeronautical engineer, or a farmer. The fundamental principle of science is cause and effect, or the simple view that we would like to see how things work and why they often do not work. Have your students read Romesburg (1981) for an easy introduction and then the much more analytical book by Pearl and Mackenzie (2018) to gain an understanding of the complexity behind the simple view that there is a cause and it produces an effect. Hone et al. (2023) discuss these problems with respect to improving our approach to wildlife management.

What can be done about the dismal situation described by Betts et al. (2021)? One useful recommendation for editors and reviewers would be to require every submitted paper to state clearly the hypothesis it is testing, and ideally the alternative hypotheses. There could also be ecology journals specifically for natural history where the opposite gateway is set: no use of ‘hypothesis’ in this journal. This would not solve all the Betts et al. problems, because some ecology papers are based on the experimental design of ‘do something’ and then later ‘try to invent some way to support a hypothesis’ – after-the-fact science. One problem with this type of literature survey, as Betts et al. recognized, is that papers could be testing hypotheses without using this exact word. Words like ‘proposition’, ‘thesis’, or ‘conjecture’ could camouflage thinking about alternative explanations without the actual word ‘hypothesis’.

One other suggestion for dealing with this situation might be for journal editors to disallow papers with hypotheses that are completely untestable. This type of rejection could be instructive, prompting authors to rewrite their papers to be more specific about alternative hypotheses. A set of predictions that a particular species will go extinct in 100 years, for example, is better described as a ‘possible future scenario’, ideally guided by specified mechanisms. And if your hypothesis is that ‘climate change will affect species’ geographical ranges’, you are providing a very vague inference that is difficult to test without being more specific about mechanisms, particularly if the species involved is rare.

There is a general problem with null hypotheses that state there is “no effect”. In a few cases these null hypotheses are useful, but for the most part they are very weak, and relying on them usually indicates that you have not thought enough about alternative hypotheses.

So read Platt (1964), or at least its first page, the first chapter of Popper (1959), and the Betts et al. (2021) paper, and in your research try to avoid the dilemmas they discuss, and thus help to move our science forward lest it become a repository of ‘stamp collecting’.

Betts, M.G., Hadley, A.S., Frey, D.W., Frey, S.J.K., Gannon, D., Harris, S.H., et al. (2021) When are hypotheses useful in ecology and evolution? Ecology and Evolution, 11, 5762-5776. doi: 10.1002/ece3.7365.

Chamberlin, T.C. (1897) The method of multiple working hypotheses. Journal of Geology, 5, 837-848 (reprinted in Science 148: 754-759 in 1965). doi: 10.1126/science.148.3671.754.

Hone, J., Drake, A. & Krebs, C.J. (2023) Evaluation options for wildlife management and strengthening of causal inference. BioScience, 73, 48-58. doi: 10.1093/biosci/biac105.

Pearl, J., and Mackenzie, D. 2018. The Book of Why. The New Science of Cause and Effect. Penguin, London, U.K. 432 pp. ISBN: 978-1541698963.

Peters, R.H. (1991) A Critique for Ecology. Cambridge University Press, Cambridge, England. ISBN: 0521400171.

Platt, J.R. (1964) Strong inference. Science, 146, 347-353. doi: 10.1126/science.146.3642.347.

Popper, K.R. (1959) The Logic of Scientific Discovery. Hutchinson & Co., London. ISBN: 978-041-5278-447.

Romesburg, H.C. (1981) Wildlife science: gaining reliable knowledge. Journal of Wildlife Management, 45, 293-313. doi:10.2307/3807913.

Is Ecology Becoming a Correlation Science?

One of the first lessons in Logic 101 is classically called “Post hoc, ergo propter hoc”, or in plain English, “After that, therefore because of that”. The simplest of the many examples you can see in the newspapers might be: “The ocean is warming up, salmon populations are going down, it must be another effect of climate change.” There is a great deal of literature on the problems associated with these kinds of simple inferences, going back to classics like Romesburg (1981), Cox and Wermuth (2004), Sugihara et al. (2012), and Nichols et al. (2019). My purpose here is only to remind you to examine cause and effect when you draw ecological conclusions.

My concern is partly related to news articles on ecological problems. A recent example is the collapse of the snow crab fishery in the Bering Sea off Alaska, which in the last 5 years has gone from a very large and profitable fishery interacting with a very large crab population to, at present, a closed fishery with very few snow crabs. What has happened? Where did the snow crabs go? No one really knows, but there are perhaps half a dozen ideas put forward to explain what has happened. Meanwhile the fishery and the local economy are in chaos. Without many critical data on this oceanic ecosystem we can list several factors that might be involved – climate warming of the Bering Sea, predators, overfishing, diseases, habitat disturbance from bottom-trawl fishing, natural cycles – and then recognize that we have no simple way of deciding cause and effect, and therefore of making management choices.

The simplest solution is to say that many interacting factors are involved, and many papers point to the complexity of populations, communities and ecosystems (e.g. Lidicker 1991, Holmes 1995, Howarth et al. 2014). Everyone would agree with this general idea, “the world is complex”, but the argument has always been “how do we proceed to investigate ecological processes and solve ecological problems given this complexity?” The search for generality has led mostly into replications in which ‘identical’ populations or communities behave very differently. How can we resolve this problem? A simple answer is to go back to the correlation coefficient and avoid complexity.

Having some idea of what is driving changes in ecological systems is certainly better than having no idea, but it is a problem when only one explanation is pushed without a careful consideration of alternative possibilities. The media and particularly the social media are encumbered with oversimplified views of the causes of ecological problems which receive wide approbation with little detailed consideration of alternative views. Perhaps we will always be exposed to these oversimplified views of complex problems but as scientists we should not follow in these footsteps without hard data.

What kind of data do we need in science? We must embrace the rules of causal inference, and a good start might be the books of Popper (1963) and Pearl and Mackenzie (2018), and for ecologists in particular the review of the use of surrogate variables in ecology by Barton et al. (2015). Ecologists are not going to win public respect for their science until they can avoid weak inference, minimize hand waving, and follow the accepted rules of causal inference. We cannot build a science on the simple hypothesis that the world is complicated, or by listing multiple possible causes for changes. Correlation coefficients can be a start to unravelling complexity, but only a weak one. We need better methods for resolving complex issues in ecology.

Barton, P.S., Pierson, J.C., Westgate, M.J., Lane, P.W. & Lindenmayer, D.B. (2015) Learning from clinical medicine to improve the use of surrogates in ecology. Oikos, 124, 391-398. doi: 10.1111/oik.02007.

Cox, D.R. and Wermuth, N. (2004). Causality: a statistical view. International Statistical Review 72: 285-305.

Holmes, J.C. (1995) Population regulation: a dynamic complex of interactions. Wildlife Research, 22, 11-19.

Howarth, L.M., Roberts, C.M., Thurstan, R.H. & Stewart, B.D. (2014) The unintended consequences of simplifying the sea: making the case for complexity. Fish and Fisheries, 15, 690-711. doi: 10.1111/faf.12041.

Lidicker, W.Z., Jr. (1991) In defense of a multifactor perspective in population ecology. Journal of Mammalogy, 72, 631-635.

Nichols, J.D., Kendall, W.L. & Boomer, G.S. (2019) Accumulating evidence in ecology: Once is not enough. Ecology and Evolution, 9, 13991-14004. doi: 10.1002/ece3.5836.

Pearl, J., and Mackenzie, D. 2018. The Book of Why. The New Science of Cause and Effect. Penguin, London, U.K. 432 pp. ISBN: 978-1541698963

Popper, K.R. 1963. Conjectures and Refutations: The Growth of Scientific Knowledge. Routledge and Kegan Paul, London. 608 pp. ISBN: 978-1541698963

Romesburg, H.C. (1981) Wildlife science: gaining reliable knowledge. Journal of Wildlife Management, 45, 293-313. doi: 10.2307/3807913.

Sugihara, G., et al. (2012) Detecting causality in complex ecosystems. Science, 338, 496-500. doi: 10.1126/science.1227079.

On the Meaning of ‘Food Limitation’ in Population Ecology

There are many different ecological constraints that are collected in the literature under the umbrella of ‘food limitation’ when ecologists try to explain the causes of population changes or conservation problems. ‘Sockeye salmon in British Columbia are declining in abundance because of food limitation in the ocean.’ ‘Jackrabbits in some states in the western US are increasing because climate change has increased plant growth and thus removed the limitation of their plant food supplies.’ ‘Moose numbers in western Canada are declining because their food plants have shifted their chemistry to cope with the changing climate, and the moose now suffer food limitation.’ My suggestion here is that ecologists should be careful in defining the meaning of ‘limitation’ when discussing these kinds of population changes in both rare and abundant species.

Perhaps the first principle is that food is always limiting; it is almost part of the definition of life. One does not need to do an experiment to demonstrate this truism. So to start we must agree that modern agriculture is built on the foundation that food can be improved, and that this form of ‘food limitation’ is not what ecologists interested in population changes in the real world are trying to test. The key to explaining population differences must come from resource differences in the broad sense – not food alone but a host of other ecological causal factors that may produce changes in birth and death rates in populations.

‘Limitation’ can be used in a spatial or a temporal context. Deer mice can differ in average density between two forest types, and this spatial problem would have to be investigated as a search for the several possible mechanisms that could be behind the observation. Often this is passed off too easily by saying that “resources” are limiting in the poorer habitat, but this statement takes us no closer to understanding what the exact food resources are. If food resources, carefully defined, are limiting density in the ‘poorer’ habitat, this would be a good example of food limitation in a spatial sense. By contrast, if a single population increases in one year and declines in the next because its food supply has changed, this would be an example of food limitation in a temporal sense.

The more difficult issue now becomes what evidence you have that food is limiting in either time or space. Growth in body size in vertebrates is one clear indirect indicator, but we need to know exactly what food resources are limiting. The temptation is to use feeding experiments to test for food limitation (reviewed in Boutin 1990). Feeding experiments are simple in the lab, but not in the field. Feeding an open population can lead to immigration, so if your response variable is population density, you have an indirect effect of feeding. If animals in the experimentally fed area grow faster or have a higher reproductive output, you have evidence of a positive effect of the feeding treatment, and you can claim ‘food limitation’ for these specific variables. If population density increases on your feeding area relative to unfed controls, you can also claim ‘food limitation of density’. The problems come when you consider the temporal dimension of seasonal or annual effects. If population density falls while you are still feeding in season 2 or year 2, then food limitation of density is absent, and the change must have been produced by higher mortality or higher emigration in season 2.

Food resources could be limiting because of predator avoidance (Brown and Kotler 2007). The ecology of fear from predation has blossomed into a very large literature that explores the non-consumptive effects of predators on prey foraging that can lead to food limitation without food resources being in short supply (e.g., Peers et al. 2018, Allen et al. 2022).

All of this seems terribly obvious, but the key point is that when you examine the literature on “food limitation” you should look at the evidence and the experimental design. Ecologists, like medical doctors, at times have a long list of explanations designed to soothe the soul without providing good evidence of what exact mechanism is operating. Economists are near the top with this distinguished approach, exceeded only by politicians, who have an even greater art of explaining changes after the fact with limited evidence.

As a footnote to defining this problem of food limitation, you should read Boutin (1990). I have also raved on about this topic in Chapter 8 of my 2013 book on rodent populations if you want more details.

Allen, M.C., Clinchy, M. & Zanette, L.Y. (2022) Fear of predators in free-living wildlife reduces population growth over generations. Proceedings of the National Academy of Sciences (PNAS), 119, e2112404119. doi: 10.1073/pnas.2112404119.

Boutin, S. (1990). Food supplementation experiments with terrestrial vertebrates: patterns, problems, and the future. Canadian Journal of Zoology 68(2): 203-220. doi: 10.1139/z90-031.

Brown, J.S. & Kotler, B.P. (2007) Foraging and the ecology of fear. In Foraging: Behaviour and Ecology (eds. D.W. Stephens, J.S. Brown & R.C. Ydenberg), pp. 437-448. University of Chicago Press, Chicago. ISBN: 9780226772646.

Krebs, C.J. (2013) Chapter 8, The Food Hypothesis. In Population Fluctuations in Rodents. University of Chicago Press, Chicago. ISBN: 978-0-226-01035-9

On How Genomics will not solve Ecological Problems

I am responding to a statement in an article in The Conversation by Anne Murgai on April 19, 2022 (https://phys.org/news/2022-04-african-scientists-genes-species.html#google_vignette). The opening sentence of her article on genomics encapsulates one of the problems of conservation biology today:

“DNA is the blueprint of life. All the information that an organism needs to survive, reproduce, adapt to environments or survive a disease is in its DNA. That is why genomics is so important.”

If this is literally correct, almost all of ecological science should disappear, and our efforts to analyse changes in geographic distributions, abundance, survival and reproductive rates, competition with other organisms, wildlife diseases, conservation of rare species and all things that we discuss in our ecology journals are epiphenomena, and thus our slow progress in sorting out these ecological issues is solely because we have not yet sequenced all our species to find the answers to everything in their DNA.

This is of course not correct, and the statement quoted above is a great exaggeration. But if it is believed to be correct, it has some important consequences for scientific funding. I will confine my remarks to the fields of conservation and ecology. The first and most important is that belief in this view of genetic determinism is having large effects on where conservation funding goes. Genomics has been a rising star in biological science for the past 2 decades because of technological advances in sequencing DNA. As such, given a fixed budget, it is taking money away from more traditional approaches to conservation such as setting up protected areas and understanding the demography of declining populations. Hausdorf (2021) explores these conflicting problems in an excellent review, and he concludes that more cost-effective methods of conservation should often be prioritized over genomic analyses. Examples abound of conservation problems that are immediate and typically underfunded (e.g., Turner et al. 2021, Silva et al. 2021).

What is the resolution of these issues? I can only recommend that those in charge of dispensing funding for conservation science examine the hypotheses being tested and avoid endless funding for descriptive genomics projects that claim a potential and immediate outcome that will advance the main objectives of conservation. Certainly, some genomic projects will fit into this desirable science category, but many will not, and the money should be directed elsewhere.

The Genomics Paradigm quoted above is also used in the literature of medicine and social science, and a good critique of this view from a human perspective is given in a review by Feldman and Riskin (2022). Studies of human breast cancer and schizophrenia show the partial but limited importance of DNA in determining the cause or onset of these complex conditions (e.g., Hilker et al. 2018, Manobharathi et al. 2021). Conservation problems are equally complex, and in the climate emergency they have a short time frame for action. I suspect that genomics, for all its strengths, will have only a minor part to play in the resolution of ecological problems and conservation crises in the coming years.

Feldman, Marcus W. and Riskin, Jessica (2022). Why Biology is not Destiny. The New York Review of Books 69 (April 21, 2022), 43-46.

Hausdorf, Bernhard (2021). A holistic perspective on species conservation. Biological Conservation 264, 109375. doi: 10.1016/j.biocon.2021.109375.

Hilker, R., Helenius, D., Fagerlund, B., Skytthe, A., Christensen, K., Werge, T.M., Nordentoft, M., and Glenthøj, B. (2018). Heritability of Schizophrenia and Schizophrenia Spectrum based on the Nationwide Danish Twin Register. Biological Psychiatry 83, 492-498. doi: 10.1016/j.biopsych.2017.08.017.

Manobharathi, V., Kalaiyarasi, D., and Mirunalini, S. (2021). A concise critique on breast cancer: A historical and scientific perspective. Research Journal of Biotechnology 16, 220-230.

Samuel, G. N. and Farsides, B. (2018). Public trust and ‘ethics review’ as a commodity: the case of Genomics England Limited and the UK’s 100,000 genomes project. Medicine, Health Care, and Philosophy 21, 159-168. doi: 10.1007/s11019-017-9810-1.

Silva, F., Kalapothakis, E., Silva, L., and Pelicice, F. (2021). The sum of multiple human stressors and weak management as a threat for migratory fish. Biological Conservation 264, 109392. doi: 10.1016/j.biocon.2021.109392.

Turner, A., Wassens, S., and Heard, G. (2021). Chytrid infection dynamics in frog populations from climatically disparate regions. Biological Conservation 264, 109391. doi: 10.1016/j.biocon.2021.109391.

On Replication in Ecology

All statistics books recommend replication in scientific studies. I suggest that this recommendation has been carried to an extreme in current ecological studies. In approximately 50% of the ecological papers I read in our best journals (a biased sample to be sure) the results of the study are not new and have been replicated many times in the past, often in papers not cited in the ‘new’ paper. There is no harm in this happening, but it does not lead to progress in our understanding of populations, communities or ecosystems, or to new ecological theory. We do need replication examining the major ideas in ecology, and this is good. On the other hand, we do not need more and more studies of what we might call ecological truths. An analogy would be to test the Flat Earth Hypothesis in 2022 to examine its predictions. It is time to move on.

There is an extensive literature on hypothesis testing which can be crudely summarized as follows: “Observations of X” can be explained by hypothesis A, B, or C, each of which has unique predictions associated with it. A series of experiments is carried out to test these predictions, and the most strongly supported hypothesis, call it B*, is accepted as current knowledge. Explanation B* is useful scientifically only if it leads to a new set of predictions D, E, and F which are then tested. This chain of explanation is never simple. There can be much disagreement, which may mean sharpening the hypotheses that follow from Explanation B*. At the same time there will be some scientists who, despite all the accumulated data, still accept the Flat Earth Hypothesis. If you think this is nonsense, you have not been reading the news about the Covid epidemic.

Further complications arise from two streams of thought. The first is that the way forward is via simple mathematical models to represent the system. There is much literature on modelling in ecology which is most useful when it is based on good field data, but for too many ecological problems the model is believed more than the data, and the assumptions of the models are not stated or tested. If you think that models lead directly to progress, examine again the Covid modelling situation in the past 2 years. The second stream of thought that complicates ecological science is that of descriptive ecology. Many of the papers in the current literature describe a current set of data or events with no hypothesis in mind. The major offenders are the biodiversity scientists and the ‘measure everything’ scientists. The basis of this approach seems to be that all our data will be of major use in 50, 100 or whatever years, so we must collect major archives of ecological data. Biodiversity is the bandwagon of the present time, and it is a most useful endeavour to classify and categorise species. As such it leads to much natural history that is interesting and important for many non-scientists. And almost everyone would agree that we should protect biodiversity. But while biodiversity studies are a necessary background to ecological studies, they do not lead to progress in the scientific understanding of the ecosphere.

Conservation biology is closely associated with biodiversity science, but it suffers even more from the problems outlined above. Conservation is important for everyone, but the current cascade of papers in conservation biology is too often of little use. We do not need opinion pieces; we need clear thinking and concrete data to solve conservation issues. This is not easy, since once a species is endangered there are typically too few individuals left to study properly. And like the rest of ecological science, funding is so poor that reliable data cannot be obtained, and we are left with more unvalidated indices of, or opinions on, species changes. Climate change puts an enormous kink in any conservation recommendations, but on the other hand it serves as a panchrestron, a universal explanation for every possible change that occurs in ecosystems, and thus can be used to justify every research agenda, good or poor, with spurious correlations.

We could advance our ecological understanding more rapidly by demanding a coherent theoretical framework for all proposed programs of research. Grace (2019) argues that plant ecology has made much progress during the last 80 years, in contrast to the less positive overview of Peters (1991) or my observations outlined above. Prosser (2020) provides a critique for microbial ecology that echoes what Peters argued in 1991. All these divergences of opinion would be worthy of a graduate seminar discussion.

If you think all my observations are nonsense, then you should read the perceptive book by Peters (1991) written 30 years ago on the state of ecological science as well as the insightful evaluation of this book by Grace (2019) and the excellent overview of these questions in Currie (2019).  I suggest that many of the issues Peters (1991) raised are with us in 2022, and his general conclusion that ecology is a weak science rather than a strong one still stands. We should celebrate the increases in ecological understanding that have been achieved, but we could advance the science more rapidly by demanding more rigor in what we publish.

Currie, D.J. (2019). Where Newton might have taken ecology. Global Ecology and Biogeography 28, 18-27. doi: 10.1111/geb.12842.

Grace, John (2019). Has ecology grown up? Plant Ecology & Diversity 12, 387-405. doi: 10.1080/17550874.2019.1638464.

Peters, R.H. (1991) ‘A Critique for Ecology.’ (Cambridge University Press: Cambridge, England.). 366 pages. ISBN: 0521400171

Prosser, J.I. (2020). Putting science back into microbial ecology: a question of approach. Philosophical Transactions of the Royal Society. Biological sciences 375, 20190240. doi: 10.1098/rstb.2019.0240.

On the Canadian Biodiversity Observation Network (CAN BON)

I have been reading the report of an exploratory workshop held in July 2021 on designing a biodiversity monitoring network across Canada to address priority monitoring gaps and engage Indigenous people. The 34-page workshop report can be accessed at the link below, and I recommend you read it before reading my comments on the report:

https://www.nserc-crsng.gc.ca/Media-Media/NewsDetail-DetailNouvelles_eng.asp?ID=1310

I have a few comments on this report that are my opinion only. I think the report on this workshop outlines a plan so grand and misguided that it could not be achieved in this century, even with a military budget. The report is a statement of wisdom put together with platitudes. Why is this, and what are the details that I believe to be unachievable?

The major goal of the proposed network is to bring together everyone to improve biodiversity monitoring and address the highest priority gaps to support biodiversity conservation. I think most of the people of Canada would support these objectives, but what does it mean? Let us do a thought experiment. Suppose at this instant in time we knew the distribution and the exact abundance of every species in Canada. What would we know, what could we manage, what good would all these data be except as a list taking up terabytes of data? If we had these data for several years and the numbers or biomass were changing, what could we do? Is all well in our ecosystems or not? What are we trying to maximize when we have no idea of the mechanisms of change? Contrast these concerns about biodiversity with the energy and resources applied in medicine to the mortality of humans infected with Covid viruses in the last 3 years. A monumental effort to examine the mechanisms of infection and ways of preventing illness, with a clear goal and clear measures of progress toward that goal.

There is no difficulty in putting out “dream” reports, and biologists, as well as physicists, astronomers, and social scientists, have been doing this for years. But in my opinion this report is a dream too far, and I give you a few reasons why.

First, we have no clear definition of biodiversity except that it includes everything living, so if we are going to monitor biodiversity what exactly should we do? For some of us monitoring caribou and wolves would be a sufficient program, or whales in the arctic, or plant species in peat bogs. So, to begin with we have to say what operationally we would define as the biodiversity we wish to monitor. We could put all our energy into a single group of species like birds and claim that these are the signal species to monitor for ecosystem integrity. Or should we consider only the COSEWIC list of Threatened or Endangered Species in Canada as our major monitoring concern? So, the first job of CAN BON must be to make a list of what the observation network is supposed to observe (Lindenmayer 2018). There is absolutely no agreement on that simple question within Canada now, and without it we cannot move forward to make an effective network.

The second issue that I take with the existing report is that the emphasis is on observations, and then the question is what problems will be solved by observation alone. The advance of ecological science has been based on observation and experiment directed to specific questions either of ecological interest or of economic interest. In the Pacific salmon fishery for example the objective of observation is to predict escapement and thus allowable harvest quotas. Despite years of high-quality observations and experiments, we are still a long way from understanding the ecosystem dynamics that drive Pacific salmon reproduction and survival.

Contrast the salmon problem with the caribou problem. We have a reasonably good understanding of why caribou populations are declining or not, based on many studies of predator-prey dynamics, harvesting, and habitat management. At present the southern populations of caribou are disappearing because of habitat loss driven by land use for forestry and mining, and the interacting nexus of factors is well understood. What we do not do as a society is put these ideas into practice for conservation; forestry, for example, is given priority in land use for economic reasons, and the caribou populations at risk suffer. Once ecological knowledge is well defined, it does not lead automatically to the action that biodiversity scientists would like. Climate change is the elephant in the room for many of our ecological problems, but it is simultaneously easy to blame and uneven in its effects.

The third problem is funding, and this overwhelms the objectives of the Network. Ecological funding in general in Canada is a disgrace, yet we achieve much with little money. If this is ever to change, it will require major public input and changed governmental objectives, neither of which is under our immediate control. One way to press this objective forward is to produce a list of the most serious biodiversity problems facing Canada now, along with suggestions for their resolution. There is no simple way to develop this list. A by-product of the current funding system in Canada is the shelling out of peanuts to a wide range of investigators whose main job becomes jockeying for the limited funds by overpromising results. Coordination is rare, partly because funding is low. So (for example) I can work only on the tree ecology of the boreal forest because I am not able to expand my studies to include the shrubs, the ground vegetation, the herbivores, and the insect pests, not to mention the moose and the caribou.

For these reasons, and many more that could be drawn from the CAN BON report, I suggest that to proceed further here is a plan:

  1. Make a list of the 10 or 15 most important questions for biodiversity science in Canada. This alone would be a major achievement.
  2. Establish subgroups organized around each of these questions who can then self-organize to discuss plans for observations and experiments designed to answer the question. Vague objectives are not sufficient. An established measure of progress is essential.
  3. Request a realistic budget and a time frame for achieving these goals from each group.  Find out what the physicists, astronomers, and medical programs deem to be suitable budgets for achieving their goals.
  4. Organize a second CAN BON conference of a small number of scientists to discuss these specific proposals. Any subgroup can participate at this level, but some decisions must be made for the overall objectives of biodiversity conservation in Canada.

These general ideas are not particularly new (Likens 1989, Lindenmayer et al. 2018). They have evolved from the setting up of the LTER Program in the USA (Hobbie et al. 2003), and they are standard operating procedure for astronomers, who need to come together with big ideas asking for big money. None of this will be easy to achieve for biodiversity conservation because it requires the wisdom of Solomon and the determination of Vladimir Putin.

Hobbie, J.E., Carpenter, S.R., Grimm, N.B., Gosz, J.R., and Seastedt, T.R. (2003). The US Long Term Ecological Research Program. BioScience 53, 21-32.

Likens, G. E. (Ed.) (1989). ‘Long-term Studies in Ecology: Approaches and Alternatives.’ (Springer Verlag: New York.) ISBN: 0387967435

Lindenmayer, D. (2018). Why is long-term ecological research and monitoring so hard to do? (And what can be done about it). Australian Zoologist 39, 576-580. doi: 10.7882/az.2017.018.

Lindenmayer, D.B., Likens, G.E., and Franklin, J.F. (2018). Earth Observation Networks (EONs): Finding the Right Balance. Trends in Ecology & Evolution 33, 1-3. doi: 10.1016/j.tree.2017.10.008.

Blaming Climate Change for Ecological Changes

The buzzword in all ecological funding applications and in many submitted papers is climate change. Since the rate of climate change is not something ecologists can control, there are only two reasons to cite climate change as a justification for current ecological research. First, since change is continuous in communities and ecosystems, it would be desirable to determine how many of the observed changes might be caused by climate change. Second, it might be desirable to measure the rate of change in ecosystems, correlate these changes with some climate variable, and then use these data as a political and social tool to stimulate politicians to do something about greenhouse gas emissions. The second approach is the one taken by climatologists who blame hurricanes and tornadoes on global warming. There is no experimental way to trace any particular hurricane to a particular amount of global warming, so it is easy for critics to say these are just examples of weather variation, of which we have measured much over the last 150 years and which paleo-ecologists have traced over tens of thousands of years using proxies from tree rings and sediment cores. If we are to use the statistical approach, we need a large enough sample to argue that extreme events are becoming more frequent, and that might take 50 years, by which time the argument would come too late to prompt proper action.
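
To make the sample-size point concrete, here is a rough Monte Carlo sketch in Python (my own invented rates, so only the shape of the answer matters): suppose extreme events historically occurred about once a year on average and the rate has now risen to 1.5 per year, and ask how long two records must be before a simple one-sided comparison of the counts reliably detects the increase.

```python
import numpy as np
from math import comb

rng = np.random.default_rng(2)

def detection_prob(years_per_period, rate0=1.0, rate1=1.5, alpha=0.05, nsim=2000):
    """Monte Carlo power of a one-sided conditional test comparing Poisson
    counts of extreme events in two periods of equal length (invented rates)."""
    hits = 0
    for _ in range(nsim):
        n0 = int(rng.poisson(rate0 * years_per_period))   # 'historical' period
        n1 = int(rng.poisson(rate1 * years_per_period))   # 'recent' period
        total = n0 + n1
        if total == 0:
            continue
        # Under H0 (equal rates), n1 given the total is Binomial(total, 0.5).
        p = sum(comb(total, k) * 0.5**total for k in range(n1, total + 1))
        if p < alpha:
            hits += 1
    return hits / nsim

for years in (10, 25, 50):
    print(f"{years:3d} years per period: detection probability ~ {detection_prob(years):.2f}")
```

With these made-up rates the increase is usually missed with 10-year records and is still far from certain with 50 years per period, which is the waiting-time problem described above.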

The second approach to prediction in ecology is fraught with problems, as outlined in Berteaux et al. (2006) and Dietze (2017). The first approach also has many statistical problems in selecting a biologically coherent model that can be tested in a standard scientific manner. Since there are a very large number of climate variables, the possibility of spurious correlations is high, and the only way to avoid these kinds of results is to be predictive and to have a biological causal chain that is testable. Myers (1998) reviewed fishery models of juvenile recruitment that used environmental variables as predictors and for which data were subsequently collected and tested against the published model. The vast majority of these aquatic models failed when retested, but a few were very successful. The general problem is that model failures or successes might not be published, so even this approach can be biased if only a literature survey is undertaken. The take-home message from Myers (1998) was that almost none of the recruitment-environment correlations were being used in actual fishery management.
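
A caricature of why such retests fail so often, using entirely simulated data (nothing below comes from Myers or any real fishery): screen many candidate environmental variables against a recruitment series that in truth has no environmental signal, keep the best correlate, and then test that same variable against new years of data. The selected correlation looks convincing; the retest does not.

```python
import numpy as np

rng = np.random.default_rng(3)

n_train, n_test, n_candidates = 20, 20, 30   # years and candidate variables (invented)

# Recruitment series with no real environmental signal at all.
recruit_train = rng.normal(size=n_train)
recruit_test = rng.normal(size=n_test)

# Candidate environmental variables, also pure noise.
env_train = rng.normal(size=(n_candidates, n_train))
env_test = rng.normal(size=(n_candidates, n_test))

# Step 1: screen all candidates and keep the strongest correlate.
r_train = np.array([np.corrcoef(recruit_train, env_train[i])[0, 1]
                    for i in range(n_candidates)])
best = int(np.argmax(np.abs(r_train)))

# Step 2: retest the selected variable on the later years.
r_retest = np.corrcoef(recruit_test, env_test[best])[0, 1]

print(f"best correlation in the original data: r = {r_train[best]:+.2f}")
print(f"same variable retested on new data   : r = {r_retest:+.2f}")
```

The screening step selects for luck; the retest does not, which is why publishing only the first step is so misleading.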

How much would this conclusion about the failure of environmental models in fishery management apply to other areas in ecology? Mouquet et al. (2015) pointed out that predictions can be classified as ‘explanatory’ or ‘anticipatory’, and that “While explanatory predictions are necessarily testable, anticipatory predictions need not be… In summary, anticipatory predictions differ from explanatory predictions in that they do not aim at testing models and theory. They rely on the assumption that underlying hypotheses are valid while explanatory predictions are based on hypotheses to be tested. Anticipatory predictions are also not necessarily supposed to be true.” (page 1296). If we accept these distinctions, we have (I think) a major problem in that many of the predictive models put forward in the ecological literature are anticipatory, so they would be of little use to a natural resource manager who requires an explanatory model.

If we ignore this problem with anticipatory predictions, we can concentrate on explanatory predictions that are useful to managers. One major set of explanatory predictions in ecology are those associated with range changes in relation to climate change. Cahill et al. (2014) examined the conventional hypothesis that warm-edge range limits are set by biotic interactions rather than abiotic interactions. Contrary to expectations, they found in 125 studies that abiotic factors were more frequently supported as setting warm-edge range limits. Clearly a major paradigm about warm-edge range limits is of limited utility.

Explanatory predictions are not always explicit. Mauck et al. (2018), for example, developed a climate model to predict reproductive success in Leach’s storm petrel on an island off New Brunswick in eastern Canada. From 56 years of hatching success they concluded that annual global mean temperature during the spring breeding season was the single most important predictor of breeding success. They considered only a few measures of temperature as predictor variables and found that a quadratic form of annual global mean temperature was the best variable to describe the changes in breeding success. The paper speculates about how global or regional mean temperature could possibly be an ecological predictor of breeding success, but no mechanisms are specified. The actual data on breeding success are not provided in the paper, even as a temporal plot. Since global temperatures rose steadily from 1955 to 2010, any population parameter with a temporal trend would correlate with the temperature record. The critical quadratic relationship in their analysis suggests that a tipping point was reached in 1988, when hatching success began to decline. Whether or not this is a biologically correct explanatory model can be determined by additional data gathered in future years. But it would be more useful to find out what the exact ecological mechanisms are.
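
The point about trending variables is easy to demonstrate with fabricated data (none of the numbers below come from Mauck et al.; the slopes and noise levels are invented): two series that each simply drift with time correlate strongly even though they are generated independently, and removing the shared time trend makes the apparent relationship vanish.

```python
import numpy as np

rng = np.random.default_rng(4)

years = np.arange(1955, 2011)

# A rising 'global mean temperature' series (invented trend and noise).
temperature = 0.012 * (years - 1955) + rng.normal(0, 0.08, years.size)

# A breeding-success series that declines for entirely unrelated reasons.
breeding_success = 0.75 - 0.004 * (years - 1955) + rng.normal(0, 0.03, years.size)

r = np.corrcoef(temperature, breeding_success)[0, 1]
print(f"raw correlation with temperature    : r = {r:+.2f}")   # strongly negative

def detrend(y, x):
    """Remove a linear time trend and return the residuals."""
    slope, intercept = np.polyfit(x, y, 1)
    return y - (slope * x + intercept)

r_detrended = np.corrcoef(detrend(temperature, years),
                          detrend(breeding_success, years))[0, 1]
print(f"after removing the shared time trend: r = {r_detrended:+.2f}")  # near zero
```

A strong correlation with a temperature record is therefore weak evidence of mechanism whenever both variables carry a time trend.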

If the ecological world is going to hell in a handbasket, and temperatures however measured are going up, we can certainly construct a plethora of models to describe the collapse of many species and the rise of others. But this is hardly progress and would appear to be anticipatory predictions of little use to advancing ecological science, as Guthery et al. (2005) pointed out long ago. Someone ought to review and evaluate the utility of AIC methods as they are currently being used in ecological and conservation science for predictions.

Berteaux, D., Humphries, M.M., Krebs, C.J., Lima, M., McAdam, A.G., Pettorelli, N., Reale, D., Saitoh, T., Tkadlec, E., Weladji, R.B., and Stenseth, N.C. (2006). Constraints to projecting the effects of climate change on mammals. Climate Research 32, 151-158. doi: 10.3354/cr032151.

Cahill, A.E., Aiello-Lammens, M.E., Fisher-Reid, M.C., Hua, X., and Karanewsky, C.J. (2014). Causes of warm-edge range limits: systematic review, proximate factors and implications for climate change. Journal of Biogeography 41, 429-442. doi: 10.1111/jbi.12231.

Dietze, M.C. (2017). Prediction in ecology: a first-principles framework. Ecological Applications 27, 2048-2060. doi: 10.1002/eap.1589.

Guthery, F.S., Brennan, L.A., Peterson, M.J., and Lusk, J.J. (2005). Information theory in wildlife science: Critique and viewpoint. Journal of Wildlife Management 69, 457-465. doi: 10.1890/04-0645.

Mauck, R.A., Dearborn, D.C., and Huntington, C.E. (2018). Annual global mean temperature explains reproductive success in a marine vertebrate from 1955 to 2010. Global Change Biology 24, 1599-1613. doi: 10.1111/gcb.13982.

Mouquet, N., Lagadeuc, Y., Devictor, V., Doyen, L., and Duputie, A. (2015). Predictive ecology in a changing world. Journal of Applied Ecology 52, 1293-1310. doi: 10.1111/1365-2664.12482.

Myers, R.A. (1998). When do environment-recruitment correlations work? Reviews in Fish Biology and Fisheries 8, 285-305. doi: 10.1023/A:1008828730759.

On Questionable Research Practices

Ecologists and evolutionary biologists are tarred and feathered along with many scientists who are guilty of questionable research practices. So says this article in “The Conversation” on the web:
https://theconversation.com/our-survey-found-questionable-research-practices-by-ecologists-and-biologists-heres-what-that-means-94421?utm_source=twitter&utm_medium=twitterbutton

Read this article if you have time but here is the essence of what they state:

“Cherry picking or hiding results, excluding data to meet statistical thresholds and presenting unexpected findings as though they were predicted all along – these are just some of the “questionable research practices” implicated in the replication crisis psychology and medicine have faced over the last half a decade or so.

“We recently surveyed more than 800 ecologists and evolutionary biologists and found high rates of many of these practices. We believe this to be first documentation of these behaviours in these fields of science.

“Our pre-print results have certain shock value, and their release attracted a lot of attention on social media.

  • 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking)
  • 42% had collected more data after inspecting whether results were statistically significant (a form of “p hacking”)
  • 51% reported an unexpected finding as though it had been hypothesised from the start (known as “HARKing”, or Hypothesising After Results are Known).”

It is worth looking at these claims a bit more analytically. First, the fact that more than 800 ecologists and evolutionary biologists were surveyed tells you nothing about the precision of these results unless you can be convinced this is a random sample. Most surveys are non-random and yet are reported as though they are a random, reliable sample.

Failing to report results is common in science for a variety of reasons that have nothing to do with questionable research practices. Many graduate theses contain results that are never published. Does this mean their data are being hidden? Many results are not reported because they did not find an expected result. This sounds awful until you realize that journals often turn down papers because they are not exciting enough, even though the results are completely reliable. Other results are not reported because the investigator realized once the study is complete that it was not carried on long enough, and the money has run out to do more research. One would have to have considerable detail about each study to know whether or not these 64% of researchers were “cherry picking”.

Alas, the next problem is more serious. The 42% who are accused of “p-hacking” were possibly just using sequential sampling, or using a pilot study to get the statistical parameters needed to conduct a power analysis. Any study which uses replication in time, a highly desirable attribute of an ecological study, would be vilified by this rule. This complaint echoes the statistical advice not to use p-values at all (Ioannidis 2005, Bruns and Ioannidis 2016) and refers back to complaints about inappropriate uses of statistical inference (Amrhein et al. 2017, Forstmeier et al. 2017). The appropriate solution to this problem is to have a defined experimental design with specified hypotheses and predictions rather than an open-ended observational study.
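
The distinction matters because uncorrected ‘testing as you go’ really does inflate error rates, whereas a design planned in advance (or a genuine sequential procedure with adjusted thresholds) does not. Here is a small simulation of my own, not from the survey, showing the difference when the true effect is zero:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(5)
nsim, alpha = 5000, 0.05
fixed_hits = peeking_hits = 0

for _ in range(nsim):
    # Two groups with identical means: any 'effect' detected is a false positive.
    a = rng.normal(size=40)
    b = rng.normal(size=40)

    # (a) Fixed design: a single test at the planned sample size.
    if stats.ttest_ind(a, b).pvalue < alpha:
        fixed_hits += 1

    # (b) Peeking: test after every 10 observations per group and stop as soon
    #     as p < 0.05 -- adding data because the result was not yet significant,
    #     with no correction for the repeated looks.
    for n in (10, 20, 30, 40):
        if stats.ttest_ind(a[:n], b[:n]).pvalue < alpha:
            peeking_hits += 1
            break

print(f"false-positive rate, fixed design     : {fixed_hits / nsim:.3f}")   # about 0.05
print(f"false-positive rate, uncorrected peeks: {peeking_hits / nsim:.3f}") # noticeably higher
```

A study that planned its interim looks in advance and adjusted its thresholds would be doing sequential analysis, not p-hacking; a one-line survey question cannot distinguish the two.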

The third problem, about unexpected findings, hits at an important aspect of science: the uncovering of interesting and important new results. It is an important point and was warned about long ago by Medawar (1963) and emphasized recently by Forstmeier et al. (2017). The general solution should be that novel results in science must be considered tentative until they can be replicated, so that science becomes a self-correcting process. But the temptation to emphasize a new result is hard to restrain in an era of difficult job searches and media attention to novelty. Perhaps the message is that you should read any “unexpected findings” in Science and Nature with a degree of skepticism.

The cited article published in “The Conversation” goes on to discuss some possible interpretations of what these survey results mean. And the authors lean over backwards to indicate that these survey results do not mean that we should not trust the conclusions of science, which unfortunately is exactly what some aspects of the public media have emphasized. Distrust of science can be a justification for rejecting climate change data and rejecting the value of immunizations against diseases. In an era of declining trust in science, these kinds of trivial surveys have shock value but are of little use to scientists trying to sort out the details about how ecological and evolutionary systems operate.

A significant source of these concerns flows from the literature on medical fads and ‘breakthroughs’ that are announced every day by media searching for ‘news’ (e.g. “eat butter”, “do not eat butter”). The result is an almost comical model of how good scientists really operate. An essential assumption of science is that scientific results are not written in stone but are always subject to additional testing and modification or rejection. But one result is that we get a parody of science that says “you can’t trust anything you read” (e.g. Ashcroft 2017). Perhaps we just need to remind ourselves to be critical, that good science is evidence-based, and then remember George Bernard Shaw’s comment:

Success does not consist in never making mistakes but in never making the same one a second time.

Amrhein, V., Korner-Nievergelt, F., and Roth, T. 2017. The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research. PeerJ  5: e3544. doi: 10.7717/peerj.3544.

Ashcroft, A. 2017. The politics of research-Or why you can’t trust anything you read, including this article! Psychotherapy and Politics International 15(3): e1425. doi: 10.1002/ppi.1425.

Bruns, S.B., and Ioannidis, J.P.A. 2016. p-Curve and p-Hacking in observational research. PLoS ONE 11(2): e0149144. doi: 10.1371/journal.pone.0149144.

Forstmeier, W., Wagenmakers, E.-J., and Parker, T.H. 2017. Detecting and avoiding likely false-positive findings – a practical guide. Biological Reviews 92(4): 1941-1968. doi: 10.1111/brv.12315.

Ioannidis, J.P.A. 2005. Why most published research findings are false. PLOS Medicine 2(8): e124. doi: 10.1371/journal.pmed.0020124.

Medawar, P.B. 1963. Is the scientific paper a fraud? In The Threat and the Glory. Edited by P.B. Medawar. Harper Collins, New York. pp. 228-233. ISBN 978-0-06-039112-6.

On Caribou and Hypothesis Testing

Mountain caribou populations in western Canada have been declining for the past 10-20 years and concern has mounted to the point where extinction of many populations could be imminent, and the Canadian federal government is asking why this has occurred. This conservation issue has supported a host of field studies to determine what the threatening processes are and what we can do about them. A recent excellent summary of experimental studies in British Columbia (Serrouya et al. 2017) has stimulated me to examine this caribou crisis as an illustration of the art of hypothesis testing in field ecology. We teach all our students to specify hypotheses and alternative hypotheses as the first step to solving problems in population ecology, so here is a good example to start with.

From the abstract of this paper, here is a statement of the problem and the major hypothesis:

“The expansion of moose into southern British Columbia caused the decline and extirpation of woodland caribou due to their shared predators, a process commonly referred to as apparent competition. Using an adaptive management experiment, we tested the hypothesis that reducing moose to historic levels would reduce apparent competition and therefore recover caribou populations. “

So the first observation we might make is that much is left out of this approach to the problem. Populations can decline because of habitat loss, food shortage, excessive hunting, predation, parasitism, disease, severe weather, or inbreeding depression. In this case much background research has narrowed the field to focus on predation as a major limitation, so we can begin our search by focusing on the predation factor (review in Boutin and Merrill 2016). In particular Serrouya et al. (2017) focused their studies on the nexus of moose, wolves, and caribou and the supposition that wolves feed preferentially on moose and only secondarily on caribou, so that if moose numbers are lower, wolf numbers will be lower and incidental kills of caribou will be reduced. So they proposed two very specific hypotheses – that wolves are limited by moose abundance, and that caribou are limited by wolf predation. The experiment proposed and carried out was relatively simple in concept: kill moose by allowing more hunting in certain areas and measure the changes in wolf numbers and caribou numbers.

The experimental area contained 3 small herds of caribou (50 to 150 animals) and the unmanipulated area contained 2 herds (20 and 120 animals) when the study began in 2003. The extended hunting worked well, and moose in the experimental area were reduced from about 1600 animals down to about 500 over the period from 2003 to 2014. Wolf numbers in the experimental area declined by about half over the experimental period because of dispersal out of the area and some starvation within the area. So the two necessary conditions of the experiment were satisfied – moose numbers declined by about two-thirds from additional hunting and wolf numbers declined by about half on the experimental area. But the caribou populations on the experimental area showed mixed results, with one herd increasing slightly in numbers and the other two declining slightly. On the unmanipulated area both caribou populations showed a continuing slow decline. On the positive side, the survival rate of adult caribou was higher on the experimental area, suggesting that the treatment hypothesis was correct.

From the viewpoint of caribou conservation, the experiment failed to change the caribou populations from continuous slow decline to the rapid increase needed to recover them to their former abundance. At best it could be argued that this particular experiment slowed the rate of caribou decline. Why might this be? We can make a list of possibilities:

  1. Moose numbers on the experimental area were not reduced enough (say to 300 rather than the 500 achieved). Lower moose numbers would have meant much lower wolf numbers.
  2. Small caribou populations are nearly impossible to recover because of chance events that affect small numbers. A few wolves or bears or cougars could be making all the difference to populations numbering 10-20 individuals.
  3. The experimental area and the unmanipulated area were not assigned treatments at random. This would mean to a pure statistician that you cannot make statistical comparisons between these two areas.
  4. The general hypothesis being tested is wrong, and predation by wolves is not the major limiting factor for mountain caribou populations. Many factors are involved in caribou declines, and we cannot determine what they are because they change from area to area and year to year.
  5. It is impossible to do these landscape experiments because for large landscapes it is impossible to find 2 or more areas that can be considered replicates.
  6. The experimental manipulation was not carried out long enough. Ten years of manipulation is not long for caribou who have a generation time of 15-25 years.

Let us evaluate these 6 points.

#1 is fair enough; it would be hard to achieve a moose population this low, but it is possible in a second experiment.

#2 is a worry because it is difficult to deal experimentally with small populations, but we have to take the populations as a given at the time we do a manipulation.

#3 is true if you are a purist but is silly in the real world where treatments can never be assigned at random in landscape experiments.

#4 is a concern and it would be nice to include bears and other predators in the studies but there is a limit to people and money. Almost all previous studies in mountain caribou declines have pointed the finger at wolves so it is only reasonable to start with this idea. The multiple factor idea is hopeless to investigate or indeed even to study without infinite time and resources.

#5 is like #3 and it is an impossible constraint on field studies. It is a common statistical fallacy to assume that replicates must be identical in every conceivable way. If this were true, no one could do any science, lab or field.

#6 is correct but was impossible in this case because the management agencies forced this study to end in 2014 so that they could conduct another different experiment. There is always a problem deciding how long a study is sufficient, and the universal problem is that the scientists or (more likely) the money and the landscape managers run out of energy if the time exceeds about 10 years or more. The result is that one must qualify the conclusions to state that this is what happened in the 10 years available for study.

This study involved a heroic amount of field work over 10 years, and is a landmark in showing what needs to be done and the scale involved. It is a far cry from sitting at a computer designing the perfect field experiment on a theoretical landscape to actually carrying out the field work to get the data summarized in this paper. The next step is to continue to monitor some of these small caribou populations, the wolves and moose to determine how this food chain continues to adjust to changes in prey levels. The next experiment needed is not yet clear, and the eternal problem is to find the high levels of funding needed to study both predators and prey in any ecosystem in the detail needed to understand why prey numbers change. Perhaps a study of all the major predators – wolves, bears, cougars – in this system should be next. We now have the radio telemetry advances that allow satellite locations, activity levels, timing of mortality, proximity sensors when predators are near their prey, and even video and sound recording so that more details of predation events can be recorded. But all this costs money that is not yet here because governments and people have other priorities and value the natural world rather less than we ecologists would prefer. There is not yet a Nobel Prize for ecological field research, and yet here is a study on an iconic Canadian species that would be high up in the running.

What would I add to this paper? My curiosity would be satisfied by the number of person-years and the budget needed to collect and analyze these results. These statistics should be on every scientific paper. And perhaps a discussion of what to do next. In much of ecology these kinds of discussions are done informally over coffee and students who want to know how science works would benefit from listening to how these informal discussions evolve. Ecology is far from simple. Physics and chemistry are simple, genetics is simple, and ecology is really a difficult science.

Boutin, S. and Merrill, E. 2016. A review of population-based management of Southern Mountain caribou in BC. Unpublished review available at: http://cmiae.org/wp-content/uploads/Mountain-Caribou-review-final.pdf

Serrouya, R., McLellan, B.N., van Oort, H., Mowat, G., and Boutin, S. 2017. Experimental moose reduction lowers wolf density and stops decline of endangered caribou. PeerJ  5: e3736. doi: 10.7717/peerj.3736.