Category Archives: Evaluating Research Quality

Is Ecology Becoming a Correlation Science?

One of the first lessons in Logic 101 is classically called “Post hoc, ergo propter hoc”, or in plain English, “After that, therefore because of that”. The simplest example, of the many you can see in the newspapers, might be: “The ocean is warming up, salmon populations are going down, it must be another effect of climate change.” There is a great deal of literature on the problems associated with these kinds of simple inferences, going back to classics like Romesburg (1981), Cox and Wermuth (2004), Sugihara et al. (2012), and Nichols et al. (2019). My purpose here is only to remind you to examine cause and effect when you draw ecological conclusions.

My concern is partly related to news articles on ecological problems. A recent example is the collapse of the snow crab fishery in the Bering Sea, which in the last 5 years has gone from a very large and profitable fishery interacting with a very large crab population to, at present, a closed fishery with very few snow crabs. What has happened? Where did the snow crabs go? No one really knows, but perhaps half a dozen ideas have been put forward to explain what has happened. Meanwhile the fishery and the local economy are in chaos. With few critical data on this oceanic ecosystem, we can list several factors that might be involved – climate warming of the Bering Sea, predators, overfishing, diseases, habitat disturbance from bottom trawling, natural cycles – while recognizing that we have no simple way of deciding cause and effect and therefore of making management choices.

The simplest solution is to say that many interacting factors are involved, and many papers point to the complexity of populations, communities and ecosystems (e.g. Lidicker 1991, Holmes 1995, Howarth et al. 2014). Everyone would agree with this general idea, “the world is complex”, but the argument has always been “how do we proceed to investigate ecological processes and solve ecological problems given this complexity?” The search for generality has led mostly to replicated studies in which ‘identical’ populations or communities behave very differently. How can we resolve this problem? A simple answer to all this is to fall back on the correlation coefficient and avoid complexity.

Having some idea of what is driving changes in ecological systems is certainly better than having no idea, but it is a problem when only one explanation is pushed without careful consideration of alternative possibilities. The media, and particularly social media, are crowded with oversimplified views of the causes of ecological problems, views which receive wide approbation with little detailed consideration of alternatives. Perhaps we will always be exposed to these oversimplified views of complex problems, but as scientists we should not follow in these footsteps without hard data.

What kind of data do we need in science? We must embrace the rules of causal inference, and a good start might be the books of Popper (1963) and Pearl and Mackenzie (2018) and for ecologists in particular the review of the use of surrogate variables in ecology by Barton et al. (2015). Ecologists are not going to win public respect for their science until they can avoid weak inference, minimize hand waving, and follow the accepted rules of causal inference. We cannot build a science on the simple hypothesis that the world is complicated or by listing multiple possible causes for changes. Correlation coefficients can be a start to unravelling complexity but only a weak one. We need better methods for resolving complex issues in ecology.
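To see how weak correlational inference can be, here is a minimal sketch (all numbers are invented for illustration, not real measurements): a steadily warming “temperature” series and a steadily declining “salmon” series, constructed entirely independently of one another, still yield a correlation coefficient close to -1.

```python
import math

# Hypothetical, invented series: a warming trend and a declining trend.
# Neither series is derived from the other in any way.
n = 100
temp = [10.0 + 0.02 * i for i in range(n)]                # warming trend
salmon = [1000.0 - 5.0 * i + 20.0 * math.sin(i / 5.0)     # declining trend
          for i in range(n)]                              # with wiggles

def pearson(x, y):
    """Plain Pearson correlation coefficient."""
    mx = sum(x) / len(x)
    my = sum(y) / len(y)
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return sxy / math.sqrt(sxx * syy)

r = pearson(temp, salmon)
print(round(r, 3))  # close to -1, yet no causal link exists by construction
```

The two series share nothing but opposing time trends, yet the correlation is nearly perfect; the coefficient alone cannot distinguish this situation from a genuine causal link.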

Barton, P.S., Pierson, J.C., Westgate, M.J., Lane, P.W. & Lindenmayer, D.B. (2015) Learning from clinical medicine to improve the use of surrogates in ecology. Oikos, 124, 391-398. doi: 10.1111/oik.02007.

Cox, D.R. and Wermuth, N. (2004) Causality: a statistical view. International Statistical Review, 72, 285-305.

Holmes, J.C. (1995) Population regulation: a dynamic complex of interactions. Wildlife Research, 22, 11-19.

Howarth, L.M., Roberts, C.M., Thurstan, R.H. & Stewart, B.D. (2014) The unintended consequences of simplifying the sea: making the case for complexity. Fish and Fisheries, 15, 690-711. doi: 10.1111/faf.12041.

Lidicker, W.Z., Jr. (1991) In defense of a multifactor perspective in population ecology. Journal of Mammalogy, 72, 631-635.

Nichols, J.D., Kendall, W.L. & Boomer, G.S. (2019) Accumulating evidence in ecology: Once is not enough. Ecology and Evolution, 9, 13991-14004. doi: 10.1002/ece3.5836.

Pearl, J. and Mackenzie, D. (2018) The Book of Why: The New Science of Cause and Effect. Penguin, London, U.K. 432 pp. ISBN: 978-1541698963.

Popper, K.R. (1963) Conjectures and Refutations: The Growth of Scientific Knowledge. Routledge and Kegan Paul, London. 608 pp.

Romesburg, H.C. (1981) Wildlife science: gaining reliable knowledge. Journal of Wildlife Management, 45, 293-313.

Sugihara, G., et al. (2012) Detecting causality in complex ecosystems. Science, 338, 496-500. doi: 10.1126/science.1227079.

How to Destroy a Research Station

I have had the ‘privilege’ over the last 60 years of watching three ecological field stations be destroyed. Admittedly this is a small sample, as any ecologist will complain, but I want to present my list of how to achieve this kind of destruction should you ever be commanded to do so. I will not name names or specific places, since the aim is to develop a general theory rather than to pillory specific historical actions and people. I suggest that nine rules are needed to proceed smoothly in this matter if you are given the job.

  1. Have a clear vision of why you wish to destroy an existing station. Do not vacillate. The motivation may be money, or philosophy of science, or orders from higher in the echelon, or a personal peeve. Remember you are an administrator, and no one can challenge your wisdom in making major changes or closing the station.
  2. Speak to none of the current users of the research station. If the research station has a Users Committee, avoid talking to them until after all the decisions are made. A users committee is just an honorary appointment, and it helps if very few of the users are actually people who do research at the station. It is very important that your vision should not be clouded by personnel or research programs currently running at the station. And it is best if the scientists using the station have no information except gossip about the changes that are coming.
  3. Avoid loose talk around your office. If you or your group are paying a visit in the field to the research station before closing it or repositioning its purpose, give out no information to anyone on future courses of action.
  4. Communicate upwards in the hierarchy, never downwards. You must keep all the members of the higher echelons fully informed. Do not dwell on the details of your progress in destruction but emphasize the gains that will flow from this dismantling. Tell fibs as much as you like because no one will question your version of events.
  5. Never read anything about the history of the research station or read any of the papers and reports that have originated there. The key is that you as an administrator know what should be done, and the last consideration is history. Administrators must keep a clear mind, unconcerned with historical trivia.
  6. Let none of the destruction news reach the media lest the public in general might begin to see what is happening. Newspaper and media coverage are rarely flattering to bureaucrats. If possible, line up a sympathetic media person who can talk about the brilliant future of the research station and the wisdom of the decisions you have made.
  7. Take a strong business approach. Do not worry if you must fire the people currently running the research station or eject scientists currently working there. Everyone must retire at some point, and all business leaders have solid recipes for hiring contractors to take care of any problems with the buildings, no matter what the extra cost.
  8. Sell the research station if you possibly can in order to gain revenue for your yet to be revealed vision. You may talk complete nonsense to explain why you are making major changes or closing the research station because few of your possible critics will be in a position to distinguish nonsense statements from truth. ‘Alternative facts’ are very useful if your decisions are questioned.
  9. Realize that if you have made a mistake in destroying a research station, your employer will not know it for several years. By that time you will have ascended in the hierarchy of your employment unit for having carried out such a definitive action. And if your co-workers know what a poor job you are doing, they will write sterling letters of reference to move you to another position in a different department or agency: the worse the job you have done, the stronger the reference letters recommending you for another post.

There is almost no literature I can find on this topic of administering a field station. If you think field stations are eternal, it may be a sign that you are very young, or you are very fortunate in working for an agency where moving forward is correctly labeled as progress. I have always thought that long-term field research stations were considered sacred but clearly not everyone agrees. Administrators must have something to do to leave their mark on the world for better or worse. All we can do is watch and be alert for emerging symptoms of collapse.


On Assumptions in Ecology Papers

What can we do as ecologists to improve the publishing standards of ecology papers? I suggest one simple but bold request: we should require at the end of every published paper an annotated list of the assumptions made in the analysis reported in the paper. A tabular format could be devised, with columns for the assumption, the perceived support of and tests for the assumption, and references for this support or the lack of it. I can hear the screaming already, so this table could be put in the Supplementary Material which most people do not read. To the final section of each paper, where statements of who did the writing and who provided the money already appear, we could add a reference to this assumptions table in the Supplementary Material, or a statement that no assumptions were made to reach these conclusions.

The first objection I can foresee to this recommendation is that ecologists will differ in what they count as assumptions of their analysis and conclusions. As an example, in wildlife studies we commonly assume that an individual animal carrying a radio collar will behave and survive just like an animal with no collar. In analyses of avian population dynamics, we might commonly assume that our visiting nests does not affect their survival probability. We make many such assumptions about random or non-random sampling. My question then is whether there is any value in listing these kinds of assumptions. My answer is that listing what the authors think they are assuming should alert reviewers to any elephants in the room that have not been listed.
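As a sketch of what such an annotated assumptions list might contain in practice (the entries, wording, and support statements below are invented placeholders, not drawn from any real paper):

```python
# A hypothetical assumptions table for a paper's Supplementary Material,
# with the three proposed columns: assumption, support/tests, references.
assumptions = [
    {
        "assumption": "Radio-collared animals behave and survive "
                      "like uncollared animals",
        "support": "untested in this study; mixed support in the literature",
        "references": "(placeholder: none cited here)",
    },
    {
        "assumption": "Nest visits do not affect nest survival",
        "support": "partially tested via a visit-frequency comparison",
        "references": "(placeholder: none cited here)",
    },
]

# Render the table as a simple annotated list.
for row in assumptions:
    print(f"- {row['assumption']}\n    support: {row['support']}"
          f"\n    references: {row['references']}")
```

The point of the exercise is not the format but the discipline: writing each row forces the authors to say which assumptions were tested and which were not.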

My attention was called to this general issue by the recent paper of Ginzburg and Damuth (2022) in which they contrasted the assumptions of two general theories of functional responses of predators to prey – “prey dependence” versus “ratio dependence”. We have in ecology many such either-or discussions that never seem to end. Consider the long-standing discussion of whether populations can be regulated by factors that are “density dependent” or “density independent”, a much-debated issue that is still with us even though it was incisively analyzed many years ago.  

Experimental ecology is not exempt from assumptions, as outlined in Kimmel et al. (2021) who provide an incisive review of cause and effect in ecological experiments. Pringle and Hutchinson (2020) discuss the failure of assumptions in food web analysis and how these might be resolved with new techniques of analysis. Drake et al. (2021) consider the role of connectivity in arriving at conservation evaluations of patch dynamics, and the importance of demographic contributions to connectivity via dispersal. The key point is that, as ecology progresses, the role of assumptions must be continually questioned in relation to our conclusions about population and community dynamics in relation to conservation and landscape management.

Long ago Peters (1991) wrote an extended critique of how ecology should operate to avoid some of these issues, but his 1991 book is not easily available to students (currently available on Amazon for about $90). To encourage more discussion of these questions from the older to the more current literature, I have copied Peters Chapter 4 to the bottom of my web page at https://www.zoology.ubc.ca/~krebs/books.html for students to download if they wish to discuss these issues in more detail.

Perhaps a message in all this is that ecology has always wished to be “physics-in-miniature”, with grand generalizations like the laws we teach in the physical sciences. Over the last 60 years the battle in the ecology literature has been between this physics model and the view that every population and community differs, and that everything is continuing to change under the climate emergency, so that we can have little general theory in ecology. There are certainly many current generalizations, but they are relatively useless for moving from the general to the particular in the development of a predictive science. The consequence is that we now bounce from individual study to individual study, typically starting from different assumptions, with very limited predictability that is empirically testable. The central issue for ecological science is how we can move from the present fragmentation in our knowledge to a more unified science. Examining the assumptions of our current publications would be a start in this direction.

Drake, J., Lambin, X., and Sutherland, C. (2021). The value of considering demographic contributions to connectivity: a review. Ecography 44, 1-18. doi: 10.1111/ecog.05552.

Ginzburg, L.R. and Damuth, J. (2022). The Issue Isn’t Which Model of Consumer Interference Is Right, but Which One Is Least Wrong. Frontiers in Ecology and Evolution 10, 860542. doi: 10.3389/fevo.2022.860542.

Kimmel, K., Dee, L.E., Avolio, M.L., and Ferraro, P.J. (2021). Causal assumptions and causal inference in ecological experiments. Trends in Ecology & Evolution 36, 1141-1152. doi: 10.1016/j.tree.2021.08.008.

Peters, R.H. (1991) ‘A Critique for Ecology.’ (Cambridge University Press: Cambridge, England.) ISBN:0521400171 (Chapter 4 pdf available at https://www.zoology.ubc.ca/~krebs/books.html)

Pringle, R.M. and Hutchinson, M.C. (2020). Resolving Food-Web Structure. Annual Review of Ecology, Evolution, and Systematics 51, 55-80. doi: 10.1146/annurev-ecolsys-110218-024908.

On Replication in Ecology

All statistics books recommend replication in scientific studies. I suggest that this recommendation has been carried to an extreme in current ecological studies. In approximately 50% of the ecological papers I read in our best journals (a biased sample to be sure), the results are not new and have been replicated many times in the past, often in papers not cited in the ‘new’ paper. There is no harm in this, but it does not lead to progress in our understanding of populations, communities or ecosystems, or to new ecological theory. We do need replication examining the major ideas in ecology, and this is good. On the other hand, we do not need more and more studies of what we might call ecological truths. An analogy would be to test the Flat Earth Hypothesis in 2022 to examine its predictions. It is time to move on.

There is an extensive literature on hypothesis testing, which can be crudely summarized as follows: “Observations of X” can be explained by hypothesis A, B, or C, each of which has unique predictions associated with it. A series of experiments is carried out to test these predictions, and the most strongly supported hypothesis, call it B*, is accepted as current knowledge. Explanation B* is useful scientifically only if it leads to a new set of predictions D, E, and F which are then tested. This chain of explanation is never simple. There can be much disagreement, which may mean sharpening the hypotheses that follow from Explanation B*. At the same time there will be some scientists who, despite all the accumulated data, still accept the Flat Earth Hypothesis. If you think this is nonsense, you have not been reading the news about the Covid epidemic.
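The chain of reasoning above can be sketched in miniature (hypotheses, predictions, and observations all invented for illustration): each hypothesis makes unique quantitative predictions, and the hypothesis with the smallest prediction error against the observations becomes the provisionally accepted B*.

```python
# Invented observations of X at five values of an explanatory variable.
observations = [2.1, 3.9, 6.2, 7.8, 10.1]
xs = [1, 2, 3, 4, 5]

# Three hypothetical explanations, each with a unique prediction rule.
hypotheses = {
    "A": lambda x: x,        # predicts X grows one-for-one
    "B": lambda x: 2 * x,    # predicts X grows twice as fast
    "C": lambda x: x ** 2,   # predicts accelerating growth
}

def sse(predict):
    """Sum of squared prediction errors against the observations."""
    return sum((predict(x) - y) ** 2 for x, y in zip(xs, observations))

errors = {name: f for name, f in ((n, sse(f)) for n, f in hypotheses.items())}
best = min(errors, key=errors.get)
print(best)  # prints "B": the least-wrong hypothesis, provisionally accepted
```

Accepting B* here is provisional in exactly the sense described above: it is useful only if it generates new predictions that can themselves be tested.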

Further complications arise from two streams of thought. The first holds that the way forward is via simple mathematical models of the system. There is much literature on modelling in ecology, which is most useful when it is based on good field data; but for too many ecological problems the model is believed more than the data, and the assumptions of the models are not stated or tested. If you think that models lead directly to progress, examine again the Covid modelling of the past 2 years. The second stream of thought that complicates ecological science is descriptive ecology. Many papers in the current literature describe a set of data or events with no hypothesis in mind. The major offenders are the biodiversity scientists and the ‘measure everything’ scientists. The basis of this approach seems to be that all our data will be of major use in 50, 100 or whatever years, so we must collect major archives of ecological data. Biodiversity is the bandwagon of the present time, and it is a most useful endeavour to classify and categorise species. As such it leads to much natural history that is interesting and important for many non-scientists. And almost everyone would agree that we should protect biodiversity. But while biodiversity studies are a necessary background to ecological studies, they do not by themselves lead to progress in the scientific understanding of the ecosphere.

Conservation biology is closely associated with biodiversity science, but it suffers even more from the problems outlined above. Conservation is important for everyone, but the current cascade of papers in conservation biology is too often of little use. We do not need opinion pieces; we need clear thinking and concrete data to solve conservation issues. This is not easy, since once a species is endangered there are typically too few individuals to study properly. And like the rest of ecological science, funding is so poor that reliable data cannot be obtained, and we are left with more unvalidated indices or opinions on species changes. Climate change puts an enormous kink in any conservation recommendations, but on the other hand it serves as a panchreston, a universal explanation for every possible change that occurs in ecosystems, and it can thus be used, with spurious correlations, to justify every research agenda, good or poor.

We could advance our ecological understanding more rapidly by demanding a coherent theoretical framework for all proposed programs of research. Grace (2019) argues that plant ecology has made much progress during the last 80 years, in contrast to the less positive overview of Peters (1991) or my observations outlined above. Prosser (2020) provides a critique for microbial ecology that echoes what Peters argued in 1991. All these divergences of opinion would be worthy of a graduate seminar discussion.

If you think all my observations are nonsense, then you should read the perceptive book by Peters (1991) written 30 years ago on the state of ecological science as well as the insightful evaluation of this book by Grace (2019) and the excellent overview of these questions in Currie (2019).  I suggest that many of the issues Peters (1991) raised are with us in 2022, and his general conclusion that ecology is a weak science rather than a strong one still stands. We should celebrate the increases in ecological understanding that have been achieved, but we could advance the science more rapidly by demanding more rigor in what we publish.

Currie, D.J. (2019). Where Newton might have taken ecology. Global Ecology and Biogeography 28, 18-27. doi: 10.1111/geb.12842.

Grace, J. (2019). Has ecology grown up? Plant Ecology & Diversity 12, 387-405. doi: 10.1080/17550874.2019.1638464.

Peters, R.H. (1991) ‘A Critique for Ecology.’ (Cambridge University Press: Cambridge, England.). 366 pages. ISBN: 0521400171

Prosser, J.I. (2020). Putting science back into microbial ecology: a question of approach. Philosophical Transactions of the Royal Society B: Biological Sciences 375, 20190240. doi: 10.1098/rstb.2019.0240.

On the Canadian Biodiversity Observation Network (CAN BON)

I have been reading the report of an exploratory workshop from July 2021 on designing a biodiversity monitoring network across Canada to address priority monitoring gaps and to engage Indigenous people. The 34 pages of their workshop report can be accessed here, and I recommend you read it before reading my comments:

https://www.nserc-crsng.gc.ca/Media-Media/NewsDetail-DetailNouvelles_eng.asp?ID=1310

I have a few comments on this report that are my opinion only. I think the Report on this workshop outlines a plan so grand and misguided that it could not be achieved in this century, even with a military budget. The report is a statement of wisdom put together with platitudes. Why is this and what are the details that I believe to be unachievable?

The major goal of the proposed network is to bring together everyone to improve biodiversity monitoring and address the highest priority gaps to support biodiversity conservation. I think most of the people of Canada would support these objectives, but what do they mean? Let us do a thought experiment. Suppose at this instant in time we knew the distribution and the exact abundance of every species in Canada. What would we know, what could we manage, what good would all these data be except as a list taking up terabytes of storage? If we had these data for several years and the numbers or biomass were changing, what could we do? Is all well in our ecosystems or not? What are we trying to maximize when we have no idea of the mechanisms of change? Contrast these concerns about biodiversity with the energy and resources applied in medicine to the mortality of humans infected with Covid viruses in the last 3 years: a monumental effort to examine the mechanisms of infection and ways of preventing illness, with a clear goal and clear measures of progress toward that goal.

There is no difficulty in putting out “dream” reports, and biologists as well as physicists and astronomers, and social scientists have been doing this for years. But in my opinion this report is a dream too far and I give you a few reasons why.

First, we have no clear definition of biodiversity except that it includes everything living, so if we are going to monitor biodiversity, what exactly should we do? For some of us, monitoring caribou and wolves would be a sufficient program, or whales in the Arctic, or plant species in peat bogs. So, to begin, we have to define operationally the biodiversity we wish to monitor. We could put all our energy into a single group of species, like birds, and claim that these are the signal species to monitor for ecosystem integrity. Or should we consider only the COSEWIC list of Threatened or Endangered Species in Canada as our major monitoring concern? The first job of CAN BON must therefore be to make a list of what the observation network is supposed to observe (Lindenmayer 2018). There is absolutely no agreement on that simple question within Canada now, and without it we cannot move forward to build an effective network.

The second issue I have with the existing report is that the emphasis is on observations, and the question then is what problems can be solved by observation alone. The advance of ecological science has been based on observation and experiment directed to specific questions of either ecological or economic interest. In the Pacific salmon fishery, for example, the objective of observation is to predict escapement and thus allowable harvest quotas. Despite years of high-quality observations and experiments, we are still a long way from understanding the ecosystem dynamics that drive Pacific salmon reproduction and survival.

Contrast the salmon problem with the caribou problem. We have a reasonably good understanding of why caribou populations are declining or not, based on many studies of predator-prey dynamics, harvesting, and habitat management. At present the southern populations of caribou are disappearing because of habitat loss to forestry and mining, and the interacting nexus of factors is well understood. What we do not do as a society is put this knowledge into practice for conservation: land use for forestry is given priority for economic reasons, and the caribou populations at risk suffer. Once ecological knowledge is well defined, it does not lead automatically to the action that biodiversity scientists would like. Climate change is the elephant in the room for many of our ecological problems, but it is simultaneously easy to blame and uneven in its effects.

The third problem is funding, and this overwhelms the objectives of the Network. Ecological funding in Canada is a disgrace, yet we achieve much with little money. If this is ever to change it will require major public input and changed governmental objectives, neither of which is under our immediate control. One way to press this objective forward is to produce a list of the most serious biodiversity problems facing Canada now, along with suggestions for their resolution. There is no simple way to develop this list. A by-product of the current funding system in Canada is the shelling out of peanuts to a wide range of investigators, whose main job becomes jockeying for the limited funds by overpromising results. Coordination is rare, partly because funding is low. So (for example) I can work only on the tree ecology of the boreal forest because I am not able to expand my studies to include the shrubs, the ground vegetation, the herbivores, and the insect pests, not to mention the moose and the caribou.

For these reasons, and many more that could be drawn from the CAN BON report, I suggest the following plan for proceeding further:

  1. Make a list of the 10 or 15 most important questions for biodiversity science in Canada. This alone would be a major achievement.
  2. Establish subgroups organized around each of these questions who can then self-organize to discuss plans for observations and experiments designed to answer the question. Vague objectives are not sufficient. An established measure of progress is essential.
  3. Request a realistic budget and a time frame for achieving these goals from each group.  Find out what the physicists, astronomers, and medical programs deem to be suitable budgets for achieving their goals.
  4. Organize a second CAN BON conference of a small number of scientists to discuss these specific proposals. Any subgroup can participate at this level, but some decisions must be made for the overall objectives of biodiversity conservation in Canada.

These general ideas are not particularly new (Likens 1989, Lindenmayer et al. 2018). They have evolved from the setting up of the LTER Program in the USA (Hobbie et al. 2003), and they are standard operating procedure for astronomers, who must come together with big ideas asking for big money. None of this will be easy to achieve for biodiversity conservation because it requires the wisdom of Solomon and the determination of Vladimir Putin.

Hobbie, J.E., Carpenter, S.R., Grimm, N.B., Gosz, J.R., and Seastedt, T.R. (2003). The US Long Term Ecological Research Program. BioScience 53, 21-32.

Likens, G. E. (Ed.) (1989). ‘Long-term Studies in Ecology: Approaches and Alternatives.’ (Springer Verlag: New York.) ISBN: 0387967435

Lindenmayer, D. (2018). Why is long-term ecological research and monitoring so hard to do? (And what can be done about it). Australian Zoologist 39, 576-580. doi: 10.7882/az.2017.018.

Lindenmayer, D.B., Likens, G.E., and Franklin, J.F. (2018). Earth Observation Networks (EONs): Finding the Right Balance. Trends in Ecology & Evolution 33, 1-3. doi: 10.1016/j.tree.2017.10.008.

Why Ecological Understanding Progresses Slowly

I begin with a personal observation spanning 65 years of evaluating ecological and evolutionary science – we are making progress but very slowly. This problem would be solved very simply in the Middle Ages by declaring this statement a heresy, followed by a quick burning at the stake. But for the most part we are more civil now, and we allow old folks to rant and rave without listening much.

By a stroke of luck, Betts et al. (2021) have reached the same conclusion, but in a more polite and nuanced way than I. So, for the whole story please read their paper, to which I will add only a footnote of a tirade to make it more personal. The question is simple and stark: should all ecological research be required to follow the hypothetico-deductive framework of science? Many excellent ecologists have argued against this proposal, and I will offer only an empirical, inductive set of observations in support of the contrary view, the case for H-D science.

Ecological and evolutionary papers can be broadly categorized as (1) descriptive natural history, (2) experimental hypothesis tests, and (3) future projections. The vast bulk of papers falls into the first category, a description of the world as it is today and in the past. The h-word never appears in these publications. These papers are most useful in discovering new species, new interactions between species, and the valuable information about the world of the past through paleoecology and the geological sciences. Newspapers and TV thrive on these kinds of papers and alert the public to the natural world in many excellent ways. Descriptive natural history in the broad sense fully deserves our support, and it provides information essential to category (2), experimental ecology, by asking questions about emerging problems, introduced pests, declining fisheries, endangered mammals and all the changing components of our natural world. Descriptive papers typically provide ideas that need follow up by experimental studies. 

Public support for science comes from the belief that scientists solve problems, and if the major effort of ecologists and evolutionary biologists is to describe nature, it is not surprising that financial support is minimal in these areas of study. The public is entertained, but ecological problems are not solved. So I argue that we need more papers of type (2). But we can get these only if we attack serious problems with experimental means, and this requires long-term thinking and long-term funding on a scale we rarely see in ecology. The movement at present is toward big-data, technological methods of gathering data remotely to investigate landscape-scale problems. If big data remains only observational, we stay in category (1); there is a critical need to make sure that big-data projects are truly experimental, category (2) science (Lindenmayer, Likens and Franklin 2018). That this change is not happening so far is clear in Figure 2 of Betts et al. (2021), which shows that very few papers in ecology journals in the last 25 years provide a clear set of multiple alternative hypotheses that they attempt to test. If this criterion defines good science, there is far less of it being done than we might think from the explosion of papers in ecology and evolution.

The third category of ecological and evolutionary papers is focused on future predictions, typically with a view to climate change. In my opinion most of these papers should be confined to a science fiction journal because they are untestable model extrapolations for a future beyond our lifetimes. A limited subset could be useful if they projected a 5-10 year scenario that scientists could possibly test in the short term. If they are to be printed, I would suggest that each of these papers include an appendix listing the assumptions that must be made to reach its future predictions.

There is of course the fly in the ointment that even when ecologists diagnose a conservation problem with good experiments and analysis, the policy makers will not follow their advice (e.g. Palm et al. 2020). The world is not yet perfect.

Betts, M.G., Hadley, A.S., Frey, D.W., Frey, S.J.K., Gannon, D., et al. (2021). When are hypotheses useful in ecology and evolution? Ecology and Evolution. doi: 10.1002/ece3.7365.

Lindenmayer, D.B., Likens, G.E., and Franklin, J.F. (2018). Earth Observation Networks (EONs): Finding the Right Balance. Trends in Ecology & Evolution 33, 1-3. doi: 10.1016/j.tree.2017.10.008.

Palm, E. C., Fluker, S., Nesbitt, H.K., Jacob, A.L., and Hebblewhite, M. (2020). The long road to protecting critical habitat for species at risk: The case of southern mountain woodland caribou. Conservation Science and Practice 2: e219. doi: 10.1111/csp2.219.

On Innovative Ecological Research

Ecological research should have an impact on policy development. For the most part it does not. You do not need to take my word for this, since I am over the age of 40, so for confirmation you might read the New Zealand Environmental Science Funding Review (2020) which stated:

“I am not confident that there is a coherent basis for our national investment in environmental science. I am particularly concerned that there is no mechanism that links the ongoing demand environmental reporting makes for an understanding of complex ecological processes that evolve over decades, and a science funding system that is constantly searching for innovation, impact and linkages to the ever-changing demands of business and society.” (page 3)

Of course New Zealand may be an outlier, so we must seek confirmation in the Northern Hemisphere. Bill Sutherland and his many colleagues have, every 3-4 years since 2006 (nearly in concert with the lemming cycle), put out an extraordinary array of suggestions for important ecological questions that need to be answered for conservation and management. If you are running a seminar this year, you might consider doing a historical survey of how these suggestions have changed from 2006 through 2010 and 2013 to 2018. Excellent questions, but how much progress has there been on answering their challenges?

Some progress to be sure, and for that we are thankful, but the problems multiply faster than ecological progress, and I am reminded of trying to stop a snow avalanche with a shovel. Why should this be? There are some very big questions in ecology that we need to answer, but my first observation is that we have made little progress with the Sutherland et al. (2006) list, which was largely culled from the preceding many years of ecological studies. The first problem is that research funding is too often geared to novel and innovative proposals, so that if you asked for funding to answer an old question that Charles Elton proposed in the 1950s, you would be struck off the list of innovative ecologists and possibly exiled to Mars with Elon Musk. Innovation in the mind of the granting agencies is based on the iPhone and the latest models of cars, which have a time scale of one year. Any ecologist working on a problem that has a time scale of 30 years is behind the times. So when you write a grant proposal you are pushed to restate problems recognized long ago as though they were newly discovered, dressed up with new methods of analysis.

There is no doubt some truly innovative ecological research, and listing it might be another interesting seminar project, but most of the environmental problems of our day are very old problems that remain unresolved. Government agencies in some countries have a list of problems of the here-and-now that university research rarely focuses on because such research cannot be labelled innovative. These mostly practical problems must then be solved by government environmental departments with their ever-shrinking resources, so they in turn contract the work out to the private sector, with its checkered record of gathering the data required for solving the problems at hand.

Environmental scientists will complain that when they do reach conclusions that would at least partly resolve the problems of the day, governments refuse to act on this knowledge because of a variety of vested interests; if the environment wins, the vested interests lose, a zero-sum game. If you want a good example, note that John Tyndall recognized CO2 and the greenhouse effect in 1859, and Svante Arrhenius and Thomas Chamberlin calculated in 1896 that burning fossil fuels increased CO2 such that a doubling of CO2 would produce about a 5ºC rise in temperature. And in 2021 some people still argue about this conclusion.

My suggestion is that we would be better off striking the word ‘innovation’ from all our granting councils and environmental research funding organizations, and replacing it with ‘excellent’ and ‘well designed’ as qualities to support. You are still allowed to talk about ‘innovative’ iPhones and autos, but we are better off with ‘excellent’ environmental and ecological research.

New Zealand Parliamentary Commissioner for the Environment. (2020). A review of the funding and prioritisation of environmental research in New Zealand (Wellington, New Zealand.) Available online: https://www.pce.parliament.nz/publications/environmental-research-funding-review

Sutherland, W.J., et al. (2006). The identification of 100 ecological questions of high policy relevance in the UK. Journal of Applied Ecology 43, 617-627. doi: 10.1111/j.1365-2664.2006.01188.x.

Sutherland, W.J., et al. (2010). A horizon scan of global conservation issues for 2010. Trends in Ecology & Evolution 25, 1-7. doi: 10.1016/j.tree.2009.10.003

Sutherland, W.J., (2013). Identification of 100 fundamental ecological questions. Journal of Ecology 101, 58-67. doi: 10.1111/1365-2745.12025.

Sutherland, W.J., et al. (2018). A 2018 Horizon Scan of Emerging Issues for Global Conservation and Biological Diversity. Trends in Ecology & Evolution 33, 47-58. doi: 10.1016/j.tree.2017.11.006.

On the Focus of Biodiversity Science

Biodiversity science has expanded in the last 25 years to include scientific disciplines that were previously considered independent. Now this could be thought of as a good thing, because we all want science to be interactive, so that geologists talk to ecologists who also talk to mathematicians and physicists. University administrators might welcome this movement because it could aim for a terminal condition in which all the departments of the university are amalgamated into one big universal science department of Biodiversity, which would include sociology, forestry, agriculture, engineering, fisheries, wildlife, geography, and possibly law and literature as capstones. Depending on your viewpoint, there are a few problems with this vision or nightmare that are already showing up.

First and foremost is the problem of the increasing amount of specialist knowledge necessary to be a good soil scientist, or geographer, or fisheries ecologist. If we need teams of scientists working on a particular problem, there must be careful integration of the parts and a shared vision of how to reach a resolution of the problem. This is more and more difficult to achieve as each individual science itself becomes more and more specialized, so that, for example, your team now needs a soil scientist who specializes only in clay soils. The results of this problem are visible today in the Covid pandemic: many research groups working at odds with one another, many cooperating but not all, vaccine supplies being restricted by politics and nationalism, and some specialists claiming that all can be cured with hydroxychloroquine or bleach. So the first problem is how to assemble a team. If you want to do this, you need to sort out a second issue.

The second hurdle is another very big issue on which there is rarely good agreement: what are the problems you wish to solve? A university department has a very restricted range of faculty, so it cannot solve every biodiversity problem on earth. At one extreme you can have the one-faculty-member = one-problem approach, in which one person works on the conservation of birds on mountain tops, another studies frogs and salamanders in southern Ontario, and a third is concerned about the conservation of rare orchids in Indonesia. At the other extreme is the many-faculty = one-problem approach, in which you concentrate your research power on a very few issues. Typically one might think these should be Canadian issues at a Canadian university, or New Zealand issues at a New Zealand university. In general many universities have taken the first approach and have assumed that government departments will fill in the second by concentrating on major issues like fisheries declines or forest diseases.

Alas, the consequence of the present system is that governments are reducing their involvement in solving large-scale issues (take caribou in Canada, the Everglades in Florida, or house mouse outbreaks in Australia). At the same time university budgets are being cut, and there is less and less interest in contributing to the solution of environmental problems and more and more interest in fields that increase economic growth and jobs. Universities excel at short-term challenges, 2-3 year problem solving, but do very poorly at long-term issues. And it is the long-term problems that are destroying the Earth's ecosystems.

The problem facing biodiversity science is exactly that no one wishes to concentrate on a single major problem, so we drift in bits and pieces, missing the chance to make significant progress on any one of the major issues of our day. Take any major issue you wish to discuss. How many species are there on Earth? We do not know that very well except in a few groups, so how much effort must go into taxonomy? Are insect populations declining? Data are extremely limited to a few groups, gathered over a small number of years in a small part of the Earth with inadequate sampling. Within North America, why are charismatic species like monarch butterflies declining, or are they really declining? How much habitat must be protected to ensure the continuation of a migratory species like this butterfly? Can we ecologists claim that any one of our major problems is being resourced adequately to discover answers?

When biodiversity science interfaces with agricultural science and the applied sciences of fisheries and wildlife management we run into another set of major questions. Is modern agriculture sustainable? Certainly not, but how can we change it in the right direction? Are pelagic fisheries being overharvested? Questions abound; answers are tentative and need more evidence. Is biodiversity science supposed to provide solutions to these kinds of applied ecological questions? The major question that appears in most current biodiversity papers is: how will biodiversity respond to climate change? This is in principle a question that can be answered at the local species or community scale, but it provides no resolution to the problem of biodiversity loss, nor does it even allow adequate data gathering to map the extent and reality of loss. Are we back to mapping the deck chairs on the Titanic, but now with detailed satellite data?

What can be done about this lack of focus in biodiversity science? At the broadest level we need to increase discussions about what we are trying to accomplish within the current state of scientific organization. Trying to write down the problems we are currently studying, and then the possible ways in which each problem can be resolved, would be a good start. If we recognize a major problem but can see no possible way of resolving it, perhaps our research or management efforts should be redirected. But it takes great courage to say here is a problem in biodiversity conservation, but it can never be solved with a finite budget (Buxton et al. 2021). So start by asking: why am I doing this research, and where do I think we might be in 50 years on this issue? Make a list of insoluble problems. Here is a simple one to start on: eradicating invasive species. Perhaps eradication can be done in some situations, like islands (Russell et al. 2016), but it is impossible in the vast majority of cases. There may be major disagreements over goals, in which case some ground rules might be put forward, such as a budget of $5 million over 4 years to achieve the specified goal. Much as we might like, biodiversity conservation cannot operate with an infinite budget and an infinite time frame.

Buxton, R.T., Nyboer, E.A., Pigeon, K.E., Raby, G.D., and Rytwinski, T. (2021). Avoiding wasted research resources in conservation science. Conservation Science and Practice 3. doi: 10.1111/csp2.329.

Russell, J.C., Jones, H.P., Armstrong, D.P., Courchamp, F., and Kappes, P.J. (2016). Importance of lethal control of invasive predators for island conservation. Conservation Biology 30, 670-672. doi: 10.1111/cobi.12666.

On an Experimental Design Mafia for Ecology

Ecologist A does an experiment and publishes Conclusions G and H. Ecologist B reads this paper and concludes that A's data support Conclusions M and N and do not support Conclusions G and H. Ecologist B writes to the editor of Journal X to complain and is told to go get stuffed because Journal X never makes a mistake, with so many members of the Editorial Board who have Nobel Prizes. This is an inviting fantasy, and I want to examine one possible way to avoid at least some of these confrontations without having to fire all the Nobel Prize winners on the Editorial Board.

We go back to the simple question: can we agree on what types of data are needed for testing this hypothesis? We now require our graduate students, or at least our Nobel colleagues, to submit the experimental design for their study to the newly founded Experimental Design Mafia for Ecology (or in French, DEME), which will provide a critique of the formulation of the hypotheses to be tested and the actual data to be collected. The recommendations of the DEME will be nonbinding, and professors and research supervisors will be able to ignore them with no consequences, except that the coveted DEME icon will not be published on the front page of the resulting papers.

The easiest part of this review will be the data methods: the DEME committee will cover the current standards for measuring temperature, doing aerial surveys for elephants, live-trapping small mammals, measuring DBH on trees, determining quadrat size for plant surveys, and other necessary data-collection problems. This advice alone should hypothetically remove about 25% of future published papers that use obsolete models or inadequate methods to measure or count ecological items.

The critical part of the review will be the experimental design of the proposed study. Experimental design is important even if it is designated as undemocratic poppycock by your research committee. First, the DEME committee will require a clear statement of the hypothesis to be tested and the alternative hypotheses. Words that are used too loosely in many ecological works must be defended as having a clear operational meaning, so that statements that include 'stability' or 'ecosystem integrity' may be questioned and their meaning sharpened. Hypotheses that forbid something from occurring, or that allow only type Y events to occur, are to be preferred, and for guidance applicants may be referred to Popper (1963), Platt (1964), Anderson (2008) or Krebs (2020). If there is no alternative hypothesis, your research plan is finished. If you are using statistical methods to test your hypotheses, read Ioannidis (2019).
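
To make the "multiple alternative hypotheses" requirement concrete, here is a toy sketch, in the spirit of the model-based inference that Anderson (2008) advocates, of confronting two competing hypotheses with data and ranking them by AIC. The data and both models are invented for illustration; real analyses would use field data and biologically motivated models.

```python
import numpy as np

rng = np.random.default_rng(1)
x = np.arange(20, dtype=float)               # e.g. years of monitoring
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, 20)   # simulated data with a true trend

def aic(rss, n, k):
    # AIC for least-squares fits: n * ln(RSS/n) + 2k (constants dropped)
    return n * np.log(rss / n) + 2 * k

n = x.size
# H0: no trend (mean-only model, 1 parameter)
rss0 = np.sum((y - y.mean()) ** 2)
# H1: linear trend (2 parameters)
slope, intercept = np.polyfit(x, y, 1)
rss1 = np.sum((y - (intercept + slope * x)) ** 2)

aic0, aic1 = aic(rss0, n, 1), aic(rss1, n, 2)
print(f"AIC no-trend: {aic0:.1f}, AIC trend: {aic1:.1f}")
```

The point is not the arithmetic but the discipline: both hypotheses are stated in advance, both are fit to the same data, and the comparison can reject one of them.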

Once you have done all this, you are ready to go to work. Do not be concerned if your research plan goes off target or you get strange results. Be prepared to give up hypotheses that do not fit the observed facts. That means you are doing creative science.

The DEME committee will have to be refreshed every 5 years or so, so that fresh ideas can be recognized. But the principles of doing good science are unlikely to change: good operational definitions, a set of hypotheses with clear predictions, a writing style that does not try to cover up contrary findings, and a forward look to 'what next?' And the ecological world will slowly become a better place, with fewer sterile arguments about angels on the head of a pin.

Anderson, D.R. (2008) ‘Model Based Inference in the Life Sciences: A Primer on Evidence.’ (Springer: New York.) ISBN: 978-0-387-74073-7.

Ioannidis, J.P.A. (2019). What have we (not) learnt from millions of scientific papers with P values? American Statistician 73, 20-25. doi: 10.1080/00031305.2018.1447512.

Krebs, C.J. (2020). How to ask meaningful ecological questions. In Population Ecology in Practice. (Eds D.L. Murray and B.K. Sandercock.) Chapter 1, pp. 3-16. Wiley-Blackwell: Amsterdam. ISBN: 978-0-470-67414-7

Platt, J. R. (1964). Strong inference. Science 146, 347-353. doi: 10.1126/science.146.3642.347.

Popper, K. R. (1963) ‘Conjectures and Refutations: The Growth of Scientific Knowledge.’ (Routledge and Kegan Paul: London.). ISBN: 9780415285940

How Much Evidence is Enough?

The scientific community in general considers a conclusion about a problem resolved if there is enough evidence. There are many excellent books and papers that discuss what “enough evidence” means in terms of sampling design, experimental design, and statistical methods (Platt 1964, Shadish et al. 2002, Johnson 2002, and many others) so I will skip over these technical issues and discuss the nature of evidence we typically see in ecology and management.

An overall judgement one can make is that there is great diversity among the sciences about how much evidence is enough. If replication is expensive, typically fewer experiments are deemed sufficient. If human health is involved, as we see with Covid-19, many controlled experiments with massive replication are usually required. For fisheries and wildlife management much less evidence is typically quoted as sufficient. For much of conservation biology the problem arises that no experimental design can be considered if the species or taxa are threatened or endangered. In these cases we have to rely on a general background of accepted principles to guide our management actions. It is these cases that I want to focus on here.
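
The cost of replication can be made quantitative with a standard textbook sample-size calculation. The sketch below uses the usual normal-approximation formula for comparing two group means; the effect sizes are invented, but the lesson is general: detecting a subtle effect demands far more replicates than detecting a large one, which is exactly why expensive field experiments so often settle for weak evidence.

```python
import math
from statistics import NormalDist

def replicates_needed(effect, sd, alpha=0.05, power=0.8):
    """Approximate replicates per group to detect a difference `effect`
    between two group means with residual standard deviation `sd`
    (two-sided test, normal approximation)."""
    z = NormalDist().inv_cdf
    z_alpha = z(1 - alpha / 2)   # critical value for the test
    z_power = z(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_power) * sd / effect) ** 2)

# A large effect (one standard deviation) vs. a subtle one (a quarter sd):
print(replicates_needed(effect=1.0, sd=1.0))   # 16 plots per group
print(replicates_needed(effect=0.25, sd=1.0))  # 252 plots per group
```

A sixteen-fold increase in replication for a four-fold smaller effect is the arithmetic behind "how much evidence is enough".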

Two guiding lights in the absence of convincing experiments are the Precautionary Principle and the Hippocratic Oath. The simple prescription of the Hippocratic Oath for medical doctors has always been "Do no harm". The Precautionary Principle has been spread more widely and has various interpretations, most simply "Look before you leap" (Akins et al. 2019). But, some would argue, if applied too strictly this principle might stop "green" projects that are themselves directed toward sustainability. Wind turbine tower effects on birds are one example (Coppes et al. 2020). The conservation of wild bees may affect current agricultural production positively (Drossart and Gerard 2020) or negatively, depending on the details of the conservation practices. Trade-offs are a killer for many conservation solutions: jobs vs. the environment.

Many decisions about conservation action and wildlife management rest on less than solid empirical evidence. This observation could be tested in any graduate seminar by dissecting a series of papers on explicit conservation problems. Typically, those cases involving declining large-bodied species like caribou or northern spotted owls or tigers are affected by a host of interconnected problems: human usurpation of habitats for forestry, agriculture, or cities, compounded by poaching, by climate change driven by air pollution, or by diseases brought in by domestic animals or introduced species. In some fraction of cases the primary cause of decline is well documented but cannot be changed by conservation biologists (e.g. CO2 and coral bleaching).

Nichols et al. (2019) recommend a model-based approach to answering conservation and management questions as a way to increase the rate of learning about which set of hypotheses best predict ecological changes. The only problem with their approach is the time scale of learning, which for immediate conservation issues may be limiting. But for problems that have a longer time scale for hypothesis testing and decision making they have laid out an important pathway to problem solutions.
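
The core of this accumulating-evidence approach can be sketched in a few lines: maintain a weight for each competing hypothesis and update the weights with Bayes' rule as each season's monitoring data arrive. The likelihood values below are invented for illustration; in practice they would come from each hypothesis's model evaluated against the observed data.

```python
import numpy as np

# Three competing hypotheses start with equal weight.
weights = np.array([1 / 3, 1 / 3, 1 / 3])

# Hypothetical likelihoods of each year's monitoring data under each
# hypothesis (here, H2 consistently predicts the data best).
yearly_likelihoods = [
    np.array([0.2, 0.5, 0.3]),
    np.array([0.1, 0.6, 0.3]),
    np.array([0.3, 0.5, 0.2]),
    np.array([0.2, 0.7, 0.1]),
]

for lik in yearly_likelihoods:
    weights = weights * lik          # Bayes' rule: prior x likelihood...
    weights = weights / weights.sum()  # ...renormalized to sum to 1

print(np.round(weights, 3))  # weight accumulates on the best predictor
```

This also makes the time-scale limitation visible: the weights separate only as fast as the monitoring seasons accumulate, which is why the approach suits long-horizon problems better than immediate crises.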

In many ecological and conservation publications we are allowed to suggest weak hypotheses for the explanation of pest outbreaks or population declines, and in the worst cases to rely on "correlation = causation" arguments. This need not be a problem if we explicitly recognize weak hypotheses and specify a clear path to more rigorous hypotheses and experimental tests. Climate change is the current panchrestron, or universal explanation, because it shows weak associations with many ecological changes. There is no problem with invoking climate change as an explanatory variable if there are clear biological mechanisms linking this cause to population or community changes.
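
The "correlation = causation" trap is easy to demonstrate numerically. In this invented example, two causally unrelated time series each drift upward over 50 years (say, a regional temperature index and a pest abundance index), so their raw correlation is high, but the year-to-year changes, which strip out the shared trend, show little association.

```python
import numpy as np

rng = np.random.default_rng(42)
years = np.arange(50)

# Two independent series that both happen to trend upward over time
# (values are invented; neither series influences the other).
temperature_index = 0.05 * years + rng.normal(0, 0.3, 50)
pest_index = 0.08 * years + rng.normal(0, 0.5, 50)

raw_r = np.corrcoef(temperature_index, pest_index)[0, 1]
# Correlate the year-to-year differences to remove the shared trend:
detrended_r = np.corrcoef(np.diff(temperature_index),
                          np.diff(pest_index))[0, 1]

print(f"raw r = {raw_r:.2f}, detrended r = {detrended_r:.2f}")
```

The high raw correlation is produced entirely by the shared trend, which is why a weak association with a warming trend, absent a biological mechanism, is such thin evidence.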

All of this has been said many times in the conservation and wildlife management literature, but I think it needs continual reinforcement. Ask yourself: is this evidence strong enough to support this conclusion? Weak conclusions are perhaps useful at the start of an investigation but are not a good basis for conservation or wildlife management decision making. Ensuring that our scientific conclusions "do no harm" is a good principle for ecology as well as medicine.

Akins, A., et al. (2019). The Precautionary Principle in the international arena. Sustainability 11 (8), 2357. doi: 10.3390/su11082357.

Coppes, J., et al. (2020). The impact of wind energy facilities on grouse: a systematic review. Journal of Ornithology 161, 1-15. doi: 10.1007/s10336-019-01696-1.

Drossart, M. and Gerard, M. (2020). Beyond the decline of wild bees: Optimizing conservation measures and bringing together the actors. Insects (Basel, Switzerland) 11, 649. doi: 10.3390/insects11090649.

Johnson, D.H. (2002). The importance of replication in wildlife research. Journal of Wildlife Management 66, 919-932.

Nichols, J.D., Kendall, W.L., and Boomer, G.S. (2019). Accumulating evidence in ecology: Once is not enough. Ecology and Evolution 9, 13991-14004. doi: 10.1002/ece3.5836.

Platt, J. R. (1964). Strong inference. Science 146, 347-353. doi: 10.1126/science.146.3642.347.

Shadish, W.R., Cook, T.D., and Campbell, D.T. (2002) ‘Experimental and Quasi-Experimental Designs for Generalized Causal Inference.’ (Houghton Mifflin Company: New York.)