Category Archives: Evaluating Research Quality

Why Ecological Understanding Progresses Slowly

I begin with a personal observation spanning 65 years of evaluating ecological and evolutionary science: we are making progress, but very slowly. In the Middle Ages this problem would have been solved very simply by declaring the statement a heresy, followed by a quick burning at the stake. But for the most part we are more civil now, and we allow old folks to rant and rave without listening much.

By a stroke of luck, Betts et al. (2021) have reached the same conclusion, but in a more polite and nuanced way than I have. So for the whole story please read their paper, to which I will add only a footnote of a tirade to make it more personal. The question is simple and stark: should all ecological research be required to follow the hypothetico-deductive framework of science? Many excellent ecologists have argued against this proposal, and I will offer only an empirical, inductive set of observations to argue the contrary view, in support of H-D science.

Ecological and evolutionary papers can be broadly categorized as (1) descriptive natural history, (2) experimental hypothesis tests, and (3) future projections. The vast bulk of papers falls into the first category, a description of the world as it is today and as it was in the past. The h-word never appears in these publications. These papers are most useful in discovering new species and new interactions between species, and in providing valuable information about the world of the past through paleoecology and the geological sciences. Newspapers and TV thrive on these kinds of papers and alert the public to the natural world in many excellent ways. Descriptive natural history in the broad sense fully deserves our support, and it provides information essential to category (2), experimental ecology, by asking questions about emerging problems, introduced pests, declining fisheries, endangered mammals, and all the changing components of our natural world. Descriptive papers typically provide ideas that need follow-up by experimental studies.

Public support for science comes from the belief that scientists solve problems, and if the major effort of ecologists and evolutionary biologists is to describe nature, it is not surprising that financial support is minimal in these areas of study. The public is entertained but ecological problems are not solved. So I argue that we need more category (2) papers. But we can get these only if we attack serious problems with experimental means, and this requires long-term thinking and long-term funding on a scale we rarely see in ecology. The movement at present is toward big-data, technological methods of gathering data remotely to investigate landscape-scale problems. If big data remains only observational, we stay in category (1), so there is a critical need to make sure that big-data projects are truly experimental, category (2) science (Lindenmayer, Likens and Franklin 2018). That this change is not yet happening is clear in Figure 2 of Betts et al. (2021), which shows that very few papers in ecology journals in the last 25 years provide a clear set of multiple alternative hypotheses that they are attempting to test. If this criterion is a definition of good science, there is far less of it being done than we might think from the explosion of papers in ecology and evolution.

The third category of ecological and evolutionary papers is focused on future predictions, mostly with a view to climate change. In my opinion most of these papers should be confined to a science fiction journal because they are untestable model extrapolations for a future beyond our lifetimes. A limited subset of these could be useful if they projected a 5-10 year scenario that scientists could possibly test in the short term. If they are to be printed, I would suggest that each of these papers include an appendix listing the assumptions that must be made to reach their future predictions.

There is of course the fly in the ointment that even when ecologists diagnose a conservation problem with good experiments and analysis, the policy makers will not follow their advice (e.g. Palm et al. 2020). The world is not yet perfect.

Betts, M.G., Hadley, A.S., Frey, D.W., Frey, S.J.K., Gannon, D., et al. (2021). When are hypotheses useful in ecology and evolution? Ecology and Evolution. doi: 10.1002/ece3.7365.

Lindenmayer, D.B., Likens, G.E., and Franklin, J.F. (2018). Earth Observation Networks (EONs): Finding the Right Balance. Trends in Ecology & Evolution 33, 1-3. doi: 10.1016/j.tree.2017.10.008.

Palm, E. C., Fluker, S., Nesbitt, H.K., Jacob, A.L., and Hebblewhite, M. (2020). The long road to protecting critical habitat for species at risk: The case of southern mountain woodland caribou. Conservation Science and Practice 2: e219. doi: 10.1111/csp2.219.

On Innovative Ecological Research

Ecological research should have an impact on policy development. For the most part it does not. You do not need to take my word for this, since I am over the age of 40, so for confirmation you might read the New Zealand Environmental Science Funding Review (2020) which stated:

“I am not confident that there is a coherent basis for our national investment in environmental science. I am particularly concerned that there is no mechanism that links the ongoing demand environmental reporting makes for an understanding of complex ecological processes that evolve over decades, and a science funding system that is constantly searching for innovation, impact and linkages to the ever-changing demands of business and society.” (page 3)

Of course New Zealand may be an outlier, so we must seek confirmation in the Northern Hemisphere. Since 2006, Bill Sutherland and his many colleagues have every 3-4 years (nearly in concert with the lemming cycle) put out an extraordinary array of suggestions for important ecological questions that need to be answered for conservation and management. If you are running a seminar this year, you might consider doing a historical survey of how these suggestions have changed across 2006, 2010, 2013, and 2018. Excellent questions, but how much progress has there been in answering their challenges?

Some progress to be sure, and for that we are thankful, but the problems multiply faster than ecological progress, and I am reminded of trying to stop a snow avalanche with a shovel. Why should this be? There are some very big questions in ecology that we need to answer, but my first observation is that we have made little progress with the Sutherland et al. (2006) list, which was largely culled from the many previous years of ecological studies. The first problem is that research funding is too often geared to novel and innovative proposals, so that if you asked for funding to answer an old question that Charles Elton proposed in the 1950s, you would be struck off the list of innovative ecologists and possibly exiled to Mars with Elon Musk. Innovation in the mind of the granting agencies is modelled on the iPhone and the latest models of cars, which have a time scale of one year. Any ecologist working on a problem that has a time scale of 30 years is behind the times. So when you write a grant proposal you are pushed to restate problems recognized long ago as though they were newly recognized, dressed up with new methods of analysis.

There is no doubt some truly innovative ecological research, and to list it might be another interesting seminar project, but most of the environmental problems of our day are very old problems that remain unresolved. Government agencies in some countries have a list of here-and-now problems that university research rarely focuses on because such research cannot be labelled innovative. These mostly practical problems must then be solved by government environmental departments with their ever-shrinking resources, so they in turn contract the work out to the private sector, with its checkered record of gathering the data required to solve the problems at hand.

Environmental scientists will complain that when they do reach conclusions that would at least partly resolve the problems of the day, governments refuse to act on this knowledge because of a variety of vested interests that treat the issue as a zero-sum game: if the environment wins, the vested interests lose. If you want a good example, note that John Tyndall recognized the greenhouse effect of CO2 in 1859, and Svante Arrhenius and Thomas Chamberlin calculated in 1896 that burning fossil fuels would increase CO2 such that a doubling of CO2 would produce roughly a 5ºC rise in temperature. And in 2021 some people still argue about this conclusion.
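For readers who want the arithmetic behind that historical claim, the modern restatement of Arrhenius's calculation is a logarithmic relation; the expressions and rough values below are standard textbook numbers, not anything derived in this post:

$$\Delta F \approx 5.35\,\ln\!\left(\frac{C}{C_0}\right)\ \mathrm{W\,m^{-2}}, \qquad \Delta T \approx S\,\log_2\!\left(\frac{C}{C_0}\right),$$

so a doubling of CO2 (C/C0 = 2) produces a warming of about S, the climate sensitivity per doubling. Arrhenius's 1896 estimate put S near 5-6ºC; current assessments place the equilibrium value nearer 3ºC, but the logarithmic form of the argument has stood for 125 years.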

My suggestion is that we would be better off striking the word ‘innovation’ from all our granting councils and environmental research funding organizations, and replacing it with ‘excellent’ and ‘well designed’ as qualities to support. You are still allowed to talk about ‘innovative’ iPhones and autos, but we are better off with ‘excellent’ environmental and ecological research.

New Zealand Parliamentary Commissioner for the Environment. (2020). A review of the funding and prioritisation of environmental research in New Zealand (Wellington, New Zealand.) Available online: https://www.pce.parliament.nz/publications/environmental-research-funding-review

Sutherland, W.J., et al. (2006). The identification of 100 ecological questions of high policy relevance in the UK. Journal of Applied Ecology 43, 617-627. doi: 10.1111/j.1365-2664.2006.01188.x.

Sutherland, W.J., et al. (2010). A horizon scan of global conservation issues for 2010. Trends in Ecology & Evolution 25, 1-7. doi: 10.1016/j.tree.2009.10.003

Sutherland, W.J., et al. (2013). Identification of 100 fundamental ecological questions. Journal of Ecology 101, 58-67. doi: 10.1111/1365-2745.12025.

Sutherland, W.J., et al. (2018). A 2018 Horizon Scan of Emerging Issues for Global Conservation and Biological Diversity. Trends in Ecology & Evolution 33, 47-58. doi: 10.1016/j.tree.2017.11.006.

On the Focus of Biodiversity Science

Biodiversity science has expanded in the last 25 years to include scientific disciplines that were in a previous time considered independent disciplines. Now this could be thought of as a good thing because we all want science to be interactive, so that geologists talk to ecologists who also talk to mathematicians and physicists. University administrators might welcome this movement because it could aim for a terminal condition in which all the departments of the university are amalgamated into one big universal science department of Biodiversity which would include sociology, forestry, agriculture, engineering, fisheries, wildlife, geography, and possibly law and literature as capstones. Depending on your viewpoint, there are a few problems with this vision or nightmare that are already showing up.

First and foremost is the problem of the increasing amount of specialist knowledge needed to be a good soil scientist, or geographer, or fisheries ecologist. So if we need teams of scientists working on a particular problem, there must be careful integration of the parts and a shared vision of how to reach a resolution of the problem. This becomes more and more difficult to achieve as each individual science itself becomes more and more specialized, so that, for example, your team now needs a soil scientist who specializes only in clay soils. The results of this problem are visible today in the Covid pandemic: many research groups working at odds with one another, many cooperating but not all, vaccine supplies restricted by politics and nationalism, and some specialists claiming that all can be cured with hydroxychloroquine or bleach. So the first problem is how to assemble a team. If you want to do this, you need to sort out a second issue.

The second hurdle is another very big issue on which there is rarely good agreement: what are the problems you wish to solve? A university department has a very restricted range of faculty, so it cannot solve every biodiversity problem on Earth. At one extreme you can have the one-faculty-member-one-problem approach, in which one person works on the conservation of birds on mountain tops, another studies frogs and salamanders in southern Ontario, and a third worries about the conservation of rare orchids in Indonesia. At the other extreme is the many-faculty-one-problem approach, in which you concentrate your research power on a very few issues. Typically one might think these should be Canadian issues for a Canadian university, or New Zealand issues for a New Zealand university. In general many universities have taken the first approach and have assumed that government departments will cover the second by concentrating on major issues like fisheries declines or forest diseases.

Alas, the consequence of the present system is that governments are reducing their involvement in solving large-scale issues (take caribou in Canada, the Everglades in Florida, or house mouse outbreaks in Australia). At the same time university budgets are being cut, and there is less and less interest in contributing to the solution of environmental problems and more and more interest in fields that promise economic growth and jobs. Universities excel at short-term challenges, 2-3-year problem solving, but do very poorly at long-term issues. And it is the long-term problems that are destroying the Earth's ecosystems.

The problem facing biodiversity science is exactly that no one wishes to concentrate on a single major problem, so we drift in bits and pieces, missing the chance to make significant progress on any one of the major issues of our day. Take any major issue you wish to discuss. How many species are there on Earth? We do not even know that very well except for a few groups, so how much effort must go into taxonomy? Are insect populations declining? Data are extremely limited: a few groups, sampled inadequately over a small number of years in a small part of the Earth. Within North America, why are charismatic species like monarch butterflies declining, or are they really declining? How much habitat must be protected to ensure the continuation of a migratory species like this butterfly? Can we ecologists claim that any one of our major problems is being resourced adequately to discover answers?

When biodiversity science interfaces with agricultural science and the applied sciences of fisheries and wildlife management, we run into another set of major questions. Is modern agriculture sustainable? Certainly not, but how can we change it in the right direction? Are pelagic fisheries being overharvested? Questions abound; answers are tentative and need more evidence. Is biodiversity science supposed to provide solutions to these kinds of applied ecological questions? The major question that appears in most current biodiversity papers is how biodiversity will respond to climate change. This is in principle a question that can be answered at the local species or community scale, but it provides no resolution to the problem of biodiversity loss, nor does it even allow adequate data gathering to map the extent and reality of that loss. Are we back to mapping the deck chairs on the Titanic, but now with detailed satellite data?

What can be done about this lack of focus in biodiversity science? At the broadest level we need to increase discussion of what we are trying to accomplish within the current organization of science. Writing down the problems we are currently studying, and then the possible ways in which they could be resolved, would be a good start. If we recognize a major problem but can see no possible way of resolving it, perhaps our research or management efforts should be redirected. But it takes great courage to say: here is a problem in biodiversity conservation, but it can never be solved with a finite budget (Buxton et al. 2021). So start by asking why you are doing this research, and where you think we might be in 50 years on this issue. Make a list of insoluble problems. Here is a simple one to start on: eradicating invasive species. Eradication can perhaps be achieved in some situations, such as islands (Russell et al. 2016), but it is impossible in the vast majority of cases. There may be major disagreements over goals, in which case some ground rules might be put forward, such as a budget of $5 million over 4 years to achieve the specified goal. Much as we might like otherwise, biodiversity conservation cannot operate with an infinite budget and an infinite time frame.

Buxton, R.T., Nyboer, E.A., Pigeon, K.E., Raby, G.D., and Rytwinski, T. (2021). Avoiding wasted research resources in conservation science. Conservation Science and Practice 3. doi: 10.1111/csp2.329.

Russell, J.C., Jones, H.P., Armstrong, D.P., Courchamp, F., and Kappes, P.J. (2016). Importance of lethal control of invasive predators for island conservation. Conservation Biology 30, 670-672. doi: 10.1111/cobi.12666.

On an Experimental Design Mafia for Ecology

Ecologist A does an experiment and publishes Conclusions G and H. Ecologist B reads this paper and concludes that A’s data support Conclusions M and N and do not support Conclusions G and H. Ecologist B writes to the editor of Journal X to complain and is told to go get stuffed because Journal X never makes a mistake, having so many members of the Editorial Board with Nobel Prizes. What follows is an inviting fantasy: I want to examine one possible way to avoid at least some of these confrontations without having to fire all the Nobel Prize winners on the Editorial Board.

We go back to the simple question: can we agree on what types of data are needed for testing this hypothesis? Suppose we now require our graduate students, or at least our Nobel colleagues, to submit the experimental design for their study to the newly founded Experimental Design Mafia for Ecology (or in French, DEME), which will provide a critique of the formulation of the hypotheses to be tested and of the actual data that will be collected. The recommendations of the DEME will be non-binding, and professors and research supervisors will be able to ignore them with no consequences, except that the coveted DEME icon cannot then be displayed on the front page of the resulting papers.

The easiest part of this review will be the data methods, and this review by the DEME committee will cover the current standards for measuring temperature, doing aerial surveys for elephants, live-trapping small mammals, measuring DBH on trees, determining quadrat size for plant surveys, and other necessary data collection problems. This advice alone should hypothetically remove about 25% of future published papers that use obsolete models or inadequate methods to measure or count ecological items.

The critical part of the review will be the experimental design of the proposed study. Experimental design is important even if it is designated as undemocratic poppycock by your research committee. First, the DEME committee will require a clear statement of the hypothesis to be tested and of the alternative hypotheses. Words that are used too loosely in many ecological works must be defended as having a clear operational meaning, so that statements built on ‘stability’ or ‘ecosystem integrity’ may be questioned and their meaning sharpened. Hypotheses that forbid something from occurring, or that allow only type Y events to occur, are to be preferred, and for guidance applicants may be referred to Popper (1963), Platt (1964), Anderson (2008) or Krebs (2020). If there is no alternative hypothesis, your research plan is finished. If you are using statistical methods to test your hypotheses, read Ioannidis (2019).

Once you have done all this, you are ready to go to work. Do not be concerned if your research plan goes off target or you get strange results. Be prepared to give up hypotheses that do not fit the observed facts. That means you are doing creative science.

The DEME committee will have to be refreshed every 5 years or so, so that fresh ideas can be recognized. But the principles of doing good science are unlikely to change: good operational definitions, a set of hypotheses with clear predictions, a writing style that does not try to cover up contrary findings, and a forward look to what comes next. And the ecological world will slowly become a better place, with fewer sterile arguments about angels on the head of a pin.

Anderson, D.R. (2008) ‘Model Based Inference in the Life Sciences: A Primer on Evidence.’ (Springer: New York.) ISBN: 978-0-387-74073-7.

Ioannidis, J.P.A. (2019). What have we (not) learnt from millions of scientific papers with P values? American Statistician 73, 20-25. doi: 10.1080/00031305.2018.1447512.

Krebs, C.J. (2020). How to ask meaningful ecological questions. In Population Ecology in Practice. (Eds D.L. Murray and B.K. Sandercock.) Chapter 1, pp. 3-16. Wiley-Blackwell: Amsterdam. ISBN: 978-0-470-67414-7

Platt, J. R. (1964). Strong inference. Science 146, 347-353. doi: 10.1126/science.146.3642.347.

Popper, K. R. (1963) ‘Conjectures and Refutations: The Growth of Scientific Knowledge.’ (Routledge and Kegan Paul: London.). ISBN: 9780415285940

How Much Evidence is Enough?

The scientific community in general considers a conclusion about a problem resolved if there is enough evidence. There are many excellent books and papers that discuss what “enough evidence” means in terms of sampling design, experimental design, and statistical methods (Platt 1964, Shadish et al. 2002, Johnson 2002, and many others) so I will skip over these technical issues and discuss the nature of evidence we typically see in ecology and management.

An overall judgement one can make is that there is great diversity among the sciences in how much evidence is enough. If replication is expensive, fewer experiments are typically deemed sufficient. If human health is involved, as we see with Covid-19, many controlled experiments with massive replication are usually required. For fisheries and wildlife management, much less evidence is typically accepted as sufficient. For much of conservation biology the problem arises that no experimental design can be considered if the species or taxa are threatened or endangered. In these cases we have to rely on a general background of accepted principles to guide our management actions. It is these cases that I want to focus on here.

Two guiding lights in the absence of convincing experiments are the Precautionary Principle and the Hippocratic Oath. The simple prescription of the Hippocratic Oath for medical doctors has always been “Do no harm”. The Precautionary Principle has spread more widely and has various interpretations, most simply “Look before you leap” (Akins et al. 2019). But if applied too strictly, some would argue, this principle might stop “green” projects that are themselves directed toward sustainability. Wind turbine effects on birds are one example (Coppes et al. 2020). The conservation of wild bees may affect current agricultural production positively (Drossart and Gerard 2020) or negatively, depending on the details of the conservation practices. Trade-offs are a killer for many conservation solutions: jobs vs. the environment.

Many decisions about conservation action and wildlife management rest on less than solid empirical evidence. This observation could be tested in any graduate seminar by dissecting a series of papers on explicit conservation problems. Typically, cases involving declining large-bodied species like caribou or northern spotted owls or tigers are affected by a host of interconnected problems involving human usurpation of habitats for forestry, agriculture, or cities, compounded by poaching, by climate change and air pollution, or by diseases brought in by domestic animals or introduced species. In some fraction of cases the primary cause of decline is well documented but cannot be changed by conservation biologists (e.g. CO2 and coral bleaching).

Nichols et al. (2019) recommend a model-based approach to answering conservation and management questions as a way to increase the rate of learning about which set of hypotheses best predict ecological changes. The only problem with their approach is the time scale of learning, which for immediate conservation issues may be limiting. But for problems that have a longer time scale for hypothesis testing and decision making they have laid out an important pathway to problem solutions.

In many ecological and conservation publications we are allowed to suggest weak hypotheses for the explanation of pest outbreaks or population declines, and in the worst cases rely on “correlation = causation” arguments. This will not be a problem if we explicitly recognize weak hypotheses and specify a clear path to more rigorous hypotheses and experimental tests. Climate change is the current panchrestron or universal explanation because it shows weak associations with many ecological changes. There is no problem with invoking climate change as an explanatory variable if there are clear biological mechanisms linking this cause to population or community changes.

All of this has been said many times in the conservation and wildlife management literature, but I think needs continual reinforcement. Ask yourself: Is this evidence strong enough to support this conclusion? Weak conclusions are perhaps useful at the start of an investigation but are not a good basis for conservation or wildlife management decision making. Ensuring that our scientific conclusions “Do no harm” is a good principle for ecology as well as medicine.

Akins, A., et al. (2019). The Precautionary Principle in the international arena. Sustainability 11 (8), 2357. doi: 10.3390/su11082357.

Coppes, J., et al. (2020). The impact of wind energy facilities on grouse: a systematic review. Journal of Ornithology 161, 1-15. doi: 10.1007/s10336-019-01696-1.

Drossart, M. and Gerard, M. (2020). Beyond the decline of wild bees: Optimizing conservation measures and bringing together the actors. Insects (Basel, Switzerland) 11, 649. doi: 10.3390/insects11090649.

Johnson, D.H. (2002). The importance of replication in wildlife research. Journal of Wildlife Management 66, 919-932.

Nichols, J.D., Kendall, W.L., and Boomer, G.S. (2019). Accumulating evidence in ecology: Once is not enough. Ecology and Evolution 9, 13991-14004. doi: 10.1002/ece3.5836.

Platt, J. R. (1964). Strong inference. Science 146, 347-353. doi: 10.1126/science.146.3642.347.

Shadish, W.R, Cook, T.D., and Campbell, D.T. (2002) ‘Experimental and Quasi-Experimental Designs for Generalized Causal Inference.‘ (Houghton Mifflin Company: New York.)

But It is Complicated in Ecology

Consider two young ecologists both applying for the same position in a university or an NGO. To avoid a legal challenge, I will call one Ecologist C (as short for “conservative”), and the second candidate Ecologist L (as short for “liberal”). Both have just published reviews of conservation ecology. Person L has stated very clearly that the biological world is in rapid, catastrophic collapse with much unrecoverable extinction on the immediate calendar, and that this calls for emergency large-scale funding and action. Person C has reviewed similar parts of the biological world and concluded that some groups of animals and plants are of great concern, but that many other groups show no strong signals of collapse or that the existing data are inadequate to decide if populations are declining or not. Which person will get the job and why?

There is no answer to this hypothetical question, but it is worth pondering the potential reasons for these rather different perceptions of the conservation biology world. First, it is clear that candidate L’s catastrophic statements will be on the front page of the New York Times tomorrow, while much less publicity will accrue to candidate C’s statements. This is a natural response to the “This Is It!” approach so much admired by thrill seekers, in contrast to the “Maybe Yes, Maybe No” and “It Is Complicated” approaches. But rather than get into a discussion of personality types, it may be useful to dig a bit deeper into what this question reveals about contemporary conservation ecology.

Good scientists attempting to answer this dichotomy of opinion in conservation ecology would seek data on several questions.
(1) Are there sufficient data available to reach a conclusion on this important topic?
(2) If there are not sufficient data, should we err on the side of caution and risk “crying wolf”?
(3) Can we agree on what types of data are needed and admissible in this discussion?

On all these simple questions ecologists will argue very strongly. For question (1) we might decide that a 20-year study of a dominant species is sufficient to determine trend (e.g. Plaza and Lambertucci 2020). Others will be happy with 5 years of data on several species. Can we substitute space for time? Can we simply use genetic data to answer all conservation questions (Hoffmann et al. 2017)? If the habitat we are studying contains 75 species of plants or invertebrates, for how many species must we have accurate data to support Ecologist L? Or do we need any data at all if we are convinced about climate change? Alfonzetti et al. (2020) and Wang et al. (2020) give two good examples of data problems in assessing the conservation status of plants and butterflies.
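To make the 5-year versus 20-year argument concrete, here is a minimal simulation of my own (not from any of the papers cited above): a population declining 3% per year is censused with noisy surveys, and we ask how often a simple log-linear trend analysis detects the decline. All the numbers are hypothetical.

```python
# Minimal sketch: how survey length affects the chance of detecting a real
# decline. A population falls 3% per year, counts have roughly 20% noise,
# and a decline is "detected" when the log-linear trend is negative and
# conventionally significant. All values are hypothetical.
import numpy as np
from scipy import stats

def detection_rate(years, rate=-0.03, cv=0.2, n_reps=2000, seed=0):
    rng = np.random.default_rng(seed)
    hits = 0
    for _ in range(n_reps):
        t = np.arange(years)
        true_n = 1000 * np.exp(rate * t)                  # true declining trajectory
        counts = true_n * rng.lognormal(0.0, cv, years)   # noisy survey counts
        slope, _, _, p_value, _ = stats.linregress(t, np.log(counts))
        if slope < 0 and p_value < 0.05:                  # crude detection criterion
            hits += 1
    return hits / n_reps

for yrs in (5, 10, 20):
    print(f"{yrs:2d} years of surveys: decline detected in "
          f"{detection_rate(yrs):.0%} of simulated datasets")
```

In this toy example a 5-year survey usually misses the decline while a 20-year survey usually finds it, which is the crux of the disagreement between Ecologist L and Ecologist C about what counts as sufficient data.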

For question (2) there will be much more disagreement, because this is not about the science involved but is a personal judgement about the future consequences of projected trends in species numbers. These judgements are typically based loosely on past observations of similar ecological populations or communities, some of which have declined in abundance and disappeared (the Passenger Pigeon Paradigm) while others have recovered from minimal abundance to become common again (the Kirtland’s Warbler Paradigm). The problem circles back to the question of what constitutes ‘sufficient data’ for deciding conservation policies.

Fortunately, most policy-oriented NGO conservation groups concentrate on the larger conservation issues of finding and protecting large areas of habitat from development and pushing strongly for policies that rein in climate change and reduce pollution produced by poor business and government practices.

In the current political and social climate, I suspect Ecologist L would get the job rather than Ecologist C. I can think of only one university hiring decision in my career that was sealed by a very assured candidate like person L, who said to the departmental head and the search committee, “Hire me and I will put this university on the MAP!” We decided in this case we did not want to be on that particular MAP.

At present you can see all these questions are common in any science dealing with an urgent problem, as illustrated by the Covid-19 pandemic discussions, although much more money is being thrown at that disease issue than we ever expect to see for conservation or ecological science in general. It really is complicated in all science that is important to us.

Alfonzetti, M., et al. (2020). Shortfalls in extinction risk assessments for plants. Australian Journal of Botany 68, 466-471. doi: 10.1071/BT20106.

Hoffmann, A.A., Sgro, C.M., and Kristensen, T.N. (2017). Revisiting adaptive potential, population size, and conservation. Trends in Ecology & Evolution 32, 506-517. doi: 10.1016/j.tree.2017.03.012.

Plaza, P.I. and Lambertucci, S.A. (2020). Ecology and conservation of a rare species: What do we know and what may we do to preserve Andean condors? Biological Conservation 251, 108782. doi: 10.1016/j.biocon.2020.108782.

Wang, W.-L., Suman, D.O., Zhang, H.-H., Xu, Z.-B., Ma, F.-Z., and Hu, S.-J. (2020). Butterfly conservation in China: From science to action. Insects (Basel, Switzerland) 11, 661. doi: 10.3390/insects11100661.

On Citations and Scientific Research in Ecology

Begin with a few common assumptions in science.
(1) Higher citation rates define more valuable science.
(2) Recent references are more valuable than older references.
(3) Retracted scientific research is rapidly recognized and dropped from discussion.
(4) The vast majority of scientific research reported in papers is read by other scientists.
(5) Results cited in scientific papers are cited correctly in subsequent references.

The number of publications in ecological science is growing rapidly world-wide, and a corollary of this must be that the total number of citations is growing even more rapidly (e.g. Westgate et al. 2020). It is well recognized that citations are unevenly spread among published papers, and reports that nearly 50% of published papers never receive any citations at all are commonly cited. I have not been able to validate this for papers in the ecological sciences. The more important question is whether the most highly cited papers are the most significant for progress in ecological understanding. If this is the case, you can simply ignore the vast majority of the published literature and save reading time. But this seems unlikely to be correct for ecological science.

The issue of scientific importance is a time bomb partly because ‘importance’ may be redefined over time as sciences mature, and this redefinition may occur in years or tens of years. A classic example is the citation history of Charles Elton’s (1958) book on invasions (Richardson and Pyšek 2008). Published in 1958, this book had almost no citations until the 1990s. Citations have become more and more important in the ranking of individual scholars as well as university departments during the last 20 years (Keville et al. 2017). This has occurred despite continuous warnings that citations are not valid for comparing individuals of different age or departments in different academic fields (Patience et al. 2017). If you publish in Covid-19 research this year, you are likely to get more citations than the person working in earthworm taxonomy.

Most published papers reflect the general belief that citing the most recent papers is better than citing older papers. If this belief were valid, it would simplify the education of graduate students and facilitate teaching. But the simple fact is that in ecology older papers often (but not always) offer better perspectives than more recent papers, or indicate paths of research that have failed to lead to ecological wisdom.

Newspapers revel in stories of retracted research, if only to show that scientists are human. Of some interest are studies showing that retracted research continues to be cited. Hagberg (2020) describes a case in which a paper continued to be cited as much after retraction as before. Fortunately, retracted research is rare, though not absent, in the ecological sciences, but the various conflicting ways in which scientific journals deal with papers whose fraudulent results are discovered after publication leave much to be desired.

A final comment on references is a warning to anyone reading the discussion or conclusions of a paper. Smith and Cumberledge (2020) examined a random sample of 250 citations from the 5 most highly cited scientific publications of today and found a 25% rate of ‘quotation errors’. Quotation errors are distinct from ‘citation errors’, which are minor mistakes in the year of publication, page numbers, or names in the citations given in papers. A quotation error is a case in which the original paper says XX but the citing paper claims it said YY, a contradiction of what was originally reported. About 33% of these errors could be classed as ‘unsubstantiated’, and about 50% of the remaining quotation errors fell into the ‘impossible to substantiate’ category. Their study reinforced earlier work by Todd et al. (2007) and points out to readers a weakness in the current use of references in scientific writing that is often missed by reviewers.

On a more positive note, on how to increase your citation rate, Murphy et al. (2019) surveyed the titles of 3562 papers and their subsequent citation rate from four ecology and entomology journals. They found that papers that did not include the Latin name of species in the title of the paper were cited 47% more often than papers with Latin names in the title. The number of words in the title of the paper had almost no effect on citation rates. They were unable to determine whether the injection of humor in the title of the paper had any effect on citation rates because too few papers attempted humor in the title.   

Elton, C.S. (1958) ‘The Ecology of Invasions by Animals and Plants.’ (Methuen: London.) ISBN: 978-3-030-34721-5

Hagberg, J.M. (2020). The unfortunately long life of some retracted biomedical research publications. Journal of Applied Physiology 128, 1381-1391. doi: 10.1152/japplphysiol.00003.2020.

Keville, M.P., Nelson, C.R., and Hauer, F.R. (2017). Academic productivity in the field of ecology. Ecosphere 8, e01620. doi: 10.1002/ecs2.1620.

Murphy, S.M., Vidal, M.C., Hallagan, C.J., Broder, E.D., and Barnes, E.E. (2019). Does this title bug (Hemiptera) you? How to write a title that increases your citations. Ecological Entomology 44, 593-600. doi: 10.1111/een.12740.

Patience, G.S., Patience, C.A., Blais, B., and Bertrand, F. (2017). Citation analysis of scientific categories. Heliyon 3, e00300. doi: https://doi.org/10.1016/j.heliyon.2017.e00300.

Richardson, D.M. and Pyšek, P. (2008). Fifty years of invasion ecology – the legacy of Charles Elton. Diversity and Distributions 14, 161-168. doi: 10.1111/j.1472-4642.2007.00464.x.

Smith, N. and Cumberledge, A. (2020). Quotation errors in general science journals. Proceedings of the Royal Society. A, 476, 20200538. doi: 10.1098/rspa.2020.0538.

Todd, P.A., Yeo, D.C.J., Li, D., and Ladle, R.J. (2007). Citing practices in ecology: can we believe our own words? Oikos 116, 1599-1601. doi: 10.1111/j.2007.0030-1299.15992.x

Westgate, M.J., Barton, P.S., Lindenmayer, D.B., and Andrew., N.R. (2020). Quantifying shifts in topic popularity over 44 years of Austral Ecology. Austral Ecology 45, 663-671. doi: 10.1111/aec.12938.

On the Use of Statistics in Ecological Research

There is an ever-deepening cascade of statistical methods, and if you are going to be up to date you will have to use and cite some of them in your research reports or thesis. But before you jump into these methods, you might consider a few tidbits of advice. I suggest three rules and a few simple guidelines:

Rule 1. For descriptive papers keep to descriptive statistics. Every good basic statistics book has advice on when to use means to describe “average values” and when to use medians or percentiles. Follow their advice, and do not generate any hypotheses in your report except in the discussion. And follow the simple advice of statisticians not to generate and then test a hypothesis with the same set of data. Descriptive papers are most valuable. They can lead us to speculations and suggest hypotheses and explanations, but they do not lead us to strong inference.

Rule 2. For explanatory papers, the statistical rules become more complicated. For scientific explanation you need two or more alternative hypotheses that make different, non-overlapping predictions. The predictions must involve biological or physical mechanisms. Correlations alone are not mechanisms. They may help lead you to a mechanism, but the key is that the mechanism must involve a cause and an effect. A correlation of a decline in whale numbers with a decline in sunspot numbers may be interesting, but only if you can tie this correlation to an actual mechanism that affects the birth or death rates of the whales.
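As a small illustration of why a correlation alone proves nothing, here is a minimal simulation of my own (not from the post): two completely independent drifting time series, stand-ins for the whale counts and sunspot numbers above, frequently show a strong correlation purely by chance.

```python
# Minimal sketch: two independent random walks (simulated "whale counts" and
# "sunspot numbers") are frequently strongly correlated by chance alone,
# which is why a correlation is not a mechanism.
import numpy as np

rng = np.random.default_rng(0)
n_reps, years = 1000, 30
strong = 0
for _ in range(n_reps):
    whales = np.cumsum(rng.normal(0, 1, years))    # simulated drifting series
    sunspots = np.cumsum(rng.normal(0, 1, years))  # independent simulated series
    r = np.corrcoef(whales, sunspots)[0, 1]
    if abs(r) > 0.5:
        strong += 1

print(f"Pairs of unrelated series with |r| > 0.5: {strong / n_reps:.0%}")
```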

Rule 3. For experimental papers you have access to a large variety of books and papers on experimental design. You must have a control or unmanipulated group, or, for a comparative experiment, a group A with treatment X and a group B with treatment Y. There are many rules in the writings on experimental design that give good guidance (e.g. Anderson 2008; Eberhardt 2003; Johnson 2002; Shadish et al. 2002; Underwood 1990).

For all these ecology papers, consider the best of the recent statistical admonitions. Use statistics to enlighten the reader, not to obfuscate. Use graphics to illustrate major results. Avoid p-values (Anderson et al. 2000; Ioannidis 2019a, 2019b). Measure effect sizes for the different treatments (Nakagawa and Cuthill 2007). Add to these general admonitions the conventional rules of paper or report submission: do not argue with the editor, argue a small amount with the reviewers (none are perfect), and put your main messages in the abstract. And remember that it is possible that some interesting research was done before the year 2000.
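As a minimal sketch of the "effect sizes, not p-values" advice, here is one way to report a treatment effect: a standardized mean difference with a bootstrap confidence interval. The measurements are invented for illustration only; Nakagawa and Cuthill (2007) discuss which effect-size measure suits which design.

```python
# Minimal sketch: report a standardized mean difference (Cohen's d) with a
# bootstrap 95% confidence interval for a control vs. treatment comparison.
# The measurements below are invented for illustration.
import numpy as np

control = np.array([12.1, 9.8, 11.4, 10.6, 13.0, 10.9, 11.7, 12.4])     # hypothetical
treatment = np.array([14.2, 13.1, 15.0, 12.8, 14.7, 13.9, 15.3, 13.5])  # hypothetical

def cohens_d(a, b):
    """Standardized mean difference using a pooled standard deviation."""
    pooled_sd = np.sqrt((np.var(a, ddof=1) + np.var(b, ddof=1)) / 2)
    return (np.mean(b) - np.mean(a)) / pooled_sd

rng = np.random.default_rng(42)
boot = [cohens_d(rng.choice(control, control.size, replace=True),
                 rng.choice(treatment, treatment.size, replace=True))
        for _ in range(5000)]
lo, hi = np.percentile(boot, [2.5, 97.5])

print(f"Effect size d = {cohens_d(control, treatment):.2f}, "
      f"95% bootstrap CI [{lo:.2f}, {hi:.2f}]")
```

Reporting the effect size and its uncertainty tells the reader how large the treatment effect is, which a bare p-value never does.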

Anderson, D.R. (2008) ‘Model Based Inference in the Life Sciences: A Primer on Evidence.’ (Springer: New York.). 184 pp.

Anderson, D.R., Burnham, K.P., and Thompson, W.L. (2000). Null hypothesis testing: problems, prevalence, and an alternative. Journal of Wildlife Management 64, 912-923.

Eberhardt, L.L. (2003). What should we do about hypothesis testing? Journal of Wildlife Management 67, 241-247.

Ioannidis, J.P.A. (2019a). Options for publishing research without any P-values. European Heart Journal 40, 2555-2556. doi: 10.1093/eurheartj/ehz556.

Ioannidis, J. P. A. (2019b). What have we (not) learnt from millions of scientific papers with P values? American Statistician 73, 20-25. doi: 10.1080/00031305.2018.1447512.

Johnson, D.H. (2002). The importance of replication in wildlife research. Journal of Wildlife Management 66, 919-932.

Nakagawa, S. and Cuthill, I.C. (2007). Effect size, confidence interval and statistical significance: a practical guide for biologists. Biological Reviews 82, 591-605. doi: 10.1111/j.1469-185X.2007.00027.x.

Shadish, W.R, Cook, T.D., and Campbell, D.T. (2002) ‘Experimental and Quasi-Experimental Designs for Generalized Causal Inference.’ (Houghton Mifflin Company: New York.)

Underwood, A. J. (1990). Experiments in ecology and management: Their logics, functions and interpretations. Australian Journal of Ecology 15, 365-389.

On Three Kinds of Ecology Papers

There are many possible types of papers that discuss ecology; here I want to deal only with empirical studies of terrestrial and aquatic populations, communities, or ecosystems, and I will not discuss theoretical or modelling studies. I suggest that field-study papers in ecological science journals can be classified into three categories, which I will call Descriptive Ecology, Explanatory Ecology, and Experimental Ecology. Papers in all these categories describe some aspect of the ecological world and how it works, but they differ in their scientific impact.

Descriptive Ecology publications are essential to ecological science because they present details of the natural history of an ecological population or community that are vital to our growing understanding of the biota of the Earth. There is much literature in this group, and ecologists all have piles of books on the local natural history of birds, moths, turtles, and large mammals, to mention only a few. Fauna and flora compilations pull much of this information together to guide beginning students and the interested public toward a better knowledge of local species. These publications are extremely valuable because they form the natural history basis of our science, and they greatly outnumber the other two categories of papers. The importance of this information has been a continuous message of ecologists over many years (e.g. Bartholomew 1986; Dayton 2003; Travis 2020).

The scientific journals that professional ecologists read are mostly concerned with papers that can be classified as Explanatory Ecology or Experimental Ecology. In a broad sense, the first category provides a good story to tie together and thus explain the known facts of natural history, while the second defines a set of hypotheses that provide alternative explanations for these facts and then tests them experimentally. Rigorous ecology, like all good science, proceeds from the explanatory phase to the experimental phase. Good natural history suggests several possible explanations for ecological events but does not stop there. If a particular bird population is declining, we first need a guess from natural history about whether the decline might be caused by disease, habitat loss, or predation. But to proceed to successful management of this conservation problem, we need studies that distinguish among the possible cause(s) of our ecological problems, as recognized by Caughley (1994) and emphasized by Hone et al. (2018). Consequently the flow in all the sciences is from descriptive studies to explanatory ideas to experimental validation. Without experimental validation, ‘ecological ideas’ can transform into ‘ecological opinions’ to the detriment of our science. This is not a new view of scientific method (Popper 1963), but it does need to be repeated (Betini et al. 2017).

If you think I repeat this too much, I suggest you do a survey of how often ecological papers in your favourite journal are published without ever using the word ‘hypothesis’ or ‘experiment’. A historical survey of these or similar words would be a worthwhile endeavour for an honours or M.Sc. student in any one of the ecological subdisciplines. The favourite explanation offered in many current papers is climate change, a particularly difficult hypothesis to test because, if it is specified vaguely enough, it is impossible to test experimentally. Telling interesting stories should not be confused with rigorous experimental ecology.

Bartholomew, G. A. (1986). The role of natural history in contemporary biology. BioScience 36, 324-329. doi: 10.2307/1310237

Betini, G.S., Avgar, T., and Fryxell, John M. (2017). Why are we not evaluating multiple competing hypotheses in ecology and evolution? Royal Society Open Science 4, 160756. doi: 10.1098/rsos.160756.

Caughley, G. (1994). Directions in conservation biology. Journal of Animal Ecology 63, 215-244. doi: 10.2307/5542

Dayton, P.K. (2003). The importance of the natural sciences to conservation. American Naturalist 162, 1-13. doi: 10.1086/376572

Hone, J., Drake, V.A., and Krebs, C.J. (2018). Evaluating wildlife management by using principles of applied ecology: case studies and implications. Wildlife Research 45, 436-445. doi: 10.1071/WR18006.

Popper, K. R. (1963) ‘Conjectures and Refutations: The Growth of Scientific Knowledge.’ (Routledge and Kegan Paul: London.)

Travis, J. (2020). Where is natural history in ecological, evolutionary, and behavioral science? American Naturalist 196, 1-8. doi: 10.1086/708765.

On Declining Bird Populations

The conservation literature and the media are alive with cries of declining bird populations around the world (Rosenberg et al. 2019). Birds are well liked by people and are an important part of our environment, so they garner a lot of attention when the cry goes out that all is not well. The problem from a scientific perspective is what evidence is required to “cry wolf”. There are many different opinions on what data provide reliable evidence. There is a splendid critique of the Rosenberg et al. (2019) paper by Brian McGill that you should read:
https://dynamicecology.wordpress.com/2019/09/20/did-north-america-really-lose-3-billion-birds-what-does-it-mean/

My object here is to add a comment from the viewpoint of population ecology. It might be useful for bird ecologists to have a brief overview of the ecological evidence required to decide that a bird population, a bird species, or a whole group of birds is threatened or endangered. One simple way to make this decision is with a verbal flow chart, and I offer here one example of how to proceed.

  1. Get accurate and precise data on the populations of interest. If you claim a population is declining or endangered, you need to define the population and know its abundance over a reasonable time period.

Note that this is already a nearly impossible demand. Birds that are continuously resident can be censused well, but let me guess that continuous residency occurs in at most 5% of the birds of the world. The other birds we would like to protect are global or local migrants, or they move unpredictably in search of food, so it is difficult to define a population and to determine whether the population as a whole is rising or falling. Compounding all this are the truly rare bird species that, like all rare species, are difficult to census. Dorey and Walker (2018) examine these concerns for Canada.

The next problem is what counts as a reasonable time period for the census data. The Committee on the Status of Endangered Wildlife in Canada (COSEWIC) uses 10 years or 3 generations, whichever is longer (see the web link below). So now we need to know the generation time of the species of concern. We can make a guess at generation time, but let us stick with 10 years for the moment. For how many bird species in Canada do we have 10 years of accurate population estimates? (A small numerical sketch of what such a trend assessment involves follows this flow chart.)

  2. Next, we need to determine the causes of the decline if we wish to instigate management actions. Populations decline because of a falling reproductive rate, an increasing death rate, or a higher emigration rate. There are very few birds for which we have 10 years of diagnosis of the causes of changes in these vital rates. Strong conclusions should not rest on weak data.

The absence of much of these required data forces conservation biologists to guess about what is driving numbers down, knowing only that population numbers are falling. Typically, many things are happening over the 10 years of assessment: climate is changing, habitats are being lost or gained, invasive species are spreading, new toxic chemicals are being used for pest control, diseases are appearing; the list is long. We have little time or money to determine the critical limiting factors. We can only make a guess.

  3. At this stage we must specify an action plan recommending management measures for the recovery of the declining bird population. Management options are limited. We cannot in the short term alter climate. Regulating toxic chemical use in agriculture takes years. In a few cases we can set aside more habitat as a generalized solution for all declining birds. We have difficulty controlling invasive species, and some invasive species might be native species expanding their geographic range (e.g. Bodine and Capaldi 2017, Thibault et al. 2018).

Conservation ecologists are now up against the wall because all management actions that are recommended will cost money and will face potential opposition from some people. Success is not guaranteed because most of the data available are inadequate. Medical doctors face the same problem with rare diseases and uncertain treatments when deciding how to treat patients with no certainty of success.
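To make step 1 concrete, here is a minimal sketch of my own (with invented counts) of the kind of trend assessment that underlies a status decision. The 30% decline threshold is only an example of the sort of cut-off used in IUCN/COSEWIC-style criteria; consult the current guidelines for the real values.

```python
# Minimal sketch: estimate the annual rate of change from a 10-year census
# and express it as a total change over the assessment window. The counts
# are invented, and the 30% decline threshold is only an example of the
# kind of criterion used in status assessments.
import numpy as np

years = np.arange(2011, 2021)
counts = np.array([520, 505, 488, 430, 455, 401, 390, 350, 362, 330])  # hypothetical

# A log-linear regression gives the mean annual rate of change r (lambda = e^r).
slope, intercept = np.polyfit(years, np.log(counts), 1)
annual_lambda = np.exp(slope)
total_change = annual_lambda ** (years.size - 1) - 1   # change over the 9-year span

print(f"Estimated annual rate of change: {annual_lambda - 1:+.1%}")
print(f"Estimated change over the census period: {total_change:+.1%}")
print("Meets an example 30% decline threshold:", total_change <= -0.30)
```

Even this simple calculation assumes an accurately censused, well-defined population followed for a decade, which is exactly the demand that step 1 makes and that so few species can meet.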

In my opinion the data on which the present concern over bird losses is based are too poor to justify the hyper-publicity about declining birds. I realize most conservation biologists will disagree, but that is why I think we need to lift our game by adopting a more rigorous set of data rules for the categories of concern in conservation. A more balanced tone of concern may be more useful in gathering public support for management efforts. Stanton et al. (2018) provide a good example for farmland birds. Overuse of the word ‘extinction’ is counterproductive in my opinion. Providing better data is highly desirable, so that conservation papers do not always end with the statement ‘but detailed mechanistic studies are lacking’. Pleas for declining populations ought to be balanced by recommendations for solutions to the problem. Local solutions are most useful; global solutions are critical in the long run but, given current global governance, are too often fairy tales.

Bodine, E.N. and Capaldi, A. (2017). Can culling Barred Owls save a declining Northern Spotted Owl population? Natural Resource Modeling 30, e12131. doi: 10.1111/nrm.12131.

Dorey, K. and Walker, T.R. (2018). Limitations of threatened species lists in Canada: A federal and provincial perspective. Biological Conservation 217, 259-268. doi: 10.1016/j.biocon.2017.11.018.

Rosenberg, K.V., et al. (2019). Decline of the North American avifauna. Science 366, 120-124. doi: 10.1126/science.aaw1313.

Stanton, R.L., Morrissey, C.A., and Clark, R.G. (2018). Analysis of trends and agricultural drivers of farmland bird declines in North America: A review. Agriculture, Ecosystems & Environment 254, 244-254. doi: 10.1016/j.agee.2017.11.028.

Thibault, M., et al. (2018). The invasive Red-vented bulbul (Pycnonotus cafer) outcompetes native birds in a tropical biodiversity hotspot. PLoS ONE 13, e0192249. doi: 10.1371/journal.pone.0192249.

http://cosewic.ca/index.php/en-ca/assessment-process/wildlife-species-assessment-process-categories-guidelines/quantitative-criteria