Tag Archives: publishing papers

On Critical Evaluation in Ecology

Science proceeds by “conjecture-and-refutation” if we agree with Karl Popper (1963). There is a rich literature on science in general and ecological science in particular that is well worth a series of graduate discussions, even if it is pre-2000 ancient history (Peters 1991, Weiner 1995, Woodward and Goodstein 1996). But I wish to focus on a current problem that I think is hindering ecological progress. I propose that ecological journals at present concentrate on publishing papers that present apparent progress and are shedding papers that are critical of apparent progress. Or in Popper’s words, they focus on publishing ‘conjecture’ and avoid ‘refutation’. This issue matters most in wildlife management and conservation. The human side of the issue may involve personal criticism and, on occasion, the loss of a job or promotion. The issue arises in part from a confusion between the critique of ideas or data and the interpretation that all critiques are personal. So, the first principle of this discussion is that I discuss here only critiques of ideas or data.

There are many simple reasons for critiques of experimental design and data gathering. Are the treatments replicated? Are the estimates of data variables reliable and sufficient? Are proxy variables good or poor? Have the studies been carried out long enough? All these critiques can be summarized under the umbrella of measurement reliability. There are many examples we can use to illustrate these ideas. Are bird populations declining across the globe or locally? Are fisheries overharvesting particular species? Can we use climate change as a universal explanation of all changes in wildlife populations? Are survey methods for population changes across very large areas reliable? The problem is compounded by the demand for good or bad news that can be fed to the news media or social media with high impact but little reliability.

The problem at the level of science is the temptation to extrapolate beyond the limits of the available data. Now we come to the critical issue – how do our scientific journals respond to critical reviews of papers already published? My concern is that at present journals do not wish to receive or accept manuscripts that are critical of previously published papers. These decisions are no doubt kept confidential by journal publishers. There is perhaps some justification for this rejection policy, given that in the few cases where critiques of existing papers are published, the citation score of the original paper may greatly exceed that of the critique. So, conjecture pays; refutation does not.

Journals are flooded with papers, and for the better journals I would expect at least a 60-80% rejection rate. For Science the rejection rate is 94%, for Nature 92%, and for the Journal of Animal Ecology 85% of submitted manuscripts are rejected. Consequently, the suggestion that they reserve space for ‘refutation’ is too negative for their publication model. There is little I can suggest if one is caught in this dilemma except to try another, less prestigious journal, and remember that web searches find papers easily no matter where they are published. If you need inspiration, you can follow Peters (1991) and write a book-length critique and suffer the brickbats from the establishment (e.g. Nature 354: 444, 12 December 1991).

But if you are upset about a particular paper or series of papers, remember critiques are valuable but follow these rules for a critique:

  1. Keep it short: five typed pages should be about the maximum length.
  2. Raise a set of major points. Do not try to cover everything.
  3. Summarize briefly the key points you are in agreement with, so they are not confounded in the discussion.
  4. Discuss what studies might distinguish hypothesis A vs B, or A+B vs C.
  5. Discuss what better methods of measurement might be used if funding is available.
  6. Never attack individuals or research groups. The discussion is about ideas, results, and inferences.

Decisions to accept some management actions may have to be taken immediately, and journal editors must take that into consideration. Procrastination over accepting critiques may be damaging. But all actions must be continually evaluated and changed once the understanding of the problem changes.

There are too many past and present controversies in ecology to recommend reading about them all, so here are only two examples. Dowding et al. (2009) is a comment on suggested methods of controlling introduced pests on Macquarie Island in the Southern Ocean. I was involved in that discussion. A much bigger controversy in Canada involves the Southern Mountain caribou populations, which are in rapid decline. The proximate explanation for the decline is postulated to be predation by wolves, and thus the suggested management action is shooting the wolves. Johnson et al. (2022), Lamb et al. (2022) and Superbie et al. (2022) provide an entrée into this literature and the decisions of what to do now and in the future to prevent extinction of these ungulates. The caribou problem is complicated by the interaction of human alteration of landscapes with the natural processes of predation and food availability. Alas, nothing is simple.

All these ecological dilemmas are controversial, and criticism that evaluates alternative hypotheses is the only way forward for ecologists involved in such controversies. In my opinion most ecological journals are not doing their part in publishing critiques of the conventional wisdom.

Dowding, J.E., Murphy, E.C., Springer, K., Peacock, A.J. & Krebs, C.J. (2009) Cats, rabbits, Myxoma virus, and vegetation on Macquarie Island: a comment on Bergstrom et al. (2009). Journal of Applied Ecology, 46, 1129-1132. doi: 10.1111/j.1365-2664.2009.01690.x.

Johnson, C.J., Ray, J.C. & St-Laurent, M.-H. (2022) Efficacy and ethics of intensive predator management to save endangered caribou. Conservation Science and Practice, 4: e12729. doi: 10.1111/csp2.12729.

Lamb, C.T., Willson, R., Richter, C., Owens-Beek, N., Napoleon, J., Muir, B., McNay, R.S., Lavis, E., Hebblewhite, M., Giguere, L., Dokkie, T., Boutin, S. & Ford, A.T. (2022) Indigenous-led conservation: Pathways to recovery for the nearly extirpated Klinse-Za mountain caribou. Ecological Applications 32 (5): e2581. doi: 10.1002/eap.2581.

Peters, R.H. (1991) A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp. ISBN:0521400171.

Popper, K.R. (1963) Conjectures and Refutations: The Growth of Scientific Knowledge. Routledge and Kegan Paul, London. 412 pp. ISBN-13: 978-0415285940.

Superbie, C., Stewart, K.M., Regan, C.E., Johnstone, J.F. & McLoughlin, P.D. (2022) Northern boreal caribou conservation should focus on anthropogenic disturbance, not disturbance-mediated apparent competition. Biological Conservation, 265, 109426. doi: 10.1016/j.biocon.2021.109426.

Weiner, J. (1995) On the practice of ecology. Journal of Ecology, 83, 153-158.

Woodward, J. & Goodstein, D. (1996) Conduct, misconduct and the structure of science. American Scientist, 84, 479-490.

On Assumptions in Ecology Papers

What can we do as ecologists to improve the publishing standards of ecology papers? I suggest one simple but bold request. We should require at the end of every published paper an annotated list of the assumptions made in the analysis reported in the paper. A tabular format could be devised, with columns for the assumption, the perceived support of and tests for the assumption, and references for this support or lack thereof. I can hear the screaming already, so this table could be put in the Supplementary Material, which most people do not read. The final material of each paper, where there are already statements of who did the writing and who provided the money, could then add a reference to this assumptions table in the Supplementary Material, or a statement that no assumptions were made to reach these conclusions.

The first response I can detect to this recommendation is that many ecologists will differ in what they state are assumptions to their analysis and conclusions. As an example, in wildlife studies, we commonly make the assumption that an individual animal having a radio collar will behave and survive just like another animal with no collar. In analyses of avian population dynamics, we might commonly assume that our visiting nests does not affect their survival probability. We make many such assumptions about random or non-random sampling. My question then is whether or not there is any value in listing these kinds of assumptions. My response is that this approach of listing what the authors think they are assuming should alert the reviewers to the elephants in the room that have not been listed.
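To make the proposal concrete, here is a minimal sketch of what the three-column assumptions table might contain, expressed as a simple data structure. The entries, wording, and citations below are my own invention, echoing the radio-collar and nest-visit examples above; they are not drawn from any real paper.

```python
# Hypothetical entries for the proposed assumptions table.
# Columns follow the suggested format: the assumption itself, the
# perceived support/tests for it, and references (here placeholders).
assumptions_table = [
    {
        "assumption": "Radio-collared animals behave and survive like uncollared animals",
        "support": "Rarely tested; could compare survival of collared vs. tag-only individuals",
        "references": "(hypothetical entry - none cited)",
    },
    {
        "assumption": "Nest visits do not affect nest survival probability",
        "support": "Testable with camera-monitored nests visited at different frequencies",
        "references": "(hypothetical entry - none cited)",
    },
]

# A reviewer could scan the listed assumptions at a glance:
for row in assumptions_table:
    print(f"- {row['assumption']}")
```

A standardized structure like this would also make the assumptions machine-readable, so reviewers or meta-analysts could search them across papers.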

My attention was called to this general issue by the recent paper of Ginzburg and Damuth (2022) in which they contrasted the assumptions of two general theories of functional responses of predators to prey – “prey dependence” versus “ratio dependence”. We have in ecology many such either-or discussions that never seem to end. Consider the long-standing discussion of whether populations can be regulated by factors that are “density dependent” or “density independent”, a much-debated issue that is still with us even though it was incisively analyzed many years ago.  

Experimental ecology is not exempt from assumptions, as outlined in Kimmel et al. (2021) who provide an incisive review of cause and effect in ecological experiments. Pringle and Hutchinson (2020) discuss the failure of assumptions in food web analysis and how these might be resolved with new techniques of analysis. Drake et al. (2021) consider the role of connectivity in arriving at conservation evaluations of patch dynamics, and the importance of demographic contributions to connectivity via dispersal. The key point is that, as ecology progresses, the role of assumptions must be continually questioned in relation to our conclusions about population and community dynamics in relation to conservation and landscape management.

Long ago Peters (1991) wrote an extended critique of how ecology should operate to avoid some of these issues, but his 1991 book is not easily available to students (currently available on Amazon for about $90). To encourage more discussion of these questions from the older to the more current literature, I have copied Peters Chapter 4 to the bottom of my web page at https://www.zoology.ubc.ca/~krebs/books.html for students to download if they wish to discuss these issues in more detail.

Perhaps the message in all this is that ecology has always wished to be “physics-in-miniature”, with grand generalizations like the laws we teach in the physical sciences. Over the last 60 years the battle in the ecology literature has been between this model of physics and the view that every population and community differs, and everything is continuing to change under the climate emergency, so that we can have little general theory in ecology. There are certainly many current generalizations, but they are relatively useless for the transition from the general to the particular that a predictive science requires. The consequence is that we now bounce from individual study to individual study, typically starting from different assumptions, with very limited predictability that is empirically testable. The central issue for ecological science is how we can move from the present fragmentation in our knowledge to a more unified science. Examining the assumptions of our current publications would be a start in this direction.

Drake, J., Lambin, X., and Sutherland, C. (2021). The value of considering demographic contributions to connectivity: a review. Ecography 44, 1-18. doi: 10.1111/ecog.05552.

Ginzburg, L.R. and Damuth, J. (2022). The Issue Isn’t Which Model of Consumer Interference Is Right, but Which One Is Least Wrong. Frontiers in Ecology and Evolution 10, 860542. doi: 10.3389/fevo.2022.860542.

Kimmel, K., Dee, L.E., Avolio, M.L., and Ferraro, P.J. (2021). Causal assumptions and causal inference in ecological experiments. Trends in Ecology & Evolution 36, 1141-1152. doi: 10.1016/j.tree.2021.08.008.

Peters, R.H. (1991) ‘A Critique for Ecology.’ (Cambridge University Press: Cambridge, England.) ISBN:0521400171 (Chapter 4 pdf available at https://www.zoology.ubc.ca/~krebs/books.html)

Pringle, R.M. and Hutchinson, M.C. (2020). Resolving Food-Web Structure. Annual Review of Ecology, Evolution, and Systematics 51, 55-80. doi: 10.1146/annurev-ecolsys-110218-024908.

Seven Prescriptive Principles for Ecologists

After three of us put together a paper to list the principles of applied ecology (Hone, Drake, and Krebs 2015), I thought that perhaps we might have an additional set of general behavioural principles for ecologists. We might think of using these seven principles as a broad template for the work we do in science.

  1. Do good science and avoid opinions that are not based on facts and reliable studies. Do not cite bad science even if it is published in Science.
  2. Appreciate and support your colleagues.
  3. Disagreeing with another scientist does not make it acceptable to be rude; it is preferable to decide what experiment could resolve the disagreement.
  4. Adulterating your data to remove values that do not fit your hypothesis is not acceptable.
  5. Alternative facts have no place in science. A Professor should not profess nonsense. Nonsense should be the sole prerogative of politicians.
  6. Help your fellow scientists whenever possible, and do not envy those whose papers get published in Science or Nature. Your contribution to science cannot be measured by your h-index.
  7. We have only one Earth. We should give up dreaming about moving to Mars and take care of our home here.

Many of these principles can be grouped under the umbrella of ‘scientific integrity’, and there is an extensive discussion in the literature about integrity (Edwards and Roy 2017, Horbach and Halffman 2017). Edwards and Roy (2017, pg. 53) in a (dis-) service to aspiring young academics quote a method for increasing an individual’s h-index without committing outright fraud. Horbach and Halffman (2017) point out that scientists and policymakers adopt different approaches to research integrity. Scientists discuss ‘integrity’ with a positive view of ‘good scientific practice’ that has an ethical focus, while policy people discuss ‘integrity’ with a negative view of ‘misconduct’ that has a legal focus.

The immediate problem with scientific integrity in the USA involves the current President and his preoccupation with defining ‘alternative facts’ (Goldman et al. 2017). But the problem is also a more general one, as illustrated by the long discussion carried out by conservation biologists who asked whether or not a scientist can also be an advocate for a particular policy (Garrard et al. 2016, Carroll et al. 2017).

The bottom line for ecologists and environmental scientists is important, and a serious discussion of scientific integrity should be part of every graduate seminar class. Scientific journals should become more open to challenges to papers that use faulty data, and maintaining high standards must remain number one on the list for all of us.

Carroll, C., Hartl, B., Goldman, G.T., Rohlf, D.J., and Treves, A. 2017. Defending the scientific integrity of conservation-policy processes. Conservation Biology 31(5): 967-975. doi: 10.1111/cobi.12958.

Edwards, M.A., and Roy, S. 2017. Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science 34(1): 51-61. doi: 10.1089/ees.2016.0223.correx.

Garrard, G.E., Fidler, F., Wintle, B.C., Chee, Y.E., and Bekessy, S.A. 2016. Beyond advocacy: Making space for conservation scientists in public debate. Conservation Letters 9(3): 208-212. doi: 10.1111/conl.12193.

Goldman, G.T., Berman, E., Halpern, M., Johnson, C. & Kothari, Y. (2017) Ensuring scientific integrity in the Age of Trump. Science, 355, 696-698. doi: 10.1126/science.aam5733

Hone, J., A. Drake, and C. J. Krebs. 2015. Prescriptive and empirical principles of applied ecology. Environmental Reviews 23:170-176. doi: 10.1139/er-2014-0076

Horbach, S.P.J.M., and Halffman, W. 2017. Promoting virtue or punishing fraud: Mapping contrasts in the language of ‘scientific integrity’. Science and Engineering Ethics 23(6): 1461-1485. doi: 10.1007/s11948-016-9858-y.

 

On the Tasks of Retirement

The end of another year in retirement and time to clean up the office. So this week I recycled 15,000 reprints – my personal library of scientific papers. I would guess that many young scientists would wonder why anyone would have 15,000 paper reprints when you could have all that on a small memory stick. Hence this blog.

Rule #1 of science: read the literature. In 1957 when I began graduate studies there were perhaps 6 journals that you had to read to keep up in terrestrial ecology. Most of them came out 3 or 4 times a year, and if you could not afford to have a personal copy of the paper either by buying the journal or later by xeroxing, you wrote to authors to ask them to post a copy of their paper to you – a reprint. The university even printed special postcards to request reprints with your name and address for the return mail. So scientists gathered paper copies of important papers. Then it became necessary to catalog them, and the simplest thing was to type the title and reference on a 3 by 5-inch card and put them in categories in a file cabinet. All of this will be incomprehensible to modern scientists.

A corollary of this old-style approach to science was that when you published, you had to purchase paper copies of reprints of your own papers. When someone got interested in your research, you would get reprint requests and then had to post copies around the world. All this cost money, and moreover you had to guess how popular your paper might be in the future. The journal usually gave you 25 or 50 free reprints when you published a paper, but if you thought you’d need more you had to purchase them in advance. The first xerox machines were not commercially available until 1959, and xeroxing was quite expensive even when many different types of copying machines started to become available in the late 1960s. But it was always cheaper to buy a reprint when your paper was printed by a journal than it was to xerox a copy of the paper at a later date.

Meanwhile scientists had to write papers and textbooks, so the sorting of references became a major chore for all writers. In 1988 Endnote was first released as a software program that could incorporate references and allow one to sort and print them via a computer, so we were off and running, converting all the 3×5 cards into electronic format. One could then generate a bibliography in a short time and look up forgotten references by author or title or keywords. Through the 1990s the computer world progressed rapidly to approximate what you see today, with computer searches of the literature, and ultimately the ability to download a copy of a PDF of a scientific paper without even telling the author.

But there were two missing elements. All the pre-2000 literature was still piled on library shelves, and, at least in ecology, it is possible that some literature published before 2000 might be worth reading. JSTOR (= Journal Storage) came to the rescue in 1995 and began to scan and compile electronic documents of much of this old literature, so even much of the earlier literature became readily available by the early 2000s. Currently about 1900 journals across most scientific disciplines are available in JSTOR. Since by the late 1990s the volume of the scientific literature was doubling about every 7 years, the electronic world saved all of us from yet more paper copies of important papers.

What was missing still were many government and foundation documents, reviews of programs that were never published in the formal literature, now called the ‘grey literature’. Some of these are lost unless governments scan them and make them available. The result of any loss of this grey literature is that studies are sometimes repeated needlessly and money is wasted.

About 2.5 million scientific papers are published every year at the present time (http://www.cdnsciencepub.com/blog/21st-century-science-overload.aspx), and the consequence of this explosion must be that each of us has to concentrate on a smaller and smaller area of science. What this means for instructors and textbook writers who must synthesize these new contributions is difficult to guess. We need more critical syntheses, but these kinds of papers are not welcomed by those who distribute our research funds, so young scientists feel they should not get caught up in writing an extensive review, however important that is for our science.

In contrast to my feeling of being overwhelmed at the present time, Fanelli and Larivière (2016) concluded that the publication rate of individuals has not changed in the last 100 years. Like many meta-analyses, this one is suspect because it argues against the simple observation in ecology that everyone seems to publish many small papers from their thesis rather than one synthetic paper. Anyone who has served on a search committee for university or government jobs in the last 30 years would attest that the number of publications expected of new graduates has become quite ridiculous. When I started my postdoc in 1962 I had one published paper, and for my first university job in 1964 this had increased to 3. There were at that time many job opportunities for anyone in my position with a total of 2 or 3 publications. To complicate things, Steen et al. (2013) have suggested that the number of retracted papers in science has been increasing at a faster rate than the number of publications. Whether this applies to ecology papers is far from clear, because the problem in ecology is typically that the methods or experimental design are inadequate rather than fraudulent.

If there is a simple message here, it is that the literature and the potential access to it is changing rapidly and young scientists need to be ready for this. Yet progress in ecology is not a simple metric of counts of papers or even citations. Quality trumps quantity.

Fanelli, D., and Larivière, V. 2016. Researchers’ individual publication rate has not increased in a century. PLoS ONE 11(3): e0149504. doi: 10.1371/journal.pone.0149504.

Steen, R.G., Casadevall, A., and Fang, F.C. 2013. Why has the number of scientific retractions increased?  PLoS ONE 8(7): e68397. doi: 10.1371/journal.pone.0068397.

 

12 Publishing Mistakes to Avoid

Graduate students probably feel they are given too much advice on their career goals, but it might be useful to list a few of the mistakes I see often while reviewing papers submitted for publication. Think of it as a cheat sheet to go over before final submission of a paper.

  1. Abstract. Write this first, and in the knowledge that 95% of readers will read only this part of your paper. They need the whole story in concise form: for any data paper, the WHAT, WHERE, WHEN, HOW, and WHY.
  2. Graphics. Choose your graphics carefully. Show them to others to see if they get the point immediately. Label the axes carefully. ‘Population’ could mean population size, population density, population index, or something else. ‘Species diversity’ could mean anything from the vast array of species diversity measures.
  3. Precision. If you are plotting data, a single point on a graph is not very informative without some measure of statistical precision. Dot plots without a measure of precision are fraudulent. Indicate at least in the figure legend what exact measure of precision you have used.
  4. Colour and Symbol Shape. If you have 2 or more sets of data, use colour and different symbol shapes to distinguish them. Check that the size of symbols is adequate for the reductions they will use in the journal printing. Journals that charge for colour will often print in black and white for free but use the colour in the PDF version.
  5. Histograms. Use histograms freely in your papers, but only after reading Cleveland (1994), who recommends never using histograms. More comments are given in my blog “On Graphics in Ecological Presentations”.
  6. Scale of Graph. If you wish to cheat, there are some simple ways of making your data look better. See Cleveland et al. (1982) for a scatter-plot example.
  7. Tables. Tables should be simple if possible. Columns of meaningless numbers do not help the reader understand your conclusions. Most people understand graphs very quickly but tables very slowly.
  8. Discussion. Be your own critic lest your reviewers do this job for you. If some published papers reach conclusions other than you have, discuss why this might be the case. Recognize that no one study is perfect. Indicate where future research might go.
  9. Literature Cited. Check that all the literature cited in the paper is in the bibliography and none is missed. Check the required format of the references, since many editors go into orbit if you use the wrong format or fail to include the doi.
  10. Supplementary Material. Consider carefully what you put in supplementary material. Standards are changing, and simple Excel tables of mean values are often not enough to be useful for additional analysis.
  11. Covering Letter. A last minute but critical piece of the puzzle because you need to capture in a few sentences why the editor should have your paper reviewed or decide to send it right back to you as not of interest. Remember that editors are swamped with papers and rejection rates are often 60-90% at the first cut.
  12. Select the Right Journal. This is perhaps the hardest part. Not everything in ecology can be published in Science or Nature, and given the electronic world of the Web of Science, good work will be picked up in other journals. If you have millions, you can use the journals that you must pay to publish in, but I personally think this is capitalism gone amok. Romesburg (2016, 2017) presents critical data on the issue of commercial journals in science. Read these papers and put them on your Facebook site.
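Point 3 above asks for a measure of statistical precision on every plotted point. As a minimal sketch of one way to compute such a measure before plotting (the bird counts and the large-sample critical value of 2.0 are illustrative assumptions, and the function name is my own), one might attach an approximate 95% confidence interval to a sample mean like this:

```python
import math
import statistics as st

def mean_with_ci(values, t_crit=2.0):
    """Return (mean, half-width of an approximate 95% CI) for a sample.

    t_crit=2.0 is a rough large-sample approximation; for small samples
    the exact t quantile for n-1 degrees of freedom should be used.
    """
    n = len(values)
    mean = st.mean(values)
    se = st.stdev(values) / math.sqrt(n)  # standard error of the mean
    return mean, t_crit * se

# Hypothetical counts from eight survey plots:
counts = [12, 15, 9, 14, 11, 13, 10, 16]
m, hw = mean_with_ci(counts)
print(f"plot this point as {m:.2f} +/- {hw:.2f} (approx. 95% CI)")
```

Whatever measure is chosen (standard error, confidence interval, or standard deviation), the figure legend should state exactly which one the error bars represent.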

 

Cleveland, W.S., Diaconis, P. & McGill, R. (1982) Variables on scatterplots look more highly correlated when the scales are increased. Science, 216, 1138-1141. http://www.jstor.org/stable/1689316

Cleveland, W.S. (1994) The Elements of Graphing Data. AT&T Bell Laboratories, Murray Hill, New Jersey. ISBN: 9780963488411

Romesburg, H.C. (2016) How publishing in open access journals threatens science and what we can do about it. Journal of Wildlife Management, 80, 1145-1151. doi: 10.1002/jwmg.21111

Romesburg, H.C. (2017) How open access is crucial to the future of science: A reply. Journal of Wildlife Management, 81, 567-571. doi: 10.1002/jwmg.21244

 

A Modest Proposal for a New Ecology Journal

I read the occasional ecology paper and ask myself how this particular paper ever got published when it is full of elementary mistakes and shows no understanding of the literature. But alas we can rarely do anything about this as individuals. If you object to what a particular paper has concluded because of its methods or analysis, it is usually impossible to submit a critique that the relevant journal will publish. After all, which editor would like to admit that he or she let a hopeless paper through the publication screen? There are some exceptions to this rule, and I list two examples below in the papers by Barraquand (2014) and Clarke (2014). But if you search the Web of Science you will find few such critiques of published ecology papers.

One solution jumped to mind for this dilemma: start a new ecology journal, perhaps entitled Misleading Ecology Papers: Critical Commentary Unfurled. Papers submitted to this new journal would be restricted to a total of 5 pages and 10 references, and all polemics and personal attacks would be forbidden. The key for submissions would be to state a critique succinctly, and to suggest a better way to construct the experiment or study, a new method of analysis that is more rigorous, or key papers that were missed because they were published before 2000. These rules would potentially leave a large gap through which some very poor papers could avoid criticism, papers that would require a critique longer than the original paper. Perhaps one very long critique could be distinguished as a Review of the Year paper. Alternatively, some long critiques could be published in book form (Peters 1991) and would not require this new journal. The Editor of the journal would require all critiques to be signed by the authors, but would permit, in exceptional circumstances, authors to remain anonymous to prevent job losses or, in more extreme cases, execution by the Mafia. Critiques of earlier critiques would be permitted in the new journal, but an infinite regress would be discouraged. Book reviews could be the subject of a critique, and the great shortage of critical book reviews in the current publication blitz is another aspect of ecological science that is largely missing from the current journals. This new journal would of course be electronic, so there would be no page charges, and all articles would be open access. All the major bibliographic databases like the Web of Science would be encouraged to catalog the publications, and a DOI would be assigned to each paper from CrossRef.

If this new journal became highly successful, it would no doubt be purchased by Wiley-Blackwell or Springer for several million dollars, and if this occurred, the profits would accrue proportionally to all the authors who had published papers to make this journal popular. The sale of course would be contingent on the purchaser guaranteeing not to cancel the entire journal to prevent any criticism of their own published papers.

At the moment, criticism of ecological science does not occur until several years after a poor paper is published, and by that time the Donald Rumsfeld Effect has done its work and the conclusions of the poor study are treated as established truth. For one example, most of the papers critiqued by Clarke (2014) were more than 10 years old. By making the feedback loop much tighter, certainly within one year of a poor paper appearing, budding ecologists could be intercepted before being led off course.

This journal would not be popular with everyone. Older ecologists often strive mightily to prevent any criticism of their prior conclusions, and some young ecologists make their career by pointing out how misleading some of the papers of the older generation are. This new journal would assist in creating a more egalitarian ecological world by producing humility in older ecologists and more feelings of achievements in young ecologists who must build up their status in the science. Finally, the new journal would be a focal point for graduate seminars in ecology by bringing together and identifying the worst of the current crop of poor papers in ecology. Progress would be achieved.

 

Barraquand, F. 2014. Functional responses and predator–prey models: a critique of ratio dependence. Theoretical Ecology 7(1): 3-20. doi: 10.1007/s12080-013-0201-9.

Clarke, P.J. 2014. Seeking global generality: a critique for mangrove modellers. Marine and Freshwater Research 65(10): 930-933. doi: 10.1071/MF13326.

Peters, R.H. 1991. A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp. ISBN:0521400171

 

On Statistical Progress in Ecology

There is a general belief that science progresses over time, and given that the number of scientists is increasing, this is a reasonable first approximation. The use of statistics in ecology has seen ever-increasing improvement in methods of analysis, accompanied by bandwagons. It is one of these bandwagons that I want to discuss here, by raising the general question:

Has the introduction of new methods of analysis in biological statistics led to advances in ecological understanding?

This is a very general question and could be discussed at many levels, but I want to concentrate on the top levels of statistical inference by means of old-style frequentist statistics, Bayesian methods, and information-theoretic methods. I am prompted to ask this question by my reviewing of many papers submitted to ecological journals in which the data are so buried by the statistical analysis that the reader is left confused about whether any progress has been made. Being amazed by the methodology is not the same as being impressed by an advance in ecological understanding.

Old-style frequentist statistics (read the Sokal and Rohlf textbook) has been criticized for concentrating on null hypothesis testing when everyone knows the null hypothesis is not correct. This criticism has led to refinements in methods of inference that rely on effect size and predictive power, which are now standard in new statistical texts. Information-theoretic methods came in to fill the gap by making the data primary (rather than the null hypothesis) and asking which of several hypotheses best fits the data (Anderson et al. 2000). The key here was to recognize that one should have prior expectations, or several alternative hypotheses, in any investigation, as recommended in 1897 by Chamberlin. Bayesian analysis furthered the discussion not only by entertaining several alternative hypotheses but also by allowing prior information to be used in the analysis (McCarthy and Masters 2005). Implicit in both information-theoretic and Bayesian analysis is the recognition that all of the alternative hypotheses might be incorrect, and that the hypothesis selected as 'best' might have very low predictive power.
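The information-theoretic approach can be sketched in a few lines of code. The example below is a hypothetical illustration (the data and both working hypotheses are invented, not drawn from any of the papers cited): it fits two alternative hypotheses, a constant and a linear trend, to a made-up population index by least squares, scores each with the least-squares form of AIC, and converts the AIC differences into Akaike weights that express relative support for each hypothesis.

```python
import math

def gaussian_aic(residuals, n_params):
    # AIC for a least-squares fit with Gaussian errors:
    # AIC = n*ln(RSS/n) + 2k, up to an additive constant
    n = len(residuals)
    rss = sum(r * r for r in residuals)
    return n * math.log(rss / n) + 2 * n_params

def fit_constant(x, y):
    # hypothesis 1: no trend, just a mean
    mean = sum(y) / len(y)
    return [yi - mean for yi in y], 1  # residuals, parameter count

def fit_linear(x, y):
    # hypothesis 2: linear trend, fit by ordinary least squares
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxx = sum((xi - mx) ** 2 for xi in x)
    sxy = sum((xi - mx) * (yi - my) for xi, yi in zip(x, y))
    b = sxy / sxx
    a = my - b * mx
    return [yi - (a + b * xi) for xi, yi in zip(x, y)], 2

# invented data: a population index with a linear trend plus small wobble
x = list(range(10))
y = [2.1 + 0.5 * xi + 0.1 * ((-1) ** xi) for xi in x]

models = {}
for name, fit in [("constant", fit_constant), ("linear", fit_linear)]:
    resid, k = fit(x, y)
    models[name] = gaussian_aic(resid, k)

# Akaike weights: rescale AIC differences into relative support
best = min(models, key=models.get)
min_aic = min(models.values())
weights = {m: math.exp(-0.5 * (a - min_aic)) for m, a in models.items()}
total = sum(weights.values())
weights = {m: w / total for m, w in weights.items()}
print(best, round(weights[best], 3))
```

The point of the weights is exactly the caveat above: they rank the candidate hypotheses against one another, but a weight near 1.0 for the 'best' model says nothing about whether any of the candidates predicts well.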

Two problems have arisen as a result of this change of focus in model selection. The first is the problem of testability. There is an implicit disregard for the old idea that models or conclusions from an analysis should be tested with further data, preferably data obtained independently of those used to find the 'best' model. The assumption seems to be that if we get further data, we should add them to the prior data and update the model so that it somehow approaches the 'perfect' model. This was the original definition of passive adaptive management (Walters 1986), which is now suggested to be a poor model for natural resource management. The second problem is that the model selected as 'best' may be of little use for natural resource management because it has little predictive power. In management for the conservation or exploitation of wildlife, many variables may affect population changes, and it may not be possible to conduct active adaptive management for all of them.

The take-home message is that the conclusions of our papers need to include a measure of progress in ecological insight, whatever statistical methods we use. The significance of our research will not be measured by the number of p-values, AIC values, BIC values, or complicated tables. The key question must be: what new ecological insights have been achieved by these methods?

Anderson, D.R., Burnham, K.P., and Thompson, W.L. 2000. Null hypothesis testing: problems, prevalence, and an alternative. Journal of Wildlife Management 64(4): 912-923.

Chamberlin, T.C. 1897. The method of multiple working hypotheses. Journal of Geology 5: 837-848 (reprinted in Science 148: 754-759 in 1965). doi:10.1126/science.148.3671.754.

McCarthy, M.A., and Masters, P.I.P. 2005. Profiting from prior information in Bayesian analyses of ecological data. Journal of Applied Ecology 42(6): 1012-1019. doi:10.1111/j.1365-2664.2005.01101.x.

Walters, C. 1986. Adaptive Management of Renewable Resources. Macmillan, New York.