On the Use of Statistics in Ecological Research

There is an ever-deepening cascade of statistical methods, and if you are going to be up to date you will have to use and cite some of them in your research reports or theses. But before you jump into these methods, you might consider a few tidbits of advice. I suggest three rules and a few simple guidelines:

Rule 1. For descriptive papers, keep to descriptive statistics. Every good basic statistics book gives advice on when to use means to describe “average values” and when to use medians or percentiles. Follow that advice, and do not generate any hypotheses in your report except in the discussion. Follow also the simple advice of statisticians not to generate and then test a hypothesis with the same set of data. Descriptive papers are most valuable: they can lead us to speculations and suggest hypotheses and explanations, but they do not lead us to strong inference.
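The reason the mean/median choice matters is easy to see with a skewed sample. A minimal sketch using Python’s standard `statistics` module (the plot counts here are invented purely for illustration):

```python
import statistics

# Hypothetical counts of individuals per plot; one plot (50) is an outlier
counts = [3, 5, 4, 7, 50, 4, 6, 5, 3, 4]

mean = statistics.mean(counts)         # pulled upward by the outlier
median = statistics.median(counts)     # robust to the outlier
quartiles = statistics.quantiles(counts, n=4)  # 25th, 50th, 75th percentiles

print(mean, median, quartiles)
```

With one extreme plot, the mean (9.1) suggests a much denser population than the median (4.5), which is why a good descriptive paper reports the summary appropriate to the shape of the data.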

Rule 2. For explanatory papers, the statistical rules become more complicated. For scientific explanation you need two or more alternative hypotheses that make different, non-overlapping predictions. The predictions must involve biological or physical mechanisms. Correlations alone are not mechanisms; they may help lead you to a mechanism, but the key is that the mechanism must involve a cause and an effect. A correlation between a decline in whale numbers and a decline in sunspot numbers may be interesting, but only if you can tie the correlation to an actual mechanism that affects the birth or death rates of the whales.

Rule 3. For experimental papers, you have access to a large variety of books and papers on experimental design. You must have a control or unmanipulated group or, for a comparative experiment, a group A with treatment X and a group B with treatment Y. Many rules in the experimental-design literature give good guidance (e.g. Anderson 2008; Eberhardt 2003; Johnson 2002; Shadish et al. 2002; Underwood 1990).
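The core of that design advice, random assignment of experimental units to control and treatment, takes only a few lines. A minimal sketch with hypothetical plot names (the design texts cited above cover replication and interspersion, which this toy example omits):

```python
import random

# Ten hypothetical study plots to be split between control and treatment X
plots = [f"plot_{i}" for i in range(1, 11)]

rng = random.Random(42)  # fixed seed so the assignment is reproducible
shuffled = plots[:]
rng.shuffle(shuffled)

control = sorted(shuffled[:5])    # unmanipulated group
treatment = sorted(shuffled[5:])  # group receiving treatment X
print("control:  ", control)
print("treatment:", treatment)
```

Recording the seed (or the realized assignment) in your methods section makes the randomization auditable.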

For all these ecology papers, consider the best of the recent statistical admonitions. Use statistics to enlighten the reader, not to obfuscate. Use graphics to illustrate major results. Avoid p-values (Anderson et al. 2000; Ioannidis 2019a, 2019b). Measure effect sizes for different treatments (Nakagawa and Cuthill 2007). Add to these general admonitions the conventional rules of paper or report submission: do not argue with the editor, argue a small amount with the reviewers (none are perfect), and put your main messages in the abstract. And remember that it is possible some interesting research was done before the year 2000.
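The effect-size alternative to a bare p-value can be sketched briefly. This example computes Cohen’s d, one common standardized mean difference (Nakagawa and Cuthill 2007 survey several such measures); the two groups of measurements are invented for illustration:

```python
import math
import statistics

# Invented measurements from two hypothetical treatment groups
group_a = [12.1, 13.4, 11.8, 12.9, 13.1, 12.5]
group_b = [10.2, 11.1, 10.8, 9.9, 10.5, 11.0]

mean_a, mean_b = statistics.mean(group_a), statistics.mean(group_b)
var_a, var_b = statistics.variance(group_a), statistics.variance(group_b)
n_a, n_b = len(group_a), len(group_b)

# Pooled standard deviation, then Cohen's d: the difference in means
# expressed in standard-deviation units, interpretable across studies
pooled_sd = math.sqrt(((n_a - 1) * var_a + (n_b - 1) * var_b) / (n_a + n_b - 2))
d = (mean_a - mean_b) / pooled_sd
print(round(d, 2))
```

Unlike a p-value, d reports how large the treatment difference is, and it remains meaningful when comparing across experiments with different sample sizes.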

Anderson, D.R. (2008). ‘Model Based Inference in the Life Sciences: A Primer on Evidence.’ (Springer: New York.) 184 pp.

Anderson, D.R., Burnham, K.P., and Thompson, W.L. (2000). Null hypothesis testing: problems, prevalence, and an alternative. Journal of Wildlife Management 64, 912-923.

Eberhardt, L.L. (2003). What should we do about hypothesis testing? Journal of Wildlife Management 67, 241-247.

Ioannidis, J.P.A. (2019a). Options for publishing research without any P-values. European Heart Journal 40, 2555-2556. doi: 10.1093/eurheartj/ehz556.

Ioannidis, J.P.A. (2019b). What have we (not) learnt from millions of scientific papers with P values? American Statistician 73, 20-25. doi: 10.1080/00031305.2018.1447512.

Johnson, D.H. (2002). The importance of replication in wildlife research. Journal of Wildlife Management 66, 919-932.

Nakagawa, S. and Cuthill, I.C. (2007). Effect size, confidence interval and statistical significance: a practical guide for biologists. Biological Reviews 82, 591-605. doi: 10.1111/j.1469-185X.2007.00027.x.

Shadish, W.R., Cook, T.D., and Campbell, D.T. (2002). ‘Experimental and Quasi-Experimental Designs for Generalized Causal Inference.’ (Houghton Mifflin Company: New York.)

Underwood, A.J. (1990). Experiments in ecology and management: their logics, functions and interpretations. Australian Journal of Ecology 15, 365-389.
