Tag Archives: publishing papers

Seven Prescriptive Principles for Ecologists

After three of us put together a paper listing the principles of applied ecology (Hone, Drake, and Krebs 2015), I thought that we might add a set of general behavioural principles for ecologists. We might use these seven principles as a broad template for the work we do in science.

  1. Do good science and avoid opinions that are not based on facts and reliable studies. Do not cite bad science even if it is published in Science.
  2. Appreciate and support your colleagues.
  3. Disagreeing with another scientist is no excuse for being rude; it is preferable to decide what experiment could resolve the disagreement.
  4. Adulterating your data to remove values that do not fit your hypothesis is not acceptable.
  5. Alternative facts have no place in science. A Professor should not profess nonsense. Nonsense should be the sole prerogative of politicians.
  6. Help your fellow scientists whenever possible, and do not envy those whose papers get published in Science or Nature. Your contribution to science cannot be measured by your h-index.
  7. We have only one Earth. We should give up dreaming about moving to Mars and take care of our home here.

Many of these principles can be grouped under the umbrella of ‘scientific integrity’, and there is an extensive discussion in the literature about integrity (Edwards and Roy 2017, Horbach and Halffman 2017). Edwards and Roy (2017, p. 53), in a (dis-)service to aspiring young academics, quote a method for increasing an individual’s h-index without committing outright fraud. Horbach and Halffman (2017) point out that scientists and policymakers adopt different approaches to research integrity. Scientists discuss ‘integrity’ with a positive view of ‘good scientific practice’ that has an ethical focus, while policy people discuss ‘integrity’ with a negative view of ‘misconduct’ that has a legal focus.

The immediate problem with scientific integrity in the USA involves the current President and his preoccupation with defining ‘alternative facts’ (Goldman et al. 2017). But the problem is also a more general one, as illustrated by the long discussion carried out by conservation biologists who asked whether or not a scientist can also be an advocate for a particular policy (Garrard et al. 2016, Carroll et al. 2017).

The bottom line for ecologists and environmental scientists is important, and a serious discussion of scientific integrity should be part of every graduate seminar class. Scientific journals should become more open to challenges to papers that use faulty data, and maintaining high standards must remain number one on the list for all of us.

Carroll, C., Hartl, B., Goldman, G.T., Rohlf, D.J., and Treves, A. 2017. Defending the scientific integrity of conservation-policy processes. Conservation Biology 31(5): 967-975. doi: 10.1111/cobi.12958.

Edwards, M.A., and Roy, S. 2017. Academic research in the 21st century: Maintaining scientific integrity in a climate of perverse incentives and hypercompetition. Environmental Engineering Science 34(1): 51-61. doi: 10.1089/ees.2016.0223.correx.

Garrard, G.E., Fidler, F., Wintle, B.C., Chee, Y.E., and Bekessy, S.A. 2016. Beyond advocacy: Making space for conservation scientists in public debate. Conservation Letters 9(3): 208-212. doi: 10.1111/conl.12193.

Goldman, G.T., Berman, E., Halpern, M., Johnson, C., and Kothari, Y. 2017. Ensuring scientific integrity in the Age of Trump. Science 355: 696-698. doi: 10.1126/science.aam5733.

Hone, J., Drake, A., and Krebs, C.J. 2015. Prescriptive and empirical principles of applied ecology. Environmental Reviews 23: 170-176. doi: 10.1139/er-2014-0076.

Horbach, S.P.J.M., and Halffman, W. 2017. Promoting virtue or punishing fraud: Mapping contrasts in the language of ‘scientific integrity’. Science and Engineering Ethics 23(6): 1461-1485. doi: 10.1007/s11948-016-9858-y.

 

On the Tasks of Retirement

The end of another year in retirement and time to clean up the office. So this week I recycled 15,000 reprints – my personal library of scientific papers. I would guess that many young scientists would wonder why anyone would have 15,000 paper reprints when you could have all that on a small memory stick. Hence this blog.

Rule #1 of science: read the literature. In 1957 when I began graduate studies there were perhaps 6 journals that you had to read to keep up in terrestrial ecology. Most of them came out 3 or 4 times a year, and if you could not afford to have a personal copy of the paper either by buying the journal or later by xeroxing, you wrote to authors to ask them to post a copy of their paper to you – a reprint. The university even printed special postcards to request reprints with your name and address for the return mail. So scientists gathered paper copies of important papers. Then it became necessary to catalog them, and the simplest thing was to type the title and reference on a 3 by 5-inch card and put them in categories in a file cabinet. All of this will be incomprehensible to modern scientists.

A corollary of this old-style approach to science was that when you published, you had to purchase paper copies of reprints of your own papers. When someone got interested in your research, you would get reprint requests and then had to post copies around the world. All this cost money, and moreover you had to guess how popular your paper might be in the future. The journal usually gave you 25 or 50 free reprints when you published a paper, but if you thought you’d need more you had to purchase them in advance. The first xerox machines were not commercially available until 1959, and xeroxing was quite expensive even when many different types of copying machines started to become available in the late 1960s. But it was always cheaper to buy a reprint when your paper was printed by a journal than it was to xerox a copy of the paper at a later date.

Meanwhile scientists had to write papers and textbooks, so the sorting of references became a major chore for all writers. In 1988 Endnote was first released as a software program that could incorporate references and allow one to sort and print them via a computer, so we were off and running, converting all the 3×5 cards into electronic format. One could then generate a bibliography in a short time and look up forgotten references by author or title or keywords. Through the 1990s the computer world progressed rapidly to approximate what you see today, with computer searches of the literature, and ultimately the ability to download a copy of a PDF of a scientific paper without even telling the author.

But there were two missing elements. All the pre-2000 literature was still piled on library shelves, and at least in ecology it is possible that some literature published before 2000 might be worth reading. JSTOR (= Journal Storage) came to the rescue in 1995 and began to scan and compile electronic versions of much of this older literature, so much of it became readily available by the early 2000s. Currently about 1900 journals across most scientific disciplines are available in JSTOR. Since by the late 1990s the volume of the scientific literature was doubling about every 7 years, the electronic world saved all of us from yet more paper copies of important papers.

Still missing were many government and foundation documents and reviews of programs that were never published in the formal literature, now called the ‘grey literature’. Some of these are lost unless governments scan them and make them available. The result of any loss of this grey literature is that studies are sometimes repeated needlessly and money is wasted.

About 2.5 million scientific papers are now published every year (http://www.cdnsciencepub.com/blog/21st-century-science-overload.aspx), and the consequence of this explosion must be that each of us has to concentrate on a smaller and smaller area of science. What this means for instructors and textbook writers who must synthesize these new contributions is difficult to guess. We need more critical syntheses, but these kinds of papers are not welcomed by those who distribute our research funds, so young scientists feel they should not get caught up in writing an extensive review, however important that is for our science.

In contrast to my feeling of being overwhelmed at the present time, Fanelli and Larivière (2016) concluded that the publication rate of individuals has not changed in the last 100 years. Like most meta-analyses, this one is suspect because it argues against the simple observation in ecology that everyone seems to publish many small papers from their thesis rather than one synthetic one. Anyone who has served on a search committee for university or government jobs in the last 30 years would attest that the number of publications now expected of new graduates has become quite ridiculous. When I started my postdoc in 1962 I had one published paper, and for my first university job in 1964 this had increased to 3. There were at that time many job opportunities for anyone in my position with a total of 2 or 3 publications. To complicate things, Steen et al. (2013) have suggested that the number of retracted papers in science has been increasing at a faster rate than the number of publications. Whether this applies to ecology papers is far from clear, because the problem in ecology is typically that the methods or experimental design are inadequate rather than fraudulent.

If there is a simple message here, it is that the literature and the potential access to it is changing rapidly and young scientists need to be ready for this. Yet progress in ecology is not a simple metric of counts of papers or even citations. Quality trumps quantity.

Fanelli, D., and Larivière, V. 2016. Researchers’ individual publication rate has not increased in a century. PLoS ONE 11(3): e0149504. doi: 10.1371/journal.pone.0149504.

Steen, R.G., Casadevall, A., and Fang, F.C. 2013. Why has the number of scientific retractions increased? PLoS ONE 8(7): e68397. doi: 10.1371/journal.pone.0068397.

 

12 Publishing Mistakes to Avoid

Graduate students probably feel they are given too much advice on their career goals, but it might be useful to list a few of the mistakes I see often while reviewing papers submitted for publication. Think of it as a cheat sheet to go over before final submission of a paper.

  1. Abstract. Write this first, in the realization that 95% of readers will read only this part of your paper. Give them the whole story in concise form; for any data paper that means the WHAT, WHERE, WHEN, HOW, and WHY.
  2. Graphics. Choose your graphics carefully. Show them to others to see if they get the point immediately. Label the axes carefully. ‘Population’ could mean population size, population density, population index, or something else. ‘Species diversity’ could mean any of the vast array of species diversity measures.
  3. Precision. If you are plotting data, a single point on a graph is not very informative without some measure of statistical precision. Dot plots without a measure of precision are fraudulent. Indicate at least in the figure legend what exact measure of precision you have used.
  4. Colour and Symbol Shape. If you have 2 or more sets of data, use colour and different symbol shapes to distinguish them. Check that the size of symbols is adequate for the reductions they will use in the journal printing. Journals that charge for colour will often print in black and white for free but use the colour in the PDF version.
  5. Histograms. Use histograms freely in your papers but only after reading Cleveland (1994), who recommends never using histograms. More comments are given in my blog “On Graphics in Ecological Presentations”.
  6. Scale of Graph. If you wish to cheat, there are some simple ways of making your data look better. See Cleveland et al. (1982) for a scatter-plot example.
  7. Tables. Tables should be simple if possible. Columns of meaningless numbers do not help the reader understand your conclusions. Most people understand graphs very quickly but tables very slowly.
  8. Discussion. Be your own critic lest your reviewers do this job for you. If some published papers reach conclusions other than you have, discuss why this might be the case. Recognize that no one study is perfect. Indicate where future research might go.
  9. Literature Cited. Check that all the references cited in the paper appear in the bibliography and none are missing. Check the required format of the references, since many editors go into orbit if you use the wrong format or fail to include the doi.
  10. Supplementary Material. Consider carefully what you put in supplementary material. Standards are changing and simple excel tables of mean values are often not enough to be useful for additional analysis.
  11. Covering Letter. A last minute but critical piece of the puzzle because you need to capture in a few sentences why the editor should have your paper reviewed or decide to send it right back to you as not of interest. Remember that editors are swamped with papers and rejection rates are often 60-90% at the first cut.
  12. Select the Right Journal. This is perhaps the hardest part. Not everything in ecology can be published in Science or Nature, and given the electronic world of the Web of Science, good work will be picked up in other journals. If you have millions, you can use the journals that you must pay to publish in, but I personally think this is capitalism run amok. Romesburg (2016, 2017) presents critical data on the issue of commercial journals in science. Read these papers and put them on your Facebook site.
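Point 3 above can be made concrete with a short sketch in Python (standard library only). The density values and the helper name `mean_with_precision` are invented for illustration; the point is simply that every plotted mean should travel with its standard error or confidence interval, stated explicitly in the figure legend.

```python
import math
import statistics

def mean_with_precision(values, t_crit=2.093):
    """Return the mean, standard error, and an approximate 95% CI for a sample.
    t_crit is the two-tailed t value for df = n - 1 (2.093 corresponds to n = 20)."""
    n = len(values)
    mean = statistics.fmean(values)
    sem = statistics.stdev(values) / math.sqrt(n)  # standard error = sd / sqrt(n)
    ci = (mean - t_crit * sem, mean + t_crit * sem)
    return mean, sem, ci

# Hypothetical density estimates (animals per hectare) from 20 plots
densities = [9.8, 11.2, 10.5, 9.1, 12.0, 10.8, 9.9, 11.5, 10.2, 9.6,
             10.9, 11.8, 9.4, 10.1, 11.1, 10.6, 9.7, 10.3, 11.4, 10.0]
mean, sem, ci = mean_with_precision(densities)
# The figure legend should then say, e.g.:
# "Error bars show ±1 standard error of the mean (n = 20 plots)."
print(f"mean = {mean:.2f}, SE = {sem:.2f}, 95% CI = ({ci[0]:.2f}, {ci[1]:.2f})")
```

The t value of 2.093 is the two-tailed 95% critical value for 19 degrees of freedom; for other sample sizes, look up the appropriate value rather than defaulting to 1.96.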

 

Cleveland, W.S., Diaconis, P. & McGill, R. (1982) Variables on scatterplots look more highly correlated when the scales are increased. Science, 216, 1138-1141. http://www.jstor.org/stable/1689316

Cleveland, W.S. (1994) The Elements of Graphing Data. AT&T Bell Laboratories, Murray Hill, New Jersey. ISBN: 9780963488411

Romesburg, H.C. (2016) How publishing in open access journals threatens science and what we can do about it. Journal of Wildlife Management, 80, 1145-1151. doi: 10.1002/jwmg.21111

Romesburg, H.C. (2017) How open access is crucial to the future of science: A reply. Journal of Wildlife Management, 81, 567-571. doi: 10.1002/jwmg.21244

 

A Modest Proposal for a New Ecology Journal

I read the occasional ecology paper and ask myself how this particular paper ever got published when it is full of elementary mistakes and shows no understanding of the literature. But alas we can rarely do anything about this as individuals. If you object to what a particular paper has concluded because of its methods or analysis, it is usually impossible to submit a critique that the relevant journal will publish. After all, which editor would like to admit that he or she let a hopeless paper through the publication screen? There are some exceptions to this rule, and I list two examples below in the papers by Barraquand (2014) and Clarke (2014). But if you search the Web of Science you will find few such critiques for published ecology papers.

One solution jumped to mind for this dilemma: start a new ecology journal, perhaps entitled Misleading Ecology Papers: Critical Commentary Unfurled. Papers submitted to this new journal would be restricted to a total of 5 pages and 10 references, and all polemics and personal attacks would be forbidden. The key for submissions would be to state a critique succinctly, and to suggest a better way to construct the experiment or study, a more rigorous method of analysis, or key papers that were missed because they were published before 2000. These rules would potentially leave a large gap for some very poor papers to avoid criticism, papers that would require a critique longer than the original paper. Perhaps one very long critique could be distinguished as a Review of the Year paper. Alternatively, some long critiques could be published in book form (Peters 1991), and not require this new journal. The Editor of the journal would require all critiques to be signed by the authors, but would permit the authors to remain anonymous in exceptional circumstances, to prevent job losses or, in more extreme cases, execution by the Mafia. Critiques of earlier critiques would be permitted in the new journal, but an infinite regress would be discouraged. Book reviews could be the subject of a critique, and the great shortage of critical book reviews in the current publication blitz is another aspect of ecological science that is largely missing from the current journals. This new journal would of course be electronic, so there would be no page charges, and all articles would be open access. All the major bibliographic databases like the Web of Science would be encouraged to catalog the publications, and a DOI would be assigned to each paper from CrossRef.

If this new journal became highly successful, it would no doubt be purchased by Wiley-Blackwell or Springer for several million dollars, and if this occurred, the profits would accrue proportionally to all the authors who had published papers to make this journal popular. The sale of course would be contingent on the purchaser guaranteeing not to cancel the entire journal to prevent any criticism of their own published papers.

At the moment, criticism of ecological science does not appear until several years after a poor paper is published, and by that time the Donald Rumsfeld Effect will have conferred the status of truth on its conclusions. For one example, most of the papers critiqued by Clarke (2014) were more than 10 years old. By making the feedback loop much tighter, certainly within one year of a poor paper appearing, budding ecologists could be intercepted before being led off course.

This journal would not be popular with everyone. Older ecologists often strive mightily to prevent any criticism of their prior conclusions, while some young ecologists make their careers by pointing out how misleading some of the papers of the older generation are. This new journal would assist in creating a more egalitarian ecological world by producing humility in older ecologists and more feelings of achievement in young ecologists, who must build up their status in the science. Finally, the new journal would be a focal point for graduate seminars in ecology by bringing together and identifying the worst of the current crop of poor papers in ecology. Progress would be achieved.

 

Barraquand, F. 2014. Functional responses and predator–prey models: a critique of ratio dependence. Theoretical Ecology 7(1): 3-20. doi: 10.1007/s12080-013-0201-9.

Clarke, P.J. 2014. Seeking global generality: a critique for mangrove modellers. Marine and Freshwater Research 65(10): 930-933. doi: 10.1071/MF13326.

Peters, R.H. 1991. A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp. ISBN:0521400171

 

On Statistical Progress in Ecology

There is a general belief that science progresses over time, and given that the number of scientists is increasing, this is a reasonable first approximation. The use of statistics in ecology has been a story of ever-improving methods of analysis, accompanied by bandwagons. It is one of these bandwagons that I want to discuss here by raising the general question:

Has the introduction of new methods of analysis in biological statistics led to advances in ecological understanding?

This is a very general question and could be discussed at many levels, but I want to concentrate on the top levels of statistical inference by means of old-style frequentist statistics, Bayesian methods, and information-theoretic methods. I am prompted to ask this question because, in reviewing many papers submitted to ecological journals, I find the data so buried by the statistical analysis that the reader is left confused about whether any progress has been made. Being amazed by the methodology is not the same as being impressed by the advance in ecological understanding.

Old-style frequentist statistics (read the Sokal and Rohlf textbook) has been criticized for concentrating on null hypothesis testing when everyone knows the null hypothesis is not correct. This has led to refinements in methods of inference that rely on effect size and predictive power, which are now the standard in new statistical texts. Information-theoretic methods came in to fill the gap by making the data primary (rather than the null hypothesis) and asking which of several hypotheses best fits the data (Anderson et al. 2000). The key here was to recognize that one should have prior expectations, or several alternative hypotheses, in any investigation, as recommended by Chamberlin in 1897. Bayesian analysis furthered the discussion not only by having several alternative hypotheses but also through the ability to use prior information in the analysis (McCarthy and Masters 2005). Implicit in both information-theoretic and Bayesian analysis is the recognition that all of the alternative hypotheses might be incorrect, and that the hypothesis selected as ‘best’ might have very low predictive power.
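The information-theoretic approach can be sketched in a few lines. This is a hedged illustration with invented data, assuming normally distributed errors so that AIC takes the least-squares form n·ln(RSS/n) + 2k; three polynomial ‘hypotheses’ are compared by delta-AIC and Akaike weights.

```python
import numpy as np

def aic_least_squares(rss, n, k):
    """AIC for a least-squares fit with normal errors:
    n*ln(RSS/n) + 2k, where k counts parameters including the error variance."""
    return n * np.log(rss / n) + 2 * k

rng = np.random.default_rng(1)
x = np.linspace(0, 10, 40)
y = 2.0 + 0.5 * x + rng.normal(0, 1.0, x.size)  # the 'true' process is linear

aics = {}
for degree in (1, 2, 3):  # three competing hypotheses
    coefs = np.polyfit(x, y, degree)
    rss = float(np.sum((y - np.polyval(coefs, x)) ** 2))
    aics[f"degree {degree}"] = aic_least_squares(rss, x.size, degree + 2)

# Delta-AIC and Akaike weights: relative support for each model given the data
best = min(aics.values())
deltas = {m: a - best for m, a in aics.items()}
weights = {m: np.exp(-0.5 * d) for m, d in deltas.items()}
total = sum(weights.values())
for m in aics:
    print(f"{m}: dAIC = {deltas[m]:.2f}, weight = {weights[m] / total:.2f}")
```

Akaike weights sum to one and can be read as the relative support for each model in the candidate set; they say nothing about whether any model in the set predicts well.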

Two problems have arisen as a result of this change of focus in model selection. The first is the problem of testability. There is an implicit disregard for the old idea that models or conclusions from an analysis should be tested with further data, preferably data obtained independently of those used to find the ‘best’ model. The assumption might be made that if we get further data, we should add it to the prior data and update the model so that it somehow begins to approach the ‘perfect’ model. This was the original definition of passive adaptive management (Walters 1986), which is now suggested to be a poor model for natural resource management. The second problem is that the model selected as ‘best’ may be of little use for natural resource management because it has little predictive power. In management issues for the conservation or exploitation of wildlife there may be many variables that affect population changes, and it may not be possible to conduct active adaptive management for all of these variables.
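The testability problem can be illustrated with a minimal sketch (again with invented data): fit the ‘best’ model on one data set, then measure its predictive error on an independent data set rather than simply folding the new data into an updated fit.

```python
import numpy as np

rng = np.random.default_rng(7)

# Hypothetical survey: fit on one year's data, test on an independent year
x_fit = rng.uniform(0, 10, 30)
y_fit = 3.0 + 0.8 * x_fit + rng.normal(0, 2.0, 30)
x_new = rng.uniform(0, 10, 30)       # independent data from the same process
y_new = 3.0 + 0.8 * x_new + rng.normal(0, 2.0, 30)

coefs = np.polyfit(x_fit, y_fit, 1)  # the 'best' model from the first data set

def rmse(obs, pred):
    """Root mean squared prediction error."""
    return float(np.sqrt(np.mean((obs - pred) ** 2)))

in_sample = rmse(y_fit, np.polyval(coefs, x_fit))
out_sample = rmse(y_new, np.polyval(coefs, x_new))
print(f"in-sample RMSE = {in_sample:.2f}, out-of-sample RMSE = {out_sample:.2f}")
```

A large gap between in-sample and out-of-sample error is the warning sign: the model selected as ‘best’ within the original data may still predict poorly, which is the quantity that matters for management.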

The take-home message is that the conclusions of our papers need to include a measure of progress in ecological insight, whatever statistical methods we use. The significance of our research will not be measured by the number of p-values, AIC values, BIC values, or complicated tables. The key question must be: What new ecological insights have been achieved by these methods?

Anderson, D.R., Burnham, K.P., and Thompson, W.L. 2000. Null hypothesis testing: problems, prevalence, and an alternative. Journal of Wildlife Management 64(4): 912-923.

Chamberlin, T.C. 1897. The method of multiple working hypotheses. Journal of Geology 5: 837-848 (reprinted in Science 148: 754-759 in 1965). doi:10.1126/science.148.3671.754.

McCarthy, M.A., and Masters, P.I.P. 2005. Profiting from prior information in Bayesian analyses of ecological data. Journal of Applied Ecology 42(6): 1012-1019. doi:10.1111/j.1365-2664.2005.01101.x.

Walters, C. 1986. Adaptive Management of Renewable Resources. Macmillan, New York.