
Do We Need to Replicate Ecological Experiments?

If you read papers on the philosophy of science you will very quickly come across the concept of replication, the requirement to test the same hypothesis twice or more before you become too attached to your conclusions. As a new student or research scientist you face this dilemma when deciding whether to replicate a previous study. If you do replicate, you risk being classed as an inferior scientist with no ideas of your own. If you refuse to replicate and instead try something new, you will be criticized as reckless and as failing to build a solid foundation for your science.

There is an excellent literature discussing the problem of replication in ecology in particular and in science in general. Nichols et al. (2019) argue persuasively that a single experiment is not enough. Amrhein et al. (2019) approach the problem from a statistical point of view and caution that single statistical tests are a shaky platform for drawing solid conclusions. They point out that statistical tests evaluate not only hypotheses but also countless assumptions, including, for ecological studies, the exact plant and animal community in which the study takes place. In contrast to ecological science, medicine probably has replication problems at the other extreme, too many replications, leading to a waste of research money and talent (Siontis and Ioannidis 2018).

A graduate seminar could profitably focus on a list of the most critical experiments or generalizations of our time in any subdiscipline of ecology. Given such a list we could ask whether the conclusions still stand as time has passed, whether climate change has upset the older predictions, or whether the observations or experiments have been replicated to test the strength of the conclusions. We can develop a stronger science of ecology only if we recognize both the strengths and the limitations of our current ideas.

Baker (2016) approached this issue by asking the simple question “Is there a reproducibility crisis?” Her results are well worth reading. She had to cast a wide net across the sciences, so unfortunately there are no details specific to ecological science in her paper. A similar question in ecology would have to distinguish observational studies from experimental manipulations to narrow down a current view of the issue. Parker (2013) explored an interesting example, analyzing a particular evolutionary hypothesis about plumage colour in a single bird species; the array of problems he uncovered in an extensive literature on sexual selection is astonishing.

A critic might argue that ecology is largely a descriptive science that should not expect to develop observational or experimental conclusions that extend very far beyond the present. If that is the case, one might argue that replication over time is important for deciding when an established principle is no longer valid. Ecological predictions based on current knowledge may be much less reliable than we would hope, but the only way to find out is to replicate. Scientific progress depends on identifying goals and determining how far we have progressed toward achieving them (Currie 2019). To advance we need to discuss replication in ecology.

Amrhein, V., Trafimow, D. & Greenland, S. (2019) Inferential statistics as descriptive statistics: There is no replication crisis if we don’t expect replication. American Statistician, 73, 262-270. doi: 10.1080/00031305.2018.1543137.

Baker, M. (2016) Is there a reproducibility crisis in science? Nature, 533, 452-454.

Currie, D.J. (2019) Where Newton might have taken ecology. Global Ecology and Biogeography, 28, 18-27. doi: 10.1111/geb.12842.

Nichols, J.D., Kendall, W.L. & Boomer, G.S. (2019) Accumulating evidence in ecology: Once is not enough. Ecology and Evolution, 9, 13991-14004. doi: 10.1002/ece3.5836.

Parker, T.H. (2013) What do we really know about the signalling role of plumage colour in blue tits? A case study of impediments to progress in evolutionary biology. Biological Reviews, 88, 511-536. doi: 10.1111/brv.12013.

Siontis, K.C. & Ioannidis, J.P.A. (2018) Replication, duplication, and waste in a quarter million systematic reviews and meta-analyses. Circulation: Cardiovascular Quality and Outcomes, 11, e005212. doi: 10.1161/CIRCOUTCOMES.118.005212.

The Meaninglessness of Random Sampling

Statisticians tell us that random sampling is necessary for making inferences from the particular to the general. If field ecologists accept this dictum, we can only conclude that generality is very difficult, if not impossible, to reach. We can reach conclusions about specific local areas, and that is valuable, but much of our current ecological wisdom on populations and communities rests on the shaky foundation of non-random sampling. We rarely try to define the statistical ‘population’ that we are studying and about which we are attempting to make inferences with our data. Some examples might be useful to illustrate this problem.

Marine ecologists are mostly agreed that rising sea surface temperature is destroying coral reef ecosystems. This is certainly true, but it camouflages the fact that very few square kilometres of coral reefs like the Great Barrier Reef have been comprehensively studied with a proper sampling design (e.g. Green 1979, Lewis 2004). When we analyse the details of coral reef declines, we find that many species are affected by rising sea temperatures, but some are not, and it is possible that some species will adapt by natural selection to the higher temperatures. So we quite rightly raise the alarm about the future of coral reefs. But in doing so we often neglect to specify the statistical ‘population’ to which our conclusions apply.

Most people would agree that such an approach to generalizing ecological findings is tantamount to asking how many angels can dance on the head of a pin, and that in practice we can ignore the problem and generalize from the studied reefs to all reefs. And physical scientists would point out that physics and chemistry seek generality and ignore this problem because one can do chemistry in Zurich or in Toronto using the same laws, which do not change with time or place. But the ecosystems of today are not going to be the ecosystems of tomorrow, so generality in time cannot be guaranteed, as paleoecologists pointed out long ago.

It is the spatial problem of field studies that collides most strongly with the statistical rule to sample at random. Consider a hypothetical example of a large national park that has recently been burned by this year’s fires in the Northern Hemisphere. If we wish to measure the recovery of the vegetation, we need to set out plots to resample. We have two choices: (1) lay out as many permanent plots as possible and sample them for several years to follow recovery, or (2) lay out plots at random each year, never repeating exactly the same areas, to satisfy the statisticians’ specification that we “random sample” the recovery in the park. We would typically choose (1) for two reasons: setting up new plots each year as per (2) would greatly increase the initial field work of locating the random plots, and travel time between the plots would probably be much greater. Using approach (1) we would probably set out plots with relatively easy access from roads or trails to minimize the costs of sampling. We ignore the advice of statisticians because of the real-world constraints of time and money, and we hope to answer the initial questions about recovery with this simpler design.
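A small simulation makes the risk in approach (1) concrete. The sketch below is a minimal illustration, not a model of any real park: it assumes an invented recovery gradient in which vegetation recovers more completely farther from roads, and compares a convenience sample of road-accessible plots with a random sample of the whole park. All numbers are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical burned park: percent vegetation cover one year after fire
# increases with distance from the road. All values are invented for
# illustration, not taken from any real survey.
n_park = 10_000
dist_from_road = rng.uniform(0, 10, n_park)                  # km
true_cover = 20 + 3 * dist_from_road + rng.normal(0, 5, n_park)

# Design 1: convenience sampling, permanent plots within 2 km of a road.
near_road = true_cover[dist_from_road < 2]
convenience_plots = rng.choice(near_road, size=30, replace=False)

# Design 2: thirty plots located at random across the whole park.
random_plots = rng.choice(true_cover, size=30, replace=False)

print(f"true park-wide mean cover:   {true_cover.mean():.1f}%")
print(f"convenience-sample estimate: {convenience_plots.mean():.1f}%")
print(f"random-sample estimate:      {random_plots.mean():.1f}%")
```

The convenience sample can be a perfectly good estimate for the roadside strip; the trouble is only that the statistical ‘population’ it describes is not the park as a whole, which is precisely the point about defining the population of inference.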

I could find few papers in the ecological literature that discuss this general problem of inference from the particular to the general (Ives 2018, Hauss 2018), and only one that deals with a real-world situation (Ducatez 2019). I would be glad if readers sent me more references on this problem.

The bottom line is that if your supervisor or research coordinator criticizes your field work because your study areas were not randomly placed or your replicate sites were not chosen at random, tell him or her politely that virtually no ecological field research is done by truly random sampling. Does this make our research less useful for achieving ecological understanding? Probably not. And we might note that medical science works in exactly the same way field ecologists do: do what you can with the money and time you have. The rule that scientific knowledge requires random sampling is, in my opinion, often a pseudo-problem.

Ducatez, S. (2019) Which sharks attract research? Analyses of the distribution of research effort in sharks reveal significant non-random knowledge biases. Reviews in Fish Biology and Fisheries, 29, 355-367. doi: 10.1007/s11160-019-09556-0.

Green, R.H. (1979) Sampling Design and Statistical Methods for Environmental Biologists. Wiley, New York. 257 pp.

Hauss, K. (2018) Statistical Inference from Non-Random Samples. Problems in Application and Possible Solutions in Evaluation Research. Zeitschrift für Evaluation, 17, 219-240.

Ives, A.R. (2018) Informative Irreproducibility and the Use of Experiments in Ecology. BioScience, 68, 746-747. doi: 10.1093/biosci/biy090.

Lewis, J. (2004) Has random sampling been neglected in coral reef faunal surveys? Coral Reefs, 23, 192-194. doi: 10.1007/s00338-004-0377-y.

On Replication in Ecology

All statistics books recommend replication in scientific studies. I suggest that this recommendation has been carried to an extreme in current ecological studies. In approximately 50% of the ecological papers I read in our best journals (a biased sample to be sure), the results are not new and have been replicated many times in the past, often in papers not cited in the ‘new’ paper. There is no harm in this, but it does not lead to progress in our understanding of populations, communities or ecosystems, nor does it lead to new ecological theory. We do need replication that examines the major ideas in ecology, and this is good. On the other hand, we do not need more and more studies of what we might call ecological truths. An analogy would be testing the Flat Earth Hypothesis in 2022 to examine its predictions. It is time to move on.

There is an extensive literature on hypothesis testing, which can be crudely summarized as follows: observations of X can be explained by hypothesis A, B, or C, each of which has unique predictions associated with it. A series of experiments is carried out to test these predictions, and the most strongly supported hypothesis, call it B*, is accepted as current knowledge. Explanation B* is useful scientifically only if it leads to a new set of predictions D, E, and F, which are then tested. This chain of explanation is never simple, and there can be much disagreement, which may mean sharpening the hypotheses that follow from explanation B*. At the same time there will be some scientists who, despite all the accumulated data, still accept the Flat Earth Hypothesis. If you think this is nonsense, you have not been reading the news about the Covid epidemic.

Further complications arise from two streams of thought. The first holds that the way forward is via simple mathematical models of the system. There is much literature on modelling in ecology, which is most useful when it is based on good field data; but for too many ecological problems the model is believed more than the data, and the assumptions of the models are not stated or tested. If you think that models lead directly to progress, examine again the Covid modelling situation of the past two years.

The second stream of thought that complicates ecological science is descriptive ecology. Many papers in the current literature describe a set of data or events with no hypothesis in mind. The major offenders are the biodiversity scientists and the ‘measure everything’ scientists. The basis of this approach seems to be that all our data will be of major use in 50, 100 or however many years, so we must collect major archives of ecological data. Biodiversity is the bandwagon of the present time, and classifying and categorizing species is a most useful endeavour. As such it leads to much natural history that is interesting and important for many non-scientists, and almost everyone would agree that we should protect biodiversity. But while biodiversity studies are a necessary background to ecological studies, they do not by themselves lead to progress in the scientific understanding of the ecosphere.

Conservation biology is closely associated with biodiversity science, but it suffers even more from the problems outlined above. Conservation is important for everyone, but the current cascade of papers in conservation biology is too often of little use. We do not need opinion pieces; we need clear thinking and concrete data to solve conservation issues. This is not easy, since once a species is endangered there are typically too few individuals left to study properly. And as in the rest of ecological science, funding is so poor that reliable data cannot be obtained, and we are left with more unvalidated indices or opinions on species changes. Climate change puts an enormous kink in any conservation recommendations, but it also serves as a panchreston, a universal explanation for every possible change that occurs in ecosystems, and thus can be used, with spurious correlations, to justify any research agenda, good or poor.

We could advance our ecological understanding more rapidly by demanding a coherent theoretical framework for all proposed programs of research. Grace (2019) argues that plant ecology has made much progress during the last 80 years, in contrast to the less positive overview of Peters (1991) and my observations outlined above. Prosser (2020) provides a critique of microbial ecology that echoes what Peters argued in 1991. All these divergences of opinion would be worthy of a graduate seminar discussion.

If you think all my observations are nonsense, you should read the perceptive book by Peters (1991), written 30 years ago, on the state of ecological science, as well as Grace’s (2019) insightful evaluation of that book and the excellent overview of these questions in Currie (2019). I suggest that many of the issues Peters (1991) raised are still with us in 2022, and his general conclusion that ecology is a weak science rather than a strong one still stands. We should celebrate the increases in ecological understanding that have been achieved, but we could advance the science more rapidly by demanding more rigor in what we publish.

Currie, D.J. (2019). Where Newton might have taken ecology. Global Ecology and Biogeography 28, 18-27. doi: 10.1111/geb.12842.

Grace, J. (2019). Has ecology grown up? Plant Ecology & Diversity, 12, 387-405. doi: 10.1080/17550874.2019.1638464.

Peters, R.H. (1991) A Critique for Ecology. Cambridge University Press, Cambridge, England. 366 pp. ISBN: 0521400171.

Prosser, J.I. (2020). Putting science back into microbial ecology: a question of approach. Philosophical Transactions of the Royal Society B: Biological Sciences, 375, 20190240. doi: 10.1098/rstb.2019.0240.

On Global Science and Local Science

I suggest that the field of ecology is fragmenting into two large visions of the science, which for the sake of simplicity I will call Global Science and Local Science. This fragmentation is not entirely new, and some history might be in order.

Local Science deals with local problems, and while it aspires to develop conclusions that apply beyond the small study area, it has always been tied to useful answers for practical questions. Are predators the major cause of caribou declines in northern Canada? Can rats on islands drive ground-nesting birds to extinction? Does phosphate limit primary production in temperate lakes? Historically, Local Science arose from the practical problems of pest control and of wildlife and fisheries management, with a strong focus on understanding how populations and communities work and how humans might solve the ecological problems they have largely produced (Kingsland 2005). The focus of Local Science was always on the few species key to the problem being studied. As wisdom accumulated on local problems, ecologists broadened the scope of enquiry, asking for example whether solutions discovered in Minnesota might also be useful in England, or vice versa. Consequently, Local Science began to be amalgamated into a broader program of Global Science.

Global Science can be defined in several ways. One definition is purely financial, a matter of big dollars; that is not what I will discuss here. I want to discuss Global Science in terms of ecological syntheses. Global Science papers can often be recognized by their dozens to hundreds of authors, all with data to share, and by meta-analysis as the major analytical tool. Global Science is now, in my opinion, moving away from the experimental approach that was a triumph of Local Science. The prelude to Global Science was the International Biological Program (IBP) of the 1970s, which attempted to produce large-scale systems analyses of communities and ecosystems but had little effect in convincing ecologists that this was the way of the future. At the time the problem was largely the development of a theory of stability, a property barely visible in most ecological systems.

Global Science depends on describing patterns that occur across large spatial scales. These patterns can be discovered only from an extensive, reliable set of local studies, and this leads to two problems. The first is that there may be too few reliable local studies, because different ecologists use different methods of measurement, do not use a statistically reliable sampling design, or are constrained by a lack of funding or time. The second is that different areas may show different patterns in the variables under measurement, or have confounding causes that are not recognized. The approach through meta-analysis is fraught with the decisions that must be made to include or exclude specific studies. For example, a recent meta-analysis of the global insect decline surveyed 5100 papers and used 166 of them for analysis (van Klink et al. 2020). It is not that the strengths and limitations of meta-analysis have been missed (Gurevitch et al. 2018); the question is whether meta-analyses are increasing our understanding of the Earth’s ecology. Meta-analyses can be useful in suggesting patterns that require more detailed analyses. In effect they violate many of the rules of conventional science in having no experimental design, so that the patterns they suggest can be validated only by a repeat of the observations. So, in the best situations, meta-analyses lead us back to Local Science. In other situations they lead to no clear understanding at all, as illustrated in the conclusions of Geary et al. (2020), who investigated the response of terrestrial vertebrate predators to fire:

“There were no clear, general responses of predators to fire, nor relationships with geographic area, biome or life-history traits (e.g. body mass, hunting strategy and diet). Responses varied considerably between species.” (page 955)

Note that this study is informative in that it indicates that ecologists have not yet identified the variables that determine the response of predators to fire. In other cases, meta-analysis has been useful in redirecting ecological questions because the current global model does not fit the facts very well (Szuwalski et al. 2015).
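For readers unfamiliar with the machinery, the core calculation behind most meta-analyses is a simple inverse-variance weighted mean of study effect sizes. The sketch below is a minimal fixed-effect version in Python; the effect sizes and standard errors are invented, and the point is only to show how strongly the pooled estimate depends on which studies are included.

```python
import numpy as np

def fixed_effect_meta(effects, ses):
    """Inverse-variance weighted mean effect and Cochran's Q."""
    effects = np.asarray(effects, dtype=float)
    weights = 1.0 / np.asarray(ses, dtype=float) ** 2
    pooled = np.sum(weights * effects) / np.sum(weights)
    se_pooled = np.sqrt(1.0 / np.sum(weights))
    q = np.sum(weights * (effects - pooled) ** 2)  # heterogeneity statistic
    return pooled, se_pooled, q

# Invented effect sizes (e.g. log response ratios) and standard errors.
effects = [0.40, 0.10, -0.05, 0.55, 0.20]
ses     = [0.10, 0.15,  0.20, 0.12, 0.25]

pooled, se, q = fixed_effect_meta(effects, ses)
print(f"all studies:      pooled = {pooled:.2f} (SE {se:.2f}), Q = {q:.1f}")

# Dropping the study with the largest effect noticeably shifts the result,
# one reason inclusion/exclusion decisions dominate meta-analytic findings.
pooled2, se2, q2 = fixed_effect_meta(effects[:3] + effects[4:],
                                     ses[:3] + ses[4:])
print(f"one study dropped: pooled = {pooled2:.2f} (SE {se2:.2f}), Q = {q2:.1f}")
```

Cochran’s Q in the sketch measures heterogeneity among studies; when it is large relative to the number of studies, a single pooled number may conceal exactly the kind of species-by-species variation that Geary et al. (2020) report.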

The result of this movement within both ecological and conservation science toward Global Science has been a shift in the amount of field work being done. Ríos-Saldaña et al. (2018) surveyed the conservation literature over the last 35 years and found that fieldwork-based publications decreased by 20%, in comparison with rises of 600% and 800% in modelling and data-analysis studies. This could be interpreted to mean that ecologists now realize that less fieldwork is needed at this time, or perhaps the opposite.

In an overview of ecological science, David Currie (2019) described how progress in ecology has differed from that in the physical sciences. He suggests that the physical sciences focused on a set of properties of nature whose variation they analyzed, and developed ‘laws’, like Newton’s laws of motion, that could be tested in simple or complex systems. By contrast, ecology has developed largely by asking how processes like competition or predation work, not by asking questions about the properties of natural systems, which is what interests the general public trying to solve problems in conservation or in pest or fisheries management. Currie (2019) summarized his approach as follows:

“Successful disciplines identify specific goals and measure progress toward those goals. Predictive accuracy of properties of nature is a measure of that progress in ecology. Predictive accuracy is the objective evidence of understanding. It is the most useful tool that science can offer society.” (page 18)

Many of these same questions underlay the critical appraisal of ecology by Peters (1991).

There is no one approach to ecological science, but we need to continue to ask what progress is being made with every approach. These are key questions for the future of ecological research, and they are worthy of much more discussion because they determine what students will be taught and what kinds of research will be favoured for funding in the future.

Currie, D.J. (2019). Where Newton might have taken ecology. Global Ecology and Biogeography 28, 18-27. doi: 10.1111/geb.12842.

Geary, W.L., Doherty, T.S., Nimmo, D.G., Tulloch, A.I.T., and Ritchie, E.G. (2020). Predator responses to fire: A global systematic review and meta-analysis. Journal of Animal Ecology 89, 955-971. doi: 10.1111/1365-2656.13153.

Gurevitch, J., Koricheva, J., Nakagawa, S., and Stewart, G. (2018). Meta-analysis and the science of research synthesis. Nature 555, 175-182. doi: 10.1038/nature25753.

Kingsland, S.E. (2005) The Evolution of American Ecology, 1890-2000. Johns Hopkins University Press, Baltimore. ISBN: 0801881714.

Peters, R.H. (1991) A Critique for Ecology. Cambridge University Press, Cambridge, England. ISBN: 0521400171.

Ríos-Saldaña, C.A., Delibes-Mateos, M., and Ferreira, C.C. (2018). Are fieldwork studies being relegated to second place in conservation science? Global Ecology and Conservation, 14, e00389. doi: 10.1016/j.gecco.2018.e00389.

Szuwalski, C.S., Vert-Pre, K.A., Punt, A.E., Branch, T.A., and Hilborn, R. (2015). Examining common assumptions about recruitment: a meta-analysis of recruitment dynamics for worldwide marine fisheries. Fish and Fisheries 16, 633-648. doi: 10.1111/faf.12083.

van Klink, R., Bowler, D.E., Gongalsky, K.B., Swengel, A.B., Gentile, A. and Chase, J.M. (2020). Meta-analysis reveals declines in terrestrial but increases in freshwater insect abundances. Science 368, 417-420. doi: 10.1126/science.aax9931.

Why Do Scientists Reinvent Wheels?

We may reinvent wheels by repeating research that has already been completed and published elsewhere. In one sense there is no great harm in this; statisticians would call it replication of the first study, and the more replication, the more we are convinced that the results are robust. A problem arises when the repeated study reaches results different from the first. If this occurs, another study is needed to determine whether there is a general pattern in the results, or whether different habitats give different answers to the question being investigated. But after a series of such studies, it is time to do something else, since the original question has been answered and replicated. Repeated studies of this kind are often the subject of M.Sc. or Ph.D. theses, which have a limited 1-3 year window to reach completion. The only general warning for these kinds of replicated studies is to read all the old literature on the subject. Too often this is not done, and reviewers notice missing references in a repeated study. Science is an ongoing process, but that does not mean that all the important work has been carried out in the last 5 years.

There is a valid time and place to repeat a study, for example when the habitat has been greatly fragmented or altered by human land use, or when climate change has had a strong impact on the ecosystem under study. The problem in this case is to have an adequate background of data that allows you to interpret your current data. If there is a fundamental problem with ecological studies to date, it is that we have an inadequate baseline for comparison in many ecosystems. We can conclude that a particular ecosystem is losing species (to land-use change or climate) only if we know what species comprised this ecosystem in past years and how much the species composition fluctuated over time. The desirable time frame for background data may be only 5 years for some species or communities, but for many communities it may be 20-40 years or more. We are too often trapped in the assumption that communities and ecosystems were in equilibrium in the past, so that any fluctuations now seen are unnatural. This time-frame problem bedevils calls for conservation action when data are deficient.
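The equilibrium assumption is easy to probe with a toy simulation. The sketch below (Python; the autocorrelation and noise values are arbitrary choices of mine, not estimates for any real population) generates a population that fluctuates around a stable mean with no trend at all, and then asks how often a naive comparison across a 10-year window would report a serious decline.

```python
import numpy as np

rng = np.random.default_rng(1)

# A population fluctuating around a stable mean: AR(1) noise on a log
# scale, no true trend. Parameter values are invented for illustration.
n_years = 1000
log_n = np.zeros(n_years)
for t in range(1, n_years):
    log_n[t] = 0.7 * log_n[t - 1] + rng.normal(0, 0.3)

# How often would a naive before/after comparison over a 10-year window
# report a decline of 30% or more, even though the true trend is zero?
window = 10
changes = np.exp(log_n[window:] - log_n[:-window])   # N(t+10) / N(t)
apparent_declines = np.mean(changes < 0.7)
print(f"fraction of 10-year comparisons showing a >=30% 'decline': "
      f"{apparent_declines:.0%}")
```

With these invented parameters, roughly a quarter of all 10-year comparisons look like a 30% collapse, which is why a short baseline cannot distinguish natural fluctuation from a genuine decline.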

The Living Planet Report of 2018 has been widely quoted as stating that global wildlife populations have decreased 60% in the last four decades. The analysis is based on changes in 4000 vertebrate species. There are about 70,000 vertebrate species on Earth, so this statement rests on about 6% of the vertebrates. The purpose of the Living Planet Report is to educate us about conservation issues and encourage political action. No ecologist in his or her right mind would question the 60% figure, lest they be cast out of the profession, but it is a challenge to the graduate students of today to analyze this statistic and determine how reliable it is. We all ‘know’ that elephants and rhinos are declining, but they are hardly a random sample. The problem in a nutshell is that we have reliable long-term data on perhaps 0.01% or less of all vertebrate species, and by long term I suggest a minimal limit of 10 generations. As another sobering test of these kinds of statements, pick your favorite animal, read all you can on how to census the species, and then find out how many studies of that species meet the criteria of a good census. The African elephant would be a good place to start, since everyone is convinced that it has declined drastically. The information in the Report’s Technical Supplement is a good starting point for a discussion about data accuracy in a conservation class.
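The arithmetic behind these two criteria is worth making explicit. A minimal sketch (Python; the generation times are rough values I have assumed for illustration, and the species list is arbitrary):

```python
# Coverage of the Living Planet estimate: species analyzed out of
# roughly 70,000 known vertebrate species.
print(f"coverage: {4000 / 70000:.1%}")   # about 5.7%

# Years of monitoring implied by a "10 generations" minimum for
# long-term data. Generation times below are rough assumed values.
generation_time_years = {
    "house mouse": 0.5,
    "snowshoe hare": 2,
    "African elephant": 25,
}
for species, gen in generation_time_years.items():
    print(f"{species}: ~{10 * gen:.0f} years of data needed")
```

On this criterion a credible long-term record for a large mammal would span two centuries or more, which makes it clear why so few species can meet the standard.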

My advice is that ecologists should not, without careful thought, repeat studies that have already been carried out many times on common species. Look for gaps in the current wisdom. Many of our species of concern are indeed declining and need action, but we need knowledge of what kinds of management actions are helpful and possible. Many of our species have not been studied long enough to know whether they are under threat or not. It is not helpful to ‘cry wolf’ if indeed there is no wolf there. We need precision and accuracy now more than ever.

World Wildlife Fund. 2018. Living Planet Report – 2018: Aiming Higher. Grooten, M. and Almond, R.E.A.(Eds). WWF, Gland, Switzerland. ISBN: 978-2-940529-90-2.
https://wwf.panda.org/knowledge_hub/all_publications/living_planet_report_2018/

On Questionable Research Practices

Ecologists and evolutionary biologists have been tarred and feathered along with many other scientists as guilty of questionable research practices. So says this article in “The Conversation” on the web:
https://theconversation.com/our-survey-found-questionable-research-practices-by-ecologists-and-biologists-heres-what-that-means-94421?utm_source=twitter&utm_medium=twitterbutton

Read this article if you have time, but here is the essence of what the authors state:

“Cherry picking or hiding results, excluding data to meet statistical thresholds and presenting unexpected findings as though they were predicted all along – these are just some of the “questionable research practices” implicated in the replication crisis psychology and medicine have faced over the last half a decade or so.

“We recently surveyed more than 800 ecologists and evolutionary biologists and found high rates of many of these practices. We believe this to be first documentation of these behaviours in these fields of science.

“Our pre-print results have certain shock value, and their release attracted a lot of attention on social media.

  • 64% of surveyed researchers reported they had at least once failed to report results because they were not statistically significant (cherry picking)
  • 42% had collected more data after inspecting whether results were statistically significant (a form of “p hacking”)
  • 51% reported an unexpected finding as though it had been hypothesised from the start (known as “HARKing”, or Hypothesising After Results are Known).”

It is worth looking at these claims a bit more analytically. First, the fact that more than 800 ecologists and evolutionary biologists were surveyed tells you nothing about the precision of these results unless you can be convinced that this is a random sample. Most surveys are non-random, and yet they are reported as though they were random, reliable samples.

Failing to report results is common in science for a variety of reasons that have nothing to do with questionable research practices. Many graduate theses contain results that are never published; does this mean their data are being hidden? Many results are not reported because they did not find an expected result. This sounds awful until you realize that journals often turn down papers for not being exciting enough, even though the results are completely reliable. Other results are not reported because the investigator realized, once the study was complete, that it had not been carried on long enough, and the money had run out to do more research. One would need considerable detail about each study to know whether these 64% of researchers were “cherry picking”.

Alas, the next problem is more serious. The 42% who are accused of “p-hacking” were possibly just using sequential sampling, or using a pilot study to obtain the statistical parameters needed to conduct a power analysis. Any study that uses replication in time, a highly desirable attribute of an ecological study, would be vilified by this rule. This complaint echoes the statistical advice not to use p-values at all (Ioannidis 2005, Bruns and Ioannidis 2016) and refers back to complaints about inappropriate uses of statistical inference (Amrhein et al. 2017, Forstmeier et al. 2017). The appropriate solution is a defined experimental design with specified hypotheses and predictions, rather than an open-ended observational study.
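The statistical hazard behind the “p-hacking” label is nevertheless real and easy to demonstrate. The sketch below (Python; the sample sizes and the peeking schedule are arbitrary assumptions of mine) simulates a true null hypothesis tested at alpha = 0.05, once with a fixed sample size and once with repeated peeking that stops as soon as the test is significant.

```python
import numpy as np

rng = np.random.default_rng(7)

def false_positive_rate(optional_stopping, n_trials=20_000):
    """Share of simulated studies rejecting a true null (mean 0, sd 1)
    with a two-sided z-test at alpha = 0.05."""
    hits = 0
    for _ in range(n_trials):
        data = rng.normal(0.0, 1.0, 50)
        # Either analyze once at n = 50, or peek after every 5 new
        # observations from n = 10 onward and stop at the first "p < 0.05".
        checkpoints = range(10, 51, 5) if optional_stopping else [50]
        for n in checkpoints:
            z = data[:n].mean() * np.sqrt(n)   # sd is known to be 1
            if abs(z) > 1.96:
                hits += 1
                break
    return hits / n_trials

print(f"fixed n = 50:      {false_positive_rate(False):.1%}")
print(f"optional stopping: {false_positive_rate(True):.1%}")
```

With these settings the fixed design holds its nominal 5% false-positive rate, while the peeking design inflates it well above the nominal level. The remedy is not to forbid sequential data collection; formal sequential designs adjust their thresholds for exactly this situation. The remedy, as noted above, is to specify the design and the stopping rule in advance.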

The third problem, about unexpected findings, touches an important aspect of science: the uncovering of interesting and important new results. It is an important point, warned about long ago by Medawar (1963) and emphasized recently by Forstmeier et al. (2017). The general solution is that novel results in science must be considered tentative until they can be replicated, so that science becomes a self-correcting process. But the temptation to emphasize a new result is hard to restrain in an era of difficult job searches and media attention to novelty. Perhaps the message is that you should read any “unexpected findings” in Science and Nature with a degree of skepticism.

The article in “The Conversation” goes on to discuss some possible interpretations of what these survey results mean, and the authors lean over backwards to indicate that the results do not mean we should distrust the conclusions of science, which unfortunately is exactly what some parts of the public media have emphasized. Distrust of science can be a justification for rejecting climate change data and for rejecting the value of immunization against disease. In an era of declining trust in science, these kinds of trivial surveys have shock value but are of little use to scientists trying to sort out the details of how ecological and evolutionary systems operate.

A significant source of these concerns flows from the literature on medical fads and ‘breakthroughs’ announced every day by media searching for ‘news’ (e.g. “eat butter”, “do not eat butter”). The result is almost a comical model of how good scientists really operate. An essential assumption of science is that results are not written in stone but are always subject to additional testing and modification or rejection. One consequence is that we get a parody of science that says “you can’t trust anything you read” (e.g. Ashcroft 2017). Perhaps we just need to remind ourselves to be critical, that good science is evidence-based, and then remember George Bernard Shaw’s comment:

“Success does not consist in never making mistakes but in never making the same one a second time.”

Amrhein, V., Korner-Nievergelt, F., and Roth, T. 2017. The earth is flat (p > 0.05): significance thresholds and the crisis of unreplicable research. PeerJ 5: e3544. doi: 10.7717/peerj.3544.

Ashcroft, A. 2017. The politics of research-Or why you can’t trust anything you read, including this article! Psychotherapy and Politics International 15(3): e1425. doi: 10.1002/ppi.1425.

Bruns, S.B., and Ioannidis, J.P.A. 2016. p-Curve and p-Hacking in observational research. PLoS ONE 11(2): e0149144. doi: 10.1371/journal.pone.0149144.

Forstmeier, W., Wagenmakers, E.-J., and Parker, T.H. 2017. Detecting and avoiding likely false-positive findings – a practical guide. Biological Reviews 92(4): 1941-1968. doi: 10.1111/brv.12315.

Ioannidis, J.P.A. 2005. Why most published research findings are false. PLOS Medicine 2(8): e124. doi: 10.1371/journal.pmed.0020124.

Medawar, P.B. 1963. Is the scientific paper a fraud? Pp. 228-233 in The Threat and the Glory, edited by P.B. Medawar. Harper Collins, New York. ISBN 978-0-06-039112-6.