Monthly Archives: September 2014

What is Policy?

One seemingly popular way of muzzling scientists is to declare that they may not comment on issues that bear on government policy. In Canada and Australia this kind of general rule seems to be enforced at present. It raises the serious question of what counts as ‘policy’. In practice, some scientific papers that discuss policy pass the bar because they support, or at least do not challenge, the dominant economic paradigm of eternal growth. But the science done by ecologists and environmental scientists often conflicts with current practices and thus confronts that paradigm.

There are several dictionary definitions of policy but the one most relevant to this discussion is:

“a high-level overall plan embracing the general goals and acceptable procedures especially of a governmental body”

The problem an ecologist faces is that in many countries this “high-level overall plan” involves continuous economic growth, no limits on the human population, minimal regulation of environmental pollution, and no long-term plan for climate change. But probably the largest area of conflict is over economic growth, and any ecological data that might restrict economic growth are to be muzzled or at least severely edited.

This approach is only partially effective because, in general, the government does not have the power to muzzle university scientists, who can speak out on any topic; this has been a comfort to ecologists and environmental scientists. But there are several indirect ways to muzzle these non-government scientists. The government controls some of the radio and TV media, which must obtain funding from the federal budget, and the threat of budget cuts unless ‘you toe the line’ works well. The government also has indirect control over research funding, so that research that might uncover critical issues can be deemed less important than research that might increase the GNP. All of this serves the current economic paradigm of most developed countries.

Virtually all conservation biology research contains clear messages about policy issues, but these are typically so far removed from the day-to-day decisions made by governments that they can be safely ignored. A national park here or there seems to satisfy many voters who think these biodiversity problems are under control. But I would argue that all of conservation biology, and indeed all of ecology, is subversive to the dominant economic paradigm of our day, so that everything we do has policy implications. If this is correct, telling scientists they may not comment on policy issues is effectively telling them not to do ecological or environmental science.

So we ecologists get along by keeping a minimal profile, a clear mistake at a time when more emphasis should be given to emerging environmental problems, especially long-term issues that do not immediately affect voters. There is no major political party in power in North America or Australia that embraces in a serious way what might be called a green agenda for the future of the Earth.

The solution seems to be to convince the voters at large that the ecological world view is better than the economic world view and there are some signs of a slow move in this direction. The recent complete failure of economics as a reliable guide to government policy should start to move us in the right direction, and the recognition that inequality is destroying the social fabric is helpful. But movement is very slow.

Meanwhile ecologists must continue to question policies that are destroying the Earth. We can begin with fracking for oil and gas, and continue to highlight the biodiversity losses driven by population growth and by economic developments that prolong the era of oil and natural gas. And keep asking: when will we have a green President or Prime Minister?

Let me boil down my point of view. Everything scientists do has policy implications, so if scientists are muzzled by their government, it is a serious violation of democratic freedom of speech. And if a government pays no attention to the findings of science, it is condemning itself to oblivion in the future.

Davis, C., and Fisk, J.M. 2014. Energy abundance or environmental worries? Analyzing public support for fracking in the United States. Review of Policy Research 31(1): 1-16. doi: 10.1111/ropr.12048.

Mash, R., Minnaar, J., and Mash, B. 2014. Health and fracking: Should the medical profession be concerned? South African Medical Journal 104(5): 332-335. doi: 10.7196/SAMJ.7860.

Piketty, T. 2014. Capital in the Twenty-First Century. Belknap Press of Harvard University Press, Cambridge, Massachusetts. 696 pp. ISBN 9780674430006.

Stiglitz, J.E. 2012. The Price of Inequality. W.W. Norton and Company, New York.


Back to p-Values

Alas, ecology has slipped lower on the totem pole of serious sciences thanks to an article that has captured the attention of the media:

Low-Décarie, E., Chivers, C., and Granados, M. 2014. Rising complexity and falling explanatory power in ecology. Frontiers in Ecology and the Environment 12(7): 412-418. doi: 10.1890/130230.

There is much that is positive in this paper, so you should read it, if only to decide whether to use it in a graduate seminar in statistics or in ecology. Much of what it concludes is certainly true: there are more p-values in papers now than there were some years ago. The question then becomes what these kinds of statistics mean, whether they justify the conclusion picked up by the media that explanatory power in ecology is declining over time, and what, if anything, should be done about it. Since, as far as I can see, most statisticians today believe that p-values by themselves are close to meaningless (e.g. Ioannidis 2005), one wonders what the value of documenting this trend is. A second point on which most statisticians agree is that R2 values are a poor measure of anything beyond the fit to a particular data set. Any ecological paper that analyses and reports data summarizes many tests providing p-values and R2 values, of which only some are reported. It would be interesting to make a comparison with what is recognized as a mature science (like physics or genetics) by asking whether past revolutions in understanding and predictive power in those sciences corresponded with rising numbers of p-values or R2 values.
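The point that p-values and R2 values measure quite different things can be made with a small numerical sketch (hypothetical numbers, not from the paper): for a fixed weak correlation of r = 0.1, so R2 = 0.01 and essentially no explanatory power, the t statistic for testing the correlation grows with sample size, and the result becomes “statistically significant” once n is large enough, even though the explanatory power never changes.

```python
import math

def t_from_r(r, n):
    """t statistic for testing a Pearson correlation r against zero,
    based on a sample of size n: t = r * sqrt(n - 2) / sqrt(1 - r^2)."""
    return r * math.sqrt(n - 2) / math.sqrt(1 - r * r)

r = 0.1  # a weak correlation: R^2 = 0.01, almost no explanatory power
for n in (50, 500, 5000):
    t = t_from_r(r, n)
    # 1.96 is roughly the two-sided 5% critical value for large samples
    print(n, round(t, 2), "significant" if t > 1.96 else "not significant")
```

With n = 50 the correlation is nowhere near significant; with n = 500 or 5000 it is comfortably so, yet R2 is 0.01 throughout. Counting p-values therefore tells us more about sample sizes and publication habits than about scientific progress.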

To ask these questions is to ask: what is the metric of scientific progress? At present we confuse progress with indicators that may have little to do with scientific advancement. Journal editors race to increase their journal’s impact factor, which is interpreted as a measure of importance. For appointments to university positions we ask how many citations a person has and how many papers they have produced. We confuse scientific value with numbers that, ironically, might have a very low R2 value as predictors of potential progress in a science. These numbers make sense as metrics to tell publishing houses how influential their journals are, or to tell Department Heads how fantastic their hiring choices are, but we fool ourselves if we accept them as indicators of value to science.

If you wish to judge scientific progress you might wish to look at books that have gathered together the most important papers of the time, and examine a sequence of these from the 1950s to the present time. What is striking is that papers that seemed critically important in the 1960s or 1970s are now thought to be concerned with relatively uninteresting side issues, and conversely papers that were ignored earlier are now thought to be critical to understanding. A list of these changes might be a useful accessory to anyone asking about how to judge importance or progress in a science.

A final comment: consider why a relatively mature science like geology has completely failed to predict earthquakes in advance, or even to specify the locations of some earthquakes (Stein et al. 2012; Uyeda 2013). Progress in understanding does not of necessity produce progress in prediction. And we ought to be wary of confusing progress with p- and R2 values.

Ioannidis, J.P.A. 2005. Why most published research findings are false. PLoS Medicine 2(8): e124. doi: 10.1371/journal.pmed.0020124.

Stein, S., Geller, R.J., and Liu, M. 2012. Why earthquake hazard maps often fail and what to do about it. Tectonophysics 562-563: 1-24. doi: 10.1016/j.tecto.2012.06.047.

Uyeda, S. 2013. On earthquake prediction in Japan. Proceedings of the Japan Academy, Series B 89(9): 391-400. doi: 10.2183/pjab.89.391.