On the Tasks of Retirement

The end of another year in retirement and time to clean up the office. So this week I recycled 15,000 reprints – my personal library of scientific papers. I would guess that many young scientists would wonder why anyone would have 15,000 paper reprints when you could have all that on a small memory stick. Hence this blog.

Rule #1 of science: read the literature. In 1957 when I began graduate studies there were perhaps 6 journals that you had to read to keep up in terrestrial ecology. Most of them came out 3 or 4 times a year, and if you could not afford to have a personal copy of the paper either by buying the journal or later by xeroxing, you wrote to authors to ask them to post a copy of their paper to you – a reprint. The university even printed special postcards to request reprints with your name and address for the return mail. So scientists gathered paper copies of important papers. Then it became necessary to catalog them, and the simplest thing was to type the title and reference on a 3 by 5-inch card and put them in categories in a file cabinet. All of this will be incomprehensible to modern scientists.

A corollary of this old-style approach to science was that when you published, you had to purchase paper copies of reprints of your own papers. When someone got interested in your research, you would receive reprint requests and then had to post copies around the world. All this cost money, and moreover you had to guess how popular your paper might be in the future. The journal usually gave you 25 or 50 free reprints when you published a paper, but if you thought you would need more you had to purchase them in advance. The first xerox machines were not commercially available until 1959, and xeroxing remained quite expensive even when many different types of copying machines became available in the late 1960s. But it was always cheaper to buy reprints when your paper was printed by a journal than it was to xerox copies of the paper at a later date.

Meanwhile scientists had to write papers and textbooks, so the sorting of references became a major chore for all writers. In 1988 Endnote was first released as a software program that could incorporate references and allow one to sort and print them via a computer, so we were off and running, converting all the 3×5 cards into electronic format. One could then generate a bibliography in a short time and look up forgotten references by author or title or keywords. Through the 1990s the computer world progressed rapidly to approximate what you see today, with computer searches of the literature, and ultimately the ability to download a copy of a PDF of a scientific paper without even telling the author.

But there were two missing elements. All the pre-2000 literature was still piled on library shelves, and at least in ecology it is possible that some literature published before 2000 might be worth reading. JSTOR (= Journal Storage) came to the rescue in 1995 and began to scan and compile electronic versions of much of this old literature, so that even much of the earlier literature became readily available by the early 2000s. About 1,900 journals across most scientific disciplines are currently available in JSTOR. Since by the late 1990s the volume of the scientific literature was doubling about every 7 years, the electronic world saved all of us from yet more paper copies of important papers.
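The doubling figure implies a compound growth rate, and a quick back-of-the-envelope calculation (assuming simple exponential growth at the quoted doubling time) shows why paper filing cabinets became untenable:

```python
# Sketch of what a 7-year doubling time implies, assuming steady
# exponential growth: N(t) = N0 * 2**(t / doubling_time).
doubling_time = 7.0  # years, the rate quoted above

# Implied annual growth rate of the literature
annual_growth = 2 ** (1 / doubling_time) - 1
print(f"Implied annual growth: {annual_growth:.1%}")  # about 10.4% per year

# Multiplication of the literature over a 40-year career
career_factor = 2 ** (40 / doubling_time)
print(f"Growth over a 40-year career: {career_factor:.0f}-fold")  # about 52-fold
```

A literature growing fifty-fold within one working lifetime could never have been kept up with by reprint cards in a file cabinet.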

What was missing still were many government and foundation documents, reviews of programs that were never published in the formal literature, now called the ‘grey literature’. Some of these are lost unless governments scan them and make them available. The result of any loss of this grey literature is that studies are sometimes repeated needlessly and money is wasted.

About 2.5 million scientific papers are published every year at the present time (http://www.cdnsciencepub.com/blog/21st-century-science-overload.aspx) and the consequence of this explosion must be that each of us has to concentrate on a smaller and smaller area of science. What this means for instructors and textbook writers who must synthesize these new contributions is difficult to guess. We need more critical syntheses, but these kinds of papers are not welcomed by those who distribute our research funds, so young scientists feel they should not get caught up in writing an extensive review, however important that is for our science.

In contrast to my feeling of being overwhelmed at the present time, Fanelli and Larivière (2016) concluded that the publication rate of individuals has not changed in the last 100 years. Like most meta-analyses this one is suspect, because it argues against the simple observation in ecology that everyone now seems to publish many small papers from their thesis rather than one synthetic one. Anyone who has served on a search committee for university or government jobs in the last 30 years would attest that the number of publications now expected of new graduates has become quite ridiculous. When I started my postdoc in 1962 I had one published paper, and for my first university job in 1964 this had increased to 3. There were at that time many job opportunities for anyone in my position with a total of 2 or 3 publications. To complicate things, Steen et al. (2013) have suggested that the number of retracted papers in science has been increasing at a faster rate than the number of publications. Again, whether this applies to ecology papers is far from clear, because the problem in ecology is typically that the methods or experimental design are inadequate rather than fraudulent.

If there is a simple message here, it is that the literature and the potential access to it is changing rapidly and young scientists need to be ready for this. Yet progress in ecology is not a simple metric of counts of papers or even citations. Quality trumps quantity.

Fanelli, D., and Larivière, V. 2016. Researchers’ individual publication rate has not increased in a century. PLoS ONE 11(3): e0149504. doi: 10.1371/journal.pone.0149504.

Steen, R.G., Casadevall, A., and Fang, F.C. 2013. Why has the number of scientific retractions increased? PLoS ONE 8(7): e68397. doi: 10.1371/journal.pone.0068397.

 

On Politics and the Environment

This is a short story of a very local event that illustrates far too well the improvements we have to seek in our political systems. The British Columbia government has just approved the continuation of construction of the Site C dam on the Peace River in northern British Columbia. The project was started in 2015 by the previous Liberal (conservative) government with an $8 billion price tag, with no (yes, NO) formal studies of the economic, geological, or environmental consequences of the dam, and over the complete opposition of most of the First Nations people on whose traditional land the dam would be built. Fast forward 2 years: a moderate left-wing government takes over from the conservatives and the decision is now in their hands. Do they carry on with the project, $2 billion having been spent already, or stop it with an additional $1-2 billion in costs to undo the damage to the valley from work already carried out? With 2000 temporary construction jobs in the balance, and being in general pro-union and pro the working person rather than the 1%, the government decided to proceed with the dam.

To the government’s credit it asked the Utilities Commission to prepare an economic analysis of the project in a very short time, but to make it simpler (?) did not allow the Commission to consider in its report environmental damage, climate change implications, greenhouse gas emissions, First Nations rights, or the loss of good agricultural land. Alas, that pretty well leaves out most things an ecologist would worry about. The economic analysis sat on the fence, mostly because the final cost of Site C is unknown. It was estimated at $8 billion, but already, a few days after the government’s decision, it is $10.5 billion, all to be paid by the taxpayer. If it is a typical large dam, the final overall cost will range between $16 and $20 billion by the time the dam is operational in 2024. The best news article I have seen on the Site C decision is this one by Andrew Nikiforuk:

https://thetyee.ca/Opinion/2017/12/12/Pathology-Site-C/

Ansar et al. (2014) did a statistical analysis of 245 large dams built since 1934 and found that on average actual costs for large dams were about twice estimated costs, and that there was a tendency for larger dams to have even higher than average final costs. There has been little study for Site C of the effects of the proposed dam on fish in the river (Cooper et al. 2017) and no discussion of potential greenhouse gas emissions (methane) released as a result of a dam at Site C (DelSontro et al. 2016). The most disturbing comment on this decision to proceed with Site C was made by the Premier of B.C. who stated that if they had stopped construction of the dam, they would have to spend a lot of money “for nothing” meaning that restoring the site, partially restoring the forested parts of the valley, repairing the disturbance of the agricultural land in the valley, recognizing the rights of First Nations people to their land, and leaving the biodiversity of these sites to repair itself would all be classed as “nothing” of value. Alas our government’s values are completely out of line with the needs of a sustainable earth ecosystem for all to enjoy.

What we are lacking, and governments of both stripes have no time for, is an analysis of what the alternatives are in terms of renewable energy generation. Alternative hypotheses should be useful in politics as they are in science. And they might even save money.

Ansar A, Flyvbjerg B, Budzier A, Lunn D (2014). Should we build more large dams? The actual costs of hydropower megaproject development. Energy Policy 69, 43-56. doi: 10.1016/j.enpol.2013.10.069

Cooper AR, et al. (2017). Assessment of dam effects on streams and fish assemblages of the conterminous USA. Science of The Total Environment 586, 879-89. doi: 10.1016/j.scitotenv.2017.02.067

DelSontro T, Perez KK, Sollberger S, Wehrli B (2016). Methane dynamics downstream of a temperate run-of-the-river reservoir. Limnology and Oceanography 61, S188-S203. doi: 10.1002/lno.10387

 

Technology Can Lead Us Astray

Our iPhones teach us very subtly to have great faith in technology. This leads the public at large to think that technology will solve large issues like greenhouse gases and climate change. But as scientists we should remember that technology must be looked at very carefully when it tells us we have a shortcut to ecological measurement and understanding. For the past 35 years satellite data have been available to calculate an index of greening for vegetation over large landscapes. The available index is called NDVI, the normalized difference vegetation index, calculated as the difference between the near-infrared and red light reflected from the vegetation being surveyed, divided by their sum. I am suspicious that NDVI measurements tell ecologists anything that is useful for the understanding of vegetation dynamics and ecosystem stability. Probably this is because I am focused on local-scale events, landscapes of hundreds of km2, and in particular what is happening in the forest understory. The key to one’s evaluation of these satellite technologies most certainly lies in the questions under investigation.
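The index itself is trivially simple to compute once the red and near-infrared reflectance bands are in hand; the complications all lie in what the number means on the ground. A minimal sketch, with reflectance values invented for illustration (not real satellite data):

```python
import numpy as np

def ndvi(nir, red):
    """Normalized difference vegetation index: (NIR - red) / (NIR + red).

    Values range from -1 to +1. Dense green vegetation scores high
    (near-infrared strongly reflected, red strongly absorbed by
    chlorophyll), while bare soil or water scores near or below zero.
    """
    nir = np.asarray(nir, dtype=float)
    red = np.asarray(red, dtype=float)
    return (nir - red) / (nir + red)

# Hypothetical reflectances for three pixels:
# dense forest, sparse grassland, bare soil
nir = [0.50, 0.30, 0.25]
red = [0.08, 0.15, 0.20]
print(ndvi(nir, red))  # roughly [0.72, 0.33, 0.11]
```

Note that the whole pixel is reduced to one greenness number, which is why the index cannot, by itself, separate canopy from understory.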

A whole array of different satellites have been used to measure NDVI and since the more recent satellites have different precision and slightly different physical characteristics, there is some problem of comparing results from different satellites in different years if one wishes to study long-term trends (Guay et al. 2014). It is assumed that NDVI measurements can be translated into aboveground net primary production and can be used to start to answer ecological questions about seasonal and annual changes in primary production and to address general issues about the impact of rising CO2 levels on ecosystems.

All inferences about changes in primary production on a broad scale hinge on the reliability of NDVI as an accurate measure of net primary production. Much has been written about the use of NDVI measures and the need for ground truthing. Community ecologists may be concerned about specific components of the vegetation rather than an overall green index, and the question arises whether NDVI measures in a forest community are able to capture changes in both the trees and the understory, or for that matter in the ground vegetation. For overall carbon capture estimates, a greenness index may be accurate enough, but if one wishes to determine whether deciduous trees are replacing evergreen trees, NDVI may not be very useful.

How can we best validate satellite-based estimates of primary productivity? To do this on a landscape scale we need large areas with ground truthing. Field crops are one potential source of such data. Kang et al. (2016) used crops to quantify the relationship between remotely sensed leaf-area index and other satellite measures such as NDVI. The relationships are clear in a broad sense but highly variable in detail, so the ability to predict crop yields from satellite data at local levels is subject to considerable error. Johnson (2016, Fig. 6, p. 75) found the same problem with crops such as barley and cotton (see the sample data set below). So there is good news and bad news from these kinds of analyses. The good news is that we can have extensive global coverage of trends in vegetation parameters and crop production; the bad news is that at the local level this information may not be helpful for studies that require high precision, for example in local values of net primary production. Simply to assume that satellite measures are accurate measures of ecological variables like net aboveground primary production is too optimistic at present, and work continues on possible improvements.

Many of the critical questions about community changes associated with climate change cannot in my opinion be answered by remote sensing unless ground-based research is much more closely coordinated and concurrent with satellite imagery. We must look critically at the available data. Blanco et al. (2016) for example compared NDVI estimates from MODIS satellite data with primary production monitored on the ground in harvested plots in western Argentina. The regression between NDVI and estimated primary production had R2 values of 0.35 for the overall annual values and 0.54 for the data restricted to the peak of annual growth. Whether this is a satisfactory statistical association is up to plant ecologists to decide. I think it is not, and the substitution of p values for the utility of such relationships is poor ecology. Many more of these kinds of studies need to be carried out.
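An R2 of 0.35 means that NDVI leaves roughly two-thirds of the variance in ground-measured production unexplained. A small simulation (with made-up data and units, not the Blanco et al. values) shows what a relationship of that strength looks like in terms of prediction error:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulate an NDVI-production relationship with a real underlying slope
# but enough scatter that R^2 comes out low, as in many field studies.
# Units and parameter values are hypothetical.
ndvi = rng.uniform(0.2, 0.8, 500)
production = 1000 * ndvi + rng.normal(0, 250, 500)  # e.g. kg/ha/yr

# Ordinary least-squares fit and R^2
slope, intercept = np.polyfit(ndvi, production, 1)
predicted = slope * ndvi + intercept
ss_res = np.sum((production - predicted) ** 2)
ss_tot = np.sum((production - production.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"R^2 = {r2:.2f}")

# The residual root-mean-square error is the typical size of a
# prediction miss for any single site in a single year.
rmse = np.sqrt(ss_res / len(ndvi))
print(f"Typical prediction error: about {rmse:.0f} kg/ha/yr")
```

With this much scatter, single-site, single-year predictions miss by amounts comparable to the real differences ecologists want to detect, which is the practical content of a low R2 regardless of how small the p value is.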

The advent of using drones for very detailed spectral data on local study areas will open new opportunities to derive estimates of primary production. For the present I think we should be aware that NDVI and its associated measures of ‘greenness’ from satellites may not be a very reliable measure for local or landscape values of net primary production. Perhaps it is time to move back to the field and away from the computer to find out what is happening to global plant growth.

Blanco, L.J., Paruelo, J.M., Oesterheld, M., and Biurrun, F.N. 2016. Spatial and temporal patterns of herbaceous primary production in semi-arid shrublands: a remote sensing approach. Journal of Vegetation Science 27(4): 716-727. doi: 10.1111/jvs.12398.

Guay, K.C., Beck, P.S.A., Berner, L.T., Goetz, S.J., Baccini, A., and Buermann, W. 2014. Vegetation productivity patterns at high northern latitudes: a multi-sensor satellite data assessment. Global Change Biology 20(10): 3147-3158. doi: 10.1111/gcb.12647.

Johnson, D.M. 2016. A comprehensive assessment of the correlations between field crop yields and commonly used MODIS products. International Journal of Applied Earth Observation and Geoinformation 52(1): 65-81. doi: 10.1016/j.jag.2016.05.010.

Kang, Y., Ozdogan, M., Zipper, S.C., Roman, M.O., and Walker, J. 2016. How universal is the relationship between remotely sensed vegetation indices and crop leaf area index? A global assessment. Remote Sensing 8(7): 597. doi: 10.3390/rs8070597.

[Figure: cotton yield vs. NDVI, after Johnson (2016, Fig. 6)]

On Sequencing the Entire Biosphere

There is an eternal war going on in science which rests on the simple question of “What should we fund?” If you are at a cocktail party and want to set up a storm of argument you should ask this question. There may be general agreement among many scientists that we should reduce funding on guns and wars and increase funding on alleviating poverty. But then the going gets tough. It is easier to restrict our discussion to science. There is a clear hierarchy in science funding favouring the physical sciences that can make money and the medical sciences that keep us alive until 150 years of age. But now let’s go down to biology.

The major rift in biology is between funding blue sky research and practical research. In the discussions about funding, protagonists often confound these two categories by saying that blue sky research will lead us to practical research and nirvana. We can accept salesmanship to a degree. The current bandwagon in Canada is to barcode all of life on earth, at a cost of perhaps $2 billion but probably much more. Or we can sequence everything we can get our hands on with the implicit promise that it will help us understand these organisms better or solve practical problems in conservation and management. But all of this is driven by what we can do technically, so it is machine driven, not necessarily thought driven. So if you want another heated discussion among ecologists, ask them how they would spend $2 billion for research in ecology.

We sequence because we can. Fifty years ago I heard a lecture by Richard Lewontin in which he asked what we would know if we had a telephone book with all the genetic sequences of all the organisms on earth. He concluded, as I remember, that we would know nothing unless we had a purely ‘genetic-determinism’ view of life. There is more to life than amino acid sequences perhaps.

No one I know thinks that current ecological changes are driven by genetics, but perhaps I do not know the right people. So for example, if we sequence the genomes of all the top predators on earth (Estes et al. 2011, Ripple et al. 2014), would we know anything about their importance in community and ecosystem dynamics? Probably not. But still we are told that if in New Zealand we sequence the common wasp genome we will find new ways to control this insect pest. Perhaps an equally important area would be funding to understand their biology in New Zealand, and the threats and threatening processes in an ecosystem context.

We are back to the starting question about the allocation of resources within biology. Perhaps we cycle endlessly in science funding in search of the Promised Land. In a recent paper Richards (2015) makes the argument that genome sequencing is the key to biology and thus the Promised Land:

“The unifying theme of biology is evolutionary conservation of the gene set and the resultant proteins that make up the biochemical and structural networks of cells and organisms throughout the tree of life.”

“The absence of these genome references is not just slowing research into specific questions; it is precluding a complete description of the molecular underpinnings of biology necessary for a true understanding of life on our planet.” (p. 414)

There seems little room in all this for ecological thought or ecological viewpoints. It is implicit to me that these arguments for genome sequencing have as a background assumption that ecological research is rather useless for achieving biological understanding or for solving any of the problems we currently face in conservation or management. Richards (2015) makes the point himself in saying:

“While the author is fond of ‘stamp collecting’, there are many good reasons to expand the reference sequences that underlie biological research (Table 2).”

The table he refers to in his paper has not a single item on ecological research, except that this approach will achieve “Acceleration of total biological research output”. It remains to be seen whether this view will achieve much more than stamp collecting and a massive confusion of correlation with causation. It requires a great leap of faith that this approach through genome sequencing can help to solve practical ecological problems.

Richards, S. (2015) It’s more than stamp collecting: how genome sequencing can unify biological research. Trends in Genetics, 31, 411-421.

Estes, J.A., et al. (2011) Trophic downgrading of Planet Earth. Science, 333, 301-306.

Ripple, W.J., et al. (2014) Status and ecological effects of the world’s largest carnivores. Science, 343, 1241484.

Why Do Physical Scientists Run Off with the Budget Pie?

Take any developed country on Earth and analyse their science budget. Break it down into the amounts governments devote to physical science, biological science, and social science to keep the categories simple. You will find that the physical sciences gather the largest fraction of the budget-for-science pie, the biological sciences much less, and the social sciences even less. We can take Canada as an example. From the data released by the research councils, it is difficult to construct an exact comparison but within the Natural Sciences and Engineering Research Council of Canada the average research grant in Chemistry and Physics is 70% larger than the average in Ecology and Evolution, and this does not include supplementary funding for various infrastructure. By contrast the Social Sciences and Humanities Research Council reports research grants that appear to be approximately one-half those of Ecology and Evolution, on average. It seems clear in science in developed countries that the rank order is physical sciences > biological sciences > social sciences.

We might take two messages from this analysis. If you listen to the news or read the newspapers you will note that most of the problems discussed are social problems. Then you might wonder why social science funding is so low on our funding agenda in science. You might also note that environmental problems are growing in importance and yet funding for environmental research is also at the low end of our spending priority.

The second message takes the form of a question: why should this be? In particular, why do physical scientists run off with the funding pie while ecologists and environmental scientists scratch through the crumbs? I do not know the answer to this question. I do know that it has been this way for at least the last 50 years, so it is not a recent trend. I can suggest several partial answers.

  1. Physical scientists produce along with engineers the materials for war in splendid guns and aircraft and submarines that our governments believe will keep us safe.
  2. Physical scientists produce economic growth by their research so clearly they should be more important.
  3. Physical sciences produce scientific progress on a time scale of months while ecologists and environmental scientists produce research progress on a time scale of years and decades.
  4. Physical scientists do the research that produces good things like iPhones and computers, while ecologists and environmental scientists produce mostly bad news about the deterioration of the earth’s ecosystem services.
  5. Physical scientists and engineers run the government and all the major corporations so they propagate the present system.

Clearly there are specific issues that are lost in this general analysis. Medical science produces progress in diagnosis and treatment as a result of the research of biochemists, molecular biologists, and engineers. Pharmaceutical companies produce compounds to control diseases with the help of molecular biologists and physiologists. So research in these specific areas must be supported well because they affect humans directly. Medical sciences are the recipient of much private money in the quest to avoid illness.

Lost in this are a whole other set of lessons. Why were multi-billions of dollars devoted to the Large Hadron Collider, which had no practical value at all and has only led to the need for a Very Large Hadron Collider in the future to waste even more money? The answer seems to lie somewhere in the interface of three points of view – it may be needed for military purposes, it is a technological marvel, and it is part of physics, which is the only science that is important. The same kind of thinking seems to apply to space research, which is wildly successful at burning up large amounts of money while generating more military competition via satellites and, in addition, providing good movie images for the taxpayers.

While many people now support efforts on the conservation of biodiversity and the need for action on climate change, the funding to achieve these goals is not forthcoming from either public or private sources. One explanation is that these are long-term problems, difficult to get excited about when the lifespan of the people in power will not extend long enough to face the consequences of current decision making. Finally, many people are convinced that technological fixes will solve all environmental problems, so that the problems environmental scientists worry about are trivial (National Research Council 2015, 2015a). Physics will fix climate change by putting chemicals into the stratosphere, endangered species will be resurrected by DNA, and fossil fuels will never run out. And as a bonus Canada and Scandinavia will be warmer, and what is wrong with that?

An important adjunct to this discussion is the question of why economics has risen to the top of the heap along with physical sciences. As such the close triumvirate of physical sciences-engineering-economics seems to run the world. We should keep trying to change that if we have concern for the generations that follow.

 

National Research Council. 2015. Climate Intervention: Carbon Dioxide Removal and Reliable Sequestration. The National Academies Press, Washington, DC. 140 pp. ISBN: 978-0-309-36818-6.

National Research Council. 2015a. Climate Intervention: Reflecting Sunlight to Cool Earth. The National Academies Press, Washington, DC. 234 pp. ISBN: 978-0-309-36821-6.

Two Visions of Ecological Research

Let us assume for the moment that the goal of scientific ecology is to understand the reasons for changes in the distribution and abundance of animals, plants, and microbes. If you do not think this is our main agenda, perhaps you should not read further.

The conventional, old paradigm to achieve this goal is to obtain a good description of the natural history of the organisms of interest in a population or community, define the food web they operate within, and then determine by observations or manipulations the parameters that limit their distribution and abundance. This can be difficult to achieve in rich food webs with many species, in systems in which the species are not yet taxonomically described, and particularly in microbial communities. Consequently a prerequisite of this paradigm is good taxonomy: the ability to recognize species X versus species Y. A whole variety of techniques can be used for this taxonomy, including morphology (the traditional approach) and genetics. Using this approach ecologists over the past 90 years have made much progress in deriving some tentative explanations for the changes that occur in populations and communities. If there has been a problem with this approach, it is largely because of disagreements about what data are sufficient to test hypothesis X, and whether the results of manipulation Y are convincing. A great deal of the data accumulated with this approach has been useful to fisheries management, wildlife management, pest control, and agricultural production.

The new metagenomics paradigm, to use one label, suggests that this old approach is not getting us anywhere fast enough, particularly for microbial communities, and that we need to forget most of this nonsense and get into sequencing. New improvements in the speed of doing this work make it feasible. The question I wish to address here is not the validity or the great improvements in genetic analysis, but rather whether or not this approach can replace the conventional old paradigm. I appreciate that if we grab a sample of mud, water, or the bugs in an insect trap, grind it all up, and run it through these amazing sequencing machines, we get a very great amount of data. We then might try to associate some of these data with particular ‘species’, and this may well work in groups for which the morphological species are well described. But what do we do about the undescribed sequences? We know that microbial diversity is much higher than what we can currently culture in the laboratory. We can make rules about what to call unknown unit A, unknown unit B, and so on. That is fine, but now what? We are in some sense back where Linnaeus was in 1753 in giving names to plants.

Now comes the difficult bit. Do we just take the metagenomics approach and tack it onto the conventional approach, using unknown A, unknown B, etc. instead of Pseudomonas flavescens or Bacillus licheniformis? We cannot get very far this way, because the first thing we need to decide is whether unknown A is a primary producer or unknown B a decomposer of complex organic molecules. So perhaps this leads us to invent a whole new taxonomy to replace the old one. Or perhaps we will go another way and say we will answer questions with the new system, such as whether this pond ecosystem is changing in response to global warming or nutrient additions. We can describe many system shifts in DNA terminology, but will we have any knowledge of what they mean or how management might change these trends? We could work all this out in the long term, I presume. So my confusion is largely over exactly which set of hypotheses the new metagenomics paradigm is going to test. I can see a great deal of alpha-descriptive information being captured, but I am not sure where to go from there. My challenge to the developers of the new paradigm is to list a set of problems in the Earth’s ecosystems for which this new paradigm could provide better answers more quickly than the old approach.

Microbial ecology is certainly much more difficult to carry out than traditional ecology on macroscopic animals and plants. As such it should be able to use new technology that can improve understanding of the structure and function of microbial communities. All new advances in technology are helpful for solving some ecological problems and should be so used. The suggestion that the conventional approach is out of date should certainly be entertained, but in the last 70 years the development of air photos, of radio telemetry, of satellite imagery, of electrophoresis, of simplified chemical analyses, of automated weather stations, and now the new possibilities of genetic analysis has been most valuable in solving ecological questions for many of our larger species. But in every case, at every step, we should be more careful to ask exactly what questions the new technology can answer. Piling up terabytes of data is not science and could in fact hinder science. We do not wish to validate the Rutherford prediction that our ecological science is “stamp collecting”.

10 Limitations on Progress in Ecology

Ecological science moves along slowly in its mission to understand how the Earth’s populations, communities, and ecosystems operate within the constraints of human impacts on the Biosphere. The question of the day is whether we can identify the factors currently limiting the rate of progress, so that at least in principle we could speed up progress in our science. Here is my list.

1. A shortage of ecologists, or more properly of jobs for ecologists. In particular, a scarcity of government agencies employing ecologists in secure jobs to work on stable, long-term environmental projects that are beyond the scope of university scientists. Many young ecologists of high quality are stalled in positions beneath their talents; we are in a situation similar to having highly trained medical doctors employed as hospital janitors. This is a massive failure on many fronts: regional and national, political and scientific. Many governments around the world think economists and lawyers are essential while environmental scientists are superfluous.

2. The lack of proper funding from governments, private companies, and private individuals alike. This is typified by the continual downsizing of government scientists working on natural resource problems – fisheries, wildlife, park management – and continuing political interference with scientific objectives. Private companies too often rely on taxpayers to fund their environmental investigations and do not view them as part of their business model. Private citizens give money to medical research rather than to environmental programs, largely from the belief that of all the life on Earth, only the human component is important.

3. The deficiency of the taxonomic expertise needed to define clearly the species that inhabit the Earth. Estimates vary, but perhaps only 10% of the total biota can be given a Latin name and a morphological description, leaving aside for the moment all the bacteria and viruses. Compare this with having a batch of variously shaped coins in your pocket, only a few of which show their denomination. This problem has been identified for years with little action.

4. Given adequate taxonomy, the lack of adequate natural history data on most of the biota. This activity, so critical for all ecological science, was dismissed as “stamp collecting” and thus condemned to the lowest point on the scientific totem pole. The consequence is that we try to understand the Earth with data only on butterflies, some birds, and some large mammals.

5. A failure of ecologists to map out the critical questions facing natural populations, communities, and ecosystems on Earth. The roadmap of ecology is littered with the wrecks of ideas once pushed to explain nearly everything, and we need a more nuanced map of what counts as a critical issue. There are considerable fractures within the ecological discipline about what should be done if people and money were available, and this fosters an “I win = you lose” culture in the competition for money and jobs.

6. The confusion of mathematical models with reality. There is a persistent disconnect between models and data: models proliferate rapidly, data accumulate slowly, and so we try to paper over the fragility of our understanding with mathematical wizardry, trying to be like physicists. Connecting model predictions with empirical studies would go a long way toward righting this problem, but that is a tall order in a world that confuses the number of publications and h-index scores with important contributions.

7. The failure of too many ecologists to adopt the scientific method of investigation: carrying out experiments that test multiple alternative hypotheses with clear predictions. Arguments continue endlessly based on words (‘concepts’) so vaguely defined as to be operationally meaningless. If you need an example, think ‘stability’ or ‘diversity’. These vague words are then herded into pseudo-hypotheses to doubly confound the confusion over what the critical questions in ecology really are.

8. The need for ecologists to work in stable groups. Serious ecological problems demand expertise in many scientific specialties, and we need better mechanisms to foster and maintain such groups. Assessing scientists on the basis of individual work is long out of date – the Nobel Prize is an anachronism – and we need strong groups concentrating on important issues in long-term studies. At the moment too many groups exist to do meta-analyses and too few to do science.

9. Placing the technological horse in front of the ecological cart. Ecology, like many sciences, is often led by technology rather than by questions. The current DNA bandwagon is one example, but we should not be so confused as to think that the most important questions in ecology are those that use the most technology. Jumping from one technological bandwagon to the next is a good recipe for minimizing progress.

10. The fractionation of ecology into subdisciplines, and the assumption that the only important research has been done since 2000. Aquatic ecologists do not talk to terrestrial ecologists, microbial ecologists live in their own special world, and avian ecologists do not talk to insect ecologists. The result is that the existing literature is too often wasted by investigators who have no idea that question XX has already been answered, either in another subdiscipline or in the literature from 50 years ago.

Not all of these limitations apply to every ecologist, and at best I would view them as a set of guideposts that need to be considered as we move further into the 21st century.
