Category Archives: Evaluating Research Quality

Back to p-Values

Alas, ecology has slipped lower on the totem pole of serious sciences thanks to an article that has captured the attention of the media:

Low-Décarie, E., Chivers, C., and Granados, M. 2014. Rising complexity and falling explanatory power in ecology. Frontiers in Ecology and the Environment 12(7): 412-418. doi: 10.1890/130230.

There is much that is positive in this paper, so you should read it, if only to decide whether or not to use it in a graduate seminar in statistics or in ecology. Much of what it concludes is certainly true: there are more p-values in papers now than there were some years ago. The question then comes down to what these kinds of statistics mean, how they would justify the conclusion captured by the media that explanatory power in ecology is declining over time, and the bottom line of what to do about falling explanatory power. Since, as far as I can see, most statisticians today seem to believe that p-values are meaningless (e.g. Ioannidis 2005), one wonders what the value of showing this trend is. A second item that most statisticians agree on is that R2 values are a poor measure of anything other than the items in a particular data set. Any ecological paper that analyses data summarizes many tests producing p-values and R2 values, of which only some are reported. It would be interesting to make a comparison with a recognized mature science (like physics or genetics) by asking whether past revolutions in understanding and predictive power in those sciences corresponded with increasing numbers of p-values or R2 values.
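The point that an R2 value describes only the items in a particular data set is easy to demonstrate. Here is a minimal, purely illustrative sketch (not from the paper): the same underlying process, sampled over a narrow or a wide range of the predictor, yields very different R2 values.

```python
import numpy as np

rng = np.random.default_rng(42)

def r_squared(x, y):
    """R^2 of an ordinary least-squares line fitted to (x, y)."""
    slope, intercept = np.polyfit(x, y, 1)
    residuals = y - (slope * x + intercept)
    ss_res = np.sum(residuals ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot

# Identical true relationship (y = 2x + noise); the two samples differ
# only in the range of x sampled.
x_narrow = rng.uniform(0, 1, 100)
x_wide = rng.uniform(0, 10, 100)
y_narrow = 2 * x_narrow + rng.normal(0, 1, 100)
y_wide = 2 * x_wide + rng.normal(0, 1, 100)

print(r_squared(x_narrow, y_narrow))  # low: noise dominates over a narrow range
print(r_squared(x_wide, y_wide))      # high: same process, wider range of x
```

The biology is unchanged between the two samples; only the sampling design differs, yet the "explanatory power" looks completely different.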

To ask these questions is to ask: what is the metric of scientific progress? At the present time we confuse progress with indicators that may have little to do with scientific advancement. As journal editors we race to increase our impact factor, which is interpreted as a measure of importance. For appointments to university positions we ask how many citations a person has and how many papers they have produced. We confuse scientific value with numbers which, ironically, might have a very low R2 value as predictors of potential progress in a science. These numbers make sense as metrics to tell publishing houses how influential their journals are, or to tell Department Heads how fantastic their hiring choices are, but we fool ourselves if we accept them as indicators of value to science.

If you wish to judge scientific progress you might wish to look at books that have gathered together the most important papers of the time, and examine a sequence of these from the 1950s to the present time. What is striking is that papers that seemed critically important in the 1960s or 1970s are now thought to be concerned with relatively uninteresting side issues, and conversely papers that were ignored earlier are now thought to be critical to understanding. A list of these changes might be a useful accessory to anyone asking about how to judge importance or progress in a science.

A final comment would be to look at the reasons why a relatively mature science like geology has completely failed to predict earthquakes in advance, or even to specify the locations of some earthquakes (Stein et al. 2012; Uyeda 2013). Progress in understanding does not of necessity dictate progress in prediction. And we ought to be wary of confusing progress with p- and R2 values.

Ioannidis, J.P.A. 2005. Why most published research findings are false. PLoS Medicine 2(8): e124.

Stein, S., Geller, R.J., and Liu, M. 2012. Why earthquake hazard maps often fail and what to do about it. Tectonophysics 562-563: 1-24. doi: 10.1016/j.tecto.2012.06.047.

Uyeda, S. 2013. On earthquake prediction in Japan. Proceedings of the Japan Academy, Series B 89(9): 391-400. doi: 10.2183/pjab.89.391.

On Journal Referees

I have been an editor of enough ecological journals to know the problems of referees first hand. The first problem is finding a referee for a particular paper. I would guess that more than two-thirds of scientists asked to referee a paper now say they do not have time. This leads to another question of why no one has any time to do anything, but that is a digression. If you are fortunate you get 2 or 3 good referees.

The next problem comes when the reviews of the paper come back. Besides dealing with the timing of their return, there are four rules that ought to be enforced on all referees. First, review the paper as it is. Do not write a review saying this is what you should have written; that is not your job. Second, if the paper is good enough, be positive in making suggestions for improvement. If it is not good enough in your opinion, try to say so politely and suggest alternative journals. Perhaps the authors are aiming for a journal that is too prestigious. Third, do not say in so many words that the author should cite the following 4 papers of mine. And fourth, do not make ad hominem attacks on the authors. If you do not like people from Texas, this is not the place to take it out on the particular authors who happen to live there.

Given the reviews, the managing editor for the paper ought to make a judgment. Some reviews do not follow the four rules above. A good editor discards these and puts a black mark on the file of that particular reviewer. I would not submit a referee’s review to the authors if it violated any of the above 4 rules. I have known and respected editors who operated this way in the past.

The difficulty now is that ecological journals are overrun. This is driven in part by the desire to maximize the number of papers one publishes in order to get a job, and in part by journals not wanting to publish longer papers. Journals have neither the funding nor the desire to grow in relation to the number of users. This typically means that papers are sent out for review with a note attached saying that we have to reject 80% or so of papers regardless of how good they are, a rather depressing order from above. When this level of automatic rejection is reached, the editor-in-chief has the power to reject any kind of paper not in favour at the moment: I like models, so let us publish lots of modelling papers; or I like data, so let us publish only a few modelling papers.

One reason journals are overrun is that many of the papers published in our best ecology journals are discussions of what we ought to be doing. They may be well written, but they add nothing to the wisdom of our age if they simply repeat what has been in standard textbooks for the last 30 years. In days gone by, I think many of these papers might have been given as review seminars, possibly at a meeting, but no one would have thought them worthy of publication. Clearly the editors of some of our journals think it is more important to talk about what to do than to do it.

I think, without any empirical data, that the quality of reviews of manuscripts has deteriorated as the number of papers published has increased. I often have to translate reviews for young scientists who are devastated by some casual remark in a review: "Forget that nonsense, deal with this point as it is important, ignore this insult to your supervisor, go have a nice glass of red wine and relax…". One learns how to deal with poor reviews.

I have been reading Bertram Murray’s book “What Were They Thinking? Is Population Ecology a Science?” (2011), unfortunately published after he died in 2010. It is a long diatribe about reviews of some of his papers and it would be instructive for any young ecologist to read it. You can appreciate why Murray had trouble with some editors just from the subtitle of his book, “Is Population Ecology a Science?” It illustrates very well that even established ecologists have difficulty dealing with reviews they think are not fair. In defense of Murray, he was able to get many of his papers published, and he cites these in this book. One will not come away from this reading with much respect for ornithology journals.

I think if you can get one good, thoughtful review of your manuscript you should be delighted. And if you are rejected from your favourite journal, try another one. The walls of academia could be papered with letters of rejection for our most eminent ecologists, so you are in the company of good people.

Meanwhile if you are asked to referee a paper, do a good job and try to obey the four rules. Truth and justice do not always win out in any endeavour if you are trying to get a paper published. At least if you are a referee you can try to avoid these issues.

Barto, E. Kathryn, and Matthias C. Rillig. 2012. “Dissemination biases in ecology: effect sizes matter more than quality.” Oikos 121 (2):228-235. doi: 10.1111/j.1600-0706.2011.19401.x.

Ioannidis, John P. A. 2005. “Why most published research findings are false.” PLoS Medicine 2 (8):e124.

Medawar, P.B. 1963. “Is the scientific paper a fraud?” In The Threat and the Glory, edited by P.B. Medawar, 228-233. New York: Harper Collins.

Merrill, E. 2014. “Should we be publishing more null results?” Journal of Wildlife Management 78 (4):569-570. doi: 10.1002/jwmg.715.

Murray, Bertram G., Jr. 2011. What Were They Thinking? Is Population Ecology a Science? Infinity Publishing. 310 pp. ISBN 9780741463937


Research funding for women

NSERC funding by gender

Success rates are similar, but women still get less

Judith Myers UBC

NSERC has over the years provided data on request for the Discovery Grant Program in Ecology and Evolution, broken down both by gender and by category of applicant, e.g. established, new, first renewals, etc. In 2008, I summarized these data for presentation at the Canadian Coalition of Women in Science, Engineering, Trades and Technology (CCWESTT), as "NSERC Discovery Grant Statistics for males and females 2002-2008". That analysis showed a consistent trend for women to receive smaller grants than men, with the exception of new applicants in 2007 and 2008, for whom grants for women were larger.

Here, I analyze the NSERC data from 2009 and 2013. I show that success rates for grant applications are similar between men and women; however, the trend for women to receive lower grant funding on average continues.


Figure 1. Proportion of applicants successful in the 2009 and 2013 competitions. "Renewal" is first-time renewal, and "first" includes those applying for the first time as well as applicants who were previously unsuccessful in their first attempt. Horizontal lines indicate the overall average success rate: 73% in 2009 and 63% in 2013. The number of applicants is given at the top of each bar.

Figure 1 shows that the overall success rate in 2013 is approximately 10 percentage points lower than in 2009, that the successes of males and females are similar, and that the success rate across categories is similar, although first-renewal success is lower overall and lowest for females. Given the importance of this stage for establishing the future careers of these applicants, this trend is of concern.


Figure 2. Average grants for different categories of applicants for NSERC Discovery Grants in 2009 and 2013. The horizontal lines indicate the overall average grant size: $33 351 in 2009 (grants $5028 less for females than males) and $31 828 in 2013 ($6650 less for females than males).

Figure 2 shows that the trend seen in the earlier data continues, with grants to males being larger than those to females by a substantial amount. A factor here is that there are no female "high fliers" with substantially larger grants than the average; overall, median grants are about the same for males and females. I have not taken accelerator grants into consideration here.

Given that females on average receive approximately $6500 less than their male colleagues, it would be interesting to know how this translates into productivity, measured as the number of publications in one year. For an indication of how publications relate to grant size, I selected individuals from the NSERC results for 2013, drawn from a range of grant sizes but including those with the largest grants and a sampling from the lower grant sizes. I then used Web of Science to determine the number of publications in 2012-2013 for each chosen individual.


Figure 3. Size of grant awarded in 2013 and number of publications in 2012-2013 for an arbitrary sample of grantees. Neither relationship is significant, but that for males is influenced by the high publication number for two of the male “high fliers”.

The lack of relationship between yearly publication rates and grant size shows that productivity does not relate strongly to funding success. No female received a grant of more than $50 000 in 2013, so the range of the data is smaller for them. For males, high publication numbers for two "high fliers" create a weak upward trend in the relationship of publications to funding, but average publication numbers for four other "high fliers" pull this relationship down. For these selected data the average number of publications was 10.5 for males and 9.1 for females. Removing the "high fliers" from the male data results in a slightly higher average grant size for males than for females, but only 7 publications on average for males compared to 9 for females at similar funding levels. Although this is a small and selected data set, it likely reflects the overall pattern of little relationship between grant size and publication numbers. Similarly, Lortie et al. (2012, Oikos 121: 1005-1008) found that for the most highly funded North American ecologists and environmental scientists, citations per paper were not related to increased levels of funding, although for NSERC-funded researchers there was a weak relationship. Fortin and Currie (2013, PLoS ONE, DOI: 10.1371) found that the number of papers, highest times cited, and number of high-impact articles were only weakly related to NSERC funding levels for Animal Biology, Chemistry, and Ecology and Evolution. Missing from these analyses are the data for individuals who receive no funding. Thus the reduced proportion of successful renewals in the current funding environment, and the slightly reduced success of first-time renewals, are not reflected in these evaluations of research productivity.
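The pattern described above, a correlation that depends heavily on a couple of "high fliers", can be sketched with a toy calculation. The numbers below are purely illustrative and are not the actual NSERC sample:

```python
import numpy as np

# Hypothetical grantee sample: grant size in dollars and publications in
# one year, with essentially no relationship except for two high-publishing
# "high fliers" holding the largest grants.
grants = np.array([25, 28, 30, 32, 35, 38, 40, 45, 55, 70], dtype=float) * 1000
pubs = np.array([9, 11, 8, 10, 7, 12, 9, 8, 25, 22], dtype=float)

def pearson_r(x, y):
    """Pearson correlation coefficient of two equal-length samples."""
    xc, yc = x - x.mean(), y - y.mean()
    return np.sum(xc * yc) / np.sqrt(np.sum(xc ** 2) * np.sum(yc ** 2))

r_all = pearson_r(grants, pubs)

# Drop the two largest grants (the "high fliers") and recompute.
keep = grants < 50_000
r_trimmed = pearson_r(grants[keep], pubs[keep])

print(f"r with high fliers:    {r_all:.2f}")
print(f"r without high fliers: {r_trimmed:.2f}")
```

With the two extreme points included, the correlation looks strong; remove them and it collapses to near zero, which is exactly why a handful of outliers should not be allowed to drive conclusions about funding and productivity.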
A recent study of global patterns of publications and citations shows that women publish less than men, particularly in areas in which research is expensive, and that they are less likely to participate in international collaborations and less likely to be first or last authors on papers (Larivière et al. 2013, Nature 504: 211-213). There are many factors involved here.

We do not have data on numbers of HQP (highly qualified personnel), a metric that is heavily weighted in the NSERC Discovery Grant evaluation. It is likely that the reduced funding level for females results in fewer HQP for them, and this could have a strong impact on their average funding from NSERC and publication numbers in the future.

In conclusion, the new system of Discovery Grant evaluation appears to produce more similar levels of funding across categories but does not remove the bias towards larger average grants for males. The impact on research productivity of the 37% of applicants who receive no funding as a result of the lower success rate is not easy to evaluate, but the data do not support the hypothesis that higher funding for fewer individuals increases Canada's research productivity.

Models need testable predictions to be useful

It has happened again. I have just been to a seminar on genetic models, something about adaptation of species at the edges of their ranges. Yes, this is an interesting topic, relevant to interpreting species' responses to changing environments. It ended with the speaker saying something like, "It would be a lot of work to test this in the field." How much more useful my hour would have been if the talk had ended with "Although it would be difficult to do, this model makes the following predictions that could be tested in the field," or "The following results would reject the hypothesis upon which this model is based."

Now it is likely that some found these theoretical machinations interesting and satisfying in some mathematical way, but I feel that it is irresponsible not even to consider how a model could be tested, and the possibility (a likely possibility at that) that it does not apply to nature and tells us nothing helpful about what is going to happen to, for example, willow or birch shrubs at the edges of their ranges in the warming arctic.

Recommendation: no paper on models should be published or talked about unless it makes specific, testable predictions.

Open Letter from a Scientist to a Bureaucrat

Let us assume for the moment that I am a scientist who has worked in a government research organization for 25 years under a series of bureaucrats. I have just retired and the object of this letter is to tell a bureaucrat what is good and what is bad about the bureaucratic government system. If you work in a perfect government system, perhaps you do not need to read further.

Dear Sir/Madam:

I would like to offer you some free advice that comes from a scientist who has worked in government for many years. This is presumptuous to be sure in light of our relative positions, but I feel you might benefit from some notes from the trenches.

First, science should never be organized in a top-down manner. We ecologists know about trophic cascades and the consequences they have for the lower trophic levels. You should not tell us what to do, because you know nothing about the subject matter of the science, in this case ecology. I note especially that an MBA does not confer infinite wisdom on scientific matters. So I suggest you consider organizing things bottom-up. Your job is to provide scientists with the technical support, the funding, and the facilities to do their work. This does not preclude you giving us general areas of science in which we are expected to do our research. If our general remit is to study the effectiveness of pollination in California crops, you should not tolerate us going to Africa to study elephant ecology. We appreciate that the government has at least some general ideas of what is critical to study. If it does not, it would be advisable to gather a group of scientists to discuss what the critical problems are in a particular area of science. Scientists do not work in closed rooms, and they do have a general understanding of what is happening in their field.

Second, do not muzzle us about anything scientific. We do not work for you or for the current government; we work for the people of Canada or Australia or whatever country, and our mandate is to speak out on scientific questions, to provide evidence-based policy guidance, and to educate the public when errors are promulgated by people who know nothing of what they speak. This could well include government ministers, who are known at least on occasion to utter complete nonsense. Our job is not to support the government's policies of the day but to provide evidence about scientific questions. In general we do not see government ministers crying out that they know more about brain surgery than trained doctors, so we think the same attitude ought to be taken toward ecologists.

Third, ask your scientists about the time frame of their scientific studies. Most bureaucrats seem to think that, since the world was created in 7 days, scientific work ought to take no more than a year or two or perhaps three. We would like to tell you that many, perhaps most, important ecological questions involve a time frame of 10 years or more, and some require continuous funding and support for periods in excess of 50 years. You apparently did not ask medical scientists to stop working on cancer or malaria after 3 years or even 50 years, so we are uncertain why ecologists should be kept to short time frames for their research. Ecological research is perhaps the most difficult of all the sciences, so if we do not find answers in a few years it is not because we are not working hard enough.

Finally, ask your scientists to publish in national and international journals, because that is the cornerstone for judging scientific progress. We do not mind having rules about rates of publication. And as a spur, please fund your scientists to attend scientific meetings to present their results to the scientific world. And have them communicate to the public what they are doing and what they have found. After all, the public pays, so why should they not hear what has come of their tax dollars?

Your job, in a nutshell, is to support your scientists, not to hinder them; to encourage their work; and to speak to the higher levels of government about why funding science is important. And, at least on occasion, to protest about government policies that are not based on scientific evidence. If you are successful in all of this, the people of your country will be the better for it. On the other hand, you may be headed for early retirement if you follow my advice.

I wish you success.

Sincerely yours,

A.B.C. Jones PhD, DSc, FRS, FAA

On Understanding the Boreal Forest Ecosystem

I have spent the last 40 years studying the Yukon boreal forest. When I tell this to my associates I get two quite different reactions. On the positive side, they are impressed with the continuity of effort and the fact that we have learned a great deal about the interactions of species in the Canadian boreal forest (Krebs, Boutin, and Boonstra 2001). On the negative side, I am told I am at fault for doing something of no practical management importance for so long when there are critical conservation problems in our Canadian backyard. Clearly I prefer the positive view, but everyone can decide these issues for themselves. What I would like to do here is lay out what I think are the critical issues in the Canadian boreal forest that have not been addressed so far. I do this in the hope that someone will pick up the torch and look into some of them.

The first issue is that ecological studies of the boreal ecosystem are completely fragmented. The most obvious division is that we have studied the boreal forest in the southwest Yukon with few concurrent studies of the alpine tundra that rises above the forest in every range of mountains. The ecotone between the forest and the tundra is not a strict boundary for many plant species or for many of the vertebrate species we have studied. On a broader scale, there are few studies of aquatic ecosystems within the boreal zone, either in lakes or in streams, another disconnect. The wildlife management authorities are concerned with the large vertebrates (moose, bears, caribou, mountain sheep) and this work tends not to tie in with work on the smaller species in the food web. Interest in the carbon dynamics of the boreal zone has greatly increased, but these studies in Canada are also completely disconnected from the ecological studies that consider population and community dynamics. I think it is fair to say that carbon dynamics in the boreal forest could turn out to be a very local affair, and too much generalization has already been made from too little spatial and temporal data.

One could consider the ecology of the boreal zone as a puzzle, with bits of the puzzle being put together well by researchers in one particular area, but with no view of the major dimensions of the total puzzle. This is readily understood when much of the research is done as graduate thesis work with a limit of 4-5 years before researchers move on to another position. It is also a reflection of the low funding that ecology receives.

Within the Yukon boreal forest there are several areas of research that we have not been able to address in the time my many colleagues and I have worked there. Mushroom crops come and go in apparent response to rainfall (Krebs et al. 2008), but we do not know the species of above-ground mushrooms and consequently do not know whether their fluctuations are uniform or whether some species have specialized requirements. Since fungi are probably the main decomposers in this ecosystem, knowing which species will do what as the climate changes could be important. On a practical level, foresters are determined to log more and more of the boreal zone, yet we have no clear understanding of tree regeneration, nor indeed any good studies of forest succession after fire or logging. Since logging in northern climates is more of a mining operation than a sustainable exercise, such information might be useful before we proceed too far. If the turnaround for a logged forest is of the order of 300 years, any kind of logging is unsustainable on a human time frame.

The list goes on. Snowshoe hare cycles vary greatly in amplitude, and we suspect that this is due to predator abundance at the start of each 10-year cycle (Krebs et al. 2013). The means to test this idea, satellite telemetry, are readily available, but it would require a lot of money because these collars are expensive and would need to be deployed on lynx, coyotes, and great horned owls at least. And it would need to be done on a landscape scale, with cooperating groups in Alaska, the Yukon, the Northwest Territories, and British Columbia at least. Large-scale ecology to be sure, but the results would be amazing. Radio-telemetry has the ability to interest the public, and each school in the region could have its tagged animals to follow every week. Physicists manage to convince the public that they need lots of money for large experiments, but ecologists with down-to-earth questions are loath to ask for a lot of money to find out how the world works on a large scale.

Migratory songbirds have been largely ignored in the boreal forest, partly because they leave Canada after the summer breeding period, yet at least some of these songbirds appear to be declining in numbers for no clear reason. Studies of them are virtually absent; we monitor numbers in imprecise ways, and continue to mark the position of the deck chairs on the Titanic with no understanding of why the ship is sinking.

Insect populations in the boreal forest are rarely studied unless they are causing immediate damage to trees, and consequently we have little information on their roles in ecosystem changes.

At the end of this list we can ask, in the best manner of the investigative reporter, why did you not do these things already? The answer to that is also informative. It is because almost all this research has been done by university professors and their graduate students and postdocs. What my colleagues have done is amazing, because they are not in charge of the boreal forest. The people are, via their governments, provincial and federal. The main job of all of us while this research in the Yukon boreal forest was being done has been education: to teach and to do research that will train students in the best methods available. So if you wish to be an investigative reporter, it is better to ask why governments across the board have not funded the federal and provincial research groups whose mandate is to understand how this ecosystem operates. Because all these questions are about long-term changes, such a research group must have stable funding and person-power over the long term. I have seen nothing in my lifetime that comes close to this in government environmental work except for weather stations. In the short term our governments work to the minute with re-election in sight, and long-term vision is suppressed. The environment is seen as a source of dollars and a convenient garbage can, and science only gets in the way of exploitation. And in the end Mother Nature will take care of herself, or so they hope. Perhaps we need a few Bill Gates types to get interested in funding long-term research.

But there remain for ecologists many interesting questions that are at present unanswered, and answering them will help us complete the picture of how this large ecosystem operates.

Krebs, C.J., S. Boutin, and R. Boonstra, editors. 2001. Ecosystem Dynamics of the Boreal Forest: the Kluane Project. Oxford University Press, New York.

Krebs, C.J., P. Carrier, S. Boutin, R. Boonstra, and E.J. Hofer. 2008. Mushroom crops in relation to weather in the southwestern Yukon. Botany 86:1497-1502.

Krebs, C.J., K. Kielland, J. Bryant, M. O’Donoghue, F. Doyle, C. McIntyre, D. DiFolco, N. Berg, S. Carrier, R. Boonstra, S. Boutin, A.J. Kenney, D.G. Reid, K. Bodony, J. Putera, and T. Burke. 2013. Synchrony in the snowshoe hare cycle in northwestern North America, 1970-2012. Canadian Journal of Zoology 91:562-572.

On Alpha-Ecology

All science advances on the backs of previous scientists. No advances can be made without recognizing problems, and problems cannot be recognized without a great deal of completed descriptive natural history. Natural history has been described by some physicists as 'stamp collecting' and so has been condemned forever to the bottom of the totem pole of science, as the worst thing you could possibly do. Perhaps we would improve our image if we called natural history alpha-ecology.

Let us start with the biggest problem in biology: the fact that we do not know how many species inhabit the Earth (Mora et al. 2011). A minor problem, most people seem to think, and little effort has gone into encouraging students to make a career of traditional taxonomy. Instead we can sequence the genome of any organism without even being able to put a Latin name on it. Something is rather backwards here, and a great deal of alpha-biology is waiting to be done on this inventory problem. Much of taxonomic description tests low-level hypotheses about evolutionary relationships, and these are important to document as part of understanding the Earth's biodiversity.

In ecology we have an equivalent problem: describing the species that live in a community or ecosystem, and then constructing the food webs of the community. This is a daunting task, and if you wish to understand community dynamics you will have to do a lot of descriptive work, alpha-ecology, before you can get to the point of testing hypotheses about community dynamics (Thompson et al. 2012). Again, it is largely detective work to see who eats whom in a food web, but without it we cannot progress. The second part of community dynamics is being able to estimate accurately the numbers of organisms in the different species groups. Once you dig into existing food web data, you begin to realize that much of what we take to be a good estimate of abundance is in fact a weak estimate of unknown accuracy. We have to be careful in analysing community dynamics to avoid estimates based more on random numbers than on biological reality.

This problem came home to me in a revealing exchange in Nature about whether the existing fisheries data for the world's oceans are reliable or not (Pauly, Hilborn, and Branch 2013). For years we have been managing the oceanic fisheries of the world on the basis of fishing catch data of the sort reported to the FAO, and yet there is considerable disagreement about the reliability of these numbers. We must continue to use them, as we have no other source of information for most oceanic fisheries, but there must be some concern that we are relying too heavily on unreliable data. Some fishery scientists argue from these data that we are overexploiting the ocean fisheries; other fishery scientists argue that the oceanic fisheries are by and large in good shape. Controversies like this confuse the public and the policy makers, and tell us we have a long way to go to improve our alpha-ecology.

I think the bottom line is that if you wish to test any ecological hypothesis you need reliable data, and this means a great deal of alpha-ecology is needed, research that will not win you a Nobel Prize but will help us understand how the Earth’s ecosystems operate.

Mora, C., et al. 2011. How many species are there on Earth and in the ocean? PLoS Biology 9:e1001127.

Pauly, D., R. Hilborn, and T. A. Branch. 2013. Fisheries: Does catch reflect abundance? Nature 494:303-306.

Thompson, R. M., et al. 2012. Food webs: reconciling the structure and function of biodiversity. Trends in Ecology & Evolution 27:689-697.

In Defence of Hypothesis Testing in Ecology

In two recent scientific meetings I have attended (which must remain nameless to protect the innocent), I have found myself wondering about the state of hypothesis testing in ecological science. I have always assumed that science consists of testing hypotheses, yet I would estimate that roughly 75% of the talks I was able to attend showed no sign of any hypothesis. I need to qualify that. Some of these studies are completely descriptive – what species of ferns occur in national park X? Much effort now is devoted to sequencing genomes, the ultimate in descriptive biology. This kind of research work can be classified as alpha-biology, basic description which is necessary before any problems can be formulated. In my particular specialty of population cycles in mammals, much descriptive work had to be carried out to recognize the phenomenon of “cycles”. But then the question arises – at what point should we stop simple descriptions of mammal populations rising and falling? Do we need to study the dynamics of every rodent species that exists? Or in genetics, is our objective to sequence the genome of every species on earth? My point is that after we have enough basic description, we should move into hypothesis testing, asking why some phenomenon occurs and what mechanisms lie behind the simple observations. The important point here is that we should not have a single hypothesis or explanation for any set of observations but rather several alternative hypotheses. As a simple example, if we find our favourite plant species is declining in abundance, we should not simply try to connect this decline with climatic warming without considering a series of alternative explanations, and our observations or experiments should be capable of distinguishing among those alternatives.

The alternative argument is that we do not know enough about ecological systems to set up a series of credible alternative hypotheses. It is quite possible to go on describing events endlessly in science in the hope that some wisdom will emerge, but I do not think this is a profitable use of time or money. In ecology in particular I would argue that there is not a single question one can ask that cannot be answered by at least 2 or 3 different mechanistic hypotheses. Our job is to articulate these alternatives and to do whatever studies or experiments are needed to distinguish among them. Of course it is always possible that the correct answer is not among the 2 or 3 hypotheses we suggest at the start of an investigation, and this is often why one study leads to a further one. Consequently we cannot accept statements like “I have no idea why this observation has occurred”. Such a statement means you have not thought deeply enough about what you are studying. Ecological surprises certainly occur while we study any particular community or ecosystem, but we know enough now to suggest several possible mechanisms by which any ecological surprise might be generated.
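One standard way to distinguish among competing mechanistic hypotheses is formal model comparison, for instance via the Akaike information criterion (AIC). The sketch below is illustrative only: the plant-abundance data and the predictions attributed to each hypothesis are invented, and for simplicity each candidate model is assumed to have the same number of fitted parameters.

```python
import math

def aic(rss, n, k):
    """AIC for a least-squares fit: n * ln(RSS/n) + 2k."""
    return n * math.log(rss / n) + 2 * k

# Hypothetical declining plant-abundance series and the predictions of
# three competing mechanistic hypotheses (all numbers invented).
observed = [100, 92, 85, 70, 64, 55]
predictions = {
    "climate warming": [98, 91, 84, 77, 70, 63],
    "herbivory":       [100, 90, 81, 73, 66, 59],
    "disease":         [95, 94, 93, 70, 65, 52],
}

n = len(observed)
k = 2  # fitted parameters per model, assumed equal for this sketch
scores = {}
for name, pred in predictions.items():
    rss = sum((o - p) ** 2 for o, p in zip(observed, pred))
    scores[name] = aic(rss, n, k)

# Lowest AIC = best-supported hypothesis among those considered.
best = min(scores, key=scores.get)
print(best)  # herbivory
```

Note that the winner is only the best of the hypotheses on the table, which is exactly Chamberlin’s point: if the true mechanism was never articulated, no amount of model comparison will find it.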

So I think it is incumbent on every ecologist to ask (1) what is the problem or question my research is addressing? And (2) what probable mechanisms can be invoked as the cause of this problem or the answer to this question? Vagueness may be a virtue in politics but it is not a virtue in science. And I look forward to future conferences in which every paper specifies a precise hypothesis and its alternatives. Chamberlin (1897) stated the case for multiple working hypotheses, Karl Popper (1963) asked very specifically what your hypothesis forbids from happening, and John Platt (1964) pulled it together in a critical paper. Important work was done before the iPhone was invented. Good reading.

Chamberlin, T. C. 1897. The method of multiple working hypotheses. Journal of Geology 5:837-848 (reprinted in Science 148: 754-759 in 1965).

Platt, J. R. 1964. Strong inference. Science 146:347-353.

Popper, K. R. 1963. Conjectures and Refutations: The Growth of Scientific Knowledge. Routledge and Kegan Paul, London.

On publishing in SCIENCE and NATURE

We are having an ongoing discussion at the University of Canberra Institute for Applied Ecology about the need to obtain a measure of our strength in research. We have entered the age of quantification of all things, even those that cannot be quantified, and so each of us must get our ranking from our citation rates, h-scores, or journal impact factors. And institutes rise and fall, along with our research grants, on the basis of these numbers. All of this seems to be necessary but is quite silly for two reasons. First, the importance of any particular paper or idea can only be judged in the long term, so trying to decide whether someone should have a job because of their citation rate is a cop-out. Second, this quantification undermines the judgment of scientists and administrators as adjudicators of the relative merits of specific research and specific scientists. The problem is that, as a young scientist in particular, you are caught in a web of nonsense and you have to play the game.

The name of the game is to get a paper in SCIENCE or NATURE. To do this you must shorten the presentation so much that it is nearly unintelligible and violates the long-standing assumption that a scientific paper must contain enough detail that someone else can repeat the study and test its conclusions. These details are typically relegated to the supplementary materials that one can download separately from the published paper. So these papers become like headlines in a newspaper, giving a grand conclusion with little of the detail of how it was reached. But this publication is the hallmark of success, so one must try. The only rule I can suggest is to have a Plan B for publication, since about 99% of papers submitted to SCIENCE and NATURE are rejected.

There is a demography at work here that we must keep in mind. If scientific output is doubling approximately every 7 years, then getting a paper into SCIENCE or NATURE now is twice as hard as it was 7 years ago, assuming the journals accept a fixed number of papers and acceptance is random. So when your supervisor tells you that he or she got a paper in SCIENCE xx years ago, and that you should do the same now, you might point out the demographic momentum of science.
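The arithmetic behind this demographic argument can be sketched as follows, assuming (purely for illustration, with invented numbers) a fixed number of accepted slots per year and random acceptance among all submissions:

```python
def acceptance_probability(slots, submissions):
    """Chance a given paper is accepted when `slots` papers are chosen
    at random from `submissions` candidates."""
    return slots / submissions

# Hypothetical numbers, invented for illustration only.
slots = 800              # fixed editorial capacity per year
submissions_then = 8000  # submissions 7 years ago
submissions_now = 16000  # submissions today, after one doubling time

p_then = acceptance_probability(slots, submissions_then)
p_now = acceptance_probability(slots, submissions_now)

print(p_then, p_now)  # 0.1 0.05 -- half the odds, i.e. twice as hard
```

Whatever the actual numbers, the structure of the argument is the same: with capacity fixed and submissions doubling, the random-acceptance odds halve each doubling time.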

Editors of any journal especially SCIENCE and NATURE are under great pressure, and if anyone thinks that their decisions are completely unbiased, they probably think that the earth is flat. All of us think some parts of our science are more important than others, and editorial decisions are far from perfect. The important message for young scientists is not to get discouraged when rejection slips appear. Any senior scientist could paper the hallways with letters of rejection from various journals. The important thing is to do good research, test hypotheses, make interesting speculations that can be tested, and move on, with or without a paper in SCIENCE or NATURE.

Finally, if someone wants an interesting project, you might trace the history of papers that have appeared in SCIENCE and NATURE over the last 50 years and see how many of them have been significant contributions to the ecological science we recognize now. Perhaps someone has done this already and it has been rejected by SCIENCE and is sitting in a filing cabinet somewhere…