Any scientist will tell you that science is a process of trial and error. Error, however, is rarely rewarded (and often not even admitted to) by the scientific community. This is a result of stagnation in the way that scientific discoveries are made public, with scientific publishing seemingly stuck in a paradigm unaffected by the modern digital age. The current system makes it difficult to publish negative results or failed experiments, often resulting in different labs repeating the same experiments and making the same mistakes. This lack of recognition for the value of failure holds back creative risk-taking in science.

Whilst scientific publishing isn’t exactly broken, it does leave a lot to be desired. Scientific discoveries are reported by publishing papers in journals, a convention dating back to 1665, when the first scientific journal was established. The submitted paper is traditionally a polished work, containing the necessary background information and the details of a set of experiments which, taken together, support a proposed conclusion. The case for scientific journals is strong – thorough peer review ensures that proposed conclusions are properly justified, and increases the integrity of the science behind them.

Unfortunately, publishing in journals is given too much credit in the scientific world; it is tied too tightly to the way that science is rewarded. Career scientists rely on constant funding to continue their research. Scientists (as well as the quality of their research) are judged on the portfolio of ‘perfect’ papers that they have had published in well-respected journals. This, along with citations from other papers, forms the basis of funding decisions; as such, publishing unfinished or ‘imperfect’ data can be received critically, and can sometimes even be career-damaging.

Prestigious journals such as Nature select papers based on the novelty and applicability of their findings, meaning that a negative or ‘uninteresting’ result will rarely make the cut. Furthermore, papers submitted to Nature are often compressed to include huge amounts of data that could have been published over the course of several papers (probably much earlier, and in a substantially clearer fashion). Although this data could be valuable to the research of other scientists, it often takes years to see the light of day. Labs that take more risks will obtain more negative results, and will not be rewarded; the net effect is to discourage the kind of out-of-the-box creative thinking needed to accelerate scientific progress.

Journal articles – even when published online – are static. They exist alone and cannot evolve; they can only be retracted, have comments added, or be referenced in separate publications that draw attention to any incorrect assertions. Science is not static. Styles of analysis and theoretical explanations evolve, but the original articles remain online and influence current research, even after better interpretations are found.

Open access journals do exist, and provide a place to publish results that aren’t novel enough to be accepted elsewhere. An example is PLoS One, which reviews submissions for scientific rigour rather than novelty or perceived importance. A 2008 BMJ study showed that open access articles received “89% more full-text downloads, 42% more PDF downloads, and 23% more unique visitors than subscription access articles in the first six months after publication”, allowing the research to reach a much larger audience. There is, however, a perceived bias among funders and senior scientists against people who have too many PLoS One papers on their CV. As such, one might conclude that it is acceptable to discuss ‘uninteresting’ or negative results, so long as you don’t publish too many of them.

The “file drawer effect”, described by Rosenthal as long ago as 1979, refers to exactly this situation, in which scientists do not publish results that are negative, contrary to expectations, or not statistically significant on their own. The Economist calculated that only 10-30% of the scientific literature is made up of negative results, despite negative results being more statistically trustworthy.

It would be easy from this description to think that the problem lies, as is often the case, with the money. However, Brian Nosek – co-founder of the Center for Open Science – disagrees, stating that “the critical barriers to change are not technical or financial; they are social. Although scientists guard the status quo, they also have the power to change it.” In the last few years we have seen some political power shift towards ordinary people, through social media and increased internet connectivity. Science has the potential to move in the same direction, and here, too, the key is effective use of the internet.

Why should publishing and evaluation be so intrinsically linked? Currently, each journal conducts its own peer review (a lengthy process) before deciding whether or not to accept an article. The result: 49% of articles are reviewed by more than one journal, and the average time between submitting a paper and seeing it published is about two years. What if, instead, there were a single, journal-independent peer review that graded each article once? Might this allow the novelty of an article to be judged separately from its scientific rigour?

As always, the internet provides. Platforms like bioRxiv and F1000Research allow articles to be published online before the peer review process begins. Sadly, these services are currently underused, serving primarily as a means of getting feedback on articles before formal publication, but they might well be the way forward. RIOJA, a UCL-led project, overlays an independent peer-review service onto such archives, facilitating early publication without sacrificing a thorough peer review process.

Internet advances have made it possible to strive towards a 21st-century way of rating scientific scholarship. A paper’s peer reviews are currently seen only by the authors, and are usually submitted anonymously, but why is this the case? Peer review offers clarification and suggested improvements to the experimental method, as well as possible new avenues for research, all of which would be invaluable to scientists viewing the article online. A scientific career should not be focused solely on publishing articles; instead, it should emphasise the importance of, and facilitate contribution towards, science. If funding decisions took into account scientists’ reviews, as well as their broader contributions to science, it would incentivise higher-quality reviewing (something for which there is little incentive at the moment). Scientists might spend less time worrying about which journals they get into, and more time on the quality of the work they are submitting.

Key to implementing this change is building new ways of rating and attributing value to scientists. If done correctly, the ‘publish or perish’ paradigm of scientific career progression could be broken. ImpactStory is a non-profit organisation that makes “tools to power the open science revolution”, amongst which is a program that scores scientists on the openness and accessibility of their publications. Another organisation, ResearchGate, can be likened to a Facebook for scientists, providing a platform on which they can compare results and answer each other’s questions. Each user is scored not only on their publications, but also on providing consistently highly-rated answers to questions from other users. ResearchGate founder Ijad Madisch thinks that scientific research should mimic the environment commonly seen within tech companies, where all of the data obtained by each researcher is shared throughout the company in real time. He writes in Scientific American that “results, methods, questions, failures and everything in between should be published immediately, with unique identifiers and timestamps to make clear who discovered what and when, alleviating the fear of being scooped”. His ideas have great merit; this approach would replace scientific competition with real-time collaboration, and could accelerate the speed at which technologies are developed and diseases are cured.

This change is not a utopia – it is readily achievable. Whilst the platforms for a new type of science already exist, they need support across the board in order to become the status quo. Scientists need to modernise their outlooks and, individually, take responsibility for progressing the culture of science. The power is with the people.

