It’s a familiar complaint: Much of basic and preclinical research cannot be reproduced. What’s more, most irreproducible studies remain in the literature, leading other scientists to waste time and resources attempting to repeat the findings. One solution is to make it easier to publish negative data, but past efforts to do so have had at best limited success. Now, a wave of new forums is taking on the challenge with innovative publishing models. The new outlets fill different niches. One is a channel at Faculty of 1000 Research that focuses on preclinical research, particularly from industry; one is an online journal—Science Matters—for single observations; another is a database where preclinical studies are graded. The new efforts appear amid a push for greater transparency in research. Some researchers have begun to post their lab notebooks, and other venues encourage scientists to upload unpublished or preprint data.

It is too early to know if these fledgling ventures will thrive. Researchers express enthusiasm for the initiatives in principle, but whether they will follow through and submit their data to these sites remains to be seen.

Regardless of whether these particular outlets take off, researchers agree that changes in science publishing are desperately needed. “A lack of reproducibility across the entire research spectrum is one of the biggest issues we are facing in the biological sciences,” Lorenzo Refolo at the National Institute on Aging, Bethesda, Maryland, told Alzforum. An alarming 2012 study by scientists at Amgen in Thousand Oaks, California, reported that they were unable to replicate key findings from 47 of 53 publications, and other studies have cast similar doubt on the reliability of the literature (see Begley and Ellis, 2012; Vasilevsky et al., 2013; Prinz et al., 2011).

In response to such reports, the National Institutes of Health and other groups have established new guidelines calling for more openness as well as more rigorous research methods and data analysis (see May 2013 news; Jan 2014 news; Jul 2015 news).

Thus far, many calls for reform have focused on cleaning up the literature. Part of the problem, researchers say, is that scientists have little incentive to submit contradictory or confirmatory data for publication. “Science rewards people who make new discoveries. Those who correct the literature don’t get credit,” Bruce Alberts at the University of California, San Francisco, told Alzforum. Alberts is a former editor-in-chief of Science magazine and a former president of the National Academy of Sciences. Those who do try to submit such data often run into barriers. Lawrence Rajendran at the University of Zurich spent two years and multiple rounds of revision on a paper challenging a high-profile finding, only to have Nature reject it. Rajendran told Alzforum that other papers have met a similar fate. “Clearly, something is not working in science publishing.”

Some groups tried to correct this problem more than a decade ago by starting outlets for contradictory findings. They include the Journal of Negative Results, founded by Bjorn Olsen at Harvard, and a negative-results section added by Neurobiology of Aging (see May 2003 news; Sep 2004 news). However, these resources remain underused. Scientists told Alzforum that cultural barriers persist. “There was a stigma to publishing negative results. It wasn’t seen as worthwhile,” Refolo told Alzforum. Alberts added that politics plays a role as well. “People don’t like to alienate powerful figures in science,” he said.

Bringing Industry Research into the Open
The newest publishing efforts attempt to overcome some of these problems by lowering the barriers to publication. They make it quick and easy to submit findings while encouraging open commentary and peer review. For example, in 2013, the Faculty of 1000 launched the online, open-access journal F1000 Research to rapidly publish findings in biology and medicine. Articles are posted within days, after an editor checks them for proper formatting and basic standards. After publication, at least two referees openly peer-review the paper. Authors can respond to criticism and make corrections. Papers that receive two or more positive reviews are indexed in PubMed and other online databases.

Alberts thought this format would be a good fit for publishing replicative studies. Together with Alexander Kamb at Amgen, he inaugurated a new F1000 Research channel, Preclinical Reproducibility and Robustness, on February 4 to provide a forum for data that confirms or contradicts published preclinical studies. “It is our hope that, both through this format and others, a vigorous new publishing culture can be established to enhance the crucial self-correcting feature of science,” they wrote in their initial editorial. The launch was covered by science journals and other media (see stories in Nature, Science, and The Economist). 

In particular, the researchers hope this channel will tap into the wealth of unpublished data accumulated by industry scientists who attempt to replicate academic findings. “There’s a huge amount of privately funded research that should be part of the scientific literature,” Alberts told Alzforum. Amgen scientists kicked off the effort by publishing three such studies. Kamb is working with other companies to encourage submissions, Alberts said. Amgen declined to make Kamb available for an interview.

Industry scientists normally have little incentive to publish in-house data. The new channel may change that, Alberts said. He believes publication will benefit industry scientists both by correcting the literature, saving time that would otherwise be wasted on futile experiments, and by flagging instances where replication failed because of faulty methods. The authors of the original study are invited to comment on the papers and note any methodological problems.

Other scientists praised the new forum. “[This is] a valuable asset to the research community. I strongly support the initiative,” Sangram Sisodia at the University of Chicago wrote to Alzforum. Rita Guerreiro at University College London, U.K., noted that post-publication peer review works well for the most part. “Because the editorial and reviewing process is completely transparent, it increases confidence in the review and levels the process for everyone,” she wrote.

Some researchers, however, pointed out potential problems. Gary Landreth at Case Western Reserve University, Cleveland, wrote, “The work must be subject to rigorous peer review to establish whether the reproduction study was in fact a genuine attempt to reproduce the original experiments … A significant issue with ‘failure to replicate’ studies is that while they gain attention for challenging published findings, they are rarely subject to the same scientific scrutiny.” (See full comment below.) In one of the first three articles on the channel, Amgen scientists reported a failure to replicate previous findings by Landreth and colleagues on the ability of the cancer drug bexarotene to lower Aβ levels (see Feb 2012 news; May 2013 news; Feb 2016 news). For his part, Landreth pointed out on the F1000 channel that Amgen scientists used a formulation of the drug that has different pharmacokinetic properties than the standard therapeutic version. “The ability to post comments on the F1000 site is a valuable feature of this forum,” he noted.

Observations, Not Stories
Another new outlet takes a different tack. Rajendran saw problems with the emphasis traditional science publishing places on telling complete stories. He believes this tendency introduces bias. “You can’t have plot spoilers or negative data,” Rajendran said. He started the online journal Science Matters in November 2015 to address this problem. The journal publishes single observations, including negative, confirmatory, and orphan data. Related observations are linked on the site, with green lines indicating confirmatory data and red lines contradictions. “If a node has many green arrows, you can visually see it’s a better target for trials,” Rajendran said. This journal launch also attracted media coverage (see Science, Vox Science and Health).
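
In effect, this turns the literature into a graph of observations. As a rough illustration, here is a minimal Python sketch of how such a network could be represented and tallied; the node names, edge labels, and tally logic are hypothetical, not Science Matters’ actual data model.

```python
# Hypothetical sketch of a network of linked observations; not
# Science Matters' actual implementation or schema.
from collections import defaultdict

# Each link connects two observations; "confirms" would render as a
# green line on the site, "contradicts" as a red one.
links = [
    ("obs-A", "obs-X", "confirms"),
    ("obs-B", "obs-X", "confirms"),
    ("obs-C", "obs-X", "contradicts"),
]

tally = defaultdict(lambda: {"confirms": 0, "contradicts": 0})
for _source, target, relation in links:
    tally[target][relation] += 1

# A node with many green (confirmatory) links stands out as better
# supported; here obs-X has two confirmations and one contradiction.
for node, counts in tally.items():
    print(node, counts)
```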

Rajendran emphasizes that it is easy to publish through this venue. Submissions can be written using an online template; they then go through an editorial office that checks formatting and removes the authors’ identifying information. The anonymized paper goes to the editorial board, which selects a handling editor who sends the paper out for review. The identities of the authors, editors, and reviewers are all hidden from one another, so that politics can play no role in reviews. Reviewers score the paper on a one-to-10 scale for technical quality, with a score of four or higher required for publication. They usually turn the paper around within two weeks. Accepted papers are indexed in PubMed. If a paper scores less than four, the reviewers must suggest ways to improve the study, and the authors have the option to resubmit with additional data. Rajendran believes this triple-blind, quantitative review process will help remove bias and allow more types of research to be published. “It’s democratizing the way we publish science,” he told Alzforum.
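
The decision rule itself is simple enough to sketch in a few lines. The following is illustrative only: the article specifies the one-to-10 scale and the threshold of four, while how multiple reviewers’ scores are combined is an assumption here.

```python
# Illustrative sketch of the scoring rule described above. Only the
# 1-10 scale and the threshold of 4 come from the article; averaging
# multiple reviewers' scores is an assumption.
from statistics import mean

PUBLICATION_THRESHOLD = 4

def triage(reviewer_scores):
    """Map 1-10 technical-quality scores to a publication outcome."""
    if mean(reviewer_scores) >= PUBLICATION_THRESHOLD:
        return "publish"
    # Below threshold, reviewers must suggest improvements and the
    # authors may resubmit with additional data.
    return "revise and resubmit"

print(triage([6, 7]))  # publish
print(triage([3, 2]))  # revise and resubmit
```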

Rajendran said initial reaction has been positive. More than 400 scientists have joined the editorial board, including leaders in the area of reproducibility such as Thomas Südhof at Stanford University and Brian Nosek, the founder of the Center for Open Science in Charlottesville, Virginia. The site has received around 60 submissions so far, with the first 14 papers published in February. Rajendran used the forum to publish his own findings questioning the conclusions of a high-profile paper that identified γ-secretase activating protein (GSAP) as a modulator of Aβ production (see Jan 2014 news), after Nature rejected them repeatedly. He plans to publish additional data that extend the finding further. Observations published on the site are like LEGO bricks, Rajendran noted. “The [science] story develops naturally. You basically open up your lab notebook.”

A Shift Toward Openness
These publishing ventures reflect a zeitgeist that increasingly values openness. Scientists in some fields have literally opened up their lab notebooks online, for example on the Open Source Malaria site. Neurodegeneration researchers have been slow to follow this trend, but at least one is trying it out. Rachel Harding, a Huntington’s disease researcher at the University of Toronto, announced in February that she would blog about her research in real time and upload all her methods and raw data to the data-sharing site Zenodo. The project was initiated by her funding agency, the Cure for Huntington’s Disease Initiative (CHDI) Foundation, which supports openness and collaboration in research, she told Alzforum.

“This is an experiment to see if releasing real-time, warts-and-all data fosters new collaborations within the field, and helps us do more effective, efficient science,” Harding said. She hopes to receive valuable suggestions on how to improve her experiments, as well as give Huntington’s patients insight into the scientific process. The risk, she acknowledged, is that other researchers could replicate her findings prepublication and scoop her data. “A lot of researchers think it’s a crazy idea. But we want to answer the big scientific questions as quickly as we can.”

Other venues are also making unpublished data available. bioRχiv (pronounced “bio-Archive”), launched in November 2013 and run by Cold Spring Harbor Laboratory, New York, posts preprint manuscripts that have not yet been published in journals. This allows authors to receive suggestions on drafts, as well as make data immediately available to the community. Several journals accept manuscripts directly from bioRχiv, and most of the posted preprints eventually appear in peer-reviewed journals, according to an article at Phys.org.

Research Ideas and Outcomes (RIO), founded in September 2015 by the academic publishing company Pensoft, goes a step further. This open-access journal pledges to publish all stages of research, from grant proposals, methods, and raw data to final results, as well as posters, conference abstracts, and thesis projects. Articles can be peer-reviewed either before or after publication. This model allows authors to receive credit for their work and ideas and to find potential collaborators, the publishers suggest.

The National Institute on Aging is getting in on the act as well. To improve research reproducibility, NIA scientists are developing a database of preclinical Alzheimer’s research. Institute researchers upload published papers and then grade them against best-practice guidelines for preclinical studies. Every study receives an Experimental Design report card that checks for features such as whether experiments were properly blinded and balanced for gender, whether the researchers reported drug dose and formulation, and whether they included a power calculation to determine if the sample size was large enough to detect the predicted effect. Many studies fail these basic tests. Refolo, who runs the project, noted that of the first 100 or so studies uploaded to the database, only one included a power calculation. The database is currently in beta form and will be released to the public this summer, he said.
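
For readers unfamiliar with the last item: a power calculation estimates, before an experiment starts, how many subjects are needed to detect an effect of a given size. Below is a minimal worked example in Python using the statsmodels library; the numbers are illustrative and not drawn from any study in this article.

```python
# Worked example of a prospective power calculation for a two-group
# comparison (two-sample t-test). All numbers are illustrative.
from statsmodels.stats.power import TTestIndPower

n_per_group = TTestIndPower().solve_power(
    effect_size=0.8,        # expected standardized difference (Cohen's d)
    alpha=0.05,             # two-sided significance level
    power=0.8,              # desired chance of detecting a real effect
    alternative="two-sided",
)
print(f"about {n_per_group:.0f} animals per group")  # roughly 26
```

A study run with far fewer subjects than such a calculation suggests is underpowered, so a negative result may simply reflect too small a sample rather than a true failure to replicate.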

The database will also include an option for researchers to register studies and upload unpublished data. As with other online papers, the data will receive a digital object identifier (DOI) so they can be cited and authors will receive credit for the work, Refolo noted. He hopes this will encourage the publication of negative and confirmatory findings that might otherwise go unreported.

Will all these ventures complement each other? Guerreiro supports the idea of different channels where distinct types of scientifically valid research can be published. She noted that in genetics, researchers usually cannot publish data on known mutations because the findings are not considered novel. Yet this information is important because it describes phenotypes associated with each mutation, and helps scientists assess whether a mutation is pathogenic and dissect the role of genetic variability in disease. Publication of such findings in a database could advance the field, she suggested. “The most expensive research projects are those that are performed and not published,” Guerreiro wrote to Alzforum.

As all these new outlets spring up, the question remains: Will scientists use them? Many researchers say these initiatives are a great idea, but have reservations about whether they adequately protect researchers’ ideas. Promotions and grants are still based on a scientist’s ability to publish original findings in top-tier journals, commenters noted. “The big issue [for new outlets] will be doing outreach and getting buy-in from the community. We’re talking about a cultural change,” Refolo said. Guerreiro, meanwhile, suggested the new forums will evolve in response to user feedback and become better utilized with time. “Change is inevitable. These new publication formats are the future,” she wrote.—Madolyn Bowman Rogers

Comments

  1. The issue of reproducibility of preclinical studies is of substantial importance, and there is broad consensus that these types of investigation must be appropriately designed and powered. The current NIH-based effort to require rigorous experimental design is a welcome move to mandate these changes.

    The F1000 initiative is laudable in principle. However, in order for it to be a reliable vehicle to validate, or not, studies published in the literature, the work must be subject to rigorous peer review to establish whether the reproduction study was a genuine attempt to reproduce the original experiments. Otherwise, the practice of replicating published research is open to abuse. 

    The recent report from Amgen published in the F1000 channel is indeed emblematic of the problems associated with "failure to replicate" studies. It concluded that the authors were unable to reproduce the bexarotene-mediated reduction in brain soluble Aβ levels published in Cramer et al., 2012. First and foremost, the Amgen study employed wild-type rats, whereas our original study was performed using transgenic mouse models of Alzheimer’s disease that overexpress β-amyloid. 

    Moreover, the authors treated the rats with a solubilized preparation of bexarotene. The original study used the clinical formulation of bexarotene (Targretin™), which is a micronized form delivered orally in aqueous solution. Drug formulation is critical, as the solubilized and micronized forms of bexarotene have very different pharmacokinetics. This point was explicitly discussed in the published literature on bexarotene, in the FDA filings, in a recent paper (Chen et al., 2014), and in our 2013 commentary on this work in Science.

    Tesseur and De Strooper published a comprehensive summary of these issues (When the dust settles: what did we learn from the bexarotene discussion?). It is clear from their summary that, of the six studies that attempted to repeat this aspect of our work, those that used the drug’s micronized form replicated the published results, whereas those that used solubilized forms of bexarotene did not. The Amgen study is particularly troubling, as this exact point had already been discussed extensively.

    In general, a significant problem with "failure to replicate" studies is that while they gain attention for challenging published findings, few are subject to the same scientific scrutiny. The Tesseur and De Strooper summary is a rare example of a detailed examination of the scientific issues. More commonly, the titles of the papers disparage the published findings while the studies themselves are less rigorous than those in the original publication. A clear example of this was a report on the failure to reproduce bexarotene-mediated behavioral improvement, in which the authors employed a transgenic AD mouse line that was not behaviorally impaired.

    The recently publicized Amgen report adds little to the discussion. I would argue that while the objectives are laudable, the editorial process was inadequate to meet its stated goals. The F1000 initiative is presumably a work in progress and has the potential to minimize these types of problems through public discussion of contentious issues. The ability to post comments on the F1000 site is a valuable feature of this forum.

    References:

    Cramer et al. ApoE-directed therapeutics rapidly clear β-amyloid and reverse deficits in AD mouse models. Science. 2012 Mar 23;335(6075):1503-6. Epub 2012 Feb 9. PubMed.

    Chen et al. Bexarotene nanocrystal: oral and parenteral formulation development, characterization and pharmacokinetic evaluation. Eur J Pharm Biopharm. 2014 May;87(1):160-9. Epub 2013 Dec 12. PubMed.

    Tesseur and De Strooper. When the dust settles: what did we learn from the bexarotene discussion? Alzheimers Res Ther. 2013;5(6):54. Epub 2013 Nov 7. PubMed.

  2. It's rewarding to see the recent attention to a topic of such critical importance. At the NIH AD summit in February 2015, I recommended something that at the time seemed game-changing: that after publishing a novel finding in a high-impact journal—e.g., a novel biomarker for AD—the same journal offer a prize to investigators who submit follow-up publications aimed at replicating the results. The journal would then publish the results submitted by independent investigators in a special issue about six months after the initial paper, and all manuscripts, positive or negative, would be accepted following peer review. In this way, the field could gain at least some idea of the viability of promising new biomarkers without waiting years to learn that the results did not hold up.

    Such a strategy would also accelerate the regulatory endorsement of novel tools such as biomarkers by gaining consensus for the most promising candidates as early as possible. The leading biomarkers will show the most consistent repeatability in different labs by different investigators and thus can be used with more confidence in clinical trials.

    In the current era, which embraces the sharing of clinical data to advance drug development, far less attention has been paid to sharing preclinical data. The issue of reproducibility in industry research deserves special attention, because the costs of such activities are daunting yet go unrecognized.

    “Failing fast” is a term often used in drug development, yet this concept is challenging to impose on the traditional academic culture. The initiatives summarized in this article will truly have impact when embraced by the larger community.

    I applaud these efforts, yet I agree with Dr. Landreth's comment that exact methodology is required to adequately interpret the results. Such findings may elucidate criteria for fostering standardization parameters for the future.


References

News Citations

  1. Guidelines at Nature Aim to Stem Tide of Irreproducibility
  2. National Institutes of Health Tackles Irreproducibility Problem
  3. New Journal Guidelines Aim to Boost Transparency in Research
  4. That Should Have Worked! Where to Publish? Try the New Journal of Negative Results
  5. Neurobiology of Aging to Publish Negative Results—Call for Manuscripts
  6. Upping Brain ApoE, Drug Treats Alzheimer's Mice
  7. Bexarotene Revisited: Improves Mouse Memory But No Effect on Plaques
  8. Bexarotene—First Clinical Results Highlight Contradictions
  9. GSAP Revisited: Does It Really Play a Role in Processing Aβ?

Paper Citations

  1. Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature. 2012 Mar 28;483(7391):531-3. PubMed.
  2. Vasilevsky NA, et al. On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ. 2013;1:e148. Epub 2013 Sep 5. PubMed.
  3. Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011 Sep;10(9):712. PubMed.

External Citations

  1. Journal of Negative Results
  2. Preclinical Reproducibility and Robustness
  3. editorial
  4. Nature
  5. Science
  6. The Economist
  7. Science Matters
  8. Science
  9. Vox Science and Health
  10. Open Source Malaria 
  11. blog 
  12. Zenodo
  13. bioRχiv
  14. Phys.org
  15. Research Ideas and Outcomes
