New Journal Guidelines Aim to Boost Transparency in Research
Irreproducible results are an enduring problem that plagues the scientific research community and slows progress, but what can be done about them? The sources and potential fixes are myriad, but in a perspective published June 26 in Science, researchers at the Center for Open Science (COS) in Charlottesville, Virginia, focused on journals. The reason? “Publications are the currency of science,” said Brian Nosek, the director of COS and first author of the paper. A psychology researcher at the University of Virginia, Nosek, along with more than three dozen other researchers, published a set of eight standards that journals could adopt to boost research transparency and reproducibility. So far, more than 100 journals and 30 organizations have endorsed the guidelines, but it remains to be seen how many will put them into action.
Lack of reproducibility in scientific studies has come under the spotlight in recent years, as several large-scale efforts to replicate preclinical findings have produced dismal results (see Prinz et al., 2011; Begley and Ellis, 2012; Arrowsmith 2011; Vasilevsky et al., 2013). A recent analysis conducted by Leonard Freedman and colleagues at the Global Biological Standards Institute in Washington, D.C., estimated that half of the $56 billion spent on preclinical research every year in the United States funds experiments that cannot be replicated (see Freedman et al., 2015). Freedman and colleagues proposed tackling the problem from the bottom up—by training graduate students and postdocs in proper experimental design, and implementing standards for reagents and protocols more akin to those employed in clinical research.
In their perspective, which is freely available on the Science website, Nosek and colleagues present a complementary approach aimed at increasing reproducibility from the top down. The guidelines grew out of a November 2014 meeting of the Transparency and Openness Promotion (TOP) committee. They encourage journals to adopt a range of standards as part of the publication process. Each of the eight standards has three levels of increasing stringency that journals could enforce (all of them better than level zero, which does nothing to promote transparency). The first five standards concern the proper sharing of citations, data, analytic methods, research materials, and experimental design. At the highest stringency (level 3), researchers would be required to deposit their data and/or methods in public repositories, and some experiments would have to be reproduced by independent researchers prior to publication.
The final three standards focus on mechanisms to promote the sharing of negative data, whether or not they are formally published. Two—preregistration of studies and preregistration of analysis plans—call for researchers to register an entire study, or simply the analysis plan for a future study, with the journal or a public repository before the research is conducted. If the study does not get accepted for publication, the results would still be available online, giving other researchers access to negative data. Preregistration of an analysis plan would also allow readers to cross-reference the authors’ original plan with the results ultimately published. The final standard promotes the publication of negative results through replication studies, including “registered reports,” in which the design of a confirmatory study is peer-reviewed, and its publication guaranteed, before the results are known.
A more detailed description of each guideline, as well as a list of journals and organizations that have so far endorsed them, is available on the Center for Open Science website.
If implemented widely across journals, the guidelines could go a long way toward improving transparency and reproducibility in research, Nosek and several commentators agreed. However, the big question is: Will journals move forward on this? Not everyone is convinced. “My suspicion is that everyone will welcome these guidelines, which are generally sensible, but that nothing will change,” commented John Hardy of University College London. Nosek pointed out that until several top journals put the guidelines into action, some researchers may choose to submit their work to journals with less-stringent requirements. As more journals execute the changes, pressure will mount for others to do the same. Some journals have already taken steps to ensure transparency and replicability of the studies that grace their pages; for example, Nature implemented its own guidelines in 2013 (see May 2013 news). Patricia Mabry from the Office of Behavioral and Social Sciences Research at the National Institutes of Health (NIH), who was involved in drafting the guidelines, predicted that journals would adopt them gradually. “Over time, as the overall culture in science changes (which we believe it already is doing), we expect journals will become comfortable requiring additional measures by authors and reviewers to support reproducibility,” she wrote (see full comment below).
In addition to journals, funding agencies are another powerful potential driver of transparency. Nosek said that several funding agencies have endorsed the TOP guidelines, which can be implemented in much the same way in the grant application process as in journal publication. The National Institutes of Health, together with Science and Nature Publishing Group, produced a similar set of guidelines for publishers last year, and the funding agency now lists the TOP guidelines on its website as well. NIH is also leading its own initiatives to promote reproducibility within the research community (see Jan 2014 news).
Freedman commented that the TOP guidelines are an important step toward boosting reproducibility, but said he favors the bottom-up approach that focuses on researchers. With funding in part from the NIH, Freedman is leading efforts to establish training programs for graduate students and postdocs. These courses cover experimental design and the proper use of reagents, which should help standardize techniques across labs within a discipline, thus boosting the quality of data submitted to journals in the first place. He added that journals may lack sufficient incentives to implement changes that cost them time and money, so instilling change at the research level could be most effective.
John Trojanowski of the University of Pennsylvania agreed that implementing the TOP guidelines, especially at the highest level, would be costly, and that incentives are lacking. However, he added that the research community seems primed for the challenge. “In my view, the Alzheimer’s Disease Neuroimaging Initiative (ADNI) comes the closest among NIH-funded research studies to approaching but not completely meeting level 3 TOP guidelines,” he wrote. “Thus, ADNI shows that the scientific culture, at least in the AD research arena on AD biomarkers, is willing to embrace the concepts outlined in this essay, but the cost is high.”
Finally, academic institutions will also need to change their policies to enhance reproducibility, commented Bruce Lamb of the Cleveland Clinic in Ohio. “Publication in the ‘high-impact’ and ‘high-profile’ journals is so highly valued by most committees on promotion and tenure, and reviewers of grant applications, that this drives researchers to not focus on publishing ‘negative’ findings, even though these are also critical for moving science forward,” he wrote. “Reforms in the journals regarding publication requirements are unlikely to completely solve the problem without considerable and parallel reforms within institutions and granting agencies.” —Jessica Shugart
References
News Citations
- Guidelines at Nature Aim to Stem Tide of Irreproducibility
- National Institutes of Health Tackles Irreproducibility Problem
Paper Citations
- Prinz F, Schlange T, Asadullah K. Believe it or not: how much can we rely on published data on potential drug targets? Nat Rev Drug Discov. 2011 Sep;10(9):712. PubMed.
- Begley CG, Ellis LM. Drug development: Raise standards for preclinical cancer research. Nature. 2012 Mar 28;483(7391):531-3. PubMed.
- Arrowsmith J. Trial watch: Phase II failures: 2008-2010. Nat Rev Drug Discov. 2011 May;10(5):328-9. PubMed.
- Vasilevsky NA, Brush MH, Paddock H, Ponting L, Tripathy SJ, Larocca GM, Haendel MA. On the reproducibility of science: unique identification of research resources in the biomedical literature. PeerJ. 2013;1:e148. Epub 2013 Sep 5. PubMed.
- Freedman LP, Cockburn IM, Simcoe TS. The Economics of Reproducibility in Preclinical Research. PLoS Biol. 2015 Jun;13(6):e1002165. Epub 2015 Jun 9. PubMed.
Further Reading
Papers
- Karassa FB, Ioannidis JP. Clinical trials: A transparent future for clinical trial reporting. Nat Rev Rheumatol. 2015 Jun;11(6):324-6. Epub 2015 May 5. PubMed.
Primary Papers
- Nosek BA, Alter G, Banks GC, Borsboom D, Bowman SD, Breckler SJ, Buck S, Chambers CD, Chin G, Christensen G, Contestabile M, Dafoe A, Eich E, Freese J, Glennerster R, Goroff D, Green DP, Hesse B, Humphreys M, Ishiyama J, Karlan D, Kraut A, Lupia A, Mabry P, Madon TA, Malhotra N, Mayo-Wilson E, McNutt M, Miguel E, Paluck EL, Simonsohn U, Soderberg C, Spellman BA, Turitto J, VandenBos G, Vazire S, Wagenmakers EJ, Wilson R, Yarkoni T. Promoting an open research culture. Science. 2015 Jun 26;348(6242):1422-5. PubMed.
Comments
National Institutes of Health
The TOP Guidelines are expressly designed with journals’ needs in mind. We invite them to sign on at any of three levels of support. For example, we wanted them to be able to express some level of support immediately, with minimal changes to the way they already do business. Thus, the barrier to entry for supporting the guidelines is low. Over time, as the overall culture in science changes (which we believe it already is doing), we expect journals will become comfortable requiring additional measures by authors and reviewers to support reproducibility. These practices could include requiring pre-registration of study hypotheses and analytic plans, and/or making software code for analyses and raw data available in a third-party repository. Among the most stringent practices would be requiring that results be reproduced by an independent party prior to publication.
We are extremely heartened to see widespread support of the TOP Guidelines so quickly, which tells us the field was ready to receive them. In just a few weeks since we began reaching out, 114 journals and 41 organizations have backed the guidelines as signatories. Many more are considering signing on, but must go through formal (and sometimes lengthy) approval procedures before expressing public support. All of us on the committee are strong believers that reproducibility is a critical feature of rigorous science and that these guidelines, and others like them, will be instrumental in strengthening the practice of science. The “Principles and Guidelines for Reporting Preclinical Research,” sponsored by NIH, Science, and Nature, have paved the way for guidelines in other areas. Whereas those guidelines are aimed at preclinical researchers, the TOP Guidelines apply to all domains of science—basic and applied, biological, social and behavioral, and beyond—because the principles underlying them apply equally to all branches of science. With buy-in from such a long list of scientific organizations and publications across so many different areas of science, including the leading journals in a number of fields, we cannot help but think that, indeed, reproducible science is well on its way to becoming de rigueur, which is good for all of us.
National Institute on Aging
We think these recommendations are very good and, if widely adopted, could benefit science in general, and research in neurodegenerative diseases in particular. One area that has been problematic is the use of animal models in AD research, because some publications, and even clinical trials, have moved forward with neither confirmation of initial results nor disclosure of negative results that might have cautioned others not to pursue certain avenues. Sometimes trials have moved forward even without strong evidence for target engagement by the drug or molecule being tested. Underpowered, poorly designed initial animal studies with no blinding have sometimes led investigators to false conclusions, and because their publications don’t always include complete methodology or negative results, others cannot replicate the studies.
NIH has been addressing these issues for years and has tried to educate the community and peer reviewers to be more aware of these problems. NIA is in the process of setting up a new facility, a Preclinical Database, to catalog both positive and negative results from animal studies that examine potential clinical interventions for Alzheimer’s disease. This should be operational in the near future, and we will be encouraging scientists to submit their results for inclusion in this database, which will be publicly available. The International Alzheimer’s Disease Research Funders Consortium has set up a Working Group on Reproducibility as well. NINDS is currently recruiting a Rigor and Reproducibility Officer, who will create programs to address the reproducibility issue and increase oversight of the research the institute funds. Other NIH institutes are also taking steps to address the situation.
Indiana University
There certainly is a great need for promoting reproducibility within the scientific literature. As noted by the authors, reproducibility problems are likely due to the various incentive systems that drive researchers, including those of institutions and universities, granting agencies, and publishers. While the article focuses on standards for journals and publishers (which is certainly one place to start), there is as great, or perhaps an even greater, need for reforms in institutions regarding promotion and tenure, as well as in granting agencies. Publication in the “high-impact” and “high-profile” journals is so highly valued by most committees on promotion and tenure, and by reviewers of grant applications, that this drives researchers to eschew publishing “negative” findings or findings with less impact, even though these are also critical for moving science forward. Reforms in publication requirements are unlikely to completely solve the problem without considerable and parallel reforms within institutions and granting agencies.
University of Pennsylvania
Transparency, openness, and reproducibility, as the authors emphasize in the first paragraph, are of critical importance to advancing all areas of scientific research, so I think everyone would embrace them. However, the implementation of strategies to promote these ideals is the problem, and subsequent paragraphs in this essay are on the mark as to why this is so. Notably, the lack of academic as well as publishing and funding incentives to promote transparency, openness, and reproducibility is a key factor, or barrier, as the essay notes. The table summarizing the TOP guidelines is clear, and achieving level 3 would be desirable, but this is daunting to implement due to the costs involved, even if the scientific research culture were aligned to accept the most stringent standards. That said, in my view, the Alzheimer’s Disease (AD) Neuroimaging Initiative (ADNI) comes the closest among NIH-funded research studies to approaching but not completely meeting level 3 TOP guidelines (see Weiner et al., 2015). However, I do not know of any research programs that fully implement level 3 TOP guidelines, and I expect that the cost of doing so could double the NIH budget. For studies such as ADNI, which makes all data generated accessible to the public, the cost of efforts to promote transparency, openness, and reproducibility, while high, is very worthwhile, but this may not be the case for all research, as the essay notes. Thus, ADNI shows that the scientific culture, at least in the AD research arena on AD biomarkers, is willing to embrace the concepts outlined in this essay, and in many ways this reflects the attitudes of the ADNI Core Leaders, who adopted the principles of openness and transparency as core ADNI values when they signed on in 2004.
Indeed, this essay focuses on journals and publishing, but I think changing how academia measures and incentivizes success in research is a far greater challenge than changing journal practices, since academia rewards the success of individuals, not teams, which is a significant barrier to promoting transparency, openness, and reproducibility.