4 October 2008. No one would want to make a major investment without carefully weighing the pros and cons. But when it comes to your most prized investment—your health—the cons may not always be readily apparent. According to a recent study, fewer than half the clinical trials carried out to support new drug applications are published in the medical literature within five years of Food and Drug Administration (FDA) approval. The study also found that the trials with statistically weaker results are least likely to go through the peer-review process, adding to the perception that drug companies ignore, or even try to suppress, less-than-stellar data. The findings raise the question of whether doctors and patients have access to all the facts when deciding if and how to medicate. “Journal articles are the most influential means of getting clinical trial evidence out to the public. If the evidence that is coming through this literature is selective—that positive trials are more likely to come through that channel than negative trials—then it just makes it look like these drugs are better than the totality of the companies’ studies would show,” said Ida Sim, principal investigator on the study. “That clearly has an influence on drugs that get prescribed and the way that dollars are spent in healthcare,” she told ARF.
Writing in the 23 September PLoS Medicine, Sim, who is at the University of California, San Francisco, and colleagues report surveying 90 drugs that had been approved by the FDA between January 1998 and December 2000. From FDA review documents, first author Kirby Lee and colleagues identified 909 clinical trials that had been conducted during the development of those drugs. Only 394 of the trials (43 percent) had appeared in the medical literature by August 2006. “That wasn’t too surprising, but it was still a sobering finding,” said Sim. By reviewing the FDA Summary Basis of Approval documents, which outline some of the clinical data and statistical analyses performed during the approval process, Lee and colleagues were able to correlate the likelihood of publication with statistical outcome. They found that trials with statistically significant results were nearly twice as likely to be published as those without. Pivotal trials, which measure efficacy against predefined outcomes, were also more likely to be published, with 76 percent making it through the peer-review process.
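For readers who want to check the headline figure, the publication rate follows directly from the two counts reported above. A minimal sketch (the counts 909 and 394 come from the article; everything else is just arithmetic):

```python
# Counts reported in Lee et al.: 909 trials identified from FDA review
# documents for 90 approved drugs, 394 of them published by August 2006.
total_trials = 909
published_trials = 394

# The publication rate is simply the published fraction of all trials.
publication_rate = published_trials / total_trials
print(f"{published_trials}/{total_trials} trials published = {publication_rate:.0%}")
# prints "394/909 trials published = 43%"
```

The rounded result matches the 43 percent the study reports; the “nearly twice as likely” comparison between significant and non-significant trials is an adjusted estimate from the paper and cannot be recomputed from these two numbers alone.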
“The findings confirm and are even worse than what we found,” Erick Turner of Oregon Health and Science University, Portland, told ARF. Earlier this year Turner and colleagues reported a similar review of 12 antidepressants in the New England Journal of Medicine (Turner et al., 2008). For those particular medicines, Turner and colleagues found that 68 percent of trials were published. “It is revealing to see that they have taken a broad swath of indications to see what happens across the board,” said Turner.
It is not clear why clinical trials, and negative trials in particular, are not being published. It does not simply seem to be a matter of time, since Lee and colleagues found that drug trials, if they are published, appear almost exclusively within three years of FDA approval. Sim suggested that much of the publication bias, for clinical trials and beyond, can be attributed to a type of self-censorship. “It’s sort of human nature that if you do a study and it does not turn out to be an interesting positive study that there is just less interest in pursuing it,” said Sim. But she added that when there is money at stake, the reasons get more complex. “In the commercial sector there are additional reasons for being more enthusiastic about submitting positive trials, because once those get into the medical literature they help with marketing, of course, whereas the negative studies do not,” she said.
Turner suggested that the publication bias problem may be deeper than simply not reporting data. “Sometimes there is a dramatically different spin on the data,” he said. In his study of antidepressants, the FDA had concluded that 36 of the 74 trials were negative or questionable. Yet 11 of those 36 trials were subsequently published in the medical literature as having positive outcomes.
Another reason drug companies may not want to publish negative data is that it may aid the competition. “Absolutely, if you’ve been spending many dollars on a particular avenue of research and it ends up being negative, you don’t particularly want to broadcast that because then you might tempt other companies to go down that road, even if it’s a dead-end,” said Sim. But she added that there are bigger interests at stake. “One cannot forget that clinical trials are experiments on humans, and that there are risks with trials given that drugs can cause harm, so if a company explores an avenue of research that another company has already deemed to be unfruitful, that second company is putting additional patients at unnecessary risk,” she said. This is one of the main reasons why transparency is not just a scientific but also an ethical issue, Sim added.
Some steps are already being taken to increase the transparency and accountability of clinical trials. They are driven in part by the rofecoxib (Vioxx) debacle and similar cases (see ARF related news story). Safety data that emerged long after FDA approval of rofecoxib forced Merck to pull the drug from the market and precipitated a flurry of lawsuits, many still in progress. Had all the data been available at the outset, much of that might have been avoided. “I think not only would we have known [the side effects] earlier but the question of whether the cardiovascular risks were significant or not would have been approached as a scientific question and we would have had a much more dispassionate discussion than we are having now,” said Sim. “It is a shame that the controversy really overshadowed the benefits of the drug,” she added.
One of the moves precipitated by the lack of transparency in the pharmaceutical world is the FDA Amendments Act (FDAAA), passed last year. It stipulates that clinical trial results, whether positive or negative, have to be submitted into a publicly accessible database. Ironically, that might lead to even fewer trials appearing in peer-reviewed journals, suggested Sim. This goes back to the fact that the main source for publication bias is the investigators themselves. “If you were thinking your study was negative and not all that exciting and maybe doesn’t help marketing plans to have the information out there, and then you think ‘well actually I’ve done my civic duty, the negative results are there already in the public domain on the clinicaltrials.gov register,’ then I could imagine that may be a disincentive to write up that study and get it published,” said Sim.
Turner agrees that the FDAAA could well result in fewer trials being published in medical journals. The bottom line for patients and the medical community is “caveat emptor all the more,” he said. “We’ve been trained to be suspicious about marketing messages when we buy soap or a car, but in our medical training we think the peer-reviewed article is the Holy Grail. I’m saying not so fast, maybe it’s not a great resource after all.” The other issue, as he points out today in a letter in Science, is that the FDAAA only applies to future drugs. All drugs that are already approved are grandfathered in, and there is no indication that those drugs are becoming obsolete. Prescriptions for some approved diabetes, cholesterol-lowering, and antidepressant drugs are increasing at an annual rate of 8, 10, and 22 percent, respectively. “While this act provides for a registry and results database that is prospective, we need one that is also retrospective,” Turner and colleagues write.—Tom Fagan.
Lee K, Bacchetti P, Sim I. Publication of clinical trials supporting successful new drug applications: a literature analysis. PLoS Med. 2008 Sep 23;5(9):e191.