
When you’re told a drug is safe and effective, you might want to ask, ‘Says who?’

Can we believe the published results of studies sponsored by drug companies? Or is the financial conflict of interest behind those studies so great that it produces distorted evidence — distortions that lead to medical decisions that harm patients?

In the Dec. 5 issue of the British Medical Journal, physician-journalist Ben Goldacre and former Merck bigwig Vincent Lawton go head to head on this topic.

As far as I’m concerned, Goldacre delivers the knockout arguments. He does it by presenting some devastating evidence, such as the results of a large meta-analysis of studies conducted on non-steroidal anti-inflammatory drugs (NSAIDs):

[The analysis] found all the studies that had ever been published where one NSAID was compared to another. In every single trial, the sponsoring company’s drug was either equivalent to, or better than, the drug it was compared to: All the drugs were better than all the other drugs. Such a result is plainly impossible.

Another review cited by Goldacre looked at 30 industry-funded studies and found that “studies sponsored by drug companies were more than four times as likely to have outcomes favouring the funder compared with studies with other sponsors.”

How does this systematic bias occur? “One answer is questionable trial design,” says Goldacre. “Studies are conducted, for example, where the competitor drug is given at an inadequate dose, or worse, at a higher dose, increasing the risk of side effects, and so making the sponsor’s drug appear to be preferable.”

Goldacre also describes how companies pick and choose which data to publish — and which to make sure never sees the light of day. The companies make sure that disappointing results remain unpublished while positive results are published repeatedly, but in ways that are difficult to spot.

The success of these efforts is “staggering,” says Goldacre.

Ramsey and Scoggins went to clinicaltrials.gov and found all the trials on cancer: 2028 in total. Only 17.6% of these trials could be found published on PubMed, but 64% of those that were published reported positive results. Restricting their analysis to only industry sponsored trials, these results became even more extreme: just 5.9% were on PubMed, but of those trials, 75.0% gave positive results.

These kinds of shenanigans can lead doctors (and information-seeking patients) to believe a drug is better than it is, says Goldacre.

In medicine, bad information leads to bad decisions: We prescribe one drug where an alternative would have been more effective, or had fewer side effects; or we prescribe an expensive drug, unnecessarily, when a cheaper alternative was equally effective, and so we deprive the community of limited healthcare resources. This is dangerous and absurd. Doctors who are making treatment decisions need access to good quality trial data presented transparently and all of it, not just the positive findings that drug companies choose to share.

Lawton’s response? He argues that there are already plenty of regulations in place to ensure good studies, that studies sponsored by noncommercial interests can also be substandard, and that the pharmaceutical industry has developed “various transparency measures” with the input of regulators and the academic community.

(Hmmm. … No mention of the scandals involving academic researchers’ conflicts of interest with drug and medical device makers. Nor does he mention questionable ties that have arisen in the past between government regulators and the pharmaceutical industry.)

Lawton also points out how expensive it is for drug companies to bring a new product to market (an average of $1.2 billion per drug, he says) and argues that if companies were compelled to surrender their intellectual property (that is, the details of their studies), drug innovation would essentially come to a halt.

“This seems to be a sure way to drive away the incentive to innovate,” he says. “At present about 75% of the funding for clinical trials in the United States comes from industry and total industry spending on research is greater than that of the National Institutes of Health.”

Maybe I’m missing something, but what’s the point of that argument if the results of those trials can’t be trusted?

You can read the arguments of both Goldacre and Lawton in full at the BMJ website. The two men also debated the topic at the PharmaTimes Great Oxford Debate last August. I’ve made a cursory search, but I haven’t been able to find a video or podcast of the event. If any MinnPost reader can locate one, please send me an email and I’ll put up the link.


Comments (3)

  1. Submitted by Paul Scott on 12/09/2009 - 04:28 pm.

    Yes, but if you subtract from the $1.2 billion spent bringing a new drug to market the moneys needed for boxes of glazed rings and bear claws for secretaries at medical offices, you end up at about $36.50.

  2. Submitted by Mike Wyatt on 12/10/2009 - 01:52 pm.

    It has been frustrating in the last few years listening to government officials oppose things based solely on the FDA not signing off or studying it. Namely, cannabis for medical purposes. Having used many “FDA-Approved” drugs for my condition, and experiencing the scary side effects all while making my condition WORSE, what do you suppose I think of the FDA rubber stamp approval process?
    The cost of health care would likely drop significantly if so many drugs didn’t carry such a myriad of side effects. It’s like there are three pills to counteract the side effects of the first pill. The profit potentials of these dangerous drugs are sickening. But more sickening is the closed-loop funding that ensures their passage through the FDA rubber stamp process. Our drug and food supplies are not scrutinized closely enough, or with the safety of consumers first and foremost in their priorities. More skimping at the Federal level on things that we cannot afford to go “cheap” on.

  3. Submitted by Bernice Vetsch on 12/10/2009 - 04:32 pm.

    During George Bush’s tenure, the FDA was managed by an appointee who was given instructions to move new drugs more quickly to market, sometimes no doubt before any long-term study was conducted. That should change now.

    The U.S. pays most of the cost of research, although the drug companies cite that cost as one of the main reasons why they raise their prices every year.

    Dennis Kucinich suggested a few years ago that the U.S. pay 100 percent of all drug research costs and conduct all studies. Each new drug, after its effectiveness was proven, would enter the market as a generic drug that any and all companies could manufacture and sell in competition with one another. (Companies could, if they wished to patent a new drug and sell it for more, pay 100 percent of the R & D costs on it.)
