

How 'outcome switching' can fool us into thinking certain drugs are effective

Reporter Julia Belluz on Study 329's claims about Paxil's effectiveness for children: “It became clear that the study’s original conclusions were wildly wrong. Not only is Paxil ineffective, working no better than placebo, but it can actually have serious side effects, including self-injury and suicide.”

In an article published last week on the website Vox, health reporter Julia Belluz highlights the devious and insidious practice in medical research known as “outcome switching.”

As Belluz explains, outcome switching has long been used to dupe both doctors and the public about the effectiveness of various prescription drugs. One of the most infamous cases involved a clinical trial known as Study 329. That study, funded by the drug company GlaxoSmithKline (GSK) and published in 2001, claimed to show that the antidepressant paroxetine, or Paxil, was “well tolerated and effective” for kids.

But years later — after doctors had written 2 million Paxil prescriptions for children and adolescents — “it became clear that the study’s original conclusions were wildly wrong,” writes Belluz. “Not only is Paxil ineffective, working no better than placebo, but it can actually have serious side effects, including self-injury and suicide.”

Belluz explains how the study’s authors managed to fool everybody:

Before researchers start clinical trials, they’re supposed to pre-specify which health outcomes they’re most interested in. For an antidepressant, these might include people’s self-reports on their mood, how the drug affects sleep, sexual desire, and even suicidal thoughts. 

The idea is that researchers won’t just publish positive or more favorable outcomes that turn up during the study, while ignoring or hiding important results that don’t quite turn out as they were hoping.

But that doesn’t always happen. “In Study 329,” explains Ben Goldacre, a crusading British physician and author, “none of the pre-specified analyses yielded a positive result for GSK’s drug, but a few of the additional outcomes that were measured did, and those were reported in the academic paper on the trial, while the pre-specified outcomes were dropped.”

And, yes, it’s certainly OK to report on unexpected outcomes in studies. That’s one of the ways science moves forward. But it’s not OK to simply drop any mention of the drug’s effects on the pre-specified outcomes just because the results weren’t positive.

“Switching your outcomes breaks the assumptions in your statistical tests,” Goldacre explains to Belluz. “It allows the ‘noise’ or ‘random error’ in your data to exaggerate your results (or even yield an outright false positive, showing a treatment to be superior when in reality it’s not).” 

“We do trials specifically to detect very modest differences between one treatment and another,” he adds. “You don’t need to do a randomized trial on whether a parachute will save your life when you jump out of an airplane, because the difference in survival is so dramatic. But you do need a trial to spot the tiny difference between one medical intervention and another. When we get the wrong answer, in medicine, that’s not a matter of academic sophistry. It causes avoidable suffering, bereavement, and death. So it’s worth being as close to perfect as we can possibly be.”
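Goldacre’s point about “noise” can be illustrated with a small simulation (a hypothetical sketch, not from the article or the COMPare data; the number of outcomes is an assumption chosen for illustration). When a drug truly works no better than placebo, each outcome’s p-value is uniformly distributed, so an honest pre-specified outcome crosses the usual 0.05 threshold about 5 percent of the time. But if a trialist quietly measures eight outcomes and reports whichever looks best, the chance of a spurious “positive” result jumps to roughly one in three:

```python
import random

random.seed(42)

ALPHA = 0.05        # conventional significance threshold
N_TRIALS = 10_000   # simulated trials of a drug that does nothing
N_OUTCOMES = 8      # hypothetical number of outcomes measured per trial

# Under the null hypothesis (drug == placebo), each outcome's
# p-value is uniformly distributed on [0, 1].
pre_specified_hits = 0
cherry_picked_hits = 0
for _ in range(N_TRIALS):
    p_values = [random.random() for _ in range(N_OUTCOMES)]
    if p_values[0] < ALPHA:       # honest: report the one pre-specified outcome
        pre_specified_hits += 1
    if min(p_values) < ALPHA:     # switched: report the best-looking outcome
        cherry_picked_hits += 1

print(f"Pre-specified outcome: {pre_specified_hits / N_TRIALS:.1%} false positives")
print(f"Best of {N_OUTCOMES} outcomes: {cherry_picked_hits / N_TRIALS:.1%} false positives")
```

The honest rate stays near 5 percent, while the cherry-picked rate lands near 1 − 0.95⁸ ≈ 34 percent: the treatment looks “superior when in reality it’s not,” exactly as Goldacre warns.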

Holding researchers accountable

Goldacre, with the help of a team of medical students at the University of Oxford, launched a new initiative in October called the Compare Project. They are “systematically checking every trial published in the top five medical journals, to see if they have misreported their findings.” (Those five journals are the New England Journal of Medicine, the Journal of the American Medical Association, The Lancet, the Annals of Internal Medicine and BMJ.)

As of yesterday, Jan. 4, the project had checked 66 clinical trials. Nine of the trials were “perfect” — in other words, the studies reported all of their pre-specified outcomes and didn’t add any new ones. But in the other trials, 355 pre-specified outcomes were not reported and 336 new outcomes were silently added.

“On average, each trial reported just 58.2% of its specified outcomes,” Goldacre and his team write on the Compare Project website. “And on average, each trial silently added 5.1 new outcomes.”

When Goldacre and his team uncover evidence of outcome switching, they write a letter about it to the academic journal — and then track if the letter gets published. Of the 55 letters sent thus far, five have been published, eight have been rejected for publication, and 24 remain unpublished after four weeks.

As Goldacre makes clear to Belluz, he doesn’t think “every trialist and journal that we’ve caught switching its outcomes is doing so deliberately, to rig the results. Often it’s clumsiness, or a failure to take the issue sufficiently seriously.” 

“But this sloppy reporting gives cover to people who are deliberately cherry-picking their results and rigging their findings,” he adds. “That’s why it’s so important to hold the line and strive to report trials correctly, or at least explain why you’ve switched from your original plans.”

You can read Belluz’s article on Vox. For more information about the Compare Project, go to its website.


Comments (4)

Bad Pharma

Goldacre's "Bad Science" is – in a non-scientist's humble opinion – a classic critique, for scientists and non-scientists alike.

Goldacre's second book, "Bad Pharma," would seem to provide at least part of the intellectual basis for the reporting of Julia Belluz, though I have no idea if the reporter has read Goldacre's book, or if the two of them have ever communicated. The Wikipedia summary of Goldacre's book is pretty similar to what's quoted in Susan's piece above:

“Drugs are tested by the people who manufacture them, in poorly designed trials, on hopelessly small numbers of weird, unrepresentative patients, and analysed using techniques which are flawed by design, in such a way that they exaggerate the benefits of treatments. Unsurprisingly, these trials tend to produce results that favour the manufacturer. When trials throw up results that companies don't like, they are perfectly entitled to hide them from doctors and patients, so we only ever see a distorted picture of any drug's true effects.”

I don't have the remaining lifespan or resources to fact-check every assertion Goldacre makes, but "outcome switching" has much in common with many other examples Susan has provided in these spaces over the past few years. When there's a clash between profit and medical evidence, at least in this society, profit very often wins the day, and apparently without serious question or argument among members of drug company boards of directors.

Goldacre

Speaking as a humble scientist with some background in behavioral pharmacology, I've found Goldacre's work convincing.

The problem is (to address another comment) that the government directly funds mostly basic research: things like the mechanisms of drug action. Most applied research — whether a given drug is safe and effective — is funded by drug companies (Big Pharma) to meet FDA licensing requirements.

It's not surprising that the 'back drawer effect' (publishing only positive outcomes) has been documented by others besides Ben Goldacre. Just recently the FDA has started requiring that all studies be made available to the public, not just the positive ones that are submitted for publication.

Cutting science budgets not so much a great idea after all

Well, conservatives have been slashing government funding for research for over a decade now. This has warped the funding stream, narrowed focus, and damaged credibility and integrity.

Thank you

to Susan for her continued quality reporting on issues of importance, bringing us detailed information we might otherwise not encounter.