A re-analysis of the 2001 study that led physicians to prescribe the antidepressant paroxetine (Paxil) to millions of adolescents has uncovered disturbing evidence that the study’s authors misrepresented the clinical trial’s data.
After a yearlong review of 77,000 pages of the study’s original documents, a team of independent investigators has concluded that Paxil is no more effective than placebo for the treatment of major depression in teens.
They also found that Study 329, as the 2001 clinical trial is known in the medical literature, significantly downplayed harms associated with the drug, including an increased risk of suicidal behavior.
The original study — funded by the pharmaceutical company SmithKline Beecham, which became part of GlaxoSmithKline in 2000 — had asserted that paroxetine was “generally well tolerated and effective” in adolescents.
The re-analysis, which was published Wednesday in the journal BMJ, has renewed calls for the retraction of Study 329 from the Journal of the American Academy of Child and Adolescent Psychiatry (JAACAP), where it was published.
“Study 329 is not just about antidepressants in children,” says Dr. Jon Jureidini, an Australian child psychiatrist and one of the researchers involved in the re-analysis, in a released video. “It’s the paradigm of industry-financed clinical trials reporting tainted results. It’s a rallying point for how difficult it’s been to get the truth about questionable drugs.”
(As they disclose in the re-analysis, Jureidini and one of his co-authors, U.K. psychiatrist Dr. David Healy, have received financial compensation for providing expert testimony in lawsuits against companies that manufacture and market paroxetine and similar antidepressants.)
Questions from the start
Concerns about the design and integrity of Study 329 were expressed by scientists and medical experts from the start — concerns that were strong enough to keep the U.S. Food and Drug Administration (FDA) from approving the use of Paxil for the treatment of depression in adolescents.
In fact, a formal FDA review of Study 329 in 2002 reported that “on balance, this trial should be considered as a failed trial” because it did not show that Paxil was more effective than placebo.
But that didn’t stop GlaxoSmithKline from using Study 329’s findings to launch a major marketing campaign that encouraged doctors to prescribe Paxil to adolescents “off label.”
The campaign worked, and with remarkable ease. In 2002 alone, more than 2 million prescriptions for Paxil were written for children and teenagers in the United States — all off label, according to the New York Attorney General’s Office, which sued GlaxoSmithKline in 2004.
Sales only slowed after June 2003, when concerns about increased suicidal thoughts and behavior in young people taking Paxil caused the FDA to require a “black box” warning on the drug’s label. (That warning was later extended to other selective serotonin reuptake inhibitor antidepressants, or SSRIs.)
Data not easy to access
In 2012, GlaxoSmithKline reached a settlement with the U.S. Department of Justice in which it pleaded guilty to criminal charges and paid a $3 billion fine for various fraudulent practices, including the promotion of paroxetine to adolescents.
As part of a settlement in an earlier class-action lawsuit, the company had also agreed to release all its Study 329 data and documents to the public.
But, until now, no one had systematically returned to that data to examine how the original researchers had come to their flawed determination that paroxetine was “well tolerated and effective.”
That changed with the launch in 2013 of the global RIAT (Restoring Invisible and Abandoned Trials) initiative. The re-analysis of Study 329 is an outcome of that project.
‘Very different conclusions’
“In our re-analysis, we came to very different conclusions than those published in the original paper, both in terms of the efficacy of the drug and the harms that it caused,” says Jureidini in the video.
He and his colleagues found, for example, that the original authors had not actually followed the study’s protocol, and that when the protocol was followed it showed that Paxil was neither clinically nor statistically superior to placebo.
Even more alarming was the finding that the study’s authors had classified certain information about the adolescents who were taking Paxil in ways that undercounted their suicidal thoughts or behaviors.
The data “was often misleadingly coded,” says Jureidini. “For example, a significant suicide attempt was called ‘emotional lability.’”
GlaxoSmithKline did not make it easy for Jureidini and his colleagues to identify those mislabeled events.
“We had to carry out a very onerous examination of individual patient data, and it was difficult both to obtain that data and to analyze it,” says Jureidini. “We had to negotiate for a considerable time with [GlaxoSmithKline] to get access to case report forms. And once we did have access to it, it was through a remote desktop interface, which was extremely problematic to use, extremely time-consuming, and would, I think, have defeated most research teams.”
Those efforts revealed some stunning differences in adverse events between paroxetine and placebo — differences that weren’t reported in the original study.
“Severe adverse events were 2.6 times more frequent in the paroxetine group [than in the placebo group],” says Jureidini. “Psychiatric adverse events were 4 times more frequent.”
Perhaps most striking, he says, was the finding that 11 of the 93 children taking Paxil in the study — not five, as the original paper reported — had developed suicidal or self-harming thoughts. That compared to one in the placebo group.
Resistance to correcting the record
In an article that accompanies the re-analysis, Peter Doshi, an associate editor at the BMJ and a professor of pharmaceutical health services at the University of Maryland, describes in damning detail how academic and professional institutions have resisted attempts to address and correct the “many allegations of wrongdoing” in the paroxetine story.
For example, investigative reports conducted by newspapers, congressional committees and lawyers revealed long ago that the original study was essentially ghostwritten by a GlaxoSmithKline consultant, despite the long list of academic researchers named as its authors.
In addition, the study’s lead author — Brown University’s then chief of psychiatry, Martin Keller — had a history of financial relationships with the pharmaceutical industry, which were not fully reported when the study was published.
Furthermore, the AACAP, whose journal published the study (and which continues to refuse to retract it), has received between $500,000 and $1 million each year since 2003, or 5 to 20 percent of its annual revenue, from the pharmaceutical industry, Doshi points out.
“None of the paper’s 22 mostly academic university authors, nor the journal’s editors, nor the academic and professional institutions they belong to, have intervened to correct the record,” writes Doshi. “The paper remains without so much as an erratum, and none of its authors — many of whom are educators and prominent members of their respective professional societies — have been disciplined.”
“It is often said that science self-corrects,” he adds. “But for those who have been calling for a retraction of the Keller paper for many years, the system has failed.”