The randomized controlled trial (RCT) is generally considered the gold standard of medical research.

But even its findings can be misrepresented, or “spun” — and right in the published study’s own abstract and conclusions.

Sadly, such spinning goes on with discouraging regularity, according to a study (which, presumably, took special care not to spin its own findings) in the May 26 issue of the Journal of the American Medical Association (JAMA).

For this new study, a team of British and French researchers closely analyzed 72 RCTs, all published in 2006. Every trial had a statistically nonsignificant primary outcome; in other words, the trial’s own data failed to show that the experimental treatment was beneficial.
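To see what “statistically nonsignificant” means in practice, here’s a minimal sketch in Python (hypothetical numbers of my own invention, not data from the JAMA paper; it assumes the NumPy and SciPy libraries). When there is no true treatment effect, the p-value will usually land above the conventional 0.05 cutoff:

```python
# Toy illustration of a statistically nonsignificant trial result.
# Hypothetical data -- NOT from the JAMA study or any real trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(42)

# Simulate outcome scores for two arms with no true treatment effect:
# both groups are drawn from the same distribution.
treatment = rng.normal(loc=0.0, scale=1.0, size=100)
control = rng.normal(loc=0.0, scale=1.0, size=100)

stat, p_value = stats.ttest_ind(treatment, control)
print(f"p = {p_value:.2f}")
# With no real effect, p exceeds 0.05 about 95% of the time; by the
# usual convention, that means the trial showed no significant benefit.
```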

But that didn’t stop many of the studies’ authors from dressing up the findings to make the treatments appear beneficial. More than half of the 72 trials reviewed spun their statistically nonsignificant conclusions in a positive way.

“[T]he reporting and interpretation of findings was frequently inconsistent with the results,” the JAMA researchers concluded.

In talking with Reuters Health, one of the study’s authors, Dr. Isabelle Boutron of the Université Paris Descartes in France, expressed a bit more outrage.

“Some of it was quite shocking,” she told the Reuters reporter. One study, Boutron noted, had concluded that a cancer-detection system worked when its actual results showed no such thing.

How was the spinning done? In several ways, the JAMA study found. Sometimes a study’s authors would parse their data until they turned up a statistically significant result (within a small subgroup, for example), which they would then emphasize. Sometimes they would describe only how patients fared before and after treatment, without comparing them to patients who got a placebo. And sometimes, as with the cancer-detection system, they simply claimed a beneficial outcome when there wasn’t one.
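That first trick, fishing through subgroups, is easy to demonstrate. Below is a minimal simulation (again my own hypothetical sketch using NumPy and SciPy, not anything from the JAMA paper) showing that when no real treatment effect exists, testing enough subgroups will still turn up “significant” results by chance:

```python
# Simulating "subgroup fishing": with no true treatment effect,
# enough subgroup tests will still yield chance "significant" p-values.
# Hypothetical data -- NOT from the JAMA study or any real trial.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 400  # hypothetical patients per arm

# No real effect: both arms are drawn from the same distribution.
treatment = rng.normal(loc=0.0, scale=1.0, size=n)
control = rng.normal(loc=0.0, scale=1.0, size=n)

# The honest, pre-specified comparison on the full sample.
_, p_overall = stats.ttest_ind(treatment, control)
print(f"overall p-value: {p_overall:.3f}")

# Now "parse the data": test 20 arbitrary subgroups (stand-ins for
# slices by age band, sex, study site, and so on).
for i in range(20):
    idx = rng.choice(n, size=60, replace=False)  # a small subgroup
    _, p = stats.ttest_ind(treatment[idx], control[idx])
    if p < 0.05:
        print(f"subgroup {i}: p = {p:.3f}  <-- 'significant' by chance alone")

# At the usual 0.05 threshold, roughly one in twenty such tests will
# look "significant" even though no treatment effect exists.
```

Emphasizing whichever subgroup happens to cross the threshold, while downplaying the nonsignificant overall result, is exactly the kind of spin the JAMA team flagged.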

Deliberately misleading?
Does such spin reflect a deliberate attempt to mislead, simple ignorance about how to interpret statistics, or both? Boutron and her colleagues said they couldn’t tell. “Nor are we able to draw conclusions about the possible effect of the spin on peer reviewers’ and readers’ interpretations,” they wrote.

However, the fact that the spin occurred in the studies’ abstracts has important implications. Other research has found that “readers often base their initial assessment of a trial on the information reported in an abstract,” the JAMA authors wrote. “They may then use this information to decide whether to read the full report, if available. Furthermore, abstracts are freely available, and in some situations, clinical decisions might be made on the basis of the abstract alone.”

For more about study bias and spin, check out my post earlier this year on a study of drug company trials by Metropolitan State University psychology professor Glen Spielmans.
