
Newly established medical practices often prove to be ineffective, study finds

A study that analyzed articles published in just one prominent medical journal over a period of 10 years has found that newly established medical practices — even ones in wide use — are often reversed by subsequent evidence-based research.

The finding, which was published online this month in the journal Mayo Clinic Proceedings, belies the common assumption that the very latest screening technology, medication or surgical technique is an improvement on care.

Indeed, in a video that accompanied the publication of the analysis, Dr. Vinay Prasad, the study’s lead author and a medical oncologist at the National Institutes of Health, said that “of all those things we’re doing currently [in medicine] that lack good evidence, probably about half of them are incorrect.”

10 years of research

For the study, Prasad and his colleagues evaluated 1,344 original articles published in the New England Journal of Medicine between 2001 and 2010. Each had assessed a new or established medical practice, such as a screening, diagnostic test, medication, surgery or other procedure. 

They found that only a minority of the studies — 363, or 27 percent — had tested an established medical practice, or something doctors were already doing. Most — 981, or 73 percent — focused on a new practice.

As Prasad notes in a press statement, “While the next breakthrough is surely worth pursuing, knowing whether what we are currently doing is right or wrong is equally crucial for sound patient care.”

In addition, Prasad and his colleagues found that articles that tested new practices were more likely to find them beneficial than articles that tested existing ones. That finding supports a growing body of research that has uncovered a positive bias in studies involving new drugs, devices, or procedures.

Practices that ‘never worked’

Of the 363 articles that investigated established medical practices, 146 (40.2 percent) found the practices to be ineffective, while 138 (38 percent) reaffirmed the practices. The remaining 79 articles (21.8 percent) were inconclusive — in other words, they couldn’t determine if the practices were effective or not.

The 146 medical reversals “weren’t just practices that once worked and have now been improved upon,” Prasad states. “Rather, they never worked. They were instituted in error, never helped patients, and have eroded trust in medicine.”

Examples of these reversals include the following:

  • Using stents for the treatment of stable coronary artery disease
  • Prescribing hormone therapy to postmenopausal women to protect against heart disease
  • Routinely installing a pulmonary artery catheter for patients in shock
  • Using the drug aprotinin during heart surgery
  • Prescribing COX-2 inhibitors (Celebrex, Vioxx) for inflammation and pain
  • Urging people with diabetes to adhere to very strict blood sugar targets
  • Treating osteoarthritis of the knee with arthroscopic surgery
  • Routinely screening older men for prostate cancer with the prostate specific antigen test (PSA)
  • Advising patients with dust-mite allergies to buy impermeable mattress covers

The mattress-cover recommendation, Prasad notes in the video, is a good example of a medical practice that seemed to make sense — and that launched a $26 million-a-year industry — until it was thoroughly debunked by a set of NEJM papers.

Medicine’s inertia

Medical reversals tend to play out in similar ways.

“Although there is a weak evidence base for some practices, it gains acceptance largely through vocal support from prominent advocates and faith that the mechanism of action is sound,” write Prasad and his colleagues in their report. “Later, future trials undermine the therapy, but removing the contradicted practice often proves challenging.”

The societal costs of medical practices instituted in error “on the basis of premature, inadequate, biased, and conflicted evidence” are “immense,” the researchers add. Not only do such practices harm people while the practices are in favor, but they continue to cause harm long after being proven ineffective.

“Medical practices have an inertia to them,” says Prasad in the video. Research has shown, he explains, that it takes about a decade for physicians — and patients — to accept evidence that an established practice doesn’t work.

Medical reversals also undermine patient trust. “These are practices that should never have been instituted,” says Prasad.

Seek good evidence

What should patients do to protect themselves against ineffective medical practices?

“Patients who are embarking upon procedures — screening tests, diagnostic tests — should really try to ascertain whether those tests are based on good evidence,” says Prasad. “By good evidence I mean randomized controlled trials that are powered for hard endpoints, such as mortality or morbidity, and not surrogate endpoints. Many of the reversals that we examined were actually propagated on faulty surrogate data that was ultimately overturned by studies examining hard endpoints.”

In other words, is a treatment’s proclaimed effectiveness based on its proven ability to, say, reduce the incidence of heart attacks and stroke (a hard endpoint) or only on its ability to lower cholesterol (a surrogate endpoint)? Is a treatment being touted because it has been shown to lower the incidence of bone fractures (a hard endpoint) or simply because it improves bone density (a surrogate endpoint)?

(For a better understanding about why surrogate markers are appealing to researchers, particularly companies trying to launch a new medical product or procedure, and why patients should be wary of medical practices that rely on such markers to claim effectiveness, read this primer by Minnesota-based health-media expert Gary Schwitzer on his HealthNewsReview website.)

“The take-away message of our paper is that a large proportion of the medical practices which are based on little to no evidence are probably incorrect,” concludes Prasad. “Their continued use jeopardizes patient health and wastes limited health care resources. We should work toward identifying these medical practices of which the data is not robust and subject them to systematic appraisal, ideally with large, well-done randomized controlled trials powered for hard endpoints and conducted by non-conflicted bodies.”

You can read the study on the Mayo Clinic Proceedings website, where you’ll also find the video of Prasad’s discussion of the results.


Comments (2)

Good points!

There just might be a link between the sources of funding for most medical research and the bias toward publishing positive rather than negative outcomes.

Bias takes many forms

Undoubtedly financial incentives play a huge role in reporting positive outcomes, or in stopping clinical trials early when benefit *might* be apparent. There is good data on the correlation between industry funding and positive outcomes. It's a complex issue, though, as industry-funded research is highly prevalent, and in many cases, so long as the investigators control both the data and the writing, it can still lead to valid conclusions.
However, the bias doesn't stop there. Researchers are in many cases biased by their own hypotheses; it's very hard not to be.
There is no question that it is a challenge to strike the appropriate balance between hope and skepticism. For one, medical schools need to teach better critical appraisal of research validity and application, known as Evidence-Based Medicine. Many of them are in fact doing so.
It's great reporting like this that can lead to more informed and appropriately cautious consumers of health care.