The statistics in question are five-year survival data. The commentary does a terrific job of explaining why this data — which Komen cites frequently — is meaningless in regard to mammography.
“If there were an Oscar for misleading statistics, using survival statistics to judge the benefit of screening would win a lifetime achievement award hands down,” write the commentary’s authors, Dr. Steven Woloshin and Dr. Lisa Schwartz of the Center for Medicine and the Media at the Dartmouth Institute for Health Policy and Clinical Practice.
Understanding ‘lead time’
Komen featured such statistics, point out Woloshin and Schwartz, in an ad launched last October during “Breast Cancer Awareness Month,” which urged women to get screened “now” because “early detection saves lives.” The ad underscored that message by declaring that the five-year survival rate for breast cancer is 98 percent when the disease is “caught early,” but only 23 percent “[w]hen it’s not.”
“This benefit of mammography looks so big that it is hard to imagine why any woman would forgo screening,” write Woloshin and Schwartz. “She’d have to be crazy.”
But it’s the advertisement that’s crazy, not the women responding to it, they say. Here’s why (warning: British spellings):
[S]creening changes the point during the course of cancer when a diagnosis is made. Without mammography screening, a diagnosis is made when the tumour can be felt. With screening, diagnosis is made years earlier when tumours are too small to feel. Five year survival is all about what happens from the time of diagnosis: it is the proportion of women who are alive five years after diagnosis. Because screening finds cancers earlier, comparing survival between screened and unscreened women is hopelessly biased.
The time between when a cancer can be diagnosed by screening and when it can be felt is called the “lead time.” Although a screening test must create lead time to have the possibility of working, lead time can bias survival statistics. Barnett Kramer, director of the National Cancer Institute’s Division of Cancer Prevention, explained lead time bias by using an analogy to The Rocky and Bullwinkle Show, an old television cartoon popular in the US in the 1960s. In a recurring segment, Snidely Whiplash, a spoof on villains of the silent movie era, ties Nell Fenwick to the railroad tracks to extort money from her family. She will die when the train arrives. Kramer says, “Lead time bias is like giving Nell binoculars. She will see the train — be ‘diagnosed’ — when it is much further away. She’ll live longer from diagnosis, but the train still hits her at exactly the same moment.”
To see how much lead time can distort five year survival data, imagine a group of 100 women who received diagnoses of breast cancer because they felt a breast lump at age 67, all of whom die at age 70. Five year survival for this group is 0%. Now imagine the women were screened, given their diagnosis three years earlier, at age 64, but still die at age 70. Five year survival is now 100%, even though no one lived a second longer.
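The arithmetic in that thought experiment is simple enough to sketch in a few lines of code. This is purely illustrative (the function name and ages mirror the example above, nothing more): five-year survival depends only on the gap between diagnosis and death, so moving the diagnosis earlier flips the statistic from 0% to 100% even though every woman dies at the same age.

```python
def five_year_survival(diagnosis_age, death_age, n=100):
    """Fraction of n women (all with the same ages) alive five years after diagnosis."""
    survivors = n if death_age - diagnosis_age > 5 else 0
    return survivors / n

# Unscreened: lump felt at 67, death at 70 -> dead 3 years after diagnosis.
print(five_year_survival(diagnosis_age=67, death_age=70))  # 0.0

# Screened: diagnosed at 64, still dead at 70 -> alive at the 5-year mark.
print(five_year_survival(diagnosis_age=64, death_age=70))  # 1.0
```

No one lives a second longer in the second scenario; only the starting point of the clock has moved.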
Lead-time distortion is not the only reason five-year survival data is meaningless in the context of screening.
“[S]creening detects some cancers that would never have killed — or even caused symptoms during a person’s lifetime,” explain Woloshin and Schwartz. “That is because some cancers detected by screening grow extremely slowly or not at all. Overdiagnosis distorts survival statistics because the numerator and denominator now include people who have a diagnosis of cancer but who, by definition, survive the cancer. Overdiagnosis inflates survival statistics even when screening fails to save lives. The more overdiagnosis that occurs, the greater the inflation.”
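The inflation mechanism Woloshin and Schwartz describe is easy to see with made-up numbers (the figures below are hypothetical, chosen only to illustrate the arithmetic, not taken from any study): adding overdiagnosed cancers, all of which are “survived” by definition, raises the survival percentage even though the same number of women die.

```python
# Hypothetical: 1,000 women with progressive cancer, 400 alive at 5 years.
progressive = 1000
progressive_survivors = 400
print(progressive_survivors / progressive)  # 0.4 -> 40% five-year survival

# Now suppose screening adds 500 overdiagnosed cancers that would never
# have killed; by definition, all of those women survive five years.
overdiagnosed = 500
survival = (progressive_survivors + overdiagnosed) / (progressive + overdiagnosed)
print(round(survival, 2))  # 0.6 -> 60% survival, yet not one death was averted
```

The more overdiagnosis, the bigger the numerator and denominator both grow, and the higher the survival figure climbs, with no change in who lives or dies.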
Even doctors are fooled by five-year survival statistics. In a troubling finding, a recent national survey by Woloshin and Schwartz showed that most primary-care physicians in the U.S. mistakenly believe improved survival rates are evidence that screening saves lives.
More reliable numbers
“The only reliable way to know that a screening test works is the extent to which it reduces deaths in a randomized trial,” write Woloshin and Schwartz.
And what do those trials tell us? They show that mammography screening reduces the likelihood that a woman in her 50s will die from breast cancer over the next 10 years from 0.53 percent to 0.46 percent, a difference of 0.07 percentage points.
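Expressed as absolute risks, those trial numbers look like this. (The "number needed to screen" line is a standard way of reading an absolute risk reduction, not a figure from the article itself.)

```python
# Ten-year risk of breast-cancer death for a woman in her 50s (from the trials):
risk_unscreened = 0.0053  # 0.53 percent
risk_screened = 0.0046    # 0.46 percent

arr = risk_unscreened - risk_screened  # absolute risk reduction
print(f"{arr:.4f}")  # 0.0007 -> 0.07 percentage points

# A common way to read that gap: roughly 1/arr women would need to be
# screened for a decade for one fewer breast-cancer death.
print(round(1 / arr))  # ~1429
```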
That’s a long, long way from the 75 percentage points cited in the Komen ad. Furthermore, as Woloshin and Schwartz point out, the ad says nothing about the harms of screening: the unnecessary biopsies that occur with false positive results and the unnecessary chemotherapy, radiation or surgery that women go through when they are overdiagnosed.
“Women need much more than marketing slogans about screening: they need — and deserve — the facts,” write Woloshin and Schwartz. “The Komen advertisement campaign failed to provide the facts. Worse, it undermined decision making by misusing statistics to generate false hope about the benefit of mammography screening. That kind of behaviour is not very charitable.”
The commentary is part of BMJ’s “Not So” series, which the editors call an “occasional series highlighting the exaggerations, distortions, and selective reporting that make some news stories, advertising, and medical journal articles ‘not so.’” I wish I could send MinnPost readers to the BMJ website to read it, but for reasons that are inexplicable to me, the journal has decided to keep this paper behind a paywall.