


How the rise of anti-vaccine and other ‘misinfodemics’ makes us more susceptible to disease

Digital health misinformation is having increasingly catastrophic impacts on physical health.

A Facebook ad seen at Earls Court underground station in London, Britain. REUTERS/Henry Nicholls

The importance, to our health, of becoming digitally savvy is the focus of a provocative article published online Thursday in The Atlantic.

The article describes the health-related dangers posed by a phenomenon the article’s authors call misinfodemics — “the spread of a particular health outcome or disease facilitated by viral misinformation.”

As the article’s authors — An Xiao Mina, a technologist who works for the digital media nonprofit Meedan, and Nat Gyenes, a health and technology researcher affiliated with Harvard University’s Berkman Klein Center for Internet & Society — explain, “digital health misinformation is having increasingly catastrophic impacts on physical health.” For example:

Recent research found that Twitter bots were sharing content that contributed to positive sentiments about e-cigarettes. In West Africa, online health misinformation added to the Ebola death toll. In New South Wales, Australia, where conspiracy theories about water fluoridation run rampant, children suffering from tooth decay are hospitalized for mass extractions at higher rates than in regions where water fluoridation exists.

And, closer to home:

Over the past several weeks, new cases of measles — which the Centers for Disease Control and Prevention declared eliminated from the United States in 2000 — have emerged in places such as Portland, Boston, Chicago, and Michigan [Minnesota, too]; researchers worry that the reemergence of preventable diseases such as this one is related to a drop in immunization rates due to declining trust in vaccines, which is in turn tied to misleading content encountered on the internet.

The source of one digital virus

“With new tools and technologies now available to help identify where and how health misinformation spreads,” Mina and Gyenes write, “evidence is building that the health misinformation we encounter online can motivate decisions and behaviors that actually make us more susceptible to disease.”

The two researchers point out that today’s vaccine hesitancy can be traced back to the fraudulent 1998 article published in a medical journal by a former British physician, Andrew Wakefield. The article has since been retracted by the journal’s editors, and Wakefield’s medical license has been revoked, “but the virus his article produced has continued to infect our information channels,” say Mina and Gyenes.


“The fraudulent study has been referenced as a basis for health hoaxes related to flu vaccines, misinformed advice to refuse the provision of vitamin K to newborns for the prevention of bleeding, and modifying evidence-based immunization schedules,” they explain.

“Like the germs running through the River Thames, toxic information now flows through our digital channels,” Mina and Gyenes add.

A tepid response

So far, public-health officials have mounted a weak response to these misinfodemics, the two researchers say, primarily because officials haven’t fully changed their communications tactics to meet the digital challenges of today:

To date, many public-health interventions seem to be addressing the outward signs of a misinfodemic by debunking myths and recommending that scientists collect more data and publish more papers. As well, much of the field remains focused on providing communications guidelines and engaging in traditional broadcast-diffusion strategies, but not search-engine optimization, viral marketing campaigns, and accessing populations through social-diffusion approaches.

Research demonstrates that public-health digital outreach uses a lot of language and strategies that are inaccessible to the populations it is trying to target. This has created what the researchers Michael Golebiewski and danah boyd call “data voids”: search terms where “available relevant data is limited, non-existent, or deeply problematic.”

In examining these environments, researchers such as Renée DiResta at Data for Democracy have documented the sorts of algorithmic rabbit holes that can lead someone into the depths of disturbing, anxiety-inducing, scientific-sounding (albeit unvalidated and potentially harmful) content that often profits from explanations with quick fixes at a cost.

An uphill battle

Some progress is being made, however. Mina and Gyenes note that Google has revised its search-related guidelines to give more credence to authoritative and trusted websites. The researchers also describe various projects (such as the Credibility Coalition) that are working to improve online content standards and, thus, stop misinfodemics from taking hold.

But it’s going to be an uphill battle.

As Mina and Gyenes point out, although evidence-based groups like the Centers for Disease Control and Prevention (CDC) and the Mayo Clinic are now on Instagram, “their collective following is 160,000 people, or 0.1% of Kim Kardashian’s follower count.”

I took a quick look on Twitter and found that the numbers there are equally lopsided. The CDC has 1.09 million Twitter followers, and the Mayo Clinic has 1.88 million.

That compares to 58.7 million for Kardashian.

Equally discouraging, however, was my finding that the Twitter account of Gwyneth Paltrow, the actor-turned-lifestyle guru whose pseudoscientific health advice and products have been thoroughly debunked, has 2.83 million followers.

For more information: You can read Mina and Gyenes’ article on The Atlantic’s website.