U.S. Surgeon General’s Advisories are reserved for urgent public health threats. Previously, advisories have addressed the importance of things like mental health, breastfeeding, and disease prevention.
Now, the threat of health misinformation has made the list.
In the golden age of social media, we no longer simply experience epidemics—we also experience infodemics.
An infodemic is an overabundance of information, both accurate and inaccurate.
For decades, we have had the information superhighway at our fingertips, and our reliance on the internet for quick answers may be contributing to the growing problem of infodemics.
Increasingly, it feels like the gaps in what we do know are being filled with speculation. And even facts that are more concrete, if a little boring, are being drowned out by more hysterical claims and conspiracy theories.
The ease and speed with which you can receive information over social media platforms are fueling the spread of misinformation and disinformation.
The definitions of misinformation and disinformation can vary.
When I say misinformation, I’m talking about things that are stated as fact, but are actually false based on the body of scientific evidence.
Disinformation is similarly false, but where many people might spread misinformation unknowingly, thinking it is true, disinformation is spread with the intention to mislead.
Basically, we could all fall victim to accidentally spreading misinformation, but when someone shares information that they know is false, it becomes disinformation.
Before we get into the current state of misinformation, it is important to note that it is not exactly a new phenomenon.
In our vaccines episode, we talked about the history of skepticism around the smallpox vaccine. But there were also full-blown conspiracy theories that circulated during the smallpox eradication campaign.
Burma (now Myanmar) is a former British colony, and during the aggressive smallpox vaccination campaign there, rumors began to circulate that the Queen of England had dreamed that the British monarchy would be overthrown by a Burmese child, and that vaccination was actually a cover to infect and kill the population to prevent the end of the monarchy.
This is, of course, similar to the conspiracy theory that COVID vaccines are actually a tool to kill off a massive portion of the population.
During the cholera outbreaks in 1800s England, people came to believe that you were more likely to die in the hospital than if you just stayed home (a belief that also resurfaced during COVID), or, in the worst case, that doctors were actually killing patients to supply cadavers to anatomy schools.
But misinformation doesn’t just spread through word of mouth among the public. Governments and health workers have also, at times, fallen short of their intended communication goals.
Take, for example, the United States government and the 1918 flu pandemic. In an effort to prevent the pandemic from overshadowing the war effort and causing the public to panic, the government may have downplayed the prevalence of disease.
Because of this, a stigma grew around illness: getting the flu was seen as a sign of weakness, which could lead soldiers to hide their symptoms and potentially spread the disease further.
During the AIDS epidemic, researchers originally termed the disease “gay-related immunodeficiency,” and a publication in the Journal of the American Medical Association (a prominent and trusted medical journal) incorrectly reported that HIV could be spread outside of direct contact with bodily fluids.
As understanding grew about how the virus spread and who it could infect, these initial false claims remained difficult to correct. As a result, a strong stigma against men who have sex with men persists, and other vulnerable groups may underestimate their own risk if they view HIV as a disease of “gay men.”
Historically, medical misinformation has been a group effort, with the public, government, and public health all having the potential to contribute to the confusion.
While that is still true today, it does seem that the general population’s contribution to the problem is rapidly increasing, thanks to our near-universal use of social media.
According to the Pew Research Center, 40% of Instagram users and 52% of TikTok users report regularly getting their news from those platforms.
And that doesn’t necessarily mean that they’re getting it from verified news organizations on those sites.
When you start to think about the way social media platforms are designed to function, and the content that gets rewarded in the algorithm, it almost feels like they are tailor-made to help misinformation spread.
Part of the reason that misinformation can spread so quickly is because it is often framed to evoke an emotional response. Untrue medical claims often highlight the danger of something and the risk it poses to you and your family.
These claims often speak in absolute or definite terms, but science is always evolving, and risks change based on nuanced circumstances.
Misinformation can also appeal to our previously held biases.
If someone is telling you something you already believe is true, then there is less of a desire to fact check what they’re saying.
Especially in cases of medical or health misinformation, having someone to blame is also a strong motivator.
Instead of facing the reality that bad things sometimes just happen, like a virus acquiring the right mutations to jump from animals to humans, or hurricanes growing stronger after years of out-of-control CO2 emissions, it is more comforting to point the finger at someone or something.
Think, for example, of the claims that a lab leak was responsible for the pandemic, or that the government is controlling the weather.
These motivators for spreading misinformation are amplified by the social media algorithm and reward system.
Engagement is a form of currency on social media platforms, and the platforms themselves are engineered to make users seek out the reward of likes and comments.
A quick way to increase engagement is by posting emotionally charged content.
Algorithm-based platforms then prioritize showing users content that has the most engagement, creating a cycle of misinformation promotion.
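To make that cycle concrete, here is a toy sketch in Python of an engagement-weighted feed. This is not any platform’s actual code; the Post fields and score weights are invented for illustration. The point is simply that when a feed sorts by engagement alone, accuracy never enters the calculation.

```python
from dataclasses import dataclass

@dataclass
class Post:
    text: str
    likes: int
    comments: int
    shares: int

def engagement_score(post: Post) -> float:
    # Invented weights: comments and shares suggest stronger engagement
    # than likes, so they count for more here.
    return post.likes + 3 * post.comments + 5 * post.shares

def rank_feed(posts: list[Post]) -> list[Post]:
    # Show the highest-engagement posts first. Nothing in this ranking
    # asks whether a post is true.
    return sorted(posts, key=engagement_score, reverse=True)

feed = rank_feed([
    Post("Measured study summary, full of caveats", likes=40, comments=2, shares=1),
    Post("SHOCKING: everyday food is secretly POISONING you!", likes=90, comments=60, shares=45),
])
print([post.text for post in feed])  # the emotionally charged post ranks first
```

In this toy feed, the sensational post wins every time. Real ranking systems are vastly more complex, but the incentive they create points in the same direction.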
But it’s not just social media that fuels the infodemic. Conflicting answers on search engines can cause confusion among people looking for health information.
In an effort to address this, Google has been attempting to summarize search results with AI, and the results haven’t always worked out well.
Shortly after Google debuted its AI-generated search summaries, powered by its Gemini models, users began noticing recommendations like putting glue on pizza and eating one small rock a day.
So how do we combat misinformation, and what can you do to stop its spread?
First, there is a lot of work to be done by institutions and governments to help people understand the limitations of knowledge.
A great example of this is the efficacy of mRNA vaccines. When we first started getting efficacy results from the clinical trials in 2020, they were amazing—95% effective at preventing COVID compared to people who got a placebo.
But what needed to happen, and did not, was communication around the uncertainty of how long that efficacy would last.
And when the protection from the first vaccines began to wane, and vaccinated people started getting COVID, many lost trust in vaccines, despite the fact that they still work incredibly well to prevent serious disease and reduce the risk of long COVID.
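For those curious where a number like 95% comes from, here is a minimal sketch of the arithmetic behind vaccine efficacy, using illustrative case counts roughly in line with the published phase 3 trial results rather than the exact figures:

```python
# Vaccine efficacy compares the rate of disease in the vaccinated group
# to the rate in the placebo group:
#   efficacy = 1 - (attack rate, vaccinated) / (attack rate, placebo)

def vaccine_efficacy(cases_vaccinated: int, n_vaccinated: int,
                     cases_placebo: int, n_placebo: int) -> float:
    attack_rate_vaccinated = cases_vaccinated / n_vaccinated
    attack_rate_placebo = cases_placebo / n_placebo
    return 1 - attack_rate_vaccinated / attack_rate_placebo

# Illustrative counts: about 8 COVID cases among ~18,000 vaccinated
# participants versus about 162 cases among ~18,000 placebo recipients.
print(f"{vaccine_efficacy(8, 18_000, 162, 18_000):.0%}")  # -> 95%
```

Notice that this number says nothing about how long protection lasts, which is exactly the uncertainty that went undercommunicated.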
A large part of science literacy is helping people understand the limitations of knowledge, which can help them spot bad science. Because spotting bad science, and addressing it before it spreads, can make a huge impact.
So here are some things to look out for:
Who is giving you the information? Are they being transparent about their qualifications? Someone who presents themselves as an expert without making their credentials, relevant research, or experience available to you should give you pause.
How is the information presented? Real scientific research rarely presents itself in absolute terms. Are the limitations of knowledge being expressed? Are they talking about the context of risk?
Where is the information coming from? Does the person or organization that is presenting this information to you have a conflict of interest when it comes to their funding or partners?
Relatedly, is someone telling you about a health problem that is putting you into a panic and then conveniently offering to sell you the solution?
Are the claims based on a large, well-designed study, or on anecdotal experiences?
Is someone trying to convince you of a conspiracy theory?
While conspiracies do happen, conspiracy theories are often based on misinformation and hard to refute due to their general lack of evidence.
Abbie Richards, a researcher specializing in understanding how misinformation, conspiracy theories, and extremism spread on TikTok, has some great tips for spotting the difference between a conspiracy (a secret plan by a group to do something unlawful, and something that has actually happened throughout history) and a conspiracy theory.
For example, is the goal a single tangible outcome (like stealing election information from the opposition), or is the end goal outlandish and vague (like turning the entire population of a country into 5G receivers)?
Would the conspiracy involve a small group of people who could potentially keep the secret safely? Or does it require collaboration between an unbelievably large group of people?
Real conspiracies are typically hidden until investigations, often by journalists, bring them to light with evidence. Conspiracy theories, by contrast, are constantly and openly talked about, yet no evidence is ever presented.