
Opinion: One way to stop the dangerous spread of vaccine myths

Kara Alaimo
White House officials have expressed outrage over the failure of Facebook and other platforms to crack down on false claims about vaccines amid rising numbers of coronavirus cases, hospitalizations and deaths. Last week, President Joe Biden claimed that social networks are “killing people” by allowing health misinformation to proliferate, though on Tuesday he walked back those words and instead placed the blame on the authors of such misinformation.
Facebook, for one, disagrees with the claims that it is responsible for fueling misinformation. A spokesperson told CNN that Biden’s allegations that tech companies are responsible for spreading vaccine misinformation “aren’t supported by the facts. The fact is that more than 2 billion people have viewed authoritative information about COVID-19 and vaccines on Facebook, which is more than any other place on the internet.”
As the coronavirus continues to sicken Americans and so many others worldwide, people (especially those lucky enough to live in a country with strong Covid-19 vaccine access) need to be encouraged to get vaccines that can not only protect them from serious illness and death, but also stop them from spreading the virus to those around them who aren’t eligible to be inoculated — like children and people with certain medical conditions. The spread of misinformation about vaccines must be stopped.
That’s why the White House is right to question Section 230 of the Communications Decency Act, which shields online platforms from liability for most content their users post. The law does need to be updated. But the exceptions must be extremely narrow, focused on widespread misinformation that clearly threatens lives.
According to the Center for Countering Digital Hate, just 12 people are responsible for 65% of the misinformation about vaccines circulating online. The organization found 812,000 instances of anti-vaccine content on Facebook and Twitter between February 1 and March 16, 2021, which it reported was just a “sample” of the misinformation spreading widely.
The failure of tech companies to stop it is unconscionable. Worse, according to the Center’s report, misinformation has on occasion actually been recommended by Instagram (owned by Facebook) to its users. And even when this false content has been reported to social media companies, they have overwhelmingly declined to take action against it. While the Center faults Facebook, Twitter and Google for failing to identify and remove anti-vaccine content, it notes that “the scale of misinformation on Facebook, and thus the impact of their failure, is larger.”
Many Internet activists oppose changing Section 230 because removing its protections against legal liability for online intermediaries who host or republish content could limit our ability to have wide-ranging conversations on social media, including those about controversial topics. And, clearly, it would not be feasible for tech companies to monitor and fact-check every conversation we have on social media each day. “Attacking Section 230 of the CDA does nothing but show that you have no idea what you’re talking about when it comes to ending abuse online,” wrote Zoë Quinn in her book Crash Override: How Gamergate (Nearly) Destroyed My Life, and How We Can Win the Fight Against Online Hate. Quinn was the target of fake online claims that she slept with a reviewer in order to get a glowing review of a game she created, and she was deluged with death threats and other abuse as a result.
But there’s a way to protect the openness of the Internet and the ability of social networks to operate while still cracking down on falsehoods that cause mass harm. Congress should pass a law holding tech companies responsible for removing content that directly endangers lives and achieves mass reach — such as more than 10,000 likes, comments, or shares. The definition of endangering lives should also be narrow. It should include grave threats to public health — like vaccine misinformation — or other direct invitations to cause serious harm to ourselves or others.
Legislation updated along these lines would allow tech companies to focus their efforts on policing content that spreads widely (and, by the way, that also makes them the most money, since social networks rely on popular content to keep people on their sites so they can earn advertising revenue). Content with the most reach and engagement is, of course, the most influential and thus potentially the most harmful.
Of course, there’s plenty of legal precedent for this. As I’ve pointed out before, it’s constitutional to restrict freedom of speech in limited cases, such as when speech threatens to facilitate crimes or poses imminent and real threats. Clearly, information that fuels a deadly pandemic qualifies.
Such a law would also come with a serious danger that cannot be discounted and must be addressed: politicians could try to use it to hamper the spread of information they don’t like that is actually true. (Remember how former President Trump frequently called accurate reports he didn’t like “fake news”?) That’s why the arbiters of truth in such cases would need to be federal judges, who are nominated by the president but confirmed by the Senate and are supposed to be impartial. The Justice Department and state attorneys general could bring suits against social networks for failing to remove deadly misinformation that spreads widely on their platforms; such cases could be decided by a panel of judges (to further protect against a single activist jurist); and tech companies found to be in violation of the law could face monetary fines.
The idea here is that the prospect of financial penalties and the public relations damage that comes with lawsuits would cause social networks to step up their policing of misinformation to avoid facing suits in the first place. That would keep the onus mostly on companies to ferret out and shut down fake news that is dangerous and widespread.
That’s exactly what happens with copyrighted material. Copyright infringement isn’t protected under Section 230, so when a user shares copyrighted material on a social network without permission, the owner of the copyright can sue the platform for damages. That’s why social networks have gotten so savvy about removing such content, and how we’ve ended up in situations like the one in which Twitter removed a clip former President Trump posted of the band Nickelback while separately allowing him to invoke the prospect of a civil war.
If tech companies can figure out how to remove clips that harm people’s commercial interests, surely they can also figure out how to take down posts that pose threats to our lives.
Social networks could have avoided this kind of regulation by doing a better job of cracking down on misinformation in the first place. But they have long tried to shirk responsibility for the social effects of the misinformation that spreads on their platforms. In 2017, Facebook CEO Mark Zuckerberg used his voting power to block a shareholder resolution that would have required the company to merely publicly report how it deals with misinformation and the impact of its misinformation policies.
Like the viruses vaccines protect us against, misinformation has become explosively contagious and deadly on social media. Congress should inoculate us against some of the worst of it while still maintaining the viability of broad, unfettered speech that doesn’t threaten lives on social media.