Pinterest banned climate change disinformation. Will other social media giants do the same?
Editor's note: After this article was published, Twitter was purchased by Elon Musk and the company's measures to stop the spread of climate change disinformation were weakened.
Pinterest is the place to go for inspiration on everything from recipes to redecorating. Now it may inspire change within its own industry.
The website, which has 431 million active users, is the first major social platform to ban content that denies the scientific consensus that climate change is happening and influenced by human activity. The new rules, announced in April, apply to both posts and ads.
"We repeatedly heard from experts that climate misinformation, including climate change denying narratives, is causing real harm by impeding meaningful climate action," says Pinterest’s policy head Sarah Bromma.
Climate disinformation pollutes every social media platform, but it is especially prevalent on Facebook and Twitter. One study estimates that there are up to 1.36 million views of climate misinformation every day on Facebook. On Twitter, bots account for 25 percent of the tweets about climate change, and those fake accounts tend toward denial.
But, despite millions of people being misled — and an Intergovernmental Panel on Climate Change (IPCC) report calling out disinformation as a barrier to critical global action — it's rare for a social media company to ban content, even content that is verifiably false.
That's why Friends of the Earth's Michael Khoo, who advocated for Pinterest's new policy as co-chair of the Climate Disinformation Coalition, calls it the new "gold standard."
If there were a will, there would be a way to stop climate disinformation
"Climate disinformation is unique," explains Environmental Defense Fund's Lauren Guite, who co-chairs the Climate Disinformation Coalition. "If you look at who has the incentive to spread climate disinformation, it's actually a really small group of people."
In fact, one watchdog group found that just 10 accounts, two of which have ties to Exxon, are responsible for nearly 70 percent of the climate disinformation circulating on Facebook.
So why not just clamp down on those accounts?
"Facebook does have the power," Guite says. "It just doesn’t have the will."
Nicola Aitken, who works on content policy for Meta (which owns Facebook and Instagram), says the company does take action against accounts that repeatedly share false claims about climate science. She says that Facebook makes posts less likely to be seen in users' feeds and can take away a page's ability to register as a news page. Aitken also points out that Facebook has a network of more than 80 independent fact-checking organizations who review and rate content, including climate content, in more than 60 languages.
Unfortunately, says Guite, the fact-checking is uneven. The Center for Countering Digital Hate estimates that Facebook failed to label 50 percent of posts promoting climate change denial.
Facebook has nearly 3 billion monthly users, compared with Pinterest's 431 million, making its failure to rein in misinformation far more detrimental. But Meta's size, and ownership of Instagram (which has 1 billion users), also offers an opportunity for meaningful action.
Meet the 'Toxic 10' delaying climate action
Nearly 70% of climate disinformation on Facebook comes from just 10 accounts. These 10, two of which have ties to oil giant Exxon, exploit social media's system of amplifying stories that get a lot of shares and likes by crafting posts designed to provoke outrage (and be reflexively shared). These toxic 10 are:
1. Breitbart
2. Western Journal
3. Newsmax
4. Townhall Media (founded by the Exxon-funded Heritage Foundation)
5. Media Research Center (receives funding from Exxon)
6. The Washington Times
7. The Federalist Papers
8. Daily Wire
9. Russian state media, via RT.com and Sputnik News
10. Patriot Post, a secretive site whose writers use pseudonyms
Will Meta and Twitter follow Pinterest’s lead?
"If the new Pinterest policy becomes an industry game-changer, it will be because the public realizes these platforms have the power to make these kinds of changes, and demands it," says Guite, who also runs the Misinformation Brigade, an army of social media users dedicated to reporting misinformation on the web. (You can join them here.)
Since the Pinterest announcement, Twitter, which has 330 million monthly users, announced it would "accelerate its climate commitments" by prohibiting misleading ads that contradict the scientific consensus on climate change. (Google's YouTube and Meta have already committed to prohibiting ads that feature climate disinformation.)
While Twitter's ad policy change is another step in the right direction, Guite points out that the new policy doesn’t address climate disinformation that spreads organically through shares. "That's a much bigger problem than paid ads," Guite says. "We'd like for them to have policies to stop the spread of climate misinformation like they do for Covid or elections."
In May, Twitter also announced that it would limit the visibility of tweets it calls "copypasta," a slang term for content that’s purposefully copied and shared at scale. It has legitimate uses. Advocacy organizations often ask followers to copy and paste a tweet to get something noticed. But it’s also a disinformation tool, and taking steps to curb it will help limit bad information from being artificially amplified.
Still, Pinterest has gone a step beyond any other social media company by committing to remove inaccurate content about climate change, as well as misrepresentations of scientific data that aim to erode trust in experts. It has also committed to enforcing the rules, with repeat offenders potentially having their accounts removed.
So, through the Climate Disinformation Coalition, EDF is keeping up the pressure on Meta, Twitter and other platforms to reduce disinformation through transparency: publishing weekly reports on the scale of climate change disinformation and their efforts to mitigate it, and creating new protections for the people and communities that climate disinformation targets.
But what's also desperately needed, says Guite, is regulation.
How do you regulate what's lawful but awful?
"For a long time, politicians were trapped in the free speech debate around this question," says Alaphia Zoyab, director of campaigns and media for Reset, which works on policy change to address the harms of disinformation.
Especially in the US, legislators from both parties have been stymied from taking action by an army of industry lobbyists who frame regulation as a choice between free speech and online safety.
But European regulators have found a way forward: Focus on the system that amplifies divisive and extreme ideas, rather than on what's good or bad speech.
To do this, the European Parliament agreed to pass the Digital Services Act, replacing the patchwork of fact-checking policies crafted by the social media companies themselves with uniform requirements for accountability and transparency across platforms.
"The companies have to assess the risks [like climate change] that their platforms exacerbate," explains Zoyab. "It's an elegant way forward because it sidesteps the free speech issues, but looks at the amplification of disinformation."
The new law will also establish a regulatory body and EU commission with audit and inspection powers. "It's a whole new system of demanding data from the platforms that will let independent investigators determine if the platforms are telling the truth," Zoyab says.
Khoo says that these policy shifts would start social media companies on a real path toward reform. And compared to the climate crisis, he sees disinformation as an easier problem to solve.
"Humans wrote the code that amplifies disinformation, so humans can fix that code," he says.