Artificial Intelligence: The Next Climate Wildfire?

by Michael Khoo, climate disinformation program manager
Originally posted in The Messenger

Generally, we think nothing of filling a prescription, test-driving a new car, or taking a flight — because we know the companies that made those products had to prove they were safe.

Yet, new technology like artificial intelligence (AI) — so advanced it can literally improve itself — is entering increasingly widespread use with essentially no safety checks at all. This is alarming.

There are obvious dangers in allowing artificial intelligence to spread and multiply in an unregulated and profit-driven free-for-all. You might think of the kind of self-aware, self-evolving science-fiction technology in the “Terminator” films. AI using AI to improve itself is, however, a faraway threat. The more immediate danger is humans weaponizing AI to make it easier for disinformation to spread.

Lina M. Khan, chair of the Federal Trade Commission, has urged regulators not to repeat the mistakes made with social media: allowing new technology out of the box with no rules. We know that privacy breaches and real social harms resulted from developing social media without strong regulatory guardrails. And she says that unless we learn from our mistakes, artificial intelligence also risks “turbocharging fraud.”

We’re not talking about fake videos of the Pope dressed up like Elton John, either. AI imaging has already reportedly been used on the campaign trail, and algorithms can vastly expand the capacity to run micro-targeted persuasion campaigns, down to even the individual level. These abilities mean AI risks amplifying existing disinformation, including about climate change.

Misleading information about climate change and extreme weather is already a serious problem — without the help of AI. We recently saw conspiracy theories about the Canadian wildfires spread quickly on social media, including falsehoods that the fires were set by environmentalists or LGBTQ activists, or were intended to clear space for renewable energy projects.

Imagine if that disinformation were produced by something that could create more believable lies and spread them further. AI very well could do so, unless we demand the protections that we take for granted in pharmaceuticals and transportation. As we saw with social media, once new technologies leave the lab, it’s too late.

In the wrong hands, AI could forever undermine factual climate discourse with its ability to tailor-make stories, arguments, even realistic images. By scraping social media posts and other digital activity, AI has the potential to create billions of pieces of disinformation and then personalize them and disseminate them — which could make it extremely difficult to tell fact from fiction. This could not only hinder fact-based climate action, but it could also pose serious danger around extreme weather events — when clear and accurate information is critical.

Fortunately, we can still prevent the AI genie from leaving the bottle by applying the same principles regulators have used on pharmaceuticals and automobile safety to artificial intelligence.

We should start with transparency. Before AI enters use, companies must be compelled to show regulators how their algorithms work — and to prove that they are safe. Even China is proposing this commonsense step in its AI regulations. The AI accountability rules currently being developed by the Commerce Department should require the same.

This should include rigorous safeguards against AI algorithms explicitly using disinformation, hate speech and fraud to manipulate human emotions. Strict regulations surround pharmaceutical advertising. There’s every reason to similarly constrain AI, for as the United Nations has noted, “the propagation of scientifically misleading information” has severe “negative implications for climate policy.”

AI should also have to adhere to community standards. We already require television to do so. Even TikTok has adopted standards specifically to counter climate disinformation. That same logic should be applied to AI, especially given its potential power to permeate what we read, see and think.

Lawmakers and regulators were slow to acknowledge the potential harms of social media. Facing the next great technological revolution — this one with the ability to change and adapt without human oversight — we can’t afford to make the same mistake. AI industry leaders have even asked lawmakers to prioritize AI regulation.

The sad trajectory we’ve seen social media take wasn’t inevitable. It was the direct consequence of policy decisions taken — or, more accurately, not taken — when it was in its infancy. That’s where we are with artificial intelligence today.

And that’s why, to keep AI’s sweeping abilities from spreading disinformation about the most existential crisis facing humanity, the federal government must do what it’s done for decades: require companies to prove products are safe before Americans use them.