Report: Artificial Intelligence: A Threat to Climate Change, Energy Usage and Disinformation

Tech accountability and environmental groups sound the alarm on the potential harms of AI to the planet and information ecosystems

WASHINGTON – Today, partners in the Climate Action Against Disinformation coalition released a report that maps the risks that artificial intelligence poses to the climate crisis.

Topline points:

  • AI systems require an enormous amount of energy and water, and consumption is expanding quickly, with estimates suggesting it could double within 5-10 years.
  • Generative AI has the potential to turbocharge climate disinformation, including climate change-related deepfakes, ahead of a historic election year where climate policy will be central to the debate. 
  • The current AI policy landscape reveals a concerning lack of regulation at the federal level, with only minor progress at the state level, leaving the public to rely on voluntary, opaque and unenforceable corporate pledges to pause development or ensure the safety of AI products.


“AI companies spread hype that they might save the planet, but currently they are doing just the opposite,” said Michael Khoo, Climate Disinformation Program Director at Friends of the Earth. “AI companies risk turbocharging climate disinformation, and their energy use is causing a dangerous increase in overall US consumption, with a corresponding increase in carbon emissions.”

“We are already seeing how generative AI is being weaponized to spin up climate disinformation or copy legitimate news sites to siphon off advertising revenue,” said Sarah Kay Wiley, Director of Policy at Check My Ads. “Adtech companies are woefully unprepared to deal with generative AI, and the opaque nature of the digital advertising industry means advertisers are not in control of where their ad dollars are going. Regulation is needed to build transparency and accountability so that advertisers are able to decide whether to support AI-generated content.”

“The evidence is clear: the production of AI is having a negative impact on the climate. The responsibility to address those impacts lies with the companies producing and releasing AI at a breakneck speed,” said Nicole Sugerman, Campaign Manager at Kairos Fellowship. “We must not allow another ‘move fast and break things’ era in tech; we’ve already seen how the rapid, unregulated growth of social media platforms led to previously unimaginable levels of online and offline harm and violence. We can get it right this time, with regulation of AI companies that can protect our futures and the future of the planet.”

“The climate emergency cannot be confronted while online public and political discourse is polluted by fear, hate, confusion and conspiracy,” said Oliver Hayes, Head of Policy & Campaigns at Global Action Plan. “AI is supercharging these problems, making misinformation cheaper and easier to produce and share than ever before. In a year when 2 billion people are heading to the polls, this represents an existential threat to climate action. We should stop looking at AI through a ‘benefit-only’ analysis and recognise that, in order to secure robust democracies and equitable climate policy, we must rein in big tech and regulate AI.”

“The skyrocketing use of electricity and water, combined with its ability to rapidly spread disinformation, makes AI one of the greatest emerging climate threat-multipliers,” said Charlie Cray, Senior Strategist at Greenpeace USA. “Governments and companies must stop pretending that increasing equipment efficiencies and directing AI tools towards weather disaster responses are enough to mitigate AI’s contribution to the climate emergency.”

Previously, the coalition submitted letters to President Biden and Senator Chuck Schumer calling on them to incorporate climate concerns into proposed AI legislation. The letters echo recommendations made in the report, including:

  • Transparency: Companies must publicly report on energy usage and emissions produced, assess any environmental justice concerns related to developing AI technology and disclose how their AI models produce information in a way that prioritizes climate science.
  • Safety: Companies must be able to publicly demonstrate that their products are safe for users and the environment. In addition, governments should develop standards for AI safety reporting and invest in research that maps the risks AI poses through the spread of climate disinformation.
  • Accountability: Governments should enforce rules on investigating and mitigating the climate impacts of AI with clear, strong penalties for noncompliance. Companies and their executives must be held accountable for any harms that occur as a result of their products.


Communications contact: Erika Seiber, [email protected]
