The GPT-5 Dilemma: A Leap in AI, a Step Back for the Environment

OpenAI’s new GPT-5 model is a technological marvel, but its release is shadowed by a critical environmental concern: its potentially massive energy consumption. While the company has remained notably quiet on the issue, experts are sounding a clear alarm. They argue that the enhanced capabilities of GPT-5—such as its ability to create websites and answer PhD-level questions—come with an unprecedented and steep environmental cost. This lack of transparency from a major AI developer is sparking serious questions about the industry’s commitment to sustainability.
A key finding from a study by the University of Rhode Island’s AI lab provides a stark illustration of this problem. The researchers found that a single medium-length GPT-5 response of around 1,000 tokens consumes roughly 18 watt-hours on average, a dramatic increase over earlier models. For context, 18 watt-hours is about what a standard incandescent light bulb uses in 18 minutes. Given that a service like ChatGPT handles billions of requests daily, the total energy consumed could rival the daily electricity demand of millions of homes.
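A quick back-of-envelope calculation shows how those numbers compound. In the sketch below, only the 18 watt-hour figure comes from the URI study; the 60-watt bulb, the 2.5 billion daily prompts, and the 29 kWh-per-day household average are illustrative assumptions, not reported values.

```python
# Back-of-envelope estimate of GPT-5's daily energy footprint.
# Only the 18 Wh/response figure comes from the URI study; the bulb wattage,
# request volume, and household figure are assumptions for illustration.

WH_PER_RESPONSE = 18        # avg. energy per ~1,000-token GPT-5 response (Wh), per the study
BULB_WATTS = 60             # assumed standard incandescent bulb rating (W)
REQUESTS_PER_DAY = 2.5e9    # assumed daily prompt volume (illustrative)
HOME_KWH_PER_DAY = 29       # assumed average household electricity use per day (kWh)

# How long a standard bulb runs on one response's worth of energy.
bulb_minutes = WH_PER_RESPONSE / BULB_WATTS * 60
print(f"One response ~ a {BULB_WATTS} W bulb burning for {bulb_minutes:.0f} minutes")

# Scale one response up to a full day of traffic.
daily_wh = WH_PER_RESPONSE * REQUESTS_PER_DAY
daily_gwh = daily_wh / 1e9                      # Wh -> GWh
homes = (daily_wh / 1e3) / HOME_KWH_PER_DAY     # Wh -> kWh, then per-home
print(f"Daily total ~ {daily_gwh:.0f} GWh, roughly {homes / 1e6:.1f} million homes' worth")
```

Under those assumptions the daily total works out to roughly 45 gigawatt-hours, on the order of one to two million average households, which is the scale critics are pointing to.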
The surge in energy use is directly linked to the model’s increased size and complexity. Experts believe GPT-5 is substantially larger than its predecessors, with a far greater number of parameters. That theory is supported by research from French AI company Mistral, which found a strong correlation between a model’s size and its energy consumption: a model ten times the size has an environmental impact roughly an order of magnitude larger. GPT-5 appears to follow the same pattern, with some experts theorizing that its resource use could be “orders of magnitude higher” than that of GPT-3.
This problem is further exacerbated by the model’s new architecture. Although it uses a “mixture-of-experts” system to improve efficiency, its ability to handle video, images, and complex reasoning likely negates these gains. The “reasoning mode,” which requires the model to compute for a longer time before delivering an answer, could make its power needs several times greater than for simple text tasks. This combination of increased size, complexity, and advanced features paints a clear picture of an AI system with a massive appetite for power, leading to urgent calls for greater transparency from OpenAI and the wider AI community.
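As a rough sketch of how these factors multiply, the toy model below combines the size scaling described by Mistral with a longer-running reasoning mode. The 2 Wh baseline and the 3x reasoning multiplier are assumptions chosen only to match the proportions discussed above, not measured values.

```python
# Toy model of how model size and reasoning mode compound per-response energy.
# All inputs are illustrative assumptions, not measurements.

BASELINE_WH = 2.0         # assumed per-response energy of a much smaller predecessor (Wh)
SIZE_FACTOR = 10          # Mistral finding: ~10x the size -> roughly 10x the impact
REASONING_MULTIPLIER = 3  # assumed extra compute when "reasoning mode" runs longer

plain_answer_wh = BASELINE_WH * SIZE_FACTOR
reasoning_answer_wh = plain_answer_wh * REASONING_MULTIPLIER

print(f"Larger model, plain answer:       ~{plain_answer_wh:.0f} Wh")
print(f"Larger model, reasoning mode on:  ~{reasoning_answer_wh:.0f} Wh")
```

With these toy numbers, a tenfold-larger model lands near the 18 watt-hours the URI lab measured for a typical response, and switching on extended reasoning pushes a single answer several times higher still.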
