Sustainable AI: Innovation and Environmental Responsibility

We’ve seen a lot of successful AI case studies lately, from speeding up medical diagnoses to optimising energy grids. But behind the promise of smarter systems lies a growing environmental concern. As AI adoption accelerates across industries, so too does its energy and resource consumption.

LLMs come with two categories of carbon emissions: operational and embodied. Operational emissions come from running the model, both during training and at inference time; embodied emissions come from manufacturing the hardware it runs on.
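As a rough sketch (with entirely made-up numbers), the two categories can be combined into a single annual footprint by amortising the embodied emissions over the hardware's useful life:

```python
# Illustrative sketch, not official figures: combine operational and embodied
# emissions into one annual footprint. Embodied emissions are typically
# amortised over the hardware's useful lifetime.

def total_footprint_kg(
    operational_kg_per_year: float,   # emissions from training and inference
    embodied_kg: float,               # emissions from manufacturing the hardware
    hardware_lifetime_years: float,   # period over which embodied emissions are spread
) -> float:
    """Annual carbon footprint: operational plus amortised embodied emissions."""
    return operational_kg_per_year + embodied_kg / hardware_lifetime_years

# Hypothetical numbers purely for illustration.
print(total_footprint_kg(operational_kg_per_year=12_000,
                         embodied_kg=3_000,
                         hardware_lifetime_years=4))   # -> 12750.0 kg CO2eq/year
```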

From the electricity needed to power vast data centres, to the water used for cooling, to the materials mined for hardware, the environmental cost is mounting. If we’re to realise AI’s potential without compromising the planet, sustainability must become a core principle in its development.

The Environmental Footprint of AI

Training new models such as OpenAI’s GPT-4 or Google’s Gemini series demands enormous computational resources. Newer models often have more parameters, which require far more computing power, and the drive for improved performance means frequent retraining of ever-larger models. Much of that effort is then rendered obsolete as newer models are released.

Training a single large language model could produce about 300,000 kilograms of CO2, raising pressing questions about the long-term viability and ethics of scaling AI under current practices. Unfortunately, this area is a rabbit hole with little solid ground: there is plenty of information out there, but no agreed standard to follow.

While most sustainability assessments of AI systems begin with measuring electricity consumption in kilowatt-hours (kWh), this only provides part of the picture. Energy usage is often converted into CO₂ equivalent (CO₂eq) using an emissions factor, usually expressed in grams of CO₂ per kWh; the UK’s, for instance, was 207.05 g CO2e per kWh. The emissions factor depends on the local energy mix: electricity generated from coal has a far higher emissions factor than electricity from wind or solar.

This expresses the warming potential of AI-related energy use as a single, comparable figure. Although CO₂eq is not always included in public reporting, it is useful in research and carbon accounting.
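To make the conversion concrete, here is a minimal sketch using the UK grid factor quoted above; the training-run energy figure is a made-up example:

```python
def co2eq_kg(energy_kwh: float, grid_factor_g_per_kwh: float) -> float:
    """Convert electricity use into kilograms of CO2-equivalent."""
    return energy_kwh * grid_factor_g_per_kwh / 1000.0   # grams -> kilograms

UK_FACTOR_G_PER_KWH = 207.05   # UK grid emissions factor quoted above

# A hypothetical 10 MWh (10,000 kWh) training run on the UK grid...
print(co2eq_kg(10_000, UK_FACTOR_G_PER_KWH))   # ~2,070 kg CO2eq
# ...versus the same run on a coal-heavy grid (assumed ~900 g/kWh).
print(co2eq_kg(10_000, 900))                   # ~9,000 kg CO2eq
```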

Looking Ahead

There are many limitations to emissions reporting for AI systems. Results from one hardware setup and a fixed emissions factor won’t transfer to all systems, and significant architectural and design differences among AI models make it difficult to generalise sustainability findings.

Most current studies look at large, general-purpose models. These may not show the efficiency or emissions of smaller or specialised systems. We’d certainly be interested to see whether more narrowly focused models, such as those trained for specific tasks like code generation or medical diagnostics, can match performance with less energy consumption and carbon emissions.

This point is gaining traction among researchers, with several calling for broader evaluation frameworks. It’s conceivable that in some use cases, a small, well-targeted model could deliver excellent accuracy while operating at a fraction of the cost, both financially and environmentally, of a general-purpose LLM. As such, investing in task-specific architectures may prove vital to making AI sustainable in the long term.

The implications are already evident in the corporate world. Google has reported a roughly 50% increase in emissions since 2019, largely due to the expansion of generative AI, and its data centres now account for a significant share of the sector’s growing electricity demand. Similarly, Microsoft has reported an increase of around 30% in emissions since 2020, driven by the growth of its AI infrastructure.

Cooling Systems and Water Use

Although energy usage tends to dominate sustainability discussions, water consumption is another critical factor. Data centres consume substantial quantities of water to cool their high-performance computing infrastructure: a single large facility can get through somewhere between 11 million and 19 million litres per day.

In response, data centres are adopting increasingly efficient cooling technologies. Direct-to-chip cooling, where liquid coolant is applied directly to components like GPUs and CPUs, is one method. Immersion cooling, which submerges hardware in non-conductive liquid, is another example. Companies like Iceotope are going further, incorporating closed-loop liquid cooling systems that recycle heat for use elsewhere within a facility.

These solutions also support water conservation. Some advanced systems are capable of recovering and reusing up to 70% of the water they consume. As global demand for data processing increases, such practices will be essential in minimising the sector’s water footprint. Data centres have made great strides in improving their efficiency but is it enough?

A report from the Royal Academy of Engineering has called on the government to ensure tech companies accurately report how much energy and water their data centres use, and to require data centres to stop or reduce their use of drinking water for cooling. Using treated wastewater instead is an option many operators have already explored.

Manufacturing and E-Waste Challenges

AI’s environmental impact extends well beyond data centre operations, which brings us to embodied emissions. Producing AI hardware, such as GPUs and accelerators, is highly resource-intensive. Manufacturing a typical 2 kg computer, for instance, can require up to 800 kg of raw materials. Many of these, including rare earth elements, are extracted from environmentally sensitive regions using methods that degrade soil, contaminate water supplies, and threaten local ecosystems.

Moreover, rapid hardware obsolescence adds to the global e-waste crisis. As outdated or malfunctioning equipment is discarded, it often ends up in landfills, where hazardous substances like mercury, cadmium, and lead can leach into the environment. While recycling technologies exist, they remain underutilised, particularly in low- and middle-income countries.

Energy Efficiency: A Paradox in Growth

Despite these growing demands, information technology as a whole accounts for just under 2% of global energy consumption, a figure that has remained relatively stable. Considering the growth of AI use, you’d expect a larger percentage. This is largely thanks to gains in hardware efficiency, data centre management, and software optimisation.

However, AI presents a potential deviation from this trend. A single prompt to a model like ChatGPT may use seven to ten times more energy than a basic Google search. As AI tools become more embedded in consumer apps and enterprise systems, maintaining this 2% benchmark will become increasingly difficult. Without intervention, AI’s energy use could surge significantly in the coming decade.
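As a back-of-envelope illustration, using the seven-to-ten-times multiplier above and an assumed ~0.3 Wh per basic web search (both rough figures), the numbers scale up quickly:

```python
# Rough estimate only: per-search energy and daily prompt volume are assumptions.
SEARCH_WH = 0.3                                 # assumed energy per basic web search
PROMPT_WH = (7 * SEARCH_WH, 10 * SEARCH_WH)     # 2.1 - 3.0 Wh per prompt, from the 7-10x range

daily_prompts = 1_000_000_000                   # hypothetical one billion prompts a day
low, high = (daily_prompts * wh / 1e6 for wh in PROMPT_WH)   # Wh -> MWh
print(f"Estimated daily energy: {low:,.0f} - {high:,.0f} MWh")   # ~2,100 - 3,000 MWh
```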

AI as a Tool for Sustainability

Ironically, AI, the very source of these environmental pressures, may also offer solutions to mitigate them. Sustainability challenges can benefit from AI because it excels at optimisation, large-scale data analysis, and complex modelling.

Smarter Energy Grids

The UK aims for all electricity to come from 100% zero-carbon generation by 2035, with more renewable sources such as Hornsea 2, off the Yorkshire coast, feeding the grid. But wind turbines and solar arrays fluctuate with weather patterns, making it harder to maintain stability without either overproducing or relying on fossil fuel backup. The challenge is matching unpredictable generation with real-time demand.

National Grid ESO has trialled advanced AI forecasting platforms that integrate weather predictions, turbine data, and smart meter usage statistics. These systems can anticipate dips in renewable output and recommend drawing power from large-scale battery facilities such as the 100 MW Minety Battery Energy Storage Project in Wiltshire. And when the wind is blowing, AI can help store excess electricity in batteries, managing charging and discharging so that surplus generation is captured rather than wasted.
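As a toy illustration of the underlying scheduling problem, and not a description of National Grid ESO’s actual platform, a simple rule-based dispatcher might look like this (hourly intervals, made-up forecast numbers):

```python
# Toy sketch: charge the battery when forecast renewable output exceeds demand,
# discharge when it falls short. Assumes hourly steps, so MW of surplus maps
# directly to MWh stored. All numbers are hypothetical.

def dispatch(forecast_mw, demand_mw, capacity_mwh, charge_mwh=0.0):
    """Return a list of (hour, action, mwh) decisions for hourly forecasts."""
    plan = []
    for hour, (gen, load) in enumerate(zip(forecast_mw, demand_mw)):
        surplus = gen - load                           # positive -> spare renewables
        if surplus > 0:
            stored = min(surplus, capacity_mwh - charge_mwh)
            charge_mwh += stored
            plan.append((hour, "charge", stored))
        else:
            released = min(-surplus, charge_mwh)
            charge_mwh -= released
            plan.append((hour, "discharge", released))
    return plan

# Hypothetical forecast (a windy night, then a calm morning), 100 MWh battery.
print(dispatch(forecast_mw=[120, 130, 60, 40],
               demand_mw=[80, 90, 100, 110],
               capacity_mwh=100))
```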

Weather Forecasting and Climate Modelling

Traditional forecasting models depend on supercomputers and consume vast amounts of energy. In contrast, AI-based tools such as Google’s GenCast can generate similar or better results with lower computational overhead.

The UK Met Office has begun integrating AI into its operational forecasting, blending traditional physics-based models with machine learning techniques. One notable advance was DeepMind’s “GraphCast” model, which provides the single most likely weather prediction, useful for general planning and understanding the expected conditions.

The improved model, GenCast, was released in 2024 and provides a range of potential outcomes and their likelihoods. It builds on GraphCast by incorporating diffusion models rather than relying solely on Graph Neural Networks (GNNs), and by learning from historical weather data. GenCast can identify and predict extreme weather events with greater accuracy and lead time than traditional methods, making it especially useful for fields such as energy optimisation and emergency preparedness, where it can reduce disruption and potentially save lives. It is also open source.
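To see why a range of outcomes matters, here is a toy sketch (with synthetic numbers, not GenCast output) contrasting a single best-guess forecast with an ensemble that can put a probability on an extreme event:

```python
# Toy illustration of deterministic vs ensemble forecasting. The ensemble is
# synthetic random data; it only demonstrates the idea of attaching a
# probability to an extreme event rather than giving one best-guess value.
import numpy as np

rng = np.random.default_rng(0)

# Pretend ensemble of 50 wind-speed forecasts for the same time and place (m/s).
ensemble = rng.normal(loc=18.0, scale=4.0, size=50)

deterministic = ensemble.mean()            # single best-guess value
p_storm = (ensemble > 25.0).mean()         # probability of exceeding a gale threshold

print(f"Most likely wind speed: {deterministic:.1f} m/s")
print(f"Chance of exceeding 25 m/s: {p_storm:.0%}")
```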

Regulatory Compliance and Environmental Monitoring

Water companies have already been using AI to improve environmental monitoring. Machine learning can sift through emissions data, water quality reports, and satellite images to identify potential violations.

AI-powered sensors have been installed at wild swimming spots to predict bacteria levels with 87% accuracy, helping authorities detect pollution incidents more quickly and issue real-time health alerts. More broadly, AI-powered risk models enable more targeted inspections, dramatically improving detection rates while reducing operational costs. 
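As a hedged sketch of the kind of model such a system might use, and not the project’s actual approach, a simple classifier could be trained on readings like rainfall, water temperature and river flow; all features and data below are hypothetical:

```python
# Hypothetical sketch: a classifier that flags when bacteria levels are likely
# to exceed a safety threshold, trained on synthetic sensor readings.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
n = 500
rainfall = rng.gamma(2.0, 3.0, n)        # mm in the last 24 hours
temperature = rng.normal(14, 3, n)       # water temperature, deg C
flow = rng.normal(5, 1.5, n)             # river flow, m^3/s

X = np.column_stack([rainfall, temperature, flow])
# Synthetic label: heavy rain and warm water tend to mean higher bacteria counts.
y = ((0.3 * rainfall + 0.2 * temperature + rng.normal(0, 1, n)) > 6).astype(int)

model = LogisticRegression().fit(X, y)
risk = model.predict_proba([[25.0, 17.0, 6.0]])[0, 1]   # e.g. after a heavy downpour
print(f"Predicted probability of unsafe bacteria levels: {risk:.0%}")
```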

The AI-Water Quality Monitoring Project is a collaboration between Yorkshire Water, UnifAI Technology, The Rivers Trust and the British Standards Institution (BSI). It won £1.935 million for the development of AI-based models to monitor water quality at 20 inland bathing sites, and the partners have added a further £215,000, taking the project total to £2.15 million. The project was named a winner in the Ofwat Innovation Fund’s fifth Water Breakthrough Challenge in May this year.

Waste Management and Circular Economies

AI is helping to modernise global waste systems. Advanced image recognition and robotics are used to identify and sort recyclables with far greater accuracy than manual methods. This reduces contamination and improves recovery rates.

At Veolia’s Southwark Integrated Waste Management Facility, AI-driven optical sorters and robotic arms handle between 35 and 50 items a minute. Cameras and near-infrared scanners detect materials at high speed, sorting them by type and quality with greater accuracy than manual methods. This has helped increase the facility’s recovery rate and reduce contamination, which improves the resale value of recyclates.

Making AI More Sustainable

There is growing momentum behind efforts to make AI itself more sustainable. Strategies being explored include:

  • Developing task-specific models to reduce dependency on large, general-purpose systems.
  • Innovating in chip design, including neuromorphic and optical processors.
  • Transitioning data centres to 100% renewable energy sources.
  • Scheduling compute tasks to happen at periods of peak renewable energy availability.
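The last of these, carbon-aware scheduling, is straightforward to sketch: given an hourly forecast of grid carbon intensity, pick the cleanest window in which to run a batch job. The forecast values below are made up:

```python
# Minimal sketch of carbon-aware scheduling: choose the contiguous window with
# the lowest average grid carbon intensity (gCO2/kWh) for a batch training job.

def greenest_window(intensity_forecast, job_hours):
    """Return (start_hour, avg_intensity) of the cleanest window of job_hours."""
    best_start, best_avg = 0, float("inf")
    for start in range(len(intensity_forecast) - job_hours + 1):
        avg = sum(intensity_forecast[start:start + job_hours]) / job_hours
        if avg < best_avg:
            best_start, best_avg = start, avg
    return best_start, best_avg

# Hypothetical 24-hour forecast: windy overnight, calmer during the day.
forecast = [210, 180, 150, 120, 110, 105, 130, 170, 220, 260, 280, 290,
            300, 295, 280, 260, 240, 230, 220, 200, 190, 170, 160, 150]
start, avg = greenest_window(forecast, job_hours=4)
print(f"Run the job starting at hour {start} (avg {avg:.0f} gCO2/kWh)")
```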

Task-specific models, especially those tailored to particular domains, might match or exceed performance benchmarks while significantly lowering energy use and emissions. These innovations are not merely nice-to-haves; they’re essential if AI is to scale without placing undue strain on environmental systems.

Interest in sustainable AI has been growing in Google Trends as more people become aware of how resource-intensive AI technology is becoming. As businesses embrace AI, there’s a stronger push to align innovation with sustainability goals. Government initiatives and regulatory discussions have also played a role, with the UK aiming to position itself as a leader in responsible AI development. At the same time, organisations are under pressure to meet ESG commitments, driving interest in greener technologies such as energy-efficient AI chips, low-carbon data centres, and cloud services optimised for sustainability.

The Role of Policy and Transparency

Government policy and corporate transparency will play a central role in this transition. The United Nations Environment Programme (UNEP) has proposed a framework that includes:

  • Standardised metrics for measuring AI’s environmental impact.
  • Mandatory reporting of energy and water usage by AI developers.
  • Inclusion of AI-related emissions in national carbon inventories.

Meanwhile, industry-led efforts are beginning to fill the gap. Platforms like Hugging Face are assigning sustainability scores to open AI models, and organisations such as Epoch.ai have been tracking the progress of AI models, including the power draw required to train each one. These tools equip users and developers with the data needed to make environmentally responsible choices.
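As an illustration of how such power-draw figures can be estimated from public hardware details (all values below are illustrative guesses rather than real measurements):

```python
# Rough, assumption-laden sketch of estimating a training run's energy use
# from hardware details: GPU count, run length, TDP, utilisation and data
# centre overhead (PUE). None of these figures refer to a real model.

def training_energy_mwh(n_gpus: int, hours: float, tdp_kw: float,
                        utilisation: float = 0.7, pue: float = 1.2) -> float:
    """Estimate total facility energy (MWh) for a training run."""
    gpu_energy_kwh = n_gpus * hours * tdp_kw * utilisation
    return gpu_energy_kwh * pue / 1000.0   # add facility overhead, kWh -> MWh

# Hypothetical run: 1,000 GPUs at 0.7 kW TDP for 30 days.
print(f"{training_energy_mwh(1_000, 24 * 30, 0.7):,.0f} MWh")   # ~423 MWh
```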

Conclusion

Essentially, AI embodies both the promise and the peril of modern technology: enormous potential combined with significant sustainability challenges. Like most industries, the sustainability of AI depends on the transition to green energy.

Sustainability must be a metric available to developers considering which models they want to use. This means prioritising efficient models, minimising water use, ensuring ethical sourcing, and embracing transparent reporting and policy oversight.

The future of AI is not set in stone. The decisions made today will determine whether AI serves as a tool for planetary resilience or a contributor to ecological degradation. With collaboration, foresight, and innovation, AI can indeed become a driver of a more sustainable world.


Emily Coombes

Hi! I'm Emily, a content writer at Japeto and an environmental science student.
