Every conversation about AI and the environment eventually lands on the same concern: AI uses a lot of energy. Data centres are growing. Training large models consumes megawatt-hours. Isn’t there a tension between deploying AI and caring about the planet?
It is a reasonable question. It is also the wrong framing. The real question is not whether AI consumes energy. It is whether we are being intelligent about what kind of AI we deploy and what we point it at.
The specialisation argument
The default trajectory of AI adoption is toward large, general-purpose models. The instinct is understandable: these models can do everything, so why not use them for everything? The problem is that generality comes at enormous computational cost, and for most real-world tasks, it is both wasteful and technically inferior.
A large language model with hundreds of billions of parameters will produce a passable answer to almost any question. But if you need to detect erosion features in drone imagery, or classify land cover from satellite data, or predict equipment failure from sensor readings, a specialised model trained for that specific task will outperform the general model while consuming a fraction of the energy. Often orders of magnitude less.
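The size of that gap can be made concrete with a rough back-of-envelope sketch. All figures below are illustrative assumptions for the sake of the arithmetic, not measurements: a large general model is assumed to cost on the order of kilojoules per query, while a small specialised vision model is assumed to classify an image tile in around 10 milliseconds on a 300 W GPU.

```python
# Rough back-of-envelope comparison of per-task inference energy.
# Every figure here is an illustrative assumption, not a measurement.

# Assumed energy per query for a large general-purpose model
# (order-of-magnitude figure for large-LLM inference on datacentre GPUs).
general_joules_per_query = 2_000.0

# Assumed energy for a small specialised vision model: one image tile
# classified in ~10 ms on a ~300 W GPU.
specialist_joules_per_query = 300.0 * 0.010  # power (W) x time (s) = 3 J

ratio = general_joules_per_query / specialist_joules_per_query
print(f"Specialist uses roughly 1/{ratio:.0f} of the energy per task")
```

Under these assumed numbers the specialist comes in hundreds of times cheaper per task; with batching, quantisation, or smaller hardware the gap widens further, which is why "orders of magnitude" is a fair characterisation for repeated, well-defined workloads.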
This is not a niche technical point. It is the central question of responsible AI deployment. The industry’s current direction, throwing ever-larger general models at every problem, is the energetic equivalent of driving a lorry to the corner shop. It works, but it is an absurd use of resources when a bicycle would get you there faster.
For any organisation deploying AI at scale, specialisation should be the default. Use the general model for genuinely general tasks: drafting, ideation, analysis where flexibility matters. For anything that runs repeatedly on a defined problem, build or fine-tune a specialist. You will get better results, lower latency, lower cost, and a fraction of the energy footprint. This is not a trade-off. It is a strict improvement on every axis.
The net-positive arithmetic
The second point is simpler but equally underappreciated. When you direct specialised AI toward environmental applications, the energy arithmetic is not even close. The cost of running the model is trivially small relative to the environmental value of what it enables.
Consider peatland restoration. UK peatland emissions are estimated at over 23 million tonnes of CO2 equivalent per year. Mapping degraded sites to prioritise restoration is essential but historically slow and expensive: human surveyors, manual GIS work, months of effort per site. An AI model running on a single GPU can map erosion features across a site in minutes, at a computational energy cost that is negligible in the context of the emissions it helps address. The counterfactual is not AI versus a human doing the same job. It is AI-enabled versus never done at all, because the human capacity to survey every site that needs attention simply does not exist.
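The arithmetic behind "not even close" can be sketched explicitly. The numbers below are illustrative assumptions, not data from any real deployment: a single ~300 W GPU mapping a site in about half an hour, an assumed grid carbon intensity of 0.2 kg CO2e per kWh, and a hypothetical site whose restoration avoids 1,000 tonnes of CO2e per year.

```python
# Back-of-envelope: compute footprint of AI site-mapping vs. the
# emissions the resulting restoration avoids. All numbers are
# illustrative assumptions, not measurements from a real project.

GPU_POWER_KW = 0.3           # assumed single ~300 W GPU
MAPPING_HOURS = 0.5          # assumed ~30 minutes to map one site
GRID_KG_CO2E_PER_KWH = 0.2   # assumed grid carbon intensity

# Hypothetical benefit: restoring one mid-sized degraded site is
# assumed to avoid ~1,000 t CO2e per year.
AVOIDED_T_CO2E_PER_YEAR = 1_000.0

compute_kg_co2e = GPU_POWER_KW * MAPPING_HOURS * GRID_KG_CO2E_PER_KWH
avoided_kg_co2e = AVOIDED_T_CO2E_PER_YEAR * 1_000.0

print(f"Compute footprint of mapping: {compute_kg_co2e:.3f} kg CO2e")
print(f"Annual emissions avoided:     {avoided_kg_co2e:,.0f} kg CO2e")
print(f"Ratio: roughly {avoided_kg_co2e / compute_kg_co2e:,.0f} to 1")
```

Even if every assumed figure above is off by a factor of a hundred, the ratio remains in the tens of thousands; the conclusion does not depend on the precision of any single input.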
This scales across environmental domains. Monitoring biodiversity across thousands of remote sites. Detecting illegal deforestation in near real time. Optimising energy infrastructure maintenance to reduce downtime and waste. In each case, the question is not whether the AI consumes energy. It is whether the environmental outcome it enables outweighs that consumption. For well-targeted, specialised applications, the answer is overwhelmingly yes.
Deploy with intent
The environmental case for AI is not a blanket endorsement. Running a 400-billion-parameter model to sort your email is a waste. Running a 50-million-parameter specialist to map thousands of hectares of degraded peatland is one of the highest-leverage uses of compute available today.
The distinction matters because the conversation around AI and energy often collapses into a binary: AI is good for the environment, or AI is bad for the environment. Neither framing is useful. AI is a tool. Its environmental impact depends entirely on what kind you deploy and what you deploy it for.
The organisations that will get this right are the ones that resist the pull toward generality by default, build specialised systems where specialisation is warranted, and direct those systems toward problems where the return on energy invested is genuinely transformative. The scalpel, not the sledgehammer. Pointed at the problems that matter.
