Academics say a single ChatGPT response can drink half a liter of water. OpenAI's CEO claims it's a teardrop.
While the industry measures consumption in teaspoons, local communities in global drought zones are blocking $64 billion in AI data centers.

The tech industry has spent two decades convincing the public that the internet lives in a weightless, ethereal dimension called "the cloud." This branding exercise successfully abstracted away the heavy industrial reality of server farms into a vague, meteorological concept. When users interact with a digital service, the physical mechanism executing that request remains entirely out of sight, buried under layers of glossy user interfaces. But the cloud is not a vapor; it is a physical, heavy infrastructure made of concrete, steel, lithium, and cold water. As generative AI models scale from parlor tricks to enterprise dependencies, the sheer physics of computing is crashing headlong into municipal resource limits.
The environmental impact of generative AI is driven predominantly by data center infrastructure and localized cooling demands rather than individual user queries, and that concentration of demand is empowering communities to weaponize water rights and halt multibillion-dollar tech expansions. While executives debate the marginal liquid cost of a single prompt, local zoning boards and drought-stricken municipalities are quietly becoming the most effective regulatory bodies constraining artificial intelligence. The physical footprint of these computational behemoths is dragging the tech industry out of the abstract realm of software and into the messy, highly litigious world of local resource management.
For years, the expansion of tech infrastructure faced little resistance beyond minor zoning disputes, with municipalities welcoming data centers via tax incentives. But the transition to AI-specific hardware has completely altered the resource equation. These new facilities are active, energy-intensive factories executing billions of continuous calculations, creating fierce local backlash over physical municipal resources. The most visible casualties of this conflict are not algorithms, but architectural blueprints. According to independent analysis, a massive $64 billion in data center projects has been blocked globally since 2023 due to unsustainable resource practices.
The Santiago Interruption: Drought Zones vs Big Tech

One of the most prominent examples of this new infrastructural limit unfolded in drought-stricken Santiago, Chile. The region has been suffering under a severe mega-drought for more than a decade, a slow-motion ecological crisis that has depleted local reservoirs and forced residents to fundamentally alter their water usage. Against this backdrop, multinational tech companies sought to expand their Latin American infrastructure. In September 2024, Google was forced to pause a $200 million data center project in Santiago.
The company did not halt construction because of data privacy laws or intellectual property disputes. It was stopped because a local environmental court issued a partial reversal of its authorization. The original proposal for the Cerrillos data center involved a cooling system that would have drawn heavily from the local aquifer. The community, already rationing water, recognized that the facility's demand would directly compete with human consumption and local agriculture. The ruling required Google to fundamentally revise its water usage to address these deep environmental concerns, particularly regarding the potential impact on the capital city's already fragile water table.
This legal block in Santiago provided a stark reality check for the global expansion of artificial intelligence. It proved that local protests and a court ruling possess the hard leverage that federal tech regulators often lack. While national governments struggle to draft comprehensive AI legislation, focusing largely on abstract risks like bias or existential threat, local courts are dealing with immediate, physical facts. You cannot negotiate with an empty aquifer.
The $64 billion figure represents a material shift in tech infrastructure. It indicates that the primary bottleneck for AI scaling is no longer strictly silicon availability, but municipal zoning and hydrological realities.
Google ultimately announced it would redesign the facility to use air-cooling instead of water-cooling. While this maneuver satisfies the immediate demands of the local court, it merely shifts the burden from the water grid to the electrical grid. Air-cooled data centers require significantly more power to operate their massive HVAC systems, which in turn drives up indirect water consumption at the power plants generating that electricity. The Santiago incident did not solve the resource problem; it merely relocated the stress point, demonstrating that AI infrastructure demands a heavy toll regardless of the specific cooling mechanism employed.
The European Front: Microsoft in the Netherlands
A similar dynamic played out in Europe, demonstrating that these conflicts are not limited to arid climates. In May 2021, revelations that sprawling Microsoft data centers in the municipality of Hollands Kroon could precipitate a local drinking water shortage sparked intense community backlash. The Netherlands, a nation famous for engineering its way out of excess water, suddenly found itself facing potential drinking water shortages driven by server cooling.
The facilities in the Wieringermeer polder were initially touted as a boon to the local economy. But as the true scale of their water consumption became public, the mood shifted. During peak summer heat events, the data centers required millions of liters of high-quality drinking water to maintain operational temperatures. The geographic irony of a nation engineered below sea level facing water scarcity due to tech infrastructure was not lost on local residents or the agricultural sector. Farmers, facing their own water restrictions during dry spells, watched as municipal supplies were diverted to cool server racks processing overseas data.
Microsoft, attempting to quell the anger, pledged to use alternative cooling methods, including capturing rainwater. The company invested in facilities designed to collect precipitation from the massive roofs of the data centers. Yet, the friction remained. Rainwater capture is inherently seasonal and unpredictable, making it a poor fit for the continuous, uninterrupted baseload requirements of an industrial data center. When the rain stops, the servers must still be cooled, and the municipal tap must be turned back on.
By August 2025, activists had escalated their tactics, with protest groups physically occupying the roof of a Dutch Microsoft data center. Their demonstration unified two distinct grievances: the unsustainable localized water extraction required to cool the servers, and the facility's alleged storage of data for the Israeli military. The occupation highlighted how water infrastructure has become a highly visible vulnerability for tech companies, representing a fundamental shift in how tech expansion is governed. The abstract promises of artificial general intelligence hold little weight against a municipality staring down a dry reservoir or an overtaxed water utility.
The 519ml Discrepancy: Measuring the Void
At the core of this localized resistance is a fierce debate over precisely how much water AI actually consumes. The industry and its critics are essentially speaking two different languages when evaluating the cost of a single prompt. To understand the discrepancy, we must establish a consistent definition of the metric in question.
Water Footprint = The total volume of freshwater used directly and indirectly to run AI models, including evaporation in data center cooling towers and water consumed during electricity generation.
The variance in how this footprint is calculated has resulted in wildly divergent claims. According to a heavily cited 2023 study by researchers at UC Riverside, each 100-word ChatGPT response requires approximately 519 milliliters of water—roughly the volume of a standard plastic water bottle. This metric accounts for the full systemic cost of the model, tracking the water evaporated at the facility and the water boiled off at the power plants supplying the electricity.
Conversely, the industry paints a vastly different picture. Sam Altman claimed in a June 2025 blog post that a typical ChatGPT query uses roughly one-fifteenth of a teaspoon of water, which equates to about 0.3ml. This figure represents a bold attempt to minimize the perceived cost of the technology.
How can a scientific study and a chief executive arrive at numbers separated by a factor of over 1,700? The answer lies in the scope of measurement and the mechanics of user behavior. Altman's figure plausibly isolates the marginal, direct cooling cost of a single, highly optimized, short-context API call run on next-generation hardware. It treats the query as a standalone event in a vacuum, ignoring the indirect water cost of electricity generation entirely.
The academic researchers, however, measure the systemic reality of the platform. Generative AI is not a slot machine where you pull a lever once; it is an iterative workflow. Users do not simply ask a single question and close the application. They engage in prolonged conversations, tweaking parameters and refining requests.
As technology writer Whitney A. Foster documented, the usage compounds rapidly in real-world scenarios. "Every time you regenerate a response because the first one wasn’t right, you’re spinning up servers. Every time you paste in your 5,000-word chapter for feedback, you’re using more resources than a simple query." The 519ml figure accounts for the heavy lifting: long context windows, continuous system availability, system idling, and the massive indirect water cost required to generate the gigawatts of electricity powering the grid. When you measure the entire industrial pipeline rather than a single frictionless mathematical operation, the teardrop becomes a torrent.
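A minimal sketch of why the two accounting scopes diverge appears below. Every parameter value in it (per-query energy, facility overhead, on-site water-use effectiveness, and power-plant water intensity) is an illustrative assumption, not a published figure.

```python
# Illustrative sketch of the two accounting scopes behind the teaspoon-versus-bottle gap.
# All parameter values are assumptions chosen for illustration, not measured data.

def direct_cooling_only(queries: int, ml_per_query: float = 0.3) -> float:
    """Marginal accounting: only the on-site cooling water of one optimized call."""
    return queries * ml_per_query

def systemic_footprint(
    queries_per_session: int = 10,       # regenerations, follow-ups, long pasted contexts
    wh_per_query: float = 7.0,           # assumed server energy per query (Wh)
    pue: float = 1.3,                    # facility overhead multiplier
    onsite_wue_l_per_kwh: float = 1.8,   # litres evaporated on-site per kWh of IT load
    offsite_ewif_l_per_kwh: float = 3.0, # litres consumed at power plants per kWh generated
) -> float:
    """Systemic accounting: on-site evaporation plus the water embedded in electricity."""
    it_kwh = queries_per_session * wh_per_query / 1000
    onsite_litres = it_kwh * onsite_wue_l_per_kwh
    offsite_litres = it_kwh * pue * offsite_ewif_l_per_kwh
    return (onsite_litres + offsite_litres) * 1000  # millilitres per session

print(f"Marginal accounting: {direct_cooling_only(1):.1f} ml per query")
print(f"Systemic accounting: {systemic_footprint():.0f} ml per working session")
```

Under these assumed values, the same user interaction lands near a third of a milliliter in one ledger and at hundreds of milliliters in the other; the gap is a question of scope, not arithmetic error.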
The Thermodynamics of the Cloud
To grasp why water has become the ultimate hard limit on AI expansion, one must look at the thermodynamics of modern computing. Servers are, fundamentally, highly sophisticated space heaters that happen to perform mathematical operations as a byproduct of converting electricity into heat. The laws of physics dictate that this heat must be continuously removed from the silicon; otherwise the hardware overheats and shuts down within seconds.
The industry measures the efficiency of this heat removal using a specific metric.
PUE (Power Usage Effectiveness) = The ratio of a data center's total energy consumption to the energy delivered to its computing equipment; everything above 1.0 represents cooling, power conversion, and other overhead.
A PUE of 1.5 means that for every watt delivered to the computing hardware, an additional half watt is consumed by cooling and other overhead. Historically, traditional web hosting data centers relied heavily on air cooling to maintain operational temperatures by blowing chilled air across server racks. The heat absorbed by the air was transferred to water loops, pumped to cooling towers on the roof, and exposed to the ambient environment. Powerful fans drew outside air across the hot water, causing evaporation that stripped the latent heat of vaporization from the remaining water. This evaporative process permanently consumes millions of gallons of municipal water.
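As a rough illustration of how the ratio works, consider the sketch below; the energy figures are invented for the example and do not describe any real facility.

```python
# PUE = total facility energy / energy delivered to IT equipment.
# The numbers below are made-up illustrative values, not measurements.

def pue(total_facility_kwh: float, it_equipment_kwh: float) -> float:
    """Power Usage Effectiveness: 1.0 is the theoretical ideal (zero overhead)."""
    return total_facility_kwh / it_equipment_kwh

it_load_kwh = 10_000              # energy consumed by the servers themselves
cooling_and_overhead_kwh = 5_000  # chillers, fans, pumps, power conversion losses

ratio = pue(it_load_kwh + cooling_and_overhead_kwh, it_load_kwh)
print(f"PUE = {ratio:.2f}")                      # 1.50
print(f"Overhead per IT kWh = {ratio - 1:.2f}")  # 0.50 kWh of overhead per kWh of compute
```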
As the industry transitioned from standard CPUs to the highly power-dense GPUs required for AI (such as Nvidia's H100 arrays), traditional air cooling hit its thermodynamic ceiling. You simply cannot blow enough cold air fast enough to keep a rack of dense AI chips from overheating. The density of the heat generated by these specialized processors requires a more conductive medium than air.
This physical requirement forced a transition toward liquid cooling, where chilled fluid is pumped directly to cold plates mounted on the chips themselves. Liquid is significantly more efficient at transferring heat than air. A Microsoft study published in Nature found that switching from traditional air cooling to direct-to-chip liquid cooling reduces water usage by 31-52%.
While this represents a massive percentage gain in efficiency, the absolute scale of AI infrastructure deployment negates much of this progress. A highly efficient liquid-cooled data center still requires vast quantities of local water to operate at the scale demanded by global tech giants. The efficiency gains per chip are outpaced by the sheer volume of new chips being deployed, keeping tech companies fundamentally reliant on local watersheds.
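A toy calculation makes the point concrete; the facility counts and water volumes below are hypothetical, and the 40% saving is simply a value inside the 31-52% range cited above.

```python
# Hypothetical illustration: a 40% per-facility water saving is swamped when
# deployed capacity triples. None of these numbers describe real facilities.

baseline_facilities = 10
water_per_facility = 1_000_000           # arbitrary units, e.g. cubic metres per year

new_facilities = 30                      # capacity tripled to meet AI demand
efficient_water_per_facility = water_per_facility * (1 - 0.40)

baseline_total = baseline_facilities * water_per_facility
expanded_total = new_facilities * efficient_water_per_facility

print(f"Baseline total consumption:  {baseline_total:,.0f}")
print(f"After efficiency + buildout: {expanded_total:,.0f}")  # 80% higher despite the saving
```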
Defenders of Scale: The Optimization Argument
When faced with environmental criticism, the technology sector has frequently sought to reframe the narrative around individual user metrics rather than corporate industrial scale. This is a familiar public relations tactic, mirroring the oil industry's historical emphasis on personal carbon footprints. By shifting the focus to the individual, the industry attempts to dilute the impact of its aggregate infrastructure.
Defenders of current AI expansion trajectories argue that individual AI usage has a negligible environmental footprint, especially when weighed against the potential benefits of the technology. They point out, correctly, that an isolated query allegedly uses roughly one-fifteenth of a teaspoon of water under optimal conditions. Furthermore, they note that an entire year of heavy usage emits less carbon than a short domestic flight.
According to available data, a user generating 50 queries a day for an entire year is responsible for roughly 125 kWh of electricity and 62.5 kg CO₂. This undeniably amounts to less carbon than a round-trip flight from New York to Chicago. From this vantage point, criticizing a high school student for using ChatGPT to outline an essay seems disproportionate and mathematically misdirected.
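The arithmetic implied by that estimate is easy to reproduce; the per-query energy and grid carbon intensity below are back-calculated from the figures in this paragraph rather than independently sourced.

```python
# Back-calculating the per-user annual estimate cited above.
# The implied per-query energy and grid intensity follow from those figures alone.

queries_per_day = 50
days_per_year = 365
annual_kwh = 125
annual_co2_kg = 62.5

total_queries = queries_per_day * days_per_year    # 18,250 queries per year
wh_per_query = annual_kwh * 1000 / total_queries   # ~6.8 Wh per query
kg_co2_per_kwh = annual_co2_kg / annual_kwh        # 0.5 kg CO2 per kWh

print(f"{total_queries:,} queries/year at {wh_per_query:.1f} Wh each")
print(f"Implied grid intensity: {kg_co2_per_kwh:.2f} kg CO2 per kWh")
# A round-trip New York-Chicago flight is commonly estimated at a few hundred kg of CO2
# per passenger, so 62.5 kg does sit well below a single short-haul trip.
```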
Furthermore, proponents of rapid AI scaling argue that the technology will ultimately solve the very climate crises it is currently exacerbating. They claim that advanced AI models will optimize power grids, discover new battery chemistries, and design vastly more efficient logistics networks. In this framing, the massive water and power consumption of today is framed as a necessary, temporary investment to achieve a sustainable future tomorrow. The argument essentially asks municipalities to endure short-term resource depletion for the promise of long-term ecological salvation.
While individual carbon footprints may seem low compared to air travel, independent academic estimates place the water cost of a single 100-word response at roughly 519ml rather than a fraction of a teaspoon. More importantly, framing the issue around individual user guilt masks the reality: the aggregate scale of these queries necessitates massive, concentrated infrastructure that disproportionately drains local municipal water supplies in vulnerable, drought-prone areas.
Carbon emissions mix globally in the atmosphere, making a ton of CO₂ emitted in New York roughly equivalent to a ton emitted in Tokyo. Water, however, is strictly local. Draining two million gallons of water a day from a drought-stricken reservoir in Chile creates an immediate, acute crisis for the people living there, regardless of how efficient a single user's query might be mathematically.
The debate is not whether a single person using an app is committing an environmental crime; the debate is whether a municipality can physically afford to host the sprawling concrete facility required to process a billion of those optimized queries simultaneously. The strain is not individual; it is infrastructural. The speculative promise that AI might eventually optimize a power grid does not replenish an empty aquifer today.
Water Positive Pledges and Geographic Loopholes
Recognizing that zoning boards are increasingly hostile to evaporative cooling towers, the tech industry has mounted a broad corporate sustainability campaign. The primary defensive shield against regulatory action is the pledge to become "water positive." Companies like Microsoft, Google, and Meta have all issued variations of "water positive by 2030" commitments.
In public relations terms, being "water positive" sounds as though the data centers are somehow generating fresh water for the community. The terminology implies a regenerative process, suggesting the facility is an ecological asset rather than a liability. In accounting terms, the reality is much more mundane. Becoming water positive typically involves funding ecological restoration projects, purchasing water credits, or repairing leaky municipal pipes to offset the volume of water the data center evaporates.
The loophole in these pledges is entirely geographic. A corporation can legally claim water positivity by operating a massive data center in a drought zone in the American Southwest, evaporating millions of gallons from a dwindling aquifer, while simultaneously funding a wetlands restoration project in a completely different, water-rich state. The corporate ledger balances on a global spreadsheet, allowing executives to present a sustainable image to shareholders. But the local watershed near the data center remains permanently depleted.
This disconnect between global accounting and local reality is why these pledges often fail to placate angry residents. In the Netherlands, Microsoft's promise to use captured rainwater for cooling was an attempt to localize its sustainability efforts in response to direct municipal pressure. But rainwater capture is highly variable and difficult to scale for continuous industrial baseloads. The broader industry commitments lack immediate enforcement mechanisms for current, ongoing construction in fragile ecosystems. You cannot cool an overheating Nvidia GPU with a water credit slated for purchase in 2029, and a restored wetland in another country provides no relief to a local farmer whose well has run dry.
Physical Limits to the Cloud
The trajectory of generative AI development has operated under the assumption of infinite scale: more data, more parameters, more compute, yielding better models. But the events of the past two years demonstrate that this software-first mindset has collided with the rigid limits of the physical world.
The thesis holds up under scrutiny: the environmental impact of generative AI is driven predominantly by data center infrastructure and localized cooling demands rather than individual user queries, and that concentration of demand has empowered communities to weaponize water rights and halt multibillion-dollar tech expansions. Whether a CEO claims the system uses 0.3ml of water per prompt or an academic logs 519 milliliters per output, the localized reality on the ground remains unchanged. The dispute over exact measurements obscures the undeniable physical footprint of the industry.
The $64 billion in data center projects blocked since 2023 serves as a stark metric. It proves that local governance, armed with environmental regulations and water rights, possesses the veto power that federal tech antitrust regulators are still struggling to articulate. When a multinational tech conglomerate is forced to pause a $200 million data center project in Santiago because of a local environmental court ruling, it becomes evident that the "cloud" has strict geographic and hydrological borders.
Tech executives will continue to minimize per-query usage metrics, focusing the discourse on individual efficiency rather than industrial scale. But algorithms cannot rewrite the laws of thermodynamics. Data centers will increasingly be forced out of drought-prone areas by local legislation, regardless of corporate sustainability pledges. The ultimate ceiling on artificial intelligence will not be dictated by a lack of training data, but by the physical limits of municipal plumbing.