Imagine a future in which an algorithm can address food security through crop yield optimization, you can delegate a task like planning an entire weekend away to a bot, and poets can “paint” accompanying illustrations sans paintbrush. This is just some of what might be possible with the advancement of generative AI.
The Latest Frontier
AI has already come a long way since the term was first coined in the 1950s. Within the past couple of years, generative AI—an algorithm (for example, ChatGPT) that learns patterns from data and can be used to create audio, code, images, text, simulations, and videos—has emerged as a promising tool for work and personal use.
Today, generative AI can be used to run customer service chatbots, assist with repetitive tasks like data entry, tweak a document’s tone, generate a sick note to send your boss, code software, and assist in diagnosis in the areas of radiology and medical imaging, tuberculosis, and oncology.
But there is a cost to this production—and it’s more than just monetary.
There is a growing body of evidence that AI has an outsized impact on our energy infrastructure and on efforts to mitigate climate change, through its energy use, water consumption, and carbon emissions.
“It starts from the very beginning of the supply chain lifecycle,” says Shaolei Ren, a researcher at the University of California, Riverside, with a focus on responsible AI for building a resilient, sustainable, and equitable future. “The manufacturing stage is having a lot more environmental impact compared to the usage stage.”
The High Costs of Creation
To develop a model like GPT-3 (which powers ChatGPT), Meta’s Llama models, or BERT (all of which are large language models, a subset of generative AI), computers need to analyze patterns from a massive body of human-created text.
In just one recent example involving a large company, cited by the World Health Organization, this analysis process took an estimated 3.4 GWh over two months. That’s on par with the yearly energy consumption of about 300 American households.
And it’s estimated that creating GPT-3 generated 552 tons of carbon dioxide, equivalent to the annual emissions of 123 gasoline-powered passenger vehicles.
Similarly, a massive amount of water is needed to dissipate the heat generated by the computing it takes to perform this analysis.
[callout]
AI’s Water-Guzzling Potential
It’s projected that, by 2027, global AI demand may account for 4.2 to 6.6 billion cubic meters of water withdrawal. That’s more than half the total annual water withdrawal of the United Kingdom.
[/callout]
User-Generated Expenditures
Each time we use an AI tool, we expend even more energy and water. In a recent paper, Ren and his colleagues estimated that GPT-3 consumes the equivalent of a 500 mL bottle of water for every 10 to 50 responses it generates, and the newer GPT-4 may guzzle even more.
Generative AI’s water consumption is unlike the water withdrawal that occurs when we, for instance, take a shower, says Ren. Shower water is discharged back into the sewage system, where it can be treated and returned to the supply, whereas consumed water evaporates into the atmosphere and is lost to the local watershed. The result can be water inequities between regions.
“The environmental impact of AI is really localized,” says Ren. For instance, just one company’s data center in an Oregon city used more than 25 percent of all the water consumed in that city.
Carbon emissions can be a similarly localized problem, says Ren. This problem is exacerbated by the fact that, while some data centers run on renewable energy, most of the electricity powering AI still comes from fossil fuels.
A Future We Can Live With?
As data centers proliferate with more and newer iterations of AI, future projections indicate this energy burden will only increase.
Companies clearly have much to do to develop this technology responsibly. But given the many complex social issues involved, we may also need to factor AI’s environmental impact into our perspective on its future.
Here are some questions you might want to consider:
Given the energy needed to continue developing AI, will our progress be limited by resource scarcity, or will we reach a breaking point in other areas?
If we divert renewable energy efforts to prioritize AI, its companies, and its data servers, are we comfortable using more fossil fuels elsewhere?
Should we limit the use of AI to essential areas, restricting personal queries to preserve its usefulness for research, data analysis, and health care?
How transparent should companies be regarding their AI carbon footprint, water usage, and other environmental impacts, and should there be legislation to enforce disclosure?
[callout]
Ways Forward
Shaolei Ren believes AI has great potential to make a positive impact. “But in order to make that happen earlier, we should make AI itself more sustainable,” says Ren.
For starters, companies need to be more transparent about the energy costs associated with generative AI.
We know the water footprint of every agricultural product—from wheat to beef—explains Ren, but the cost per usage of a generative AI tool is often left undisclosed. If people knew how much energy it costs to, for example, tinker with image-generation AI tools, they might think twice about what they’re doing. Societal pressure may motivate companies to disclose these numbers.
Other potential solutions include:
Engaging in geographical load balancing, where a network of data centers in different locations shares the energy load.
Developing new technologies, including in the areas of data center power and cooling infrastructures.
Utilizing more renewable energy by scheduling computation for times of day when renewable energy is more available.
Enabling users to opt out of particular features: for instance, Google decides when to produce generative AI overviews for Google searches, a feature you currently can’t turn off.
[/callout]
This article was originally published in the November 2024 issue of alive magazine.