How Green is AI?

Playing Games

My father taught me how to play chess when I was nine years old or thereabouts. He played his fair share of games while studying engineering. He told me that a few of his classmates quit their studies thanks to their chess obsession, an example I should not follow. Not a smart move, he added. A blunder, in chess speak. We owned what I thought was a beautiful wooden chess set with seemingly giant pieces. Rooks and bishops were my favorites, while understanding why the King was the weakest piece of all, despite historical evidence and abundant fiction, was challenging. A few years later, I studied chess theory to improve my game. My dad and one of my uncles finally stopped taking turns beating me.

Playing chess well requires excellent memory and sophisticated calculation skills. Indeed, one has to be able to remember well-known openings and their innumerable variations and combine that with accurate on-the-board move calculations to have a chance to win a game. Thus, it is unsurprising that computing and chess have had a long-term romantic and intimate relationship. After all, modern computers have vast memories and can perform speed-of-light calculations. In fact, computers are way ahead of humans on both counts. That is why the most sophisticated chess programs can beat anyone, World Champions included.

Unsurprisingly, AI also embraced chess (and other strategic board games, too!) to demonstrate its overwhelming power. In 2017, AlphaZero saw the light of day and quickly became the best chess player in the world, sometimes using strategies humans had never used or seen. Unlike most other AI platforms, AlphaZero did not rely on external (big) data. Instead, it learned chess by playing games against itself via sophisticated reinforcement learning algorithms running on state-of-the-art platforms and using specialized computer chips called Tensor Processing Units (TPUs). I cannot, therefore, run AlphaZero on my laptop or desktop. In any case, TPUs were not only faster than other microchips for this kind of workload but also proved to be more energy efficient. Scope 2 emissions could thus be reduced in principle, as long as the rebound effect does not show its face.
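For readers curious about what "learning by playing against itself" looks like in practice, here is a minimal, purely illustrative sketch: a tabular agent teaching itself the toy game of Nim through self-play, with no external data. It is a world away from AlphaZero's deep networks, Monte Carlo tree search and TPU fleets, and every name and number in it is made up for illustration.

```python
# A minimal self-play sketch (not AlphaZero's actual algorithm): a tabular agent
# teaches itself Nim (take 1-3 stones; whoever takes the last stone wins) purely
# by playing against itself, with no external game data.
import random
from collections import defaultdict

ACTIONS = (1, 2, 3)          # stones a player may remove per turn
START_STONES = 10            # toy "board" size
EPISODES = 20_000            # number of self-play games
ALPHA, EPSILON = 0.1, 0.1    # learning rate and exploration rate

Q = defaultdict(float)       # Q[(stones_left, action)] -> estimated value

def choose(stones, explore=True):
    """Epsilon-greedy move selection over the legal actions."""
    legal = [a for a in ACTIONS if a <= stones]
    if explore and random.random() < EPSILON:
        return random.choice(legal)
    return max(legal, key=lambda a: Q[(stones, a)])

for _ in range(EPISODES):
    stones = START_STONES
    history = []                         # (state, action) for both "players"
    while stones > 0:
        action = choose(stones)
        history.append((stones, action))
        stones -= action
    # The player who made the last move wins; the sign of the reward alternates
    # as we walk back through the game, since the two sides take turns.
    reward = 1.0
    for state, action in reversed(history):
        Q[(state, action)] += ALPHA * (reward - Q[(state, action)])
        reward = -reward

# After enough self-play, the greedy policy tends toward the classic strategy
# of leaving the opponent a multiple of 4 stones.
for stones in range(1, START_STONES + 1):
    print(stones, "->", choose(stones, explore=False))
```

Even this toy version makes the energy question visible: all the learning happens in the thousands of self-play games, not in the final, cheap-to-run policy.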

The Unbearable Slipperiness of AI

The above example shows AI’s complexities, starting with its name. Indeed, agreement on a precise definition of AI is still in the works and will probably not be reached any time soon. Not that AI cannot function amid such an appalling lack of consensus. A good starting point to rock the AI boat is Crawford’s suggestion that AI is neither artificial nor intelligent – although I would argue that, if we rely on dictionary definitions, the artificial part does work, as the technology is created by humans to mimic human behavior. Turing’s famous imitation game immediately pops up here. Artificial Behavior could thus be a more accurate term, as not all human behavior is, by default, “intelligent” or “rational.” Crawford’s brilliant book takes a macro approach and connects the almost innumerable AI dots. What is missing is an analysis of the actual AI production process, which is perhaps outside the scope of her project.

Since we are interested in AI’s GHG and CO2 emissions here, Crawford’s approach provides fertile ground to deploy the emission scopes developed by the GHG Protocol. The same goes for the ICT effects (direct, indirect and rebound). What about the ITU’s ICT emissions life cycle scheme I described in a previous post? In that framework, the ICT sector is divided into four areas: 1. end-user goods; 2. network goods; 3. data centers; and 4. services. Where shall we place AI? Needless to say, AI can be effectively deployed in any of those four rubrics. And all four types are, in turn, inputs to AI production. We thus see a busy two-way highway with incessant traffic in each direction and lots of distracting noise generated in the process. Digging deeper into the ICT services area, we find that data processing, computer programming, and software development and publishing are core components. These fit perfectly into the overall AI production process.

Yet they are insufficient to pinpoint AI’s uniqueness. While AI does involve software development and programming, not all of the latter translates into AI. Most computer programming and software development targets increased automation and enhanced efficiency, but AI’s goals go well beyond that. The core idea is to have a computational agent that can suck in (digitized) inputs from a given environment, do its artificial magic and return an output that impacts that environment while resembling human behavior. Being an agent implies the entity can act independently and does not need our help to generate results. If we agree that such computational agents do not think, then the issue is whether they are “rational” or not. The most commonly accepted AI definition endorses the rational approach, which is obviously subject to debate, though that goes beyond our purposes here.
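To make the agent idea a bit more concrete, here is a minimal sketch of that sense-decide-act loop. The thermostat-style setup is entirely hypothetical and deliberately dumb; it only illustrates the structure of a computational agent acting on an environment without our help, not any particular AI system.

```python
# A minimal sketch of the agent-environment loop: the agent takes digitized
# inputs (percepts) from an environment, decides on an action on its own, and
# its output changes that environment. The "room and thermostat" setup is
# purely illustrative.

class RoomEnvironment:
    """A toy environment: a room whose temperature drifts and can be heated."""
    def __init__(self, temperature=15.0):
        self.temperature = temperature

    def percept(self):
        return self.temperature          # what the agent "senses"

    def apply(self, action):
        if action == "heat_on":
            self.temperature += 1.0      # the agent's output impacts the environment
        self.temperature -= 0.3          # natural heat loss


class ThermostatAgent:
    """A very simple computational agent acting toward a goal, without thinking."""
    def __init__(self, target=20.0):
        self.target = target

    def act(self, percept):
        return "heat_on" if percept < self.target else "heat_off"


env, agent = RoomEnvironment(), ThermostatAgent()
for step in range(10):
    action = agent.act(env.percept())    # sense -> decide
    env.apply(action)                    # act -> environment changes
    print(f"step {step}: {action:8s} temperature={env.temperature:.1f}")
```

Whether such an agent deserves the label “intelligent” or merely “rational” is precisely the definitional debate mentioned above.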

Producing Agents, Intelligent or Not

Looking closely at the most successful AI area, Machine Learning (ML), and simplifying a bit, we can distinguish three separate production phases. First is the problem definition and data management phase. Data plays a central role here as we must cover all bases, if possible. Second is the training phase, where we use a specific AI algorithm to teach the agent how to operate within the environment we have previously defined. And third is the actual deployment of the agent in the real world, which might also include monitoring to ensure any emerging issues can be addressed via further training and refining using new data.
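Schematically, and assuming a scikit-learn-style workflow with a synthetic dataset as a stand-in for the real thing, the three phases might look like the sketch below. Real pipelines are, of course, far heavier on data and compute.

```python
# A schematic view of the three ML production phases, using a tiny synthetic
# dataset and scikit-learn purely as placeholders.

from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Phase 1: problem definition and data management. Here the "problem" is a toy
# binary classification task and the data is synthetic; in real projects this
# phase means collecting, cleaning and labeling data to cover all bases.
X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# Phase 2: training (and testing) the agent on the environment defined above.
# This is where most of the compute, and hence most of the energy, is spent.
model = LogisticRegression(max_iter=1_000)
model.fit(X_train, y_train)
print("held-out accuracy:", model.score(X_test, y_test))

# Phase 3: deployment and monitoring. In production this would sit behind an
# API, with new data fed back into phases 1 and 2 for further refinement.
def predict(new_observation):
    return model.predict([new_observation])[0]

print("prediction for one new case:", predict(X_test[0]))
```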

In the case of AlphaZero, we can easily see that while phase 1 was relatively simple (as we did not have to feed the agent any data from chess games), phase 2 was the most relevant, as it demanded millions of self-play runs to achieve optimal (not rational!) chess behavior. Phase 3, on the other hand, has yet to be completed, as the agent has not been deployed widely. Contrast that with autonomous vehicles. In this case, phase 1 demands billions of data points (pixels included) to ensure the agent can see the road and all the obstacles and signals around it. Phase 2 is also quite intensive, as we must be almost 100 percent sure the agent does not miss a pedestrian or a dog crossing the road, for example. And phase 3 is more pervasive, as such vehicles are about to go mainstream. We find a similar situation in natural language processing (NLP), now grabbing many headlines thanks to the newly deployed ChatGPT.

Emissions-wise, production phases 1 and 2 are the most demanding in energy terms, given the computational resources needed for their completion. We must distinguish here between intensive and extensive use of such resources. One computational agent might require hundreds of thousands of training and testing runs, while another can be trained with just a few. What really matters here is the amount of energy used per run. For example, the latest incarnation of GPT models seemingly consumes vast amounts of energy per run. Researchers have recently estimated that one GPT-3 training run could generate almost 225,000 kilograms of CO2e, equivalent to what 49 cars emit in one year. Not too shabby.
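The cited figure can be cross-checked with back-of-the-envelope arithmetic. The grid carbon intensity and per-car numbers below are assumptions chosen for illustration (roughly an average grid mix and a typical passenger car emitting around 4.6 tonnes of CO2 per year), not the methodology behind the estimate itself.

```python
# A back-of-the-envelope check of the figures above. The grid intensity and
# per-car values are assumptions for illustration only.

TRAINING_EMISSIONS_KG = 225_000          # cited estimate for one GPT-3 training run
GRID_INTENSITY_KG_PER_KWH = 0.4          # assumed average grid carbon intensity
CAR_KG_PER_YEAR = 4_600                  # assumed annual emissions of a typical car

# Energy that would produce those emissions on the assumed grid:
implied_energy_mwh = TRAINING_EMISSIONS_KG / GRID_INTENSITY_KG_PER_KWH / 1_000
print(f"implied energy use: ~{implied_energy_mwh:,.0f} MWh")

# Car-year equivalent, matching the "49 cars" comparison:
car_years = TRAINING_EMISSIONS_KG / CAR_KG_PER_YEAR
print(f"equivalent to ~{car_years:.0f} cars driven for a year")
```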

AI Emissions

Following the ITU’s life cycle approach to tracking AI emissions quickly leads to a dead end. While the framework suggests a clear separation among the four areas, the actual estimations for ICT services are limited to the number of employees in the sector. All other emissions are assigned to the other three areas, network goods and data centers perhaps being the most relevant to AI production. It is more helpful to turn to the various scopes defined by the GHG Protocol. Intuitively, we can quickly sense that scope 2 emissions (energy consumption) comprise a fair share of AI’s total, especially in the case of GPT-3. Remember that we are dealing with an ML model that handles 175 billion parameters. And if the latter offers a glimpse of upcoming AGI (artificial general intelligence), we do not need a computational agent to predict what will happen in the near future.

AI scope 1 (direct) emissions can be traced to data centers (storage, etc.) and network goods (microchips, servers, etc.). For leading companies such as OpenAI and DeepMind, supported by Microsoft and Google, respectively, we can assume that they use the hardware and data facilities owned by their parent companies. In that case, AI emissions will appear as part of those companies’ accounts and will thus fall into scope 3. The point here is that precisely quantifying AI-related emissions might be a considerable challenge, especially when these companies are not in the habit of publicly releasing such information. Scope 3 emissions must therefore also include part of Crawford’s Atlas, particularly the landmarks related to production.
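To see how the bookkeeping plays out, here is a hypothetical tally from the point of view of an AI lab renting its parent company’s infrastructure. Every source and number is invented purely to illustrate how the GHG Protocol scopes partition the same production chain, and how easily the bulk of it ends up in scope 3.

```python
# A hypothetical bookkeeping sketch. All sources and figures are invented for
# illustration; they are not real data for any company.

emission_sources = [
    # (source, scope, tonnes of CO2e) -- illustrative values only
    ("on-site backup generators",              1, 5),
    ("purchased electricity for offices",      2, 120),
    ("cloud compute rented from parent firm",  3, 480),
    ("chip and server manufacturing",          3, 300),
    ("staff travel and commuting",             3, 60),
]

totals = {1: 0, 2: 0, 3: 0}
for source, scope, tonnes in emission_sources:
    totals[scope] += tonnes

for scope, tonnes in sorted(totals.items()):
    print(f"scope {scope}: {tonnes:5d} tCO2e")
print(f"total  : {sum(totals.values()):5d} tCO2e")
```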

In terms of ICT effects, modern AI seems to follow the same path as previous digital technologies. AI’s indirect effects have been overemphasized, thanks in part to its very nature. One technology is now expected not only to boost efficiency at all levels but also to enter the knowledge production arena, a feat never accomplished before. Indeed, our new computational agents are starting to invade a space long thought immune to such encroachment. We are then told that AI could address issues humans have been incapable of solving, including, for example, more sophisticated climate models and more efficient energy management systems. However, AI can also be used to intensify fossil fuel production globally. And if GPT-3 is the future, rebound effects might prevail in the end. We will see.

So how much GHG does AI actually emit? Surprisingly, a precise answer does not exist. Researchers have primarily focused on emissions related to model training and testing, which are only part of the overall damage package. A recent paper offers some excellent estimates that might serve as a benchmark for future research. But the final answer, my friend, is still blowing in the wind.

Raúl

 
