Regulating AI

The EU’s December agreement on legislation tackling AI deployment and use in the Union and beyond is yet more evidence of its global leadership in the area of digital technology regulation. A few weeks before the epic event, heavy lobbying by the usual suspects had placed the legislation’s future on the line. Generative AI (GenAI) was one of the critical issues under tenacious contention. Recall that the AI legislation’s first draft was completed by the European Commission (EC) in 2021, more than 18 months before ChatGPT saw the light of day. In June 2023, the European Parliament (EP) endorsed a revised draft. However, the draft legislation became outdated in the blink of an eye once GPTs started to conquer the world, and it took over a year to address the issue. After all, GenAI is indeed a game-changer and needs to be discussed without beating around the bush.

The EU’s digital regulatory leadership stems from at least three sources. First is the policy imperative of avoiding market fragmentation, thereby preserving the single market approach that has defined the Union since its inception. The last thing EU policymakers desire or need is a plethora of national digital regulations (or the lack thereof) pulling in different directions. Second is the existence of strong institutional capacities that allow the various EU entities and agencies to design and develop such policies and to ensure successful implementation and oversight. That is undoubtedly a tall order for many countries and regional institutions in the Global South. Third, the absence of an advanced and cohesive domestic tech sector makes the Union dependent on external agents and companies. In this light, most EU digital legislation aims to incentivize such a sector by trying to level the playing field without pushing away or kicking out successful ventures and firms.

A recent publication identifies three digital regulatory models that are arguably competing to conquer the world at the policy level. They are:

  1. The US market-driven model. Here, little to no regulation is best to keep markets competitive. Indeed, competitive dynamics should auto-magically solve any market wrinkles. By the same token, tech firms are expected to self-regulate and show good overall behavior, just like high-school seniors.
  2. The Chinese state-centered model. Market regulation is needed to foster economic development and sustain social harmony and control. Powerful state institutions specialized in the topic are vital in ensuring political stability and centralization. The author calls this model “authoritarian.”
  3. The European rights-based model. Citizens are at the center of any regulatory framework, and their fundamental rights should be respected and defended by all means. However, digital technologies cannot be left to their own devices to reach such a goal. Regulation is thus also required to secure a fair distribution of their potential benefits.

To a libertarian, any regulation is “authoritarian” since the state is always involved. To a neoliberal, regulation is the source of the problem and never the solution, with the state, in the best-case scenario, merely facilitating a few things. The analytical framework used to develop this classification is not presented in the book, and the boundaries between the three categories are fluid at best. For starters, one could make the case that the classification is not unique to digital technologies but applies to most other sectors and industries as well. That is certainly the case for the US and China, where the prevailing regulatory framework permeates the work of most, if not all, regulators.

Second, the difference between the US and European models is, well, China, that is, state-centered regulation. Of course, the EU’s goals are quite different from China’s. However, one can easily argue that successful regulatory processes have always required strong state institutions, as has been the case since the pioneering 1890 US Sherman Act. What actually differs are the goals and targets set by the regulations and their intended impact. Moreover, unlike the EU, China has its own big tech, which is the main target of its national regulations.

The current AI scene shows a similar situation. Two countries are way ahead, while all the others have difficulty catching up. China has already introduced AI regulations, authoritarian or not, including rules covering GenAI. So the EU is not the first, as the mainstream media claims. However, I expect the EU regulation to have a broader global impact than anything coming from China.

A couple of weeks ago, a revised version of the draft AI bill was shared online. The draft endorsed by the EP had 12 Titles and 85 articles. The latest draft adds a new Title for GenAI, called General Purpose AI models, and includes three chapters comprising general obligations for providers and for GenAI models posing systemic risks. Two new annexes addressing technical and transparency information for GenAI providers have also been added. The number of articles is also evolving, and we should probably expect it to increase.

Nevertheless, the overall approach of the proposed AI legislation remains unchanged. While the DSA focuses on illegal content and the DMA on competitive markets, the AI proposal takes a different perspective and introduces risk as its core barometer. Indeed, the word “risk” appears almost 450 times in the pre-2024 text. This avoids a “blanket” approach to AI regulation. Instead, it targets the potential impact the technology could have on the fundamental rights of EU citizens and stakeholders and on democratic governance processes and institutions. AI systems posing unacceptable risks are banned, including certain biometric systems, social scoring, subliminal messaging, emotion manipulation and algorithms that discriminate against users based on their personal traits. Exceptions for law enforcement are also stipulated, agreed upon after a fierce debate on the issue and over the opposition of civil society groups. High-risk AI systems are then the main target of the EU AI Act and carry most of the obligations set out in the latest draft. Medium and low-risk AI platforms are also included. Meanwhile, small and medium-sized enterprises (SMEs) are provided support in an apparent effort to stimulate AI innovation within the geographical confines of the EU.

A more comprehensive analysis of the EU AI Act should be undertaken once the final draft is submitted for EP approval. Multiple amendments should be expected. For the record, the EP introduced almost 800 amendments to the 2021 EC AI draft. We will see.

While the various drafts describe the legislation as future-proof, gaps are certainly unavoidable. In my view, the apparent lack of synergies between the AI Act on the one hand and the DSA and DMA on the other is surprising. Take the issue of illegal content created by GenAI systems. A situation where an algorithm helps another one that, in turn, assists yet another one (and so on, ad infinitum) is not directly covered by the DSA. Content created by a network of “intelligent” machines raises new policy issues. Never mind the IPR issue, which has now reached the courts.

A related issue is the emergence of new platforms running entirely on AI and providing GenAI services. OpenAI is the best example here. It now has an app store, taking a page from Apple and Google. Moreover, some older platforms can transform their business models and modus operandi by fully embracing AI. Is the DMA well-equipped to tackle these new challenges? Probably not. OpenAI could soon be considered both a Very Large Online Platform (VLOP) and a “gatekeeper,” yet its business model, while still evolving, is not quite the same as that of the large digital platforms we are all very familiar with.

In this space where dull moments are scarce, celebrations should be limited in both noise level and duration.

Raúl
