The EU’s December agreement on legislation tackling AI deployment and use in the Union and beyond is yet more evidence of its global leadership in digital technology regulation. A few weeks before that epic event, heavy lobbying by the usual suspects had placed the legislation’s future on the line. Generative AI (GenAI) was one of the critical issues under tenacious contention. Recall that the AI legislation’s first draft was completed by the European Commission (EC) in 2021, more than 18 months before ChatGPT saw the light of day. In June 2023, the European Parliament (EP) endorsed a revised draft. However, the EU legislation became obsolete in the blink of an eye once GPTs began conquering the world. It thus took over a year to address the issue. After all, GenAI is a game-changer and deserves discussion without beating around the bush.
The EU’s digital regulatory leadership stems from at least three sources. First, there is the policy imperative to avoid market fragmentation, thereby preserving the single market approach that has defined the Union since its inception. The last thing EU policymakers want or need is a patchwork of national digital regulations (or a lack thereof) pulling in different directions. Second, strong institutional capacities enable the various EU entities and agencies to design and develop such policies and to ensure their successful implementation and oversight. That is undoubtedly a tall order for many countries and regional institutions in the Global South. Third, the absence of an advanced, cohesive tech sector leaves the Union dependent on external agents and companies. In this light, most EU digital legislation aims to incentivize this sector by leveling the playing field without pushing out successful ventures and firms.
A recent publication identifies three digital regulatory models that are arguably competing to conquer the world at the policy level. They are:
- The US market-driven model. Here, little to no regulation is best to keep markets competitive. Indeed, competitive dynamics should auto-magically solve any market wrinkles. By the same token, tech firms are expected to self-regulate and exhibit good overall behavior, just like high school seniors.
- The Chinese state-centered model. Market regulation is needed to foster economic development and sustain social harmony and control. Powerful state institutions specialized in the topic are vital in ensuring political stability and centralization. The author calls this model “authoritarian.”
- The European rights-based model. Citizens are at the center of any regulatory framework, and their fundamental rights should be respected and defended by all means. However, digital technologies cannot be left to themselves to achieve this goal. Moreover, regulation is required to secure a fair distribution of their potential benefits.
To a libertarian, any regulation is “authoritarian,” as the state is always involved. To a neoliberal, regulation is the source of the problem, not a solution, and the state should just facilitate a few things in the best-case scenario. While the analytical framework used to develop this classification is not presented in the book, the boundaries between these three categories are fluid at best. For starters, one could argue that the classification is not unique to digital technologies but also applies to most other sectors and industries. That is certainly the case for the US and China, where the prevailing regulatory framework permeates the work of most, if not all, regulators.
Second, the difference between the US and the European model is, well, China: state-centered regulation. Of course, the EU’s goals are quite different from China’s. However, one can easily argue that successful regulatory processes have always required strong state institutions, as has been the case since the pioneering 1890 US Sherman Act. What actually differs are the goals and targets set by the regulations and their intended impact. Moreover, unlike the EU, China has its own big tech sector, which is the main target of its national regulations.
The current AI scene is similar. Two countries are way ahead, while all others are having difficulty catching up. China has already introduced AI regulations, including rules for GenAI, authoritarian or not. So the EU is not the first, as the mainstream media claims. However, I expect the EU regulation to have a broader global impact than anything coming from China.
A couple of weeks ago, a revised version of the AI draft bill was shared online. The earlier draft approved by the EP had 12 Titles and 85 articles. The latest draft adds a new Title on General-Purpose AI models, which includes three chapters covering general obligations for providers and high-risk GenAI platforms. Two new annexes addressing technical and transparency information for GenAI providers have also been added. The number of articles is also evolving, and we should probably expect it to increase.
Nevertheless, the overall approach of the proposed AI legislation remains unchanged. While the Digital Services Act (DSA) focuses on illegal content and the Digital Markets Act (DMA) on competitive markets, the AI proposal takes a different perspective, placing risk at its core. Indeed, the word risk appears almost 450 times in the pre-2024 text. This avoids a “blanket” approach to AI regulation. Instead, it targets the potential impact the technology could have on the fundamental rights of EU citizens and stakeholders, as well as on democratic governance processes and institutions. AI systems posing unacceptable risks are banned, including certain biometric applications, social scoring, subliminal messaging, emotion manipulation, and algorithms that discriminate against users based on their personal traits. Exceptions for law enforcement are also stipulated, following a fierce debate on the issue and opposition from civil society groups. High-risk AI systems are the main target of the EU AI Act and bear most of the obligations outlined in the latest draft. Medium- and low-risk AI platforms are also covered. In contrast, small and medium-sized enterprises (SMEs) are offered support in an apparent effort to stimulate AI innovation within the geographical confines of the EU.
A more comprehensive analysis of the EU AI Act should be undertaken once the final draft is submitted for approval by the EP. Multiple amendments should be expected. For the record, the EP introduced almost 800 amendments to the 2021 EC AI draft. We will see.
While the various drafts describe the legislation as future-proof, gaps are certainly unavoidable. In my view, the apparent lack of synergies between the AI Act on the one hand and the DSA and the DMA on the other is surprising. Take the issue of illegal content created by GenAI systems. A situation where an algorithm helps another one that, in turn, assists yet another one (and so on, ad infinitum) is not directly covered by the DSA. Content created by a network of “intelligent” machines raises new policy issues. Never mind the intellectual property rights (IPR) issue, which has now reached the courts.
A related issue is the emergence of new platforms running entirely on AI and providing GenAI services. OpenAI is the best example here. It now has an app store, taking a page from Apple and Google. Moreover, some older platforms can transform their business models and modus operandi by fully embracing AI. Is the DMA well-equipped to tackle these new challenges? Probably not. OpenAI could soon be considered both a Very Large Online Platform (VLOP) and a “gatekeeper,” but its still-evolving business model is not quite the same as that of the large digital platforms we are all very familiar with.
In this space, where dull moments are scarce, celebrations should be limited in both noise level and duration.
Raúl