Like the GDPR, the Artificial Intelligence Act (AI Act) is a major move toward the regulation of information technologies in the European Economic Area (the European Union and a few other European countries). While it will gradually come into force by August 2026, the magnitude of the changes it requires and the consequences of non-compliance make it relevant for many actors to start thinking about it now.
Where the GDPR mostly required contractual work, user information and consent collection, the AI Act heavily impacts the engineering process for uses classified as “high risk”, and its definition of artificial intelligence is broad enough to include anything from an Excel model or a linear regression upward.
Caveat
I am not a lawyer, so my skills lean more towards understanding technology than how it is regulated.
The document runs to 144 pages in its final English version, and, because its prose is rather dull, I have not read it all. I had some level of understanding of it after writing EU AI Act: Notes and Thoughts on the Proposed Regulation, and while writing this article (and preparing a talk) I reviewed the key elements mentioned there for accuracy. This is an introduction more than a comprehensive review.
As my personal interests are on the side of producers and consumers of models, and of observers of the economics of the field, I focused on those. I skipped non-mandatory statements (such as the codes of practice and codes of conduct) and the institutional elements (EU AI Board, AI Office, local regulators, scientific panel).
Context
The AI Act is referenced as EU Regulation 2024/1689; I suggest you open the document in a new tab to take a quick look at the elements that raise your curiosity.
artificialintelligenceact.eu has an explorer mode which is decent, albeit slow.
Since it has been advertised as being about artificial intelligence, you could think it is related to generative AI and LLMs; it is not, at least not exclusively. The first proposal dates from 2021, before those innovations reached a broad public, and the text was later adapted to cover them before being adopted.
The targets of the Act were:
- Protect fundamental rights.
- Prevent market fragmentation, that is to say the development of national regulations that go against the idea of a European Single Market.
- Provide legal certainty, because the only thing worse than being regulated is to be about to be regulated.
A few key dates are set in the future (Article 113):
- 2 February 2025: the ban on prohibited AI practices takes effect.
- 2 August 2025: obligations on general-purpose AI start to apply.
- 2 August 2026: general application date.
Key definitions
As written in the introduction, the definition of an AI system is broad, so broad that it covers much more than machine learning: it fits a semi-advanced Photoshop filter as well as a regression in Excel. Article 3.1 states:
‘AI system’ means a machine-based system that [operates on] the input it receives, [infers how to] generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.
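To make the breadth of that definition concrete, here is a minimal, hypothetical sketch: a plain linear regression already matches the wording above, since it is a machine-based system that infers, from the input it receives, how to generate predictions that can influence decisions.

```python
# A minimal, hypothetical example: even this plain linear regression
# "infers how to generate outputs such as predictions" from its input,
# which is enough to fall under the Article 3.1 definition of an AI system.
from sklearn.linear_model import LinearRegression

# Toy data: monthly ad spend (k€) vs. resulting sales (k€).
X = [[1.0], [2.0], [3.0], [4.0]]
y = [2.1, 3.9, 6.2, 8.1]

model = LinearRegression().fit(X, y)   # a "machine-based system"
prediction = model.predict([[5.0]])    # an output that can influence decisions
print(f"Predicted sales for 5 k€ of ad spend: {prediction[0]:.1f} k€")
```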
Another important definition is that of general-purpose models (called “foundational models” in some previous drafts), which matters for today’s hot topics such as LLMs and generative AI (Article 3.63).
‘general-purpose AI model’ means an AI model, […] that […] is capable of […] performing a wide range of distinct tasks and that can be used in […] systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market.
This definition has evolved from the drafts and does not seem to cover transfer learning anymore. The exclusion of research and development is also a nice thing to see.
Penalties
Sex is the best way to get people’s attention… the second best is probably money. The catchline that grabbed attention in the GDPR is reused in the AI Act: “crazy high fines”.
Fines go up to a cap defined as the greater of a percentage of annual worldwide turnover (from 1% to 7%, depending on the infringement) and a fixed amount (from 7.5 to 35 million euros) (Article 99).
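As a rough sketch of how the cap works for the most severe infringements (the turnover figures below are made up for illustration):

```python
# Sketch of the fine cap for the most severe infringements (Article 99):
# the greater of 7% of annual worldwide turnover and 35 million euros.
def fine_cap_eur(annual_turnover_eur: float,
                 pct: float = 0.07,
                 fixed_eur: float = 35_000_000) -> float:
    return max(pct * annual_turnover_eur, fixed_eur)

print(fine_cap_eur(10_000_000_000))  # 700000000.0 -> the percentage dominates
print(fine_cap_eur(50_000_000))      # 35000000.0  -> the fixed floor dominates
```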
What risk is
The document has a pretty standard definition of risk (Article 3.2). However, as you will notice, the key classification of the document, already branded as the “AI Pyramid”, focuses only on the risk caused by the use of an AI system, so it is highly linked to the environment and context of a use case rather than to the models themselves.
‘risk’ means the combination of the probability […] and the severity of […] harm.
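This reads like the classic risk-matrix approach; here is a minimal, hypothetical sketch of that reading (the scales below are illustrative, not taken from the Act):

```python
# Hypothetical risk-matrix reading of Article 3.2: risk combines the
# probability of an occurrence of harm with its severity.
def risk_score(probability: float, severity: int) -> float:
    """probability in [0, 1], severity on an arbitrary 1-5 scale."""
    return probability * severity

print(risk_score(0.01, 5))  # rare but severe harm  -> 0.05
print(risk_score(0.80, 1))  # frequent, minor harm  -> 0.8
```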
Note that with regard to general purpose models a notion of systemic risk has been introduced (Article 3.65):
‘systemic risk’ means a risk that is specific to the high-impact capabilities of general-purpose AI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain.
Actors
The document considers different roles for operators:
- ‘Provider’: the entity that develops an AI system (or has one developed) and places it on the market under its own name (Article 3.3).
- ‘Deployer’: professional user of an AI System (Article 3.4).
- ‘Importer’: entity located in the EU responsible for making available an AI system in the EU when the provider is a non-EU entity (Article 3.6).
- ‘Distributor’: entity in the supply chain, other than the provider or the importer, that makes an AI system available on the EU market (Article 3.7).
Levels of AI risk
Nota: AI models used for the sole purpose of scientific research and development are out of scope (Article 2.6).
Keeping the motivation of the regulation in mind helps to “guess” the classification of an AI Model: safeguarding democracy, individual rights (including equality regarding access to opportunities and fair treatment) and maintaining a functional market.
Prohibited AI
The notion of prohibited AI (Article 5) mostly covers systems that are considered unethical with regard to the fundamental rights of individuals: manipulation, classification of individuals by race or sexual orientation, causing harm to groups, identification of individuals (outside of a criminal context), social scoring with disproportionate effects, …
The rules, as stated in the name, are simple: those practices are prohibited.
AI under Obligation of Transparency
Article 50 puts in place transparency obligations for AI systems. In plain terms, users should be informed that they are interacting with an AI system or consuming AI-generated content. However, Article 50.4 leaves some leeway in contexts where the use of AI would be expected (art, satire, …); there, the use of AI can be disclosed more discreetly so as not to alter the enjoyment of the content.
This includes a machine-readable watermarking obligation, when technically feasible given costs and the state of the art (Article 50.2).
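The Act does not prescribe what the machine-readable marking must look like; as a purely illustrative sketch (the field names are made up and not a standard), an API serving generated text could attach an explicit provenance tag. Robust watermarking of the content itself (for example, statistical watermarks embedded in the output) is a harder problem than this kind of side-channel metadata.

```python
# Purely illustrative machine-readable disclosure for generated content:
# the payload carries an explicit, parseable provenance field.
# Field names are made up for the example and are not a standard.
import json
from datetime import datetime, timezone

def wrap_generated_text(text: str, model_name: str) -> str:
    payload = {
        "content": text,
        "provenance": {
            "ai_generated": True,  # the machine-readable disclosure
            "generator": model_name,
            "generated_at": datetime.now(timezone.utc).isoformat(),
        },
    }
    return json.dumps(payload)

print(wrap_generated_text("Once upon a time…", "example-llm-v1"))
```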
General-Purpose AI
Obligations for all General-Purpose AI
- Provide documentation (Articles 53.1.a and 53.1.b, Annex XII), in particular so that Providers relying on said general-purpose AI can be properly informed.
- Have an authorized representative within the Union.
Articles 53.2 and 54.4 provide an exemption for open-source general-purpose AI models, provided they do not qualify as carrying systemic risk.
General-Purpose AI with Systemic Risks
Systemic risk is characterized by at least one of the following conditions:
- High impact capabilities (Article 51.1.a).
- More than 10²⁵ floating-point operations used during training, i.e. 10 yottaFLOP (Article 51.2). For reference, GPT-4 is reputed to have been trained with compute of roughly this order of magnitude; a back-of-the-envelope estimate is sketched after this list.
- Decision of the Commission (Article 51.1.b, criteria are described in Annex XIII).
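To get a feel for the order of magnitude, a common back-of-the-envelope approximation for transformer training is compute ≈ 6 × parameters × training tokens; the model sizes below are hypothetical, not disclosed figures.

```python
# Back-of-the-envelope check of the Article 51.2 threshold using the common
# approximation: training compute ≈ 6 * parameters * training tokens.
THRESHOLD_FLOP = 1e25

def training_flop(parameters: float, tokens: float) -> float:
    return 6 * parameters * tokens

for params, tokens in [(7e9, 2e12),     # a "7B" model on 2T tokens
                       (70e9, 15e12),   # a "70B" model on 15T tokens
                       (1e12, 10e12)]:  # a "1T" model on 10T tokens
    flop = training_flop(params, tokens)
    flag = "above" if flop > THRESHOLD_FLOP else "below"
    print(f"{params:.0e} params x {tokens:.0e} tokens -> {flop:.1e} FLOP ({flag} threshold)")
```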
Obligations only for General-Purpose AI with Systemic Risks
- Notify the Commission within two weeks along with the relevant elements to substantiate why the criteria are met (Article 52).
- Ensure proper testing, including adversarial testing (Article 55.1.a).
- Assess and mitigate risks (Article 55.1.b).
- Document, secure and report incidents (Article 55.1.c and Article 55.1.d).
High Risk AI
Definition of High Risk AI
Here the family is more diverse. The key definitions are in Article 6.1 and in Annex III (an annex that may be updated in the future):
- Safety components (Article 6.1).
- Biometrics (Annex III 1).
- Critical infrastructure such as water, energy, internet and roads (Annex III 2).
- Education and training: access and evaluation of students (Annex III 3).
- Employment (Annex III 4).
- Essential services: healthcare, benefits, creditworthiness, fraud detection, emergency services (Annex III 5).
- Law enforcement (Annex III 6).
- Migration and border control (Annex III 7).
- Justice and democratic process (Annex III 8).
Nevertheless, this list is nuanced by Article 6.3, which excludes cases based on the narrowness of the task or the assistive (to a human) nature of an AI system. This assistive aspect is narrow: it covers, for instance, an AI system that informs a human after the human has made a decision, not an AI system whose decisions can merely be overridden.
Obligations of High Risk AI
Risk Management
Mostly to ensure that a framework around risk is in place and documented (Article 11), covering:
- Risks regarding health, safety and fundamental rights (as defined in the Charter of Fundamental Rights of the EU) (Article 9.2.a).
- In normal use and under potential misuse (Article 9.2.b).
- Including post-market monitoring (Article 9.2.c).
- Risk management: preventing / reducing risk when possible, training (Article 9.5).
- Back-testing, validation and testing in real-world conditions (Articles 9.6 & 9.7).
- Appropriate data management and assessment (Article 10).
- Logging (Article 12); a minimal sketch is given after this list.
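Article 12 does not mandate a specific format; as a minimal, hypothetical sketch, structured event logging of each prediction already goes a long way toward the required traceability.

```python
# Minimal, hypothetical event logging for a high-risk AI system (Article 12):
# each prediction is recorded with a timestamp, an input reference and the
# output, so the system's operation can be traced after the fact.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_system_events.log", level=logging.INFO)
logger = logging.getLogger("high_risk_ai_system")

def log_prediction(input_ref: str, output: dict, model_version: str) -> None:
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        "input_ref": input_ref,  # a reference to the input, not the raw data
        "output": output,
    }
    logger.info(json.dumps(event))

log_prediction(input_ref="application-2024-00042",
               output={"credit_score": 0.73},
               model_version="1.4.2")
```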
Provider documentation & oversight
Oversight must be enabled through proper documentation (Article 13) and by giving the Deployer the possibility not to use the AI system (Article 14.4.d) or to interrupt it (Article 14.4.e).
Quality Management System
Articles 10 and 17 ask for a clear process around the creation and evolution of a High Risk AI system. This means a written process covering everything involved in the project, including design documents and tests for data management, data processing, the model and its assessment.
It is unclear to me if development versions and experiments could be excluded and brought to compliance before reaching production.
Article 63.2 makes this specific part lighter for small businesses.
Public Related Deployers
Deployers that are public bodies or provide public services are to conduct a fundamental rights impact assessment (Article 27).
Lots, and lots, and lots of Bookkeeping, Cross-Checks, Due Diligence and Value-Chain Obligations
Article 18, Article 22, Article 23, Article 24, Article 25 & Article 26.
Conformity & registration
A conformity assessment has to be either self-performed (Annex VI, the simpler form) or carried out by a third-party ‘notified body’ (Annex VII, which probably reduces the risk of non-compliance further).
CE marking is mandatory (Article 48). You can now try to stick a CE sticker on a ReLU unit.
Before placing a High Risk AI System on the market (or an AI system that the provider has decided does not fall under the high-risk classification), registration is mandatory (Article 49).
Authorities have the right to request source code access (Article 75.13.a).
Supporting innovation
The whole of Chapter VI is dedicated to the support of innovation. Without going into details, regulatory sandboxes sound like a form of incubation program to help with compliance (Article 57), not a lighter compliance scheme.
Similarly, actual support for SMEs seems limited to providing templates (Article 62.3.a) and a lighter quality management system (Article 63.2).