07 Sep 2023

EU AI Act: Notes and Thoughts on the Proposed Regulation

This article is massive. You are cordially invited to jump to the sections that feel the most relevant to you.

I had the pleasure and honor to be invited to a panel discussion at the Annual Conference on Artificial Intelligence Systems and Fundamental Rights organized by the ERA. In order to not appear irrelevant on the legal aspect of things, I had to immerse myself in the proposed EU AI regulation known as the AI Act. This and the event itself taught me a great deal about regulating artificial intelligence (AI). It inspired this article.

To make it easier to navigate I will highlight parts that are more likely to interest what I think are the potential readership groups: legal professionals and tech people.

Caveat

I am not a lawyer, so my skills lean more towards understanding technology than knowing how it is supposed to be regulated. Given that technology is usually regulated by people with a more superficial understanding of it, I think it is relevant to hear both sides on the issue.

Moreover, I am not an expert in the recent Artificial Intelligence advancements either. My knowledge is that of an engineer with some machine learning background. While I have read about these advancements and experimented with them, my engagement has not gone beyond that.

The documents mentioned below amount to about 108, 269, …, in short, a lot of pages. I have skimmed through them, primarily glossing over the recitals and the articles that concern the institutional process. So, even when my insights are not inaccurate or wrong, they are certainly not comprehensive.

Personal Opinion

At its core, neutrality is often a facade. I have included this section for the sake of transparency about my personal views.

Even if we would love to see law-making as a purely rational process, I think it is worth highlighting two conflicting views of technology. Simplistically:

Option 1: the happy technology path, where technology is going to save us all in case Baby Jesus cannot come. It will improve our health and prosperity, and give us a world of freedom and access to life's enjoyments. It has already done so by making information so much more accessible.

Option 2: technology has been a threat to our values and our societies. It glues us to screens with a continuous flow of fake news and harmful content, while violating our privacy for marketing purposes. It must be put back in a reasonable position in society.

Nobody fully believes either of those opinions, however. Option 1 is more Silicon Valley oriented; option 2 more EU politician oriented.

I tend to gravitate more towards the first view: while I hate this attention society, I do not understand the privacy fetishism that has seeped in everywhere, especially when saving privacy means adding blanket regulations on corporations while largely ignoring government usage. Perhaps my entrepreneurial spirit does not perceive private endeavours as negatively as others do.

Conversely, when I talk with people who know both the US and the EU, they do not enjoy having their data mishandled in every possible way or shared for any reason, but they are much more concerned by the inefficiencies of the payment and credit scoring systems. The financial industry in Europe brings much more to our lives in terms of convenience, security and also ethics.

I feel that the political agenda should pivot back to addressing old-school, mishandled issues rather than the latest clickbait article about how "your data is being sold".

The Documents

The AI Act is, for now, a proposed regulation. EU regulations are legal texts that become legally binding without having to be implemented in national law, unlike directives.

Three documents need to be considered:

artificialintelligenceact.eu centralizes many other elements related to the topic, including a summary presentation from the European Commission.

Timeline

The first proposal of the AI Act was published in April 2021, so we need to consider that it originated in yesterday's world.

A world where most serious machine learning practitioners were reluctant to call their work "artificial intelligence", and where that wording alone was enough to attribute a sentence to the public relations department. With the sudden and concomitant arrival of both large language models (with ChatGPT as the market leader) and generative art (such as MidJourney and Stable Diffusion), this has changed a lot. One of the points of the amendments is to catch up with generative models and transfer learning, even if the existential risk remains unaddressed.

The AI Act is advertised as a major industry regulation, a strong stance against the big technology corporations in a GDPR fashion.

Timing

In 2021, machine learning / AI was following its steady trajectory. There were advancements, but nothing as ground-breaking as generative models, at least in terms of their reach into our daily lives.

In terms of public relations for the EU, it is a great moment to publish something about AI, but I believe making decisions about a technology at the pinnacle of its hype cycle might be ill-advised.

Emotions and fears are peaking, the sense of reality is not.

To illustrate my concerns, I will just observe that we are experiencing a shift in public opinion with regard to nuclear power, both for its CO2 and climate change benefits and for the energy independence it can provide. The general safety considerations now seem to be weighted differently than in the wake of incidents like Three Mile Island, Chernobyl and Fukushima.

Concepts

Horizontality Versus Verticality

Law should be fair: it should be universal, outlining principles rather than addressing a specific issue in a specific context.

In that context, seeking horizontality is key. It means that a legal principle should be applicable across various contexts. Every time a constitution is used to apply fundamental principles to new contexts, technologies or topics, that is horizontality.

If there is one thing that the AI Act lacks by essence, it is horizontality. Because it is geared primarily towards AI, it lacks a sense of generality that could transpose to many other topics. Article 5 prohibits AI systems used for unethical subliminal techniques… what about unethical subliminal techniques without AI? Does their exclusion suggest acceptance?

In an oversimplified view, what the AI Act asks of you, if you are dealing with high risk AI, is to demonstrate that you have taken a rigorous approach and thought about the risks (e.g. error or discrimination). Those concerns are legitimate; I can recommend Chapter 3 of Deep Learning for Coders by Jeremy Howard & Sylvain Gugger to learn more on the matter. For example, does your HR AI system actually enhance hiring?
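
To make this concrete, here is a minimal sketch of the kind of quantified bias check such an approach could document, comparing a hiring model's selection rates across a protected attribute. The data, the column names and the four-fifths threshold are illustrative assumptions, not anything mandated by the Act:

    # A minimal, illustrative bias check for a hypothetical screening model:
    # compare selection rates across a protected attribute.
    import pandas as pd

    # Hypothetical screening results: 1 = candidate shortlisted by the model.
    results = pd.DataFrame({
        "group": ["A", "A", "A", "A", "B", "B", "B", "B"],
        "shortlisted": [1, 0, 1, 1, 0, 0, 1, 0],
    })

    selection_rates = results.groupby("group")["shortlisted"].mean()
    ratio = selection_rates.min() / selection_rates.max()

    # The "four-fifths rule" from US hiring practice, used here purely as an
    # example of a documented, quantified fairness criterion.
    print(selection_rates)
    print(f"Disparate impact ratio: {ratio:.2f} (flag if below 0.80)")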

Meanwhile, HR departments all around the globe use "psychological tests" that are replicas of the summer edition of Cosmo's "Is he the right lover for you?" quiz. If you are looking for pseudoscience turned religion, that is probably the place to look.

Eliciting rules for better decision-making systems should not apply solely to AI. And establishing those rules for AI alone undermines the principles of universality and predictability.

Predictability

Predictability is the idea that one should be able to discern whether an action is legal.

The SEC's abrupt crackdown on crypto exchanges is a good example of a lack of predictability, probably mixed with bad administration.

Individuals and businesses need predictability to move forward. Essential clarity is lacking when the law leaves a grey area.

In a world where "unregulated" translates to "soon to be regulated", it also means that regulating provides predictability.

Isolation and Overregulation

Those concepts are pretty much self-explanatory, but I thought it only fair to highlight that the legal field is very aware of the risks of regulation: blocking science, hindering innovation and exacerbating Europe's lag in innovation capacity.

Outsiders perceive most decisions as sudden, bureaucratic and top-down. I feel this is not fully deserved, as the legal community is very much aware of the possible unwanted consequences.

Fragmentation

Fragmentation is when different countries develop different rules around the same topic, creating a lack of uniformity that leads to a fragmented market, against the European vision of a unified market.

One of the motivations behind the Act is to address the demand for AI rules and prevent national governments from developing their own sets of bespoke regulations.

While it is true that fragmentation has its drawbacks, it also provides some diversity. We would all love to have the best of all worlds, but "good" is often subjective and good intentions rarely equal good effects.

Had French ethical principles been universal law, we would not have mRNA vaccines today, because the research behind them would not have been possible.

Had the German definition of renewable energy and its rejection of nuclear power become European standards, the energy adaptation following the Russo-Ukrainian war and the effort in CO2 reduction would have been much harder.

Issue-Based Regulation Versus Precautionary Principle

Stem cell research, nuclear power, nanomaterials, AI… many of these things are complex and make good fearmongering material. Fear can be legitimate; it is a key element of evolution. At the same time, "fear is the mind-killer": it is not a good reasoning principle, particularly when it is a subjective experience guided by a sensationalist press and TV tropes.

In essence, Paul Nemitz, Principal Adviser at the European Commission, explained that the AI Act marks a transition from issue-based regulation to regulation that considers a technology's potential developments under the precautionary principle.

I understand the idea of anticipating issues, especially in a context where technology gives leverage to poor ideas. At the same time I am concerned about potential overreactions based on conjecture.

Fear not! I will contradict myself on the topic of existential risk in a moment.

Recitals

The first EU legal text I delved into was the GDPR. At the time, I was puzzled by the recitals, the first part of the document, everything before Article 1, which reads like a mixture of wishful thinking and letters to Santa Claus. And that is not too far from the truth.

To put it mildly, the recitals are the final resting place of empty promises. They are not legally binding, which is not what I was first expecting when opening a legal document. However, they explain the spirit of the law and may be resuscitated if a court needs to interpret the law in its spirit.

They represent a significant chunk of the document; if you need to skip something, this might be your best candidate.

Lack 1: Existential Risk, a Massive Oversight

As someone who belongs to the tech community, I feel a segue is due.

The last decade of tech marketing has fed on false advertising and lies, over-promising and under-delivering, or simply rebranding old ideas to give the feeling that something new was around the corner.

To name a few:

  • Siri's marketing: a personal assistant that, a decade on, remains basically useless beyond setting timers and alarms for 99% of the user base.
  • The constant repackaging of concepts: from machine learning, to big data, to data science, and finally artificial intelligence. Each wave came with a lot of promises that were rarely followed by anything real in the eyes of the general public.
  • Blockchain, cryptocurrencies and NFTs.

Making things look bigger than they are can trigger public and private investors and customers to spend time and money. Nobody wants to miss the next big thing. In the long run, however, this attitude erodes trust in the information technology industry and its public relations.

Yann LeCun, in his talk The Epistemology of Deep Learning, explains why science is often conducted under camouflaged language in order not to be rejected by peers and funding agencies. He made me realize that the issue is probably larger and older than I thought.

End of segue.


Existential risk is the idea that we may go extinct because an AI goes the wrong way. The first time you hear about it, the idea seems crazy. You have grown used to people predicting the end of the world in 2000, in 2012, because of COVID, because of vaccines, because of 3G, because of 5G, because of too many things. You just move on to the next item on your agenda. And you are right to do so!

[Image: The End of Days. "It's judgment!… Repent!… The end of days is upon us!…"]

However, when a statement about the threat that AI could represent is endorsed by much of the relevant technological and scientific community, I think the appropriate reaction is to think about it rather than wave it away.

Mitigating the risk of extinction from AI should be a global priority alongside other societal-scale risks such as pandemics and nuclear war.

That precise risk is left completely unaddressed by the AI Act. And from the few comments one can hear, it is at best perceived as a smokescreen meant to delay regulation.

The AI Act

Key Articles

Diving deep into the AI Act, there are several articles that stand out as central:

  • Articles 1 and 2: clarify the scope.
  • Article 3 & Annex I: mostly give key definitions for the rest of the document.
  • Article 4: lays out general principles around AI.
  • Article 5: elaborates on prohibited AI.
  • Article 6 & Annex III: define what qualifies as a high risk system.

If you do not develop generative systems, do not fall under Article 5 or 6 (and Annex III) and do not provide "foundation models", you are basically out of the scope of most of the document.

AI Risks in the AI Act

The core definitions of the AI Act are the different levels of AI risks, from most restricted to least restricted:

  1. Prohibited AI (Article 5.1): mostly things involving subliminal techniques (not a joke), exploiting the vulnerabilities of people, discrimination, excessive evaluation of the trustworthiness of individuals, or biometric identification.
  2. High Risk AI (Annex III): these systems are at the core of the regulation. Biometric identification (whatever is not prohibited), management of infrastructure and transport, education, recruitment, contractual decisions, access to services (e.g. public services, credit or emergency dispatch), law enforcement, migration, and the judiciary.
  3. Foundation Models (Article 28b of the amendments).
  4. Misinformation AI (Article 52): AI that can produce content resembling existing things must carry a clear label indicating that the content is AI generated (a minimal labeling sketch follows this list). I am still unclear about why this gets a different treatment from regular photoshopping, which sends us back to the idea of horizontality.
  5. All the rest is left mostly unregulated.
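
Regarding the labeling obligation in the fourth category, here is a minimal sketch of one possible mechanism, embedding a disclosure in PNG metadata with Pillow. The Act does not prescribe any particular mechanism; the file names, the metadata key and the wording are my own assumptions:

    # One possible way to label an AI-generated image: a text chunk in the
    # PNG metadata. Purely illustrative; not a mechanism defined by the Act.
    from PIL import Image
    from PIL.PngImagePlugin import PngInfo

    image = Image.open("generated.png")  # hypothetical model output

    metadata = PngInfo()
    metadata.add_text("Disclosure", "This content was generated by an AI system.")
    image.save("generated_labeled.png", pnginfo=metadata)

    # Read the label back to verify it survived the round trip.
    labeled = Image.open("generated_labeled.png")
    print(labeled.text["Disclosure"])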

As we can see, most AI systems are differentiated not by the techniques they deploy but by their domain of application.

Lack 2: The Missing Intermediate Category

Reading these definitions, I realized that my company will likely produce a high risk AI system: a property valuation model that is involved in the loan approval process of some banks in Luxembourg. My ego appreciates it, my brain fears the extra administrative burden; I might be a bit biased in my observations.

However, as much as I understand why some level of mandatory bureaucracy aims at putting safeguards around a technology that could assist a judicial system, I still believe that a trial is quite different from a mortgage:

  • It is mandatory: usually, you don’t choose your judge / jurisdiction.
  • You do not get to see competitors: you get a single judge (albeit sometimes multiple trial instances).
  • The stakes are different: jail time versus getting a mortgage.

To generalize a bit, a court has a monopoly from the point of view of the individual facing it; a lender does not. Setting a threshold on the proportion of alternatives to a system available on a market, and deriving an intermediate category from it, would have felt more appropriate.

Obligations: Due Diligence, Documentation, Information and Registration

Title III, Chapters 2 and 3, is meant to cover the requirements for high risk systems.

These chapters contain many concepts that remain unclear to the layman: "risk management" in Article 9, "data governance" in Article 10, "human oversight" in Article 14.

In a nutshell, this means that documentation is expected around:

  • The whole production of the model (Article 10): this includes the data, its preprocessing, and the assumptions made about data quality and model quality, as well as a quality management system (Article 17).
  • A risk assessment of the model (Article 9): akin to what you may find in information security. Risks are to be described, quantified and mitigated when possible.
  • The execution of the model (Article 12): logs enabling the traceability of the system's behavior (a minimal sketch follows this list).
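
As an illustration of the third point, here is a minimal sketch of what execution logging in the spirit of Article 12 could look like. The fields, names and format are my own assumptions; the Act does not prescribe a schema:

    # A minimal, illustrative audit trail: one traceable record per model call.
    import json
    import logging
    import time
    import uuid

    logging.basicConfig(filename="predictions.log", level=logging.INFO)
    logger = logging.getLogger("model_audit")

    def log_prediction(model_version: str, features: dict, prediction: float) -> str:
        """Append one traceable record per model invocation."""
        record = {
            "event_id": str(uuid.uuid4()),   # unique id to reference this decision later
            "timestamp": time.time(),
            "model_version": model_version,  # ties the output to documented training
            "features": features,            # inputs, so the decision can be replayed
            "prediction": prediction,
        }
        logger.info(json.dumps(record))
        return record["event_id"]

    # Hypothetical property valuation call.
    event_id = log_prediction("valuation-2023.09", {"surface_m2": 85, "rooms": 3}, 512000.0)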

Chapter 5 details a form of conformity self-assessment and certification.

Article 51 in particular introduces a mostly optional registration.

The crux here is that those administrative demands are not limited to revenue-generating or for-profit applications. I am afraid the likely consequence will be the disappearance of low-margin systems from Europe, including some we have grown accustomed to.

Take WebMD for instance. They offer an expert system that basically performs an informative medical diagnosis online, for free. It sits right in the middle of high risk AI, and also of high added value for users, especially when every medical institution seems to be understaffed. I do not know how big Europe is in their revenue, but I would be afraid of deterring such products from the market.

The part about foundation models expands on the impediment to innovation this creates.

The requirements themselves are not inherently problematic: they define a quality standard that is expected from a scientific or industrial standpoint anyway. However, the scale of the fines may push actors to not jeopardize their whole business for a marginal part of their income.

Fines & Proportionality

The fines are hefty. Article 71 sets them between 10 and 40 million euros, or 2 to 7% of worldwide turnover (for companies).

Should a regulatory body err (Article 72), the penalty could go up to 1.5 million euros.

In essence:

  1. A lapse by the regulator might lead to a fine that, in most cases, merely looks like a wire transfer from one public account to another. The responsible individual will probably never hear about it again after being scolded.
  2. A major corporation in a similar case will risk a significant share of its income and profit. The responsible individual will probably lose their job.
  3. Smaller corporations will be staring at bankruptcy (see the back-of-the-envelope comparison below). If the responsible individual is an entrepreneur, most of their wealth may evaporate.
  4. An individual will face their own personal bankruptcy.
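
To make the asymmetry concrete, here is a back-of-the-envelope comparison, assuming the 40 million euro / 7% ceiling applies as "whichever is higher". The company names and turnover figures are invented for illustration:

    # Same infringement, very different consequences depending on company size.
    # Turnover figures are illustrative assumptions.
    for name, turnover in [("Big Tech", 200_000_000_000),
                           ("Mid-size firm", 500_000_000),
                           ("Startup", 2_000_000)]:
        fine = max(40_000_000, 0.07 * turnover)
        print(f"{name}: turnover {turnover:,} EUR -> max fine {fine:,.0f} EUR "
              f"({fine / turnover:.0%} of turnover)")

For the startup, the 40 million euro floor is twenty times its entire turnover, which is the disproportion the list above describes.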

It is hard for me to read proportionality and equality before the law here.

I would consider this a strong deterrent to the financing of innovative structures in Europe.

Lack 3: Foundation Models & Open Source, the Threat on the European Technology Scene

NB: foundation models are not mentioned in the first proposal; you need to dig into the proposed amendments.

“Foundation model” is a term that I had not encountered before reading the document.

“foundation model” means an AI system model that is trained on broad data at scale, is designed for generality of output, and can be adapted to a wide range of distinctive tasks

From my understanding, this encompasses any model that can be specialized for a specific problem or converted to a different purpose. If one considers all the uses of transfer learning, this covers any image or language model made using deep learning. Specializing pretrained models has become the de facto standard for addressing these tasks.
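
To illustrate how broad that definition is, here is a minimal transfer learning sketch using PyTorch and torchvision; the three-class task is a hypothetical example. A general-purpose pretrained model is frozen and only a small task-specific head is retrained:

    # Minimal transfer learning: reuse a pretrained image model as a
    # "foundation" and retrain only a small task-specific head.
    import torch
    import torch.nn as nn
    from torchvision import models

    # Start from a model pretrained on broad data (here ImageNet).
    backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze the general-purpose representation...
    for param in backbone.parameters():
        param.requires_grad = False

    # ...and replace the final layer with a head for our specific task,
    # e.g. a hypothetical 3-class document classification problem.
    backbone.fc = nn.Linear(backbone.fc.in_features, 3)

    # Only the new head's parameters are trained.
    optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)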

Article 28b lays out the expectations for foundation model providers, and they sound a lot like what I described in the part about obligations.

The ramifications for open-source models would be devastating: no individual in their right mind would publish any model, and no company would be able to reuse a model lacking documentation. This would push individual rights, businesses, and scientific progress back more than a decade. And it may very well keep them there.

The only entities that would rejoice at this are the corporations large enough to control a whole value chain and retrain a specific model with their own proprietary data. If you were looking for a moat in AI, this is a godsend.

Lack 4: Employment Disruption

Generative AI is perceived as being a threat to a lot of jobs.

Many, mostly content creators, are already impacted, and it is probably just the tip of the iceberg. This is the most recurrent concern I hear about this technology.

There are three positions around that:

  1. Denial: believing that it will not bring significant changes.
  2. Traditional: asserting that despite some job losses, new ones will compensate. Joscha Bach elaborates a bit at the beginning of this interview on the matter.
  3. Disruptive: predicting a world where jobs practically cease to exist.

History has often displayed a combination of these.

Most inventions are irrelevant. Society adapts to most innovations. Some make human skills obsolete.

Here, it is worth noting that any change enabled purely by software can be extremely fast. All the physical infrastructure (computers, servers, networks) is already there, so the investment can be very low and the return on investment very fast.

Some think that a universal basic income could be a solution in the event, or in anticipation, of a scarcity of work.

However, the AI Act does not touch upon this matter. It is debatable whether addressing this issue falls within the EU’s purpose.

Lack 5: Nothing Against Criminality

Generative AI may enable the most nightmarish scenario for scams as it renders impersonation easy and, even worse, automated.

The whole AI Act is geared towards legitimate economic entities. It seems that, in order to be a burden on society, you need to be incorporated.

Regrettably, the Act offers no provisions to protect individuals, companies and our societies at large from criminals. In a context where most scammers and manipulators act across borders, it is probably an area where a supranational framework would make the most sense.

Lack 6: What about the Military

The military is excluded from the AI Act's scope, which is akin to handing it carte blanche.

Conclusion

The AI Act will likely roll out with much fanfare, asserting that EU regulators have now reined in AI. The document will probably require a healthy set of practices around critical (high risk) AI, and it will address some generic concerns around artificial intelligence. However, it will not solve them: many topics are still open research areas, and biases, for example, will not disappear. It will at least ask AI actors to care about the problem and to quantify it.

Such practices and the risk of fines will probably hinder the development of AI in Europe, to the point that one may wonder whether the costs will not severely outweigh the benefits.

At the same time, many concerns will remain unaddressed, as I have listed in this article. Some, like the existential risk, are probably out of scope for the EU and in need of international discussions; others, like criminality, feel like low-hanging fruit that was not given consideration.


Many thanks to those who had the patience to help me proofread this article. 🙏


Would you like to hear more from me?

I thought about interrupting your reading midway, but then I decided to respect my readership.

If you wish, let's stay in touch regardless of changes in your social network feed!


Fräntz Miccoli

This blog wraps up my ideas and opinions about innovation and entrepreneurship.

For some time now, I have been the happy cofounder, COO & CTO of Nexvia.

Ideas expressed are here to be challenged.

