
Navigating the nexus of Policy, Digital Technologies, and Futures (S1/E9)

S1/E9: The difficult art of regulating magic. I mean, technology

As Arthur C. Clarke purportedly said, “Magic is just science that we don’t understand yet.” And as my son, then about four years old, once told me, “Look dad, I’m super intelligent! 2+2=4!”

Artificial Intelligence (AI) may be a serious topic, but that doesn’t mean it had to remain the hottest topic for quite so long, or that it should consume all the funds available. Even the metaverse is suffering from the shift of capital towards generative AI of the ChatGPT kind.

In any case, in this and the next few episodes of our blog series, we’ll explore recent developments in the European legal framework for AI and their potential implications.

As a starter, a no-brainer: if I type “2+2=” into my phone’s calculator, it’ll show the same super intelligence as my child at four years of age. Hence, how do we differentiate the computing technology we’re used to from Terminator-type AI?

A matter of definition

To answer the question above, the European Commission (EC) chose the following AI definition for its proposal for an AI Act in the spring of 2021.

“AI is software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with.”

Which begs the question: What is in ANNEX I, for lawyers’ cabinets’ sake?

This is what ANNEX I of the proposal says.

ARTIFICIAL INTELLIGENCE TECHNIQUES AND APPROACHES referred to in Article 3, point 1

(a) Machine learning approaches, including supervised, unsupervised and reinforcement learning, using a wide variety of methods including deep learning;

(b) Logic- and knowledge-based approaches, including knowledge representation, inductive (logic) programming, knowledge bases, inference and deductive engines, (symbolic) reasoning and expert systems;

(c) Statistical approaches, Bayesian estimation, search and optimization methods.

Uh… Any software system by any chance?
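
To see just how broad that reads, here is a small sketch of my own, in Python and purely illustrative: a thermostat-style script that ticks box (c), “statistical approaches … optimization methods”, and generates a “recommendation influencing the environment” for a human-defined objective. Nothing below comes from the proposal itself.

```python
# A deliberately mundane script that arguably fits the Annex I list:
# it applies a "statistical approach" (an average) to generate a
# "recommendation" for a human-defined objective (a comfortable room).

def recommend_heating(readings: list[float], target: float = 21.0) -> str:
    """Recommend an action based on the average of recent temperature readings."""
    average = sum(readings) / len(readings)  # the statistics
    if average < target:
        return "Turn the heating up"
    if average > target:
        return "Turn the heating down"
    return "Do nothing"

print(recommend_heating([19.5, 20.0, 19.8]))  # -> "Turn the heating up"
```

Read literally, point (c) makes even this thermostat an “AI technique and approach”. Which is rather the problem.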

Encouragingly, the European Parliament (EP) didn’t quite like this definition and, in the text put forward for voting by its plenary (due today, 14 June 2023!), chose instead to align the AI definition with the one provided by the OECD (a club of mainly rich countries, as The Economist once put it), below.

“An AI system is a machine-based system that is capable of influencing the environment by producing an output (predictions, recommendations or decisions) for a given set of objectives. It uses machine and/or human-based data and inputs to (i) perceive real and/or virtual environments; (ii) abstract these perceptions into models through analysis in an automated manner (e.g., with machine learning), or manually; and (iii) use model inference to formulate options for outcomes. AI systems are designed to operate with varying levels of autonomy.”

 

It may be better in the eyes of the EP. But even without being a lawyer, I still can’t use the definition above to differentiate my phone’s calculator from generative AI. And that is without being too picky about point (ii), which states that an AI system is a machine-based system that […] abstracts perceptions through analysis […] manually… Right. Manually. Like my son.

 

Bear with me. Are keyboard correction and completion just computing technology, while generative AI is… magic? And Maps, Waze, etc.? What are they?
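
To make the point concrete, here is a toy sketch of my own: the humble calculator, or any equally mundane piece of software, rephrased in the OECD’s vocabulary. It (i) perceives an input, (ii) applies a model that is hard-coded rather than learned, and (iii) uses that model to formulate an output. None of the function names below come from the OECD text; they are just an illustration of how loosely the wording fits.

```python
# A toy "calculator" dressed up in the OECD definition's vocabulary.
# It perceives an input, applies a (hard-coded) model, and formulates an output.

def perceive(expression: str) -> tuple[int, int]:
    """(i) Perceive the (virtual) environment: parse '2+2' into its operands."""
    left, right = expression.split("+")
    return int(left), int(right)

def model_inference(operands: tuple[int, int]) -> int:
    """(ii)+(iii) Apply a model (the rules of arithmetic) to formulate an outcome."""
    return operands[0] + operands[1]

print(model_inference(perceive("2+2")))  # -> 4, the same super intelligence as my son
```

By that reading, the calculator, the keyboard corrector and Waze would all seem to qualify.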

The good news, then, is that the EU is choosing to regulate by looking at AI usage rather than at AI systems. Even so, the jury is still out on my own suspicion that the EU is trying to regulate the whole business of SOFTWARE applications, masquerading it as an AI regulation.

 

The Risky Business of AI

When it comes to the usage of AI systems, varying levels of risk are identified. From "unacceptable risk" to "minimal or no risk," it’s almost like we’re talking about amusement parks instead of cutting-edge technology. In the EU, AI systems that manipulate human behaviour will be banned. Goodbye, persuasive toy voice assistants. And don’t even get me started on governments using AI-powered facial recognition for mass surveillance, as this will be proscribed. Unless, that is, you are one of the 27 governments of the EU, since the Council’s position, and that of one of the parliamentary groups in the EP, is that this ban must not apply to national security or law enforcement bodies. If you are confused by the fact that they want to regulate the use of AI by governments but not by their own (whose?), remember that EU governance is a tad schizophrenic, with the large majority of legislation being proposed by its Executive only.

The usage of some AI systems is to be categorized as "high risk," and it's actually not hard to see why. Critical infrastructures, educational systems, and even employment processes are all affected by AI. Just imagine a robot performing surgery, scoring your exams, and deciding whether you're suitable for a job—all in one day! Talk about high stakes. Maybe it's time to reconsider the saying, "It's not brain surgery," because, well, it could be.

Fortunately, not all usages of AI systems fall into the categories above. Some are deemed to have "limited risk," and others "minimal or no risk." It’s a relief to know that AI-enabled video-games and spam filters fall into the minimal risk category. We can all sleep soundly knowing that the EU considers that our virtual adventures and junk email filters won’t be causing havoc in our lives. This is AI-related news that, finally, won’t make our hearts skip a beat. Apparently, our children will be allowed to continue to happily kill as many enemies inside their preferred video-games as they like.
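
For readers who prefer structure to prose, the Act’s usage-based logic can be caricatured as a simple lookup from intended usage to risk tier, as in the sketch below. The tier names follow the proposal, but the example usages and the Python mapping are my own shorthand, not the legal text.

```python
# An illustrative caricature of the AI Act's usage-based risk tiers.
# Tier names follow the proposal; the example usages are shorthand, not legal text.

RISK_TIERS = {
    "unacceptable": {"behavioural manipulation", "mass-surveillance facial recognition"},
    "high": {"critical infrastructure", "exam scoring", "recruitment screening"},
    "limited": {"chatbots with transparency duties"},
    "minimal": {"video games", "spam filters"},
}

def risk_tier(usage: str) -> str:
    """Return the (caricatured) tier for a usage; anything unlisted defaults to minimal."""
    for tier, usages in RISK_TIERS.items():
        if usage in usages:
            return tier
    return "minimal"

print(risk_tier("spam filters"))  # -> "minimal": sleep soundly
print(risk_tier("exam scoring"))  # -> "high": strict obligations apply
```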

Obligations and Oversights

To ensure the safety and reliability of high-risk AI usages, strict obligations will be put in place. From risk assessments and high-quality datasets to clear documentation and human oversight, they’ve seemingly covered all the bases. It’s as if these AI systems are being treated like super intelligent but mischievous children, in need of constant supervision. Let’s hope they don’t start asking for bedtime stories or throw tantrums when things don’t go their way. And whoever has had children in need of supervision knows the real meaning of the word ‘constant’. Which, in the context of the AI Act, translates into enforcement. Yet another EU body is proposed by the EP to support the implementation of the legislation: the AI Office. Given the amount of recent digital legislation stemming from the EU, soon there’ll be more enforcement agents and agencies than other economic actors. Complying with, and enforcing, this regulation will be a Terminator’s nightmare.
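
If those obligations were turned into an engineer’s checklist, it might look like the hedged sketch below. The field names are my own shorthand for the duties mentioned above (risk assessment, dataset quality, documentation, logging, human oversight), not terminology from the Act itself.

```python
# A hedged, illustrative checklist for one high-risk AI usage under the proposal.
# Field names are shorthand for the obligations discussed above, not legal terms.

from dataclasses import dataclass, fields

@dataclass
class HighRiskChecklist:
    risk_assessment_done: bool = False
    dataset_quality_checked: bool = False
    documentation_ready: bool = False
    activity_logging_enabled: bool = False
    human_oversight_in_place: bool = False

    def ready_for_deployment(self) -> bool:
        """Every obligation must be ticked before the usage goes live."""
        return all(getattr(self, f.name) for f in fields(self))

checklist = HighRiskChecklist(risk_assessment_done=True, dataset_quality_checked=True,
                              documentation_ready=True, activity_logging_enabled=True)
print(checklist.ready_for_deployment())  # -> False: someone still has to supervise the child
```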

 

Please don’t get me wrong. Artificial Intelligence usage is certainly complex and evolving. In my view, the AI Act may open wide avenues for wild interpretations of AI that end up encompassing the usage of the totality of IT systems, and not only what might once have been the intention, i.e., to regulate machine learning systems. And even then, I’m not sure it’d be a good idea. Coming back to another episode of this series, it all feels as if the current European Commission felt that it was losing control of the narrative in Europe and the world, and decided to ensure that no system is beyond its control, be it media or computing technology.

In the realm of AI regulation, we must tread cautiously to avoid stifling innovation and impeding the potential benefits that artificial intelligence is ushering in. While oversight is necessary to address risks and ensure ethical use, excessive regulation can hamper progress and hinder the advancements that are definitely and positively impacting society.

As we navigate the European legal framework for AI, let's appreciate the peculiarities that arise. From AI systems that act like mischievous children to the high-stakes decisions they make, there's no denying that a little distance is required, even in the most serious of topics. So, next time you delve into the world of AI, remember to demystify it and enjoy the ride!

 

Keep an eye on this space! In the next episode I’ll describe in more detail what is in the EC proposal, what may come out of the EP vote on 14 June 2023, and what may eventually be promulgated once the negotiations with the Council are concluded. Back to you soon.

 

 

[This blog series is inspired by research work that is or was partially supported by the European research projects CyberSec4Europe (H2020 GA 830929), LeADS (H2020 GA 956562), and DUCA (Horizon Europe GA 101086308), and the CNRS International Research Network EU-CHECK.]

 

Afonso Ferreira

CNRS - France

Digital Skippers Europe (DS-Europe)