There have been numerous calls to establish ethics boards to monitor and control AI development and to enforce transparency, but little has been done. Now a body as influential as the European Commission has issued a 108-page set of draft regulations, with serious consequences for those who ignore the rules.

On April 21, 2021, in Brussels, a press announcement from the European Commission outlined a wide-ranging, coordinated solution. “[It’s] the combination of the first ever legal framework on AI and a new Coordinated Plan with Member States [that] will guarantee the safety and fundamental rights of people and businesses, while strengthening AI uptake, investment and innovation across the EU [European Union]. New rules on Machinery will complement this approach by adapting safety rules to increase users’ trust in the new, versatile generation of products.”


THE RULES

The draft rules will apply across all 27 EU nations, and the penalties make clear the serious intent of the Commission. For infringements involving prohibited practices or noncompliance with the requirements on data, the penalty is up to €30 million or 6% of total worldwide annual turnover for the preceding financial year, whichever is higher. That’s seriously damaging for a small enterprise, but it’s hard to imagine the numbers when that 6% is applied to a company like Google, Microsoft, Apple, or Facebook.
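To make the “whichever is higher” rule concrete, here is a minimal sketch of the calculation. The turnover figures are hypothetical, chosen only to illustrate the difference in scale between a small enterprise and a large technology company; they are not drawn from the article or from any company’s actual accounts.

```python
def max_fine_eur(annual_turnover_eur: float) -> float:
    """Upper bound of the fine for the most serious infringements:
    EUR 30 million or 6% of total worldwide annual turnover,
    whichever is higher (per the draft rules described above)."""
    return max(30_000_000, 0.06 * annual_turnover_eur)

# Hypothetical turnover figures, in euros, for illustration only
for name, turnover in [("small enterprise", 50_000_000),
                       ("large tech company", 180_000_000_000)]:
    print(f"{name}: maximum fine EUR {max_fine_eur(turnover):,.0f}")
```

For the hypothetical small enterprise, the €30 million floor dominates; for the hypothetical large company, 6% of turnover runs to billions of euros.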

The rules will apply to both public and private actors inside and outside the EU when the AI system is placed on the EU market, or its use can affect people who live in the EU bloc.

According to the documents, the new rules will be based on a future-proof definition of AI and will follow a risk-based approach. As a legal matter, the “future-proof” definition is needed for the integrity of enforcement, but it does raise interesting questions about the possible “Who knew?” innovations of future AI. The risk categories are more easily defined (a short illustrative sketch of the four tiers follows their descriptions below):

Unacceptable risk: The rules ban “AI systems considered a clear threat to the safety, livelihoods and rights of people. This includes AI systems or applications that manipulate human behaviour to circumvent users’ free will (e.g. toys using voice assistance encouraging dangerous behaviour of minors) and systems that allow ‘social scoring’ by governments.”

High risk: The initial list of high-risk AI is long. It includes technology used in:

  • Critical infrastructures (for example, transport) that could put citizens at risk;
  • Educational or vocational training that may determine access to education and the professional course of someone’s life (an example would be the scoring of exams);
  • Essential private and public services, such as credit scoring for loans;
  • Law enforcement that may interfere with people’s fundamental rights;
  • Migration, asylum, and border control management (for example, verification of the authenticity of travel documents); and
  • Administration of justice and democratic processes, such as applying the law to a concrete set of facts.

To ensure safety, the rules will subject AI systems to strict obligations before those systems are made available to the public.

Of particular concern to the Commission is the use of remote biometric identification. All of these systems, including facial recognition, are considered high risk and subject to strict requirements. “Their live use in publicly accessible spaces for law enforcement purposes is prohibited in principle. Narrow exceptions are strictly defined and regulated (such as where necessary to search for a missing child, to prevent a specific and imminent terrorist threat or to detect, locate, identify or prosecute a perpetrator or suspect of a serious criminal offence).”

Limited risk: Limited-risk systems carry specific transparency obligations. For instance, an online chatbot must inform users that they’re interacting with a machine so they can make an informed decision to continue or step back.

Minimal risk: The vast majority of today’s AI systems fall into this category, including applications such as AI-enabled video games and spam filters. The draft rules don’t interfere with these, as they represent minimal or no risk to users’ safety.
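Viewed as a simple data structure, the four tiers and the article’s own examples can be summarized in a short sketch. The mapping below is purely illustrative, echoing the examples given above; it is not an official classification from the Commission.

```python
from enum import Enum

class RiskTier(Enum):
    """The four tiers of the Commission's risk-based approach."""
    UNACCEPTABLE = "banned outright"
    HIGH = "strict obligations before market entry"
    LIMITED = "transparency obligations"
    MINIMAL = "no new obligations"

# Illustrative mapping only, based on the examples cited in this article
examples = {
    "social scoring by governments": RiskTier.UNACCEPTABLE,
    "credit scoring for loans": RiskTier.HIGH,
    "online chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}
for system, tier in examples.items():
    print(f"{system}: {tier.name.lower()} risk ({tier.value})")
```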


A EUROPEAN APPROACH

The rules published by the European Commission have their origin in its 2018 Coordinated Plan on Artificial Intelligence. That work helped shape the current European strategy, now described as the Commission’s European approach to excellence in AI.

The press release elaborates, “Coordination will strengthen Europe’s leading position in human-centric, sustainable, secure, inclusive and trustworthy AI. To remain globally competitive, the Commission is committed to fostering innovation in AI technology development and use across all industries, in all Member States.”


A LONG OVERDUE START

Carly Kind is the director of the Ada Lovelace Institute in London, which studies the ethical use of AI. Her comments about the rules in The New York Times reflect the impatience of those who have been waiting for a coordinated legislative effort: “There has been a lot of discussion over the last few years about what it would mean to regulate AI, and the fallback option to date has been to do nothing and wait and see what happens. This is the first time any country or regional bloc has tried.” And although the Commission’s work is welcome, Kind expresses concern that the policy is “overly broad” and that it “left too much discretion to companies and technology developers to regulate themselves.”

The draft rules will require a lengthy period of review and debate, but it’s a start. And something is certainly better than standing by while the technology grows smarter, more powerful, and more opaque with each passing week. The time now set aside to sharpen the focus of the rules and to build coordination and commitment is long overdue.
