EuropeanAI #79: AI Act battleground + pushing standardisation organisations
The EuropeanAI Newsletter
Covering Artificial Intelligence in Europe
Welcome to the EuropeanAI Newsletter covering the European AI and technology ecosystem. If you want to catch up on the archives, they're available here.
What happened in 2022?
The year 2022 saw an array of regulatory developments concerning AI in Europe and beyond. Setting aside the ongoing negotiations on the AI Act, here is a recap of some of the developments we were most excited about:
The EU gradually fleshed out its approach to AI standardization in a strategy published by the European Commission and a report by the European Parliament, which we reported on here and here.
The Council of the European Union approved the Data Governance Act, and the 2021 Machinery Regulation proposal advanced through the legislative process at the EU institutions (an overview of key provisions of both is available here).
The European Commission unveiled its proposal for a regulation on cybersecurity requirements for products with digital elements, the so-called Cyber Resilience Act (our overview is available here).
The European Commission proposed two directives detailing new product liability rules, which cover AI (read our report here).
Spain announced the launch of a pilot regulatory sandbox for the development and testing of AI systems (our commentary is available here).
The U.S.-EU TTC laid the foundations of transatlantic cooperation on AI policies and regulation during its two meetings this year (a brief overview of the first one is available here, and of the second below in this issue).
The UK government announced its initial plans for regulating AI, as well as its AI Action Plan for implementing the National AI Strategy, on which we commented.
Across the Atlantic, the Canadian government tabled a draft law on artificial intelligence, which we reported on here. The US published its Blueprint for an AI Bill of Rights, outlining several principles to guide future policies and practices.
We put a lot of hard work and time into this freely available newsletter. Support us by sharing the subscription link with 3 people who would enjoy this newsletter.
Policy, Strategy and Regulation
The AI Act advances through the Council of the European Union
On 6 December, the Council of the European Union adopted its common position on the AI Act, finalising its compromise text. The compromise text contains several notable deviations from the initial draft of the AI Act proposed by the European Commission.
The definition of “AI system” under Article 3(1) is narrowed to cover only machine learning and logic- and knowledge-based approaches. Annex I (which previously detailed the various AI techniques in scope of the Act) is deleted. General-purpose AI falls within the scope of the AI Act by virtue of a new Title IA in the Council’s compromise text.
The list of high-risk use cases of AI under Annex III is amended as well. Certain types of use cases (e.g. deep fake detection by law enforcement authorities, crime analytics) are excluded from the list. New use cases are added (e.g. critical digital infrastructure, supply of gas, health and life insurance), and existing descriptions of use cases are redrafted in a way that potentially extends their scope (e.g. education). An additional criterion for qualifying an AI system as high-risk is added under Article 6(3): to be considered high-risk, an AI system must not only be deployed for one of the use cases under Annex III but also be likely to pose a significant risk to health, safety or fundamental rights.
The compromise text sets specific objectives in Article 40(2) for the European Commission and the EU standardization bodies to pursue in their standardization efforts. Contributing to international standards development, as per our piece below, features among those objectives. The European Commission is mandated to adopt common technical specifications for the requirements set out in Chapter 2 of Title III with respect to high-risk and general-purpose AI systems if the EU standardization bodies have been requested to develop such standards but have not delivered them.
In terms of governance, the Council’s compromise text advances proposals by the French Presidency and the European Commission regarding testing facilities and extending the scope of regulatory sandboxes, as we have previously reported here. The European Commission will designate one or more AI testing facilities to advise the national market surveillance authorities on the implementation of the AI Act. AI regulatory sandboxes will allow testing of innovative AI systems in real world conditions under the supervision of the national competent authorities. An addition to Article 53(3) now exempts participants in regulatory sandboxes from fines for infringements of EU and national AI-related legislation, provided the participants respected the sandbox plan and the terms and conditions for participation. Participants will, however, remain liable for third-party damages. Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes would also be possible, subject to certain conditions, pursuant to a new Article 54a. An ethical review would, however, be required for such unsupervised real world testing.
End of year EU-US Trade and Technology Council Meeting
Earlier this month, the EU-US Trade and Technology Council (TTC) released a joint statement reaffirming both parties’ commitment to “rules-based” and “human-centric” approaches to technology, trade and digital transformation.
They issued a Joint Roadmap on Evaluation and Measurement Tools for Trustworthy AI and Risk Management (AI Roadmap). The Roadmap intends to align the EU and U.S. approaches to AI risk management and standardization, and outlines key areas for collaboration, such as interoperable terminology in upcoming regulatory frameworks, standardization, and know-how for monitoring and measuring existing and emergent AI risks.
On standardization in particular, the parties plan to cooperate on AI pre-standardization research and international technical standards development. This includes the creation of a knowledge base of metrics for measuring AI trustworthiness and risk management tools. The next steps include setting up various expert working groups, mapping terminology and taxonomies, and conducting landscape analyses of international standards.
Notes in the Margins: They also agreed to set up an early warning mechanism for notification about and cooperation on overcoming disruptions in the chips supply chain. A common mechanism for reciprocal exchange of information about public subsidies to the semiconductor sector will also be set up. The aim appears to be avoiding a subsidies race and attaining more transatlantic cooperation in the chips supply chain.
Enjoy learning about Europe? Share the subscription link with friends, colleagues, enemies...
Contact Charlotte Stix at:
www.charlottestix.com
@charlotte_stix
Dessislava Fessenko provided research and editorial support.
Interesting events, numbers or policy developments that should be included? Send an email!
Disclaimer: this newsletter is personal opinion only and does not represent the opinion of any organisation.
Copyright © Charlotte Stix, All rights reserved.