EuropeanAI #80: Machinery Regulation + EU LLMs
The EuropeanAI Newsletter
Covering Artificial Intelligence in Europe
Welcome to the EuropeanAI Newsletter covering the European AI and technology ecosystem. If you want to catch up on the archives, they're available here.
We put a lot of hard work and time into this freely available newsletter. Support us by sharing the subscription link with 3 people who would enjoy this newsletter.
Policy, Strategy and Regulation
Ongoing points of contention for the last lap of the AI Act
Towards the end of 2022, the Council of the European Union finalized its general approach on the AI Act. That text deviated notably from the original proposal put forward by the European Commission in 2021. The last of the three key decision makers, the European Parliament, has yet to come forward with a final text. In fact, media reports (here and here) indicate that discussions around key aspects such as general purpose AI, fundamental rights and obligations for high-risk users are heated. This reveals a raft of open points for further negotiation, both within the factions of the Parliament and between the Council, Parliament and Commission.
One major point of contention appears to be the criteria for classifying AI systems as high-risk under Article 6, as well as the use cases considered to be high-risk under Annex III.
Views within the Council and the European Parliament also appear to differ on the scope of the impact assessment that high-risk AI systems would need to undergo. In particular, discussions center on whether the assessment should focus narrowly on risks or also address, in more detail, possible violations of fundamental human rights and of EU and national laws.
Equally, discussions are ongoing as to the place of general purpose AI systems within the AI Act.
Finally, the scope of the ban on certain uses of AI under Article 5 still appears to be debated, with Germany reportedly requesting an absolute ban on real-time remote biometric identification in public spaces, as well as on the use of AI systems to assess workplace performance or individuals' likelihood of offending in criminal matters.
European AI standards in the making
Also towards the end of 2022, it was the European Commission's turn to issue a new draft standardization request to the European standardization organizations, the European Committee for Standardisation (CEN) and the European Committee for Electrotechnical Standardisation (CENELEC), instructing them to develop technical standards reflecting key requirements under the AI Act.
Among the standards that CEN and CENELEC are requested to draft are those on: risk management and quality management systems, governance and quality of datasets used to build AI systems (including data curation, design choices, bias and representativeness), transparency and interpretability of AI systems, human oversight, accuracy, robustness and cybersecurity specifications, and conformity assessment of AI systems.
The European Commission’s draft request mandates CEN and CENELEC to cooperate with the International Organization for Standardization (ISO) and the International Electrotechnical Commission (IEC) on standardization efforts in areas where international standards concerning AI systems are missing, insufficient or inadequate.
Notes in the Margins: The Commission’s draft request also notes that ISO/IEC standards “may be adopted as European standards by CEN and CENELEC”. This could significantly shape how, and by whom, international norms and standards around AI are set. Moreover, voluntary guidance for risk management, fairness, transparency, explainability, safety and security of AI systems is currently also being developed by the United States National Institute of Standards and Technology (NIST). The latest draft version of that guidance incorporates some concepts adopted in ISO/IEC technical standards. A final version of NIST’s guidance is expected in January 2023.
Ecosystem
AI talent flow: German edition
Guest note by Pegah Maham, Lead Data Scientist, Stiftung Neue Verantwortung (SNV)
Berlin-based think tank Stiftung Neue Verantwortung (SNV) published an empirical study of AI talent flows in Germany. Focusing on a sample of 898 PhD students supervised by Germany’s leading AI professors, the authors find that half of all PhD students in their sample received their undergraduate degrees at universities abroad. EU countries were found to play a much smaller role as countries of origin than anticipated, while China, India and Iran play a much larger one. Within the first couple of years after graduation, almost 40% of graduates have left Germany, often for the USA, the UK and Switzerland. In those countries, global tech companies are the most important employers of talent from Germany.
Enjoy learning about Europe? Share the subscription link with friends, colleagues, enemies...
Contact Charlotte Stix at:
www.charlottestix.com
@charlotte_stix
Dessislava Fessenko provided research and editorial support.
Interesting events, numbers or policy developments that should be included? Send an email!
Disclaimer: this newsletter is personal opinion only and does not represent the opinion of any organisation.
Copyright © Charlotte Stix, All rights reserved.