Guidance on Classification and Conformity Assessments for High-Risk AI Systems under the EU AI Act
Abstract
The proposal for a Regulation Laying Down Harmonised Rules on Artificial Intelligence (the “AI Act”), adopted by the European
Commission (Commission) in 2021, has been widely debated, in particular Article 6, under which AI systems that could potentially
harm fundamental rights are classified as high risk. Despite its importance for classifying AI systems as ‘high risk’ and for imposing
a requirement for third-party conformity assessments, Annex II of the AI Act has attracted less attention, mostly because of the
technical nature of the debate around it. Given the wide range of products covered by the AI Act and the complexity of classifying
AI systems intended for use in the areas it covers as high risk, the objective of this paper is to provide guidance on the classification
of high-risk AI systems and on the conformity assessments they require.
The Commission’s proposal for an EU AI Act has been debated since April 2021 under the ordinary legislative procedure (co-
decision)1 at both the Council of the EU (Council) and the European Parliament (Parliament); as a result of this debate, it is likely
to be amended. On 6 December 2022, the Council was the first to adopt its amendments to the AI Act proposal (the Council
approach), while the proposal is still under consideration at the Parliament (the Parliament approach). The analysis in this paper
is based principally on the Commission’s draft; the Council and Parliament approaches will be briefly invoked whenever their
proposals have an impact on the classification and conformity assessment of high-risk AI systems.