This blog post was co-authored by Tegan Johnson, Solicitor Apprentice, London and was originally published on Compliance & Risks' 'In Practice Series' blog, December 2023.
The first draft of the Artificial Intelligence Act (the “AI Act”) was proposed by the EU Commission in April 2021, making it the first substantive framework of its kind. It aims to provide a single framework for AI products and services used in the EU, ensuring that products placed on the EU market are safe while allowing for innovation.
The Act will apply to systems used and products placed on the EU market – even where the providers are not in the EU – and adopts a risk-based approach akin to that commonly seen in Medical Device Regulations, with obligations proportional to the level of risk.
There is no single agreed definition of AI within academia and industry, so to define its scope and regulate it in such a comprehensive manner is a bold approach by the European Commission – akin to its ambitions when introducing the General Data Protection Regulation (GDPR), in seeking to put in place a gold-standard, globally influential regulatory framework.
The AI Act has now entered the final stage of the legislative process, with the EU Parliament and Member States thrashing out the details of the final wording; certain aspects in particular have been subject to intense debate. Indeed, as recently as early December, the final trilogues were taking place and substantive amendments were being debated. A political agreement on the AI Act has since been announced, but we are awaiting publication of a final draft, likely in the New Year. Where possible, we have included the reported outcomes of those debates in the analysis below.
Once a final form is agreed and approved, the AI Act will enter into law and, following a grace period of up to two years, its requirements will apply. The European Commission's ambition is for a final draft to be agreed before next year's European elections, for fear that missing that window would cause significant delays.
While the draft may still be subject to some changes, its fundamentals are so important, and such a step change for actors in this space, that businesses deploying AI should understand the proposed requirements and their passage into law.
The Legal Framework
The key provisions of the law include:
| Key Provision | Summary |
|---|---|
| Definition Of AI | “Software that is developed with one or more of the techniques and approaches listed in Annex I and can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations, or decisions influencing the environments they interact with”. Annex I includes: machine learning approaches, logic programming, and search and optimisation methods. |
| Entities Within Scope | |
| Definitions Of Entities | |
| Risk Levels | The Act follows a risk-based approach whereby AI systems are categorised by level of risk, with obligations proportionate to the level of risk posed. The risk levels comprise: |
| Prohibited Uses | Some forms of AI are explicitly prohibited under the AI Act, including: |
| High Risk AI System Obligations | In addition, recent debates appear to have introduced a requirement for bodies providing public services (such as healthcare or education) to conduct a “fundamental rights impact assessment” before deploying high-risk AI systems. |
| Limited Risk AI System Obligations | Limited risk systems are much less strictly regulated, and there are fewer obligations on parties placing such systems on the market. The obligations that do apply include: |
| Exclusions & Exemptions | Military use exclusion: a specific exclusion for AI systems developed or used exclusively for military purposes (the latest debates suggest this will apply both to AI systems used by nations and to external contractors), and for those used for law enforcement and judicial enforcement where utilised by public authorities. Notable proposed exemptions, debated but not currently in the available draft text: exemption conditions for products ordinarily falling within the high-risk classification have been debated, with strong dispute amongst Member States. Proposals include exemptions for AI systems: |
The Consequences For Non-Conformity
It is Member States' responsibility to set the exact penalties for breach, as is the case under many EU product safety regimes. However, the draft outlines some specific examples and caps, and we can draw conclusions from other regimes that may help in understanding the potential penalties for breach. The possible penalties include:
| Procedures / Powers | Practical Meaning |
|---|---|
| Corrective Actions | Competent Authorities generally have powers to rectify non-compliance, prevent a non-compliant product from being placed on the market, and/or order its withdrawal or recall. Authorities will have such powers in relation to AI systems under the General Product Safety Regulation as well as under the draft provisions of the AI Act. The AI Act develops this further, granting Competent Authorities additional investigatory powers, including a requirement that regulatory authorities be granted access, where necessary, to the training data, source code and other relevant information relating to an AI system in order to determine whether a breach has taken place. |
| Monetary Penalty | The draft Act outlines varying fines and scales of fine, intended to reflect the type and seriousness of the non-conformity. Fines are capped at 30 million euros or 6% of global income (whichever is higher) for set infringements, and 20 million euros or 4% of global income (whichever is higher) for others – though it is expected that Member States will create their own detailed rules and scales in practice. |
| Criminal Sanctions | Select regulations allow competent authorities or Member States to set sanctions, which can include criminal sanctions and imprisonment for serious breaches. It remains to be seen whether Member States will set such strict sanctions for breaches relating to AI products. |
| Civil Liability | Where a company fails to comply with its obligations, it may become liable for any resulting damages. In addition to this general exposure, the draft AI Liability Directive, also being progressed by the EU, would compel the provision of evidence by providers of AI systems and reverse the burden of proof (where certain conditions are met) to assist claimants bringing claims under product safety legislation. |
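The monetary caps above follow a simple "higher of a fixed sum or a percentage of global income" rule. As a minimal illustrative sketch – the `penalty_cap` helper and its interface are ours, not from the Act; the figures are the draft caps quoted above, and Member States may set their own detailed scales within them:

```python
def penalty_cap(global_income_eur: float, severe: bool = True) -> float:
    """Illustrative fine cap under the draft AI Act (hypothetical helper).

    Set (severe) infringements: the higher of EUR 30m or 6% of global income.
    Other infringements: the higher of EUR 20m or 4% of global income.
    """
    if severe:
        return max(30_000_000, 0.06 * global_income_eur)
    return max(20_000_000, 0.04 * global_income_eur)

# For a company with EUR 1bn global income, 6% (EUR 60m) exceeds the EUR 30m floor.
print(penalty_cap(1_000_000_000))              # 60000000.0
# For a smaller provider, the fixed floor applies.
print(penalty_cap(100_000_000, severe=False))  # 20000000
```

Note that in both tiers the percentage limb overtakes the fixed floor only once global income exceeds EUR 500 million (30m / 0.06 = 20m / 0.04 = 500m), so for smaller businesses the fixed cap is the operative one.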
Checklist To Improve Compliance
- Assess the scope of the draft and its key provisions as currently drafted to determine whether your products are likely to fall within the regulation, and the implications of their potential risk level – particularly where AI systems, or products applying such systems, might fall within the high-risk category.
- Consider undertaking an internal review of practices, particularly with a view to ensuring that sufficient data is recorded and retained to be of use in any future inspection or for compliance with the Act.
- All businesses expecting to be affected by the law should monitor future amendments, commencement dates and related laws that will change or supplement the framework it provides.
- If you already provide AI systems, adding disclaimers regarding the risks and intended usage of your products could go a long way towards assisting compliance – especially for products categorised as lower risk.
- The draft reveals certain priorities, one being the accuracy of the data used: data used for training and as input should be accurate and as free of bias as possible. Grappling with data quality sooner rather than later may save a last-minute rush for compliance.
- Products in certain industries (medical devices, for example) will automatically be categorised as high risk. For these companies, the obligations are much more stringent. Creating mitigation and internal governance plans early, and enlisting specialist help, can establish a functioning system from the outset.