EU AI Act: Up to EUR 35 Million Fines for Prohibited AI Practices
The EU AI Act brings new challenges and significant risks for non-compliance: fines for prohibited artificial intelligence ("AI") practices can reach EUR 35 million or even more. The prohibitions apply from 2 February 2025. This substantial financial exposure raises the stakes for compliance, making it essential for businesses to understand the boundaries of acceptable AI use under the EU AI Act. In this article, I explore the types of violations that could trigger these fines and how to mitigate the risks.
Prohibited AI practices
The EU AI Act prohibits the placing on the market, putting into service, and use of several AI practices that pose unacceptable risks to fundamental rights and public interests:
I. Manipulative and deceptive AI systems: AI systems that use subliminal or purposefully manipulative techniques to influence human behavior by materially impairing individuals' ability to make informed decisions, causing or being reasonably likely to cause significant harm.
Examples of such manipulative AI systems include deepfake technology used for propaganda, addictive social media algorithms, dark-pattern AI in e-commerce, AI chatbots employing subliminal persuasion, and AI-driven manipulation in political campaigns.
II. AI systems exploiting vulnerabilities: AI systems that exploit vulnerabilities based on age, disability, or socio-economic status to influence behavior, causing or being reasonably likely to cause significant harm.
Examples of such AI systems include AI-powered recruitment tools that exploit job-seekers' desperation, mental health apps that exploit emotional vulnerability, and gambling apps whose AI targets people with known gambling addictions.
The prohibitions of manipulative and exploitative practices referred to in sections I and II above do not affect lawful practices in the context of medical treatment such as psychological treatment of a mental disease or physical rehabilitation, when those practices are carried out under the applicable law and medical standards. Additionally, common and legitimate business practices, such as advertising, that comply with the applicable law shall not, as such, be regarded as constituting harmful manipulative AI-enabled practices.
III. Social scoring: AI systems that evaluate or classify people over time based on their social behavior or known, inferred, or predicted personal characteristics, where the resulting social score leads to unjustified or disproportionate detrimental treatment.
This ban does not affect lawful evaluation practices carried out for a specific purpose in accordance with European Union and Member State national law.
IV. Risk assessments for criminal predictions: AI systems that assess or predict a person’s risk of committing a criminal offense solely based on profiling or personality traits.
An exception applies where AI systems are used to support human assessment based on objective and verifiable facts directly linked to criminal activity.
V. Untargeted facial recognition: AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage.
VI. Emotion recognition in workplaces and educational institutions: AI systems designed to infer individuals' emotions in workplaces and educational institutions.
An exception applies where such AI systems are intended to be used for medical or safety reasons.
VII. Biometric categorization: AI systems that categorize individuals based on their biometric data (such as fingerprints) to deduce or infer sensitive characteristics such as political opinions, race, sex life, or sexual orientation.
This prohibition does not cover any labelling or filtering of lawfully acquired biometric datasets, such as images, based on biometric data (e.g. sorting images based on eye colour) or categorizing of biometric data in the area of law enforcement.
VIII. Real-time biometric identification for law enforcement: the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement is prohibited, unless it is strictly necessary for one of the following objectives:
(i) the targeted search for specific victims of abduction, human trafficking, or sexual exploitation of humans, as well as the search for missing individuals;
(ii) the prevention of a specific, substantial and imminent threat to the life or physical safety of individuals or a genuine and present or genuine and foreseeable threat of a terrorist attack;
(iii) the localization or identification of a person suspected of having committed a criminal offense, for the purpose of conducting a criminal investigation or prosecution or executing a criminal penalty for offenses referred to in Annex II of the AI Act and punishable in the Member State concerned by a custodial sentence or a detention order for a maximum period of at least 4 years.
Additional safeguards provided under the AI Act apply to the above exceptions to the use of real-time remote biometric identification systems for law enforcement.
The prohibitions under the AI Act do not affect prohibitions under other European Union law, such as data protection, non-discrimination, consumer protection, and competition law, which continue to apply.
Guidelines by the authorities
The European Commission has promised to issue guidance on the prohibitions before they enter into force. It is to be hoped that this guidance will include examples of prohibited AI systems and an interpretation of essential concepts such as "significant harm" and "deceptive techniques".
On 24 September 2024, the Department for the Coordination of Algorithmic Oversight of the Dutch Data Protection Authority called for input on manipulative and exploitative AI practices. Input can be provided to the authority via email (dca@autoriteitpersoonsgegevens.nl) until 17 November 2024. The purpose of this call is to gather information and insights from stakeholders (citizens, governments, businesses, and other organisations) and their representatives. The collected information will be used to prepare guidelines on prohibited AI practices.
Penalties
Non-compliance with the rules on prohibited AI practices will be subject to administrative fines of up to EUR 35 million or, if the offender is an undertaking, up to 7% of the offender's total worldwide annual turnover for the preceding financial year, whichever is higher. Non-compliant AI systems can also be taken off the European Union market.
Non-compliance of European Union institutions, bodies, offices, and agencies with the prohibited practices shall be subject to administrative fines of up to EUR 1.5 million.
Each Member State shall separately establish rules on administrative fines for public authorities and bodies established in that Member State.
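For undertakings, the "whichever is higher" rule means the EUR 35 million figure is only a floor on the cap: above roughly EUR 500 million in turnover, the 7% limb takes over. A minimal sketch of that calculation (the function name is my own; the figures follow the penalty rules described above):

```python
def max_fine_eur(worldwide_annual_turnover_eur: int) -> float:
    """Upper bound of the administrative fine for prohibited AI practices:
    EUR 35 million or 7% of the offender's total worldwide annual turnover
    for the preceding financial year, whichever is higher (undertakings)."""
    FIXED_CAP_EUR = 35_000_000
    # Compute 7% via integer multiplication first to avoid float rounding.
    turnover_share = worldwide_annual_turnover_eur * 7 / 100
    return max(FIXED_CAP_EUR, turnover_share)
```

For example, an undertaking with EUR 400 million in turnover faces a cap of EUR 35 million (7% would only be EUR 28 million), while one with EUR 1 billion in turnover faces a cap of EUR 70 million.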
Practical steps for compliance
Step 1 – Identify and document all AI systems used in your company.
Step 2 – Assess the AI systems against the prohibited AI practices.
Step 3 – Develop and implement a remediation plan.
If any AI systems are identified as prohibited, plan their discontinuation or replacement with compliant AI systems and implement the plan by 2 February 2025.
Step 4 – Establish internal policies and procedures on acceptable use of AI systems to prevent non-compliance.
Step 5 – Continuously monitor the use of AI systems in your company by different stakeholders (developers, HR, the marketing team, etc.).
Step 6 – Educate colleagues about the requirements of the AI Act and the importance of compliance.
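Steps 1–3 above amount to keeping an inventory of AI systems and flagging any that may fall under a prohibited practice. A minimal sketch of such a register (all names and the record format are my own illustration; the AI Act does not prescribe any particular inventory structure):

```python
from dataclasses import dataclass, field

# Shorthand labels for the eight prohibited practices discussed above.
PROHIBITED_PRACTICES = [
    "manipulative or deceptive techniques",
    "exploitation of vulnerabilities",
    "social scoring",
    "criminal risk prediction based solely on profiling",
    "untargeted facial image scraping",
    "emotion recognition at work or in education",
    "biometric categorisation of sensitive traits",
    "real-time remote biometric identification for law enforcement",
]

@dataclass
class AISystemRecord:
    """One entry in the company's AI system inventory (Step 1)."""
    name: str
    owner: str       # responsible team or stakeholder (Step 5)
    purpose: str
    flagged_practices: list = field(default_factory=list)

    def assess(self, practice: str) -> None:
        """Flag a prohibited practice the system may fall under (Step 2)."""
        if practice in PROHIBITED_PRACTICES:
            self.flagged_practices.append(practice)

    @property
    def needs_remediation(self) -> bool:
        """Step 3 trigger: any flagged practice requires a remediation plan."""
        return bool(self.flagged_practices)
```

For instance, a record for an HR screening tool flagged under "social scoring" would report `needs_remediation` as true, signalling that discontinuation or replacement must be planned before 2 February 2025.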