
Regulation (EU) 2024/1689, the Artificial Intelligence (AI) Act, aims to promote innovation and uptake of AI while ensuring a high level of protection of health, safety, and fundamental rights. It classifies AI systems into four risk categories: unacceptable risk, high risk, transparency risk, and minimal or no risk.
On 4 February 2025, the European Commission issued guidelines on the prohibited AI practices established by Regulation (EU) 2024/1689 (AI Act). The guidelines are important for organisations and businesses with AI systems in the EU and detail which AI practices are unacceptable.
A summary of the eight prohibited AI practices, with examples of AI systems to which they apply and systems that are out of scope, is set out below.
Prohibited AI practices:
1. Harmful Manipulation and Deception - Article 5(1)(a)
AI systems that employ subliminal, manipulative, or deceptive techniques that distort behaviour and cause significant harm are prohibited.
- Subliminal techniques: Examples include AI systems that use visual and auditory subliminal messages, subvisual and subaudible cueing and embedded images that can unconsciously influence users.
- Manipulative techniques: Examples include AI systems that purposefully manipulate users through background audio or images to induce mood changes, such as increased anxiety or mental suffering, resulting in significant harm.
- Deceptive techniques: Examples include an AI chatbot that impersonates a friend or relative with a synthetic voice leading to scams and significant harm.
2. Harmful Exploitation of Vulnerabilities - Article 5(1)(b)
AI systems that exploit vulnerabilities due to age, disability, or socio-economic situations resulting in distortion of behaviour and significant harm are prohibited.
- An example is an AI-powered toy that encourages children to complete increasingly risky challenges for digital rewards and virtual praise, potentially leading to dangerous behaviour and physical harm.
- AI systems that use lawful persuasion rather than manipulation and that are not likely to cause significant harm are outside the scope of Article 5(1)(a) and (b).
- For example, an AI system that uses personalised recommendations based on transparent algorithms and user preferences engages in lawful persuasion.
- For example, a therapeutic chatbot that uses subliminal techniques to guide users towards a healthier lifestyle and to quit bad habits such as smoking is not likely to cause significant harm, even if users experience physical discomfort and psychological stress because of the effort made to quit smoking.
3. Social Scoring - Article 5(1)(c)
AI systems that evaluate or classify individuals based on social behaviour or personal characteristics resulting in unjustified treatment are prohibited.
- An example is an AI predictive tool that analyses taxpayers’ tax returns to select returns for closer inspection. The AI tool uses relevant variables such as yearly income as well as unrelated data such as a taxpayer’s social habits or internet connections to select individuals for closer inspection, leading to potential discrimination.
- Out of scope legitimate scoring practices include financial credit scoring systems which are used by creditors and credit information agencies to determine a customer’s ability to repay debts by analysing a range of financial data such as the customer’s income and expenses.
4. Individual Criminal Offence Risk Assessment and Prediction - Article 5(1)(d)
AI systems that predict criminal behaviour based solely on profiling or personality traits, without objective human assessment, are prohibited.
- An example is a law enforcement authority that uses an AI system to predict criminal behaviour based on personal characteristics such as age, nationality, address and marital status, leading to unjust profiling.
- Out of scope AI systems include predictive policing systems which generate a score for the likelihood of criminality in different neighbourhoods based on previous criminality rates and other supporting information such as street maps, allowing law enforcement to identify areas that require additional police presence.
5. Untargeted Scraping of Facial Images - Article 5(1)(e)
Creating or expanding facial recognition databases through untargeted scraping from the internet or CCTV footage is prohibited.
- Regarding images from the internet, the fact that a person has published facial images of themselves on social media does not mean that the person has given permission for the images to be included in a facial recognition database. An example of scraping images from CCTV is an AI tool that uses images from surveillance cameras in public spaces, such as airports and streets, without consent.
- Out of scope AI systems include databases that contain facial images but do not associate them with identifiable individuals e.g. datasets used solely for training or testing AI models without any intent to recognise or identify the persons in the images.
6. Emotion Recognition - Article 5(1)(f)
AI systems that infer emotions in workplaces and educational institutions, except for medical or safety reasons, are prohibited.
- An example of ‘emotion recognition’ is an AI system that infers that an employee is unhappy, sad or angry towards customers from body gestures, a frown or the absence of a smile.
- Out of scope are AI systems that infer emotions other than on the basis of biometric data (e.g. from written text) and AI systems that infer physical states (e.g. pain and tiredness).
7. Biometric Categorisation - Article 5(1)(g)
Categorising individuals based on biometric data to infer sensitive characteristics such as race, political opinions, or sexual orientation is prohibited.
- An example is an AI system that categorises social media users by their presumed political orientation based on biometric data from uploaded photos to send them targeted political messages.
- Examples of permissible filtering include the categorisation of patients using images according to skin or eye colour, which may be important for a medical diagnosis such as cancer.
8. Real-time Remote Biometric Identification (RBI) - Article 5(1)(h)
The use of real-time RBI systems in publicly accessible spaces for law enforcement purposes is prohibited, with exceptions only for serious threats and criminal investigations.
Safeguards and conditions for the exceptions (Article 5(2)-(7) AI Act) are documented in the guidelines, which aim to ensure the responsible and ethical use of AI technologies while safeguarding fundamental rights and promoting trust in AI systems.
Enforcement of Article 5 AI Act
The prohibitions in Article 5 AI Act have applied since 2 February 2025; the provisions on penalties, governance and confidentiality will apply from 2 August 2025.
Market surveillance authorities are responsible for enforcing the AI Act’s rules for AI systems, including the prohibitions. The AI Act employs a tiered penalty system designed to match the severity of the infringement with the corresponding penalty. Non-compliance with the prohibitions in Article 5 AI Act is classified as the most serious infringement and is subject to the highest fines: up to EUR 35,000,000 or, if the offender is an undertaking, up to 7% of its total worldwide turnover for the preceding financial year, whichever is higher.
Conclusion
AI in healthcare, such as software as a medical device (SaMD), must be developed with stringent ethical standards to ensure patient safety and protection of fundamental rights. The guidelines remind developers and regulators alike of the importance of maintaining transparency and safeguarding against AI misuse. As the healthcare sector continues to integrate AI into medical devices, these guidelines will serve as a key framework for ensuring that AI-driven solutions prioritise the well-being of patients while promoting innovation and trust in the healthcare system.
If you need guidance on navigating AI challenges, contact us today to arrange a free, no-obligation discussion.