Emotion Recognition in the Workplace: Compliance Challenges Ahead of the February 2025 EU Ban
The European AI Act will prohibit the use of AI systems designed to infer or identify emotions in workplace and educational settings, effective 2 February 2025.
To clarify this prohibition, the Dutch Data Protection Authority (Autoriteit Persoonsgegevens, AP) has released a call for input, inviting insights on the functions and applications of emotion recognition AI, especially in or near the regulated settings. The call poses key questions aimed at refining the understanding of these systems.
Key Learnings from the Call for Input on Emotion Recognition AI
Strict Prohibition in Specific Contexts
The EU AI Act enforces a clear prohibition on emotion recognition AI systems used to infer emotions in workplaces and educational settings, as stated in Article 5(1)(f).
Article 5, paragraph 1, subparagraph f (‘prohibition F’):
“The placing on the market, the putting into service for this specific purpose, or the use of AI systems to infer emotions of a natural person in the areas of workplace and education institutions, except where the use of the AI system is intended to be put in place or into the market for medical or safety reasons;”
This prohibition is intended to protect against privacy risks and the potential for misuse in contexts where individuals may feel pressured or monitored. Exceptions are limited strictly to applications for medical or safety purposes.
Detailed Criteria for Defining Prohibited Systems
The call for input outlines specific criteria to determine whether an emotion recognition AI system falls under this prohibition (a minimal code sketch of these checks follows the list):
Criterion 1: Inference and Identification of Emotions and Intentions
Systems are prohibited if they infer or identify emotions or intentions using biometric data. The AP assumes that both inference and identification fall under this rule, emphasizing the broad scope of the prohibition when it comes to interpreting emotional states from biometric indicators.
Criterion 2: Scope of Emotions or Intentions
Prohibited emotions and intentions include feelings such as happiness, sadness, anger, and embarrassment. However, basic expressions or gestures, such as a smile or a raised voice, are not restricted unless used to infer deeper emotions. Physical states, such as pain or fatigue, are also excluded from this prohibition, highlighting a boundary between observable physical signs and inferred emotions.
Criterion 3: Use of Biometric Data
The prohibition applies specifically to emotion recognition systems based on biometric data, as defined in both the AI Act and the GDPR. Biometric data includes any personal data derived from physical, physiological, or behavioral characteristics that can uniquely identify individuals. This highlights the Act's focus on data privacy, especially in the use of sensitive biometric indicators.
Criterion 4: Workplace and Educational Settings
This criterion restricts emotion recognition systems in workplaces and educational institutions, covering even related settings such as remote work environments or online learning. This broad interpretation emphasizes the importance of protecting individuals from being emotionally profiled in contexts where power dynamics could lead to discomfort or unfair treatment.
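To make the four criteria concrete, below is a minimal Python sketch of how an organization might pre-screen a system against them before seeking legal review. The `EmotionAISystem` fields, the `likely_prohibited` function, and the example inputs are hypothetical illustrations of the criteria as summarized above, not an official legal test.

```python
from dataclasses import dataclass

# Hypothetical pre-screening model. Field names and logic are illustrative
# assumptions drawn from the AP's four criteria; legal review is still required.

@dataclass
class EmotionAISystem:
    infers_emotions_or_intentions: bool  # Criterion 1: infers or identifies emotions/intentions
    targets_emotional_states: bool       # Criterion 2: emotions (anger, sadness, ...), not
                                         # physical states such as pain or fatigue
    uses_biometric_data: bool            # Criterion 3: processes biometric data (AI Act/GDPR sense)
    workplace_or_education: bool         # Criterion 4: workplace or educational setting,
                                         # including remote work and online learning
    medical_or_safety_purpose: bool      # Exception under Article 5(1)(f)

def likely_prohibited(system: EmotionAISystem) -> bool:
    """Return True if the system appears to meet all four criteria of
    'prohibition F' and no medical/safety exception applies."""
    meets_all_criteria = (
        system.infers_emotions_or_intentions
        and system.targets_emotional_states
        and system.uses_biometric_data
        and system.workplace_or_education
    )
    return meets_all_criteria and not system.medical_or_safety_purpose

# Example: a tool analyzing facial expressions to score call-centre agents' moods
call_centre_tool = EmotionAISystem(
    infers_emotions_or_intentions=True,
    targets_emotional_states=True,
    uses_biometric_data=True,   # facial images are biometric data
    workplace_or_education=True,
    medical_or_safety_purpose=False,
)
print(likely_prohibited(call_centre_tool))  # True -> flag for legal review
```

Note that all four criteria must hold for the prohibition to apply; a system that detects only physical states such as fatigue, or that operates outside workplace and educational settings, would fall outside prohibition F, though it may still be high-risk under other provisions.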
Flexibility in Swiss Law
While the EU AI Act imposes an outright ban in these contexts, Swiss law under Article 26 of Ordinance 3 to the Employment Act (EmpO 3) allows emotion recognition AI if used proportionately for legitimate purposes, such as quality control or organizational improvements. This difference in regulatory approach suggests that while the EU prioritizes absolute safeguards, Swiss regulations provide more flexibility, as long as the use is not aimed solely at monitoring employee behavior.
Importance of Transparency and Clear Purpose
The AI Act's stance highlights the importance of transparency and proportionality. Organizations must ensure that AI systems are not used disproportionately and that their deployment serves a clear, limited purpose. This emphasis on purpose aligns with both privacy protection and regulatory compliance.
No High-Risk Classification for Emotion Recognition in Workplaces
The call for input clarifies that emotion recognition AI in workplaces and education is outright prohibited, not merely classified as high-risk. In other contexts, however, emotion recognition systems may be categorized as high-risk, subject to strict compliance requirements but not prohibition. This distinction reinforces the EU's cautious approach in high-stakes, controlled environments.
Cross-Jurisdictional Compliance Challenges
For organizations operating in both the EU and Switzerland, understanding these distinctions is crucial. The EU's stricter limitations in specific settings, contrasted with Switzerland's proportionality-based allowance, call for flexible governance frameworks that can adapt to each regime's requirements while upholding privacy and ethical standards.
Conclusion: Preparing for a Complex Regulatory Landscape
The EU’s strong stance on emotion recognition AI reflects a broader commitment to upholding privacy and ethics in technology. For organizations navigating these evolving regulations, it’s essential to consider not only the local regulation and operational goals of emotion recognition systems but also the wider implications for data privacy, employee rights, and organizational transparency.
This call for input underlines the need for adaptive, robust AI governance frameworks that can meet the highest standards in privacy and ethics across jurisdictions.
As AI systems continue to evolve, maintaining a proactive approach to compliance and transparency will be key to deploying AI in diverse regulatory landscapes.