You may well be aware of the EU AI Act, the world’s first comprehensive artificial intelligence law. Dr Alice Kerly has been investigating how it will affect the Conversational Artificial Intelligence (CAI) industry, and this post outlines our thoughts.
Navigating the EU AI Act: What it Means for Conversational Artificial Intelligence
The EU AI Act is the first comprehensive legal framework for Artificial Intelligence worldwide – standards such as ISO/IEC 42001:2023 and the National Institute of Standards and Technology (NIST) AI Risk Management Framework have thus far been voluntary, rather than regulatory.
This landmark act aims to foster innovation in trustworthy AI by ensuring that AI systems respect fundamental rights, safety, and ethical principles. The act has significant implications for developers and deployers of AI systems, including those in the Conversational AI (CAI) field, such as companies that implement chatbots, voicebots and virtual assistants.
Understanding the AI Act’s Risk-Based Approach
The AI Act employs a risk-based approach: it categorises AI systems into four levels of risk, with more stringent regulation and compliance requirements applying the higher the identified risk (a short code sketch of the tiers follows the list below).
- Unacceptable Risk: AI systems that pose a clear threat to people’s safety, livelihoods, and rights are prohibited. This includes practices such as social scoring by governments, and voice-activated toys that encourage dangerous behaviour.
- High Risk: AI systems that negatively affect safety or fundamental rights are considered high risk. High-risk AI systems are subject to strict obligations before they can be deployed. These include AI used in critical infrastructures (e.g. transport), education, employment, essential private and public services, law enforcement, migration, and administration of justice.
- Limited Risk: This category covers risks arising from a lack of transparency in AI usage. The AI Act introduces specific transparency obligations to ensure that humans are informed when necessary, fostering trust.
- Minimal or No Risk: This category includes the vast majority of AI systems currently used in the EU, such as AI-enabled video games and spam filters. There are no restrictions on these systems.
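To make the tiered structure concrete, here is a minimal Python sketch of how a team might record the Act’s risk tiers for internal triage. The enum names and the obligations mapping are our own illustrative assumptions, summarising the points above rather than quoting the Act itself.

```python
from enum import Enum

class AIActRiskTier(Enum):
    """The four risk tiers defined by the EU AI Act."""
    UNACCEPTABLE = "unacceptable"  # prohibited outright
    HIGH = "high"                  # strict pre-deployment obligations
    LIMITED = "limited"            # transparency obligations
    MINIMAL = "minimal"            # no additional restrictions

# Illustrative mapping from tier to headline obligations; a hypothetical
# internal summary, not a substitute for legal analysis.
OBLIGATIONS = {
    AIActRiskTier.UNACCEPTABLE: ["Do not deploy in the EU"],
    AIActRiskTier.HIGH: ["Risk assessment", "Data governance", "Documentation",
                         "Human oversight", "Robustness, security and accuracy"],
    AIActRiskTier.LIMITED: ["Disclose AI interaction", "Label AI-generated content"],
    AIActRiskTier.MINIMAL: [],
}

def obligations_for(tier: AIActRiskTier) -> list[str]:
    """Look up the headline obligations for a given risk tier."""
    return OBLIGATIONS[tier]

print(obligations_for(AIActRiskTier.LIMITED))
```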
Impact on Conversational AI
The AI Act applies to CAI development and deployment, although the level of impact depends on the application and associated risk level.
- Transparency is key: The act introduces specific transparency obligations for CAI. Users should be made aware that they are interacting with a machine so that they can make informed decisions (see the sketch after this list).
- Labelling AI-generated content: Providers must ensure that AI-generated content is identifiable – text, audio, and video content created using AI must be labelled as such.
- High-Risk Applications: CAI used in certain contexts (e.g. employment or law enforcement) may be considered high risk. It will be subject to strict obligations including risk assessments, data quality, traceability, detailed documentation, human oversight, and a high level of robustness, security, and accuracy.
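As an illustration of the transparency and labelling points above, the sketch below shows one way a CAI deployment could surface a machine-interaction disclosure and label generated text. The function names and the disclosure wording are our own assumptions; the Act specifies the obligation, not the implementation.

```python
from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an automated assistant, not a human."

@dataclass
class BotMessage:
    text: str
    ai_generated: bool = True  # used downstream to label content

def start_session() -> BotMessage:
    """Open every conversation with an explicit machine-interaction notice."""
    return BotMessage(text=AI_DISCLOSURE)

def label_for_display(msg: BotMessage) -> str:
    """Prefix AI-generated text with a visible label before rendering."""
    prefix = "[AI-generated] " if msg.ai_generated else ""
    return prefix + msg.text

print(label_for_display(start_session()))
print(label_for_display(BotMessage(text="Your order has shipped.")))
```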
What Companies Should Do to Comply
Companies deploying CAI solutions may need to take several steps to comply with the AI Act:
- Risk Assessment: Determine the level of risk posed by their AI systems. If the system falls under the ‘high-risk’ category, adhere to all associated obligations.
- Transparency: Implement clear mechanisms to inform users when they are interacting with an AI system. Ensure AI-generated content is clearly labelled.
- Data Quality: Ensure high-quality datasets are used to train the AI, to minimise risks and discriminatory outcomes.
- Documentation: Maintain detailed documentation on the AI system, its purpose, and how it complies with the act.
- Human Oversight: Implement appropriate human oversight measures to minimise risk.
- Ongoing Monitoring: Establish a post-market monitoring system, report serious incidents and malfunctions, and ensure ongoing quality and risk management (a minimal logging sketch follows this list).
- Compliance with Copyright Law: For generative AI models, ensure compliance with EU copyright law. This includes publishing summaries of copyrighted data used for training.
- Testing Environments: Take advantage of testing environments (regulatory sandboxes) provided by national authorities to test systems under real-world conditions.
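To illustrate the ongoing-monitoring point above, here is a minimal sketch of structured incident logging that could feed a post-market monitoring process. The record fields, severity labels, and file format are assumptions for illustration; actual reporting formats will depend on your compliance process and regulatory guidance.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class IncidentRecord:
    """A structured record of a serious incident or malfunction.
    Fields are illustrative; align them with your compliance process."""
    system_id: str
    description: str
    severity: str      # e.g. "serious", "malfunction", "near-miss"
    detected_at: str
    user_impact: str

def log_incident(record: IncidentRecord, path: str = "incident_log.jsonl") -> None:
    """Append the incident as one JSON line, giving an auditable trail."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_incident(IncidentRecord(
    system_id="support-bot-v2",
    description="Assistant gave incorrect refund eligibility information",
    severity="malfunction",
    detected_at=datetime.now(timezone.utc).isoformat(),
    user_impact="Customer misinformed; corrected by a human agent",
))
```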
The Future of Life Institute, a nonprofit that works on maximising the benefits of technology and mitigating its associated risks, has produced an interactive compliance checker that companies can use to help determine whether their AI system will be subject to the EU AI Act.
Where does the EU AI Act Apply?
Although it is European Union (EU) legislation, the AI Act applies beyond EU borders. It covers the supply and use of AI systems that might be sold or used in the EU, regardless of the location of the maker or seller. Therefore, if you are creating AI models or systems, including CAI, that are to be used in the EU, you may be in scope of the law and need to demonstrate that you meet its provisions.
Navigating the Transition
The AI Act came into force on 1 August 2024, and will be fully applicable two years later, on 2 August 2026, with some exceptions:
- Prohibitions on systems that pose unacceptable risks, and AI literacy obligations, took effect on 2 February 2025.
- Governance rules and obligations for general-purpose AI apply from 2 August 2025.
- Rules for high-risk AI systems embedded into regulated products have an extended transition period, until 2 August 2027.
To facilitate this transition, the European Commission launched the AI Pact, a voluntary initiative encouraging AI developers to comply with the AI Act ahead of time.
Final Thoughts
The EU AI Act is a significant step towards ensuring responsible and ethical AI development and deployment. For the field of CAI, this means:
- an emphasis on transparency
- risk management
- adherence to strict guidelines for high-risk applications
While compliance may pose challenges, the act ultimately aims to foster trust in AI technologies, which is crucial for the long-term success of Conversational AI. By taking proactive steps towards compliance, companies can ensure they are well-positioned in this evolving landscape.
Please note that these thoughts do not constitute legal advice. We would always recommend seeking legal advice when developing or deploying AI systems affected by the act, to ensure a proper understanding of risk and compliance obligations.