AI cybersecurity standard (ETSI EN 304 223)
The European Telecommunications Standards Institute (ETSI) has published standard EN 304 223, the first European standard establishing a structured set of cybersecurity requirements for artificial intelligence (AI) systems. With this publication, ETSI provides organizations with a harmonized framework for identifying, managing and mitigating AI-specific security threats, with the aim of strengthening the resilience of AI systems throughout their lifecycle.
Executive summary
The growing deployment of AI systems across all sectors has accelerated the need for robust security standards tailored to the specific risks of AI. As AI technologies become more complex, especially with the rise of deep neural networks and generative models, the ETSI EN 304 223 standard provides organizations with a structured, lifecycle-based framework to protect AI assets against threats such as data poisoning, model manipulation and indirect prompt injection. The standard comprises 13 principles organized across five main lifecycle stages.
Main content
The document describes AI security principles to help organizations proactively mitigate AI-specific vulnerabilities, ensure visibility of model behavior, and maintain accountability through documentation, traceability, and human oversight.
- Secure design. Secure design emphasizes incorporating security considerations from the earliest stages of creating an AI system. It encourages organizations to integrate threat awareness, conduct risk assessments early on, and establish mechanisms that allow humans to monitor system behavior and intervene when necessary. The design phase also requires documenting key architectural decisions and ensuring that systems remain resilient in the face of adverse or unexpected interactions before moving on to the development phase.
- Secure development. During development, the focus is on creating a controlled and well-protected environment for data, models, and supporting infrastructure. This includes maintaining an accurate record of assets, implementing strict access controls, protecting third-party components, and validating the robustness of AI systems through structured testing. The development phase reinforces the importance of transparency by ensuring that datasets, models, and prompts are traceable throughout the engineering process.
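To make the traceability idea in the development principle concrete, the following is a minimal sketch (not prescribed by the standard; all names are illustrative) of an asset register that records datasets, models and prompts with a content hash and an accountable owner, so tampering can be detected later in the engineering process:

```python
# Illustrative sketch only: a minimal AI asset register with
# content hashes for integrity checking. Names and structure are
# hypothetical, not taken from EN 304 223.
import hashlib
from dataclasses import dataclass


@dataclass
class AssetRecord:
    name: str    # e.g. "training-set-v2"
    kind: str    # "dataset", "model" or "prompt"
    sha256: str  # content hash recorded at registration time
    owner: str   # accountable team or person


class AssetRegister:
    def __init__(self) -> None:
        self._records: dict[str, AssetRecord] = {}

    def register(self, name: str, kind: str, content: bytes, owner: str) -> AssetRecord:
        digest = hashlib.sha256(content).hexdigest()
        record = AssetRecord(name, kind, digest, owner)
        self._records[name] = record
        return record

    def verify(self, name: str, content: bytes) -> bool:
        # Recompute the hash and compare to the stored record,
        # so unexpected modification of the asset is detectable.
        record = self._records[name]
        return hashlib.sha256(content).hexdigest() == record.sha256


register = AssetRegister()
register.register("train-set-v1", "dataset", b"example rows", "ml-team")
print(register.verify("train-set-v1", b"example rows"))   # True
print(register.verify("train-set-v1", b"poisoned rows"))  # False
```

A register like this also supports the access-control requirement: since every asset has a named owner, reviews of who may modify which dataset or model become straightforward.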
- Secure deployment. Deployment addresses how organizations communicate with users and stakeholders once the AI system goes live. It requires operators to clearly explain the system's behavior, data usage, known limitations, and any safety-relevant conditions that may affect outcomes. The standard also requires explicit procedures to assist users and affected entities during incidents, ensuring that reporting channels and support processes remain accessible and well defined.
- Secure maintenance. The maintenance phase focuses on maintaining the safety and reliability of AI systems over time. This involves timely updates, monitoring system performance, reviewing logs, and identifying anomalies that may indicate misuse or emerging threats. Organizations are expected to treat major updates as significant system changes and reassess security measures accordingly, ensuring sustained protection throughout operational use.
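The log review and anomaly identification described for the maintenance phase can be sketched very simply. The detector below (a hypothetical illustration, not a mechanism defined by the standard) flags a metric such as request rate when it deviates sharply from its recent rolling history:

```python
# Illustrative sketch only: a crude rolling z-score detector as a
# stand-in for the operational monitoring the maintenance phase
# calls for. Thresholds and window size are arbitrary assumptions.
from collections import deque
from statistics import mean, stdev


class RollingAnomalyDetector:
    def __init__(self, window: int = 20, threshold: float = 3.0) -> None:
        self.window = deque(maxlen=window)
        self.threshold = threshold  # z-score above which we alert

    def observe(self, value: float) -> bool:
        """Return True if `value` is anomalous versus recent history."""
        anomalous = False
        if len(self.window) >= 5:  # need some history before judging
            mu, sigma = mean(self.window), stdev(self.window)
            if sigma > 0 and abs(value - mu) / sigma > self.threshold:
                anomalous = True
        self.window.append(value)
        return anomalous


detector = RollingAnomalyDetector()
for rate in [100, 102, 98, 101, 99, 100, 103]:
    detector.observe(rate)    # normal traffic, no alerts
print(detector.observe(500))  # sudden spike -> True
```

In practice such an alert would feed the organization's incident process; the point of the sketch is only that monitoring must compare current behavior against an established baseline rather than inspect values in isolation.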
- Secure end of life. End-of-life requirements ensure that AI systems and related assets are properly decommissioned. This includes the secure disposal or transfer of models, datasets, and configuration elements so that no residual information can be exploited after retirement. The standard highlights the need for controlled decommissioning processes involving relevant data stakeholders to prevent security breaches from persisting beyond the system's final stage.
Download the technical note on the AI cybersecurity standard (ETSI EN 304 223).