As artificial intelligence continues to embed itself in critical sectors, including healthcare, manufacturing and mobility, concerns about cybersecurity and system resilience are growing just as quickly.
AI systems are increasingly complex, interconnected and data-driven, making them targets for cyberattacks and vulnerable to failures that can have real-world consequences. A new European research initiative, SHASAI, has been launched to address these challenges head-on.
A lifecycle approach to AI security
SHASAI, short for Secure Hardware and Software for AI systems, is a new project funded under the European Union’s Horizon Europe programme. Its main goal is to strengthen the security, resilience and trustworthiness of AI-based systems throughout their entire lifecycle.
Rather than focusing on isolated technical fixes, the project takes a comprehensive approach that spans system design, development, deployment and real-world operation.
By integrating secure hardware, strong software engineering and risk-driven design methods, SHASAI aims to reduce vulnerabilities before AI systems are even deployed. This approach reflects a growing recognition that cybersecurity cannot be added as an afterthought, especially for AI technologies that interact directly with people, infrastructure and sensitive data.
A key strength of the SHASAI project is its focus on practical validation. The consortium will test and demonstrate its methods and tools across three real-world scenarios spanning different sectors.
In the agrifood industry, the project will work with AI-enabled cutting machines, where security failures could disrupt production or compromise safety. In healthcare, SHASAI will examine eye-tracking systems used in assistive technologies, where reliability and data protection are especially critical. The third use case involves a tele-operated last-mile delivery vehicle, representing the growing role of AI in mobility and logistics.
By working across these applications, the project aims to ensure its results are not limited to a single domain. Instead, the tools and practices developed through SHASAI are designed to be transferable to a wide range of AI-driven systems.
Supporting trustworthy and compliant AI
Beyond technical innovation, SHASAI contributes to Europe’s broader strategy for trustworthy AI. The project helps translate high-level principles around cybersecurity and AI safety into concrete engineering practices that organisations can apply.
SHASAI is closely aligned with major European regulatory and policy frameworks, including the EU AI Act, the Cyber Resilience Act, the NIS2 Directive and the EU Cybersecurity Strategy.
By embedding security and compliance considerations directly into system design, the project supports organisations in meeting regulatory requirements while maintaining innovation and competitiveness.
A strong European collaboration
The SHASAI consortium brings together 16 partners from five countries: Spain, Italy, Germany, the Netherlands and Türkiye. Coordinated by IKERLAN in Spain, the group includes research organisations, universities, industry partners and technology providers. This mix of expertise allows the project to address AI cybersecurity from multiple perspectives, combining academic research with industrial needs and real-world deployment experience.
The project officially started on 1 November 2025 and will run until the end of April 2029. Over this period, SHASAI aims to deliver practical tools, validated methodologies and best practices that help organisations deploy AI systems that are secure, resilient and ready for real-world challenges.