Technical Committee (TC) Securing Artificial Intelligence (SAI) Activity Report 2023

Chair: Scott Cadzow

Developing technical specifications to mitigate threats arising from the deployment of Artificial Intelligence (AI), and threats to AI systems, from both other AIs and from conventional sources.

AI is becoming increasingly pervasive, with many software applications eventually likely to include AI elements. The primary responsibility of ETSI’s Industry Specification Group on Securing Artificial Intelligence (ISG SAI) has been to develop technical reports and specifications that mitigate threats arising from the deployment of AI – and threats to AI systems – from both other AIs and from conventional sources. In response to the growing interest in AI, and recognizing that standards will be needed for some time to come, the ISG SAI members agreed to transfer their activity to TC SAI, an action completed in December 2023 with the first meeting of ETSI TC SAI and the formal transfer of the work programme to the new body.

As a Technical Committee, SAI will have the capability to contribute directly to standardization requests, which may include, but are not limited to, those relating to the future AI Act, the Cyber Resilience Act (notably Article 8) and NIS2. As such, the committee's members will create standards that serve as a baseline to ensure that AI is secure, safe, societally relevant and suitable. To achieve this, it will take into account the views of stakeholder groups including end users, manufacturers, operators and governments. In its work on societal aspects, the committee will focus on protecting at-risk populations, such as those who may be targeted by AI‑generated content.

As the only ETSI standardization group focused on securing AI, ISG SAI has been working to rationalize the role of AI within the threat landscape and, in doing so, to identify measures that will lead to the safe and secure deployment of AI. The group’s work has laid the foundations of a longer-term response to threats to AI by sponsoring the future development of normative technical specifications in the new TC SAI.

The underlying rationale for ETSI's ISG (and subsequently TC) SAI is that autonomous mechanical and computing entities may make decisions that act against relying parties, either by design or as a result of malicious intent. The conventional cycle of risk analysis and countermeasure deployment, represented by the Identify-Protect-Detect-Respond cycle, needs to be reassessed when an autonomous machine is involved, if only to protect against the accelerated learning curve of machines.

ISG SAI’s work notably addresses the following aspects of AI in the standards domain:

  • Securing AI from attack, e.g. where AI is a component in the system that needs defending (illustrated by the sketch after this list).
  • Mitigating against AI, e.g. where AI is the ‘problem’ (or used to improve and enhance other more conventional attack vectors).
  • Using AI to enhance security measures against attack, e.g. where AI is part of the ‘solution’ (or used to improve and enhance more conventional countermeasures).
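
To make the first of these aspects concrete, the sketch below shows a minimal evasion attack against a toy linear classifier. It is purely illustrative and not drawn from any SAI deliverable: the synthetic “detector” data, the model and the perturbation budget eps are all invented for this example.

```python
import numpy as np

# Hypothetical illustration: a tiny "detector" trained on synthetic
# data, then evaded by a gradient-guided input perturbation.
rng = np.random.default_rng(0)

# Two Gaussian clusters; label 1 represents the "malicious" class.
X = np.vstack([rng.normal(-1.0, 1.0, (200, 5)),
               rng.normal(1.0, 1.0, (200, 5))])
y = np.concatenate([np.zeros(200), np.ones(200)])

# Fit a logistic regression by plain gradient descent.
w, b = np.zeros(5), 0.0
for _ in range(500):
    p = 1.0 / (1.0 + np.exp(-(X @ w + b)))
    w -= 0.5 * (X.T @ (p - y)) / len(y)
    b -= 0.5 * np.mean(p - y)

def score(v):
    """Model's probability that input v is malicious."""
    return 1.0 / (1.0 + np.exp(-(v @ w + b)))

x = X[-1]                          # a sample from the malicious cluster
print("original score:", score(x))

# FGSM-style evasion: step against the gradient of the logit w.r.t.
# the input (for a linear model that gradient is simply w), pushing
# the score toward the benign class.
eps = 1.5
x_adv = x - eps * np.sign(w)
print("perturbed score:", score(x_adv))
```

A small, targeted change to the input sharply reduces the detector’s confidence: this is the kind of attack on the AI component itself that the first bullet refers to.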

ISG SAI’s work has been agnostic to the AI system deployment use case, focusing instead on fundamental threats to and from AI systems – especially where these differ from the threats to traditional IT systems – and on the exploration of appropriate mitigation strategies.

ISG SAI has developed technical standards and reports that act as a baseline in ensuring that AI systems are secure. Stakeholders impacted by the activity of the group include end users, manufacturers, operators and governments.

In 2023 ISG SAI published four deliverables as Group Reports (GR). Two of these address, respectively, the explicability and transparency of AI processing and a security framework for AI computing platforms. A third explores the threats posed by so-called ‘deepfakes’ and strategies to minimize them, while the fourth defines a framework for multi-partner Proofs of Concepts.

  • GR SAI 007 ‘Explicability and transparency of AI processing’ – this report identifies steps to be taken by designers and implementers of AI platforms to give assurance of the explicability and transparency of AI processing (which encompasses AI decision making and AI data processing). The GR addresses both static and dynamic forms so that designers can ‘show their working’ (explicability) and be open to examination (transparency). For example, an AI can be biased by design if its purpose is to filter candidates for a job based on some personal characteristic. In such a case the explicability and transparency requirements make it possible to identify that negative or trait-based filtering is at the root of the AI’s reasoning engine (see the first sketch after this list). NB: This deliverable will be republished in 2024 as an ETSI TC SAI report.
  • GR SAI 009 ‘Artificial Intelligence Computing Platform Security Framework’ – this Group Report describes a security framework for an AI computing platform, comprising hardware and software that protect valuable assets such as models and data deployed on the platform, whether in use at runtime or stored at rest. The framework consists of the security components in the AI computing platform and the security mechanisms they execute. By applying the framework, an AI computing platform can be hardened against the relevant attacks and can provide security capabilities to the stakeholders in AI systems who need to better protect the valuable assets (models/data) on that platform (see the second sketch after this list). NB: This deliverable will be republished in 2024 as an ETSI TC SAI report.
  • GR SAI 011 ‘Automated Manipulation of Multimedia Identity Representations’ – this groundbreaking report focuses on the use of AI for manipulating multimedia identity representations to create what are commonly referred to as ‘deepfakes’. Outlining immediate concerns raised by the rise of AI, the report also analyzes the threats posed by deepfakes in various attack scenarios as the basis for further technical and organizational measures to mitigate these threats. NB: This deliverable will be republished in 2024 as an ETSI TC SAI report.
  • GR SAI 013 ‘Proofs of Concepts Framework’ – this report describes a ‘lightweight’ framework to be used by ETSI ISG SAI (and subsequently TC SAI) to create multi-partner Proofs of Concepts (PoCs). Intended as a tool to demonstrate the applicability of ideas and technology, the framework is designed to be inclusive of as many AI‑based solutions as possible, including those fulfilling critical functions related to data analysis, infrastructure management and (cyber)security. In theory, an AI‑based system can become a target in its own right, and detecting such attacks can pose a significant challenge. However, real-world examples of such attacks remain uncommon, and understanding of the practical aspects – how to conduct an impactful attack against an AI‑based system on the one hand, and how to defend against and respond to such a threat on the other – is still limited. The role of PoCs in highlighting relevant issues is thus key to the success of securing AI. NB: This deliverable will be republished in 2024 as an ETSI TC SAI report.
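
The bias example cited for GR SAI 007 can be made concrete with a minimal transparency sketch. Everything in it is hypothetical – the feature names, weights and candidate data are invented – but it shows how per-feature contributions of a linear scoring model let a reviewer ‘see the working’ and spot trait-based filtering:

```python
import numpy as np

# Hypothetical candidate-screening model: for a linear scorer, the
# per-feature contributions w_i * x_i make the decision inspectable.
# Names, weights and values are invented for illustration only.
features = ["years_experience", "test_score", "protected_trait"]
w = np.array([0.3, 0.4, -2.0])      # biased model: the trait dominates
candidate = np.array([5.0, 0.8, 1.0])

contributions = w * candidate
for name, c in zip(features, contributions):
    print(f"{name:>18}: {c:+.2f}")
print(f"{'total score':>18}: {contributions.sum():+.2f}")
# The breakdown exposes 'protected_trait' as the dominant negative
# contribution, i.e. trait-based filtering drives the decision.
```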
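
For GR SAI 009, one mechanism a platform security framework could rely on is protection of model assets at rest. The sketch below is an assumption for illustration only – the report defines the framework, not this code – and uses the Python cryptography library’s Fernet authenticated encryption on serialized weights:

```python
import numpy as np
from cryptography.fernet import Fernet  # pip install cryptography

# Hypothetical sketch: encrypting model weights before they are
# stored, one possible mechanism within a platform security framework.
weights = np.random.rand(4, 4).astype(np.float32)  # stand-in "model"

key = Fernet.generate_key()   # in practice held by a key-management or
f = Fernet(key)               # hardware security component

ciphertext = f.encrypt(weights.tobytes())  # what is written to disk

# At load time the platform decrypts (and implicitly integrity-checks,
# since Fernet is authenticated encryption) before runtime use.
restored = np.frombuffer(f.decrypt(ciphertext),
                         dtype=np.float32).reshape(4, 4)
assert np.array_equal(weights, restored)
```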

Development continued on a new Group Specification, plus a further five Group Reports, all of which were transferred to the new TC SAI in late December 2023 with publication expected in early 2024. These variously address: traceability of AI models; collaborative AI; security aspects of using AI/ML in the telecom sector; an AI computing platform security framework; security testing of AI; and privacy aspects of AI/ML systems.

During the year, ISG SAI continued to work closely with TC CYBER and OCG AI to consider how its activities can contribute to the development of future EU Harmonised Standards under the EU AI Act; this participation will continue under TC SAI in 2024.

See the full list of SAI Work Items currently in development here.