Published: Oct 23, 2023

Saudi Data and Artificial Intelligence Authority Reveals AI Ethics Principles 2.0

The Saudi Data and Artificial Intelligence Authority (“SDAIA”) recently published the second iteration of its AI Ethics Principles (“AI Principles 2.0”). The objective of the AI Principles is to establish a principle-based ethical framework for the development and use of AI technologies in Saudi Arabia.

The framework is based on seven principles and accompanying controls: Fairness, Privacy & Security, Humanity, Social & Environmental Benefits, Reliability & Safety, Transparency & Explainability, and Accountability & Responsibility. SDAIA’s principles-based approach to compliance shows a commitment to a holistic and evolving compliance regime, as opposed to a static, check-box style approach. This approach is reflected in the requirement that the principles and controls be applied throughout the entire lifecycle of an AI project, starting from the design and planning phase and continuing through the deployment and monitoring phase.

As for scope and application, the AI Principles apply equally to all public, private and non-profit AI stakeholders, whether they are developers, designers, users or individuals affected by AI systems within the Kingdom. There is some ambiguity on this point, in that the scope of application is described as obligatory in some instances and optional in others. Clarity on this aspect will be essential.

Interestingly, the previous version of the AI Principles (Version 1.0, 2022) envisaged a limited waiver/exception to certain aspects of the framework; this has not been retained in AI Principles 2.0. Under the previous version, non-compliant AI systems could be built if it could be demonstrated (with good cause) that the purpose of the non-compliant AI system was to maintain public health, protect the lives of individuals, or protect the vital interests of the Kingdom. The removal of the exception/waiver seems to suggest that SDAIA views ethical considerations as fundamental aspects that must underpin all AI projects within Saudi Arabia, without exception.

The new version of the AI Principles has also introduced a tiered AI risk categorization system based on the risks associated with the development and use of AI systems and technologies. AI systems categorized as having little or no risk need not comply with the AI Principles (although compliance is recommended); AI systems categorized as having either limited risk or high risk are required to comply with the AI Principles and additional controls (as applicable); and AI systems deemed to have unacceptable risk (such as risks associated with the exploitation of children or the safety of lives) are prohibited and cannot be developed.

The AI Principles provide a number of self-assessment tools for assessing and mapping AI risks against the principles. Adopting entities are primarily responsible for ensuring their own compliance with the AI Principles; as such, they are required to appoint certain key roles for assessing and monitoring compliance, including a Responsible AI Officer (RAIO) and an AI System Assessor.

In addition to the self-assessment tools, adopting entities are also encouraged to register with SDAIA under an optional registration scheme. Registered entities will be motivated to ensure high levels of compliance through a ‘badge’ system that reflects their commitment to compliance. Such a system seems intended to help end users identify trustworthy AI tools and technologies.

Key Contacts

Nick O’Connell

Partner, Head of Digital & Data - Saudi Arabia

n.oconnell@tamimi.com