Martin Hayward - Head of Digital & Data - Digital & Data
A pioneer for innovation in the Arab world, the UAE government’s Centennial strategy recognises the pivotal role of emerging technologies in fostering economic development, reducing the impact of a future financial crisis, and addressing the 21st century’s unique set of challenges. In fact, innovation is at the very heart of the Centennial vision for a “future-focused government” and plays a central role in the UAE’s goal of becoming a diversified knowledge-based economy. Accordingly, the UAE government has drawn up various initiatives aimed at investing in emerging technologies and preparing future generations with the skills and knowledge needed to future-proof their careers within the context of this changing landscape. Specifically, Artificial Intelligence (‘AI’) has been mentioned as one of the key tools which will be used to “achieve the objectives of the UAE Centennial 2071”.
The UAE’s Strategy for Artificial Intelligence (‘AI Strategy’), launched in 2017, followed the objectives of the country’s National Innovation Strategy, which aims to use innovation as a pillar to achieve the targets of the UAE’s Vision 2021 and become one of the best countries in the world. This year marks the realisation of that vision, as the UAE has maintained its position in the Global Innovation Index (‘GII’) as the leading nation for innovation in the Arab world for the fifth consecutive year.
The UAE has shown that it backs vision and strategy with rigorous policy making, agile strategising and the adoption of disruptive technologies across key sectors. Accordingly, when it comes to the UAE’s strategies, AI is not just a buzzword. The AI Strategy aims to boost government performance on all levels, and to use AI to provide a range of services across key sectors such as health, space, transport, energy, water, technology, education, and traffic. Recognising the huge cost-saving potential of automation, the strategy also sets the ambitious goal of saving 50 per cent of annual government costs using AI.
Other core objectives set by the AI Strategy include establishing an incubator for AI-related innovations, and employing AI in the field of customer service. Smart Dubai, the government office charged with the city’s overall digital transformation, has an AI Lab focused on accelerating Dubai towards becoming the smartest city in the world. More tangible strategies include the Autonomous Transportation Strategy, which aims to transform 25 per cent of transportation in Dubai to autonomous mode by 2030.
The strategies have led to noticeable AI developments in the UAE government. For example, ‘Rashid’ is a call centre virtual agent which answers customers’ questions about various transactions in Dubai. Another example is in healthcare, where the UAE Ministry of Health and Prevention (‘MOHAP’) has used AI in chest x-ray examinations to diagnose diseases such as tuberculosis. Within the transport sector, the Roads and Transport Authority (‘RTA’) has developed Dubai’s Autonomous Taxi, which brings Dubai one step closer to achieving the targets set by the Autonomous Transportation Strategy.
Artificial Intelligence is expected to bring about a monumental shift in the pillars of our economy and society. To put this into context, PwC estimates that AI could contribute US$96 billion to the UAE’s GDP in 2030 (around 13.6 per cent of GDP). In this light, clear policy initiatives are required to ensure that our transition towards the Fourth Industrial Revolution is as ethical and responsible as practicable. To set the framework for a smooth transition, the UAE has established an AI Council which is tasked with proposing policies to create an AI-friendly ecosystem, encourage advances in research and promote collaboration between the public and private sectors. Notably, the AI Council has announced that it aims to issue a government law on the safe use of AI. The UAE’s National Programme for AI has also committed to initiatives and collaboration between the private and public sectors. Examples include the Think AI Initiative, which includes a series of strategic roundtables to accelerate AI’s adoption, and a one-year AI training programme designed for government employees.
With great opportunity come great challenges, and AI is in many regards a double-edged sword for policymakers. While automation can lead to increased efficiency, there are several ethical dilemmas with artificial intelligence that should be carefully considered. As discussed below, the UAE is already taking many of the required regulatory precautions to effectively manage and optimise the use of AI. Having said that, continuous monitoring and review of the risks associated with AI are required to maximise its benefits over the next 50 years.
The following are just a few examples of how AI is being used and some of the associated policy challenges.
Automated decision making with AI spans anything from online recruitment tools to credit allocation and loan processing. For example, AI software is already increasingly being used to screen resumes. The algorithms in these tools typically use natural language processing (‘NLP’) to recognise word patterns in resumes and assess them against the company’s criteria. As with credit decisions, machine learning and big data allow many types of data to be factored in.
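To illustrate the mechanics described above, the short Python sketch below scores resumes by matching word patterns against a fixed set of criteria keywords. The criteria, applicants and scoring rule are invented for this example and do not reflect any particular vendor’s product.

import re
from collections import Counter

# Hypothetical criteria keywords; a real tool's criteria would be far richer.
COMPANY_CRITERIA = {"python", "sql", "machine", "learning", "negotiation"}

def tokenise(text: str) -> Counter:
    """Lower-case the text and count word occurrences (a crude word-pattern model)."""
    return Counter(re.findall(r"[a-z]+", text.lower()))

def score_resume(resume_text: str) -> float:
    """Score a resume by the share of criteria keywords it mentions at least once."""
    words = tokenise(resume_text)
    matched = sum(1 for term in COMPANY_CRITERIA if words[term] > 0)
    return matched / len(COMPANY_CRITERIA)

if __name__ == "__main__":
    applicants = {
        "Applicant A": "Experienced in Python, SQL and machine learning pipelines.",
        "Applicant B": "Led negotiation of supplier contracts; strong drafting skills.",
    }
    for name, cv in applicants.items():
        print(name, round(score_resume(cv), 2))

Even in this toy version, the choice of criteria keywords entirely determines which applicants score well, which is where the bias concerns discussed below originate.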
This raises two issues for policymakers. The first is that automated decision making can lead to biased outcomes (often reflecting biases in the underlying data), and the second is the “black box” difficulty of explaining how an AI system reached its decision.
An example of the first issue is a recruiting tool used by Amazon, where currently 74 per cent of the company’s managerial positions are held by men. Because the algorithm used NLP to recognise word patterns in resumes and compared them against the company’s predominantly male engineering department to determine an applicant’s fit, it was found to be gender-biased.
While the UAE needs to take legislative action to prevent discriminatory uses of AI in the future, the current DIFC Data Protection Law already has a section on automated individual decision making, whereby the data subject has the right to object to a decision based solely on automated processing where it can produce legal or similarly significant consequences. This is similar to other data protection laws such as the GDPR.
In order for the UAE to take advantage of the full potential of AI, more policy development and dialogue between the private and public sectors is needed to clarify which laws will apply to emerging technologies.
Deep learning algorithms, through the use of convolutional neural networks (‘CNNs’), can analyse visual imagery. Facial recognition tools can identify or verify a person by comparing their facial contours against previous data patterns (training data). MIT researchers have found a potential for bias in the algorithms within some facial recognition systems, where darker-skinned faces were less likely to be recognised accurately. This is because the training sets of those facial recognition tools were overwhelmingly male and white; where the person in the photo (the unseen data) was a white man, the model had 99 per cent accuracy.
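The kind of disparity the MIT researchers identified can be made concrete with a simple per-group accuracy audit. The sketch below tallies recognition accuracy for each demographic subgroup from a hypothetical evaluation log; the records and group labels are illustrative only.

from collections import defaultdict

# (subgroup, correctly_recognised) pairs from a hypothetical evaluation set.
results = [
    ("lighter-skinned male", True), ("lighter-skinned male", True),
    ("lighter-skinned male", True), ("darker-skinned female", True),
    ("darker-skinned female", False), ("darker-skinned female", False),
]

def accuracy_by_group(records):
    """Return the share of correct recognitions for each subgroup."""
    totals, correct = defaultdict(int), defaultdict(int)
    for group, ok in records:
        totals[group] += 1
        correct[group] += int(ok)
    return {group: correct[group] / totals[group] for group in totals}

print(accuracy_by_group(results))
# A large gap between subgroups suggests the training data was unrepresentative.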
Officials at Dubai airport have announced plans to use robots to detect the faces of wanted criminals in airports, and their use is expected to grow under the UAE’s futuristic vision. In line with this, policymakers need to be aware of the current ethical issues with facial recognition software and draw up adequate policies to address them. Proposed regulations could include imposing a legal obligation to use a diverse training data set.
One of the most controversial examples of the use of AI in court proceedings is criminal justice algorithms, sometimes referred to as “risk assessments” or “evidence-based methods”, which purport to predict the future behaviour of defendants.
At the present stage, the UAE has opted to use AI to enhance, rather than replace, judicial decision making. For example, in the Abu Dhabi Courts, judges may use machine learning algorithms trained on past cases (predictive analytics) to assist them in sentencing and acquittal decisions. Likewise, within the DIFC Courts, Amna Al Owais, the Chief Registrar of the DIFC Courts, has said that they are looking to use predictive analytics to help judges in their research.
Machine learning algorithms are used to make autonomous vehicles (‘AVs’) capable of real-time judgments; a car must learn and adapt to unpredictable scenarios created by surrounding objects. Key concerns are the safety of such algorithms and the ethical dilemmas involved in programming them. These issues must be addressed as part of the UAE’s Autonomous Transportation Strategy.
Despite the ethical and policy issues raised by the increasing use of AI, the UAE, like many jurisdictions, has yet to enact an overarching law specifically targeted at regulating the use of AI. However, this does not stop existing regulations, such as those relating to product liability, from applying to AI just as they would to any other malfunctioning product. For a detailed discussion of the safety, liability, privacy and discrimination laws that would apply to AI, please see our article titled “Artificial Intelligence: Regulatory Approaches in the UAE and Abroad”.
The UAE recognises the importance of creating an ethical infrastructure: Smart Dubai’s AI Lab has issued a set of non-binding AI principles and ethics guidelines.
While this is a good start, binding regulation may well be required in the UAE in future to encourage innovative use of AI, provide conceptual clarity, and manage the associated risks. Regulators need to strike a balance between safeguarding against the risks present in uses of AI and avoiding stifling potential advancements. The US and EU provide good examples that can be followed and improved upon in the UAE’s next decades.
In the EU, there has been a wealth of new guidelines, publications, and declarations from various bodies on AI in recent years. Notably, the EU’s ethics guidelines for trustworthy AI include principles similar to Smart Dubai’s, striving to ensure that the use of AI is “trustworthy”. The guidelines flag that user trust is an essential component in the deployment of new technology: without trust, the economic and societal benefits of AI cannot be realised, as users will not adopt it. The guidelines outline lawfulness, ethics, and robustness as the three pillars of trust.
The EU’s Policy and Investment Recommendations for Trustworthy AI set out further steps that governments can take to further the adoption and acceptance of AI. These include:
a) developing an AI-specific cybersecurity infrastructure; and
b) redeveloping education to reflect emerging technologies, and reskilling and upskilling the workforce to prepare for the changing landscape.
As discussed above in relation to AVs and algorithmic decisions, the question of liability becomes increasingly important when adopting autonomous systems. Well-conceived laws on liability for AI systems can help manage this.
For example, the European Commission has published a report on Liability for Artificial Intelligence and other Emerging Digital Technologies, which makes recommendations on how liability regimes should be adapted to these technologies.
Regarding algorithmic decisions, the US has introduced a bill on algorithmic accountability, the Algorithmic Accountability Act (S. 1108, H.R. 2231). If enacted, it would require covered entities to conduct “impact assessments” on their “high-risk” automated decision systems to evaluate the impacts of the system’s design process and training data on “accuracy, fairness, bias, discrimination, privacy, and security”. Adopting such a law in the UAE could help to reduce algorithmic bias in companies that rely on automated decision making.
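As an illustration of what such an “impact assessment” might record in practice, the sketch below defines a simple assessment structure covering some of the dimensions the bill names and flags systems that fall short of a chosen threshold. The fields, threshold and escalation rule are hypothetical and are not taken from the Act itself.

from dataclasses import dataclass

@dataclass
class ImpactAssessment:
    """A hypothetical record of an automated decision system assessment."""
    system_name: str
    accuracy: float          # overall accuracy on a held-out evaluation set
    fairness_gap: float      # largest difference in outcome rates between groups
    privacy_reviewed: bool   # whether personal-data handling was reviewed
    security_reviewed: bool  # whether the system passed a security review

    def needs_escalation(self, max_gap: float = 0.1) -> bool:
        """Flag the system for human review if any dimension falls short."""
        return (self.fairness_gap > max_gap
                or not self.privacy_reviewed
                or not self.security_reviewed)

assessment = ImpactAssessment("loan-scoring-model", accuracy=0.91,
                              fairness_gap=0.18, privacy_reviewed=True,
                              security_reviewed=True)
print(assessment.needs_escalation())  # True: the fairness gap exceeds the threshold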
Facial recognition is another sensitive use of AI where legal clarity is essential. In the US, the proposed Commercial Facial Recognition Privacy Act would prohibit entities from using “facial recognition technology to collect facial recognition data unless such entities: (1) provide documentation that explains the capabilities and limitations of facial recognition technology; and (2) obtain explicit affirmative consent from end users”.
A “wait and see” approach has largely been adopted by governments in relation to AI as they seek to understand how this emerging technology will develop and mature, and what hitherto unknown risks may emerge. It is a delicate balance to mitigate such risks whilst avoiding the stifling of innovation. Work is underway by the World Economic Forum’s Centre for the Fourth Industrial Revolution to develop a roadmap for policymakers to facilitate discussion on regulatory approaches to AI. It will be interesting to see whether the UAE’s leadership in the advancement of AI will also see a focus on developing policies, laws and regulations that drive fairness, transparency and accountability whilst maintaining trust in the use of AI.
For further information, please contact Martin Hayward (m.hayward@tamimi.com).