We recently held, here at Al Tamimi & Company, a panel presentation and discussion comprising a number of internal and external experts on the rise of AI, its various use cases, and the potential pitfalls and risks that may arise.
This article provides a summary of that panel presentation and outlines a number of key takeaway recommendations on the use of AI.
AI is often defined as intelligent machines that can perform tasks that would normally require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
AI has exploded into the fore of the public consciousness through the popular generative AI tool ChatGPT. The speed of ChatGPT's adoption cannot be overstated: it reached one million users in five days, whereas Facebook took ten months to reach the same number.
Generative AI is clearly the most popular form of AI at present, courtesy of ChatGPT. Generative AI typically takes a user input (or prompt) and generates the content that it determines makes the most sense as a response to that prompt. The output can range from text to code, images, and data.
ChatGPT and generative AI can be quite powerful at generating content that would appear to be otherwise created by a human being.
The UAE has adopted a forward-thinking strategy of supporting and encouraging technological developments in the AI sphere, rather than stifling growth. Panellist Saqr Bin Ghalib, Executive Director of the Artificial Intelligence, Digital Economy, and Remote Work Applications Office, explained how the government seeks to encourage the side-by-side use of AI technology to enhance efficiency and productivity.
This approach is demonstrated by the UAE Government initiative Reglab. Reglab was launched in 2019 in partnership with the Dubai Future Foundation to create an agile regulatory sandbox that can flexibly and quickly test, and adapt to, rapid developments in technology.
The Office has also recently launched a “Generative AI Guide” which comprehensively outlines challenges and opportunities around Generative AI and recommends optimal approaches for managing the technology. 100 use-cases and applications of generative AI are detailed in the Guide, across a range of industrial and technological sectors.
ChatGPT is a language model trained on historical data, not a knowledge model. The model functions by sequencing words according to a probability distribution over the most likely next word given the previous words, rather than according to what is factually correct. Therefore, if prompted lazily or improperly, ChatGPT will happily manufacture information and present it as the truth, so long as the response appears to make sense. This poses risks where the data the AI uses to learn and generate content is false or biased.
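The next-word mechanism described above can be sketched in a few lines of Python. This is a deliberately simplified, hypothetical illustration, not ChatGPT itself (real models use neural networks over vast vocabularies); the words and probabilities below are invented. The point it demonstrates is that the model samples what is *likely*, with no notion of what is *true*:

```python
import random

# Invented bigram table for illustration: given the previous word,
# the probability of each candidate next word.
BIGRAM_PROBS = {
    "the":   {"court": 0.5, "contract": 0.3, "moon": 0.2},
    "court": {"ruled": 0.7, "held": 0.3},
}

def next_word(previous: str) -> str:
    """Sample the next word purely by probability, with no notion of fact."""
    dist = BIGRAM_PROBS.get(previous, {"<end>": 1.0})
    words, weights = zip(*dist.items())
    return random.choices(words, weights=weights)[0]

print(next_word("the"))  # a plausible-sounding continuation, true or not
```

A fluent-sounding sentence emerges from repeatedly sampling in this way, which is why confident-sounding output is no guarantee of accuracy.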
We have not yet achieved artificial general intelligence, meaning that there is no singular AI that performs at a human level across all intellectual tasks and use-cases. It is important to understand the limitations of each AI tool (not only ChatGPT) to determine the best and lowest risk way in which to use them.
AI has the potential to greatly enhance and drive efficiency across the board, with rising interest in a number of applications, to name a few:
As AI becomes both more specialised (being able to outperform humans at specific tasks) and more generalised (approaching human level performance at most tasks that usually require human input), the number of applications and use-cases will multiply.
Following on from the preceding section, one clear risk is reliance on false information. By asking a language-model AI such as ChatGPT questions that require fact-based answers, there is an inherent risk that the generated response contains falsehoods. In addition, as ChatGPT may access and use proprietary information and material belonging to others without their knowledge or consent, there is an inherent risk that the use or reproduction of content generated by ChatGPT infringes the IP rights of third parties.
To mitigate this risk, always proofread responses from AI language models, particularly when the subject matter is fact-based. To fully mitigate it, avoid using language-model AIs (or AI tools generally) to generate content that depends on facts or factual data.
Additionally, prompts and responses on the AI platforms go to the cloud and may eventually be accessible to all users. Therefore, information communicated on AI platforms may be considered a disclosure in the public domain, which creates a host of legal issues:
Therefore, utmost care must be taken when using AI tools to not disclose any information that might be adverse to your business if it were to be leaked into the public domain.
Furthermore, the legal stance on the ownership of content that is AI generated is murky. The use of private or public platforms to create content that is then relied upon as proprietary content would be risky, due to the lack of clarity over IP ownership. The prevailing opinion is that for IP ownership and protection to exist, there must be an element of human contribution (beyond merely prompting the AI).
Finally, there is reputational risk. In addition to the risks above, if a client or third party is able to determine that content was generated by AI, there may be a loss of trust between you and the public or your client.
It is important for all businesses (including law firms) to take a risk-averse stance to the internal usage of AI tools, particularly when handling confidential information.
Actions that can be taken include:
Regulators, education institutions, technology developers, the private sector, innovators, and business leaders are encouraged to work together to co-create an enabling environment that is legally and ethically compliant and “in step with the speed of innovation”.