The AI buzzword: a look ahead to the coming year

The first few days of the New Year have seen a crescendo in the discourse on artificial intelligence (“AI”) in both the EU and the UK, as the focus of legislators, regulators, judiciaries and lawyers on the implications of the use of AI and its regulation intensifies.

In this article, we provide a brief overview of the latest key developments which are likely to set the tone for the coming year in the practice of the law in relation to AI and new technologies.

EU and UK regulation of AI

EU approach and the AI Act

On 8 December 2023, the European Parliament and the Council finally reached a political agreement on the EU AI Act. Whilst no text has yet been published and voted upon, statements issued by the EU bodies following their deal provide insight into some of the agreed provisions [1]. The Act reflects a “horizontal” regulatory approach, meaning it sets out rules for AI across all sectors and applications. It establishes specific obligations for AI based on potential risks and the level of impact on individuals and society as a whole. Accordingly, AI systems are divided into four different risk categories depending on their use cases: (1) unacceptable-risk, (2) high-risk, (3) limited-risk, (4) minimal/no-risk. Different compliance and transparency obligations will apply to systems of limited risk and those posing high risk, while ‘unacceptable-risk’ systems that are deemed to pose a clear threat to fundamental rights – such as biometric categorisation systems that use sensitive characteristics – will be banned in the EU (bar some narrow exceptions concerning the use of such systems for law enforcement purposes).

Non-compliance with the rules will lead to fines ranging from €7.5 million or 1.5% of global turnover to €35 million or 7% of global turnover, depending on the infringement and size of the company. Enforcement and standard setting will rely on a co-ordinated network of new and established regulators, including a central European AI Board and national competent authorities for AI in each Member State.

Notably, the EU has issued a proposal, currently being considered by the EU Parliament and Council, for a directive on adapting non-contractual civil liability rules to AI (the AI Liability Directive). This is aimed at complementing and modernising the EU liability framework to ensure that persons harmed by AI systems enjoy the same level of protection as persons harmed by other technologies in the EU. The Directive would create a rebuttable 'presumption of causality', to ease the burden of proof for victims to establish damage caused by AI systems.

UK approach

The UK Government has differentiated itself from the EU by re-confirming its willingness to adopt a less rigid, principles-based approach to the regulation of AI, due to concerns that heavy-handed regulation could curb growth in the sector and stifle innovation [2].

The white paper “A pro-innovation approach to AI regulation” published by the UK Government in March 2023 [3] outlines five principles that UK regulators should consider to best facilitate the safe and innovative use of AI in the industries they monitor: (1) safety, security and robustness; (2) transparency and explainability; (3) fairness; (4) accountability and governance; and (5) contestability and redress. Although the UK Government will initially issue these principles on a non-statutory basis, it may introduce a statutory duty on regulators to have due regard to the principles at a later date.

In contrast to the EU approach, the UK favours a ‘vertical’ approach whereby the principles will be implemented by existing regulators (including, for example, the Competition and Markets Authority (CMA)). The intention is for regulators to use their domain-specific expertise to tailor the implementation of the principles to the specific context in which AI is used.

In respect of the allocation of liability for AI systems, the white paper again is not prescriptive, with the UK Government’s consultation on the white paper actively seeking views as to the adequacy of existing redress mechanisms for harms caused by AI systems. It remains an open question whether current legal regimes are fit for purpose, and the UK Government expects existing regulators to explain current routes to contestability and redress under their regimes.

Looking ahead

Though the UK’s approach effectively remains in consultation – in contrast to the EU’s, which is agreed subject to final drafting – its contemplated flexibility and reliance on existing laws leave a degree of latitude for private enforcement to complement regulatory intervention. Private enforcement can play a role in clarifying the application of the law and setting out specific standards in practice to protect the rights of affected parties, in addition to providing pathways for redress for those that may be adversely affected by future developments in AI.

Going forward, there may also be scope for synergistic leadership roles from the EU and UK. Whilst EU regulation provides a strong foundation for categorising and addressing harms, the UK’s agile leadership may be more suitable for addressing longer-term risks of AI systems that are not anticipated or adequately addressed by the AI Act.

AI and competition law

It is perhaps obvious that AI resources are, presently, concentrated in the hands of existing big tech – Amazon, Apple, Google, Microsoft and Meta – whose legal and ethical behaviour is under challenge by antitrust authorities and in private antitrust actions across the globe. To provide one example of how these companies dominate the ‘AI stack’: nearly everything that relates to AI depends on computational performance and therefore uses cloud services at some juncture, and the cloud services market is one in which Amazon, Google and Microsoft have the lion’s share.

Further, digital markets may present certain characteristics (such as network effects and lack of multi-homing), which can result in entrenched market positions and behaviour that is potentially harmful to competition, but which regulators may struggle to address within existing competition law frameworks. It is possible that the same may apply to AI.

Against this backdrop, on 9 January 2024, the European Commission announced an investigation specifically focusing on virtual worlds and generative AI markets, through which it aims to evaluate the competitiveness of these markets and understand how competition law can effectively govern them [4]. Notably, the European Commission will scrutinise agreements between major digital market players and generative AI developers, with Microsoft’s investment in OpenAI (the developer of ChatGPT) singled out for examination.

Meanwhile, the UK CMA launched an invitation to comment on the partnership between Microsoft and OpenAI on 8 December 2023, ahead of a potential Phase 1 investigation into the partnership. Also on 9 January 2024, the CMA received comments from a coalition of seven civil society groups which raised concerns about the potential anticompetitive implications of Microsoft’s most recent multibillion-dollar investment in OpenAI. The coalition’s submission recommended that the CMA investigate several competition concerns, including operational independence, strategic independence, exclusivity, privileged access, access to inputs, reverse killer acquisition, and the extent of Microsoft’s control over OpenAI’s inner workings and decision-making [5].

It will be interesting to see how – and how quickly – the Commission and CMA’s respective analyses develop during the course of the year.

The use of AI in dispute resolution in the UK

In terms of the impact of AI on legal practice, there have been several cases highlighting the risks of using AI in the context of litigation. The most recent ‘cautionary tale’ was a ruling issued in December 2023 in the case of Harber v The Commissioners for His Majesty’s Revenue and Customs (HMRC), where nine legal authorities put before the First-tier Tribunal by a litigant in person challenging a penalty from HMRC were found to be fakes generated by an AI system. Tax tribunal judge Anne Redston accepted that Ms Harber had not known that the authorities were the product of what is known as ‘hallucination’, where a generative AI system produces highly plausible but incorrect results.

Judge Redston also expressed concern that the practice would promote cynicism about judicial precedent, which should instead be regarded as, in the words of Lord Bingham, “a cornerstone of our legal system” and “an indispensable foundation upon which to decide what is the law and its application to individual cases”.

Against a background of growing recognition of these issues, on 12 December 2023, senior members of the judiciary issued the first judicial guidance on the use of AI in the courts in England and Wales [6]. Whilst the guidance recognises the limitations of AI tools and explains the risks attached to the process by which they generate output, it also positively identifies lower-risk areas where AI could be deployed as a useful ‘secondary tool’ – such as summarising large bodies of text or preparing presentations.

The guidance is described as the first step in a "suite of future work" to support the interactions of the judiciary with AI. The guidance also follows the Solicitors Regulation Authority’s Risk Outlook report on the use of artificial intelligence in the legal market, published in November 2023, where the SRA confirmed it is determined to “help firms and consumers safely gain the benefits that AI can bring” and to that end “will produce guidance on specific issues as they come up” [7]. It is therefore reasonable to expect further guidance in the coming months as the application of current guidance is tested in the courts and as generative AI technology continues to develop.


1 See European Council, Press Release, "Artificial intelligence act: Council and Parliament strike a deal on the first rules for AI in the world", 9 December 2023; and European Parliament, Press Release, "Artificial Intelligence Act: deal on comprehensive rules for trustworthy AI", 9 December 2023.
2 The Financial Times, “UK will refrain from regulating AI ‘in the short term’”, 16 November 2023.
3 UK Government, white paper, “A pro-innovation approach to AI regulation”, March 2023.
4 European Commission, Commission launches calls for contributions on competition in virtual worlds and generative AI, 9 January 2024.