Will the UK shift its regulatory approach to AI in 2024?

The UK Government introduced a light-touch, “pro-innovation” approach to AI regulation with its white paper on AI in March 2023 (White Paper). The Government then directed regulators to respond to the White Paper and provide updates on their strategic approaches to AI by 30 April 2024. The now-published reports make plain that regulators are concerned about the potential for AI to cause harm unless certain frameworks are put in place.

Given the above, and with a General Election approaching and an AI Regulation Bill under contemplation, can the next UK Government continue to pursue a light-touch approach to AI regulation, or is the tide turning just one year after the White Paper?

This article provides a summary overview of the regulators’ updates and considers whether these and other developments signal a change in the UK’s position.

Regulators report back

  1. Competition and Markets Authority (CMA)

The CMA expressed its strongest concerns in relation to Foundation Models (FMs), noting that the sector is developing in ways that risk negative market outcomes.

A small number of Big Tech firms, which already hold market power in critical digital markets, are securing strong positions in FM value chains and thus have the potential and incentive to significantly shape FM markets to the detriment of “fair, open and effective competition”.

The CMA highlighted three key risks the FM landscape presents:

  • firms may restrict access to critical inputs for FMs (e.g. chips, cloud services);
  • incumbents could leverage their position to distort competition (e.g. by reducing choice and quality and increasing prices); and
  • partnerships between key players in the FM value chain [1] may entrench dominant positions.

In addition, the CMA identified AI-generated false and misleading information and ‘personalised pricing’ [2] as potential threats to consumers.

Accordingly, the CMA noted that, where appropriate, it would:

  • issue “proactive guidance” on consumer law compliance in AI-related markets;
  • step up its use of merger control; and
  • issue substantial financial penalties for non-compliance.

The CMA’s ability to protect competition and consumers will be further enhanced when the Digital Markets, Competition and Consumers Act (DMCC) comes into force. For example, the CMA will be able to investigate an algorithm’s impact by conducting tests on a designated firm’s systems.

When considering what to investigate, the CMA expressly noted that it will follow developments in FM markets, downstream AI software integration and the use of consumer chatbots.

  2. Financial Conduct Authority (FCA)

The FCA echoed the CMA’s concerns about the competition risks that could arise from the concentration of third-party technology services, such as cloud services and AI model development, among Big Tech firms, especially as this may lead to significant data asymmetries between Big Tech firms and traditional financial services firms.

While the FCA broadly welcomed a “principles-based, outcomes-focused” approach to AI regulation in financial services, it noted that it will closely monitor the adoption of AI across UK financial markets to identify material changes affecting consumers and markets, and will keep under review whether amendments to the existing regulatory regime are required.

  3. Information Commissioner’s Office (ICO)

AI, together with children’s privacy and online tracking, will be one of the ICO’s core focus areas in 2024/25. The ICO has already published an AI and Data Protection Toolkit; issued warnings regarding “emotion recognition” technologies; and launched investigations into facial recognition technology companies.

In its strategic approach paper, the ICO announced that it will update its guidance on AI and Data Protection and on Automated Decision-Making and Profiling in 2025. The ICO will need to ensure that any such updates reflect proposed changes to the UK’s data protection law put forward in the hotly contested Data Protection and Digital Information Bill. However, that Bill was not passed in the “wash-up” period before the prorogation of Parliament and may therefore not be enacted any time soon.

  4. Ofcom

In its strategic approach paper, Ofcom, like the ICO, placed particular emphasis on online child safety and the potential harm caused by AI-generated media. It too identified risks that AI systems could pose to competition and consumers in the communications sector, including through personalised pricing.

Ofcom will be consulting on guidance in relation to its information-gathering powers which, like the CMA’s under the DMCC, would include access to algorithms. Ofcom intends to issue guidance and Codes of Practice to online regulated services and broadcasters to clarify their responsibilities with regard to AI. Further, it will continue to monitor domestic and international AI standards and legislation.

Although the regulators broadly support a “principles-based” approach, their thinking shows serious concerns about the risks posed by AI to competition and consumers. Their proposed strategies reveal a need to proactively monitor the way in which the FM sector is developing and, where appropriate, to amend legislation to ensure that regulatory gaps are filled.

A change in stance?

In April, the Financial Times reported that the Department for Science, Innovation and Technology had apparently moved away from the light-touch, principles-based approach to AI regulation and was considering what shape AI-specific legislation could take. Such legislation would likely cover FMs and impose requirements on FM developers to carry out safety tests and share their algorithms with the government. [3]

Further, following a third reading in the House of Lords on 10 May, the UK’s “Artificial Intelligence (Regulation) Bill” was sent to the Commons. While this Bill was also not included in the parliamentary “wash-up”, if the next government were to revive the Bill in its current form, it would introduce an “AI Authority” tasked with ensuring alignment across regulators and reviewing relevant legislation to test its suitability vis-à-vis the challenges AI presents. The AI Authority would have regard to several regulatory principles, including that AI should be inclusive by design (although what this would require was not specified in the Bill) and generate interoperable data.

Further regulatory updates on AI expected in the coming months may place even greater pressure on the next government to protect consumers through legislation, and it is therefore unsurprising that there are already signs of change in the UK’s approach to AI regulation.

While a change in government may have an impact on timescales, the direction of travel towards stronger AI regulation in the UK is becoming increasingly apparent.

Update 

On 17 July 2024, as part of the State Opening of Parliament, the new UK Government announced its intention to establish legislation to “place requirements on those working to develop the most powerful artificial intelligence models”. This announcement stopped short of a commitment to introducing an AI Bill in this parliamentary session, indicating that any legislation will likely be preceded by an extensive period of consultation.


Footnotes

[1]  Such as between Microsoft and Mistral AI, Amazon and Anthropic, and Microsoft’s hiring of former employees and related arrangements with Inflection AI, in respect of which the CMA has invited comments.
[2] which involves offering consumers different prices for the same product, based on their personal characteristics (as determined by an algorithm and often adjusted in real time).
[3] UK rethinks AI legislation as alarm grows over potential risks (ft.com)