Trade unions and AI regulation: lessons from the Hollywood artists' strikes, one year later
There is no doubt that generative AI will have an impact on workers and jobs. Last year's Hollywood strikes pushed back against studios using AI to replace actors and writers. SAG-AFTRA's resulting deal now requires performers' consent and fair pay for the use of their digital replicas, although uncertainty remains around AI-generated characters or "synthetic performers". In the UK, the Trades Union Congress (TUC) has proposed a new bill to protect workers from unfair AI-driven decisions, such as hiring and firing, while also demanding transparency and safety checks for workplace AI. Both efforts show unions stepping up to ensure that AI does not take jobs or mistreat workers.
Just over a year ago, the Screen Actors Guild-American Federation of Television and Radio Artists (SAG-AFTRA) and the Writers Guild of America (WGA) strikes came to an end in Hollywood. SAG-AFTRA and WGA union members were particularly concerned with studios using generative AI to replace actors and writers: for example, by employing AI tools to write or rewrite scripts, or by leveraging a studio's copyright in hours of footage to generate digital replicas of real performers or to train a model to produce new "synthetic performers". If the studios were free to engage in such unrestricted conduct, union members would be deprived of potential work and earnings.
Taking the model agreement that ended the SAG-AFTRA strike as an example, performers now have greater protections where studios seek to capture their likeness when shooting a film or TV show ("employment-based digital replicas"). Studios must obtain the performer's explicit consent for the capture; give at least 48 hours' notice for the performer to consider a proposal; engage in separate negotiations with the performer each time they intend to use the employment-based digital replica; and pay a day rate for its use.
It remains to be seen how these clauses are enforced in practice, especially given that copyright gives studios greater bargaining power. However, both SAG-AFTRA and WGA were incredibly influential in bringing employment issues arising out of the deployment of generative AI to the fore and in forcing studios to incorporate clauses specifically dealing with the potential consequences of loss of work and earnings due to generative AI.
Another influential union shaping AI regulation in the context of employment law is the UK's Trades Union Congress (TUC), an umbrella organisation bringing together 5.5 million working people from 48 member unions. The TUC has already published a series of reports on the use of AI for employee surveillance, as well as AI-specific guidance aimed at union officers and representatives for use in collective bargaining processes.
The TUC has since published a draft bill (the "TUC Bill") whose core aim is to protect workers and jobseekers from unfair, high-risk AI-driven decision-making, such as decisions about hiring or dismissal. In support of that core aim, the TUC Bill also seeks to ensure that AI systems deployed in a workplace are tested for safety and that workers and unions are consulted. Further, in order to redress data asymmetries, the TUC's proposals for Workplace AI Risk Assessments include frameworks for workers, employees and unions to gain access to information on how AI systems are operating. Likewise, the TUC Bill contains a right for employees and jobseekers to receive a personalised statement explaining how an AI system made high-risk decisions about them. The TUC Bill would also reverse the burden of proof, making it easier for employees to show that AI-based discrimination under the Equality Act 2010 has taken place, and would introduce a right to disconnect, in recognition of the fact that greater automation and the implementation of AI systems may intensify work for employees.
While the UK Government has indicated that it will be considering the impact of AI on workers, the TUC Bill is another illustration of trade unions taking the lead and envisaging what AI regulation can and should look like for workers.
This article first appeared on the WAI Legal Insights blog on 25th November 2024, and is reproduced by kind permission of Women in AI.