Bandit Country: The Fight Against AI’s Outlaws
15 August 2023
There are few subjects better able to court controversy on a truly global scale right now than artificial intelligence (AI). On top of the longstanding privacy, public safety, bias and ethical concerns, events that have unfolded in recent months are bringing its effect on various branches of the creative arts to the fore.
With AI technology progressing at breakneck pace, it was always going to be difficult for the comparatively lumbering legal system to keep up. Consequently, many industry analysts are likening the emerging AI sector to the ‘Wild West’ era of late-1800s America, with pioneers looking to make their fortunes while showing only scant regard for the rule of law. It is, however, already prompting a formidable response.
Things first started to heat up back in the spring, when the members of the Writers Guild of America (WGA) went on strike. One of the main reasons behind this (others being wages, pensions and working conditions) was an ongoing dispute over the use of generative AI tools to re-work content originating from human writers, or to produce further material based upon it. A few weeks later, the Screen Actors Guild - American Federation of Television and Radio Artists (SAG-AFTRA) joined their writing colleagues on the picket lines. A major flashpoint for them was a proposal from representative bodies within the TV/movie business that background actors might be replicated by AI and then featured in shows and films, with only minimal financial reward being received. It is thought that voice-over artists could soon be in a similar position, with artificial versions of their voices being derived from existing recordings so that producers can cut their operating costs.
As well as strikes, legal action is already underway over how generative AI tools allegedly infringe artists’ intellectual property (IP). For example, Getty Images has initiated a lawsuit against Stability AI for allegedly using illicitly sourced content to train its text-to-image diffusion models. Several graphic artists and illustrators have also embroiled companies like Midjourney and Stability AI in court cases, claiming copyright-protected creative works have been used for AI training purposes (to mimic their own distinctive styles) without any prior consent. Likewise, the Emmy award-winning American comedian Sarah Silverman is currently suing OpenAI and Meta for copyright violations - her published writings (alongside those of several novelists) allegedly having been ingested to train the companies’ respective ChatGPT and LLaMA large language model (LLM) platforms. How long will it be before people from the acting profession start litigation proceedings too?
They say the camera never lies
It is understandable that jobbing actors are scared about studios’ ability to recreate their physical appearances and then exploit them. Conversely, the prospect of leveraging the power of AI is undoubtedly appealing to movie makers. It is therefore a matter of making sure everything is done in the right way. The sci-fi movie Rogue One (2016), for instance, saw Disney/Lucasfilm relying on advanced computer-generated imagery (CGI) technology to construct a detailed digital emulation of Peter Cushing (who had died 22 years earlier). This meant that he could reprise his role as the villainous Governor Tarkin. It had all been done completely above board though - with the full consent of Cushing's family.
Clearly not everyone is going to be so diligent in their approach. A case in point is the disturbing AI-generated images that recently surfaced, showing the divisive US presidential candidate Donald Trump being hugged by the much-respected civil rights campaigner Rev. Martin Luther King Jr. The images have been widely condemned, drawing heavy criticism from the late Dr. King’s family (though it must be said that no evidence has so far emerged that the Trump campaign commissioned the creation of these images).
That generative AI content is not just confined to Hollywood films and streaming services, but is starting to infiltrate political circles, is a worrying development. In the US, the Federal Election Commission (FEC) is hence taking steps to stop AI-generated deepfake advertisements and campaign materials being created. This follows the Trump/King images being widely shared across social media, as well as accusations that Ron DeSantis, Trump’s main rival for the Republican nomination in the 2024 presidential race, has employed a similar tactic - with images believed to be inauthentic disseminated to discredit his opponent (once again though, it must be noted that the DeSantis campaign fervently denies this). If measures aren’t put in place now, then activities of this kind will take ‘fake news’ to stratospheric heights. And even if official campaign material can be regulated, how easy will this be to apply to social media posts (where the original source is much more difficult to identify)?
Is it really art?
Though AI usage can heighten productivity, there are arguments that it will simultaneously blunt the impact of art (in whatever form) - leading to a lack of engagement. When I was a kid, I - like countless others - was overwhelmed by the special effects seen in films like Star Wars (1977), Alien (1979), Raiders of the Lost Ark (1981) and Blade Runner (1982) - yeah, I know that really dates me. :-) But then, following the advent of CGI, it became a lot harder to impress cinema audiences - the Lord of the Rings trilogy (2001-2003) probably being the last time that people were truly visually wowed. If the public no longer have respect for the creative process and the work that goes into producing a film, an illustration, a song or a painting, then they will be less likely to want to spend money on it.
A further issue that someone outlined to me at a recent tech event is that LLMs and other AI tools currently rely on scraping, and then learning from, human-originated material - but over time the volume of AI-generated material out there will quickly overtake it. That means future LLMs will be basing their creations on the output of other LLMs - leading to a gradual dilution in the quality of what's produced. Without any human involvement, it is debatable whether anything can actually be called art - as art is, by definition, an expression of the human condition. What is produced instead will be soulless and uninspiring.
Hopefully, once the dust settles from the legal battles and industrial disputes currently happening, some way of adequately compensating artists/writers/actors can be agreed. The issue then will be how well things are enforced to ensure such compensation is correctly attributed. This is why the writers’ and actors’ guilds, stock image vendors and various other organisations will need to press AI firms to ensure the creative community is valued and not taken advantage of. The cooperation now being established between OpenAI and Shutterstock could be an indication of the kind of future business models that will be acceptable to both sides of the AI/creativity conflict. Here, image creators receive royalty payments if their work is used to help generate material via AI (though admittedly not as much as if they had been commissioned to do the work themselves).
The long-term fear is that despite AI’s ability to bring greater throughput and efficiency to various cultural spheres, it could eventually kill art. If adequate safeguards are not put in place to protect creative individuals (whether artists, actors, musicians, etc.), there will be a real danger that people will no longer see any reason to go into these vocations - and our society will be left culturally poorer as a result.