How Creatives are Safeguarding their Rights Against AI

Molly Bookner

With AI tools generating everything from screenplays and rap verses to social copy and award-winning photographs, creatives face a dilemma: how to protect their creative likeness against this rapidly advancing technology.


Last month, hundreds of music artists, including Billie Eilish, Nicki Minaj, Katy Perry, and Camila Cabello, signed an open letter urging digital music developers to “cease the use of artificial intelligence (AI) to infringe upon and devalue the rights of human artists” — and they’re far from the only creatives concerned.

While it might take an artist weeks to craft a single painting or a musician months to compose a song, AI models can generate similar content in minutes.

AI algorithms also threaten creators with the risk of copyright infringement and job disruption while potentially degrading content quality.

Read on to learn about the relationship between AI and creativity, and some of the protections for artists.

AI’s Impact on Creative Industries

Many trace AI’s influence in creative fields back to the 1960s, when Eliza, a groundbreaking conversational program, was developed. Not too long after, in the 1970s, British artist and computer scientist Harold Cohen invented AARON, an AI program that could autonomously generate original artwork.

In the decades since, tech companies have released various AI models, like Dall-E 3 and GPT-4, capable of mimicking human content and boosting creativity. While opinions vary, many see AI as an artistic tool, similar to a camera or paintbrush.

AI programs can also help to optimize and streamline creative workflows, allowing creators to focus more on the creative aspects of their work.

Yet, for all the ways AI models inspire and fuel creativity, they also pose significant risks to creators. Some of these include:

  • Job displacement — Many creatives fear that AI will replace their jobs, with some claiming it already has. Entry-level roles seem to be particularly vulnerable, as some companies are already employing AI for aspects of their development processes, like crafting film mood boards.
  • Diminished and devalued human creativity — When humans create, they typically start from a blank canvas and are only limited by the confines of their imaginations. But when AI models produce content, they build upon large sets of training materials, typically taken from others’ pre-existing work, and are limited by the prompts they are given. Some argue that, over time, human creativity will decline and lose value as AI tools become increasingly enmeshed in the creative process.
  • Loss of control — AI companies train generative AI models on massive datasets, which could include copyrighted material, without the owners’ permission or knowledge. Determining whether AI-generated content constitutes “original work” and is therefore eligible for copyright protections remains a contested topic (more on that later).

AI’s connection to creativity raises various questions, like:

  • Can AI-generated content ever be considered truly original?
  • Can something be considered art if there is no artist?
  • Can we view AI-generated content as creative in the same way we do human-generated content?
  • Does the value of AI-generated work stem from the humans who provide the prompts or the models that use those prompts to craft new content?
  • Should the companies that build AI models own the intellectual property rights to AI-generated content, or should those rights belong to the humans who influenced the model’s output?

The Legal Landscape for Creatives

Many of those questions have spurred legal disputes, new laws (okay, it’s really only one law), and contract agreements designed to regulate AI’s use and guarantee fair compensation for creators.

Laws Protecting Artists from AI

Currently, there are no federal laws directly protecting creatives from some of the potential dangers of AI.

Only one such law exists at the state level, born in a place known for its whiskey, barbeque, rolling hills, a wealth of bachelorette parties, and MoonPies. Oh, and country music.

If you guessed Tennessee, you’d be correct.

On March 21, the home of country music became the first state to pass a law shielding music professionals from AI. The law, aptly titled the Ensuring Likeness, Voice, and Image Security Act or “ELVIS Act,” goes into effect July 1, 2024.

In a nutshell, this new law aims to:

  • Protect musicians’ vocal likeness.
  • Ensure that AI tools cannot reproduce artists’ unique voices without their explicit consent.
  • Hold people legally accountable for publishing or performing an artist’s voice without permission or using technology to mimic an artist’s name, photograph, voice, or general likeness without consent.

“We employ more people in Tennessee in the music industry than any other state,” Tennessee’s governor Bill Lee told reporters after signing the law, as CBS News reported. “Artists have intellectual property. They have gifts. They have a uniqueness that is theirs and theirs alone, certainly not artificial intelligence,” he continued.

How effective will this legislation be? Only time will tell.

Contracts Protecting Artists from AI

Some unions, including the American Federation of Musicians (AFM) and the Screen Actors Guild ‐ American Federation of Television and Radio Artists (SAG-AFTRA), have signed contracts with TV studios, enacting safeguards for their members regarding AI.

On April 2, AFM — representing musicians who perform on TV, in movie scores, or appear on screen — voted to ratify its new contract with major TV studios. The agreement states that musicians must be compensated when their work is used to prompt AI models and ensures that “human creativity remains at the heart of the industry.”

“This agreement is a monumental victory for musicians who have long been under-compensated for their work in the digital age,” Tino Gagliardi, the union’s international president, said in a statement to Variety.

The terms of the deal closely mirror those won by SAG-AFTRA and WGA during their strikes in 2023.

In March, SAG-AFTRA also ratified new contracts to protect voice actors in TV animation from the misuse of AI. The agreements define voice actors as strictly human and stipulate that producers must gain permission before using actors’ names to prompt AI models.

The contracts, which are valid for three years, also discuss voice actors’ rights regarding studios’ use of their digital replicas.

They require producers to notify and bargain with the union whenever they use AI-generated voices instead of human voice actors. More than 95% of union members supported these terms.

The Issue of Copyright Infringement

Developments in generative AI have underscored significant shortcomings in copyright laws and raised important questions about authorship, originality, and the limits of fair use.

Copyright falls under the umbrella of intellectual property, and in the U.S., copyright protection extends only to works created by humans.

In addition to whether a human created a piece of content, the U.S. Copyright Office asks various questions when determining copyright eligibility. Some of those questions include:

  • Is the work sufficiently original?
  • Was the work independently created?
  • Does the work possess at least some minimal degree of creativity?

As you might imagine, a tech giant might answer those questions differently than, say, an independent creator.

Beyond protecting creative works from unauthorized use, copyright law permits “fair use” of those works for purposes that advance and benefit society. As such, the question persists: do AI tools merely plagiarize works, or do they use them in accordance with the fair use doctrine to craft innovative content?

Over the past few years, this question has sparked numerous lawsuits.

One of the most widely publicized comes from The New York Times, which in December filed a lawsuit against OpenAI and Microsoft, alleging copyright infringement. The Times argues that both firms “scraped” — or digitally scanned and copied — many of its articles to train GPT-4 and other AI models.

Getty Images, an image licensing service, filed a similar lawsuit in 2023 against Stability AI, the maker of Stable Diffusion, alleging copyright and trademark violations involving its watermarked photograph collection.

Cases like these will force courts to clarify what constitutes a “derivative work” under intellectual property law.

“Copyright owners have been lining up to take whacks at generative AI like a giant piñata woven out of their works. 2024 is likely to be the year we find out whether there is money inside,” James Grimmelmann, professor of digital and information law at Cornell, told Axios.

Some of these ongoing lawsuits could spark significant rulings, ultimately affecting AI’s progress.

The research institute Epoch predicts that by 2026, tech companies will have exhausted all high-quality language data, slowing ML advancement.

“The only practical way for these [AI] tools to exist is if they can be trained on massive amounts of data without having to license that data,” Sy Damle, a lawyer who represents a Silicon Valley venture capital firm, said last year in a public discussion about copyright law, The Times reported. “The data needed is so massive that even collective licensing really can’t work.”

Because lawsuits like these are so new, many creatives have only limited options at their disposal:

  • Use tools like Nightshade to prevent their work from being used by AI models.
  • Submit model training opt-out forms, like OpenAI’s.
  • Take legal action – at a cost.

