In January 2023, AI speech synthesis company ElevenLabs, Inc. released a beta platform for its natural-sounding vocal cloning tool. Using this platform, a brief snippet of a person's voice could generate audio files of the target saying anything the uploader desired. The release triggered a spike in misappropriated vocal clones, from viral rap songs to parodies of political figures. Recognizing its software was being widely misused, ElevenLabs installed safeguards to ensure the company could trace generated audio back to its creator. But it was too late. Pandora's box was already open.
Since then, a wide range of similar tools have been developed to perform vocal cloning, leading to vocal deepfakes becoming a common source of scams and misinformation. And these issues have only been exacerbated by a lack of appropriate laws and regulations to rein in the use of AI and protect an individual’s right to their voice.
AI Vocal Deepfakes Hit Mainstream Media
Vocal deepfakes have invaded the mainstream media. As just one example, the song "Heart on My Sleeve" featured performances from cloned voices of artists Drake and the Weeknd, as well as a producer tag from Metro Boomin. After going viral in April 2023, the song received over 20 million views or streams before being taken down by Universal Music Group via a Digital Millennium Copyright Act (DMCA) notice-and-takedown, which allows copyright owners to give notice of infringement to online service providers, who must then remove the content from their websites.
Because DMCA notices are not public, however, no one knows what argument(s) Universal asserted. Universal may have argued that the creator necessarily copied the artists' works onto the cloning system, itself an act of copyright infringement, or that the output constitutes copyright infringement because it is derivative of those artists' works. Despite the streaming platforms' compliance with Universal's DMCA takedowns, these arguments, which present novel legal questions surrounding AI usage, remain untested.
While there is precedent that musical impersonation may violate a musician's right of publicity (see Midler v. Ford Motor Co.), this does not necessarily entitle the musician to DMCA takedowns or other protections. And if a DMCA takedown request is unsuccessful, vocal artists may have to file a lawsuit to enforce their rights to their voice. For example, plaintiffs Karissa Vacker and Mark Boyett recently sued ElevenLabs for the alleged misappropriation of their voices, arguing ElevenLabs trained its AI by circumventing technological measures they took to protect their copyrighted material (including encryption and digital rights management technologies) in violation of the DMCA's anti-circumvention provisions.
Unfortunately for similarly situated individuals, a vocal artist's rights are not uniformly recognized, meaning success may depend as much on where the vocal artist lives as on the facts of their case.
Only a handful of states have started to address these issues. Tennessee's "ELVIS Act," for example, expressly provides individuals with protectable property rights in their voice. Most recently, on Sept. 17, 2024, California enacted a pair of laws requiring that artists give informed consent and have union or legal representation before giving up the rights to their digital self (AB 2602) and prohibiting commercial use of digital replicas of deceased performers without first obtaining the consent of those performers' estates (AB 1836). Other California measures address digitally altered or digitally created content related to elections (AB 2655, AB 2839, AB 2355). And while Congress has started to address both specific harmful uses of digital replicas (i.e., the Preventing Deepfakes of Intimate Images Act; the REAL Political Advertisements Act; and the Protect Elections from Deceptive AI Act) and their use generally (i.e., the No AI FRAUD Act; and the Nurture Originals, Foster Art, and Keep Entertainment Safe (NO FAKES) Act), none of these bills has yet passed.
As shown, existing state and federal laws are inconsistent and, in their current form, insufficient: they are either too limited in the persons or activities they cover, or they require additional showings, such as the commercial value of an individual's identity. This inconsistency yields a patchwork of protections, whereby the availability of a remedy may depend on where the individual lives or where the unauthorized use occurred.
Government and Private Stakeholders Weigh In on AI Regulations
On July 31, 2024, the U.S. Copyright Office (USCO) published the first of several reports discussing copyright and artificial intelligence, focusing on various digital replication techniques, including vocal cloning. Recognizing that under the current federal scheme a replica of an individual's image or voice alone would not constitute copyright infringement, the USCO recommended Congress act to protect individuals against unauthorized digital replicas of their image and voice. The USCO also advocated for a takedown system like the DMCA's, which, as it exists today, has questionable applicability to cases of voice cloning.
Specifically, recognizing the lack of uniform state acceptance of Midler-style protections, the USCO’s report recommended Congress pass legislation that: establishes a federal right that protects all individuals during their lifetimes from knowing distribution of unauthorized digital replicas; allows this right to be licensable, subject to guardrails, but not assignable, with effective remedies including monetary damages and injunctive relief; applies traditional rules of secondary liability with appropriately conditioned safe harbor for online service providers; explicitly accommodates the First Amendment; and does not preempt well-developed state rights of publicity.
The USCO is not the only government agency concerned with AI regulations. The Federal Communications Commission (FCC) also proposed rules in July 2024 that would require radio and television stations to provide on-air announcements when AI-generated content is used in political ads. This has created controversy over which government agency should regulate such content, as members of both the FCC and the Federal Election Commission (FEC) have contested whether the FEC should regulate the use of AI in political campaigns. FCC Chairwoman Jessica Rosenworcel responded that both agencies have jurisdiction over these issues: the FCC can regulate the use of AI in political ads on television and radio, while the FEC would regulate other areas, such as political ads on the Internet. But with the U.S. Supreme Court's recent decision in Loper Bright Enterprises v. Raimondo, which scaled back judicial deference to agency rulemaking and decisions, the path forward to regulate AI is unclear.
Next, in early August, representatives from the music, movie, and video game industries met for a roundtable with the U.S. Patent and Trademark Office (USPTO) to address the growing power of AI. While many of the private-sector stakeholders generally approved of AI regulation, they also cautioned against legislating too far.
For example, Benjamin Sheffner of the Motion Picture Association explained “legislating in this area requires very careful drafting to address real harms, without inadvertently chilling or even prohibiting legitimate, constitutionally protected uses of technologies to enhance storytelling.” On the other hand, Michael Lewen of the Recording Academy advocated for broad protections that extended to “all individuals, regardless of fame” because “the speed to create and the speed to spread digital replicas is unprecedented.”
The day after the roundtable, an updated version of the NO FAKES Act was reintroduced in the Senate with bipartisan support; the updated bill further defines the property rights available and incorporates a notice-and-takedown structure for online services hosting user-uploaded material.
(Don't) Send in the Clones: Regulating Voice Cloning and Deepfakes
Given that the NO FAKES Act has already garnered bipartisan support, the federal government may soon provide the baseline protections and appropriate remedies (like the notice-and-takedown process) vocal artists and others need to defend themselves against the rampant use of AI-generated vocal clones and deepfakes. The act addresses some of the most pressing issues for vocal artists. But there is still work to be done, including informing the public when AI has been used to clone or create digital and audio content.
Vocal artists should stay up to date on the NO FAKES Act and other federal laws to learn how the government is protecting their right to their voice. The adoption of a DMCA-style notice-and-takedown system is promising. But vocal artists will likely need greater protections against the improper or unauthorized use of their voice (like the civil penalties the ELVIS Act allows), as well as stronger regulations requiring the disclosure of any use of AI in advertising, promotions, or other digital or audio content placed on the internet. Given the amorphous legal landscape surrounding vocal cloning, and the existential threat it poses to creatives, seeking input from experienced counsel is encouraged.
"Attack of the (Voice) Clones: Protecting the Right to Your Voice," by Jeffrey N. Rosenthal, Timothy J. Miller, and Liam Leahy was published in The Legal Intelligencer on September 23, 2024.
Reprinted with permission from the September 23, 2024, edition of The Legal Intelligencer © 2024 ALM Properties, Inc. All rights reserved. Further duplication without permission is prohibited.