Music Industry Urges Action on AI Voice Theft

Newslooks / Washington, D.C. / Mary Sidiqi / Evening Edition

Tech and music industry leaders testified in Congress, urging swift passage of the No Fakes Act to protect individuals from unauthorized AI-generated deepfakes. Artists, including Martina McBride, warned lawmakers about AI misuse in voice and image cloning. The bipartisan bill would hold creators and platforms liable for unauthorized digital replicas.

Quick Looks
- Senate panel hears testimony on deepfake dangers and AI misuse.
- No Fakes Act would protect against unauthorized voice and image cloning.
- Martina McBride and RIAA champion legislation to protect artists’ rights.
- Platforms may be held liable if they host unauthorized digital replicas.
- Legislation includes exemptions for First Amendment-protected content.
- Over 400 artists have endorsed the bill, including Scarlett Johansson.
- The bill complements Trump’s recently signed Take It Down Act.
- YouTube supports tech-neutral legislation to manage digital rights responsibly.
Deep Look
Artificial intelligence may offer unprecedented creative tools, but it also poses significant threats to privacy and artistic integrity—threats that lawmakers were urged to confront during a Senate Judiciary Committee hearing this week. On Wednesday, top executives from YouTube and the Recording Industry Association of America (RIAA), along with country music artist Martina McBride, testified before a subcommittee on privacy, technology, and the law, urging Congress to pass the bipartisan No Fakes Act, which is designed to curb the misuse of AI to create unauthorized deepfakes.
The hearing highlighted rising concern over AI’s ability to replicate human voices, likenesses, and personalities without consent. Industry experts and artists emphasized that without federal safeguards, individuals—ranging from teenagers to global music icons—remain vulnerable to exploitation, defamation, and fraud.
The No Fakes Act: A Federal Response to AI Abuse
Reintroduced last month in the Senate, the No Fakes Act seeks to establish clear federal protections for individuals whose voices, images, or performances are recreated using artificial intelligence without their consent. The bill would make creators and distributors of unauthorized deepfakes legally liable, including online platforms that knowingly host such content.
The legislation aims to create a uniform national standard to combat the proliferation of deepfakes, addressing a growing problem across entertainment, politics, and everyday social media use. The act includes a notice-and-takedown mechanism, enabling victims to request rapid removal of illicit content without needing to pursue legal action.
Martina McBride, one of the bill’s most vocal proponents, painted a vivid picture of the dangers posed by unchecked AI misuse.
“AI technology is amazing and can be used for so many wonderful purposes,” McBride testified. “But like all great technologies, it can also be abused—by stealing people’s voices and likenesses to scare and defraud families, manipulate young girls’ images, impersonate officials, or produce fake recordings of artists like me.”
Her testimony illustrated a wide spectrum of deepfake harm: from fabricated celebrity content to malicious impersonations of everyday citizens.
Support from Industry Giants and Artists
Backing for the No Fakes Act spans the music, tech, and entertainment industries. Mitch Glazier, CEO of the RIAA, emphasized that the bill is a natural successor to the recently enacted Take It Down Act, which President Donald Trump signed earlier this week. That law imposes stricter penalties for the sharing of non-consensual intimate imagery and deepfakes.
“The No Fakes Act provides a remedy to victims of invasive harms that go beyond intimate images,” Glazier said. “It protects artists like Martina from non-consensual deepfakes and voice clones that breach the trust built with their audiences.”
Over 400 public figures have voiced support for the bill through the Human Artistry Campaign, including LeAnn Rimes, Missy Elliott, Bette Midler, Sean Astin, and Scarlett Johansson. These endorsements reflect widespread concern within the creative community about losing control over personal identity and intellectual property.
Tech Platforms Call for Balanced Regulation
Representing the tech sector, Suzana Carlos, YouTube’s Head of Music Policy, told senators that while her platform sees immense potential in AI, responsible regulation is crucial. She expressed support for the No Fakes Act, noting that it offers a “workable, comprehensive, and tech-neutral” framework that can help global platforms like YouTube manage digital rights more effectively.
Carlos emphasized that the bill does not unfairly penalize platforms or stifle innovation. Instead, it balances creators’ rights with free expression by including exemptions for content protected under the First Amendment.
“YouTube largely supports this bill because we see the incredible opportunity of AI,” Carlos said in her written testimony. “But we also recognize those harms, and we believe that AI needs to be deployed responsibly.”
She warned that without safeguards, the spread of unauthorized deepfakes could undermine the credibility of online content and erode public trust in digital media.
Scope of the No Fakes Act
The No Fakes Act would outlaw the creation and distribution of unauthorized digital replicas in:
- Audiovisual works
- Sound recordings
- Still images
If enacted, platforms could be held responsible if they knowingly fail to act against such content. The legislation also outlines a takedown process similar to that used for copyright violations under the DMCA, streamlining enforcement for victims without the need for legal proceedings.
Importantly, the bill includes a set of exclusions based on First Amendment protections, ensuring that the law does not infringe on constitutionally protected expression, parody, or satire. This element aims to distinguish harmful misuse from protected creative expression.
A Global Tech Challenge with National Implications
The hearing came amid growing national and global scrutiny of AI's ability to create realistic but deceptive content. From election interference to sexual exploitation and celebrity impersonations, deepfakes have become a pressing legal and ethical issue.
By addressing both the creators and distributors of unauthorized digital replicas, the No Fakes Act could serve as a model for international legislation. It underscores an emerging consensus that AI regulation must strike a delicate balance between innovation and rights protection.
Artists, lawmakers, and tech companies seem to agree on one point: while AI offers immense potential, it must be wielded responsibly. The hearing concluded with bipartisan acknowledgment that this legislation may be one of the most important efforts to protect human identity in the digital age.