The music industry is fighting on multiple fronts — streaming platforms, courts, and legislatures — to stop the theft and misuse of artists' work by artificial intelligence. So far, the battle remains an uphill one.
Sony Music said recently that it has already demanded the removal of 75,000 deepfakes — fabricated images, audio, or video that can easily pass as authentic — a figure that underscores the scale of the problem.
According to the cybersecurity firm Pindrop, which specializes in voice analysis, AI-generated music carries telltale signs and can be readily detected. Even when such songs sound realistic, the company says, they often contain subtle irregularities in pitch, rhythm, and digital patterns that do not appear in human performances. Nonetheless, this music seems to be everywhere.
But it takes mere minutes on YouTube or Spotify — two top music-streaming platforms — to spot a fake rap from 2Pac about pizzas, or an Ariana Grande cover of a K-pop track that she never performed.
“We take this really seriously, and we’re trying to work on new tools in that space to make that even better,” said Sam Duboff, who leads policy work at Spotify.
YouTube mentioned it is enhancing its capability to detect AI-generated content, with potential announcements expected in the upcoming weeks.
Because “the bad actors were a little bit more aware” of the problem sooner, musicians, record labels, and others in the music industry have found themselves “on the defensive,” said Jeremy Goldman, an analyst at Emarketer.
“Given the several billion dollars YouTube generates annually, they have a significant stake in resolving this issue,” Goldman stated, emphasizing his belief that they are actively addressing it.
“If you’re on YouTube, you wouldn’t want the platform to turn into some kind of AI horror show,” he stated.
Litigation
However, aside from deepfakes, the music industry is especially worried about the improper usage of its material to educate generative AI systems such as Suno, Udio, or Mubert.
Last year, major record labels sued Udio’s parent company in federal court in New York, accusing it of building its technology on “copyrighted sound recordings” with the ultimate aim of poaching the listeners, fans, and potential licensees of the works it copied.
More than nine months later, the case has barely moved forward. A parallel suit against Suno, filed in Massachusetts, has made no more progress.
At the heart of both lawsuits is the doctrine of fair use, which permits limited use of some copyrighted material without prior permission and could narrow the reach of intellectual property rights.
“There is indeed significant ambiguity here,” stated Joseph Fishman, who is a law professor at Vanderbilt University.
Any initial rulings won’t necessarily prove decisive: conflicting decisions from different courts could ultimately send the issue to the Supreme Court.
Meanwhile, the leading players in AI-generated music continue refining their systems on copyrighted material, raising the fear that, for rights holders, the battle may already be lost.
Fishman said it may be too early to draw that conclusion: although many models are already trained on copyrighted material, new versions are released regularly, and it remains unclear whether future court rulings could create licensing problems for those evolving models.
Deregulation
In the legislative sphere, labels, artists, and producers have achieved minimal success.
Various pieces of legislation have been proposed in the US Congress, yet none have led to tangible outcomes.
Several states — most prominently Tennessee, which houses a significant portion of the influential country music sector — have implemented protective laws, particularly concerning deepfakes.
Another possible hurdle comes from Donald Trump; as the Republican President, he has positioned himself as a strong advocate for deregulation, especially concerning AI.
Notable players like Meta have entered the fray, with the company pushing for the administration to clarify that utilizing publicly accessible data for training models is clearly considered fair use.
Should Trump’s administration follow this counsel, it might tilt the scales against music professionals, despite the judiciary technically having the final say.
The situation in Britain is no better: the Labour government is weighing a change to the law that would let AI companies use creators’ online content to develop their models unless rights holders actively opt out.
In February, more than a thousand musicians, including Kate Bush and Annie Lennox, released a silent album titled “Is This What We Want?” — recordings of empty studios — to protest those plans.
According to analyst Goldman, AI is likely to keep plaguing the music industry for as long as the industry itself remains disorganized.
“The music business is highly fragmented,” he said. “That fragmentation works to its disadvantage in trying to solve this problem.”