The counterfeit tracks, often called “deepfakes,” targeted some of Sony’s biggest names, including Beyoncé, Queen, and Harry Styles. According to the company, the proliferation of such songs can cause “direct commercial harm to legitimate recording artists,” particularly when they coincide with new album releases.
“In the worst cases, [these deepfakes] potentially damage a release campaign or tarnish the reputation of an artist,” said Dennis Kooker, president of Sony’s global digital business. He noted that the volume of AI-generated content is only increasing as the technology becomes cheaper and more accessible.
Sony estimates that the 135,000 tracks identified so far represent only a fraction of the total uploaded across streaming services. Since March 2025, the company has uncovered roughly 60,000 songs falsely purporting to feature artists on its roster, with other potentially affected artists including Bad Bunny, Miley Cyrus, and Mark Ronson.
Kooker emphasized that the issue is “demand-driven,” taking advantage of the attention surrounding artists’ promotional cycles and ultimately detracting from their legitimate work.
Industry Growth and Regulatory Context
The announcement came during the launch of the International Federation of the Phonographic Industry’s (IFPI) Global Music Report in London, which highlighted a 6.4% growth in recorded music revenues in 2025, bringing total earnings to $31.7 billion (£23.8 billion). This marks the 11th consecutive year of growth, largely driven by streaming subscriptions.
The report also confirmed that the UK remains the world’s third-largest music market, while China overtook Germany as the fourth, despite entering the top 10 less than a decade ago. Taylor Swift emerged as the top global artist of 2025, with her album The Life of a Showgirl being the most popular worldwide.
The music industry event coincided with the UK government’s publication of a report into AI regulation, which paused plans to allow AI firms to train models on copyrighted works without permission, a decision that relieved attendees. “We’ve seen a lot of governments grappling with this issue,” said Victoria Oakley, CEO of the IFPI. “In the UK, they’ve decided to pause and think again.”
Calls for Transparency in AI Music
Beyond deepfakes, the industry is also concerned about streaming fraud, where fake artists use AI-generated tracks to artificially inflate play counts and royalties. The IFPI estimates that as much as 10% of content across streaming platforms may be fraudulent.
“I hate to say it, but it’s very simple to fix,” Oakley said, advocating for mandatory labeling of AI-generated material. Without such transparency, fans cannot distinguish between human-created content and AI counterfeits, undermining trust and the user experience.
Kooker highlighted that some platforms, such as the French streaming service Deezer, have already implemented software to flag AI-generated content, with 34% of submissions now categorized as such. “Transparency shouldn’t be optional; it’s the foundation of a fair and sustainable music ecosystem,” he said.
The industry’s warnings underscore the double-edged impact of AI in music: while it can enhance creativity and innovation, unchecked or unlabelled use risks harming both artists and audiences.
