How ghostwriter’s “AI Drake” Could Spell Trouble for Independent Artists

This week the music industry was up in arms following the release of ghostwriter’s AI-generated track ‘Heart on My Sleeve’ on Spotify, Apple Music, Deezer, and other streaming services. And quite rightly so. While it might seem like a middle finger to the big bad major labels, the knock-on effect this could have on independent music could be disastrous. But how?

Let’s go back to the beginning. What began as a viral TikTok became something of a historic moment within the music industry. The artist, known only as ghostwriter (or ghostwriter977 on TikTok), used generative AI not only to create the music but also to generate vocal tracks in the style of Drake and The Weeknd.

For the most part, the TikTok video was harmless, showcasing what generative AI can do. That was until the flood of “this needs to be on streaming” comments became a little too much to resist (a fair assumption on my part, as I didn’t follow this story until it blew up), and ghostwriter, using a still-unknown distributor, dropped the track on DSPs. It quickly racked up hundreds of thousands of streams on Spotify and millions on YouTube.

Of course, it didn’t last too long. With more and more reports of AI Drake making waves across the music industry, Universal Music Group, the label that has both Drake and The Weeknd on its roster, wasn’t having any of it and eventually got the track taken down.

Seems like the situation is pretty done and dusted, right? No harm, no foul. Well, not exactly. I personally think this has the potential to open the floodgates for people using generative AI to pump out low-quality music onto streaming services, further saturating a market that is already struggling to keep its head above water.

The most frightening thing is that the ghostwriter example isn’t an isolated incident either – it’s just a more mainstream one because of how blatant its use of intellectual property was.

Streaming services, Spotify in particular, have had a fake-artist problem for some time now, and it doesn’t seem to be stopping any time soon. As recently as this week (April 18, 2023), Adam Faze reported on Twitter that Spotify’s own algorithm was serving them over 50 versions of the exact same song.

But again, what does this have to do with independent artists, and how will it affect them in the future? To put it simply: the more DSPs are flooded with low-quality (likely generative) content, the more restrictive and particular both DSPs and distributors will become about the music they host on their platforms.

While DSPs already have a few strict guidelines around distributing music, mostly regarding metadata and the type of artwork you can upload (no URLs or social media handles on the cover, no explicit or copyrighted artwork, and so on), we’re also seeing distributors weigh in on the type of music you can distribute.

Recently, one Twitter user posted that their “Noise for Sleep” release was rejected by DistroKid because it “doesn’t really comply with the spirit of Distrokid”. It turns out DistroKid no longer accepts jam tracks, sound effects, lectures, podcasts, background music, production tools, “and that sort of thing.”

Adding to this, DSPs considered “Rights Management Platforms”, which include Facebook Rights Manager (covering Instagram as well), TikTok, YouTube Content ID, and Resso, are asking distributors to warn against, or simply restrict, any releases that contain non-exclusive samples. Non-exclusive samples are samples anyone can download and use, such as Splice’s entire catalogue of royalty-free samples.

The reason for this is likely the hundreds, if not thousands, of cut-and-paste tracks using royalty-free samples straight out of the box that are being registered in Content ID systems. This puts unnecessary strain on those systems, making it harder to distinguish genuinely copyrighted content from two artists who simply used the same royalty-free sample.

It’s also no secret that DSPs and major labels such as Universal Music Group are openly talking about “issues” such as non-music content, poor-quality music, and even tracks cut to a certain length in order to game the system into paying out more royalties.

Furthermore, as Ted Gioia highlighted, platforms such as Spotify are no longer just places to listen to music: their algorithm-driven playlists and recommendations are now nudging, or “manipulating”, listeners into hearing what Spotify thinks they want to hear. For the most part, the recommendations can be pretty spot on, but as Faze’s case above shows, there are flaws in the system.

This raises the question: will listeners be happy hearing nothing but generative content because Spotify is crammed full of hundreds, if not thousands, of pieces of music that all sound alike (and are therefore recommended more often) because artists used the same AI models? Probably not, and that means they’ll turn elsewhere for their music, either to a competitor or, and this is a reach, back to physical media.

While I personally don’t think anything drastic is going to happen soon, I do fear that we’re heading back to a landscape of music distribution similar to the 2000s, when iTunes and Spotify were in their infancy. There were no distributors you could pay $30 a year for unlimited releases; instead, you were stuck paying flat rates of around $99 PER SINGLE. It was often easier to just upload your music to MySpace or SoundCloud.

I’m worried that music distribution will once again be limited to the lucky few, whether that means artists being scooped up by major or independent labels, or distributors becoming incredibly restrictive and picky about the type of music they release.

Audiomack’s Dave Edwards said it perfectly:

I’m not opposed to AI being used in music; in fact, I think it has some genuinely useful applications. But using it as a quick way to pump out low-quality music with low-quality artwork will do nothing but make the future harder for independent and smaller artists.