Whether you’ve wanted to or not, you’ve heard AI-generated music. It’s everywhere these days, especially in rap, with artists like Kanye and Metro Boomin openly embracing the technology. On TikTok in particular, AI vocal covers seem to have found a permanent home. But there are also crystal-clear ethical criticisms to be made: it’s a loss of “humanity,” it lacks proper credit, and it’s a lazy cash grab. And while these may seem obvious, there are still cases of AI usage that manage to circumvent them all. Clearly, there’s a wide and layered landscape here, and it’s worth understanding.
It’s been a lifetime since the bright-eyed innocence of AI music’s early days, when it was still an uncontroversially obvious next step in the medium’s decades-long computerization. The evolution of synthesizers, from Moog, to MIDI, to an endless market of modular gear, feels natural, as do the methods of production: from trackers to DAWs to even live-coded algorave. But what’s jarring is the massive leap AI has taken in recent years; algorithms aren’t just helping work out the details anymore.
There’s now a seemingly endless sludge of AI models that can generate entire songs from just a few pointers on tempo, genre, and vibe. What’s more, they’re all named as haphazardly as modern streaming services: “You can make this on Chirp; there’s jazz on Bark, man; it takes two seconds in Boomy.” The giant among these is Bark, the model behind Suno AI, a brand whose CEO is adamant that most people making music don’t actually enjoy doing it. It’s an opinion you could only hold if you’d never engaged with the medium at any level in your life.
It’s undeniable that the context and intent behind music composition and production matter a great deal. But generative AI tools, more often than not, strip all of that away, relying on training data whose provenance is remarkably unclear. They could be indiscriminately sourcing material from every culture and time period, or only from songs released in America over the last 20 years; unless we’re told, we wouldn’t know. Legally, this is a massive headache, leading to copyright infringement on an unfathomable scale. It’s no surprise that the highest-profile AI music services have been targets of some landmark cases.
But looking past the obvious “stealing from real people” problem, AI-generated music is also existentially strange, almost grotesque, to think about. Everything you hear excreted by Suno AI is a wallpaper amalgamation of millions of creators, allowing no message or singular intent to shine through. You can’t appreciate it on any level deeper than the aesthetic. You can’t even ask how it was made, or what level of expertise was required to produce its melodies, its solos, its bridges. It’s worth nothing more than a factory-made sausage of dubious origin.
There are absolutely murky middle grounds with AI music, but that’s exactly why it’s important to first set our compass by defining what we, as music consumers, should want out of what we listen to. Put simply, music should be self-conscious: it should act as a means of expression from artist to audience. I want to judge music not only by how much it makes me feel, but by how much of what I feel is shared with the artist.
Of course, this stance derives from my broader thoughts on finding value in art. To simplify a much larger rabbit hole: there’s a deeper satisfaction to be found in comparing your experience of an artistic product with the author’s experience of creating it. Without the author, you may still identify the product as art, but any higher-level, connective meaning is lost.
Now that we have a framework for ideal artist-audience interaction, let’s go over the aforementioned “murky middle grounds.” Music is far more complex to produce than traditional art or writing, so naturally, there are AI tools out there to replicate every component: beyond the obvious use of LLMs like GPT-4 for lyrics, AI-powered synthesizers and vocal replacement tools have also cropped up over the years.
Take Synplant 2, an AI-powered synth that made massive waves on its release two years ago. Instead of converting text into audio files, it converts audio files into synth patches, with knobs pre-tuned for artists to tweak further. In other words, it generates parameters rather than fixed-length waveforms, and it was trained on a dataset of algorithmically generated sounds all owned by Synplant’s creators. It’s a rare case of a genuinely ethical dataset, impervious to questions of bias and credit. And because it sits at such a low level of production, it’s the closest AI has come to the digital tools we already have, still requiring musicianship to use tastefully.
Then there are AI vocals, an even iffier subject. In recent years, ghostwriters and producers have used vocal replacement to cook up viral fake-Drake hits, absurd SpongeBob parodies, and a “BBL Drizzy” sample that briefly took center stage in the Kendrick beef. All the while, anyone at home can go viral by making BMO from Adventure Time croon over any song imaginable. It’s brain rot at its finest, but somehow more conscionable than an artist like Playboi Carti passing off AI vocals as his own. The moral question about AI vocal replacement, then, becomes one of intent: if it’s obviously being used as a crutch for a lack of talent, it feels insincere and lazy.
Another good example of this is Vocaloid music, an old and massive subculture built on pre-made voicebanks mapped onto melodies and lyrics like musical text-to-speech. Anyone who’s heard Hatsune Miku knows that the appeal lies in the artifacts of this AI-free process: the robotic quirks add an uncanny element no natural voice can replicate. And now that AI-powered Vocaloid software can replicate voices to near-perfection, the general consensus is against it. At that point, why wouldn’t you just listen to real vocalists?
Looking at all these cases (and referring to previous efforts to guide AI music ethics), we can outline several parameters for judging the ethics of AI in music. The first, as we’ve seen, is the extent and means of production: which elements were AI-generated? If only AI synths or vocal replacements were used, the loss of creativity is almost negligible. There’s also the parameter of credit: more granular uses of AI don’t steal from nondescript swaths of artists.
But we should also consider transparency and intent. In Carti’s case, the allegations alone are a tragic reminder that we’ll need honesty about AI usage going forward. Even more crushingly, there’s a wealth of AI-generated lo-fi hip-hop channels on YouTube, amassing millions of views with zero acknowledgment of their use of AI. In the rare cases where AI does get credit, it’s for the better: artists who openly use AI often have a transformative intent behind it. One of 2025’s best records so far, Echoes on the Hem, uses an AI voice to mechanically drone over Tujiko Noriko’s ambient poeticism, and the inhuman contrast is nothing short of transfixing. The record is intensely self-conscious about this dichotomy, and you’d be hard-pressed to ignore its human element.
It seems that, from all angles, our compass for ethical AI usage should point to its ability to enhance human creativity. Ambient pioneer and English pop legend Brian Eno, the namesake of an AI-generated documentary, recognizes this. In a piece from last December, he summed it up well:
“AI tools can be very useful to an artist in making it possible to devise systems that see patterns in what you are making and drawing them to your attention, being able to nudge you into territory that is unfamiliar and yet interestingly connected.”
In a perfect world, AI tools are just that: tools to develop not our talent, but our creativity. Practically, though, the fast production rate and cheap costs of purely AI-generated music are too profitable to ignore. In the last few years, we’ve seen massive bloat across the music world: Sony has had to remove at least 75,000 AI tracks, at least ten percent of Deezer’s catalog is AI-generated, and hundreds of thousands of AI songs on streaming services were made by just one guy. And I haven’t even touched on the alleged scheme of Spotify stuffing its algorithms with AI music made by fake record labels to avoid paying royalties to real artists.
With all this in mind, it’s all too easy to be cynical about AI’s takeover of the music industry and society in general. But I don’t fear the “humanity” in music going away any time soon. Any of the million music scenes surfacing and evolving all over the world is proof of that. Going to any live show in the city is proof of that. Picking up an instrument, a DAW, or even a DJ controller and finding joy in just noodling around with it is proof of that.
The vast majority of AI-generated music is, in its current state, irredeemable. But there are also glimmers of hope. With a broad understanding of its current landscape, we can begin to navigate back to its most ethical use: as a tool to promote our own creative impulses.