Music and technology are my two lifelong passions. I’ve worked as a software engineer for over a decade and have been playing guitar and making music for 20 years.
In my free time, I've built DrawBeats, a free web-based sequencer used by tens of thousands of students and hobbyists in over 100 countries. I've spent 16 years volunteering to help adults with various types of disabilities play music, including thinking about how assistive technologies might help them create and perform.
This essay is my attempt to understand why I'm troubled by AI-generated music.
It's easy to dismiss it as "soulless", but that reaction made me wonder why I felt that way. As someone caught between the worlds of music and technology, I wanted to make sure I was thinking clearly and could have an internally consistent answer on the topic that I could trust and defend. I also wanted to understand how others feel about it.
I've tried to steelman the other side and follow the evidence where it leads. What I found surprised me, made me feel uncomfortable, offered some hope, and made me think more deeply about music. Ultimately, this piece will be opinionated. I hope it gives you a useful framework for thinking through these questions yourself, because they're only going to get more pressing.
Where is the technology at right now?
AI music companies like Suno and Udio have been around for a few years now. These tools learn the patterns and structures of human music by training on vast libraries of existing tracks, then use that knowledge to generate new music from a text prompt in a matter of seconds.
Let's orient ourselves with a quick quiz:
What share of listeners cannot tell when a song is AI-generated?
Deezer/Ipsos survey of 9,000 people across 8 countries (Nov 2025)
The answer may surprise you. It wasn't that long ago that we were collectively laughing at videos of Will Smith eating spaghetti. Just like generative video and text, however, music generation has made undeniable progress in the last few years.
Don't just take my word for it. Try the quiz below and see for yourself. Each genre has two clips: an AI-generated one, and a "real" one made by a human (note that "real" here just means non-AI-generated, and may still include software instruments, samples, or drum loops). All of the real clips were recorded by me or are in the public domain.
Can you tell?
For each genre, listen to both clips, then pick the one you think is real.
Classical
Indie Rock
Jazz
Lo-fi
Is this stuff out there today?
Yes. Over 60,000 fully AI-generated tracks are uploaded to the Deezer platform every day, accounting for roughly 39% of daily uploads. Spotify doesn't disclose any equivalent figures, which is a data point in and of itself. This is no longer a hypothetical problem.
Consider the Sienna Rose case, a neo-soul "artist" who accumulated over 3 million monthly Spotify listeners and landed three songs on Spotify's Viral Top 50 in early 2026. She has no verified social media presence, no live shows, and released at least 45 tracks in four months. Deezer confirmed her music is entirely AI-generated. She's far from the only one.
Different platforms are adopting different approaches: Bandcamp is banning AI-generated music outright; Deezer is allowing it but tagging it and excluding it from recommendations (while also demonetizing fraudulent streams); and Spotify is not only allowing it but recommending it alongside real artists.
Tagging tracks and artists as AI-generated is admittedly not an easy problem to solve. False positives could harm real artists who never touched AI tools, or who used some technology to help them make music but get tagged as AI-generated anyway. It may also set off an arms race between AI musicians and the engineers building better detection tools. It's a problem that will require a lot of careful thought and iteration, but it's one that needs to be tackled.
The economics of AI-generated music
Streaming platforms like Spotify operate on a pro-rata royalty system: all subscription revenue flows into a single pool, then Spotify takes a cut and the rest gets divided according to each track's share of total streams. AI-generated tracks dilute that pool by increasing the total number of tracks and streams competing for the same revenue.
Bot fraud compounds this further: some estimates suggest that up to 85% of streams on AI-generated tracks are themselves fraudulent, actively redirecting royalties away from human artists. For comparison, the total rate of fraudulent streams across all tracks is estimated to be less than 10%.
This suggests an emerging industry of bots listening to other bots and generating royalties, completely separate from human listeners making choices.
Royalty Pool Dilution
As more streams go to AI artists, your payout drops because your share of the total stream pool gets smaller, even if your own stream count stays the same.
If all streams were from human artists: your 1M streams → ~$3,000 *
As AI artists flood the pool: your same 1M streams → ~$1,500
* based on typical effective per-stream rates today
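The dilution above falls straight out of the pro-rata formula: your payout is the pool times your share of total streams. Here's a minimal sketch with hypothetical numbers (pool size and stream counts chosen to match a ~$0.003 effective per-stream rate, not any platform's actual figures):

```python
def pro_rata_payout(pool_dollars, your_streams, total_streams):
    """Pro-rata royalties: your payout is your share of all streams, times the pool."""
    return pool_dollars * your_streams / total_streams

# Hypothetical monthly numbers, after the platform's cut.
pool = 3_000_000
your_streams = 1_000_000
human_streams = 1_000_000_000

before = pro_rata_payout(pool, your_streams, human_streams)

# AI tracks add another billion streams; your own count is unchanged.
after = pro_rata_payout(pool, your_streams, human_streams + 1_000_000_000)

print(round(before), round(after))  # 3000 1500
```

Note that your stream count never changes; only the denominator grows. That's why dilution hits every human artist at once, regardless of whether any individual listener switched away from them.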
In 2024, journalist Liz Pelly published an investigation in Harper's Magazine uncovering an internal Spotify program called Perfect Fit Content, in which the company commissioned low-cost music from pseudonymous "ghost artists" specifically for its curated mood playlists. These ghost artists were paid a fraction of standard royalty rates.
The logic was simple: if users are only half-listening, why pay full royalties? The question is whether AI is the natural next chapter of that same logic. Spotify says no, but the incentive structure raises questions.
Consider what that trajectory looks like at scale. A 2024 study by CISAC, the global body representing over five million songwriters and composers, projects that 24% of music creators' revenues could be at risk by 2028. By then, the study projects that 60% of the music library market could be AI-generated.
Spotify's public position on AI-generated music is worth sitting with for a moment. In June 2025, the company's Head of Artist Partnerships Bryan Johnson stated there was "infinitely small consumption of fully AI-generated tracks on Spotify" and "no dilution of the royalty pool by AI music". Just six months later, Spotify's Head of Music Charlie Hellman described exploitation by bad actors creating low-quality slop to game the system and divert royalties.
AI is being exploited by bad actors to flood streaming services with low-quality slop to game the system and attempt to divert royalties away from authentic artists.
By late 2025, the major labels, who had sued AI music companies just a year earlier for copyright infringement on what they called an "almost unimaginable scale", had reached licensing settlements with them. It's a pattern that might feel familiar; it closely mirrors how labels negotiated streaming deals in the late 2000s, largely without artists at the table, calling into question whether copyright law was ever really about protecting creative labor. That process produced the per-stream rates musicians feel are unfair today.
An argument can be made that "musical wallpaper" (the music in the background while we're working, cooking, or exercising) is formulaic and worth automating. But it's also the quiet end of the music business, where many working musicians actually make their living. Furthermore, the tools aren't being used exclusively for this type of background music. An AI "artist" named Xania Monet has a song about pregnancy loss called "Miscarriage Blues" with over half a million plays on Spotify. The music is fully AI-generated; only the lyrics were written by a human.
At the very least, people deserve to know when they're listening to AI-generated music. I'd prefer even my lofi beats to be human-made.
The steelman
I want to make the strongest possible case I can for the other side. The technology is genuinely remarkable on a technical level, and music has survived every disruption people said would kill it. Why should this be any different?
Consider this from John Philip Sousa, the American composer who wrote many of the most popular patriotic songs of the late 19th and early 20th centuries:
These talking machines are going to ruin the artistic development of music in this country. When I was a boy...in front of every house in the summer evenings, you would find young people together singing the songs of the day or old songs. Today you hear these infernal machines going night and day. We will not have a vocal cord left. The vocal cord will be eliminated by a process of evolution, as was the tail of man when he came from the ape.
Sousa was, of course, wrong. Recorded music helped make music more accessible to more people, and helped it grow and evolve. And we certainly didn't see the end of singing.
History shows an interplay between technology and art. It's true that technology does disrupt and displace, but it also creates new opportunities and new forms of expression. Hip-hop producers used sampling to create new sounds. GarageBand created a generation of bedroom producers, like myself. Electronic artists turned synthesizers and drum machines into a new genre of music.
These disruptions have never been painless. Napster threatened to collapse the industry. Streaming helped with that problem, but created a new set of economic problems that musicians are still living with today. The interplay between technology and music is a constant tension, sometimes breaking barriers, and sometimes creating new ones. Good music has always found a way to survive.
Mikey Shulman, the CEO of Suno, describes their goal as "first and foremost giving everybody the joys of creating music". It's hard to dismiss that vision in isolation.
According to a 2022 YouGov survey, one in six Americans wishes they had learned an instrument but never did. Among those who did learn and later quit, more than half who played guitar regret stopping. The desire to make music is real and widespread. The barrier, for many people, is the difficulty of the craft. For some populations, such as those with disabilities, that barrier is even higher.
A tool that helps professionals work faster, amateurs create more easily, and enables people who might never otherwise make music is undeniably powerful. Just like photography ended up expanding the visual arts, new technology can expand music.
Why I'm unconvinced
Shulman's framing treats creativity as something of an execution gap: you have something in your head, and the tool's job is to get it out with as little friction as possible. The promise is straightforward: if the tools can do more, more people can make more output.
It's not really enjoyable to make music now... it takes a lot of time, it takes a lot of practice, you need to get really good at an instrument or really good at a piece of production software. I think the majority of people don't enjoy the majority of time they spend making music.
But that's not how making music works, at least not in my experience. The idea doesn't really exist in isolation before the process, just waiting to be expressed by a series of vague text prompts. You find it by playing, by making mistakes, by following something unexpected.
A tool that jumps straight to the output short-circuits all of that. The output arrives before any discovery can happen. This treats creativity as a productivity or "content creation" problem, where getting to the output as easily as possible is the goal.
There are, of course, parts of the music-making process that are tedious and frustrating. For an experienced producer, quickly sketching out an arrangement or using AI to test a melodic idea before committing to it could be genuinely helpful. These cases straddle the line between a tool that helps us make music and a tool that generates it for us, and I think they're much more straightforwardly beneficial.
Suno's ambition goes well beyond smoothing those rough edges. Their goal, in Shulman's words, is not to make existing creators "10% faster". It's to build "meaningful consumption experiences" for a billion people. Suno even offers a TikTok-style social feed called Hooks, where users can combine their videos with AI-generated music. It's branded more as a paradigm shift and a full ecosystem than a new autotune or drum machine, so I think we should evaluate it as such.
We didn’t just want to build a company, let’s say, that makes the current crop of creators 10% faster or makes it 10% easier to make music. If you want to impact the way a billion people experience music you have to build something for a billion people.
Advocates will sometimes compare AI music's effect on music to photography's effect on visual art as a way to suggest that resistance to new technology is just fear of change. But photography created something new and announced itself as such. Shulman has said it "doesn't make sense to have an AI world and a non-AI world of music". The goal is indistinguishability, not a new medium. And unlike photography, which built its own identity from scratch, AI music was trained on the entire corpus of human music without consent and now competes directly with the people it learned from. There's no reason to think that new technology will always yield the same positive outcomes as it has in the past.
There's a clear difference in scale when we compare AI music generation to historic technological disruptions in music. Drum machines and samplers didn't autonomously generate 7 million songs a day and distribute them to streaming platforms without most of us noticing. The speed at which AI music is being created, distributed, and recommended means that the usual process of culture adapting to new technology simply doesn't have time to play out like it has in the past.
The goal of this essay isn't to gatekeep who gets to make music. I think everyone is capable of it, and I've spent a long time thinking about how technology can lower barriers rather than replace people entirely. A tool that helps someone with limited motor control play an instrument they couldn't otherwise play is categorically different from one that generates a song on their behalf. One empowers the human while the other does the work for them. One is assistive technology while the other is generative.
The value of being a listener
I don't think the sales pitch of 'everyone should be able to do everything' is as self-evident as it sounds. Not everyone needs to be able to do every craft at an expert level. I love visual art and dabble in watercolors occasionally, badly, and with some real joy. But I never feel cheated that I can't produce gallery-quality work. There's something profoundly human about standing in front of an expert's work and letting it in. We value their skill and craft precisely because it is rare and awe-inspiring. We should make access to great art and knowledge as easy as possible, not just offer shortcuts to the output.
Being a listener, a viewer, or a reader is its own complete relationship with art, not just a consolation prize for people who never learned the craft. The hunger AI music claims to satisfy might just be misdescribed. It's rarely "I need to be able to immediately generate my own version of this". More often it's "I want more of this feeling", and for that, human-made music already has an answer.
Creation or curation?
One interesting question to consider: is prompting a tool like Suno more like a producer working collaboratively with a band, or more like a very expressive Spotify search? The tools try to blur this line. You can hum a melody or feed in a rough sketch of a chord progression and get into a flow that feels like co-creation. To me, it's closer to an expressive search that receives search terms, goes into a black box, and presents the user several options that match the search terms.
The experience makes a user feel as if the music emerged from their own creative process, and can make other forms of creation feel like a chore in comparison. Writing effective prompts is a skill, but it's different from making the thing itself. Over time, it may accelerate our deskilling and make creative work feel more like playing a slot machine, prompting and regenerating until something sounds right.
As groundbreaking as synthesizers and drum machines were, they were still deterministic extensions of our own intent. A musician knew what they were getting with them. If the process of creating music is like putting together a puzzle, synthesizers and drum machines give us new shapes and sizes of pieces to work with. The human still does the putting together. AI music generation as Suno sells it is more like asking someone else to put a puzzle together for you entirely. If you have to "learn" the song after it's already made, something else created it.
There's a phrase I've been glad to see gaining traction: "the friction is the point".
Don't be fooled by the internet... It's cool to use the computer, don't let the computer use you. You all saw The Matrix. There's a war going on. The battlefield's in the mind, and the prize is the soul. Just be careful.
Ethics
The training data question is the most straightforward of the objections, and in some ways the hardest to argue around. Major generative music models are trained on open-web music files, without the consent or compensation of the artists who made them. This feels wrong on a basic level to me. AI companies disagree.
The defense hinges on the argument that the use is transformative, and follows a similar process to the ones humans use to learn and create. The legal case will continue to play out in court and in the court of public opinion. But it's telling when the justification sounds less like a moral principle and more like a technicality of laws enacted before these technologies could even be imagined.
There are externalized costs beyond the legal and ethical ones. Good music-specific energy numbers are still hard to find, but audio generation is generally understood to be between text and video in terms of energy usage. Even with conservative assumptions, the scale matters: Suno alone generates 7 million tracks per day, enough to match Spotify's entire library every two weeks. That generation has a real energy cost: U.S. data centers consumed 183 terawatt-hours in 2024 and are projected to grow sharply.
All technology has some environmental cost, but it's worth considering whether the scale of the cost is justified by the benefits.
Culture
The cultural cost is the hardest to quantify and perhaps the most important. Hyper-personalization changes how we listen to music, and AI will only accelerate what's already happening. In some interviews, Shulman has mentioned that hyper-personalization is a major risk of AI-generated music, but in other statements, he describes moving beyond genre-level targeting toward music that feels "truly personal", comparing the shift to the rise of selfies in the last 20 years.
Researchers have documented what they call "taste tautology", a feedback loop where recommendation algorithms reinforce prior preferences so strongly that listeners find themselves trapped in an endless loop of the same artists and the same sounds. UNESCO's 2025 report on AI and culture warned that these systems risk fostering monocultures, filtering out minority voices and experimental sounds that don't immediately match a user's prior data.
I was so tired of Spotify giving me the same overplayed recommendations. When everyone can create, the catalog becomes infinite and music becomes even more personalized. Instead of competing for mainstream hits, AI unlocks an ever-expanding long tail, meaning everyone can find their song, not just a song.
I find this vision of music as a disposable commodity rather dystopian. Music has always been one of the primary ways humans synchronize with each other, be it through oral tradition, shared playlists, or the moment at a show when a room full of strangers feels like a collective. When everyone's feed is perfectly individualized, there is a potential cultural loss that's difficult to measure.
I wouldn't want to go back to a world where new music was reserved for the privileged or the especially curious. But an overcorrection into having no shared culture isn't a good outcome either. The goal should be more people finding more music, not everyone disappearing into their own perfectly optimized bubble managed by billion dollar companies.
Personal satisfaction
There is ample evidence that the act of creating something yourself is healthy and rewarding. Research consistently shows that engaging with music benefits mental well-being, including reduced loneliness and trauma resilience. A recent neuroscience study found that people who created without external aids produced more abstract, introspective work, while those who relied on AI or search engines tended toward more generic, outwardly framed output.
I've seen this in my own life and in the musicians I've played with. My experiments with Suno while putting this essay together were very impressive on a technical level, but I feel strongly that nothing I could ever make with that tool would equal the satisfaction of slow growth and my own expression, even if the technical quality of the final product converges with the human-generated one. In a February 2025 interview, Shulman described a vision of music as "a lot more quick hit dopamine, kind of the way music should be". This is a fundamental values disagreement for me.
To me, it seems crazy that music should not be as engaging as Fortnite.
Tech doesn't exist in isolation
It's very tempting to give into the allure of tools that make things easy. The sales pitches are compelling, especially in fields where outputs are often considered more valuable than the process of creating them.
Ultimately, I don't think it makes sense to judge any technology in isolation. Reasonable arguments can be made on either side of whether a given technology is net positive.
In this case, it's hard to reconcile the idea of democratizing creativity with the fact that the same period has seen the National Endowment for the Arts budget being gutted, arts education grants canceled mid-cycle, and the institutions that actually teach people to create systematically defunded.
Making music more accessible, whether it's for children, adults, or people with disabilities, is a virtuous goal. But it's even more of a cultural and political issue than a technical one. Democratizing creativity means valuing it and investing in it, rather than replacing the process and displacing the people who make things at scale.
It feels like the tech is being sold as an inevitability. We can't stop it, so we might as well get on board. It's true that the tech is here, but how society views art and embraces technology is ultimately up to us.
A hopeful ending
Let's end the same way we began, with a quiz:
What share of listeners want AI-generated music clearly labeled?
Deezer/Ipsos survey of 9,000 people across 8 countries (Nov 2025)
Most people can't tell the difference between AI-generated and human-made music, yet they still care whether it was made by AI. We've accepted technology's help in creating music, listening to it, and sharing it with others. Music can ultimately be represented as 1s and 0s, but we seem to have a natural aversion to something else creating it that way.
A 2025 Pew Research survey found that adults under 30 were nearly twice as likely to react negatively as those over 65. The generation companies like Suno are most explicitly targeting turns out to be the one most likely to feel betrayed by it.
So to answer the original question of this essay: I don't think music is just sound. At least not in the way I understand it.
It may just be the case that we need new words to describe these new ideas. There is a version of music that is oriented around how sound makes you feel, and that it doesn't matter where the sound comes from. That is a valid relationship to music, and AI-generated music will likely be transformational there.
But there will always be a version of music that is more strongly tied to artistry and human expression. People want to know that the feelings behind the music come from another human's story and lived experience. In the same way that paintings aren't just colors and poetry isn't just words, I think the output of music is just an artifact of a process that is irreducibly human.
My hope is that the current moment pushes us towards that version of music.
Thank you!
If you like this type of content, you can follow me on BlueSky. If you wanted to support me further, buying me a coffee would be much appreciated. It helps me keep the lights on and the servers running! ☕
