Music and technology are my two biggest lifelong passions. I've worked as a software engineer professionally for over 10 years and spend a lot of my time on creative coding. I've been playing the guitar and making music for almost 20 years.
In my free time, I've built DrawBeats, a free web-based sequencer used by tens of thousands of students and hobbyists around the world. I've also spent 16 years volunteering to help adults with developmental disabilities play music, including thinking about how assistive technologies might help them create and perform.
This essay is my attempt to understand why I feel the way I do about AI-generated music.
My gut reaction
I generally don't feel good about AI-generated music. As someone caught between these two worlds, I wanted to understand why. It's easy to dismiss it as "soulless", but I wanted something more satisfying than that. I wanted to make sure I was thinking clearly and could have an internally consistent answer on the topic that I could trust and defend. I also wanted to understand how others feel about it.
I've tried to steelman the other side honestly and follow the evidence where it leads. Am I just being a curmudgeon?
Ultimately this piece will be opinionated. What I found surprised me in places. I hope it gives you a useful framework for thinking through these questions yourself, because they're only going to get more pressing.
Where is the technology at right now?
Let's start with a quick quiz.
What share of listeners cannot tell when a song is AI-generated?
[Interactive chart: tap to make your guess, then reveal the answer. Source: Deezer/Ipsos survey of 9,000 people across 8 countries, Nov 2025]
The answer above may surprise you. As with other types of generative AI, the quality of the music is improving rapidly. It wasn't that long ago that we were collectively amused by videos of Will Smith eating spaghetti, and just like generative video and text, music generation has made huge progress in the last few years.
AI music companies like Suno and Udio work by taking a text prompt and generating music based on it. These tools are trained on vast libraries of music from across the internet, and can produce surprisingly convincing results from a simple prompt in a matter of seconds.
Don't just take my word for it. Try the quiz below and see for yourself. Each genre has two clips: an AI-generated one, and a "real" one made by a human ("real" here just means non-AI-generated, and may still include software instruments, samples, or drum loops). All of the real clips were recorded by me or are in the public domain.
Can you tell?
For each genre, listen to both clips, then pick the one you think is real.
[Audio quiz: two clips per genre across Classical, Indie Rock, Jazz, and Lo-fi]
Is this stuff out there today?
Over 60,000 tracks are uploaded to Deezer every day that are fully AI-generated, accounting for roughly 39% of daily uploads. Spotify doesn't disclose any equivalent figures, which is a data point in and of itself. This is no longer a hypothetical problem.
Consider Sienna Rose, a neo-soul "artist" who accumulated over three million monthly Spotify listeners and landed three songs on Spotify's Viral Top 50 in early 2026. She has no verified social media presence, no live shows, and released at least 45 tracks in four months. Deezer confirmed her music is entirely AI-generated.
Different platforms are adopting different approaches: Bandcamp is fully banning AI-generated music, Deezer is allowing it but tagging it, and Spotify is not only allowing it but recommending it alongside real artists.
It's admittedly not an easy problem to solve. Detection can misfire, with unintended consequences for artists who never touched AI tools, or who used ordinary production technology, but get tagged as AI-generated anyway. It's a problem that will require a lot of careful thought and iteration, but I think it's one that needs to be tackled.
The economics of AI-generated music
The economics behind this shift are worth exploring.
Streaming platforms like Spotify operate on a pro-rata royalty system: all subscription revenue flows into a single pool, then gets divided according to each track's share of total streams. AI-generated tracks dilute that pool, since they increase the total number of streams.
[Chart: Royalty Pool Dilution. As more streams go to AI artists, your payout drops because your share of the total stream pool gets smaller, even if your own stream count stays the same. If all streams were from human artists: your 1M streams → ~$3,000. As AI artists flood the pool: your 1M streams → ~$1,500. Based on Spotify's average payout of ~$0.003–$0.005 per stream.]
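To make that concrete, here's a minimal sketch of the pro-rata arithmetic, using illustrative numbers (a hypothetical pool sized at $0.003 per stream) rather than Spotify's actual formula:

```python
# A minimal sketch of pro-rata royalty math (illustrative numbers,
# not Spotify's actual formula or real platform-wide stream counts).

def pro_rata_payout(your_streams: int, total_streams: int, pool_dollars: float) -> float:
    """Your payout is your share of all streams, applied to the whole revenue pool."""
    return pool_dollars * (your_streams / total_streams)

your_streams = 1_000_000
human_streams = 1_000_000_000          # hypothetical all-human platform
pool = 0.003 * human_streams           # pool sized so ~$0.003/stream holds

# Scenario 1: all streams are human.
print(pro_rata_payout(your_streams, human_streams, pool))       # 3000.0

# Scenario 2: AI uploads double the total streams; the pool doesn't grow with them.
print(pro_rata_payout(your_streams, 2 * human_streams, pool))   # 1500.0
```

Notice that your own stream count never changes between the two scenarios; only the denominator does.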
In 2024, journalist Liz Pelly published an investigation in Harper's Magazine uncovering an internal Spotify program called Perfect Fit Content, in which the company commissioned low-cost music from pseudonymous "ghost artists" specifically for its curated mood playlists, with internal analytics tracking which tracks delivered "improved margins." The logic was simple: if users are only half-listening, why pay full royalties? The question is whether AI is the natural next chapter of that same logic. Spotify says no, but the incentive structure raises questions.
Also worth noting: up to 85% of streams on AI-generated tracks are themselves fraudulent. This suggests an emerging industry of bots listening to other bots and generating royalties, not listeners making choices.
Consider what that trajectory looks like at scale. A 2024 study by CISAC, the global body representing over five million songwriters and composers, projects that 24% of music creators' revenues could be at risk by 2028. By then, the study projects that 60% of the music library market could be AI-generated.
An argument can be made that this type of "stock" music is formulaic and worth automating, but this is also the quiet end of the music business, where many working musicians actually make their living. Beyond the economics, the cultural and emotional impact of this is worth considering.
Spotify's public position is worth sitting with for a moment. In mid-2025, the company's head of artist partnerships Bryan Johnson stated there was "infinitely small consumption of fully AI-generated tracks on Spotify" and that there was "no dilution of the royalty pool by AI music." Six months later, Spotify's head of music Charlie Hellman announced that "AI is being exploited by bad actors to flood streaming services with low-quality slop to game the system and attempt to divert royalties away from authentic artists."
The story has continued to move quickly. By late 2025, the major labels, who had sued Suno and Udio just a year earlier for copyright infringement on what they called an "almost unimaginable scale," had reached licensing settlements with both companies. It's a pattern that should feel familiar: it closely mirrors how labels negotiated streaming deals in the late 2000s, largely without artists at the table. That process produced the per-stream rates many musicians feel are unfair today.
The steelman
I want to make the strongest possible case for the other side. The technology is genuinely incredible to me on a technical level, and music has survived every disruption people said would kill it. Why is this any different?
John Philip Sousa, writing about the phonograph in 1906:
These talking machines are going to ruin the artistic development of music in this country. When I was a boy...in front of every house in the summer evenings, you would find young people together singing the songs of the day or old songs. Today you hear these infernal machines going night and day. We will not have a vocal cord left. The vocal cord will be eliminated by a process of evolution, as was the tail of man when he came from the ape.
Sousa was, of course, wrong. Recorded music helped make music more accessible to more people, and helped it grow and evolve. And we certainly didn't see the end of singing.
History shows an interplay between technology and art: technology disrupts and displaces, but it also creates new opportunities and new forms of expression. Hip-hop producers used sampling to create new sounds. GarageBand created a generation of bedroom producers, including me. Electronic artists turned synthesizers and drum machines into a new genre of music. This back and forth has been playing out for centuries.
Mikey Shulman, the CEO of Suno, describes their goal as "first and foremost giving everybody the joys of creating music." It's hard to dismiss that vision in isolation. According to a 2022 YouGov survey, one in six Americans wishes they had learned an instrument but never did. Among those who did learn and later quit, more than half who played guitar or electric guitar regret stopping. The desire to make music is real and widespread. The barrier, for most people, is the difficulty of the craft. For some populations, such as those with disabilities, that barrier is even higher.
Why I'm unconvinced
The promise is straightforward: a human uses a tool and makes an output. If the tools can do more, then more people can make more output.
Shulman's framing treats creativity as an execution gap: you have something in your head, and the tool's job is to get it out with as little friction as possible.
It's not really enjoyable to make music now... it takes a lot of time, it takes a lot of practice, you need to get really good at an instrument or really good at a piece of production software. I think the majority of people don't enjoy the majority of time they spend making music.
But that's not how making music works, at least not in my experience. The idea doesn't really exist in isolation before the process. You find it by playing, by making mistakes, by following something unexpected.
That being said, there are of course parts of the music making process that are tedious and frustrating. Tools that help us focus on the creative process and make the tedious parts easier are valuable but categorically different from tools that replace the process entirely.
Suno's ambition goes well beyond smoothing those rough edges. Their goal, in Shulman's own words, is not to make existing creators "10% faster". It's explicitly to build something for a billion people. It is branded as a paradigm shift, not as a slightly more robust autotuner or drum machine.
We didn't just want to build a company, let's say, that makes the current crop of creators 10% faster or makes it 10% easier to make music. If you want to impact the way a billion people experience music you have to build something for a billion people.
My goal is definitely not to gatekeep who gets to make music. I believe everyone is capable of it, and I've spent a long time thinking about how technology can lower barriers rather than replace people entirely. That distinction matters. A tool that helps someone with limited motor control play a chord they couldn't otherwise play is categorically different from one that generates a song on their behalf. One empowers the human while the other does the work for them. One is assistive technology while the other is generative.
It's also worth acknowledging that there are more nuanced cases. An experienced producer sketching out an arrangement quickly, or using AI to test a melodic idea before committing to it does feel like it straddles the line between assistive and generative. But Shulman has been explicit: this isn't a tool for people who already make music. It's for a billion people who don't. Those are different products with different implications, and I think it's worth being honest about which one we're actually talking about.
Ethics
The training data question is the most straightforward of the objections, and in some ways the hardest to argue around. Major generative music models are trained on essentially all music files on the open internet, without the consent or compensation of the artists who made them. Suno and Udio were sued by the major labels for copyright infringement on exactly these grounds, and the labels have since reached licensing settlements with both companies.
The defense hinges on the argument that the use is transformative. The legal case will play out in court. But there's something telling when the justification sounds more like a technicality than a moral principle.
It's also worth asking: is prompting Suno more like a producer working collaboratively with a band, or more like a very expressive Spotify search? The tools try to blur this line. You can hum a melody or feed in a rough sketch of a chord progression and get into a flow that feels like co-creation. Personally, it feels closer to an expressive search that takes in search terms, goes into a black box, and presents the user with several options that match. Even if the search is dynamic and the results are fast, curation and creation feel fundamentally different.
The externalized costs don't stop there. Good music-specific energy numbers are still hard to find, but audio generation is generally understood to be more compute-intensive than text and less than video. Even with conservative assumptions, the scale matters: Suno alone generates 7 million tracks per day, enough to match Spotify's entire library every two weeks, while U.S. data centers consumed 183 terawatt-hours in 2024 and are projected to grow sharply.
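That "every two weeks" claim is easy to sanity-check. Assuming Spotify's commonly cited catalog size of roughly 100 million tracks (an assumption on my part, not a figure from the sources above):

```python
# Back-of-envelope check: how long does Suno's daily output take
# to match a ~100M-track catalog? (Catalog size is an assumption.)
suno_tracks_per_day = 7_000_000
spotify_catalog_size = 100_000_000

print(spotify_catalog_size / suno_tracks_per_day)  # ≈ 14.3 days, i.e. about two weeks
```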
All technology has some environmental cost, but it's worth considering whether the scale of the cost is justified by the benefits.
Culture
The cultural cost is the hardest to quantify and potentially the most important. Hyper-personalization changes how we listen to music, and AI will only accelerate what's already happening. Researchers have documented what they call "taste tautology", a feedback loop where recommendation algorithms reinforce prior preferences so strongly that listeners find themselves trapped in an endless loop of the same artists, the same sounds. UNESCO's 2025 report on AI and culture warned that these systems risk fostering monocultures, filtering out minority voices and experimental sounds that don't immediately match a user's prior data.
But the deeper loss may be more collective. Music has always been one of the primary ways humans synchronize with each other, be it through shared playlists, songs everyone knows, or the moment at a show when a room full of strangers feels like one thing. When everyone's feed is perfectly individualized, there is a cultural loss that's hard to quantify.
I wouldn't want to go back to a world where only the most curious listeners sought out new music. But an overcorrection into having no shared culture at all isn't a good outcome either. The goal should be more people finding more music, not everyone disappearing into their own perfectly optimized bubble.
Personal satisfaction
Research consistently shows that music engagement promotes social cohesion and combats loneliness. And while the work here is still early and should be taken with a grain of salt, a recent neuroscience study found that people who created without external aids produced more abstract, introspective work with stronger intrinsic neural coupling, while those who relied on AI or search engines tended toward more generic, outwardly framed output.
I've seen this in my own life and in the musicians I've worked with.
Tech doesn't exist in isolation
It is very tempting to give into the allure of tools that make things easy. I wrestle with this in my own coding work. The sales pitches are compelling, especially in fields where outputs are often considered more valuable than the process of creating them.
Ultimately, I don't think it makes sense to judge any technology in isolation. Reasonable arguments can be made on either side of whether a given technology is net positive.
In this case, it's hard to reconcile the idea of democratizing creativity with the fact that the same period has seen the National Endowment for the Arts gutted, arts education grants canceled mid-cycle, and the institutions that actually teach people to make things systematically defunded.
Making music more accessible, whether it's for children, adults, or people with disabilities, is a virtuous goal. But it's even more of a cultural and political issue than a technical one. Democratizing creativity means valuing it and investing in it, rather than replacing the process and displacing the people who make it.
A hopeful ending
What share of listeners want AI-generated music clearly labeled?
[Interactive chart: make your guess, then reveal the answer. Source: Deezer/Ipsos survey of 9,000 people across 8 countries, Nov 2025]
I find it striking that most people can't tell the difference between AI-generated and human-made music, yet still want to know when it's AI-generated. There is something in us that wants to know another person was on the other side of the music. Music can ultimately be reduced to 1s and 0s, but we seem to have a natural aversion to something other than a person generating it that way.
A 2025 Pew Research survey found that 38% of Americans would like a song less if they found out it was AI-generated. But what surprised me more was the age breakdown. Adults under 30 were nearly twice as likely to react negatively as those over 65. The generation Suno is most explicitly targeting turns out to be the one most likely to feel betrayed by it.
So to answer the original question of this essay, I don't think music is just sound. At least not in the way I like to think about it.
It may be the case that we need new words to describe these new ideas. There is a version of music that is oriented around how sound makes you feel, where it doesn't matter where the sound comes from. That is a valid relationship to music, and AI-generated music will likely be transformational there.
But there will always be a version of music that is more strongly tied to artistry and human expression. People want to know the feelings behind the music come from another human's story. In the same way that paintings aren't just colors and poetry isn't just words, I think the output of music is just an artifact of a process that is irreducibly human.
Thank you!
If you like this type of content, you can follow me on BlueSky. If you want to support me further, buying me a coffee would be much appreciated. It helps keep the lights on and the servers running! ☕
We're just getting started.
Subscribe for more thoughtful, data-driven explorations.
