AI in Rock Music: Tool, Provocation or New Genre?
In the summer of 2025, Spotify’s algorithm started surfacing a band no one had ever heard of. The Velvet Sundown — blurry retro photos of their supposed “members”, track titles like “Dust on the Wind”, a polished sound reminiscent of 1970s Californian rock. Over a million monthly listeners. No concerts, no interviews, no social media presence whatsoever.
The unmasking came quickly. Reddit users started asking questions when the band’s songs began appearing in their Discover Weekly playlists, yet almost nothing could be found about them online. Eventually, the creators confirmed it themselves: The Velvet Sundown was “a human-led synthetic music project, created, performed and visualised using artificial intelligence”. “This isn’t a trick — it’s a mirror”, they wrote.
The mirror turned out to be deeply uncomfortable.
The Problem Isn’t Quality — It’s Invisibility
The most unsettling thing about The Velvet Sundown isn’t that the music was AI-generated. It’s that listeners didn’t notice. Recommendation algorithms don’t distinguish between a real artist and a programme — they work with the acoustic characteristics of audio. The system is built to find “similar” — and it genuinely doesn’t care who produced it.
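To make that blind spot concrete, here is a minimal, illustrative sketch of a content-based recommender in Python. The track names, feature values and the recommend function are invented for illustration; no streaming platform works exactly like this, but the core logic is the same: tracks are ranked by the similarity of audio-derived feature vectors, and nothing about who made the music ever enters the computation.

```python
import numpy as np

# Illustrative only: a toy content-based recommender.
# Real platforms use far richer models, but the principle holds:
# compare acoustic feature vectors, not authorship.

catalogue = {
    # track_id: hypothetical acoustic features (e.g. tempo, energy, brightness)
    "human_band_song":   np.array([0.72, 0.61, 0.55]),
    "ai_generated_song": np.array([0.70, 0.63, 0.54]),
    "metal_track":       np.array([0.95, 0.92, 0.30]),
}

def recommend(seed: np.ndarray, top_n: int = 2) -> list[str]:
    """Rank catalogue tracks by cosine similarity to the seed's features."""
    def cosine(a: np.ndarray, b: np.ndarray) -> float:
        return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    ranked = sorted(catalogue, key=lambda t: cosine(seed, catalogue[t]), reverse=True)
    return ranked[:top_n]

# A listener who likes soft 1970s-style rock is served whichever vectors sit
# closest, AI-made or not: the similarity score is blind to authorship.
print(recommend(np.array([0.71, 0.62, 0.55])))
```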
Platforms like Suno and Udio are now generating more than ten songs per second. On the French streaming service Deezer, over 20,000 AI tracks are uploaded every single day. This is no longer an experiment — it’s an industrial process running at full capacity.
Two Poles: Threat or Instrument
The music world’s reaction to all of this has split into two camps, and the fault line doesn’t run along age or genre — it runs along attitudes toward authorship itself.
The major labels — Sony Music, Universal and Warner — filed lawsuits against Suno and Udio, accusing them of massive copyright infringement. Thousands of musicians have called for a ban on using human creative work to train AI without consent. On the other side stand those who see new tools not as a threat, but as an expansion of possibilities.
A telling example is the FOTKAI project, a Spanish music media outlet covering live music that also experiments with its own musical content: lyrics written by a human, music and vocals generated by AI. Rock, alternative, metal and electronic exist here not as genre labels but as raw material for experimentation. The result is surprisingly alive, unsettling in places, and nothing like the sterile, forgettable output usually associated with machine generation. It is worth a listen out of sheer curiosity, if only to answer one question for yourself: can you tell the difference?
What Comes Next — and Why It’s Not as Frightening as It Sounds
The story of The Velvet Sundown ended predictably: the project’s audience quickly dropped from a million to fewer than two hundred thousand listeners. Music with no real person behind it turned out to be short-lived — not because it sounded bad, but because listeners had nothing to hold onto. Rock has always rested on personality. On a voice that has a story behind it.
But this is precisely where the strongest argument in favour of AI lies — not instead of the human, but alongside them. Just as the electric guitar once seemed like a threat to acoustic music, and the synthesiser a threat to live instruments, AI is now becoming part of the creative toolkit. Not a replacement for the artist, but a new kind of instrument — simultaneously more powerful and more unpredictable than anything that came before.
There is just one difference: this instrument knows how to imitate authorship. And that is exactly why, in the hands of someone who actually has something to say, it becomes genuinely compelling. In the hands of an algorithm running on autopilot for traffic — just noise. What rock becomes with AI doesn’t depend on the technology. It depends on who picks it up first.