AI-generated music is already an innovative enough concept, but Riffusion takes it to another level with a clever approach that produces weird and compelling music using not audio but images of audio.
Sounds strange, is strange. But if it works, it works. And it does work! Kind of.
Diffusion is a machine learning technique for generating images that supercharged the AI world over the last year. DALL-E 2 and Stable Diffusion are the two most high-profile models that work by gradually replacing visual noise with what the AI thinks a prompt ought to look like.
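If you want a feel for what that denoising process looks like in code, here’s a minimal sketch using Hugging Face’s diffusers library to run a pre-trained Stable Diffusion checkpoint. The model ID, step count, and prompt are just illustrative choices, not anything specific to Riffusion:

```python
# A minimal sketch of text-to-image generation with a pre-trained Stable Diffusion
# model via Hugging Face's diffusers library. Model ID, step count, and guidance
# scale are illustrative choices; assumes a GPU is available.
import torch
from diffusers import StableDiffusionPipeline

pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",   # any Stable Diffusion checkpoint works here
    torch_dtype=torch.float16,
).to("cuda")

# The pipeline starts from pure noise and denoises it step by step toward
# something that matches the prompt.
image = pipe(
    "a watercolor painting of a lighthouse at dusk",
    num_inference_steps=50,
    guidance_scale=7.5,
).images[0]

image.save("lighthouse.png")
```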
The method has proved powerful in many contexts and is highly amenable to fine-tuning, where you give the mostly trained model lots of a specific kind of content so that it specializes in producing more examples of it. For instance, you could fine-tune it on watercolors or on photos of cars, and it would prove more capable of reproducing either of those things.
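Under the hood, that kind of fine-tuning usually boils down to a fairly simple training loop: compress each example image into the model’s latent space, add some noise, and teach the network to predict that noise given the caption. Here’s a rough sketch of one such step using diffusers; the model ID, hyperparameters, and data handling are placeholders, not any particular project’s actual setup:

```python
# Sketch of one text-to-image fine-tuning step with diffusers, loosely following
# the standard recipe: encode the image to latents, add noise, and teach the UNet
# to predict that noise conditioned on the caption. Model ID, learning rate, and
# batch handling are placeholders.
import torch
import torch.nn.functional as F
from diffusers import AutoencoderKL, DDPMScheduler, UNet2DConditionModel
from transformers import CLIPTextModel, CLIPTokenizer

model_id = "runwayml/stable-diffusion-v1-5"
tokenizer = CLIPTokenizer.from_pretrained(model_id, subfolder="tokenizer")
text_encoder = CLIPTextModel.from_pretrained(model_id, subfolder="text_encoder")
vae = AutoencoderKL.from_pretrained(model_id, subfolder="vae")
unet = UNet2DConditionModel.from_pretrained(model_id, subfolder="unet")
noise_scheduler = DDPMScheduler.from_pretrained(model_id, subfolder="scheduler")

# Only the UNet is trained; the autoencoder and text encoder stay frozen.
vae.requires_grad_(False)
text_encoder.requires_grad_(False)
optimizer = torch.optim.AdamW(unet.parameters(), lr=1e-5)

def training_step(pixel_values, captions):
    """One fine-tuning step on a batch of images (tensors scaled to [-1, 1]) and captions."""
    # Compress the images into the latent space the UNet works in.
    latents = vae.encode(pixel_values).latent_dist.sample() * 0.18215

    # Pick a random timestep per image and add the corresponding amount of noise.
    noise = torch.randn_like(latents)
    timesteps = torch.randint(
        0, noise_scheduler.config.num_train_timesteps, (latents.shape[0],)
    )
    noisy_latents = noise_scheduler.add_noise(latents, noise, timesteps)

    # Encode the captions so the UNet is conditioned on the text.
    tokens = tokenizer(
        captions, padding="max_length", truncation=True,
        max_length=tokenizer.model_max_length, return_tensors="pt",
    )
    text_embeds = text_encoder(tokens.input_ids)[0]

    # The UNet learns to predict the noise that was added.
    noise_pred = unet(noisy_latents, timesteps, encoder_hidden_states=text_embeds).sample
    loss = F.mse_loss(noise_pred, noise)

    loss.backward()
    optimizer.step()
    optimizer.zero_grad()
    return loss.item()
```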
What Seth Forsgren and Hayk Martiros did for their hobby project Riffusion was fine-tune Stable Diffusion on spectrograms.
“Hayk and I play in a little band together, and we started the project simply because we love music and didn’t know if it would be even possible for Stable Diffusion to create a spectrogram image with enough fidelity to convert into audio,” Forsgren told TechCrunch. “At every step along the way we’ve been more and more impressed by what is possible, and one idea leads to the next.”
What are spectrograms, you ask? They’re visual representations of audio that show the amplitude of different frequencies over time. You have probably seen waveforms, which show volume over time and make audio look like a series of hills and valleys; imagine a chart that, instead of just the total volume, shows the volume of each individual frequency, from the low end to the high end.
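If you wanted to make one yourself, a few lines of Python with the librosa library will do it. The parameters below are typical choices, not Riffusion’s exact settings:

```python
# Sketch: turning an audio clip into a mel spectrogram with librosa.
# Parameter values are typical defaults, not Riffusion's actual settings.
import numpy as np
import librosa

audio, sr = librosa.load("clip.wav", sr=44100, mono=True)

# How loud each frequency band is in each short slice of time.
mel = librosa.feature.melspectrogram(
    y=audio, sr=sr, n_fft=2048, hop_length=512, n_mels=256
)

# Convert power to decibels so quiet detail is visible, then scale to 0-255 pixels.
mel_db = librosa.power_to_db(mel, ref=np.max)
pixels = np.uint8(255 * (mel_db - mel_db.min()) / (mel_db.max() - mel_db.min()))
# `pixels` is now a 2D array: rows are frequency bands, columns are time frames.
```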
Here’s part of one I made of a song (“Marconi’s Radio” by Secret Machines, if you’re wondering):

You can see how it gets louder in all frequencies as the song builds, and you can even spot individual notes and instruments if you know what to look for. The process isn’t perfect or lossless by any means, but it is an accurate, systematic representation of the sound. And you can convert it back to sound by doing the same process in reverse.
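Going back the other way is the trickier half, because the image only stores how loud each frequency is, not its phase, so the phase has to be estimated. Here’s a rough sketch of that inversion using librosa’s Griffin-Lim-based tools; again, not Riffusion’s actual code, just the general idea:

```python
# Sketch: recovering audio from a mel spectrogram. Because the image stores only
# magnitudes (no phase), the phase is estimated with the Griffin-Lim algorithm,
# which is why the reconstruction is close but not lossless.
import librosa

def spectrogram_to_audio(mel_db, sr=44100, n_fft=2048, hop_length=512):
    """mel_db: 2D array of mel-spectrogram values in decibels (as sketched above)."""
    # Undo the decibel scaling back to power.
    mel_power = librosa.db_to_power(mel_db)
    # Invert the mel spectrogram; this runs Griffin-Lim internally to guess the phase.
    return librosa.feature.inverse.mel_to_audio(
        mel_power, sr=sr, n_fft=n_fft, hop_length=hop_length
    )

# Example usage:
# audio = spectrogram_to_audio(mel_db)
# soundfile.write("reconstructed.wav", audio, 44100)
```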
Forsgren and Martiros made spectrograms of a bunch of music and tagged the resulting images with the relevant terms, like “blues guitar,” “jazz piano,” “afrobeat,” stuff like that. Feeding the model this collection gave it a good idea of what certain sounds “look like” and how it might re-create or combine them.
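In practice, a dataset like that is just a pile of spectrogram images paired with text captions. Here’s a hypothetical sketch of what assembling one might look like; the folder layout, caption format, and the audio_file_to_spectrogram_pixels helper are all made up for illustration, not the authors’ actual pipeline:

```python
# Sketch: building an image/caption dataset of spectrograms for fine-tuning.
# Everything here is hypothetical; the point is simply that each spectrogram
# image gets paired with a text label like "jazz piano".
import json
from pathlib import Path

from PIL import Image

CLIPS = [
    ("clips/blues_guitar_01.wav", "blues guitar"),
    ("clips/jazz_piano_03.wav", "jazz piano"),
    ("clips/afrobeat_07.wav", "afrobeat"),
]

out_dir = Path("dataset")
out_dir.mkdir(exist_ok=True)
metadata = []

for i, (path, caption) in enumerate(CLIPS):
    # Hypothetical helper: e.g. the spectrogram steps sketched earlier, wrapped up.
    pixels = audio_file_to_spectrogram_pixels(path)
    image_name = f"{i:05d}.png"
    # Flip so low frequencies sit at the bottom of the image, save as grayscale.
    Image.fromarray(pixels[::-1]).convert("L").save(out_dir / image_name)
    metadata.append({"file_name": image_name, "text": caption})

# A metadata file in the style expected by common image/caption training scripts.
with open(out_dir / "metadata.jsonl", "w") as f:
    for row in metadata:
        f.write(json.dumps(row) + "\n")
```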
Here’s what the diffusion process looks like if you sample it as it’s refining the image:

And indeed the model proved capable of producing spectrograms that, when converted to sound, are a pretty good match for prompts like “funky piano,” “jazzy saxophone,” and so on. Here’s an example:
