Soundverse Just Released a Blueprint For Fair AI Music

Introduction

Here's a question worth asking: if AI can learn to make music by listening to thousands of songs, who gets paid when that AI creates something new?

It's not a hypothetical anymore. AI music tools are everywhere now. Some of them were trained on millions of songs without asking permission or paying anyone. The lawsuits have started. Major labels are suing companies like Suno and Udio, and in Germany, a music rights group found AI-generated songs that sounded almost identical to classics like "Daddy Cool" and "Forever Young." Courts are asking for proof, and regulators are starting to question whether this whole thing is even legal.

That's why Soundverse just published a 50-page whitepaper called "Towards an Ethical Framework for AI Music: End-to-End Infrastructure." Co-authored by Soundverse Founder and CEO Sourabh Pateriya, Co-Founder and CTO Riley Williams, and Board Member Linnea Sundberg, the document isn't a marketing piece. It's a position paper that lays out exactly how AI music can work without exploiting the people who make music.

Who this is for: If you're an artist hearing a lot about AI music and wondering whether it's a threat, a tool, or something you should ignore, this post breaks down what's at stake and why Soundverse's approach is designed to protect you. If you're a label executive, music publisher, rights organization, regulator, or technologist making decisions about AI music, this framework offers a concrete path forward.

The full whitepaper is available at soundverse.ai/whitepaper.

The Core Problem

Most AI music companies built their systems the same way: scrape a bunch of songs off the internet, train an AI on them, and hope nobody notices. When they do get noticed, they claim it's "fair use" or offer artists a one-time payment and call it a day.

Soundverse's whitepaper says this approach is broken for three reasons.

First, training on music without permission is simply taking someone's work. It's the digital equivalent of sampling a song without clearing it, except it's happening at massive scale.

Second, one-time payments disconnect artists from the future. Imagine selling all rights to your entire catalog for a flat fee today, and then watching AI companies make billions using your music to train their systems. That's not a partnership, that's a raw deal.

Third, when there's no way to trace which songs influenced an AI's output, there's no way to pay anyone fairly. It all becomes a black box. The AI makes music, someone profits, and the original artists who made it possible get nothing.

Here's what that means for you: when AI learns your style, it's learning something with economic value. Your style becomes substitutable. It can displace your work in the market even if no individual song is copied. That's why attribution and compensation matter even when nothing is being directly reproduced.

The whitepaper calls this scrape-and-hope approach "strategically fragile." It might work in the short term, but it won't survive regulation, and it definitely won't survive artists and labels deciding they've had enough.

[Image: whitepaper landing page]

What Soundverse Proposes Instead

The document introduces what it calls a six-stage framework. Think of it like this: if AI music is a pipeline, there are six major checkpoints where things can either go wrong or go right. Each stage represents a place where artists either lose control and compensation, or gain visibility and fair payment.

[Image: six-stage framework diagram]

Stage 1 - Model Creation (Where the Music Comes From): Instead of scraping, get permission. Build partnerships with artists who voluntarily contribute their music to train the AI, and make sure they know exactly what they're signing up for. This addresses the manual tagging bottleneck, broken attribution, and the lack of clear paths for rights-holders to connect their catalogs and define license rules.

Stage 2 - Application Layer (How the AI Uses It): Create systems where artists can train private AI models on their own music while keeping their data safe. Soundverse calls these "DNA models." Think of them as personalized AI versions of an artist's style that the artist controls and profits from. This stage ensures license tracking flows through the AI workflow, uploads are scanned for copyrighted material, and artists can monetize their style, not just their tracks.
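
As a rough sketch of what "artist-controlled" could mean in code, here's a hypothetical data structure for a DNA model record. The names and fields are illustrative assumptions, not Soundverse's actual schema:

```python
from dataclasses import dataclass, field


@dataclass
class LicenseRule:
    """Usage terms an artist attaches to their DNA model (hypothetical)."""
    commercial_use: bool          # may generated outputs be sold?
    price_per_generation: float   # fee charged each time the model is used
    attribution_required: bool    # must outputs credit the artist?


@dataclass
class DnaModel:
    """An artist-controlled style model trained only on that artist's catalog."""
    artist_id: str
    source_track_ids: list[str] = field(default_factory=list)
    license: LicenseRule = field(
        default_factory=lambda: LicenseRule(False, 0.0, True))

    def authorize(self, commercial_request: bool) -> bool:
        """Gate a generation request against the artist's license rules."""
        return self.license.commercial_use or not commercial_request
```

The point of the structure is that licensing terms travel with the model itself, so every generation request can be checked against the artist's rules before any audio is produced.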

Stage 3 - Inference (When the AI Creates Something): Track what influenced each output. If an AI generates a song and it was influenced by three different tracks from the training data, log that. Make it transparent. This isn't guesswork. It works like credit allocation in collaborative writing or royalties in sampling. The technical methods that make this possible include influence functions and embedding analysis, and the whitepaper is honest about what works now and what still needs research.
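
To make the idea concrete, here's a toy sketch of embedding-based attribution: compare an output's embedding to each training track's embedding and normalize the similarities into loggable shares. It's a simplified stand-in for the methods the whitepaper covers, not their implementation:

```python
import numpy as np


def influence_shares(output_emb: np.ndarray,
                     catalog_embs: dict[str, np.ndarray]) -> dict[str, float]:
    """Score each training track's similarity to the output and normalize
    the scores into influence shares that sum to 1, ready for logging."""
    sims = {}
    for track_id, emb in catalog_embs.items():
        cos = float(output_emb @ emb /
                    (np.linalg.norm(output_emb) * np.linalg.norm(emb)))
        sims[track_id] = max(cos, 0.0)  # ignore dissimilar (negative) scores
    total = sum(sims.values()) or 1.0   # avoid division by zero
    return {tid: s / total for tid, s in sims.items()}
```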

Stage 4 - Export (When Someone Downloads It): Embed information into the audio file itself so that even after it leaves the platform, you can still trace its origins. Add a digital signature that includes licensing metadata and proof of provenance, creating a clear trail that survives beyond the platform.
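
One hypothetical way to build such a record, using only Python's standard library. A production system would more likely use public-key signatures and embed the result as a metadata tag or inaudible watermark:

```python
import hashlib
import hmac
import json


def signed_provenance(track_id: str, influences: dict[str, float],
                      license_id: str, secret_key: bytes) -> bytes:
    """Serialize a provenance record and sign it so tampering is
    detectable after the file leaves the platform."""
    record = {
        "track_id": track_id,
        "influences": influences,   # the Stage 3 influence log
        "license_id": license_id,
    }
    payload = json.dumps(record, sort_keys=True).encode()
    signature = hmac.new(secret_key, payload, hashlib.sha256).hexdigest()
    return json.dumps({"record": record, "signature": signature}).encode()
```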

Stage 5 - External Audio (Scanning the Wild): Build infrastructure to compare AI outputs against existing catalogs at scale. Check AI-generated music to make sure nothing slipped through that shouldn't have. If something's too similar to an existing song, flag it, block it, or route it for payment. This protects rights-holders from infringement and prevents confusingly similar works from flooding the market.
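
Conceptually, this is a nearest-neighbor check against the catalog. A minimal sketch, assuming precomputed embeddings and an illustrative flagging threshold:

```python
import numpy as np

SIMILARITY_FLAG = 0.92  # illustrative threshold, not a value from the whitepaper


def scan_against_catalog(output_emb: np.ndarray,
                         catalog_embs: dict[str, np.ndarray]) -> list[str]:
    """Return catalog tracks suspiciously close to the output; flagged
    tracks would be blocked or routed for licensing review."""
    out = output_emb / np.linalg.norm(output_emb)
    return [
        track_id for track_id, emb in catalog_embs.items()
        if float(out @ (emb / np.linalg.norm(emb))) >= SIMILARITY_FLAG
    ]
```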

Stage 6 - Compensation (Paying People Fairly): Instead of one-time buyouts, pay artists ongoing royalties based on how much their music actually influenced what the AI created. The more your music shapes the outputs, the more you earn, and it keeps happening over time. This stage differentiates between flat payments, profit-linked royalties, recurring payments, and zero compensation, advocating for the model that keeps artists connected to long-term value.

The framework is detailed, but the heart of it is straightforward: if you can't trace influence, you can't pay fairly, and if you can't pay fairly, the whole system is built on exploitation.

Real Examples

This isn't just theory. Soundverse has been building tools around this framework.

Take Melody to Song. It's a feature that lets someone hum a melody and turn it into a full track with vocals, drums, bass, and everything else. Or MIDI to Song, which takes a simple MIDI file and builds a complete song around it in minutes.

[Image: Melody to Song]

These tools show what AI can do. They're fast, they're powerful, and they lower the barrier for anyone who wants to create music. But they also raise an important question: if the AI is trained on thousands of songs to know how to fill in drums, bass, vocals, and production, shouldn't the people who made those songs share in the value that creates?

That's where Soundverse DNA comes in. It's a way for artists to train an AI model on their own catalog, set a price for using it, and earn money every time someone generates music with it. Instead of losing control to AI, artists gain a new way to monetize their style while staying connected to what gets created.

[Image: DNA model-based singing voice generation]

What the Pilot Program Revealed

In April 2024, Soundverse ran a pilot with 50 creators to test how this all works in practice. The lessons are worth paying attention to.

Creators wanted transparency. They didn't just want to be told "you'll get paid eventually." They wanted to see in real time when their music influenced something and how much they'd earn from it. Soundverse built dashboards to show that.

[Image: partner program landing page]

Quality mattered more than quantity. Some creators uploaded hundreds of tracks thinking more data meant more money. But if the audio quality was poor, it didn't help the AI and didn't lead to payouts. It's not about flooding the system, it's about contributing music that's actually useful.

Thresholds mattered. Without setting a minimum level of influence, payouts became a mess of tiny fractions that didn't make sense to anyone. With clear thresholds, the system worked. If your music influenced a track by 5% or more, you got paid. If it was 0.03%, you didn't, because tracking and paying out that level of micro-contribution isn't practical or meaningful.
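
Based on the figures above, the payout rule reduces to a small piece of logic. This sketch is hypothetical code, not Soundverse's implementation:

```python
MIN_INFLUENCE = 0.05  # the 5% payout floor the pilot settled on


def payouts(influences: dict[str, float], revenue: float) -> dict[str, float]:
    """Drop sub-threshold slivers (e.g., 0.03%) and split revenue among the
    contributors that clear the floor, renormalizing their shares (one
    possible design choice; the pilot's exact formula isn't published here)."""
    eligible = {tid: s for tid, s in influences.items() if s >= MIN_INFLUENCE}
    total = sum(eligible.values())
    if total == 0:
        return {}
    return {tid: revenue * s / total for tid, s in eligible.items()}
```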

Most importantly, creators strongly preferred ongoing royalties over one-time payments. They compared it to how performance rights organizations like ASCAP or BMI work, where you keep earning as your music gets used. The difference here is the level of detail: instead of broad, opaque royalty pools, the system could show exactly which tracks influenced which outputs.

Why This Matters for You

The music industry is at a turning point. AI music is already here. It's not going away. The question is whether it will be built on scraping and exploitation or on consent and compensation.

For emerging artists, this matters because it means AI could become a tool that helps rather than replaces. Instead of competing with an AI trained on your work without permission, you could train an AI on your own work and use it to speed up your process. Instead of worrying about AI flooding streaming platforms with cheap soundalikes, you could get paid when your style influences what gets created.

For labels and rights-holders, the cost of inaction is real. If you wait six to twelve months, AI lookalikes will flood channels, style-level substitution will erode your catalog's share of royalty pools, and you'll lack the provenance data needed for strong enforcement. Without influence logs and similarity scanning, you'll be forced into generic dataset-wide settlements instead of track-level claims, shifting leverage away from you.

The whitepaper argues the industry doesn't have to wait for courts to settle this. There's a better way to build AI music from the start: with attribution baked in, with licensing built into the system, and with compensation that reflects ongoing use instead of one-off deals.

The document is also a challenge to other AI companies: if you can't explain where your outputs come from, how you're paying the people who made your training data possible, and how you're preventing infringement, you're not ready for what's coming.

What's in the Document

The full whitepaper is available at soundverse.ai/whitepaper. It's 50 pages, and yes, parts of it are technical. The paper goes deep into the technical methods that make attribution possible, including influence functions, dynamic time warping, and embedding analysis, and is honest about what works and what still needs research. But there are also plain-language explanations of the problems, real examples from the pilot program, and frank discussions of trade-offs and constraints.

The document is written for multiple audiences. For label executives, music publishers, and rights organizations, it provides concrete infrastructure for catalog protection, provenance tracking, and revenue capture. For regulators and technologists, it offers a legally aligned framework that addresses substantial similarity standards and copyright compliance. And for artists, it translates complex technical concepts into practical implications about control, compensation, and creative empowerment.

The Bigger Picture

At the end of the day, this whitepaper is making one central argument: AI music can scale, but only if the people whose music makes it possible remain visible, valued, and compensated.

It's not anti-AI. The document is clear that AI will accelerate creativity, lower barriers, and create new markets. But it argues that those benefits should be shared, not extracted. Artists should be at the center of the ecosystem, not invisible fuel for someone else's product.

The framework Soundverse is proposing treats attribution like infrastructure. Not a nice-to-have feature, not a PR move, but core functionality, the same way payment systems are core to fintech or logging is core to software.

What this means for you is simple: you get to decide whether AI uses your work, you get visibility into how it's used, and you get compensated fairly when it happens. That's the difference between being displaced by technology and being empowered by it.

If the industry adopts something like this, AI music could become a genuine collaboration between human creativity and machine capability. If it doesn't, the current path leads to more lawsuits, more mistrust, and a system where innovation and fairness are constantly at odds.

The whitepaper is a blueprint for the first option. Whether the industry takes it is another question entirely.

Ready to explore an AI music future built on consent, attribution, and fair compensation? Read the Soundverse whitepaper and see how ethical AI music can scale without leaving artists behind.



By Sourabh Pateriya
