The Music-Generative Open AI (MusGO) framework is a community-driven effort to assess openness in music-generative models. Through its collaborative approach, it invites contributions from researchers and artists, supports public scrutiny, and enables tracking of model evolution to promote transparency, accountability, and responsible development.

This website builds on the paper “MusGO: A Community-Driven Framework for Assessing Openness in Music-Generative AI”, authored by Roser Batlle-Roca, Laura Ibáñez-Martínez, Xavier Serra, Emilia Gómez, and Martín Rocamora. It serves not only as a companion to the publication, but also as a living resource, continuously updated and shaped by contributions from the community.

Read the paper | GitHub Repository | Detailed Criteria | How to contribute? | MIR Survey Results | Help us improve!

Openness Leaderboard

How to interpret this table? The MusGO framework consists of 13 dimensions of openness, distinguishing between essential (1–8) and desirable (9–13) categories. Essential categories follow a three-level scale: open (✔︎), partial (~), or closed (left blank). Desirable categories are binary, indicating whether an element exists (⭐) or not.

Models are ordered using a weighted openness score (O), based on the essential categories (E1–E8) and normalised to a 100-point scale. Following the survey findings, the three most relevant categories (E1, E2 and E3, each with a median importance rating of 5) are weighted twice as much as the others. Note that the score is used for ordering purposes only; we do not intend to reduce openness to a single value. When models achieve the same score, the order is determined by the highest number of fulfilled desirable categories.
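As a rough formalisation of this description (the numeric mapping open = 1, partial = 0.5, closed = 0 is our assumption, not notation from the paper), the score of a model with essential-category values $e_1, \dots, e_8$ can be written as:

$$O = \frac{100}{11}\left(2\sum_{i=1}^{3} e_i + \sum_{i=4}^{8} e_i\right), \qquad e_i \in \{0,\, 0.5,\, 1\}.$$

The denominator 11 is the maximum attainable weighted sum (2 × 3 + 5), so a fully open model scores 100; for instance, a model that is open in every essential category except for partial training data (E2) would score 100 × 10/11 ≈ 91.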

Each cell in the table includes interactive elements. For a detailed breakdown of each model’s evaluation, you can explore its corresponding YAML file in the project folder.
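As a minimal sketch of how such a file might be consumed, the snippet below computes the weighted score described above. This is a sketch under assumptions: the schema, the `essential` key, the status strings, and the file path are hypothetical; check the repository for the actual YAML format.

```python
import yaml  # requires PyYAML (pip install pyyaml)

# Our assumed numeric reading of the three-level scale:
# open = 1, partial = 0.5, closed = 0.
SCORES = {"open": 1.0, "partial": 0.5, "closed": 0.0}


def openness_score(path: str) -> float:
    """Compute the weighted openness score O from a model's YAML file.

    Assumes a hypothetical schema in which the eight essential
    categories (E1-E8) appear under an `essential` key as a list of
    status strings, in category order.
    """
    with open(path) as f:
        record = yaml.safe_load(f)

    e = [SCORES[status] for status in record["essential"]]  # E1..E8
    # Survey-informed weighting: E1-E3 (source code, training data,
    # model weights) count twice as much as E4-E8.
    weighted = 2 * sum(e[:3]) + sum(e[3:])
    return 100 * weighted / 11  # normalise so a fully open model scores 100


print(openness_score("models/musicgen.yaml"))  # hypothetical path
```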


Essential categories (E1–E8): Source code, Training data, Model weights, Code documentation, Training procedure, Evaluation procedure, Research paper, License. Desirable categories (D9–D13): Model card, Datasheet, Package, UX application, Supplementary material page.

| Project | Developer(s) | Essential openness marks (✔︎ open, ~ partial) |
| --- | --- | --- |
| MusicGen | Meta AI | ✔︎ ~ ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ |
| Stable Audio Open | Stability AI | ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ~ ~ |
| JASCO | The Hebrew University of Jerusalem, Meta AI | ✔︎ ~ ✔︎ ✔︎ ~ ✔︎ ✔︎ ✔︎ |
| GANsynth | Google Magenta | ✔︎ ~ ✔︎ ✔︎ ✔︎ ~ ✔︎ ✔︎ |
| Musika | Johannes Kepler University Linz | ✔︎ ~ ✔︎ ✔︎ ✔︎ ~ ✔︎ ✔︎ |
| MusicLDM | University of California San Diego, Mila-Quebec Artificial Intelligence Institute, University of Surrey, LAION | ~ ~ ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ~ |
| VampNet | Northwestern University and Descript Inc | ✔︎ ✔︎ ✔︎ ✔︎ ~ ✔︎ ✔︎ |
| Jukebox | OpenAI | ✔︎ ~ ✔︎ ✔︎ ✔︎ ~ ~ ~ |
| RAVE | IRCAM | ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ~ ~ |
| Moûsai | ETH Zürich, IIT Kharagpur, Max Planck Institute | ~ ~ ~ ✔︎ ~ ✔︎ ✔︎ |
| Diff-A-Riff | Sony Computer Science Laboratories Paris and Queen Mary University of London | ✔︎ ~ ✔︎ |
| Music ControlNet | Carnegie Mellon University and Adobe Research | ✔︎ ~ ✔︎ |
| MusicLM | Google Research and IRCAM | ~ ~ ~ ~ |
| Noise2Music | Google Research | ~ ~ ~ ~ |
| DITTO-2 | University of California San Diego and Adobe Research | ~ ~ ✔︎ |
| MeLoDy | ByteDance | ~ ~ ✔︎ |

Acknowledgments

This site is an adapted version of https://opening-up-chatgpt.github.io/. We are deeply grateful to the original creators, Andreas Liesenfeld, Alianda Lopez, and Mark Dingemanse, for their groundbreaking work on openness, transparency, and accountability in generative AI, which has inspired and shaped this project.

For more details, please refer to their papers:

- Liesenfeld, A., Lopez, A., & Dingemanse, M. (2023). Opening up ChatGPT: Tracking openness, transparency, and accountability in instruction-tuned text generators. In Proceedings of the 5th International Conference on Conversational User Interfaces (CUI '23). ACM.
- Liesenfeld, A., & Dingemanse, M. (2024). Rethinking open source generative AI: Open-washing and the EU AI Act. In Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency (FAccT '24). ACM.

This work has been supported by IA y Música: Cátedra en Inteligencia Artificial y Música (TSI-100929-2023-1), funded by the Secretaría de Estado de Digitalización e Inteligencia Artificial and the European Union-Next Generation EU, and IMPA: Multimodal AI for Audio Processing (PID2023-152250OB-I00), funded by the Ministry of Science, Innovation and Universities of the Spanish Government, the Agencia Estatal de Investigación (AEI) and cofinanced by the European Union.

We thank our colleagues at the Music Technology Group at Universitat Pompeu Fabra for their thoughtful insights, constructive discussions and active engagement throughout the development of this work.