The Music-Generative Open AI (MusGO) framework is a community-driven framework designed to assess the openness of music-generative models. With a collaborative approach, it invites contributions from researchers and artists, supports public scrutiny, and enables tracking of model evolution to promote transparency, accountability, and responsible development.

This website builds on the paper “MusGO: A Community-Driven Framework for Assessing Openness in Music-Generative AI”, authored by Roser Batlle-Roca, Laura Ibáñez-Martínez, Xavier Serra, Emilia Gómez, and Martín Rocamora. It serves not only as a companion to the publication, but also as a living resource, which is continuously updated and shaped by contributions from the community.

Read the paper | GitHub Repository | Detailed Criteria | How to contribute? | MIR Survey Results | Help us improve!

Openness Leaderboard

How to interpret this table? The MusGO framework consists of 13 dimensions of openness, divided into essential (1–8) and desirable (9–13) categories. Essential categories follow a three-level scale: ✔︎ open, ~ partial, or closed. Desirable categories are binary, indicating whether an element exists (⭐) or not.

Models are ordered using a weighted openness score (O), based on the essential categories (E) and normalised to a 100-point scale. In line with the survey findings, the three most relevant categories (E1, E2, and E3, each with M = 5) are weighted twice as much as the others. Note that the score is used for ordering purposes only; we do not intend to reduce openness to a single value. When models achieve the same score, ties are broken by the number of fulfilled desirable categories.
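To make the ordering concrete, here is a minimal sketch of how such a score could be computed. The numeric mapping of the three-level scale (open = 1, partial = 0.5, closed = 0) and all names in the snippet are our own assumptions for illustration, not the official MusGO implementation.

```python
# Illustrative sketch only: the {open: 1, partial: 0.5, closed: 0} mapping and
# the doubling of E1-E3 follow the description above, but names and details
# are assumptions, not the project's actual code.
RATING = {"open": 1.0, "partial": 0.5, "closed": 0.0}

def openness_score(essential):
    """essential: list of 8 ratings for categories E1-E8, in order."""
    values = [RATING[r] for r in essential]
    weights = [2, 2, 2, 1, 1, 1, 1, 1]           # E1-E3 count twice
    raw = sum(w * v for w, v in zip(weights, values))
    return 100 * raw / sum(weights)               # normalise to a 100-point scale

# Example: a model that is open in every category except a partial E8
print(openness_score(["open"] * 7 + ["partial"]))  # -> 95.45...
```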

Each cell in the leaderboard includes interactive elements.

For a detailed breakdown of each model’s evaluation, you can explore its corresponding YAML file in the project folder.
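As a hypothetical sketch of how such a file could be inspected programmatically, the snippet below loads one model's evaluation and prints its ratings; the file path and the field names ("essential", "desirable") are assumptions for illustration and may differ from the actual YAML schema in the repository.

```python
# Hypothetical sketch: the path and field names are assumptions; check the
# repository's YAML files for the actual schema.
import yaml  # pip install pyyaml

with open("projects/stable-audio-open.yaml") as f:
    evaluation = yaml.safe_load(f)

# Print every category with its recorded rating.
for group in ("essential", "desirable"):
    for category, entry in evaluation.get(group, {}).items():
        print(f"{group}/{category}: {entry}")
```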


Essential categories (1–8): Source code, Training data, Model weights, Code documentation, Training procedure, Evaluation procedure, Research paper, and License. Desirable categories (9–13): Model card, Datasheet, Package, UX application, and Supplementary material page.

| Project | Year | Organisation(s) | Essential category ratings (✔︎ open, ~ partial) |
| --- | --- | --- | --- |
| Stable Audio Open | 2024 | Stability AI | ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ~ |
| MusicGen | 2023 | Meta AI | ✔︎ ~ ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ |
| SongGen | 2025 | Beihang University, Shanghai AI Laboratory, The Chinese University of Hong Kong, Harbin Institute of Technology, CPII (InnoHK) | ✔︎ ~ ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ |
| GANsynth | 2019 | Google Magenta | ✔︎ ~ ✔︎ ✔︎ ✔︎ ~ ✔︎ ✔︎ |
| Musika | 2022 | Johannes Kepler University Linz | ✔︎ ~ ✔︎ ✔︎ ✔︎ ~ ✔︎ ✔︎ |
| MusicLDM | 2023 | University of California San Diego, Mila-Quebec Artificial Intelligence Institute, University of Surrey, LAION | ~ ~ ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ~ |
| VampNet | 2023 | Northwestern University and Descript Inc. | ✔︎ ✔︎ ✔︎ ✔︎ ~ ✔︎ ✔︎ |
| Jukebox | 2020 | OpenAI | ✔︎ ~ ✔︎ ✔︎ ✔︎ ~ ~ ~ |
| RAVE | 2021 | IRCAM | ✔︎ ✔︎ ✔︎ ✔︎ ✔︎ ~ ~ |
| AFTER | 2024 | IRCAM | ✔︎ ~ ✔︎ ✔︎ ~ ~ ~ ~ |
| YuE (乐) | 2025 | The Hong Kong University of Science and Technology and Multimodal Art Projection (M-A-P) | ~ ✔︎ ✔︎ ✔︎ ✔︎ ~ ✔︎ |
| Moûsai | 2023 | ETH Zürich, IIT Kharagpur, Max Planck Institute | ~ ~ ~ ✔︎ ~ ✔︎ ✔︎ |
| Diff-A-Riff | 2024 | Sony Computer Science Laboratories Paris and Queen Mary University of London | ✔︎ ~ ✔︎ |
| Music ControlNet | 2023 | Carnegie Mellon University and Adobe Research | ✔︎ ~ ✔︎ |
| MusicLM | 2023 | Google Research and IRCAM | ~ ~ ~ ~ |
| Noise2Music | 2023 | Google Research | ~ ~ ~ ~ |
| DITTO-2 | 2024 | University of California San Diego and Adobe Research | ~ ~ ✔︎ |
| MeLoDy | 2023 | ByteDance | ~ ~ ✔︎ |

Key Findings

Limitations

Disclaimer: Future Developments 🚧

The MusGO framework is a living resource, developed through community collaboration and currently focused on assessing openness in music-generative AI. We are actively exploring complementary perspectives and refinements to expand its scope and adaptability, aiming to better reflect the diverse ways in which music-generative systems can be understood, accessed, and used responsibly.

Updates will be shared once ready for community feedback.

Acknowledgments

This site is an adapted version of https://opening-up-chatgpt.github.io/. We are deeply grateful to the original creators, Andreas Liesenfeld, Alianda Lopez, and Mark Dingemanse, for their groundbreaking work on openness, transparency, and accountability in generative AI, which has inspired and shaped this project.

For more details, please refer to their papers:

This work has been supported by IA y Música: Cátedra en Inteligencia Artificial y Música (TSI-100929-2023-1), funded by the Secretaría de Estado de Digitalización e Inteligencia Artificial and the European Union-Next Generation EU, and IMPA: Multimodal AI for Audio Processing (PID2023-152250OB-I00), funded by the Ministry of Science, Innovation and Universities of the Spanish Government, the Agencia Estatal de Investigación (AEI) and cofinanced by the European Union.

We thank our colleagues at the Music Technology Group at Universitat Pompeu Fabra for their thoughtful insights, constructive discussions, and active engagement throughout the development of this work.