Music-Generative Open AI (MusGO) is a community-driven framework for assessing openness in music-generative models. With a collaborative approach, it invites contributions from researchers and artists, supports public scrutiny, and enables tracking of model evolution to promote transparency, accountability, and responsible development.
This website builds on the paper “MusGO: A Community-Driven Framework for Assessing Openness in Music-Generative AI”, authored by Roser Batlle-Roca, Laura Ibáñez-Martínez, Xavier Serra, Emilia Gómez, and Martín Rocamora.
It serves not only as a companion to the publication but also as a living resource, continuously updated and shaped by contributions from the community.
Read the paper | GitHub Repository | Detailed Criteria | How to contribute? | MIR Survey Results | Help us improve!
Openness Leaderboard
How to interpret this table? The MusGO framework consists of 13 dimensions of openness, distinguishing between essential (1–8) and desirable (9–13) categories. Essential categories follow a three-level scale: ✔︎ open, ~ partial, or ✘ closed. Desirable categories are binary, indicating whether an element is present (⭐) or not.
Models are ordered by a weighted openness score (O), computed from the essential categories (E) and normalised to a 100-point scale. Reflecting the survey findings, the three most relevant categories (E1, E2, and E3, each with M = 5) are weighted twice as much as the others. Note that the score is used for ordering purposes only; we do not intend to reduce openness to a single value. When models achieve the same score, ties are broken by the number of fulfilled desirable categories.
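As a concrete illustration of this ordering, here is a minimal sketch of the score computation. It assumes that open, partial, and closed map to 1, 0.5, and 0 respectively, and that normalisation divides by the maximum attainable weighted sum; the exact mapping used for the leaderboard may differ.

```python
# Minimal sketch of the weighted openness score O described above.
# Assumed (not spelled out on this page): open/partial/closed map to 1/0.5/0,
# and the score is normalised by the maximum attainable weighted sum.

LEVEL_POINTS = {"open": 1.0, "partial": 0.5, "closed": 0.0}

# E1-E3, the categories rated most relevant in the survey, count twice.
WEIGHTS = {f"E{i}": 2.0 if i <= 3 else 1.0 for i in range(1, 9)}


def openness_score(essential):
    """Weighted openness score O on a 100-point scale.

    `essential` maps "E1"..."E8" to "open", "partial", or "closed".
    """
    total = sum(WEIGHTS[cat] * LEVEL_POINTS[level] for cat, level in essential.items())
    return 100.0 * total / sum(WEIGHTS.values())  # max weighted sum = 2*3 + 1*5 = 11


def leaderboard_key(essential, desirable):
    """Sort key: score first, then number of fulfilled desirable categories (D9-D13)."""
    return (openness_score(essential), sum(desirable.values()))


# Hypothetical model: fully open in E1-E3, partial in E4-E8 -> (6 + 2.5) / 11 * 100 ≈ 77.3
example = {**{f"E{i}": "open" for i in range(1, 4)},
           **{f"E{i}": "partial" for i in range(4, 9)}}
print(round(openness_score(example), 1))  # 77.3
```

Sorting models by `leaderboard_key` in descending order reproduces the tie-breaking rule stated above.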
Each cell includes interactive elements (a sketch of a possible per-cell record follows this list):
- Hovering over a cell reveals a tooltip with the justification behind the assigned score.
- Clicking on a cell redirects you to the source of information or relevant supplementary material (e.g., research paper, source code, model checkpoints, etc.).
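For readers building on the underlying data, this behaviour suggests a simple per-cell record. The field names below are illustrative only and not the site's actual schema.

```python
from dataclasses import dataclass


@dataclass
class Cell:
    """One leaderboard cell (field names are illustrative, not the site's schema)."""
    category: str       # e.g. "E4" (essential) or "D10" (desirable)
    level: str          # "open" / "partial" / "closed", or "yes" / "no" for desirable
    justification: str  # text shown in the hover tooltip
    source_url: str     # link opened on click (paper, source code, checkpoints, ...)
```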
Key Findings
- The leaderboard reveals substantial variation in openness across music-generative models: the training procedure is often the most open category, while training data remains the most closed.
- Models that share source code, model weights, and documentation typically also apply open or responsible licenses, indicating a correlation across these core categories.
- Desirable categories like datasheets are less frequently fulfilled, while supplementary material pages with demos are becoming a community norm.
- The leaderboard helps identify incomplete openness claims and potential ‘open-washing’, providing clear, evidence-based signals for transparency and accountability in the field.
Limitations
- In music, assessing training data openness is challenging due to intellectual property constraints, and fully open status may rely on detailed documentation rather than direct data release.
- Hardware implications are underexplored: while some models can be trained on personal computers, others require heavy computational resources, affecting reproducibility and accessibility.
- The leaderboard does not capture ethical, societal, or creative impacts of these models, focusing strictly on openness dimensions. Yet, it does provide a foundation upon which these critical aspects can be integrated in future iterations.
Acknowledgments
This site is an adapted version of https://opening-up-chatgpt.github.io/. We are deeply grateful to the original creators, Andreas Liesenfeld, Alianda Lopez, and Mark Dingemanse, for their groundbreaking work on openness, transparency, and accountability in generative AI, which has inspired and shaped this project.
For more details, please refer to their papers:
- Liesenfeld, Andreas, Alianda Lopez, and Mark Dingemanse. 2023. “Opening up ChatGPT: Tracking Openness, Transparency, and Accountability in Instruction-Tuned Text Generators.” In CUI '23: Proceedings of the 5th International Conference on Conversational User Interfaces, July 19–21, Eindhoven. DOI: 10.1145/3571884.3604316.
- Liesenfeld, Andreas, and Mark Dingemanse. 2024. “Rethinking Open Source Generative AI: Open Washing and the EU AI Act.” In FAccT '24: Proceedings of the 2024 ACM Conference on Fairness, Accountability, and Transparency. DOI: 10.1145/3630106.3659005.
We thank our colleagues at the Music Technology Group at Universitat Pompeu Fabra for their thoughtful insights, constructive discussions and active engagement throughout the development of this work.