Here we describe in full detail the Music-Generative Open AI (MusGO) framework: a community-driven framework for assessing openness in music-generative AI models. These detailed criteria are linked to the research article “MusGO: A Community-Driven Framework for Assessing Openness in Music-Generative AI”, authored by Roser Batlle-Roca, Laura Ibáñez-Martínez, Xavier Serra, Emilia Gómez, and Martín Rocamora.

The MusGO framework consists of 13 dimensions of openness, distinguishing between essential (1–8) and desirable (9–13) categories. Essential categories follow a three-level scale: open (✔︎), partial (~), or closed. Desirable categories are binary, indicating whether an element exists (⭐) or not.
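As an illustration only (not part of the official framework), a model's evaluation under this scheme could be recorded with a minimal data structure like the following sketch. The category names mirror dimensions 1–13 below; the string values `"open"`/`"partial"`/`"closed"`, the class name `MusGOEvaluation`, and the `summary` helper are assumptions introduced here for clarity.

```python
from dataclasses import dataclass

# The 8 essential categories use a three-level scale: "open", "partial", "closed".
ESSENTIAL = [
    "Open source code", "Training data", "Model weights",
    "Code documentation", "Training procedure", "Evaluation procedure",
    "Research paper", "Licensing",
]

# The 5 desirable categories are binary: True if the element exists (⭐).
DESIRABLE = [
    "Model card", "Datasheet", "Package",
    "User-oriented application", "Supplementary material page",
]

@dataclass
class MusGOEvaluation:
    essential: dict  # category -> "open" | "partial" | "closed"
    desirable: dict  # category -> bool

    def validate(self) -> None:
        # Every category must be scored, and essential scores must be valid levels.
        assert set(self.essential) == set(ESSENTIAL)
        assert set(self.desirable) == set(DESIRABLE)
        assert all(v in {"open", "partial", "closed"}
                   for v in self.essential.values())

    def summary(self) -> dict:
        # Count fully open and partially open essential categories,
        # plus the number of desirable elements present.
        return {
            "open": sum(v == "open" for v in self.essential.values()),
            "partial": sum(v == "partial" for v in self.essential.values()),
            "stars": sum(self.desirable.values()),
        }
```

For example, a maximally open model would score 8 open essential categories and 5 stars:

```python
ev = MusGOEvaluation(
    essential={c: "open" for c in ESSENTIAL},
    desirable={c: True for c in DESIRABLE},
)
ev.validate()
print(ev.summary())  # {'open': 8, 'partial': 0, 'stars': 5}
```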

Read the paper | GitHub Repository | Openness Leaderboard | Evaluation Checklist | How to contribute? | Help us improve!

Essential

[1] Open source code.

Is the system source code, including the data processing, model architecture, training pipeline, and inference, openly available for inspection and use?

[2] Training data.

Is the training data of the model fully described, including sources, acquisition methods, and access conditions? Is training data available for inspection?

[3] Model weights.

Are the complete model weights (of the production-ready model) fully shared and accessible for inspection and use?

[4] Code documentation.

Is the codebase accompanied by sufficient and complete documentation to allow for its replication, extension, or modification?

[5] Training procedure.

Is the training procedure of the system fully documented to allow for replication and understanding of the system?

[6] Evaluation procedure.

Is the evaluation procedure of the system fully documented to support reproducibility of evaluation results and performance?

[7] Research paper.

Is there a publicly available and accessible research paper, or alternative technical report, that provides an overview of the introduced model? Is it peer-reviewed by an external group of reviewers?

[8] Licensing.

Are the system and its components licensed under a clear and adequate open framework, such as an OSI-approved license, a RAIL license, or another context-appropriate license?

Desirable

[9] Model card.

Is a model card or equivalent documentation provided?

[10] Datasheet.

Does the model include datasheets or equivalent documentation that provide a systematic and standardized account of the data used for training?

[11] Package.

Is the model released as an indexed software package or provided through an equivalent developer-oriented solution?

[12] User-oriented application.

Is the model accessible via a user-oriented interface, such as an API or a UX tool for creative contexts?

[13] Supplementary material page.

Is there a demo page that showcases the model's capabilities by providing audio examples of its generated outputs?