Bias is deeply ingrained in both society and music, especially against underrepresented communities. Any algorithm or data-driven AI system using data produced by such a society will inevitably inherit these biases. This is particularly relevant in digital music platforms, where minority groups remain underrepresented.
This talk examines some consequences of the use of data-driven AI in the music industry, focusing on the systemic issue of bias in digital music platforms. Current music metadata standards, such as those from DDEX, omit gender information. This omission prevents recommendation algorithms from detecting or correcting existing gender disparities, thereby amplifying the invisibility of minority-gender artists and reducing their income.
The Gender Music Tech and Gender Music Metadata projects directly address this issue by proposing an ethical, interoperable and gender-inclusive metadata system. The main contribution is to integrate structured, self-declared gender fields into industry standards, enabling fairness-aware recommendation algorithms and equitable discoverability for all creators. This is a realistic and scalable path to correcting gender imbalances.
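To make the proposal concrete, here is a minimal sketch of what a self-declared gender field and a fairness-aware post-processing step might look like. All names (`ArtistRecord`, `rerank_with_exposure_floor`, the field layout) are illustrative assumptions, not part of any DDEX standard or of the projects' actual implementation; the exposure-floor re-ranking is just one of several possible fairness interventions.

```python
from dataclasses import dataclass
from typing import Optional, List

@dataclass
class ArtistRecord:
    """Hypothetical metadata record; field names are illustrative, not DDEX."""
    artist_id: str
    name: str
    gender: Optional[str] = None  # self-declared only; None means undisclosed

def rerank_with_exposure_floor(ranked: List[ArtistRecord],
                               group: str,
                               top_k: int,
                               min_slots: int) -> List[ArtistRecord]:
    """Naive fairness-aware post-processing: promote artists whose
    self-declared gender matches `group` into the top-k until a
    minimum-exposure floor of `min_slots` is met."""
    top, rest = ranked[:top_k], ranked[top_k:]
    needed = min_slots - sum(1 for a in top if a.gender == group)
    if needed <= 0:
        return ranked  # floor already satisfied
    promotions = [a for a in rest if a.gender == group][:needed]
    rest = [a for a in rest if a not in promotions]
    # demote the lowest-ranked non-group artists to make room
    demote = [a for a in reversed(top) if a.gender != group][:len(promotions)]
    new_top = [a for a in top if a not in demote] + promotions
    return new_top + demote + rest
```

The key point is that a re-ranking like this is only possible at all if the metadata carries a structured, self-declared gender field; without it, the disparity is invisible to the system.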
More broadly, this talk opens the debate on how to fix the systemic lack of diversity, inequity and exclusion (LIE) in the design of recommendation algorithms and data-driven AI systems.