By Sean Keenan, Founder, Soundboard Consulting
At this point in the conversation around AI and music, the adage “it’s better to ask for forgiveness than permission” is more contentious than helpful.
In a recent Variety guest column, Victoria Neuman highlights the challenges of both the permission and the forgiveness approaches. Ultimately, both lead to the same destination: licensing. She rightly points out that the "forgiveness" route is far more expensive, hostile and complex. Yet as AI innovation expands, the tension between technical advancement and artistic respect remains, for now at least, fundamentally a question of licensing.
Licensing music for any purpose can be daunting – let alone for AI. Ensuring your AI systems are legally compliant calls for a sound strategy and an understanding of how your technologies or products will actually use music, for what purposes, and the scope and nature of the rights involved.
Fragmentation
Currently, there’s no unified regulatory framework for licensing music for AI, making an already complex system more fragmented. Instead, individual deals are being struck between those companies using AI and content owners, with some collective licensing happening at the private dataset provider level.
This trend is not only unsustainable but also disadvantages AI companies, artists and music rights holders. Individual deals don’t scale and are confidential, making it harder to establish common licensing models or terms.
While licensing can feel restrictive – and even provoke hostility from some – it remains the primary system available to music technology companies to legally train AI systems. In the long term, efforts to develop alternative, thoughtful and novel systems may lead to more efficient and equitable fee structures.
It is even possible that copyright could be redefined and reimagined far beyond its current recognition (more on that here).
The harsh reality
Music licensing is inherently complex because it involves two distinct copyrights: one for the composition (or musical work), consisting of the melody, lyrics, chord progression, etc., and another for the sound recording (the master). As a result, securing permission from rights holders for any project (let alone for an AI system) and negotiating terms is a time-consuming and often costly process, especially for startups and smaller tech companies that lack the resources of larger organisations like OpenAI.
MIDiA music analyst Tatiana Cirisano notes in a post that "many music-tech startups end up using unlicensed music for their products" because they either can't afford the licences offered by big labels or struggle to get responses from publishers.
Forbes reports that the costs associated with resolving potential legal liabilities for failing to obtain such licences are now actually being factored into Wall Street valuations. According to Morgan Stanley analysts, settling disputes and then ultimately undertaking licensing initiatives will cost billions. It is therefore unsurprising that high-profile licensing deals continue to dominate headlines as demand for music in AI soars. In certain instances, however, it may even be impossible for companies to obtain "forgiveness" from rights holders if their underlying business model is perceived as an existential threat to the licensors' interests.
Getting data
In a previous post for MTUK, I highlighted the expanding market for copyright-cleared datasets for use in AI (here). This market not only continues to grow but also offers one of the simplest ways for AI companies to access music for training purposes.
A great example of this is the Dataset Providers Alliance (DPA), which includes Rightsify's Global Copyright Exchange (GCX) and offers a bridge between dataset providers, rights owners and AI innovators to facilitate licensing.
On the CMO and PRO side, many societies have released statements and principles outlining how AI companies should engage ethically with their members and artists when using music (see PRS for Music's principles here). The German CMO and PRO, GEMA, has even released a two-component revenue-sharing licensing model as part of its approach to licensing its members' catalogues to AI companies (model details).
Practical steps
Companies developing or using AI should take proactive steps to manage their activities and build internal frameworks for managing risks around the use of copyright-protected works.
Here are some key actions to consider:
- Understand your position in the market. Identify where your AI system or tool fits within the music tech landscape. If similar or compatible solutions already exist, consider partnering or licensing rather than building from scratch, which could save significant time, effort and money. AI algorithms require substantial care and resources, and many are already available to companies and developers through organisations like Music.AI.
- Identify relevant rights. Research and understand which music rights your AI system or tool implicates. Objectively assess the risks of using music, considering the technical and operational requirements of your AI. Do not underestimate the complexity involved in developing and maintaining a copyright-compliant AI system. Each step, from data ingestion to output generation, may bring different copyrights into scope.
- Conduct data mapping. This exercise can help determine exactly how music will be used in training data, within the product itself, and how users interact with it. This will help you navigate the licensing process.
- Adopt a compliance-first mindset. Stay alert to legal and regulatory developments and embed compliance practices within your internal governance, processes, policies, protocols and training.
- Foster industry collaboration. Build relationships with the music industry based on shared goals and principles to help align your practices with industry standards and expectations.
- Explore all options. Research and engage all legal music licensing opportunities, including direct and collective licensing, but don’t forget the viable alternatives identified in my previous post.
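To make the data-mapping step above concrete, it can be approached as a simple rights inventory over your training corpus. The sketch below is purely illustrative – the `TrackRecord` fields, the use labels and the `unlicensed_for` helper are hypothetical names, not an industry schema – but it shows the core idea of tracking both copyrights (master and composition) per track and flagging gaps before a given use.

```python
from dataclasses import dataclass, field

# Hypothetical record for one audio asset in a training corpus.
# Both copyrights are tracked separately, mirroring the master /
# composition split described above. Field names are illustrative.
@dataclass
class TrackRecord:
    title: str
    master_owner: str       # sound-recording (master) rights holder
    composition_owner: str  # musical-work (publishing) rights holder
    licensed_uses: set = field(default_factory=set)  # e.g. {"training", "output"}

def unlicensed_for(tracks, use):
    """Return the tracks whose licence does not cover the given use."""
    return [t for t in tracks if use not in t.licensed_uses]

corpus = [
    TrackRecord("Song A", "Label X", "Publisher Y", {"training"}),
    TrackRecord("Song B", "Label Z", "Publisher Y", set()),
]

# "Song B" has no licence covering model training, so it is flagged.
gaps = unlicensed_for(corpus, "training")
print([t.title for t in gaps])  # ['Song B']
```

Even a lightweight inventory like this makes the later licensing conversation easier: you can state exactly which rights holders you need to approach and for which uses.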
Final thoughts
As much as it might pain some, advocating for stronger regulation of AI development rather than self-regulation may be the only viable way forward. Only when there is a clear, singular legal framework governing AI training data collection, usage and compensation for training data will there be clarity and equity. Sustainable AI development rests on a mutually beneficial contract between AI companies, artists and music rights holders, based on fair usage terms and fair compensation.
But for now we're likely stuck with an inconsistent and inequitable two-layer licensing system: direct deals for the majors and collective licensing for smaller or individual rights holders.