A Joint MTUK & Vistex Roundtable Discussion
On Tuesday 17th February, MTUK and Vistex convened a closed roundtable bringing together people working across royalties, data, AI, streaming, and rights management.
Chaired by Vistex CEO Amos Biegun, the discussion featured: Matt Amery, Feedbaq; Melanie Davis, Songtradr; Mario Forsyth, Tuned Global; Neil Gaffney, BMAT; Jamie Parmenter, Serenade; Richard Hinkley, Repped Music; Michael Linn, Creds; Emma Jepson, Rightshub; Ritch Sibthorpe, Muwzo; Kinny Ahluwalia, Musiqmesh; Sean Keenan, Soundboard; Chris Gilbert, Voice Swap; and Vittorio Lovece, Musiqmesh.
The discussion covered three broad areas: why music’s data infrastructure remains fragmented, where AI can realistically add value in rights administration, and what needs to change in the industry’s tech stack to handle the demands of 2026 and beyond.
Why data fragmentation persists
Participants broadly agreed that the fragmentation of music’s rights and metadata infrastructure is a long-standing and well-understood problem. Several had direct experience of previous attempts to address it, including the Global Repertoire Database (GRD), which was designed to create a unified, transparent trading platform for music rights but was eventually wound down.
The discussion identified several reasons why these efforts have fallen short. A key factor is the misalignment of incentives: organisations that hold data often have commercial reasons to limit access to it. There was also discussion of the role of legacy systems and the technical complexity of reconciling data across different societies, publishers and territories.
Participants noted that the core identifier infrastructure – ISRC, ISWC, IPI and others – already exists and could, in theory, support a much more connected system. The challenge is governance: each identifier tends to be controlled by a different body, with its own rules around access and usage.
One comparison raised was with the financial services industry, where banks and card networks eventually adopted shared infrastructure because the commercial case for interoperability outweighed the case for fragmentation. The question of what equivalent incentive or mechanism might work in music was discussed without a clear consensus emerging.
Data quality, fraud and works registration
The conversation turned to the practical challenges of maintaining data quality, particularly as the volume of self-released music has grown significantly. Participants described issues including duplicate ISRCs, multiple recordings sharing a single code, and fraudulent works registrations – where individuals claim ownership of works they did not create.
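By way of illustration only – not any participant's actual tooling – the sketch below shows the kind of hygiene check implied by the shared-ISRC problem. It assumes a simple list of recording records with invented field names (isrc, title, artist) and flags any ISRC that appears against more than one distinct recording.

```python
from collections import defaultdict

# Hypothetical recording records; field names and values are invented for illustration.
recordings = [
    {"isrc": "GBAAA2400001", "title": "Midnight Drive", "artist": "Artist A"},
    {"isrc": "GBAAA2400001", "title": "Blue Hour", "artist": "Artist B"},
    {"isrc": "GBAAA2400002", "title": "Paper Moons", "artist": "Artist C"},
]

def shared_isrcs(records):
    """Group records by ISRC and return codes attached to more than one distinct recording."""
    by_isrc = defaultdict(set)
    for r in records:
        # Normalise casing/whitespace so trivial variants are not counted as distinct recordings.
        key = (r["title"].strip().lower(), r["artist"].strip().lower())
        by_isrc[r["isrc"]].add(key)
    return {isrc: keys for isrc, keys in by_isrc.items() if len(keys) > 1}

if __name__ == "__main__":
    for isrc, variants in shared_isrcs(recordings).items():
        print(f"ISRC {isrc} is attached to {len(variants)} distinct recordings: {sorted(variants)}")
```

In practice the check would run across millions of rows and feed a review queue rather than a print statement, but the principle – detect collisions before they propagate into royalty processing – is the same.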
There was discussion of different approaches to verification. One participant described a model where artists can register works freely, with identity verification only required at the point of licensing. Others raised the tension between keeping registration accessible for emerging and independent artists on one hand, and implementing sufficient controls to prevent abuse on the other.
A structural point was made about how the music industry’s registration model differs from other data systems: rather than a central authority issuing identifiers and verifying claims, ownership in music is asserted by the registering party and checked only if disputed. Several participants noted this makes the system vulnerable to error and fraud in ways that are difficult to address without significant changes to how registration works.
AI in rights administration
The group discussed where artificial intelligence could realistically add value in music rights administration – specifically in back-office functions, separate from questions about AI-generated creative content.
There was general agreement that AI has genuine potential in this area. Applications discussed included identifying fragmented or unclaimed ownership across large catalogues, reconciling conflicting data between societies and publishers, and automating elements of works registration and matching. One participant described how a data task that previously required a team of analysts over an extended period could now be completed in milliseconds using machine learning.
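To make the matching task concrete, here is a deliberately simple sketch using string similarity from Python's standard library – a far cruder stand-in for the machine-learning approaches described, with invented sources, titles and writers. It pairs works from two hypothetical registration sources, accepts exact ISWC matches outright, and flags low-confidence title matches for human review.

```python
from difflib import SequenceMatcher

# Hypothetical registrations from two sources; all values are invented for illustration.
society_a = [
    {"iswc": "T-123456789-0", "title": "River Runs Cold", "writer": "J. Smith"},
    {"iswc": None, "title": "Paper Moons", "writer": "A. Jones"},
]
society_b = [
    {"iswc": "T-123456789-0", "title": "River Runs Cold (live)", "writer": "John Smith"},
    {"iswc": None, "title": "Paper Moon", "writer": "A Jones"},
]

def similarity(a, b):
    """Crude normalised similarity between two title strings."""
    return SequenceMatcher(None, a.strip().lower(), b.strip().lower()).ratio()

def match_works(source_a, source_b, threshold=0.85):
    """Pair each work in source_a with its best candidate in source_b.

    Exact ISWC matches are accepted outright; everything else falls back to
    title similarity and is flagged for review below the threshold.
    """
    results = []
    for wa in source_a:
        best, best_score = None, 0.0
        for wb in source_b:
            if wa["iswc"] and wa["iswc"] == wb["iswc"]:
                best, best_score = wb, 1.0
                break
            score = similarity(wa["title"], wb["title"])
            if score > best_score:
                best, best_score = wb, score
        results.append({
            "a": wa["title"],
            "b": best["title"] if best else None,
            "score": round(best_score, 2),
            "needs_review": best_score < threshold,
        })
    return results

if __name__ == "__main__":
    for row in match_works(society_a, society_b):
        print(row)
```

Real reconciliation systems add writer shares, territories and many more signals, but even this toy version shows where the leverage lies: confident matches can be cleared automatically, leaving human effort for the ambiguous remainder.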
However, several participants flagged a significant limitation: AI tools are only as reliable as the data they draw on. Concerns were raised about companies already using AI to map ownership and surface unclaimed royalties, but doing so by scraping publicly available platforms such as YouTube and Spotify, which are not verified copyright databases. The view expressed was that this approach risks producing inaccurate results at scale, and that investment in source data quality is a prerequisite for AI to add meaningful value.
The discussion also touched on how catalogue ownership changes – through acquisitions, transfers and splits – create additional complexity that AI systems need to account for when tracking rights over time.
The volume problem
Participants raised data volume as a distinct challenge from fragmentation. Recording music has become accessible to almost anyone, resulting in a dramatic increase in the number of works being registered and reported. Several noted that processing the resulting volume of data – including digital sales reports from DSPs, which must be processed separately for each society and territory – places significant strain on existing systems and creates barriers for smaller organisations without the infrastructure to handle it.
The point was made that larger organisations are better placed to absorb these costs, which may over time widen the gap between major players and smaller operators.
Standards and interoperability
Discussion of standards focused on the gap between having agreed formats and achieving genuine interoperability. CWR (Common Works Registration) was cited as an example: while it exists as an industry standard for works registration, there are understood to be 27 or more society-specific variations in how it is implemented, which significantly limits its usefulness as a unifying mechanism.
Participants drew comparisons with other industries where common protocols have enabled global-scale operations. Mobile telecommunications and payment networks were both mentioned. The observation was made that in those cases, interoperability was adopted because it grew the market for all participants, rather than as an act of altruism.
The role of legislation was also discussed. The MLC (Mechanical Licensing Collective) in the US was noted as an example of a mandated, transparent approach to data sharing. Participants debated whether similar legislative frameworks could or should be applied in Europe, and what the practical barriers would be – including the territorial nature of copyright law and the difficulty of mandating standards across jurisdictions.
UGC and the rights stack for new use cases
The group discussed user-generated content on platforms including TikTok, YouTube and Instagram as a specific area where the current rights infrastructure struggles. The volume of uses, the difficulty of matching UGC to underlying works, and the complexity of determining when content becomes commercial were all raised as practical problems.
Content management systems were discussed, with participants noting that existing tools such as Content ID provide a partial solution but were not designed to handle the full complexity of the current UGC landscape. The territorial nature of licensing rights was identified as a particular challenge for platforms operating globally.
A related point was made about the potential for micro-licensing and digital sync in the social media and gaming spaces. One participant described the addressable market for unlicensed music use in brand and creator content as very significant, and suggested that making licensing easier and more accessible in this space represents a substantial untapped opportunity.
Incremental improvement or structural change?
The closing discussion addressed whether the problems described require incremental improvement within the existing system or something more fundamental. Views differed.
Some participants pointed to positive trends – a gradual opening up of basic metadata, new data-sharing initiatives, and growing awareness of the costs of fragmentation – as evidence that incremental progress is being made and can continue. Others questioned whether incremental approaches can address problems that are structural in nature, and noted that previous reform efforts, including the GRD, had not succeeded despite sustained effort.
The question of what might prompt more significant change was raised. Historical precedents were discussed – the licensing agreements with iTunes and Spotify were both cited as examples of major structural shifts that occurred in response to external pressure rather than through industry-led reform. Several participants noted that the current AI moment, and its potential impact on revenue flows to rights holders, may represent a similar inflection point.
The session closed without a definitive answer to the incremental-vs-reset question, but with broad agreement that the conversation itself – bringing together people working on different parts of the same problem – was a useful starting point.