This is where we’ll discuss the Epic focused on the Artist Dashboard (aka “Upload Tool”).
The closest thing to a product description document, with some of the context and a set of user stories covering uploading (one-at-a-time and bulk), is this joint document worked up between Resonate and Kendra.io. The Artist Dashboard contains quite a lot of other features, like the artist plays and earnings summary, that have no formal requirements / product description as far as I know, though @auggod may have some design notes. There is, however, plenty of feedback on the upload tool in various places in Basecamp that could be harvested and summarised by a ‘product owner’. I can’t help thinking that the Artist Dashboard is too big to tackle in one epic; it needs to be broken down into separate product areas and tackled in stages.
Note the previous discussion on metadata services below:
…intended as an improvement on the crude IFPI lookup tool we put in place to get the historical lookups on artists appearing on earnings statement lines, for Cargo in this case.
The vast majority of artists don’t appear in the earnings yet because of the €10 de-minimis limit… but that will change, and we need to find the right ISRC / UPC codes for all the tracks that actually have them.
Thanks Nick and angus, and thanks for sharing the Kendra doc again.
This epic should be renamed - an artist dashboard is something else (a part of what we are discussing - I can send you examples). We do of course need a dashboard for artists and users - but the uploader is separate, together with the reporting, rights management and payments. These things all operate together.
I would suggest an ONBOARDING & REPORTING epic, as all these things will fall under those two categories. They can be separated into different workflows, but should be considered as one, to ensure we understand and build the thing we need in its entirety.
Doesn’t it make sense to build the artist uploader in unison with the bulk upload, not separately, as both require full metadata and essentially do the same thing?
Each should enable us to report fully to artists, labels, PROs. Handle all the international reporting data, as well as correct payments regarding splits, rights management. It must be aligned with our payment and reporting system - or built with them. It takes the work of uploads out of the hands of volunteers and puts the work & compliance in the hands of our users - as with bandcamp, spotify, soundcloud, fatdrop.
If we have the ISWC / ISRC numbers and CAE/IPI numbers - it saves us and them a lot of time. Although we need to fill in the gaps for the music we already have.
The artist/user dashboard needs to be designed properly - users love pie charts and graphs - this can be cute without being complicated. We can take a look at how other DSPs such as spotify and bandcamp do it. It will also need to control/display their biogs, pictures, links etc. as well as reporting on streams, sales, downloads, payouts etc. They need to be able to update their affiliations, contact info, payment details etc. We should probably share useful metadata on fans, territories etc. - depending on what our listeners will share with the artists that they listen to.
I shared again the example metadata sheet from our partner Kompakt, which is used by the independent digital distributor IDOL. I also corrected the field forms in accordance with this information. @angus can you share those here please?
It’s important to build with scalability in mind. I still don’t fully understand if the artist uploader that Augustine built last year can be fixed or if we should be starting completely anew - can someone clarify please?
Alongside this work, work on the player should really continue - for example the rewards system, the community aspects of the design and, the most-requested omission, downloads, plus wallet integration etc. And we need to begin the further development of the forum - privacy, collectives, artist engagement etc.
@kavan I would love to bring you into this conversation - I think your knowledge and insight is also invaluable here.
Yes, that’s why all this is together under this (admittedly imperfectly named) topic… the idea was that we continue the API strategy… tracks API v2 to be upgraded to tracks API v3, which would involve both metadata updates and more docs and general developer-friendliness, so that other tools/features/apps can exploit it in a consistent (and permissioned) way, using the User API with it… beyond just the player:
- upload tool (direct for artists) - existing @auggod - great work done to enable user api with this, but still a lot of things to work on (see the list)
- upload tool (bulk - for labels / distributors) - stalled? @richjensen what was the last action with Kendraio / Kompakt?
- catalog management tool (internal) - requirements only - see the user stories
- search tools - ongoing work by @auggod and @boopboop
- playlisting tools - great work by @boopboop
- digital sales reporting to PROs, labels and artists - spreadsheet sales reporting system creates sales reports, ISRC / UPC added post-hoc - not scalable, but all we can do at the moment until the basics of the work on the new tracks API are in hand
As you point out, beyond what we have in hand, there’s considerable work to do to specify what we need, let alone develop, migrate and test it. We need to fund and recruit developers and staff up catalogue operations people to do it. That will cost money.
For metadata, here are the details I confirmed with Hakanto last year; maybe they will help. It makes sense to do a bundle upload together. Would it make sense to build the next batch of processes on top of the hierarchy of processes for the single-track upload? Do correct me if I am wrong; this is just my opinion, and maybe we have a different workflow now.
- Artists (handling bundle details individually)
- Labels (handling bundle details segregated by sub-labels/artists)
- Distributors (handling bundle details segregated by labels/artists)

Bundle Details (release-level):
- Release Type: EP, Album or Single
- Artist Name: (option to add)
- Featured artist(s): (option to add)
- ℗ Details: Year XXXX, Company Name: XXXX
- Original Release date: calendar option

Track Details (per track, option to add):
- Version: e.g. DJ Resonate Remix
- Featuring: (option to add)
- Writers: (option to add)
- Remixers: (option to add)
- Publishers: (option to add)
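The hierarchy above could be sketched as a simple data model - purely illustrative, with field names and types that are my assumptions rather than an agreed schema:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Track:
    """Track-level details (per track, option to add)."""
    title: str
    version: Optional[str] = None              # e.g. "DJ Resonate Remix"
    featuring: List[str] = field(default_factory=list)
    writers: List[str] = field(default_factory=list)
    remixers: List[str] = field(default_factory=list)
    publishers: List[str] = field(default_factory=list)

@dataclass
class Bundle:
    """Release-level bundle details."""
    release_type: str                          # "EP", "Album" or "Single"
    artist_name: str
    featured_artists: List[str] = field(default_factory=list)
    p_line_year: Optional[int] = None          # ℗ year
    p_line_company: Optional[str] = None       # ℗ company name
    original_release_date: Optional[str] = None
    tracks: List[Track] = field(default_factory=list)
```

A distributor would then hold many such bundles segregated by label/artist, while a direct artist handles one bundle at a time.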
@kavan great! I have a naive question:
If the authoritative source of the metadata is the originating record company and/or the registrant / issuer of the ISRC, why would a DSP like Resonate need to duplicate or add more ‘optional’ information beyond that which is required for its player, search, community and legal obligations (territories, deals etc - c and p lines)?
If we hold the UPC / ISRC keys for release and track, and maybe ISNI for artist, and the data is freely available from metadata companies like IFPI and others, we could retrieve any additional data on an as-requested basis. The minimum that we currently upload and display in the player is not far off what you have listed above… other than the ‘industry keys’.
We could meet ‘full’ metadata requirements by working with an industry partner, perhaps?
We could then use the (fixed) upload tool for one-at-a-time uploads and continue to meet rights reporting requirements ‘on demand’ by adding the industry identifiers missing from our V2 tracks API via a lookup on sales reporting / query.
Meanwhile we would of course be working on V3 of the tracks API with its more comprehensive (and properly sourced) metadata, fed by bulk uploading to handle data in the tiered / ERN structure you describe above.
(NB: some technical changes too: the V3 tracks API would probably be written in Go rather than JS)
There’s a data cleanse / conversion exercise needed on the existing catalogue, of course!
No worries, let me try and clarify:
why would a DSP like Resonate need to duplicate or add more ‘optional’ information beyond that which is required for its player, search, community and legal obligations (territories, deals etc - c and p lines)?
ISRC codes are indeed the fingerprint of each song; the additional information is always for credits, such as the co-writers of the song, remixers and publishers (who might be our partners in future as well)
A fun little article can be found here: 10 Tips for Songwriters: Credits, Copyrights, and Coauthors | Nolo
Basically, it’s about not short-changing anyone involved in the track itself.
Then, assuming the artist has correctly registered their recorded work (and the associated ISWC, correctly attributing the song and its authors) with a competent label / distributor / metadata provider, all that metadata is potentially retrievable from that provider, or from any metadata aggregator / provider in the industry who also received the data?
I.e. we could get it via a link to a web service closer to source.
Hey thanks so much for the clarification,
Are you trying to say we can extract the metadata from a source and transfer it to our databases? The problem is that the metadata lives in many different databases, and there is no central database that stores every song’s metadata in the world (different publishers, different collecting societies, some artists/bands only register with collecting societies and do not have publishers, etc).
Won’t it be a problem managing so many different databases, when we would have to store the data ourselves anyway? I am not sure there is middleware that can do this, from what I know.
Or did you mean to include a link where we say: “You can find the rest of the metadata at this link”? But then credits will not be shown, and this may pose a problem.
Hi Kavan, thanks… yes, it’s partly a question about the role of Resonate. I don’t think we are primarily in the business of metadata capture, provision and management. Labels and distributors help artists with that - or artists do it themselves. Meanwhile there are several organisations and services which specialise in metadata aggregation and matching as a service.
It seems to me that the music industry has made metadata into a proprietary gravy-train, mostly based on DDEX, but at least the identifiers are sorted out and the basic model has been agreed, even if the technical implementation and protocols are clumsy.
As a DSP, I guess for now we don’t have to do better than Spotify, who merely provide their listeners with very basic and incomplete data: performers, written by, produced by, plus the c and p line rights-holding organisation. They don’t even provide the identifiers which point to the authoritative credits information, often at the artist’s own website or at an industry aggregator like IFPI. Of course IFPI and other similar services don’t hold everything, but if we provide the standard identifiers (ISRC / ISWC / UPC / ISNI), folks can reliably use web search for themselves to find authoritative credits information.
So it seems we already plan to provide better metadata than Spotify does… but we do have work to do on territories and correct c and p line display. I think that is the compliance area we need to take care of.
The need for full metadata to be added when the tracks are added is coming not from us, but from our label and distro partners.
Metadata isn’t a gravy train - it’s the set of tracing codes used for artist payments, from radio plays down to micropayments all over the internet.
I have been in talks for over a year with partners including Kompakt label and distro, EPM digital distro, IDOL digital distro, Domino, SRD and many others. IDOL is the independent digital distributor responsible for Kompakt and SRD - the example metadata form comes directly from them; these are the fields they require our system to hold for them to be able to work with us and bulk upload.
Yes, metadata can be drawn from other places. But it would be a hell of a lot easier if we…
- use an industry standard form/fields from a partner we plan to work with.
- put the onus on the user rather than us and our system - we don’t need to retrieve it.
- allow users who do not have this information to skip it when uploading. If they have the info, they should put it in themselves - they are already doing it for bandcamp/soundcloud/fatdrop etc. But we don’t need to force them, we just need to give all the field options so we build a system that can scale with the industry we are working in.
@Nick_M spotify are taking the full metadata from the digital distros; they are just displaying a very tiny amount to the user.
The form we have from Kompakt/IDOL is the same metadata fields used to deliver to spotify et al.
Hi Melissa - Thanks!
Building on that here’s a hypothesis, reviewing:
- Where we would like to be
- Where we are now
- Possible roadmap
…each of these from the viewpoint of two different key scenarios: music distributed by a label / distributor, and new music from direct artists. The ‘roadmap’ piece is a suggested set of iterations of the services we have already (tracks API) and possible new services (metadata API / third-party service).
Resonate is a community streaming platform, not a label or distributor. It needs to scale massively compared to today in order to be considered useful or relevant. There is a distinction between what Resonate as a DSP needs to hold, and what it needs to display. According to the standards, the relationship for a distributor to a DSP is based around the ERN - release notification - to the DSPs and Music Stores who load what they need from it into their many and various streaming and sales platforms. Just like Spotify, Resonate already gets ERN information from distributors like FUGA as a DDEX ERN feed, for example, and stores it. Potentially we will have to add code to the new player to display the rights, according to listener territory.
When / if we process those files, it isn’t necessary to copy everything from them into a back-end service (the tracks API) behind the player. We can keep the last file sent to us (or have someone else keep those files for us). We must of course update the data we DO use and display, especially for updates and takedowns. Processing of those files has been manual in the past, and that’s clearly unsustainable. The planned bulk uploading work is meant to address that. Kendraio already have good tooling for this, so it is still hoped that our association with them will deliver on this.
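To illustrate the ‘copy only what we display, keep the source file’ idea, here is a rough sketch. The XML fragment is a deliberately simplified, hypothetical stand-in - real DDEX ERN messages are far richer and structured differently - and the function and field names are my own:

```python
import xml.etree.ElementTree as ET

# Hypothetical, heavily simplified ERN-like fragment (NOT real DDEX schema).
ERN_SAMPLE = """
<NewReleaseMessage>
  <ReleaseList>
    <Release>
      <Title>Example EP</Title>
      <Artist>Example Artist</Artist>
      <PLine>2023 Example Records</PLine>
      <Track isrc="DEXXX2300001"><Title>Track One</Title></Track>
      <Track isrc="DEXXX2300002"><Title>Track Two</Title></Track>
    </Release>
  </ReleaseList>
</NewReleaseMessage>
"""

def extract_display_fields(ern_xml: str) -> dict:
    """Copy only what the player displays; the source file is kept for the rest."""
    root = ET.fromstring(ern_xml)
    release = root.find("./ReleaseList/Release")
    return {
        "title": release.findtext("Title"),
        "artist": release.findtext("Artist"),
        "p_line": release.findtext("PLine"),
        "tracks": [
            {"isrc": t.get("isrc"), "title": t.findtext("Title")}
            for t in release.findall("Track")
        ],
    }
```

Updates and takedowns would then re-run this extraction against the latest file received, replacing only the displayed fields.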
An artist who comes to us via a label or distributor does NOT want to have to re-key all the metadata already provided to a distributor or to an industry metadata aggregator. They’d want us to get it automatically in the IDOL or FUGA feed or similar; they would simply tell the label or distributor to ‘turn on’ their data in the feed sent to us. Having an accurate and reliable key in that data, like an ISRC, would be helpful, and indeed essential for later sales reporting.
In the player we should really be displaying ‘p-line’ and ‘c-line’ information according to the territory of the listener so that we respect rights. Spotify and other big streaming services do this, but Bandcamp don’t seem to do it on their sample/promotional plays. There’s another explainer here: What do the P Line and C Line mean in music copyright? - RouteNote Blog
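Territory-aware display of the p-line could work roughly like this - a sketch in which the rights table shape and the ‘Worldwide’ fallback rule are assumptions of mine, not our actual schema:

```python
# Hypothetical per-territory rights lines, keyed by (ISRC, territory);
# "Worldwide" acts as the fallback entry.
RIGHTS = {
    ("DEXXX2300001", "DE"): {"p_line": "℗ 2023 Example Records GmbH"},
    ("DEXXX2300001", "Worldwide"): {"p_line": "℗ 2023 Example Records"},
}

def rights_for(isrc: str, listener_territory: str) -> dict:
    """Prefer a territory-specific entry, fall back to the worldwide one."""
    return RIGHTS.get((isrc, listener_territory)) or RIGHTS.get((isrc, "Worldwide"), {})
```

The player would call something like `rights_for(track.isrc, listener.territory)` at display time, so the same catalogue record can show different rights lines in different territories.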
Resonate sales reporting will one day require the creation of standard sales report files based on the standards. We will need a proper reporting service to do this instead of the current spreadsheet. If we need to add further industry metadata to those files, we can build a metadata service to merge ERN-sourced data with our playstats and income data. Downstream of us, distributors or PROs may then use that as input to a separate service (or a third party) on their side to calculate the splits and distribute the royalty payments.
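The merge itself is simple in principle - something like the sketch below, where all field names are illustrative and the real output would have to follow the industry sales-report formats:

```python
# Play/earnings rows keyed by our internal track id (illustrative data).
PLAYSTATS = [{"track_id": 42, "plays": 120, "earnings_eur": 1.80}]

# ERN-sourced industry identifiers, also keyed by internal track id.
ERN_KEYS = {42: {"isrc": "DEXXX2300001", "upc": "4012345678901"}}

def build_sales_report(playstats, ern_keys):
    """Join playstats with industry identifiers so no one re-keys them by hand."""
    report = []
    for row in playstats:
        keys = ern_keys.get(row["track_id"], {})
        report.append({**row, **keys})  # identifiers merged into each report row
    return report
```

Downstream services would consume these enriched rows to calculate splits and payouts.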
As an artist with new music, I might not be very aware of music industry standards, bodies and ways of working (see Kavan’s primer link above). Resonate has a friendly community of folks who can help educate newcomers about the metadata they need to maintain as an artist, but as a streaming service Resonate needs to focus on the data it needs to display to listeners and provide later to the industry in sales reporting.
We don’t have the resources or expertise to act as a substitute for a good label or distributor: educating the artist, checking all the data about the artist’s release / track, registering it with the appropriate industry keys (UPC, ISRC), adding other relevant keys like ISWC and ISNI if available, and making sure they are allocated and in the right place in the DDEX dataset.
‘Upstream’ of Resonate, I’d imagine that Kompakt’s process is:
1. Collect metadata from artists and labels in that spreadsheet (with its embedded guidance).
2. Use that data as input to create a DDEX ERN file or similar for standardised distribution, using IDOL.
3. IDOL distributes to DSPs, retailers or any industry partner able to accept the standard electronic file (XML / flat file) they create.
4. DSPs / retailers import what they need from it into their streaming or sales order processing systems.
Resonate might one day seek to operate a co-operative / community label / distribution arrangement, but as far as I know that is a delicate matter - why would we compete with trusted and experienced independent label partners to do that? Far better to introduce the newcomers to those partners via our forum and enlist their help in that?
So our own Resonate ‘direct’ upload tool or ‘dash’ would have three main use cases for direct artists:
- newcomer self-uploading - could be simple and focused on just that which is necessary to showcase the new music as a try-out on the platform - basic minimum release / track information plus any special Resonate-specific tags they might want to add to assist discovery
- fallback route for artists with labels / distributors who can’t get their data to us via an ERN feed, but who have the validated metadata and industry keys available, either themselves or with a label/distributor
- label wanting to update Resonate-specific discovery tags on behalf of their artist
Our work with Kendra.io is stalled. We had hoped to try to get work going on tracks API v2 and v3 and the User API plugged in to their tooling. This still seems a good idea, but we need to work together with them on the tracks API v3 design.
Once we have the fixes done on the existing upload tool / dash and some interim technical work done on tracks api V2 we are in good shape to release a simple ‘newcomers’ self-uploading tool on the back of the artist sign-up flow. It might also be of interest for a small label adding a number of profiles on artists’ behalf.
Unfortunately we would have to continue to collect ISRCs, ISWCs, UPCs and any other mandatory reporting data separately and add them to our sales reporting process and files (still spreadsheet-based).
- v2 tracks API as above
- Capture industry identifiers in migration files
- v3 tracks API: add industry identifiers, document it properly, provide a good developer environment
- Prepare for migration of existing metadata for 15k tracks
As a design principle, it would be good to keep our internal tracks API metadata set as focused and clean as possible: limited to what we need for the player, reporting and community/socials, holding the keys to the original / source metadata rather than all the metadata itself. A separate metadata service could then be created to handle the external interaction with other services. This could be a specialist partner-provided and hosted service, part of our ecosystem.
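As a sketch of that principle: a lean internal record holding the industry keys, with everything beyond display basics delegated to a separate metadata service. All names and interfaces here are hypothetical:

```python
# A lean internal tracks-API record: only what the player, reporting and
# community features need, plus the industry keys that point at the
# authoritative source metadata. Field names are illustrative.
INTERNAL_TRACK = {
    "id": 42,
    "title": "Track One",
    "artist": "Example Artist",
    "isrc": "DEXXX2300001",   # key into external / source metadata
    "iswc": None,             # optional; filled in when known
}

class StubMetadataService:
    """Stand-in for a separate (possibly partner-hosted) metadata service."""
    CREDITS = {"DEXXX2300001": {"writers": ["A. Writer"], "publishers": ["Ex. Publishing"]}}

    def lookup(self, isrc):
        return self.CREDITS.get(isrc, {})

def resolve_full_credits(track, metadata_service):
    """Delegate anything beyond display basics to the external service."""
    return metadata_service.lookup(track["isrc"]) if track.get("isrc") else {}
```

The tracks API stays small and player-focused; full credits are fetched on demand through the key, not duplicated into our store.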
The comment about the ‘gravy-train’ was not meant to be disrespectful of all those who care about the vital importance of metadata and attribution: I’m referring to the industry dominance of the majors in standards-setting, who resist open-source approaches, leaving us with a set of clumsy industry metadata standards and protocols that act as a barrier to new small label entrants and artists alike.
There are open-source alternatives rooted in W3C standards like JSON-LD that would be preferable for a community music ecosystem with enhanced attribution and discovery - in the longer term, IMHO:
…possibly working in collaboration with specialist open-source encyclopaedia projects like: About - MusicBrainz
But… as Kavan points out, there is no globally standardised, all-in-one-place store of all the source metadata for a track - just the data standards and the industry protocol for exchange, with many folks who use it. Our metadata service will have to support many standards, starting with the ones that are rooted in law.
Footnote: …One day in the long-distant future we might combine the open data approach of the music ontology with our verifiable credentials and become an issuer of truly verifiable rights ownership, without depending on a host of centralised intermediary services, who make money out of shovelling metadata from place to place.
sorry for the long answer… trying to be as thorough as I can in the time available
Hi dear @Nick_M - sorry for the late reply. I am getting the metadata sheet that Spotify requests from my distributor. I will be back shortly on this, but in the meantime we should meet up online; I would like to understand the reasoning behind the case studies you have put across against the complete-metadata ideology.
Hi Kavan! I have nothing against the complete metadata ideology… after all, we get it anyway in the files we receive. I just feel we need a separate service to handle metadata exchange and management, not the tracks api, which needs to focus on information that the player needs to display. I think we need a B2B service that specialises in metadata management to do this properly. A partner tool or service might be a good solution for us. Yes, happy to meet up.
Hey Nick, thanks so much for clarifying. Yes. i see a clearer picture now from your explanation, and yes, let’s meet. Are you free later today?
Trying to sort out all of these discussions in my head, as to what we have, what we need and what we desire.
Makes me wonder… is there a spreadsheet that would cover that, showing all the fields we are currently collecting in the artist upload tool and everything that we’d need to add in order to be viable for bulk ingestion from distributors?
Asking both to understand how the decision making process is happening (which seems to heavily relate to the relaunch) as well as how it needs to evolve (which heavily relates to the work we’re doing at Envoke).
Working on some scenarios in which Envoke could provide funding and/or developer support as part of our roadmap. We’re collaborating with VUT on a metadata database for their membership, to be then licensed to other trade associations, so I’m hoping this can all be done in a way that is beneficial to all of us.