A High Price for Free Music

To the ‘feels like free’ crowd, two examples from the summer of 2014 have shown just how strange and illusory is the ideal of a music market with no barriers to consumption and no price to the public. Barriers, it turns out, are sometimes valuable to patrons and sponsors, and costs to the public are not exclusively financial.

Apple’s U2 giveaway to 500 million iTunes users has yet to reveal its strategic brilliance; Emeli Sandé shared a stage on the River Thames with a car, to debut a pedestrian hashtag-inspired song. Sandé left me longing for the uncompromising brilliance of Grace Jones (does Mr Kipling make madeleines?), an artist her ‘team’ have obviously learned from. With wonderful irony, Bono’s paean to punk, ‘The Miracle (of Joey Ramone)’, carries the following message at the YouTube link offered by the Guardian website:

“U2 – Songs Of Innocence …” This video is no longer available due to a copyright claim by Apple Inc.

Sandé is available on YouTube – find it, if you like, via the Jaguar cars website, and your tracking data will add to the marketers’ conviction that they are the smartest cats on the planet right now.

Our sophisticated modern economy offers us many ways to pay for music; it should not need pointing out that doing so is practically unavoidable, as the price of music used in marketing is included in the price of each and every product. Through taxes and levies we support broadcasters and arts programmes which pass on part of their funding to musicians and the music industry. Most of the money collected in this way comes under ‘public performance’ schemes. Here is what PwC had to say about it in an IFPI report covering just the sound recordings side, published in 2008:

…the estimated range for the potential global value of sound recording performance rights is between USD 1.9 billion (an increase of 56% over current collections) to USD 2.9 billion (an increase of 141% over current collections) annually.

And here are IFPI’s numbers from 2013:

Performance rights income topped US$1.1 billion globally for the first time in 2013, increasing by an estimated 19 per cent in 2013, more than double the growth rate in 2012, and accounting for 7.3 per cent of total record industry revenue.

One step back from this compulsory payment for music is the theme park model offered by subscription services, in which unlimited listening is offered in return for a single, usually monthly payment. This has the virtue of moving the pain of paying away from the pleasure of consumption, and allowing people to express a general preference for music in their lives as well as a degree of preference for which music they listen to.

It also differs significantly from public performance in that the price to the subscription service is set by the copyright owner in a private and direct negotiation. And that means that the two-sided wholesale market – artists and recording / distribution companies on one side, intermediaries and services on the other – is far more complicated and competitive, even though the eventual cost per listen to the consumer is simply the subscription price divided by the number of plays.

Outside the fawning response of the fanbase, critical reaction to models where payment is either indirect or non-discriminatory falls into a few categories. ‘Selling out’ is an obvious one; music industry insiders worry about ‘training the consumer to expect free music’; artists and superfans are concerned that intermediated markets ‘rip off artists’.

These are important ideas in the narrative of popular music. Authenticity of the artist’s voice is valued, and corporate sponsorship undermines it no matter how that corporate might be perceived. The shift from ‘consumer pays’ to ‘somebody else pays’ undermines the independence and authenticity of the consumer, an idea that could probably do with some deep examination as it inverts the very common themes of product and brand authenticity in modern marketing. The degree to which music is seen as an authentic product will govern how much a consumer cares about ripping off artists, but it is surely true that few of us like to be coerced or bribed into something we think might be unfair to others.

There are two deeper and more troubling problems with ‘feels like free’ for music. It would be pointless to say that music should not be commoditised, but that does not mean we need to believe that a hashtag driven marketing campaign is an ideal creative inspiration, or that a marketing department is the best arbiter of the value of the result. There is no way to measure or value the future loss of authentic and personal creativity that the commercial sponsorship of artists might cause, which makes it both harder and more important to care about it. Otherwise we would need to believe that Sandé’s song and U2’s album can be the epitome of musical creativity and expression despite their genesis and mode of emergence into the world.

The other deep problem, as illustrated by Apple’s takedowns on YouTube, is that when you don’t buy freely in a market you pay with freedom in return for the product. For Apple, value is partly about the extent to which they can define and own the space you inhabit. A previously private space – your own music library – has been shown to be shared with Apple; you can delete the album, but cannot regain your privacy. To an individualist like me that is sad. The Ramones inhabited a space that could not be privatised by any corporation, and their record sales are a historical proof of their inability, deliberate or not, to sell out. On behalf of all of us, and co-opting Joey Ramone’s name, U2 sold that space to Apple.


To Get Paid We Should Set Music Free

Digital music, having started out as a tied product with iTunes and Windows Media locking the files to devices and software players, achieved a certain freedom as DRM came off, but is now migrating fast back to proprietary encrypted formats. These locks benefit only their owners and operators. Switching from Spotify to Rdio, or Rdio to Deezer, or Deezer to Zvooq, means abandoning your library and playlists, and starting from scratch.

It is unarguable that this approach is providing a significant and growing flow of money into the music industry. IFPI reported that subscription and streaming services produced $1b in 2013, and grew by over 50% year on year. In their bid to sign up subscribers these services are innovating rapidly in user experience, and racing to achieve global scale. There’s a long way to go, with around 30m active subscribers across all services. If music follows broadband into the home, there are already about 700m households in the market.

On a standard Rogers model this means that we are out of the ‘innovator’ market and into early adopters. But only just. And the early adopters are very important, as they have the highest degree of opinion leadership. This segment is where a new paradigm flies or flops. And this is where a kind of inverse evolution is at its most dangerous.
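For what it is worth, the arithmetic behind ‘but only just’ is simple. Here is a back-of-envelope sketch using the subscriber and household figures above; the segment thresholds are the textbook Rogers splits, nothing specific to music:

```python
# Back-of-envelope check of where ~30m subscribers sit on a Rogers adoption curve,
# taking the ~700m broadband households quoted above as the addressable market.
SUBSCRIBERS = 30_000_000
HOUSEHOLDS = 700_000_000

# Standard Rogers segments as cumulative shares of the whole market.
ROGERS_CUMULATIVE = [
    ("innovators", 0.025),
    ("early adopters", 0.16),     # 2.5% + 13.5%
    ("early majority", 0.50),
    ("late majority", 0.84),
    ("laggards", 1.00),
]

penetration = SUBSCRIBERS / HOUSEHOLDS
segment = next(name for name, ceiling in ROGERS_CUMULATIVE if penetration <= ceiling)

print(f"penetration: {penetration:.1%}, current segment: {segment}")
# About 4.3%: just past the innovators and into the early adopters. But only just.
```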

Evolution, in the popular phrase, is bad for the individual but good for the species. It is a binary test for specific unfitness. Early adopter markets on the other hand are good for the individual, but terrible for the species. No player has a strong incentive to sacrifice present growth for future sustainability; each prefers to act as a highly competitive individual rather than a category.

And the truth is that the qualities of a successful category are very different to the qualities of a single offering. Choice, variety, many price points, and often some technical compatibility with competitors are what drive adoption into the mass market. In recorded music we have had many iterations of technical delivery, so we know this well. A £29 portable CD player plays the same CD as a top-end audiophile model costing £29,000 (yes, they do exist).

This is what we need for digital too if we are to push out of the early adopter market into the early majority. We need the influential early adopters to be saying not that one or other service is great, but that subscription music is great, for all the category benefits outlined above. Otherwise music risks sacrificing its paid-for niche to the irresistible rise of unpaid platforms, in return for a far less certain environment for investment in new artist development and high quality music production.

There is a smart way to push this process forward, and that is through library and playlist portability, enabled through a simple metadata file format (the audio itself is becoming ubiquitous anyway so would not need to be transferred unless missing in the receiving service), and a voluntary trademarked scheme for the services themselves. A ‘non-portable’ flag in the original supply chain metadata could allow music owners to offer enforceable exclusives to services if that was seen as valuable. All but the biggest service would have an incentive to participate as there would be more to gain than to lose; a service removing portability in response to losing customers would suffer a big loss in reputation and trust.
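To make the idea concrete, here is a minimal sketch of what a portable playlist export might look like. The format name, the field layout, the use of ISRCs to identify recordings, and the ‘non_portable’ flag are illustrative assumptions only, not an existing standard:

```python
# Illustrative sketch of a service-neutral playlist export. The format name, field
# layout, ISRC identifiers, and the 'non_portable' flag are assumptions for the sake
# of the example, not an existing industry standard.
import json

playlist = {
    "format": "portable-playlist/0.1",      # hypothetical format identifier
    "title": "Sunday morning",
    "exported_from": "example-service",     # the service the listener is leaving
    "tracks": [
        {"isrc": "XX-AAA-14-00001", "title": "Example Track A", "artist": "Example Artist"},
        {"isrc": "XX-AAA-14-00002", "title": "Example Track B", "artist": "Another Artist",
         "non_portable": True},             # the owner has flagged this one as exclusive
    ],
}

def importable_tracks(data):
    """Return the tracks a receiving service may recreate locally."""
    return [t for t in data["tracks"] if not t.get("non_portable", False)]

print(json.dumps(importable_tracks(playlist), indent=2))
```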

Throughout the history of recorded music, innovation that adds value has on balance expanded the market. Cassette tapes made vinyl records more valuable by making the music portable. ‘Rip, mix, and burn’ added value to CDs in a similar way. Millions of tracks available with simple search and click, and sophisticated playlist and library management tools are doing the same in digital music. Giving consumers full ownership over their investment in the time and trouble to learn and use these tools will create more value for them, and more value for music owners and services as subscription heads into the mainstream.

Portability will release a wave of investment in music service innovation to serve all kinds of consumers better, until, as with CD players, it becomes hard to imagine a well-equipped home without one. It is not particularly challenging, either technically or legally. We should, once again, give music libraries and playlists the right kind of freedom, freedom from restrictions, or risk being overwhelmed by the wrong kind: freedom from payment.


YouTube Sends a Depressing Message About Music

This month (May 2014) saw the surfacing of another argument between the organised independent record companies and a wholesale customer, in this case YouTube. The accusation, that YouTube had given the major labels better terms than it was offering indies, was wearyingly familiar; as was YouTube’s reported response, to offer ‘take it or leave it’ deal terms and to try to pick off labels willing for whatever reason to leave the collective.

Both sides have plenty to justify their position. On YouTube’s side, independently produced music brings vast catalogues, a disordered supply chain, and a lower average value per track. Collectivisation solves some problems but not all, and relatively few recordings are so compelling that consumers would leave YouTube if they were not available. The indies can counter that they produce more high quality new recordings than majors, with much more diversity, that the administration can and will be made much smoother if the incentive is there, and that anyway, who is YouTube to say that any individual recording is worth more or less according to the size of the corporate entity that is selling it?

Some compromise will no doubt be found in time. However YouTube should be much more circumspect about how it manages its relations with music producers; professional independent producers create disproportionate value in music, and YouTube does not have a credible alternative to their skills and investment if it wishes to avoid a disastrous drift into sub-optimal Gini coefficient content markets. Some examination of what could be thought of as the content funnel will show how this works.

A music producer who has not contracted away the rights in their recordings can upload their work to YouTube and accept whatever revenue is offered in the click-through licence. Any business wanting to insert itself as an intermediary needs a mixture of pull and push to attract musicians who would otherwise have no reason at all to give up a share of their revenue from YouTube or from anywhere else.

Naturally YouTube would ensure the balance of risk and reward would be very much weighted to its advantage in a deal it offered indiscriminately. With so little information available about the eventual value of any recording, no advances against future revenue are likely to be offered, nor will the music producer be able to argue that each listen or view should have a minimum fee attached. YouTube is extremely unlikely to wish to pay for many recordings before they are made, and would find it very hard to organise production. Rival platforms might not wish to carry its content, meaning a proportion of the market would be inaccessible.

All this is obvious, but it benefits from a restatement in this context. Intermediaries can compete by advancing funds for production, by bringing organisational skills (hiring arrangers, session musicians, and studio time), through their access to the whole market, and potentially through their ability to negotiate better terms than would be offered to random uploaders. They can invest in better supply chain management, and can work cooperatively with YouTube and other platforms to find and focus interest in the music. A content funnel with many intermediaries competing for the attention of music producers raises standards all round.

Systemic discrimination in favour of three global companies sends a depressing message about YouTube’s view of the world of music, musicians, and music industry expertise. If it is motivated by anything other than ignorance, laziness, or spite, such discrimination is a broadcast to policy makers, creative people, and to its audience, that YouTube believes any music not owned by music’s Big Three (Sony, Warner, UMG) is by definition less worthy of attention, and the musicians who made it less worthy of investment. Like the legendary cigar chomping music exec of yore, this is YouTube’s way of saying ‘don’t give up the day job’. The fewer highly rewarded music supplier partners YouTube has, the more such systemic discrimination is multiplied, meaning even less investment in quality and diversity.

In an organisation as smart as YouTube, owned by one of the most capable businesses of the age, there must be people capable of taking a forensic view of the music industry, and analysing with the greatest precision where incentives should be created to reward diversity and quality in music on a scale never before seen. It is surely too soon in the development of super-massive platforms like YouTube to decide that any encouragement outside the corporate hegemony is a pointless waste. We should all hope that those people within YouTube who still have a little hope for the future of music soon find their voice.

 


Downloads versus Access? Follow the Science AND the Money!

One of the big debates in music is whether the download is dead, killed by ubiquitous connectivity and streaming services. Certainly the expensive (>$0.50) download seems to have plateaued, and streaming revenue is growing strongly, tempting new entrants who bring with them investment in better consumer experiences. The end game, goes the argument, is that the device you plug the earbuds into becomes a terminal, connected to choice and service such as the music fan has never before experienced.

Two things make me pause. The first is a nagging doubt that the mobile and wireless networks needed to deliver always on access really will live up to expectation in the near future. Telecoms seems to be in turmoil around the developed world, with companies facing huge investment decisions at a time when the traditional sources of profitable revenue are crumbling, cannibalised by feature substitutes carried over undifferentiated data. Regulation quite rightly favours innovation over protection, but has not yet addressed the funding of infrastructure when networks have been turned into a utility rather than a service. As for seamlessly transferring sessions between protocols and providers, well I am not holding my breath.

The second pause point is the astounding innovation in storage, seemingly in all configurations and target uses. The standard server hard drive now carries 4TB of data, with 6TB on the way and densities in labs promising as much as 60TB. The advanced science suggests that we are in the foothills; IBM posits that a mere 12 atoms are needed to store 1 bit of data, suggesting a 100-fold increase in storage density. That may be some way off, but each month brings announcements about technical breakthroughs promising twice to ten times today’s densities.

In music only a very few tracks have enough genuine demand to justify providing a live connection either for a download or a stream, or the resulting torrent of reporting to share out the minute royalties generated. It’s easy to understand popularity in music as a pyramid of 3 multiplied by powers of 10. 3 tracks are screamingly hot, 30 in demand, 300 popular, and so on. Today the entire repertoire of drivetime radio would fit on a memory chip the size of a thumbnail, and as we know that selection changes so slowly it makes a real thumbnail look volatile. A square inch will soon be able to store 15,000 tracks – that is about 750 hours of music. We are less eclectic than we think; many of us turn off the radio if the DJ starts playing music we have not heard before.
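The back-of-envelope numbers are easy to check; the sketch below assumes roughly three-minute tracks encoded at 320 kbps, both assumptions rather than measurements:

```python
# Rough check of the figures above: the pyramid of 3 multiplied by powers of 10,
# and 15,000 tracks coming to roughly 750 hours. Track length and bitrate are assumed.
TRACK_MINUTES = 3           # assumed average track length
BITRATE_KBPS = 320          # assumed encoding rate

tiers = [3 * 10**n for n in range(6)]    # 3, 30, 300, ... tracks per popularity tier
print("popularity pyramid:", tiers)

tracks = 15_000
hours = tracks * TRACK_MINUTES / 60
gigabytes = tracks * TRACK_MINUTES * 60 * BITRATE_KBPS * 1000 / 8 / 1e9
print(f"{tracks} tracks is about {hours:.0f} hours, around {gigabytes:.0f} GB at {BITRATE_KBPS} kbps")
# Roughly 750 hours and about 108 GB: well within a single flash chip.
```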

What are the business drivers here? A catalogue owner would do much better to license a selection of back catalogue on a once-only fee to be sold as a chip embedded in a device, enjoyed for a year or two, and then recycled or destroyed. New and popular music would naturally remain on a per-use or per-download tariff as it justifies the cost of multiple deliveries and all that royalty reporting, and could be managed in a more dynamic cache on the device. Music services could get adept at using network quiet time to refresh and pre-cache, giving the network owners a real reason to subsidise the bandwidth music uses off-peak in preference to expensive and contended peaktime data. The embedded library could have something akin to a firmware update from time to time, if necessary. Importantly, it could be entirely locked to the hardware, meaning an effortful loopback to a recording device if anyone wanted to set it free.
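As a sketch of the caching idea, a service might gate refreshes of the dynamic cache on a quiet-hours window; the window and the ‘pending chart tracks’ signal below are invented for illustration:

```python
# Sketch of the off-peak refresh idea: only pull new chart material into the device's
# dynamic cache during network quiet hours. The quiet window and the notion of a
# "pending chart tracks" count are assumptions for illustration.
from datetime import datetime

OFF_PEAK_HOURS = range(1, 6)    # assumed quiet window, 01:00 to 05:59 local time

def should_refresh_cache(now: datetime, pending_chart_tracks: int) -> bool:
    """Refresh the dynamic cache only off-peak, and only if there is something new."""
    return now.hour in OFF_PEAK_HOURS and pending_chart_tracks > 0

print(should_refresh_cache(datetime(2014, 6, 1, 3, 30), pending_chart_tracks=12))   # True
print(should_refresh_cache(datetime(2014, 6, 1, 18, 0), pending_chart_tracks=12))   # False
```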

The sell to the consumer would be relatively straightforward too. Buy a library of evergreen music for $50, and add all the latest chart hits for $25 per year if you want to, or buy a la carte if you prefer. At a wholesale rate of $40, music could be used as a sales incentive for just about any big-ticket or recurring item. Remember this is back catalogue; I predict that the music rights owners would compete to get on the chip.

Having strong musical preferences and making playlists has always been a minority sport. The technology and the business drivers both say that in a few years we will be buying downloads in vast quantities, but not choosing them track by track. We might not be paying for them directly. The result will be a much more efficient market, and most important for us all, a certain amount of freedom from over-dependence on privacy invading clouds and undependable mobile networks.


What the Future Used to Look Like, in 2006

A new feature in iOS 7 reminded me that the sky, which was supposed to be falling eight years ago, has remained inexplicably buoyant. The feature is called the ‘Multipeer Connectivity Framework’ and it provides a simple way for devices to connect to each other over Bluetooth or peer-to-peer WiFi in order to exchange files or messages.

Eight years ago I was thinking about WiFi mesh networks, and wondering why what I thought of as the ultimate playground sharing device had not been brought to market. The idea was simple: a portable device which let you create sharing groups of friends, connect securely to each other, and pool collections of files, chat, and send photos to each other. If any one of a connected group of devices had an internet connection, that too could be shared, giving everyone access to the global filesharing and chat networks. Private, ad hoc, and intermittent sharing could at the time be thought of as a copyright owner’s nemesis. Instant tape trees, updated and improved. Even back then the bill of materials for such a device, and the networking protocols, were available and cheap. It could have been delivered for under $100.

For better or worse it never happened, like so many alternative futures we have dreamed of over the first two decades of ubiquitous digits. I still think it would be exciting, and terrifying, if such a device caught on in the playgrounds of the world. I wonder, for instance, if regulators would feel compelled to come up with new crimes alongside copyright infringement, such as ‘incitement to upskirt’, or ‘assault by clandestine chat mobbing’.

I was at the time working with a music industry group on collective responses to such developments, the membership and discussions of which have to remain confidential. I can however share a paper I prepared, in August 2006, full of excitement about what seemed to me to be amazing advances in technology with huge potential for good as well as a bit of copyright mischief. Many of the references and links have disappeared, but for the sake of perspective and posterity, here it is:

New and Exciting Ways to Experience Music, Part I

Let’s have a key! How about:

(n) for “now” – meaning implemented and in use

(l) for “likely” – meaning the technical means are out there, and perhaps already being used by a few intrepid pioneers

(p) for “possible” – meaning that existing techniques could be adapted, or new techniques developed using similar technologies to those either already deployed or in the pipeline

It will also be helpful and avoid repetition if the NEWTEMS are grouped into families, so that the variations on a theme can be understood and their impact assessed. The status code should apply independently to the variations as well as to the family. Also, where possible, examples, with web links.

Newness is a matter of viewpoint, so the criterion should probably be whether they are sufficiently both digital and musical, not whether they emerged in the last n years.

Internet filesharing, ‘classic P2P’ and its successors

The components we need to consider are the communication protocol, the application, and the transport. To illustrate, in the case of Limewire the protocol is Gnutella, the application is Limewire, and the transport is TCP and UDP. The Gnutella protocol is released under an open source licence and is widely adopted. The transport uses fundamental internet protocols. The application is released in two versions, a paid for application and an open source project available under the GNU GPL. The Limewire source code has been used by other project teams to create new and modified applications.

Gnutella is what’s known as a ‘decentralised’ protocol, where both searching and file transfer are managed by the application running on the user’s host computer, rather than by one or more central servers. It is however relatively easy to discover which host is advertising a particular file, and some implementations allow the shared directory of each host to be listed.
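As a toy illustration of the decentralised model (the principle only, not the actual Gnutella wire protocol), a query can be flooded from neighbour to neighbour with a hop limit, and any peer holding a matching file reports a hit:

```python
# Toy illustration of decentralised (Gnutella-style) search: a query floods from
# neighbour to neighbour with a hop limit (TTL), and any peer holding a matching
# file reports a hit. This sketches the principle only, not the real wire protocol.
peers = {                       # peer -> neighbours in a tiny overlay network
    "A": ["B", "C"],
    "B": ["A", "D"],
    "C": ["A", "D"],
    "D": ["B", "C", "E"],
    "E": ["D"],
}
shared = {                      # peer -> files it is sharing
    "A": [], "B": ["live_bootleg.mp3"], "C": [],
    "D": ["demo_take_3.mp3"], "E": ["demo_take_3.mp3"],
}

def search(origin, filename, ttl=3):
    """Flood a query outward from `origin`; return the peers that hold the file."""
    hits, seen, frontier = [], {origin}, [(origin, ttl)]
    while frontier:
        peer, hops_left = frontier.pop(0)
        if peer != origin and filename in shared[peer]:
            hits.append(peer)
        if hops_left > 0:
            for neighbour in peers[peer]:
                if neighbour not in seen:
                    seen.add(neighbour)
                    frontier.append((neighbour, hops_left - 1))
    return hits

print(search("A", "demo_take_3.mp3"))   # ['D', 'E'] with the default hop limit
```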

There are many sources of information about filesharing protocols and applications on the web. I shall here just highlight a few developments that will have some bearing on the VRS agenda. Some applications will be hybrids of the following categories.

Anonymous P2P (n)

Anonymity is achieved by constructing routing messages so that it is difficult to determine the source or final destination of any given packet of data. Messages may also be encrypted.

Example: GNUnet – http://gnunet.org/

Encrypted P2P (n)

Messages are encrypted so that it is difficult or impossible to discover what content is contained in the data that is flowing around the network.

Example: ANts P2P – http://antsp2p.sourceforge.net/

Filesharing over encrypted connections (n)

An encrypted and private network is created over which filesharing applications are run. Increasingly internet chat sessions are being encrypted, and the chat applications are including easier ways to share files.

Example: stunnel – http://www.stunnel.org/

Filesharing over anonymous networks (n)

An anonymous network is created using a series of temporary routes and proxies which masks the source and destination of some network traffic. Filesharing applications then use this anonymous network.

Example: Tor – http://tor.eff.org/
also: I2P – http://www.i2p.net/

Trusted F2F networks (n)

Generally what’s meant by the ‘darknet’, F2F (friend to friend) networks are formed through permission mechanisms, either where each member has to accept a security token from every other member, or where a member has to allow a new member to join and then the new member is accepted by all existing members.

Example: W.A.S.T.E. – http://waste.sourceforge.net/

Wireless Internet, WiFi, WiMAX, Mesh Networks

Anything that works over the wired internet also works over wireless, so all the above apply equally to this section. These technologies also have the capability of forming local point-to-point connections, and local networks, without any internet connectivity. While some WiFi internet will be provided through commercial hotspots, and require contracts and accounts in order to gain access, some will be provided through free and open networks provided by municipalities, volunteer groups, charities, and through neighbour sharing of home wireless LANs.

Shared WiFi (n)

Rather than opening a standard WiFi router up to public access, some operators are controlling who can connect to their networks, which ensures a degree of privacy or trust. Terms and conditions are sometimes subject to a voluntary peering agreement which is designed to prevent others from using the network without contributing their own bandwidth to it. Open source tools are available to manage shared WiFi networks.

WiFi is also capable of creating small scale point to point or group networks between mobile devices. These could be used to transfer files from one portable music player to another, or allow a group of friends to share each others’ content. WiFi could also enable such devices to connect to the internet.

Sony Mylo
http://www.sony.com/mylo

Wifidog
http://dev.wifidog.org/

WiMAX (l)

WiMAX is the marketing name for a technical standard that is best suited to providing high bandwidth network access over a distance of less than 5km. It can carry mobile telephony, TV and video signals, and internet access. WiMAX is currently either absorbing or concluding interoperability agreements with other rival standards, and is being deployed on a large scale in several countries.

WiMAX Forum
http://www.wimaxforum.org/home/

Mesh Networks (n)

Mesh technologies enable much larger networks to be created, with each node acting as a router as well as a point of connection. A mesh can also use the wired internet to hop between otherwise autonomous mesh networks. In practice, mesh networks are likely to be used for sharing internet access and files around buildings, small areas of a city, or at events such as festivals, where their ad hoc characteristics make them most suitable. They also offer a very quick and low cost way to bring internet access to disaster or conflict areas. Generically such capabilities would enable what could be termed a ‘playground sharing device’, with SMS and internet chat applications too.

Locustworld
http://www.locustworld.com/

Radio Babylon (a state51 project)
radio babylon

SONbuddy
http://www.sonbuddy.com/

Short Range Radio, Cables, other Transmission Methods, Pluggable and Removable Media

Much like the wireless technologies listed above, this set of communication technologies can be used for a range of purposes in order to provide point to point connections, networks, and access to other networks. The shorter range radio standards are generally considered as ‘wire replacements’, used instead of cables to connect devices and peripherals. Enabling technologies include USB, FireWire, Bluetooth, WiFi, UWB, Infrared, Memory Cards of all shapes and sizes, portable Hard Drives, and all other forms of removable media storage, including CDs.

Sneakernet (n)

Rather a broad term perhaps, but useful to collect a set of essentially person to person activities that are enabled by new digital technologies. Sneakernet refers to data transfer which involves at least one party carrying something containing the data. Here are a few subcategories…

Bluetooth transfer: files are loaded onto a bluetooth capable device such as a mobile phone, and BT file transfer is used to copy the files onto another device. There’s an interesting case where Verizon Wireless, apparently in order to comply with contracts with content providers, crippled some bluetooth features on a phone including file transfer, and consequently fell foul of a class action.
Verizon Wireless – ‘We never stop working for you.’
http://www.verizonwireless.com/b2c/footer/legalNotices/v710.jsp

USB Hard Drive (Thumb Drive): A small Hard Disk is loaded with files and can then be plugged into mobile phones, mp3 players, computers and games consoles in order to copy the files. Capacities have been growing and have now reached 16GB.
CellDisk 16GB
http://www.iocell.com/english/products/16gb_e.asp

USB Data Transfer: Even without a host computer two USB enabled devices, including hard drives and memory cards, can be connected and their files copied.
Macally SyncBox
http://www.macally.com/spec/usb/input_device/syncbox.html

On The Horizon

Internet over Powerlines (l)

While for the most part internet access is a fairly simple business, with households maintaining an account with an ISP, there are some interesting developments with wireless access, and with powerline networks, which make for a much more complicated picture. An entire street for instance could elect to share passwords on a single high-speed internet account, and use the electricity cables to provide access.

Powerline adapter
http://www.trendware.com/products/TPL-101U.htm

The next stage would be to add encryption and mesh networking to provide each home with security and privacy, but to leave a segment of the network open for more community spirited activities, such as sharing the minutes of the street party committee meeting…

Really Mobile Internet (p)

Some of these networks might even be automotive, by which I mean that they move under their own steam, as in this unique and wonderful project which creates a flocking swarm of helicopter computers and a mesh network…

The UltraSwarms
http://gridswarms.essex.ac.uk/

Store and Forward Internet (l)

There is no need to create an end-to-end session to get the benefits of internet access, and therefore filesharing and other content distribution. So far, store and forward techniques are being used to bring the internet to remote and deprived communities, but they could equally work wherever people can get hold of compatible devices.

Internet Village Motoman
http://www.ratanakiri.com/

 


The US Net Neutrality Debate; Sleepwalking into Walled Gardens

Being an outsider to the US Net Neutrality debate, I don’t feel the pain of cable monopoly, or the blight caused by expensive and poor quality broadband. Over here in the UK we hear more about Verizon FiOS and Google Fibre, and mobile investments and megadeals, than we do about coax. In the UK, and increasingly in Europe, our regulators manage markets with perhaps too much diligence.

The usual way regulators deal with monopoly is to encourage or mandate competition. I have been surprised by the number of US states banning muni wifi – though I can understand the argument that common property should not be preferentially available to private interests. In many cases it seems to me that the public interest should clearly override, and that monopoly providers who gained their own wayleaves and easements under different regulatory and technological environments should not be considered adequate provision for the next 20 years. Susan Crawford seems to agree with me (!) according to an article she wrote for the FT this week (Feb 2014).

Perhaps, though, the problem is as much about what content and services might be available over new competitor access networks, and on what terms, rather than how an incumbent should be obliged to manage their business and network.

Here’s a simple question which illustrates in some ways how difficult the regulation challenge is. Should an aspiring competitor to Netflix or any other content or service provider, which does not yet have the subscribers or the bandwidth requirement of its competitors, be able to force a consumer broadband provider (CBP) to purchase public peering capacity at parity with whatever private arrangements the CBP (I avoid the term ISP deliberately) has made with competitor content or service providers?

If your answer is no, they should not, then you are setting the bar higher for the Internet as an engine of competition and innovation; the new entrant will have to overcome market inertia as well as poor quality of service in order to grow. And that will be true of all networks, not just the problematic incumbents, be they monopolistic or competitive.

The complex balance between colocation, private peering, and public peering seems to me very poorly understood by many content companies – sleepwalking into walled gardens in my opinion – and regulators, who have bought the narrative without looking at the grammar or semantics of what they were sold. It is very well understood by a very small number of highly successful content and services companies, who, not an accidental coincidence, are getting off the public internet as fast as they can.

Whether or not a CBP tweaks its packets in favour of one content or service provider or another could be well covered by fair trade regulations; it could be dealt with completely outside the scope of telecoms regulation. How your CBP connects to the Internet is an entirely different matter, and one that only a telecoms regulator can really be concerned with.

It is very tempting to gravitate towards one lobbying platform or another, and it sure gets a reaction from the crowd if a journalist or lobbyist can work up a good foam. Through my strongly sceptical filter, however, the US Net Neutrality debate looks as much about content and service providers trying to open CBPs to uncompensated private and colocated access, as it does about CBPs trying to use their control of the last mile to extract rents. It’s not as if there is anything to preserve – NN as it seems to be deployed by commentators and lobbyists was never adopted by the broadband industry, but emerged out of the evolution of the technology and business models of telcos, only to retreat as content and service providers became able to influence CBP profitability by stressing their networks.

Framed in those terms, profitability will be a key metric of the relative success of the contestants over ‘connected world’ business models, as will their share price relative to current earnings as the market collectively decides who it thinks is winning. Again, none of this is a regulatory matter, beyond the usual fairness arbitration.

This is not to say that telco regulators have no place. Far from it. If any of this analysis proves apt, regulators need to consider the connectivity and openness of the Internet itself, being how networks connect, who can connect to them, and the capacity, capability and status of that connection vis a vis any other ways that content and services get carried to people.

But that is not neutrality by any ordinary definition of the term. What I think the progressives are saying in this debate is that we should use regulation to create an Internet governance policy which is sustainably open, and which has the capacity and capability to support the innovation and churn we need to remain healthy. This I wholeheartedly would support, but what Comcast or Verizon do to your Netflix or Google packets has no relevance at all to any of it.

Wake up America! Thinking that it is all about whether Verizon throttles Netflix, or whether AT&T has ‘provider pays’ QoS, will just hand more advantage to incumbent content and service providers at the expense of new entrants, as colocation, caching, and private peering dig deeper into the CBPs’ networks. No wonder that the popular content and services companies are bleating about how much they need Net Neutrality; as currently framed it would be a great way to keep out competition.

It is probably too much to ask that CBPs should be obliged to maintain a public Internet at parity, as posited in my simple question above. It’s not too much to ask that between them they maintain an adequate public Internet, even if their customers are blissfully ignorant of whether their 21st Century couch/remote combo ever touches it on the way through.


Spotify Royalties: More Expectations and Numbers in Digital Music

Spotify has released some very helpful information about what it expects to pay music owners for albums in different parts of the sales curve over a typical month. It’s being widely discussed. Less than helpfully, while we have a fair idea about the top end (what they call a ‘Global Hit Album’), it is far less clear what they mean by ‘Niche Indie’.

Yet, to my mind the layer of the market that is just bubbling under the viability level is critical to the health and diversity of the music industry in the future. It does not take much imagination to see why – music at the top of the market is driven by large teams of producers and product managers, trying to take as much of the risk away as possible. (This was pretty much the conclusion of some great research done by Tijl de Bie at Bristol University looking at chart music over the years and finding that measures of diversity have been shrinking.) The new, the young, and the marginal are sources of innovation and regeneration in music as in so many other cultural fields.

To try to understand better what Spotify meant by ‘Niche Indie Album’ we made some assumptions and did a bit of math. Take the following with more than a pinch of salt!

Inspired by Will Page & Eric Garland’s paper of a few years back, when Page was at PRS, we had previously compared our own streaming service data and found similar patterns. A tiny minority of the inventory generates a large amount of the revenue, and when plotted shows the typical tall head / skinny tail curve. Page is now at Spotify; we have been given a few numbers to work with, and can fit them to a curve that we suspect is not too far from the one he looks at every day.

Here’s how it pans out on the monthly sales chart. A ‘Breakthrough Indie Album’ ($76,000) would be number 10; a ‘Classic Rock Album’ ($17,000) at number 42; a ‘Niche Indie Album’ ($3300) would be at number 220.

There’s no need to go into great detail about all the problems with extrapolating from what isn’t really data in the first place (Spotify streams tracks, not albums; they don’t tell us whether this is peak revenue or ‘just passing through’; and they don’t tell us the provenance of ‘Indie’). We will just have to hope that we get more and better data in the future.

For an extra bit of fun we extended our investigation down the tail until we got to what we might call a ‘Spotify Pays My DIY Distribution Fees Album’ at $3 per month. It hits the sales curve at approximately number 240,000. This, if you think about it, is an amazing achievement; a decade ago it was pretty much impossible to get 100,000 albums on sale, let alone have them generate enough revenue to cover the cost of shipping them to the stores.
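For the curious, here is one way the extrapolation could be reproduced: a simple power law fitted through the three published figures. The power-law shape is an assumption, and the real curve Spotify sees will differ in detail, but it lands in the same territory:

```python
# Fit a simple power law (revenue ~ C * rank**-a) through the three figures Spotify
# published, then read off where a $3 per month album would land. The power-law form
# is an assumption; the real curve will differ in detail.
import math

points = [(10, 76_000), (42, 17_000), (220, 3_300)]   # (chart rank, monthly revenue in $)

# Least-squares fit in log-log space: log(revenue) = log(C) - a * log(rank)
xs = [math.log(r) for r, _ in points]
ys = [math.log(v) for _, v in points]
n = len(points)
slope = (n * sum(x * y for x, y in zip(xs, ys)) - sum(xs) * sum(ys)) / (
    n * sum(x * x for x in xs) - sum(xs) ** 2)
a = -slope
C = math.exp((sum(ys) - slope * sum(xs)) / n)

rank_at_3_dollars = (C / 3) ** (1 / a)
print(f"exponent ~{a:.2f}, a $3 per month album lands around rank {rank_at_3_dollars:,.0f}")
# Close to a 1/rank curve; the $3 album lands a little over rank 200,000,
# in the same territory as the 240,000 quoted above.
```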

It would be interesting to see some typical album sales curves over time, and to understand better what an album means to Spotify, and to other digital music services. Revenue patterns as well as amounts are changing dramatically, and artists and labels need good information about how to approach their time, effort, investment and careers. The public clearly loves good recorded music; better information can help the music industry create more of it, and better, especially in the vital but risky zone just outside the top 200 hits.


Record Companies Should Give Up a Hard Won Right. Here’s Why.

When the recording industry was young there was a genuine concern that it might be strangled at birth by owners of popular songs, who might naturally wish to protect their sheet music sales and public performance fees from competition from the new no-effort and high quality music experience offered by the phonogram. As it was, a sensible deal was struck to allow recordings to be made on payment of a low fee. This payment for the right to make copies of the songs embodied in recordings – the ‘mechanicals’ – became the biggest revenue stream for many of the writers and publishers, and the success of the recordings drove even more revenue through more public performance, and through broadcasting of the very recordings that some had feared would harm the pre-recording music business. This automatic right to perform and record songs became one of the foundations of the industry.

Times change. Great songs are still at the heart of the business; that is fairly uncontroversial. But low production costs and almost no barriers to entry in the digital market have brought a flood of badly performed and badly produced imitations of popular recordings. Artist names and song titles are often crafted to trick the public into thinking they are getting the real thing. These records might generously be called ‘tributes’, but the tribute is to the earning power, not the performance or the art involved in the recording. As a class these records are one of three main drivers of the ballooning inventory that is adding so much cost to the digital music business (the others being public domain and ‘generic’ music). The cost in consumer displeasure is harder to quantify, but they add risk to the retail experience, and that means a worse business for everyone not playing the ‘full vocal karaoke’ game.

One response to this problem is certainly for music services to get much better at weeding out the genuinely interesting cover versions from the chancers and soundalikes. This is happening already, but is clearly a difficult, expensive and time-consuming activity. They will also need to lose their fear of seeming uncompetitive if they carry only the 10 million tracks people will enjoy rather than the 25 million that producers want them to carry.

But there is another approach, far more radical, and I would argue far more in tune with the times. Record companies should willingly give up their automatic right to make copies of the embodied song when they sell records. The right to perform need not be touched; the right to record for private and personal use can remain too. But making copies of the composition for the sale or performance of sound recordings would benefit greatly from reverting to exclusivity and private negotiation.

The question for the owner of the song would then become not how many copies can be tricked out of an undefended public, but whether a particular recording adds real value to the song itself, now and in the future. Record companies will perhaps pay more, but they will be buying exclusivity in their own asset, the recording, which should pay them back many times over in the market.

The contention here is that every horrendous cover of Born in the USA is a cost borne ultimately by the owners of the copyrights therein, and unfairly benefits free-riders on the value created by the writers and producers of the song and original recording. Digital markets sometimes need new approaches to copyright; in this instance what might seem a retrograde or undemocratic move would serve the public and the music industry better than the current free for all.

 


The Cost of Granularity in Music Copyright

It might be just my very partial view of some high volume/low unit price markets, but it seems that at some point the cost of the granularity required for a royalty-based remuneration system is simply too high, and the wholesale market should move to upfront fees for creators, and to catalogue brokering. This assumes, however, that the cost is visible and falls on the parties who can take appropriate measures to contain it, or change their models.

By way of illustration, even semi-efficient developed economy collective rights organisations (CROs) end up taking deeply unfair and desperate measures to keep their rosters small, the goal being to overlook as many performances as they can get away with. The ‘300 highest-grossing live concerts’ method used by ASCAP, for instance, is clearly going to weight distributions heavily in favour of an elite cadre. But would a random sample of 300 from the top 3000 be any fairer?
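A toy simulation makes the trade-off easy to see; the 1/rank concert grosses, the size of the pool, and the catalogue of 10,000 concerts below are invented for illustration, not drawn from any society’s data:

```python
# Toy comparison of two survey methods for sharing out a live performance royalty
# pool: censusing only the 300 highest-grossing concerts, versus sampling 300 at
# random from the top 3,000. The 1/rank grosses and the pool size are invented.
import random

random.seed(1)
POOL = 1_000_000                                            # money to distribute
grosses = [1_000_000 / rank for rank in range(1, 10_001)]   # concert rank -> gross

def distribute(surveyed):
    """Share the pool in proportion to the grosses the survey actually saw."""
    total = sum(grosses[i] for i in surveyed)
    return {i: POOL * grosses[i] / total for i in surveyed}

top_300_census = distribute(range(300))                     # pays nothing below rank 300
random_sample = distribute(random.sample(range(3_000), 300))

outside_share = sum(v for i, v in random_sample.items() if i >= 300) / POOL
print(f"random sample sends {outside_share:.0%} of the pool below the top 300")
```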

This is one of many examples where it seems that collectivisation really does not work in a way that many would consider modern and fair. Chop the market into three on the vertical and you can easily see how ‘fairness’ for each third has its own particular sweet spot, and how different they are. It’s probably a bell curve with the edges migrating into their neighbouring sections where possible. The super-elite are trapped into paying costs to collect for their inferiors; nobody wants to serve the ‘penny a year’ horde.

Chop it in thirds on the horizontal, giving three organisations each roughly a third of the total revenue, and you can imagine how the different platforms might innovate to attract particular communities of musicians. Being great at collecting from arts centres and publicly funded halls would perhaps serve the modern classical and jazz performers and composers. Beer halls and cafes would presumably skew towards Schlager in some places. But that means a kind of ‘genrefication’ as creators seek markets with more efficient collection and stop breaking the rules quite so much (as they do now, but with less explicit data to guide them).

What does this mean for the organisations and businesses that depend upon a skewed distribution of granular royalty flows, or that have built franchises in collecting and skewing those flows? Many of the latter are quasi-governmental, or operate under ‘no competition’ regulatory regimes. Dislodging them and the businesses that depend on them will not be comfortable.

For many creative people, however, getting paid up front and allowing others to take the risk has great value. On a revenue-per-minute basis the flows from royalty models still demand a large aggregate producer subsidy, which seems perverse and gives both producers and platforms very odd incentives, growing inventory and competition and depressing the investment per minute that could produce compelling and sustainable content.

Perhaps we are looking at a world of billions of copyright transactions per hour just because the machines can do it, without sufficient thought for what we should incentivise and reward. Correcting and improving the administration of collectives might be entirely the wrong thing to do to help creators in a world of giant and relatively open platforms.


Perverse Incentives for Music Producers

Digital music could be a world of opportunity for many talented creators and producers who were squeezed out of the old world of manufacture and inventory limitations. One click to listen, one click to buy, and any number of new ways to connect creators and their market mean discovery, information, and transactions are virtually costless.

Music is indeed everywhere in the digital world, and a great number of services have been developed to help musicians turn their craft into a business and hopefully a career. One of the most striking features of the last decade of digital music is the growth in the inventory available. Here’s what one music platform says about the choice on offer:

Choose from over 25 million high quality tracks in our store; download, sync and play your music on the go. (7digital)

Just 10 years ago Apple launched the iTunes store:

The iTunes Music Store features over 200,000 songs from music companies including BMG, EMI, Sony Music Entertainment, Universal and Warner. (Apple press release)

So where has all this music come from, and what kind of music is it? And, for those contemplating adding their own recordings to the world’s 24/7 jukebox, what can the people who make all that music expect out of the market?

One thing is totally clear; most new music is not coming from the global businesses that used to be called the Major Labels, now down to three after the sale of EMI to Universal Music. Competition authorities were interested in the deal because it would strengthen Universal Music’s ability to control innovation in retail by withholding critical catalogues, not because there would be a shortage of music. Anecdotally the remaining Major labels, UMG, Sony, and Warner, have between 1 and 1.5 million tracks each in the market, and are adding thousands but not millions per year.

Other large sources of music include aggregators of distribution rights, such as The Orchard, now connected to Sony. In 2008 The Orchard, then a public company, filed reports claiming rights in a million tracks. These days it claims:

The Orchard is a global leader in music and video entertainment, representing over 3.1 million music tracks

So that is 2 million new tracks in 5 years, coming from many small independent labels and artists. An artist-focused service, Tunecore, has some similarly impressive numbers, recently claiming “more than 849,000 artist and label account holders”. Even if many only release one track, that is a lot of music. The company recently rather dispiritingly started a blog post with “TuneCore Artists are releasing tons of new music every day.”

The Orchard’s rival IODA, which it bought in 2012, claimed 2.1 million tracks, having started at zero in 2003. A number of smaller companies bring catalogues from the hundreds of thousands to the low millions, and much of this new music is now produced by self-funded artists.

Few of those artists have access to professional recording studios, so unsurprisingly orchestras don’t feature strongly, and nor do choirs. Typical new music is almost wholly electronic, with perhaps a vocal or a single line of instrumental solo. Some also seems relatively anonymous. A difficult question these days seems to be ‘what do you put in the artist field?’ This lot put KoolSax:

http://www.amazon.co.uk/s/ref=ntt_srch_drd_B007OMGY6Q?ie=UTF8&field-keywords=KoolSax&index=digital-music&search-type=ss

The release schedule seems to be more about getting as much as possible out of as little as possible rather than a reflection of an artistic journey. A great deal of creativity is going into titling the compilation albums.

So what kind of market awaits those KoolSax tracks? The IFPI puts the digital market at $5.6b for 2012, and growing at about 10% per year. So let’s say $6b to make the sums easier, and divide it by the 25 million tracks to find that the mean revenue per track is going to be $240 for the year. That looks almost worth switching a computer on for, but of course markets don’t work like that.

I recently looked at a set of streaming data and found that only 10% of tracks were streamed once or more over a six-month period. That would suggest a median value of zero, and indeed a 90% chance of earning nothing over that period for any track selected at random. Extrapolate (a dangerous hobby for sure) and that gives us a global ‘earning’ catalogue of approximately 2.5 million tracks this year. Our new mean revenue, once we have discarded the dead-weight repertoire, is of course now $2,400 per year, or $200 per month.
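The arithmetic, for anyone who wants to vary the assumptions:

```python
# The arithmetic behind the paragraph above; the market size, catalogue size, and
# the 10% "streamed at least once" figure are the numbers quoted in the text.
digital_market = 6_000_000_000      # ~$6b global digital revenue, rounded up
catalogue = 25_000_000              # ~25 million tracks on sale
active_share = 0.10                 # share of tracks streamed at least once

mean_all = digital_market / catalogue
mean_earning = digital_market / (catalogue * active_share)

print(f"mean across the whole catalogue: ${mean_all:,.0f} per year")          # $240
print(f"mean across earning tracks only: ${mean_earning:,.0f} per year, "
      f"${mean_earning / 12:,.0f} per month")                                 # $2,400 / $200
```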

So that is what the market seems to be saying. As a creator you have a 1 in 10 chance of a pop at $200 per month. Creativity in titling your tracks and your compilations seems to be as important as the music itself. The music had better be cheap to make, and you should find multiple ways to sell the same track or set of stems. You might end up in the belly of a huge distribution beast to whom you are economically insignificant.

Most people when asked how they compare to their peers claim to be better than average. Today’s music market looks like it rewards the second quartile no better than the third or fourth.
