Net Neutrality Won’t Help Music. Here’s Why…

With the best intentions, some in the music industry are adding their voices to protest the US FCC’s rollback of net neutrality regulations. Keeping them won’t help the music industry; pretending it will means the real threats to open and fair digital markets will remain unaddressed.

To recap: net neutrality, as framed in the regulations, stops your consumer broadband provider from making either prioritisation of traffic or access to your devices conditional on fees or other non-monetary conditions. It was motivated partly by some clumsy attempts by telecommunications providers to protect their voice revenues by blocking VoIP and other peer-to-peer protocols (without net neutrality rules, Skype would have been a far less attractive acquisition for Microsoft). Small content providers saw similar threats from huge media businesses, and feared being priced out of their audiences’ home Internet connections.

All this seems reasonable, but the concept of net neutrality – invented by a lawyer, not a technologist – is based on a much-oversimplified understanding of how the Internet works. It’s easy to assume that your consumer broadband provider connects to the Internet, with everything else somehow magically taken care of; indeed, many network diagrams show the Internet as just a cloud. Global connectivity is complex, however, and has changed a great deal over the last 25 years.

So how has the Internet changed, and why does it matter? The discrimination that many fear is already happening, in many ways and many places, to the advantage or disadvantage of many parties. Please pardon a somewhat detailed history lesson, but it is important to understand just why net neutrality cannot address the concerns it pretends to.

In the early days the Internet was made up of a fairly neat hierarchy of local, national, and global networks, referred to as tiers 3, 2 and 1. Lower tiers bought traffic capacity – known as IP transit – from higher tiers, and the tier 1 networks mostly just swapped data with each other at no charge. This business model drove investment in fibre and datacentres with relative agnosticism about what content and services were being carried.

Telehouse, London Docklands

One of the oddities of this arrangement, however, was that data sent between two near neighbours sometimes took a huge round trip to reach its destination. You don’t need to be a network engineer to see that exchanging traffic between UK networks in the UK should be quicker and cheaper than sending it across the Atlantic and back; in fact, that was exactly the problem that led a group of UK ISPs to establish LINX, the London Internet Exchange, in 1994. Connection facilities such as LINX are known as Internet exchange points (IXPs). Initially, membership of exchanges was limited to networks with roughly similar in/out traffic ratios, and costs were mutualised rather than billed individually between network owners. The exchange of traffic is known as ‘peering’ – a term that reflects its cooperative heritage – and can be done privately between two or more networks, or publicly by plugging into shared data switches. The global network of IXPs continues to develop.
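You can see this locality, or the lack of it, for yourself. Below is a minimal sketch in Python, assuming a Unix-like machine with the standard traceroute utility installed; the hostnames are illustrative placeholders, and hop count is only a crude proxy for path length, but a service reached through a nearby exchange will typically show far fewer hops than one fetched from across an ocean.

```python
# A rough sketch, not a rigorous measurement: count traceroute hops to a
# few hosts. Assumes `traceroute` is installed; hostnames are placeholders.
import subprocess

def hop_count(host: str, max_hops: int = 30) -> int:
    """Run traceroute and count the hop lines it reports."""
    out = subprocess.run(
        ["traceroute", "-m", str(max_hops), host],
        capture_output=True, text=True,
    ).stdout
    # traceroute prints one header line, then roughly one line per hop.
    return max(len(out.strip().splitlines()) - 1, 0)

if __name__ == "__main__":
    for host in ("bbc.co.uk", "example.com"):
        print(f"{host}: ~{hop_count(host)} hops")
```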

Internet architecture is designed to support peer-to-peer traffic: each device can, in theory, send to and receive from any other connected device. But as the net developed its commercial model, the traffic flows changed shape dramatically. More and more data was coming from specialised networks, such as CDNs (content distribution networks), and then from very large content distributors. The IXPs resisted at first, but then allowed these very asymmetric networks to join. In their quest to get ever closer to the consumer, CDNs and content distributors quickly moved from public peering to private, investing in large-capacity gateways to the bigger consumer ISPs. Then, as the consumer ISP market consolidated, the big content distributors started to install their equipment actually inside the ISP network, with private fibre connections back to their own datacentres – a method known as colocation.

This change was already being discussed as a ‘massive disruption’ in the network operator community by 2007. Here’s a paper on it by one of the real experts, William B. Norton, and here he is talking at a conference:

https://www.youtube.com/watch?v=oShb1VSSDVM

My own perhaps romantic take on this from 2012 is here: How Art Changes the Architecture of the Internet

Today these three modes of connectivity co-exist:

  • Public peering – relatively low cost, mutualised, and run through membership of exchanges
  • Private peering – organised ad hoc between network operators, and often used for specialised needs in business to business traffic exchange
  • Colocation – extremely expensive and only available to a small club of the biggest content distributors

To get an idea of the thresholds which drive traffic from one mode to another here’s a policy published by the UK’s national broadcaster, the BBC:

Any peer exchanging more than 1Gbps of traffic in either direction at any public peering location will be requested to migrate to private peering, at our discretion.
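To make those economics concrete, here is a toy model – a sketch with entirely invented numbers, since real transit, peering, and colocation prices are negotiated privately. Each mode trades a higher fixed cost for a lower marginal cost per gigabyte, so as monthly volume grows the cheapest mode changes:

```python
# Toy economics only: every figure below is invented for illustration.
# Real transit, peering, and colocation prices are negotiated privately.
MODES = {                      # mode: (setup cost in $, marginal $ per GB)
    "IP transit":      (0,         0.0100),
    "public peering":  (10_000,    0.0010),
    "private peering": (100_000,   0.0002),
    "colocation":      (1_000_000, 0.00002),
}

def monthly_cost(mode: str, gb_per_month: float, term_months: int = 36) -> float:
    """Setup cost amortised over a contract term, plus per-GB delivery."""
    setup, per_gb = MODES[mode]
    return setup / term_months + per_gb * gb_per_month

for gb in (10_000, 500_000, 5_000_000, 500_000_000):
    cheapest = min(MODES, key=lambda m: monthly_cost(m, gb))
    print(f"{gb:>12,} GB/month -> cheapest mode: {cheapest}")
```

Even with made-up numbers the pattern is clear: each step up the ladder swaps heavier fixed investment for cheaper delivery, which is exactly the pressure a threshold like the BBC’s expresses.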

It should not be surprising that Google, Amazon, and Netflix are some of the biggest users of colocation, given the amount of video they ship to consumers. Here’s what Google says about their service:

With our edge nodes, network operators and internet service providers deploy Google-supplied servers inside their network.

Google has even installed its servers inside Cuba’s national telecoms company, with now-former chairman Eric Schmidt following close behind the Obama administration’s political normalisation efforts:

Empresa de Telecomunicaciones de Cuba SA (ETECSA) and the US company Google have concluded talks with the aim of signing a Google Global Cache (GGC) agreement.

And here’s Netflix explaining just how powerful the economic forces are behind colocation:

Globally, close to 90% of our traffic is delivered via direct connections between Open Connect and the residential Internet Service Providers (ISPs) our members use to access the internet. Most of these connections are localized to the regional point of interconnection that’s geographically closest to the member who’s watching. Because connections to the Netflix Open Connect network are always free and our traffic delivery is highly localized, thousands of ISPs around the world enthusiastically participate.

We also give qualifying ISPs the same Open Connect Appliances (OCAs) that we use in our internet interconnection locations. After these appliances are installed in an ISP’s data center, almost all Netflix content is served from the local OCAs rather than “upstream” from the internet.
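The power of an appliance like this comes from a very simple pattern: serve each title from storage inside the ISP’s network when you can, and cross the wider Internet only when you must. The sketch below is not Netflix’s implementation – the names and logic are invented – but it shows why localised delivery is so cheap once the box is installed.

```python
# Illustrative only: the generic "serve locally, fall back upstream"
# pattern behind in-ISP caches such as Netflix's OCAs. Names and logic
# are invented; this is not Netflix's code.
from typing import Dict, Optional

class EdgeCache:
    """A toy in-ISP cache; a dict stands in for terabytes of disk."""
    def __init__(self) -> None:
        self._store: Dict[str, bytes] = {}

    def get(self, key: str) -> Optional[bytes]:
        return self._store.get(key)

    def put(self, key: str, blob: bytes) -> None:
        self._store[key] = blob

def fetch_upstream(key: str) -> bytes:
    """Placeholder for an expensive fetch over transit or peering links."""
    return f"<bytes of {key}>".encode()

def serve(cache: EdgeCache, key: str) -> bytes:
    blob = cache.get(key)
    if blob is None:             # miss: cross the wider Internet once...
        blob = fetch_upstream(key)
        cache.put(key, blob)     # ...then serve later viewers locally
    return blob

cache = EdgeCache()
serve(cache, "title-123")        # first request goes upstream
serve(cache, "title-123")        # every subsequent request stays local
```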

So, to understand today’s Internet, it is important to hold in mind these three different models of how data gets into the consumer’s home; to understand how the economics and incentives that drive their respective capacities and capabilities differ; and, most important of all, to see the balance between them: how they compete with each other, and the investment model that supports each. Because as traffic moves from one model to another – from IP transit to public peering, then private peering, then colocation – the investment case moves too.

The simple fact is that a colocated content distributor needs less private peering, and little or no public peering, in order to reach its consumers. Traffic might still be growing across the Internet, but with public peering growing more slowly there’s less money to invest in IXPs, and more money for what are effectively private overlays occupying the same conceptual space as the public Internet. It’s obvious which companies are involved: Netflix, for instance, and Google, for its YouTube traffic. Netflix is not investing in public peering capacity; to do so would only benefit its competitors.

But here’s where it starts to get a bit more interesting (sorry it took so long). Cloud providers, too, are colocating alongside their peering arrangements. This club includes Amazon and Microsoft, with Alibaba now entering the global market. Music companies are big users of the cloud, and if they are buying AWS, Azure, or Google Cloud then each of them, no matter how concerned it might be about net neutrality, is benefiting from the advantages of getting out of peering and into colocation.

Now consider this from the point of view of a consumer broadband provider. Your colocation partners are already on your network, on very high capacity connections; they need them for their video streams. Your private peering partners are also connected to an extension of your network in a datacentre somewhere, and have as much capacity as they need, but are further away in network terms, so less reliable and efficient. Your public peering is now very much the poor relation: you have no control over the quality of the route that publicly peered content takes before it arrives at your network, and while you need to maintain the connection in order to deliver the whole of the rest of the Internet to your customers, it’s impossible to anticipate which connection a large file or video stream might arrive on, or to help it arrive smoothly and in one piece.

Raphus cucullatus, Roelandt Savery (1626)

So, in brief, many music companies are now providing additional revenue to the cloud providers, who are abandoning as fast as they can the mechanisms by which the Internet remains open and fair. And that does not just include record labels; many consumer music services have moved their systems to the big cloud providers.

Here’s an ironic thought – an artist, a record label, and a distributor might already each be paying Amazon to store and process a track, in order to send that music to be sold and streamed by… Amazon!! There are more than twenty different ways that Amazon can extract revenue from music (sometimes in scalable and repeatable ways), and all but a few result in no revenue for the creators. It seems that the music industry collectively is doing as much damage as it can, as fast as it can, to the benefits that net neutrality is supposed to preserve.

Business is littered with situations where the collective interests of a class of companies conflict with the individual interests of most of them. Usually the result is the development of monopolistic or monopsonistic markets: no industry benefits from the dominance of Facebook and Google in digital advertising, for instance, yet many individual businesses benefit hugely from the reach and capabilities of their data and marketing platforms. And the answer to monopoly is usually regulation; but for better regulation to work, it is essential that the intended beneficiaries have access to the skills and knowledge to move profitably into any fresh opportunities the new rules might bring.

Can music make that move? As an industry we’re suffering from a brain drain out of the supply side and into the demand side, and there’s a lot of evidence that our trade associations and collective bodies are under-resourced to support such a change. But the game is far from over. In net neutrality, as in technology generally, the best approach seems to me the same as in so many other areas where we face the erosion of our freedom and agency: a fierce and supremely well-informed dedication to independence.
