Computers have been helping us make and store music for many years. Music AI is not so new either; already by the early 1990s there was widely available software running on consumer-grade desktop computers which could generate and regenerate music-like patterns of sound. The last 30 years have seen incremental improvements in the way computers analyse, categorise, and parameterise music.
This body of knowledge is increasingly used to make new music, but it’s hard to find a basis on which to judge the output of these robot composers. Should we fully decontextualise the results, risking the rather nihilistic conclusion that everything produced is a perfect version of itself? Or should we continue to base our judgements on our own human history, achieving perfection only when we can’t tell the fuguebot from the Bach? Or perhaps we should be simple hedonists, and ask the artificial intelligence to iterate for our greater pleasure?
The digital music market gives us another new way to evaluate the robots – commercial productivity. Is their output merchantable alongside music made by humans, and at a competitive cost to produce and supply? Early indications suggest that it is, and that in many ways robots are a much better fit than musicians with the market conditions created by music subscription services.
So what is it that makes a robot the perfect modern artist? Consider the contours of the digital market: each month is a contest for the plays that are claims on a finite pot of money, paid in by subscribers who can only play one track at a time. Increasing the play count across the whole service does not increase the money available, so one track’s success comes at the expense of all the rest. If instead of being an artist a supplier could be the owner of a fleet of them, each turning out the optimal amount of the right kind of music at the right time, sheer bulk in each monthly contest would tilt the odds slightly, even without any algorithmically refined fitness.
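The pro-rata arithmetic behind this can be sketched in a few lines. The numbers and names below are purely illustrative, not real payout figures from any service: a fixed monthly pot is divided among tracks in proportion to their plays, so a fleet of generated tracks dilutes everyone's share without growing the pot.

```python
# Illustrative sketch of pro-rata streaming payouts (hypothetical numbers).
# Subscribers pay into a fixed monthly pot; each track's payout is
# proportional to its plays. More total plays do NOT grow the pot.

def pro_rata_payouts(pot: float, plays: dict[str, int]) -> dict[str, float]:
    """Split a fixed pot across tracks in proportion to play counts."""
    total = sum(plays.values())
    return {track: pot * n / total for track, n in plays.items()}

# One human artist competing against a "fleet" of 20 generated tracks.
payouts = pro_rata_payouts(
    pot=1_000_000.0,
    plays={"human_artist": 50_000, **{f"bot_{i}": 10_000 for i in range(20)}},
)
# The human's 50,000 plays claim 50,000 / 250,000 = 20% of the pot;
# every additional bot play dilutes every other track's share.
```

The point the sketch makes concrete: a supplier's payout depends only on its share of total plays, so sheer bulk of mediocre tracks is a rational strategy under this payment model.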
An entirely digital supply chain means that new tracks can go from AI algo to audience without being touched by human hands, and on an unprecedented scale. Here is Boomy, a fairly recent start-up taking advantage of these capabilities:
How does Boomy work? (https://boomy.com/about)
Boomy uses music automation technology powered by artificial intelligence, which you can use to create and save original songs in seconds for free. You can also create Releases and distribute them to all major streaming services and digital music retailers worldwide, and earn a share of royalties when your songs are played on networks like Spotify, Apple Music, TikTok, and YouTube.
Choose a style and a few options, click a button, and it will make a track. Type in a title and ‘artist’ name, and it is ready to be delivered. Boomy retains the ownership ‘for convenience’, but pays whatever pennies the music generates on Spotify and many other music services. Somewhat disingenuously, Boomy claims, “Boomy users have created 9,007,441 songs, around 9.1% of the world’s recorded music”, all without a single shake of a tambourine. The tracks are conveniently short, too, ideal for inflating the collective play count when playlisted.
It’s surely not controversial to suggest that capital likes indefatigable robots. So to be able to piggyback on 300 years of hard-won creators’ copyright law and soak up some music subscription money seems like an offshorer’s dream scenario. They should enjoy it while it lasts, because there are many reasons why that won’t be very long.
Music seems to be fundamental to who and what we are as humans. Musical instruments are among the earliest artefacts that we recognise as setting us apart from our fellow primates. We know that flutes have a continuous history of more than 40,000 years; it would not be surprising if tuned percussion had been with us even longer. And of course our voices and bodies precede anything we made in prehistory. Mechanisation came to music surprisingly early too. There are references to instruments resembling the hurdy-gurdy from very early mediaeval times.
With its brief periods of fashionability, the hurdy-gurdy is an illuminating glass through which to see AI in music. For a while it was an innovation, playing a role in churches before the mighty organs were developed. Then a smaller version brought more music to more people as a reliable way to get a dance going at fairs and gatherings. Rediscovered by the fops of the French court, it made an arch appearance in high society for a while, before resuming its place with the peasants and itinerant beggars who were music’s first democratic professionals. It is now perpetuated as a heritage instrument, maintained as a reflection of our inability to give up our cultural past.
Despite its presence at so many events over the ages, there is not one single piece of hurdy-gurdy music among the highest achievements of our musical traditions. Contrast the piano, also an innovation in the mechanics of musical performance, but not designed for efficient production or portability. Instead the hammered strings brought the potential of infinite subtlety to the rather limited plucked keyboard instruments that came before. Initially a sceptic, the great Bach was later so impressed that he became a sales agent for the second wave of pianofortes.
So there are two contrasting innovations, one to increase the productivity of less skilled music workers, and one to extend the expressiveness of the best musicians of the age. And perhaps that is how we can understand today’s emergence of AI in mass production of music, while keeping open our hope for new possibilities in human creativity.
The value of music is the humanity we invest in it, and find in it. As an unashamed elitist I see no value in the millions of hours of audio generated from the parameterisation of musical concepts; and no inherent additional value in the particular instruments and tools that musicians deploy. But I anticipate with joy the new worlds that will be created when the best musicians harness the generative power of AI.