Automaton (and on and on)...
In another quote of Herndon’s, this time from the promotional material attached to her album, she captures an AI/music idyll, explaining how she’d “like for people to have a sense of agency when approaching technology in their lives. I want them to know there’s a future that doesn’t sound like the past. It doesn’t have to be some sort of sci-fi hellscape where the machines take over. It can be beautiful.”
This is the ultimate positive example of how AI can encourage and help to create whole new forms, sounds and approaches to electronic music... but it also inadvertently hints at a darker flipside. If artists and music lovers don’t have a sense of agency when approaching technology, we risk being left behind, and potentially duped by those who are savvy with it; for every innovative experiment and collaboration with AI, there are many more cynical commercial applications of it within mainstream music.
In an interesting, in-depth feature on The Verge last year, musician and DJ Mag writer Dani Deahl took a deep dive into AI-based music software such as Amper, IBM Watson Beat and Google Magenta — programs that digest mountains of data from decades of recorded music and its successes in order to formulate, or help to create, hits. She neatly captured the weird sensation of watching AI code produce a passable version of a creative expression we understand to be unique to humans, and experimented with Amper to create her own jingles. She asked the question: “If AI is currently good enough to make jingly elevator music, how long until it can create a number one hit?”
This is where the future isn’t quite so rosy for musicians commercially, and certain jobs will perhaps be taken by robots. AI-generated music is already passable enough for adverts, backing music for videos and broadcasts, jingles, call-holding music and ‘lite’ music for playlists in public places. It may be only a matter of years before the software becomes sophisticated and learned enough to create pop hits, which could cost songwriters, top-line writers, players and collaborators certain opportunities and royalties. But that’s only one side of the AI paradigm, and it ignores the much wider picture of how technology affects all of our lifestyles. We’re moving in a much more co-creative direction, where accessibility and experimentation are encouraged. One example in the mainstream pop world is American Idol singer Taryn Southern, who collaborated with AI to create the music she sings over on her debut album ‘I Am AI’. In Taryn’s case, AI has enabled her to collaborate and create her own material.
Another interesting concept founded on access and co-creativity can be found at UK firm AI Music, which applies artificial intelligence to understand mood, location, activity and time of day, then creates different versions of original songs to fit them. A sunset chill-out version of a Ceephax Acid Crew banger you usually love at 3am, for example. AI Music claims its algorithms can, through nuanced learning, potentially create thousands of different versions of a song, hyper-customising your experience of music, your relationship with it, and how you interact with it.
AI Music founder and CEO Siavash Mahdavi has explained in interviews how people increasingly pick music based on their activity and mood, and how the company’s software encourages collaboration and experimentation from the user. Because the original song is still being used, the writer and players still get royalties and recognition. In a recent interview on BBC 5 Live, he drew a comparison with pre-smartphone photography: we once took a fraction of the billions of photos now taken every day, and it’s the smartphone’s AI software that has enabled us to take such creative shots. Yet professional photographers are still very much in demand, and their jobs don’t look in danger of being taken at all; the website Will Robots Take My Job? gives them a mere two per cent chance of their role becoming automated in the future.
“What we’re looking at doing,” he says, “is shifting music to a similar paradigm, so we get more and more people playing with music, lowering the barriers of entry to music creation using these tools. Looking at photography, we still have the artist level photography. That’s there to stay, and it’ll be a similar thing to music. But we’ll have more people playing with and creating music.”
Co-creation, accessibility and a dialogue between fan and artist, rather than passive consumption: Stingray was right. We are lazy, we are often end consumers. But AI doesn’t necessarily have to amplify that. When applied, explored and experimented with effectively — by the likes of AI Music, Herndon, Baauer and any other artist willing to take the plunge with the same spirit of the unknown that OG producers had over 30 years ago, when the seeds for this movement were sown — AI can take us beyond the cyclical tropes music has become bound by, opening up vastly different aesthetics, approaches, structures and mindsets. That can only be a positive and progressive thing. See you at the Top 100 party, 2030.