
AI Futures: how artificial intelligence is infiltrating the DJ booth

DJ Mag’s digital tech editor explores the impact of AI on DJing in the third and final part of our AI Futures series

In part two of this series, we explored the impact of AI in the studio, with assisted mixing tools from iZotope, right up to full-on machine learning DAWs that can transfer the style of one producer to another project, among many other things. In part three, we’ll look at how AI has infiltrated the DJ booth, as well as how hyper-personalised generative music apps could lead to an even-more-siloed listening experience across streaming platforms.

It’s fair to say contemporary pop music follows a certain formula. Those sometimes predictable patterns make it easier for AI to spot trends and more accurately recreate music. For dance music, those patterns are even clearer, generally following a four-, eight- and sixteen-bar arrangement mould, thanks in part to the modern DAW. But what happens when AI tries to learn how to DJ? 

Conversations around automation and DJing are tried-and-tested comment triggers — the ubiquity of the tedious ‘press play’ criticisms and the ‘sync button’ debate attests to that. But AI and ML offer a whole new era of DJing, one that goes way beyond simply keeping music in time.

In recent years, open-source technology has appeared that uses AI to separate stems from a fully mixed-down track with impressive accuracy. That means a vocal, drum pattern or bassline can be isolated from a normal stereo file. You can try it for yourself right now in your browser using Splitter.AI; Audioshake is another example. Read our list of five ways to split stems here.
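One of the best-known examples is Deezer's open-source Spleeter library. As a rough sketch of how simple the workflow has become (the file paths below are placeholders), splitting a track into four stems takes only a few lines of Python:

```python
# pip install spleeter
from spleeter.separator import Separator

# Load the pre-trained four-stem model: vocals, drums, bass and 'other'
separator = Separator('spleeter:4stems')

# Writes vocals.wav, drums.wav, bass.wav and other.wav into output/my_track/
separator.separate_to_file('my_track.mp3', 'output/')
```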

Soon after, VirtualDJ and Algoriddim’s djay both added the ability to separate stems in real time inside performance software. The implications of this are wide-ranging: from turntablists who want to scratch acapellas, to live remixing, mash-ups and four-deck performance, with much less preparation and hunting for acapellas required. Stem separation AI is also being used to ‘upmix’ old tracks from the ’60s or ’70s whose multi-track tapes are no longer available or have degraded. The emergence of Apple’s Spatial Audio has also triggered a rush to remix catalogues, and therefore to quickly extract stems.

Advanced AutoMix functions have also appeared inside most DJ software: they don’t just fade from one track to the next, but analyse music for frequency content and arrangement in order to create the most seamless blend possible. Algoriddim’s djay takes it a step further by feeding its stem separation into its AutoMix function, and Pioneer DJ’s rekordbox has added AI vocal detection that labels arrangement elements like ‘bridge’ and ‘chorus’ to avoid clashes.

A less sexy but equally important area of DJing is tagging and categorising your music. Musiio is a company that uses “AI to automate your workflow”. For DJs, it uses ML to sort music into more nuanced categories than key, BPM and artist: ‘emotion’, ‘energy’, ‘mood’, how prominent the vocals are, and what percentage of a genre the AI thinks a track is.

Ironically, the AI returns a more human result than is currently available in most DJ software, making sure you can find the right track at the right time in your set, even if all you remember about it is something along the lines of, ‘It’s uplifting, with a female vocal sample that sounds like old Masters at Work.’
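Musiio's models are proprietary, but the underlying idea of turning raw audio into searchable descriptors can be sketched with open-source tools. Here is a toy version using the librosa analysis library; the thresholds are invented purely for illustration:

```python
import librosa
import numpy as np

def rough_tags(path: str) -> dict:
    """Derive crude, searchable descriptors from raw audio."""
    y, sr = librosa.load(path)
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)  # estimated BPM
    bpm = float(np.squeeze(tempo))  # scalar in older librosa, array in newer
    energy = float(np.mean(librosa.feature.rms(y=y)))  # loudness proxy
    brightness = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))
    return {
        'bpm': round(bpm),
        # These cut-offs are invented; a system like Musiio's learns such
        # mappings from large labelled datasets rather than fixed thresholds
        'energy': 'high' if energy > 0.1 else 'low',
        'mood': 'bright' if brightness > 2000 else 'dark',
    }
```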

“Each artist is an autonomous and creative individual, and they use generative music as dynamic and constantly-changing content” – Sasha Tityanko, Art Director, Sensorium Galaxy

Musiio Commercial Director Mack Hampson explained the implications of AI tagging for DJs in a Musiio blog post last year, titled ‘Artificial Intelligence and the Future of Music Searches for DJs’. As rekordbox libraries grow bigger and cloud DJing makes millions of tracks available at a time, we should expect this technology to become more commonplace in the years ahead. Cloud DJing is also primed to deliver the data required to train these algorithms, as we explored in our DJing and the Data Wars piece in 2020.

While the technology discussed so far is largely administrative and problem-solving, Sensorium Galaxy takes things to another level. Designed as a “digital metaverse that revolutionises how people interact with one another,” Sensorium claims to “provide out-of-this-world virtual experiences.” PRISM is the first virtual galaxy to be released within Sensorium, and is due to feature performances from a selection of A-list DJs including Armin van Buuren, David Guetta, Charlotte de Witte, Eric Prydz and Black Coffee. While the pandemic accelerated global adoption of digital events — Tomorrowland’s being one of the most impressive — Sensorium looks set to become the gold standard of truly virtual performances. 

The calibre of DJs aside, the AI behind the new virtual space is arguably just as impressive. As reported by DJ Mag in December 2020, Sensorium teamed up with AI music company Mubert to create AI DJs: avatars who play different styles of music and have different personality traits. Performing inside the Sensorium Galaxy world, these DJs will play AI-generated music that never repeats and never ends; a 24/7 stream of machine-learned electronic music, trained on hundreds of thousands of stems of real music.

“Each artist is an autonomous and creative individual, and they use generative music as dynamic and constantly-changing content,” says Sasha Tityanko, Sensorium’s Art Director. “This allows the artist to be constantly evolving in the virtual environment of Sensorium.” 

Not only is the music generated from hundreds of hours of real recordings; soon the virtual DJs will learn from the dancefloor too. “The virtual DJs will be able to react to the audience, to its mood and its vibe, to change the kind of music they play. They take information from crowd behaviour and submit it to the Mubert algorithm, so the DJ can react to the taste and mood of the crowd.” Do they take requests? “We could add that, yes,” she laughs.
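Neither Sensorium nor Mubert has detailed how that loop works under the hood, so the following is a purely hypothetical sketch: crowd behaviour is reduced to a single energy score, which then steers the parameters requested from the generative engine.

```python
# Purely hypothetical sketch of a crowd-feedback loop; these functions
# do not correspond to any real Sensorium or Mubert API.

def crowd_energy(dancing: int, idle: int) -> float:
    """Reduce observed crowd behaviour to a 0-1 energy score."""
    total = dancing + idle
    return dancing / total if total else 0.5

def next_section_params(energy: float) -> dict:
    """Map crowd energy to parameters for the next generated section."""
    return {
        'bpm': int(120 + 20 * energy),   # push the tempo as the floor fills
        'intensity': energy,             # denser percussion at high energy
        'breakdown': energy < 0.3,       # drop a breakdown if the crowd flags
    }

# 340 avatars dancing, 60 standing still -> a high-energy next section
params = next_section_params(crowd_energy(dancing=340, idle=60))
```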

Although many virtual events across 2020 were due to necessity, given the closure of most clubs, Sensorium is not designed to compete with IRL events. “We don’t see it as a substitute for real-life events, it’s just another dimension,” Tityanko continues. “It has its own features, particularities and possibilities which are very different from real-life experiences. Not everyone can travel to attend their favourite show with their favourite artist, or maybe they can’t afford it. Real-life artists and events will always exist; on the other hand, with the development of technology, new horizons are always brought up.”

"The world is so crazy right now, people are now almost self-medicating with sound" – Oleg Stavitsky, CEO Endel

Generative music on a virtual dancefloor may feel futuristic or even dystopian, but AI-created music has another, more noble role in 2021. “Because the world is so crazy right now, people are now almost self-medicating with sound,” says Oleg Stavitsky. He’s the CEO and co-founder of Endel, an app that uses AI-generated sound to promote different mind states. “Backed by neuroscience”, the app offers soundscapes titled Focus, Sleep and Relax, generating a personalised sound based not only on your cognitive goal, but also your timezone, the weather and even your heart rate. Endel was voted Apple Watch App of the Year in 2020.

“In short, there are two scientific pillars behind Endel: the science of circadian rhythms informs us about your current energy state, and then the neuroscience of music informs us on what frequencies, scales, and tones we should be using to help you achieve a certain cognitive state,” Stavitsky says.
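Endel's actual mappings are proprietary, but a hypothetical sketch of how those two pillars could combine might look like this: the hour of the day estimates your current energy state, and the chosen soundscape then picks tempo and scale accordingly.

```python
from datetime import datetime

# Hypothetical sketch of Endel's two 'pillars'; the real mappings are
# proprietary, and all numbers here are invented for illustration.

def circadian_energy(hour: int) -> float:
    """Pillar one: estimate listener energy from the time of day."""
    return max(0.0, 1.0 - abs(hour - 15) / 12)  # crude peak around 3pm

def soundscape_params(goal: str, hour: int, heart_rate: int) -> dict:
    """Pillar two: pick musical parameters for a cognitive goal."""
    energy = circadian_energy(hour)
    base_bpm = {'Sleep': 50, 'Relax': 60, 'Focus': 70}[goal]
    return {
        'bpm': int(base_bpm + 10 * energy),
        'scale': 'major pentatonic' if goal == 'Focus' else 'minor pentatonic',
        'soften': heart_rate > 80,  # calm the palette if the heart is racing
    }

params = soundscape_params('Focus', datetime.now().hour, heart_rate=72)
```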

In 2020, Endel collaborated with Grimes on a project called AI Lullaby, where she provided original vocals and stems to help adults and babies sleep better. The stems were then processed through Endel’s own machine learning to create an ever-evolving soundscape for sleep, or as Grimes put it: “This project is basically live remixing of ambient music by robots for babies.” In 2021, Endel collaborated with Richie Hawtin on an AI soundscape, scientifically engineered for deeper focus.

These high-profile collaborations forced Endel to address, once again, the issue of ownership. “An AI that creates music on the fly based on samples isn’t really covered by existing music licensing contracts,” explains Stavitsky. If the music is based on stems or data from a specific artist, but the AI’s interpretation of those stems — and the users’ own weather, heart rate, etc — generates something new every time, who owns the output? 

“The team representing Grimes wanted her to approve the final result before it went live,” Stavitsky says, “something that’s pretty much impossible for real-time generative music. We literally had to invent the legal framework for these, because it had never happened before.”

As with generative modelling of other musicians and artists, the legal future here remains unclear.

“Overall, there’s a lot of very interesting legal, technological and even philosophical paradoxes that we do need to think about,” Stavitsky continues. “The only way to figure this stuff out is to jump right in and push the technology, push the innovation, and then make people say, ‘Okay, we do need to figure this out’. This isn’t a technology that’s just around the corner. This technology is here.”

"People are so trapped in the algorithm bubble, it’s like, ‘Is what we do [as producers] still useful or not?’" – Jaymie Silk, producer and DJ

Endel and apps like it generate hyper-personalised soundscapes with a common goal, and services like Amper, Boomy and the aforementioned Mubert already generate fairly competent music. It wouldn’t be a stretch, then, to foresee a future where streaming platforms adopt this technology to further personalise each listener’s experience.

Spotify already features ‘mood’ playlists, and there’s big business around relaxing, therapeutic playlists on the platform. Could there come a time when the same song is heard differently by two different listeners?

“Even more than Spotify, I can see someone like Apple doing this,” says music and tech journalist Cherie Hu. “They have a whole ecosystem of Apple Music, Fitness, Health, Apple Watch — they could quite easily create something real-time and adaptive.”

Music is unlikely to pivot entirely to the purely generative, but a completely new style of listening — generated by AI, functional and highly personalised — is likely to co-exist with our favourite tracks and albums.

“Technologies like this, where music stops being music in a strict sense and becomes ‘more like running water or electricity’, as David Bowie once stated, I see it as the future of music, or at least a very pleasant part of it,” says Stavitsky.

While Endel isn’t competing with traditional releases, the idea of music as electricity, or as content, runs counter to the belief of some artists, who are concerned for their futures if and when the AI is no longer discernible. “People don’t know the difference when they listen to MP3, a WAV, vinyl, whatever,” explains Jaymie Silk, a producer and label owner from Canada, now based in Paris. 

“People are so trapped in the algorithm bubble, it’s like, ‘Is what we do [as producers] still useful or not?’ If all you want to do is release the content, you want to be noticed, you want to be booked, just want the attention, you already have the tools to do it. It’s scary to think, ‘How will the audience perceive music in the future?’ Will they listen to it, is it just noise, is it just an excuse to go to a party? Are we useless as music producers? I don’t know.”

For some artists, AI will assist them in getting from A to B; for others, it’ll create the whole journey. But Silk hopes quality will always prevail. “A microwave is not really good for your food,” he laughs. “But if you need to eat quickly, why not? I think it’s the same thing.”

“What is the purpose of these AI algorithms for music? Is the goal to replicate what came before, or is the goal to create something that’s never been heard?” – Cherie Hu

Here, we’ve only scratched the surface of the implications of AI for electronic music, some of which have yet to emerge. In exploring some of the most relevant aspects for producers and DJs — those who will inevitably be affected by this technology — we’re attempting to arm them with the knowledge and inspiration to participate in one of the most exciting developments in music-making since the advent of sampling.

The creative implications and potential challenges are boundless. Perhaps, when everything can be automated, generated and re-created at the push of a button, we’ll long for more meaningful, human connections, in the same way that radio is thriving in the streaming age, or that vinyl sales skyrocketed during the pandemic. Either way, as generative music becomes more advanced, it’s inevitable that the music-as-content debate will continue.

“There’s a question around, ‘What is the purpose of these AI algorithms for music?’” says Cherie Hu. “Is the goal to replicate what came before, or is the goal to create something that’s never been heard?” It’s a question that could redefine electronic music’s own identity crisis. As Dave Jenkins wrote in DJ Mag in 2019, “electronic genres such as electro, techno, acid house and jungle were fused by innovative creators who wanted to make something that had never been heard before, which is exactly where AI-led electronic music is going.” The future, it seems, is already here.