In part two of this series, we explored the impact of AI in the studio, from assisted mixing tools by iZotope right up to full-on machine learning DAWs that can transfer the style of one producer to another project, among many other things. In part three, we’ll look at how AI has infiltrated the DJ booth, as well as how hyper-personalised generative music apps could lead to an even more siloed listening experience across streaming platforms.
It’s fair to say contemporary pop music follows a certain formula. Those sometimes predictable patterns make it easier for AI to spot trends and more accurately recreate music. For dance music, those patterns are even clearer, generally following a four-, eight- and sixteen-bar arrangement mould, thanks in part to the modern DAW. But what happens when AI tries to learn how to DJ?
Conversations around automation and DJing are tried-and-tested comment triggers; the ubiquity of the tedious ‘press play’ criticisms and the ‘sync button’ debate attests to that. But AI and ML offer a whole new era of DJing, one that goes way beyond simply keeping music in time.
In recent years, open-source technology has appeared that uses AI to separate stems from a fully mixed-down track, with fairly impressive results. That means a vocal, drum pattern or bassline can be isolated from a normal stereo file. You can try it for yourself right now in your browser using Splitter.AI, and Audioshake is another example. Read our list of five ways to split stems here.
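To give a sense of how accessible this has become at the code level, here’s a minimal sketch using Deezer’s open-source Spleeter library, one example of this kind of separator (not necessarily what Splitter.AI or Audioshake run under the hood); the filenames are placeholders.

```python
# A minimal stem-separation sketch using Deezer's open-source Spleeter
# (pip install spleeter). Filenames below are placeholders.
from spleeter.separator import Separator

# Load the pre-trained four-stem model: vocals, drums, bass and 'other'
separator = Separator('spleeter:4stems')

# Read a normal stereo file and write one audio file per stem to 'stems/'
separator.separate_to_file('mixed_down_track.mp3', 'stems/')
```

Run that on a stereo file and you’re left with separate vocal, drum, bass and ‘other’ files — essentially what the browser-based tools above wrap in a friendlier interface.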
Soon after, VirtualDJ and Algoriddim’s djay both added the ability to separate stems in real time inside the performance software itself. The implications of this are wide-ranging: from turntablists who want to scratch acapellas, to live remixing, mash-ups and four-deck performance, with far less preparation and hunting for acapellas required. Stem separation AI is also being used to ‘upmix’ old tracks from the ’60s or ’70s whose multi-track tapes have degraded or are no longer available. The emergence of Apple’s Spatial Audio has also triggered a rush to remix catalogues, and therefore to extract stems quickly.
Advanced AutoMix functions have also appeared inside most DJ software: they don’t just fade from one track to the next, but analyse music for frequency content and arrangement in order to create the most seamless blend possible. Algoriddim’s djay takes it a step further by integrating its stem separation into its AutoMix function, while Pioneer DJ’s rekordbox has added AI-powered vocal detection that labels arrangement elements like ‘bridge’ and ‘chorus’ to avoid clashes.

A less sexy but equally important area of DJing is tagging and categorising your music. Musiio is a company that uses “AI to automate your workflow”. For DJs, that means using ML to sort music into far more nuanced categories than key, BPM and artist: ‘emotion’, ‘energy’, ‘mood’, how prominent the vocals are, and what percentage of a given genre the AI thinks a track is.
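For a flavour of what that kind of analysis involves, here’s a rough Python sketch using the open-source librosa library. To be clear, this isn’t Musiio’s system, which relies on trained models; the hand-picked features below are crude stand-ins for what those models would predict, and the filename is a placeholder.

```python
# A rough sketch of automated track tagging using open-source librosa
# (pip install librosa). These hand-picked features are crude stand-ins
# for the trained models a service like Musiio would actually use.
import librosa
import numpy as np

def rough_tags(path):
    # Load the track as mono at its native sample rate
    y, sr = librosa.load(path, sr=None, mono=True)

    # Estimate tempo via beat tracking, as most DJ software already does
    tempo, _ = librosa.beat.beat_track(y=y, sr=sr)

    # Average loudness (RMS) as a very crude 'energy' proxy
    energy = float(np.mean(librosa.feature.rms(y=y)))

    # Spectral centroid: higher values roughly mean a 'brighter' track
    brightness = float(np.mean(librosa.feature.spectral_centroid(y=y, sr=sr)))

    return {
        'bpm': float(np.atleast_1d(tempo)[0]),
        'energy': energy,
        'brightness_hz': brightness,
    }

print(rough_tags('some_track.wav'))  # placeholder filename
```

A real system would feed features like these, or embeddings learned directly from the audio, into classifiers trained on labelled tracks to produce tags like ‘emotion’, ‘mood’ and genre percentages.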
Ironically, the AI returns a more human result than is currently available in most DJ software, making sure you can find the right track at the right time in your set, even if all you remember about it is something along the lines of, ‘It’s uplifting, with a female vocal sample that sounds like old Masters at Work.’