MUMBAI: UK-based audio software company DataMind Audio is launching its flagship plugin, The Combobulator, out of beta and into the sound design community, ushering in a new era of AI-powered sound sculpting. While other generative AI models spit out recreations of source material, the Combobulator uses neural audio synthesis to perform timbral style transfer, letting musicians tune their original works to the frequencies of an artist’s sonic world.
Neural audio synthesis is to sound what ChatGPT is to words: rather than prompting the model with text, audio goes in and a newly crafted sound comes out, with no subscription or internet connection required. “Our breakthrough technology makes it possible to play a neural network like a synthesizer,” explains DataMind Audio Co-Founder and CTO Ben Cantil.
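To make the audio-in, audio-out idea concrete, here is a minimal, hypothetical sketch of how a neural audio-to-audio model can re-synthesize an incoming sound. DataMind Audio has not published the Combobulator's architecture, so the class name, layer sizes, and frame-based design below are illustrative assumptions only, not the product's actual implementation.

```python
# Hypothetical sketch of neural audio synthesis as audio-to-audio re-synthesis.
# NOT DataMind Audio's implementation; all names and sizes are assumptions.
import torch
import torch.nn as nn

class TinyTimbreTransfer(nn.Module):
    def __init__(self, frame_size: int = 1024, latent_dim: int = 64):
        super().__init__()
        # Encoder: compresses an audio frame into a small latent code.
        self.encoder = nn.Sequential(
            nn.Linear(frame_size, 256), nn.ReLU(),
            nn.Linear(256, latent_dim),
        )
        # Decoder: re-synthesizes the frame; in a trained model of this kind,
        # the target artist's timbre would be learned into these weights.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, frame_size), nn.Tanh(),
        )

    def forward(self, frame: torch.Tensor) -> torch.Tensor:
        return self.decoder(self.encoder(frame))

# Feed one frame of placeholder input audio through the (untrained) model:
# audio goes in, a newly rendered audio frame comes out.
model = TinyTimbreTransfer()
incoming_audio = torch.randn(1, 1024)   # stand-in for a recorded or live frame
resynthesized = model(incoming_audio)
print(resynthesized.shape)              # torch.Size([1, 1024])
```

Processing short frames like this, rather than whole files, is one common way such models can run with low enough latency for real-time input.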
The bespoke AI models that power the Combobulator, called Artist Brains, are created in collaboration with a wide range of musicians who receive 50% of the profits from Artist Brain sales. The Combobulator V1.0 features twice as many Artist Brains as the previous beta version, each of which now includes a low-latency version fast enough for users to beatbox into it in real time. In addition, sound artists and producers can now use a set of new parameters, including freeze, decoder, bend, and dilation, that make Artist Brains more explorable than ever before.
“This project is all about the future of music and creativity, not the potential of AI. We need sustainable AI-powered tools and platforms that form the creative layer, not just scrambled recreations of existing works,” explains DataMind Audio Co-Founder and CEO Catherine Stewart. “The Combobulator brings artists into the fold to experiment with and benefit from neural audio synthesis, which is the greatest leap forward in sound production in the past five decades.”