May 10, 2020
Saving Time By Mastering With Automation-Based Plug-Ins
In recent years, machine learning has found its way into a growing number of audio processing tools. Critics of this technology feared that mastering engineers would no longer be necessary. Optimists praised the lowered threshold into a field long reserved for high-end specialists.
Analyzing audio files for common threads and adjusting new material accordingly is no longer cutting-edge technology. Its implementation across practically all kinds of audio processing, however, still is. Having analysis and automated modulation at hand when mastering will revolutionize workflows. Machine learning could, for instance, supersede presets with fixed parameters: it could adapt to sound that changes over time, making distinctions between styles, parts, and evolving arrangements. Limited only by the power of the computing hardware and the efficiency of the code, automation-based processing will be everywhere. It's the audio engineer's task to make the most of it while also understanding its limits.
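To give a rough idea of what such analysis can look like under the hood, here is a minimal sketch, assuming two WAV files with the same sample rate: it compares the average spectrum of a mix against a reference track and prints suggested correction gains. The file names and the approach itself are purely illustrative – no actual plug-in is this simple.

```python
# Minimal sketch of the idea behind reference matching: compare the average
# spectrum of a new track against a reference and derive correction gains.
# Illustration only; assumes both files share the same sample rate.
import numpy as np
from scipy.io import wavfile
from scipy.signal import welch

def average_spectrum(path, nperseg=4096):
    rate, data = wavfile.read(path)
    if data.ndim > 1:                     # fold stereo to mono
        data = data.mean(axis=1)
    freqs, power = welch(data, fs=rate, nperseg=nperseg)
    return freqs, 10 * np.log10(power + 1e-12)   # power in dB

# Hypothetical file names for illustration
freqs, ref_db = average_spectrum("reference_master.wav")
_, mix_db = average_spectrum("my_mix.wav")

# Positive values = bands the mix lacks relative to the reference
eq_suggestion_db = ref_db - mix_db
for f, g in zip(freqs[::64], eq_suggestion_db[::64]):
    print(f"{f:8.0f} Hz: {g:+.1f} dB")
```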
Ready for takeover? What to expect and what to do
Plug-ins that are sometimes called AI-based rely on complex algorithms which analyze the audio material and compare it to a matrix of parameters, either drawn from a set of genre-specific examples or with certain goals "in mind". Those could be a style-oriented frequency curve, a specific dynamic range or spatial information (dry/wet, wide/narrow). In seconds, the plug-in will create an audible difference – most often in a good way.
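To make the notion of a "matrix of parameters" concrete, here is a hypothetical sketch: a handful of invented style targets (spectral tilt, dynamic range, stereo width) and a naive nearest-match lookup. Every name and number in it is made up for illustration.

```python
# Hypothetical style targets, illustrating the kind of parameter matrix
# such a plug-in might compare a track against. All values are invented.
from dataclasses import dataclass

@dataclass
class StyleTarget:
    name: str
    spectral_tilt_db_per_oct: float   # overall brightness/darkness
    dynamic_range_db: float           # crest-factor-style measure
    stereo_width: float               # 0.0 = mono, 1.0 = very wide

TARGETS = [
    StyleTarget("EDM",       -4.5,  6.0, 0.9),
    StyleTarget("Rock",      -5.0,  9.0, 0.7),
    StyleTarget("Classical", -6.5, 18.0, 0.8),
]

def closest_target(tilt, dr, width):
    # naive distance in parameter space; a real system is far subtler
    return min(TARGETS, key=lambda t: (t.spectral_tilt_db_per_oct - tilt) ** 2
               + (t.dynamic_range_db - dr) ** 2
               + (t.stereo_width - width) ** 2)

print(closest_target(tilt=-5.2, dr=8.5, width=0.75).name)  # -> Rock
```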
Pros
To reach a minimum consensus for a master to be released – in terms of frequency range, tonal balance, loudness, peak level, delivery format and so on – an automation-based algorithm will be helpful. And if the processing is transparent and adjustable, you can go from there and refine all parameters as needed. If calculated suggestions are only meant as guidance for traditional mastering tools like EQ, compression and reverb, which should sound great on their own, then there's no need to avoid such a product – but rather to embrace it.
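Those minimum-consensus numbers are also easy to check yourself. Here is a minimal sketch using the pyloudnorm and soundfile libraries, measuring integrated loudness and estimating the true peak via oversampling; the file name is hypothetical, and the -14 LUFS / -1 dBTP figures are common streaming examples, not universal requirements.

```python
# Checking the "minimum consensus" numbers yourself – a minimal sketch.
import numpy as np
import soundfile as sf
import pyloudnorm as pyln
from scipy.signal import resample_poly

data, rate = sf.read("master.wav")        # hypothetical file name

meter = pyln.Meter(rate)                  # ITU-R BS.1770 loudness meter
lufs = meter.integrated_loudness(data)

# crude true-peak estimate via 4x oversampling
oversampled = resample_poly(data, up=4, down=1, axis=0)
true_peak_db = 20 * np.log10(np.max(np.abs(oversampled)))

print(f"Integrated loudness: {lufs:.1f} LUFS (example target: -14)")
print(f"True peak (approx.): {true_peak_db:.1f} dBTP (example target: -1)")
```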
Cons
No algorithm can distinguish between problem areas and purposefully extreme or irregular sounds that stick out. As long as the material is smooth and evenly structured, the AI can reference the spectrum and make adjustments accordingly. But with evolving dynamics and tonality, or perhaps a nonlinear rhythmic structure, there won't be any reference to rely on. So the plug-in might suggest eliminating key elements or accentuating others which were meant to stay subtly in the mix. Also, every plug-in has a certain style fixation, so to speak: it often references a classic radio-influenced sound which you might not approve of. Purely artistic choices will be yours to make, and maybe forever so.
To lead by example
iZotope Ozone is one of the tools you might use for the task at hand. In the last part of the video above, you can see how the AI analyzes the track while it plays: it suggests a sound profile for the intended purpose (a streaming master) and lets the engineer make adjustments afterwards. If you want to explore this approach, the Elements version is included in all SOUND FORGE products and the Samplitude Pro Suite. iZotope will also provide you with in-depth information about how machine learning enhances plug-in capabilities, be it for restoration, mixing or mastering purposes.
Conclusion
Artificial intelligence is not going to be found inside everyday processing chains in the near future. Its little brother, the machine learning or automation-based algorithm, certainly is. The clear benefits: shortening preparation time before engaging effects like compression, equalizing and imaging; choosing between individualized reference sounds suggested by the plug-in itself; and finding and controlling artifacts and anomalies faster. In one sentence: key mastering workflows can be supported by automation-based plug-ins for better results. But how and when you integrate these tools is still your choice as a human being.