Bob Humid: The Hyper-Reality Film
Bob Humid is a specialist in British electronic beats and sophisticated sound design, and he also has a serious obsession with mastering. Born in Montevideo, Uruguay, he is known for DJ sets that oscillate frenetically between diverse electronic genres. He has tied the mantra-like quote “All Styles Make The Style One” (EZ-Rollers) around his work like one of his numerous neckties. As an instructor and mastermind at Cologne’s workshops and studios ‘Fat of Excellence (Fatex)’, the self-described eclectic produces, mixes, and masters all kinds of music. One of his latest works, made for the independent film “Our Father”, was recently nominated for the coveted Accolade Award in the “experimental” category.
First of all, congratulations! Can you tell us which parts of the post-production you were in charge of? And how did the collaboration on the film come about?
I had a huge amount of freedom when working on “Our Father”. It covered four tasks at once which, on larger-budget films, would normally involve four different production departments: sound mixing, sound design, music consulting, and the soundtrack itself, i.e. composition. Sascha Syndicus, a promising, busy, and well-connected independent filmmaker from Cologne, decided to meet me after listening to my work on Alexander Gerdes’ “Angst” and Leo Ostermeier’s “Déjà Vécu”. He had the feeling that his film, which had been scored very sporadically in its first version, would clearly benefit from artistic and technical improvements on the audio and musical level. So I watched the almost-final cut of Our Father, and I thought that the sound and music were still a bit rough, despite the nicely captured original dialogue and direct sound. I am very grateful and happy that Sascha trusted me completely with the mixing, sound design, and music selection. It was challenging and satisfying at the same time. And I don’t think these things work like that in large Hollywood productions. For HBO maybe, but certainly not for the big screen and blockbusters. In those kinds of productions, each person has a screwdriver and a set of tiny screws with which he or she can run riot. As the saying goes, it’s the exception that proves the rule.
Did “Our Father” pose any particular challenges?
All of the film’s scenes are very intimate and immersive. That means we are very close to the protagonists and their emotions, since most of the shots were done by Sascha Syndicus’ own uncommonly steady hand over the past two years. This spontaneity and the close-up framing were very good for the film and make many of the scenes authentic. This style of production makes the few static long shots even more complete and absolute. In this regard, Sascha’s film was developed similarly to the way Gaspar Noé (“Irreversible”, “Enter The Void”) has done it in several modern classics. But Sascha added his own personal touch. Among other things, a certain kind of intimacy can be heard throughout. The intimacy in the film often takes place in loud locations – in clubs, on the street, etc. So it was important to keep the dialogue track comprehensible, while the mood (club music, clinking glasses, talking) needed to be mixed so that the viewer has the feeling of being very close to the people involved. The protagonist is an energetic raver girl and she likes going to loud places. In this case, simply mixing the music quietly underneath wasn’t going to work. It only worked because I used excellent rooms and ambiences (in this case, the convolution reverb in Sequoia), which enabled me to set the dialogue into realistic spaces. That way the brain recognizes them as “natural”, which also improves dialogue comprehension a lot.
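The principle behind setting dialogue into a space with convolution reverb can be sketched in a few lines of numpy. This is only an illustration of the underlying DSP, not Sequoia's implementation (which uses measured impulse responses and efficient real-time convolution); the noise signal, the toy impulse response, and the `wet_mix` value here are invented for the demo:

```python
import numpy as np

def convolution_reverb(dry, impulse_response, wet_mix=0.3):
    # Convolving a dry signal with a room's impulse response "places"
    # it in that room; this is fast convolution via the FFT.
    n = len(dry) + len(impulse_response) - 1
    wet = np.fft.irfft(np.fft.rfft(dry, n) * np.fft.rfft(impulse_response, n), n)
    peak = np.max(np.abs(wet))
    if peak > 0:
        wet /= peak                        # normalise the reverberated signal
    dry_padded = np.pad(dry, (0, n - len(dry)))
    return (1.0 - wet_mix) * dry_padded + wet_mix * wet

# Toy example: one second of "dialogue" (noise) through a crude echo IR
sr = 8000
dry = np.random.default_rng(0).standard_normal(sr) * 0.1
ir = np.zeros(sr // 2)
ir[0], ir[2000], ir[3500] = 1.0, 0.4, 0.2  # direct sound plus two reflections
placed = convolution_reverb(dry, ir)
```

A real convolution reverb uses an impulse response recorded in an actual room, which is exactly why the brain accepts the result as "natural".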
Tell us how you approach composition of film music. When do you start composing?
I only get a feeling for the right music to match a scene once I’ve watched it. At the sound mixing level, it’s different: I wish sound designers and composers were also involved in indie films to develop the audio concept BEFORE shooting starts. Unfortunately, this only happens on larger, higher-budget productions. For example, I would like to have had the direct sound recorded in MS technique and without any clipping in the dialogue. I had to remove a couple of severe cases of clipping by unorthodox means. One of them took a combination of Sequoia’s internal Declipper and iZotope’s RX. Impulse-type noise, on the other hand, can be removed wonderfully with the Spectral Editor in Sequoia. I used it quite a lot to tame loud clicks, pops, and footsteps in dialogue parts.
“DARK SOUNDSCAPES ARE MY SPECIALITY.”
The film deals with very serious issues. Did the heavy material initially distract you, or did it take some time until the film no longer “surprised” you and you could get to work?
No, not really. Nothing human bewilders me easily. I don’t know why, but I usually don’t have the fear of abysmal things that most other people do. I like looking into dark places – I have a penchant for horror, dystopian science fiction, and psychodrama. Dark soundscapes are my speciality, while other musicians can have difficulties with them. On the other hand, I’m not all that interested in hopeful, jolly melodies. If you want to pin me down, then yes, very dark music comes easily to me.
Did you produce all the music yourself?
I received a hard drive with an enormous amount of music from two labels/publishers, which I was allowed to use for compiling the soundtrack, but I produced some original music for the film as well. I created a large VIP in Sequoia, imported the film (converted to MVX format) in 720p resolution, and worked chronologically from start to end. This is one major advantage of Sequoia: you can handle a large project like a feature film or an audio play in a single VIP with a very small number of tracks. I mixed and sound-designed Our Father with a maximum of 25 audio tracks, since I could add individual sounds and voice-overs in the famous object editing mode. In indie projects, spatial atmosphere is often delivered in mono, which is why I created a dozen extra rooms with convolution reverbs on aux busses. By doing this, I was able to give each object the spaciousness and depth it really needed, and I could create new ambiences and rooms by combining them. Basically, music work and sound design took place in parallel, scene by scene. I couldn’t find suitable music for the very dark and disturbing scenes in which the protagonist repeatedly wakes up dazed and traumatized near the Rhine, so I produced a few electronic compositions that were intended to knock a few proverbial socks off. One of the most successful parts resulted from a sound that was randomly produced as a brutal overflow/glitch during a plug-in crash. It sounds very violent, and I was only able to capture it using the live recording button in the master channel in Sequoia. I knew the day would come when I would need that sound. You can hear it in the soundtrack at the beginning of “Bad Dawn 2”.
And how did you create the sounds for “Bad Dawn 1”? Hardware or software?
Software. This is a mixture of glitch sounds, a pair of dense textures from the KORE library, and several older renderings that I had created using Coagula, a graphic synthesizer by Rasmus Ekman, and was now able to edit and rework for this project.
“IT’S QUITE A BIT LYNCH-ESQUE”
Regarding the soundtrack: Did you have sources for inspiration in addition to the film footage?
I’m not sure. Certainly, all of the films (and filmmakers) I have seen have left a mark on me. There is one depersonalization and alienation scene in which I could only express the emotion of the protagonist, who is slowly losing her mind, by using an extremely amplified room tone (spatial atmosphere). It’s quite a bit Lynch-esque.
What is your personal musical favorite in terms of film music?
Oh, there are many: I love the soundtrack of “The Quiet Earth” by John Charles, which is very difficult to find. I find the music and sound design for “Under The Skin” and “Breaking Bad” enormously stylish and appropriate. And of course, John Barry is my absolute favorite soundtrack writer. Plus, who could forget Jerry Goldsmith’s soundtrack for “Legend”, especially the first piece, “The Goblins”, which is unbelievably atmospheric…
Cinema productions are especially demanding in terms of mixing. Did you mix the film completely in 5.1 surround sound?
Because the indie budget didn’t allow for it, we didn’t mix in 5.1. Instead, we decided to do an upmix in favour of correct localisation for each seat in larger cinemas, which is the most important aspect of surround. In this case, it’s important to be very careful, since many ‘upmixing’ tools are not downmix compatible. If anyone then accidentally listens to the surround mix at home in stereo, there will of course be phase cancellations. Personally, I have a lot of experience in the field of stereo mixing and I rarely let my guard down, but in this case it was better to bring in my friend Jan Gerhardt for the upmixing. As far as I recall, he used TC’s UnWrap (PowerCore) and delivered six separate tracks in 5.1 surround, which I passed on to Sascha to assemble the final export.
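Why downmix compatibility matters can be shown with a toy numpy example. This is not how TC's UnWrap works; it is just the textbook failure mode: if an upmixer derives surround content with inverted polarity, folding the mix back to stereo cancels that content instead of merely attenuating it.

```python
import numpy as np

# One second of a 440 Hz tone stands in for some programme material
sr = 48000
t = np.arange(sr) / sr
front_left = np.sin(2 * np.pi * 440 * t)

# A deliberately bad upmixer: derive the rear channel by flipping polarity
surround_left = -front_left

# A stereo fold-down sums front and surround per side; the anti-phase
# content cancels completely instead of just getting quieter
downmix_left = front_left + surround_left

rms_front = np.sqrt(np.mean(front_left ** 2))      # signal clearly present
rms_downmix = np.sqrt(np.mean(downmix_left ** 2))  # silence after fold-down
```

Real fold-downs weight the surround channels (often by -3 dB), which turns full cancellation into partial, comb-filtered cancellation once delays are involved – still clearly audible damage.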
“AT THE CINEMA, WE EXPERIENCE A RECTANGULAR CUTOUT OF SO-CALLED ‘REALITY'”
Are there other differences between producing music for a film and a “simple” song?
Film music accompanies, supports, and amplifies the emotion of a film scene, since a film composition is always synthetic. At the cinema, we experience a rectangular (camera) cutout of so-called ‘reality’. But we don’t smell, taste, or feel the film. For this reason, you often need to enhance a scene with music to make it seem more intense or ‘realistic’. Here I’d rather speak of ‘hyper-reality’, since the word ‘real’ isn’t appropriate. We don’t actually run around with theme music playing in the background, do we? Sometimes, especially in Hollywood blockbusters, the viewer receives a completely pre-digested musical emotion, to make sure they know exactly how to feel at every possible moment of the movie. This is also supposed to help inexperienced viewers follow the plot more easily, but at times it is very annoying for movie enthusiasts. Thankfully, this is a bit less common in European films…
Here’s a question aside from reality: If you didn’t have access to your studio, where would you prefer to write your soundtracks?
A studio is certainly something elementary. It’s important to have a space you can go to that exists exclusively for making music or doing sound design work. It takes me 20 minutes to get to the studio, and this transition helps me leave unnecessary mental load behind. In a world of constant Web 2.0 alerts and total acceleration, this is hugely important. Of course, nowadays there’s the possibility to work anywhere. To be honest, hardly anyone does this in the professional field. Distractions are not helpful. Sure, you have to imagine that back in the 80s, when there were still huge budgets for album productions, a band could move into a castle for eight months with their producer/engineer and take their entire studio with them. That doesn’t mean the music gets better just because of the castle you’re making it in. But an independent or specialized space with as few distractions as possible from the everyday world is actually unbelievably helpful for evoking creativity.
If the studio is such an elementary feature for you, what does your current studio setup look like?
I currently work a lot ‘in the box’, i.e. in my DAW. Basically, it is currently a 4-core system at 4 GHz with 2 x UAD-2 DSP cards, running Sequoia 12/13, an SSD, and a Marian Seraph AD2 converter. On the synthesizer side, we currently have a KORG DS-8 (my first synthesizer ever), a (completely underrated) M-Audio Venom, a KORG DSS-1, a Roland JD-800, and a Juno-6. I also use my beloved Yamaha RS7000 a lot for live tweaking. Though it’s 14 years old, it’s still an unbelievable production machine. On the software side, I like using the KORE library, Independence, FM7, the TAL synths, Sascha Eversmeier’s flexible and impressive Revolta 2, and of course sounds from my own sound design activities. There’s also a lot of bent, crunched, distorted, and pitched material. Thanks to object editing, sound design with Samplitude or Sequoia is a pleasantly sensual activity, a lot like pottery somehow. My most important weapons are pitch manipulation and a good fully parametric EQ. Please don’t take away my EQ when the zombie apocalypse is near! I also use a small phalanx of freeware plug-ins – for example “TAL Electro”, u-he’s electro monster “TyrellN6”, and the very cool, futuristic-sounding “Ultrasonique” from EVM.
“MY MOST IMPORTANT WEAPON IS PITCH MANIPULATION.”
How long have you personally been using Sequoia?
I’ve been familiar with Samplitude since the SEK’D version (Studio for Electronic Sound Synthesis Dresden) with four tracks. So I’ve actually been working with it since 1995, around the time Hohner Midia – which back then was also selling the exotic Brain-2-MIDI controller – took over distribution. It was then that promoter Pit Klett agreed to endorse our own electronic record label BORED BEYOND BELIEF with a Samplitude license. In those days, my mates and I were running a lot of experimental parties with electronic live acts, including one called Mos Eisley, referencing the crazy bar full of alien scum on the planet Tatooine in Star Wars. I did all kinds of mad stuff with Samplitude; once I performed a little filter-based live act with the 32,000-band FFT filter and my mouse. The more Titus Tost and Tilman Herberger built object-based options into those early versions, the further I moved away from my Digidesign SampleCell. If you took object cutting away from me today, I’d go crazy. It’s a mystery to me why most DAWs are still based on track-based engines.
And what does Sequoia offer you today?
Transparency, extremely high-quality tools, mastering functionality, VCA faders and of course, the object-based, still-revolutionary production mode that I mentioned above.
“IN MY OPINION, SEQUOIA IS THE SECRET POST-PRODUCTION STANDARD THAT HASN’T BEEN DISCOVERED YET.”
My goodness: 1,000+ tracks for a radio drama production in other DAWs? Seriously? I also appreciate that with Sequoia I now have one program that replaces my MIDI sequencer, audio editor, DAW, batch editor, and mixing console. Last but not least, Samplitude/Sequoia offered latency compensation for DSP cards like the UAD-2 right from the start. I never would have thought I’d say this, but multi-selection in the object editor has become an indispensable tool in my post-production work. Now that I’ve mixed and scored four short films and one feature film, I don’t even miss the copy/paste button from older versions.
Do you have a plugin that you swear by, that you couldn’t work without?
Too many to mention. I love using Sascha Eversmeier’s am|Pulse to saturate bass. To do this, I turn the saturation knob to 2 o’clock and dial in the distortion subtly with the mix control to gain some more overtones and presence. The EQ116 in linear-phase mode is an excellent mastering-grade EQ. am|Phibia is also fantastic for creating bold and authentic soundscapes, and the relatively new tube simulation module in the “Essential FX” section is excellent for breathing life into individual signals…
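What a saturation control actually does – adding overtones that read as presence – can be illustrated with a simple tanh waveshaper in numpy. This is a generic soft-clipping sketch, not am|Pulse's algorithm; the tone, drive amount, and harmonic-measurement helper are all invented for the demo:

```python
import numpy as np

sr = 48000
t = np.arange(sr) / sr
bass = 0.8 * np.sin(2 * np.pi * 100 * t)            # clean 100 Hz bass tone

drive = 3.0
saturated = np.tanh(drive * bass) / np.tanh(drive)  # soft-clip waveshaper

def harmonic_level(x, freq_hz):
    # One-second window at sr samples -> rfft bins are exactly 1 Hz apart
    spec = np.abs(np.fft.rfft(x)) / len(x)
    return spec[int(freq_hz)]

# The clean tone has energy only at 100 Hz; saturation adds odd
# harmonics (300 Hz, 500 Hz, ...) that register as overtones and body
third_clean = harmonic_level(bass, 300)
third_saturated = harmonic_level(saturated, 300)
```

Those added upper harmonics are also why saturated bass still reads as bass on small speakers that cannot reproduce the 100 Hz fundamental itself.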
“AT THE END OF THE DAY, IT’S ALL PHYSICS.”
With Fatex you’re working on a lot of different genres in terms of mastering (film/advertising, house, synth-pop, industrial, indie). Is it difficult for you to mix and master music that you don’t like listening to personally?
No, not at all. At the end of the day, it’s all physics. A mastering engineer is someone who knows what is happening physically and where the limits are. But it helps to know that drum’n’bass producers also want to feel something in the master below 40 Hz, while metal freaks usually don’t understand what sub bass is, since they tend to lust for pulse-wave-shaped slammers and rich mids. At the end of the day, though, everyone loves depth, punch, definition, and resolution. The brain longs for complex sound events. This is also the secret behind the esoteric glorification of so-called “analog sound”. Look closely at the topic and you’ll find this aesthetic doesn’t really exist: “analog” often just means a pleasant-sounding, complex, dense sound event, rooted in the vivid nature of incoherent analog signal paths. Nowadays, pristine-clean digital sound is more and more regarded as “wrong”. Americans often mean something different than Brits and Europeans when they use the term “fat”, anyway. These concepts should be clarified in communication with the artist or client.
Privately, I like every genre except Dixieland and roots reggae, though I prefer electronic music the most: from early Jean-Michel Jarre records to The White Noise, Rupert Hine, Skinny Puppy, Coil, Photek, The Legendary Pink Dots, Si Begg, Pinch, Instra:mental, Kraftwerk, Gary Numan, and back again. Those are some of the most important names that come to mind spontaneously. To me, most current productions in electronic music are too generic and formulaic. I’m waiting for another post-modern revolution that can evoke depth in production – something along the lines of Photek or Autechre.
One final question: You said before that you find darker soundscapes easier. Which “film villain” would you like to write the “theme” for?
That would definitely be the crazed cowboy robot in the remake of the 1973 classic “Westworld” (Crichton/HBO), which is coming in 2016. Unfortunately, all of the jobs seem to have been assigned already. It’s certainly going to be big.