In every recording session you probably collect a ton of unique sounds and samples for your film. You pick the best parts for the master project and probably leave all of the ‘faulty takes’ on your hard drive to slowly die. Then, when you are asked to score another movie in a similar style, you find yourself searching through all your old projects for the stems of that one awesome sound, and they are no longer there. But why let that happen, when you could easily collect all of these samples and create your own sound library? Some reasons why this is a good idea:
- Create a unique sound that distinguishes you as a composer and music producer
- Play your samples instead of dragging them into your DAW
- Bring order to your chaotic music sample folders
- Borrow unique instruments and keep their sound forever!
For these reasons, every time I finish a movie, I build a virtual instrument based on the music recordings of that particular film. Let’s look at the latest library I created: a cinematic string quartet library with convolution reverb and the option to toggle between microphones. It uses the samples recorded for the film Catastrophe.
For Catastrophe we recorded a string quartet with six microphones: four Neumann KM184s placed close to the players to capture the attack of the strings, and two AKG C414s in an MS setup high above the players to capture the room. The cool thing about this setup is the flexibility to adjust the microphone position in Kontakt later on: every sound I collected for the library has two versions (close and room). When you press a key with both positions active, you probably will not notice that the sample has multiple microphone placements. But play the sample again with the room button off, and you will definitely notice the change of placement; it now sounds as if you are standing right next to the string players. Do it once more with the room button on and the close button off, and it sounds like you are further away from the players. This choice of position is very useful: the microphone placement alone can change the intonation and weight of a scene.
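The actual patch does this inside Kontakt, but the mixing logic behind the close/room buttons is simple to sketch. Below is a minimal Python illustration; the arrays stand in for real sample buffers, and the function name and button arguments are my own:

```python
import numpy as np

def mix_mic_positions(close, room, close_on=True, room_on=True):
    """Blend the close and room recordings of the same sample.

    `close` and `room` are equal-length buffers of one recorded note;
    the two booleans mirror the close/room buttons in the patch.
    """
    out = np.zeros_like(close)
    if close_on:
        out += close
    if room_on:
        out += room
    return out

# With the room button off you hear only the dry close perspective:
close = np.array([0.5, 0.2, -0.1])
room = np.array([0.1, 0.3, 0.2])
print(mix_mic_positions(close, room, room_on=False))  # -> [ 0.5  0.2 -0.1]
```

A real patch would crossfade with per-position gain rather than a hard on/off, but the summing principle is the same.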
All the samples are organised into tabs to choose from. The categories are: runs, staccatos, staccato cellos, marcato cellos and clusters. Every category is linked to one of the keyswitches at the bottom of your keyboard (C-2 to G-2), so you can also activate the tabs without clicking. Keyswitches are coloured red, the active one is coloured green, and only the blue keys have playable samples.
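The keyswitch behaviour can be modelled as a tiny state machine: a red key selects a category, and the blue keys play within whichever category is active. A Python sketch, where the assignment of categories to the white keys is my assumption (Kontakt numbers C-2 as MIDI note 0):

```python
# Assumed note layout: white keys C-2..G-2, with C-2 = MIDI note 0.
KEYSWITCHES = {
    0: "runs",             # C-2
    2: "staccatos",        # D-2
    4: "staccato cellos",  # E-2
    5: "marcato cellos",   # F-2
    7: "clusters",         # G-2
}

class KeyswitchRouter:
    """Tracks which tab is active based on incoming MIDI notes."""

    def __init__(self, mapping):
        self.mapping = mapping
        self.active = None

    def handle_note(self, note):
        if note in self.mapping:   # red keyswitch pressed -> turns green
            self.active = self.mapping[note]
        return self.active         # blue keys leave the tab unchanged

router = KeyswitchRouter(KEYSWITCHES)
print(router.handle_note(0))    # -> runs
print(router.handle_note(60))   # playing middle C stays in 'runs'
```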
The convolution reverb in this library is nothing but an extra option. As you know, reverb makes the input sound ‘wetter’: an impulse in a church sounds wetter than the same impulse in your bedroom. This way you can trick the audience by faking the space in which the recording was made. We usually like to record dry and fake the reverb in the mix, which gives us a wide range of different reverbs to choose from. That would not be possible if the source had been recorded wet.
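The mechanics behind this are demonstrated below in a few lines of Python: the dry signal is convolved with a recorded impulse response to produce the wet signal. This is only an illustration with random stand-in arrays (assuming numpy), not the library’s actual engine:

```python
import numpy as np

def convolution_reverb(dry, impulse_response):
    """Convolve a dry signal with a space's impulse response and
    normalise the peak so the wet result does not clip."""
    wet = np.convolve(dry, impulse_response)
    peak = np.max(np.abs(wet))
    return wet / peak if peak > 0 else wet

# Stand-ins for a real recording and a real impulse response:
rng = np.random.default_rng(0)
dry = rng.standard_normal(1000)
ir = rng.standard_normal(200) * np.exp(-np.linspace(0.0, 6.0, 200))
wet = convolution_reverb(dry, ir)
# The tail of the 'room' extends the signal by len(ir) - 1 samples:
assert len(wet) == len(dry) + len(ir) - 1
```

Real engines use FFT-based convolution for speed, but the result is mathematically the same.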
A convolution reverb simulates the reverberation of a real space. An audio file (the impulse response) is recorded in that space and loaded into an engine that makes it possible to apply the reverb digitally. Lots of plugins support convolution reverb, so the option within the Catastrophe library is mainly for fun. The impulse responses included here, though, were recorded by me in some very cool places in Amsterdam: the Haitinkzaal in the Conservatory, the Burcht in the Plantage, and my apartment hall. Download the instrument here.
18 April 2018
Exactly a year ago I finished the article ‘Beyond the Wall of Sound’. In it I explained how Phil Spector created a recording technique that used harmony and music technology to mask definition in the whole: it became difficult to distinguish the instruments from one another as they formed a certain density. I tried to reconstruct this technique and researched whether it is still possible to recreate Spector’s Wall of Sound with the technology of today. One of Spector’s number one technical ingredients was the Gold Star Studios in Los Angeles, not only because the studio was terribly small for the number of musicians that played in it (19 x 24 x 13 ft), but mainly because the studio had a unique echo chamber.
Last week Ernesto from Spain sent me a message with some questions about the thesis ‘Beyond the Wall of Sound’. I collected all the material again (including a short video that was shot during the recording day). He allowed me to post some of his questions here, to share the information about Phil Spector’s recording technique with those who take a special interest in the subject.
Gathering such a big group seems like a hard task. I suppose you did not choose ‘Be My Baby’ because adding a string section on top of the twenty musicians would have been impossible, right?
‘Haha, that is right. I only had a small allowance from the research commission for the studio rental, so I wasn’t able to pay the musicians; I had to share my enthusiasm to get them to join the project. A lot of them knew the sound of Spector and were curious what would happen. Still, finding twenty musicians wasn’t easy, and only on the day itself did I find the last horn player. Adding another ten musicians to play strings wouldn’t have been impossible. It would have made the studio more cosy (just like the Gold Star sessions) and it would have increased the density of the sound. I would have loved to have them, but there was simply no time and no budget for that. And even with string players on the session, I might have chosen ‘You’ve Lost That Lovin’ Feelin’’ by the Righteous Brothers instead of ‘Be My Baby’. I think the Wall of Sound is heard best on that particular record.’
What kind of compressors, limiters and EQs did you use?
‘I’m into the FabFilter plugins at the moment, including the Pro-C 2, Pro-Q 2 and Pro-MB. They work intuitively and sound great. I still use a lot of native Logic plugins. The AUGraphicEQ is really nice to use sometimes, because frequencies can easily be adjusted. The same goes for the native compressors and limiter. I used a mix of those plugins for ‘Walking in the Rain’.’
I noticed in the video that just one musician plays the tambourine/bells and the other percussion, but the sound on the recording is unbelievable. It sounds like a lot of people playing percussion. Is it maybe the reverb that is responsible?
‘Because of their high frequency, the bells are always dominant in the Spector records. If you listen to some of the tracks separately, you always hear the high bells bleeding through. The bells reach every microphone at a different time (because of the different distance to each microphone), and this effect might trick you into hearing more percussion players than there actually are. I believe that during the sessions back in the ’60s, Spector never miked the bells. Even though we were aware of that, we did mic the percussion. The percussion player (Selle) was standing in the middle of the room with a bell stick and a woodblock. On the final recording, I stood next to him playing another pair of sleigh bells. In front of us was a DPA 4006A omni microphone, which recorded the bells but also picked up the rest of the musicians bleeding through. The reason to place this microphone in front of us, even though our research showed this was not done back in the ’60s, was to have some certainty in the mix: if the bells needed to be louder, I would be able to control them. In the end, that turned out to be the case. The bells needed a boost on the percussion channel, because the room in which we recorded was much bigger than the Gold Star Studios.’
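The timing differences described here are easy to quantify: sound travels at roughly 343 m/s, so the bells arrive at each microphone a few milliseconds apart. A small sketch with hypothetical microphone distances:

```python
SPEED_OF_SOUND = 343.0  # metres per second, at room temperature

def arrival_delay_ms(distance_m):
    """Time for a sound to reach a microphone `distance_m` away."""
    return distance_m / SPEED_OF_SOUND * 1000.0

# Hypothetical distances from the percussion to three microphones:
for d in (0.5, 3.0, 8.0):
    print(f"{d} m -> {arrival_delay_ms(d):.1f} ms")
# 0.5 m -> 1.5 ms, 3.0 m -> 8.7 ms, 8.0 m -> 23.3 ms
```

These small offsets, summed across many open microphones, are part of what thickens the percussion into a ‘wall’.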
I am using Altiverb to simulate an echo chamber. The plugin includes samples of the rooms and echo chambers of Western (now called Cello) Studios, like the ones used for ‘Pet Sounds’ and many more recordings. I send every instrument to a channel where I have loaded a delay, so the signals enter the “echo chamber” delayed. I set it to 115 ms because (after doing some maths with distances and speeds) that is similar to the time by which the Ampex tape machines delayed the signals before they entered the real chambers. I also use an equaliser cutting the lows and highs (below 600 Hz and above 10 kHz respectively) and, finally, the Altiverb Western echo chamber. Every instrument arrives at this “echo” channel, so I have the different dry channels and just one wet “echo” channel, which I mix with the dry ones. The results this way are good, but your sound is bigger and closer to the original Spector recordings, so you are doing it very, very well. I would like to ask you about your reverb process. Is it a plugin?
‘I like your idea of simulating the delay of the signal; it makes sense to do that. The Gold Star Studio was known for its reverb chamber, and I have also heard that Altiverb’s Cello Studios comes close to the reverb of the Gold Star. One of my research advisors came up with the idea of recording a convolution reverb. So before recording ‘Walking in the Rain’ I had the idea to use (or even record) the convolution reverb of the Gold Star Studio, but I learned later on that the studio burned down in the eighties. For the final mix I also used Altiverb, and although I was aware of the Cello Studios, I used the (Zappa) Echo Chamber Bright 1&2, because for these recordings it sounded better. I made eight different aux channels and sent the instrument groups to them. Each aux channel had an echo chamber and, additionally, a compressor or EQ, depending on how I wanted the instrument group to sound. In the end I used a master chain with a 9 kHz boost(!), a multiband compressor and a brick-wall limiter. I think this made it sound really nice in the end. Little is known about how Spector did the final mix; I’ve listened to ‘Walking in the Rain’ a thousand times and just tried to come as close as possible by ear.’
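Ernesto’s 115 ms figure follows directly from the tape transport: the pre-delay equals the distance between the record and playback heads divided by the tape speed. The head spacing below is an assumption chosen to reproduce his number; Ampex machines of that era commonly ran at 15 inches per second:

```python
TAPE_SPEED_IPS = 15.0     # tape speed in inches per second (assumed)
HEAD_SPACING_IN = 1.725   # record-to-playback head distance (assumed)

delay_ms = HEAD_SPACING_IN / TAPE_SPEED_IPS * 1000.0
print(round(delay_ms))  # -> 115
```

Halving the tape speed to 7.5 ips would double the pre-delay, which is why the transport speed matters as much as the head geometry.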
Another reverb question: do you have just one echo channel that receives all the channels, or does every channel have its own reverb channel? Listening to your tracks makes me think every channel is maybe a mix of dry and wet. I think in the ’60s they used just one (or two) echo chambers or plates, so the output was a glued mix of all instruments. I cannot imagine how they would have done several reverbs, but on the other hand it is true that your recordings sound very, very close to the original recordings.
‘Instead of surrounding the drummer (Jordy) with foam panels, we placed him in the drum booth, because otherwise his sound would have dominated the session. We kept his doors open, though, so that his sound would still bleed into all the other microphones. With all that microphone bleed, a big part of the actual mixing is done by moving instruments around. I mean, if the drummer were in the middle of the room, every microphone would pick up his sound, which would make it impossible to lower him in the mix. What you are hearing is the high percussion bleeding through his microphone. The other instruments are barely audible because the engineer placed and gained the microphones in such a way that the rest of the band comes through only slightly; he was aware of how the bleed was used, but kept it to a minimum. Back in the ’60s the wet sound of the echo chamber was mixed with the dry signal. To imitate this technique, I sent every channel 50% to the aux channels. That is the half-dry/half-wet sound you hear. How the wet sound was mixed with the original back then, I am not sure. It makes sense that they sent every track separately to the echo chambers and overdubbed the original track, but now with a reverb. If you know the answer to that, I would love to hear it.’
How do you send 50% of the signal? And do you send the wet signal back to the same original channel after that?
‘In a lot of DAWs you can find the ‘send’ function directly in the channel strip. In Logic Pro X, the green slider shows how much of the original channel (up to +6 dB) you want to send to the aux channel, so it is not necessarily 50%. The original sound of the channel is still sent to the stereo out, as shown in the picture. The wet signal is also sent to the stereo out, and not back to the original channel. In this way you blend an effect placed on an aux channel with the original signal.’
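This routing, dry channel and aux return both summed at the stereo out, can be sketched in Python. The dB-to-gain conversion is standard (gain = 10^(dB/20)); the function names are my own:

```python
def db_to_gain(db):
    """Convert a send level in dB to a linear gain factor."""
    return 10.0 ** (db / 20.0)

def stereo_out(dry, send_db, effect):
    """Sum the untouched dry channel with the aux return: the send
    feeds the effect, and both paths meet at the stereo output."""
    sent = [s * db_to_gain(send_db) for s in dry]
    wet = effect(sent)
    return [d + w for d, w in zip(dry, wet)]

print(round(db_to_gain(0.0), 3))  # unity gain -> 1.0
print(round(db_to_gain(6.0), 3))  # the slider's maximum -> 1.995
```

Note that the dry path is never attenuated by the send; only the amount fed to the effect changes, which is exactly why the wet signal is not returned to the original channel.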
I abandoned VSTis for almost eight years, and now that I am back I have discovered how much this world has changed; the legatos and other techniques are very well made now. Well, I can imagine you are used to the real orchestra, so maybe these kinds of tools are not very interesting for you.
It’s amazing how fast it is changing. Sample libraries get more realistic every day and the possibilities are endless. To increase the realism, there are even stage-positioner plugins, so you can build your own orchestra and place the players in a virtual space. One of the biggest limitations of virtual instruments is the absence of microphone bleed: when we play strings and brass from two different libraries, the signal of one instrument group doesn’t leak into the other’s microphones. Bleed was of course one of Spector’s number one technical additions to the Wall of Sound, so using VSTs in a Wall of Sound recording doesn’t seem right.
14 May 2017