Best way to "place" bone dry instruments in a room?

Yeah, the direct sound attenuates pretty quickly with increased distance (how quickly depends on how directional the instrument is), so with more omnidirectional instruments at distance in a large hall, it’s pretty much the room you’re hearing. There can still be a lot of stereo information at that point, but then it’s the room response creating that, rather than the direct sound.
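
To put rough numbers on that: assuming a simple 1/r free-field decay for the direct sound and a roughly constant diffuse field (the "critical distance" idea), a tiny Python sketch could look like the following. The 4 m critical distance and the spot distances are made-up illustration values, not measurements of any real hall.

```python
import numpy as np

critical_distance_m = 4.0                      # assumed hall/source combination
distances_m = np.array([1.0, 2.0, 4.0, 8.0, 16.0])

direct_db = -20.0 * np.log10(distances_m)      # ~6 dB drop per doubling of distance
reverb_db = np.full_like(distances_m, -20.0 * np.log10(critical_distance_m))

for d, dd, rd in zip(distances_m, direct_db, reverb_db):
    print(f"{d:5.1f} m   direct {dd:6.1f} dB   room {rd:6.1f} dB   "
          f"direct-to-room {dd - rd:+5.1f} dB")
```

Past the critical distance the direct-to-room ratio goes negative, which is the "it's pretty much the room you're hearing" situation.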

1 Like

I use these same instruments daily. The idea, as you know, is to be able to define a few things: where the sound is coming from, and how close it is. So I will use VSS2 for placement on the truly dry things (SampleModeling strings and brass, SWAM winds) and make the best of the ones that have ambience baked in, leaning towards close mics where they are suitable. The next step is using a great unifying IR to tie everything together, and the crucial details here are: using sends that are pre-fader so you can control the apparent distance from the virtual main mic, and being able to control the level of early reflections in the reverb. It also helps, once you have a balance within a section, to group it so you can adjust the whole section's apparent distance. And it's important that the placement software and the master IR describe rooms that are similar in character and size, or else it won't make any sense to your brain when it's looking for spatial cues.
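
To make the pre-fader point concrete, here is a tiny Python sketch of the gain math only (the function name and numbers are made up for illustration): with a pre-fader send, pulling the channel fader down lowers only the direct signal, so the wet/dry ratio rises and the instrument appears to sit further back.

```python
def mix_bus_levels(fader_db, send_db, pre_fader=True):
    """Return (dry_db, wet_db) reaching the mix for one channel.
    With a pre-fader send, the reverb feed ignores the channel fader,
    so pulling the fader down only drops the direct signal."""
    dry_db = fader_db
    wet_db = send_db if pre_fader else send_db + fader_db
    return dry_db, wet_db

for fader in (0.0, -6.0, -12.0):
    dry, wet = mix_bus_levels(fader, send_db=-10.0)
    print(f"fader {fader:+6.1f} dB -> dry {dry:+6.1f} dB, wet {wet:+6.1f} dB "
          f"(wet/dry {wet - dry:+5.1f} dB, so the instrument sounds further back)")
```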

3 Likes

Thanks Richard, very good points. I hear lots of people talking about VSS, but can you do this “spatial staging” with normal reverb plugins as well in your opinion?

Good tip about send reverb with pre-fader mode btw! :slight_smile:

I have the Sample Modeling Brass, and the solo instruments like muted trumpet and flugelhorn are amazing with a TEC3 breath controller - nothing quite like it. They are dry by default, but if you want to blend them into an ensemble with something like Cinebrass, don't overlook the Controllers 5 settings in the GUI, called Virtual Soundstage. The controls there include pan, distance, and early reflections. I've found that increasing the distance and early reflections really moves the instruments back into the room, and the pan can align them with the positions in other libraries. After that, you can add a reverb tail to everything and it can blend pretty well, especially in a mix.

2 Likes

Thank you Ron, I think I missed those controls. However, since I have many dry modeled instruments, I would also like to find an effective way of placing them on a stage outside of the instrument plugin itself, meaning with insert effects in my DAW (Logic). I am reading up on early and late reflections, as well as filtering the highs, etc. There are many tricks, it seems. :slight_smile:

1 Like

I guess the key here is to separate “spatialization” (placing of the instruments “in” the room) and “tails” (glueing everything together in the “same” space).

I know Beat Kaufman has done some excellent videos and tutorials on this subject. I believe Joël Dollié also addresses this in his orchestral mixing videos.

What I try to do is use early reflections ("ER") for the spatialization (EAReverb2 and Melda MReverb are perfect for this, as they allow you to really place a separate insert somewhere on a 2D plane / soundstage). Depending on how much room and "in situ"-ness is already baked ("recorded") IN a sample, it may need more or less of this treatment. I do this for each insert / sample library separately.

Then for groups of instruments (woodwinds, strings, brass, etc.) I have sends / groups where I put them through Seventh Heaven (“Sandor’s Hall”) or Relab VSR S24 (“warm hall”) to all give them a decent amount of glue/tails.

I tend to use pretty dry libraries like Chris Hein and XSample and this does work for me. I highly recommend Beat Kaufman’s approach.

2 Likes

Absolutely can be done with “normal” reverbs as well. Cory Pelizzari has two videos where he shows how he uses two instances of 7H exactly for this reason.

The specialized reverbs like VSS and DearVRPro may be slightly better suited, or follow some more specific binaural principles.

2 Likes

Thank you, great points. I am actually looking into the Seventh Heaven reverb, after watching Cory’s video. Also lots of cinematic composers seem to rave about how good it is. Apparently a simulation of a Bricasti hardware reverb, which seems to be some kind of holy grail for the Hollywood sound. :stuck_out_tongue:

1 Like

I personally don’t find algorithmic reverbs to be convincing in this regard - but looking at some of the reverbs being mentioned I can see everyone’s goal is a little different - nothing wrong with that. The only time I’ve liked anything besides either convolution or VSS2 (which apparently is algorithmic) is when I use ReVibe with early reflections only on the VSL winds group master, because I was able to get a sound that filled in what the dry stage was missing in terms of density and size. But it also hits my favorite IR in Altiverb, and is placed appropriately. Works pretty well to force dual perspectives of immediacy and enough wet to make it sound like the rest of the orchestra. But I’ve tried this with several other algorithmic verbs and didn’t like the result. Down to personal taste, I guess. Also I have used Goldplate or Lustrous Plates on strings to age them a bit - just a tail that hangs without much definition, only time.

2 Likes

A few thoughts:

If you want a cinematic sound, combine a convolution reverb that has a scoring-stage preset (Reverence, Altiverb, or other) with an algorithmic reverb that has a concert-hall setting to extend the tail of the reverb.

Use reverb pre-delay settings to push instruments forwards and backwards in the mix; refer to this post.

Use a stereo panner to move the instrument left and right while controlling the stereo image width (By default, Cubase uses a Stereo Balance. You have to right-click in the mixer to change the setting).

When a sound source is located to one side, it will reach one ear slightly earlier than the other ear. Knowing that, you can apply some psychoacoustic concepts (there is a small sketch after the list below):

  • PROXIMITY EFFECT:
    You can push the instrument slightly away from the microphone by reducing the proximity build-up around 150 Hz with an EQ.

  • HAAS EFFECT
    If you introduce a very small delay on one channel, the sound will be perceived as coming from the other side.

  • HEAD ACOUSTIC SHADOW
    If you use the built-in "Frequency" plugin in Cubase, you can cut some of the high frequencies on one channel, and the sound will be perceived as coming from the other channel. This simulates how high frequencies are naturally filtered by your head.
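
For illustration, here is a minimal numpy/scipy sketch of the Haas and head-shadow ideas from the list above: the far channel gets a sub-millisecond delay and a gentle high cut. The delay time, cutoff frequency, and function name are assumptions for the example, not any plugin's actual processing.

```python
import numpy as np
from scipy.signal import butter, sosfilt

def place_right(mono, sr, itd_ms=0.6, shadow_hz=4000.0):
    """Nudge a mono signal toward the right: the left (far) ear gets a
    sub-millisecond delay (Haas / ITD) and a gentle high cut (head shadow).
    Values are illustrative, not calibrated to a real head."""
    n = int(round(itd_ms * 1e-3 * sr))
    left = np.concatenate([np.zeros(n), mono])        # delayed far channel
    right = np.concatenate([mono, np.zeros(n)])       # near channel, padded to match
    sos = butter(2, shadow_hz, btype="lowpass", fs=sr, output="sos")
    left = sosfilt(sos, left)                         # high cut on the far ear
    return np.stack([left, right], axis=1)            # (samples, 2) stereo

sr = 48000
mono = 0.1 * np.random.randn(sr)                      # 1 s of noise as a stand-in source
stereo = place_right(mono, sr)
```

In practice you would do both moves with a short delay and a channel EQ rather than code, but the principle is the same.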

For more info on sound perception, check out this clip https://youtu.be/dnDrAG8FZok

3 Likes

Oh wow, your posts are always so incredibly detailed and valuable Medhat. That last one I have never thought about, but it is such a logical aspect that I will definitely try that one.

One thing you did not mention here, though, is high-frequency fall-off. Since high frequencies are so low in energy, every meter of distance between you and the instrument attenuates the highest frequencies a bit more. So far I have been adjusting this with EQ/filtering… but is there a plugin dedicated to this purpose?

Essentially, with clever eq/filtering of both channels independently you should be able to stage the instrument forward/back and left/right based on frequency response alone, right?

2 Likes

You’re welcome :slight_smile:

Having taken both acoustics and psychoacoustics courses at the University of Montreal, I have never heard of that, and it doesn’t really make any sense from a scientific point of view… It’s quite the opposite, actually: The further you get away from something, the less you hear the low frequencies, and the less directional it becomes.

If you're close to a fountain, you hear all the details and a wide stereo image. The further away you go, the less you hear the low frequencies, and the harder it is to pinpoint its exact location. If you're very far away, you'll only hear the high frequencies, and it will sound like a mono source. In video games, they even program attenuation curves in the audio engine that cut low frequencies to simulate this effect.

High frequencies will get filtered out only if you have big obstacles like walls, and this will depend on the material's absorption coefficient. That's why, if you're near a wall and music is playing on the other side, you will mostly just hear the bass.

These are two completely different concepts and shouldn't be mixed up or confused :slight_smile:
As for plugins: way too overrated and unnecessary; just use any regular built-in EQ and cut at 150 Hz, and you'll instantly push the instrument further back. Of course, you should combine this with the correct reverb pre-delay settings mentioned above, according to the distance :wink:
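
For the curious, the "cut at 150 Hz" move could be sketched as a standard RBJ-cookbook low shelf; the -6 dB gain and the slope are just starting points, and in practice any built-in EQ does exactly the same job:

```python
import numpy as np
from scipy.signal import lfilter

def low_shelf(x, sr, f0=150.0, gain_db=-6.0):
    """RBJ-cookbook low shelf (slope S = 1). A negative gain_db pulls down
    the region around f0, which in this context nudges the instrument back."""
    A = 10.0 ** (gain_db / 40.0)
    w0 = 2.0 * np.pi * f0 / sr
    alpha = np.sin(w0) / 2.0 * np.sqrt(2.0)
    cosw = np.cos(w0)
    b = [A * ((A + 1) - (A - 1) * cosw + 2 * np.sqrt(A) * alpha),
         2 * A * ((A - 1) - (A + 1) * cosw),
         A * ((A + 1) - (A - 1) * cosw - 2 * np.sqrt(A) * alpha)]
    a = [(A + 1) + (A - 1) * cosw + 2 * np.sqrt(A) * alpha,
         -2 * ((A - 1) + (A + 1) * cosw),
         (A + 1) + (A - 1) * cosw - 2 * np.sqrt(A) * alpha]
    return lfilter(b, a, x)
```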

2 Likes

Really? Perhaps I'm thinking of light, whose energy fades with every meter from the source. I only know that high frequencies are very low in energy, so I assumed they drop off the further you get from the source? :slight_smile:

Anyway, the problem I have in practice is with, for example, the new SWAM solo brass, which I love. But if I use a big hall reverb on it and then play, say, staccato parts… I still hear that crystal-clear focus in the high range, which I would never hear if the instruments were actually placed in a real hall some distance away from me. :slight_smile:

1 Like

Some composers have a wetter reverb mix on the long articulations, and a slightly drier reverb mix on the shorts to give them more bite.

Unless you’re doing concert work played by live players, in my book: as long as it sounds good, the rule is that there are no rules. :grin:

1 Like

Well, in my experience, reducing the higher frequencies will make the instrument feel farther away. I don't know the science behind it, but it seems to work. High, crisp sounds feel "in your face", while things further back in the hall get a rounder tone. That rounder tone is what I want to get with my SWAM instruments. Right now I get that annoying layering effect of "dry sound + wet reverb", instead of actually hearing the instrument placed on a sound stage.
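
If you want to see that high-cut idea written out, here is a rough Python sketch with a completely made-up mapping of distance to cutoff (16 kHz at 1 m, halving per doubling of distance, floored at 2 kHz); in practice it is just an EQ high cut placed before the reverb:

```python
import numpy as np

def soften_with_distance(x, sr, distance_m):
    """One-pole low-pass whose cutoff halves for every doubling of distance
    (purely a made-up mapping). Tune by ear, exactly like an EQ high cut
    in front of the reverb."""
    cutoff = max(2000.0, 16000.0 / max(distance_m, 1.0))
    a = np.exp(-2.0 * np.pi * cutoff / sr)    # one-pole coefficient
    y = np.empty_like(x)
    state = 0.0
    for i, sample in enumerate(x):
        state = (1.0 - a) * sample + a * state
        y[i] = state
    return y
```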

So, I experimented a bit with VirtualSoundStage vs 2CAudio Precedence + Breeze, used as spatial mixing stages, feeding into an instance of Spaces II. I used Ravenscroft 275, SWAM Trumpet 1.5.1, and Infinite Brass 1.5 Trumpet 1, all configured to bone dry mono.

In short:

  • Both solutions do a much better job of spatial mixing than just pan + EQ.
  • Both have instance linking, for managing the whole mix through one window. (Precedence can also link to Breeze, so that the actual reverb also responds to the spatial position.)
  • Both seem easy enough on the CPU that processing instruments separately should be fine on a decent DAW machine.
  • I’m very impressed with the spatial positioning, stereo compatibility, and sound quality of Precedence/Breeze, and quite disappointed with these aspects of VSS.
  • Breeze is also a REALLY nice reverb for general use, and may well be all you need (as in, no master IR reverbs), possibly even for orchestral work. My first impression is that it sounds better than Pro-R, and possibly even better than TC VSS2, is more capable than both of those, and has a pretty nice and easy to use GUI.

More specifically, the problem I have with VirtualSoundStage is that it sounds like it basically just does pan + a few slapbacks, and even with diffusion enabled, it sounds very synthetic, and creates bursts of harsh, metallic echoes when confronted with harsh transients. With enough reverb added, these artifacts can be masked more or less, but at that point, there isn’t much more than the pan left of the spatial positioning.

Meanwhile, with Precedence + Breeze, I can set up a small, dry room, and place instruments very accurately in it, and it still sounds smooth and natural, even on harsh sound sources. Thus, I can set up a really defined sound stage, and then just feed that into a tail reverb - or just use a hall preset or something in Breeze and be done with it, or anything in between, and it sounds nice and smooth no matter what. Exactly what I was looking for!

1 Like

I feel a bit silly for missing this, but Logic Pro has a feature called binaural panning. I never messed with it, but since it is native on each track (you simply change from stereo pan to binaural), it would be great if it worked as the primary staging step, before the reverb.

PS: I also just found another staging plugin that works from binaural acoustics but also includes a reverb in the same plugin. Could this be interesting, David:

1 Like

Here's another option / sound-design approach for placing a dry signal in a room.
On the insert channel, in the first slot, place a reverb with hi and lo cut built in.
Set the mix knob to taste, 2-8%.
Lo cut: 400-600 Hz.
Hi cut: 2000-4000 Hz.

Reverb 2:
Same reverb on a send channel, to taste, with the same settings except lo cut, hi cut, and mix.
I usually have a separate EQ for lo-pass and hi-pass before the reverb(s).
If you want to go deeper:
Put two reverbs instead of one as send reverbs.
Pan one hard left, the other hard right, and send them to a bus.
On the bus: a stereo plugin to experiment with.
Use mid EQ to pan them even further.
On the insert channel, you mix how much of the left and right reverb you want to send.
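
To make the first step of that recipe concrete (a band-limited reverb on the insert at a very low mix), here is a rough Python sketch; the decaying-noise impulse response is only a stand-in for whatever reverb plugin you would actually use, and the numbers match the ranges above:

```python
import numpy as np
from scipy.signal import butter, sosfilt, fftconvolve

def insert_room(dry, sr, mix=0.05, lo_cut=500.0, hi_cut=3000.0, decay_s=0.4):
    """Band-limited, very-low-mix "insert reverb" step of the recipe above.
    A decaying-noise impulse response stands in for the reverb plugin."""
    t = np.arange(int(decay_s * sr)) / sr
    ir = np.random.randn(t.size) * np.exp(-4.0 * t / decay_s)   # stand-in IR
    wet = fftconvolve(dry, ir)[: dry.size]
    sos = butter(2, [lo_cut, hi_cut], btype="bandpass", fs=sr, output="sos")
    wet = sosfilt(sos, wet)                                      # lo cut + hi cut on the wet path
    wet /= np.max(np.abs(wet)) + 1e-12                           # keep the wet level sane
    return (1.0 - mix) * dry + mix * wet
```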

2 Likes

Cubase has some different panning modules as well, though the interesting ones seem entirely focused on various surround formats - which might actually be interesting, provided one can set up master reverbs and whatnot that can actually handle surround input properly. I’d like to research that a bit as well, as I’m interested in doing actual surround mixes as well. (I mean, if the sound effects in games and movies are surround, why should music and soundscapes be plain stereo?)

I have dearVR Music, and indeed, it has a lot of options and output formats, and the spatial positioning is quite effective. However, it sounds like it’s “just” straight Haas (left/right phase), pan, and early reflections, without any attempts at ameliorating the side effects of these methods. The result, apart from mono compatibility issues, is a tendency towards harsh and/or boxy sound.

The dearVR reverbs are not to my taste either (like real rooms with unpleasant acoustics, and some sound a bit metallic), and seem to be mono-in, as they’re completely unaffected by both spatial positioning and mono/stereo sound sources - which is perfectly fine for tails in most cases (there’s virtually no spatial information left at that stage anyway), BUT in this case, it results in the mono compatibility artifacts of the spatial positioning ruining the reverb tail!

It’s better to disable that and use an external stereo reverb setup. I tried using a stereo pair of TC VSS3 instances with slightly different settings, with dearVR “panning” + reflections, and that actually sounds pretty good. Of course, that loses the distance-to-dry/wet link, so unless dearVR has dedicated outputs for external reverb that I can’t find, that will have to be adjusted manually in the traditional way.

So all in all, out of the options I’ve tried so far, the Precedence/Breeze combo completely blows everything else out of the water. It’s easy to use, does exactly what I want out of the box, sounds amazing, with none of the issues and artifacts of the other solutions, and the modules can also be used for panning and reverb together with other plugins.

If anything, it would be nice to have smooth automation (to have sound sources moving around, at the expense of higher CPU load - seems like that’s something they consider doing eventually), and I’d say the GUIs feel a bit bulky, but they get the job done, and you can control all instances from two windows anyway, so no big deal. And, they’re themable and scalable! :smiley:

1 Like

I totally agree with what @olofson is saying. There's so much information you need to replicate to get a dry signal to fit in a "realistic space". I use quotation marks there because you actually shouldn't be going for realistic… that's a common mistake when it comes to reverb. Reverb makes a sound larger than life, and that's what you are after, because a natural room reverb tail does so many different things as the sound travels through the space. For instance, you don't only have the reverb tail; you also have a secondary delay followed by at least seven other delays of the signal from your primary and secondary reflection points. Then, because that sound is traveling, the pitch warps, so every few feet the sound gets fuller and flatter. Then there's stereo spread, which is three-dimensional (your two speakers can't do this). Finally, the sound also changes phase in the space as the waveform loses energy; again, this part of the process can't be mimicked. So all of this is going on and, guess what, it all changes over time: there's no logical decay rate for natural reverb…

So if you want an extremely detailed reverb that sounds somewhat natural, you'd need at least four reverbs of varying lengths, plus another four delays at varying delay times with the volume between them automated for each section; the pitch would need to be automated at the time the first reflection returns to the mic; and all of this would then need to be sent to another eight busses to be automated over the span of the piece, dependent on the volume of the source. Oh, and all of that would need to be in true stereo :sob::sob::sob::sob:

Some verbs do a lot of that work for you… Valhalla products, for instance, are great… Blackhole does this well too… and the Spaces plugin from EW is probably one of the best. I'd add a slapback delay to the dry instrument in your case, though; this will get you 90% of the way there, and then you can overlay your reverb and maybe automate the reverb time depending on velocity. :slight_smile:
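
A slapback like that is easy to sketch; here is a minimal numpy version with made-up delay time and level, just to show a single early echo sitting under the main reverb:

```python
import numpy as np

def slapback(dry, sr, delay_ms=80.0, level_db=-9.0):
    """Single slapback echo under the main reverb. The 80 ms / -9 dB values
    are just starting points, not magic numbers."""
    n = int(round(delay_ms * 1e-3 * sr))
    gain = 10.0 ** (level_db / 20.0)
    out = dry.copy()
    out[n:] += gain * dry[:-n]
    return out
```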

And binaural panning works well if you are using a stereo signal. Otherwise it's a bit pants.

3 Likes