Best way to "place" bone dry instruments in a room?

I use a lot of modeled instruments like Sample Modeling, SWAM, etc. In fact, I am a huge fan, since they give me the performance flexibility, agility, and inspiring playability I love so much! :slight_smile:

My biggest problem, though, is that even if I add a big orchestral reverb to these instruments, they always sound like “upfront/dry instrument + reverb”, when I want them to sound like they are sitting in the middle of a scoring stage or hall.

Can you offer any tips on how I can solve this issue of mixing “bone dry instruments” in a way that they feel positioned in a bigger room?

tagging @olofson since I know you have some of these types of instruments. Also @rfwd since you proclaimed your love for SWAM etc.

I think what it boils down to is a vague definition of “dry.” Even dry studio libraries have plenty of room sound in them, part to create a full sound, part - and this is the important bit - to create spatial definition. This is the kind of input a typical reverb actually needs to sound as we expect, and that kind of reverb setup will not magically fix a bone dry signal.

To further complicate matters, acoustic instruments are not unidirectional point sources, so even in an anechoic chamber, important elements of the recording setup end up in the sound. Spot mics and pickups in particular can require quite aggressive EQ to sound anything like the instrument does to the naked ear, and they still have a strange, surrealistic perspective - much like the raw signal from a modeled instrument. Large diaphragm condensers are certainly not immune to these phenomena either, and they tend to have plenty of character of their own as well. Finally, the “typical” orchestral sound also involves tube preamps, analog tape and whatnot, so we’re not exactly comparing the modeled instruments to clinical recordings here.

Anyway, back to the reverb issue: The sound from normal recordings and sample libraries - even dry studio ones - contains a lot more information than just the direct sound from the instruments. Even in libraries with spot mics, you can usually hear a hint of the space. Where reverb terminology speaks of dry signal, early reflections, late reflections, tail, etc., there’s a key part of the sound that’s assumed to either be undesired (for that in-your-face studio sound), or to already be in the “dry” signal before it enters the reverb.

Meanwhile, synths, modeled instruments, recordings from anechoic chambers and the like - actual dry sounds - have absolutely none of this, so if you want it, you have to simulate that too, before the usual reverb chain.

What you want here is the “dry” sound to already define the shape of the space immediately around the instrument, and where the instrument is located in that space.

I suppose one could think of it as the intimate relation between (actual) dry sound and “direct reflections” or something; the part of the early reflections that bounce off objects close to the direct sound path, as opposed to sound bouncing off the back wall, the ceiling, and more distant objects. These “direct reflections” mostly affect the color of the sound (comb filter effect) and the left/right phase alignment, which, incidentally, is what the brain primarily uses to determine where a sound is coming from.
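
To make that comb filter effect concrete: a single reflection arriving a millisecond or so after the direct sound cancels some frequencies and reinforces others. A minimal Python sketch (illustrative arithmetic only, not any particular plugin’s algorithm):

```python
import math

def comb_magnitude(freq_hz, delay_s, reflection_gain=1.0):
    """Magnitude of direct sound plus one delayed copy:
    |H(f)| = |1 + g * exp(-j * 2 * pi * f * tau)|."""
    re = 1.0 + reflection_gain * math.cos(2.0 * math.pi * freq_hz * delay_s)
    im = -reflection_gain * math.sin(2.0 * math.pi * freq_hz * delay_s)
    return math.hypot(re, im)

tau = 0.001  # reflection arriving 1 ms late (roughly 34 cm of extra path)
# Notches land at odd multiples of 1 / (2 * tau) = 500 Hz:
print(round(comb_magnitude(500, tau), 3))   # 0.0  (cancellation)
print(round(comb_magnitude(1000, tau), 3))  # 2.0  (reinforcement)
```

A cluster of such reflections at slightly different delays is exactly the subtle coloration a “dry” studio recording already carries, and a bone dry modeled signal lacks.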

In practical terms, there are plugins that cover this part, or specialize in it, like VirtualSoundStage or dearVR. A reverb that can be set up for tight, space-defining early reflections can also be useful, as can stereo imaging tools like bx_stereomaker, kHs Haas and the like. Some instruments (like SampleModeling and Infinite) have their own IR-based tools that can be used for this, either as a specific feature or as part of the close mic simulation.

Ah, very detailed insights here, I see your points. I was playing some more with SWAM Solo Brass today, and even with Spaces 2 added at quite a high reverb setting (wet ratio), as soon as I played louder accents, that annoying super upfront sound was clearly audible. It makes it sound like a close-up, super dry sound with reverb added on top.

Exactly! There is basically a missing stage in between the bone dry sound and the typical reverb, so they’ll never truly blend. Early reflection control is limited or non-existent in most IR reverbs, but you could insert another instance with a very short and “room defining” IR, feeding into the normal reverb.
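
That “short room-defining IR feeding into the normal reverb” idea is just a convolution stage in series. A toy Python sketch, with a made-up four-tap IR standing in for a real early-reflections impulse response:

```python
def convolve(signal, ir):
    """Direct-form convolution: run a dry signal through an impulse response."""
    out = [0.0] * (len(signal) + len(ir) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(ir):
            out[i + j] += s * h
    return out

# A bone dry click through a tiny "room-defining" IR: the direct spike
# plus a couple of early reflections (tap values are made up).
dry = [1.0, 0.0, 0.0, 0.0]
early_ir = [1.0, 0.0, 0.4, 0.2]
staged = convolve(dry, early_ir)   # feed THIS into the normal hall reverb
print(staged[:4])  # [1.0, 0.0, 0.4, 0.2]
```

In a DAW this is simply two convolution reverb instances in series: a very short, 100% wet “room” IR as an insert, followed by the usual hall.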

I just watched a video on Seventh Heaven, where the person used the early/late reflection knob to control the spatial setting (with direction mixer and stereo width control before it goes into the reverb). Seemed to be a good approach. :slight_smile:

Essentially, the farther away an instrument section is, the less stereo information you should get from the instrument itself, right? I mean all stereo info should be from the reflections of the room.

Yeah, the direct sound attenuates pretty quickly with increased distance (how quickly depends on how directional the instrument is), so with more omnidirectional instruments at distance in a large hall, it’s pretty much the room you’re hearing. There can still be a lot of stereo information at that point, but then it’s the room response creating that, rather than the direct sound.
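
The rule of thumb behind this can be sketched numerically: direct sound follows the inverse distance law (roughly 6 dB quieter per doubling of distance), while the diffuse room level stays roughly constant, so beyond the room’s “critical distance” you mostly hear the room. A small illustrative sketch (the critical distance value is a hypothetical example):

```python
import math

def direct_level_db(distance_m, ref_distance_m=1.0):
    """Inverse distance law for the direct sound:
    level drops about 6 dB for every doubling of distance."""
    return -20.0 * math.log10(distance_m / ref_distance_m)

def direct_to_reverb_db(distance_m, critical_distance_m):
    """Direct-to-reverberant ratio against a roughly constant diffuse
    room level; at the critical distance the two are equal."""
    return 20.0 * math.log10(critical_distance_m / distance_m)

print(round(direct_level_db(2.0), 2))            # -6.02 (one doubling)
print(round(direct_to_reverb_db(10.0, 5.0), 2))  # -6.02 (room dominates at 10 m)
```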

I use these same instruments daily. The idea, as you know, is to be able to define a few things: where the sound is coming from, and how close it is. So I will use VSS2 for placement on the truly dry things (SampleModeling strings and brass, SWAM winds) and make the best of the ones that have ambience baked in, leaning towards close mics if they are suitable enough. The next step is using a great unifying IR to tie everything together, and the crucial details here are: using pre-fader sends, so you can control the apparent distance from the virtual main mic, and being able to control the level of early reflections in the reverb. It’s also good, once you have a balance within a section, to group it, so you can adjust the whole section’s apparent distance at once. Finally, the placement software and the master IR should describe rooms that are similar in character and size, or else it won’t make any sense to your brain when it’s looking for spatial cues.


Thanks Richard, very good points. I hear lots of people talking about VSS, but in your opinion, can you also do this “spatial staging” with normal reverb plugins?

Good tip about send reverb with pre-fader mode btw! :slight_smile:

I have the Sample Modeling Brass, and the solo instruments like muted trumpet and flugelhorn are amazing with a TEC3 breath controller - nothing quite like it. They are dry by default, but if you want to blend them into an ensemble with something like Cinebrass, don’t overlook the Controllers 5 settings in the GUI, called Virtual Soundstage. The controls there include pan, distance, and early reflections. I’ve found that increasing the distance and early reflections really moves the instruments back into the room, and the pan can align them with their positions in other libraries. After that, you can add a reverb tail to everything and it can blend pretty well, especially in a mix.


Thank you Ron, I think I missed those controls. However, since I have many dry modeled instruments, I would also like to find an effective way of placing them on a stage external to the instrument plugin itself - meaning insert effects in my DAW (Logic). I am reading up on early and late reflections, as well as filtering highs, etc. There are many tricks, it seems. :slight_smile:

I guess the key here is to separate “spatialization” (placing of the instruments “in” the room) and “tails” (glueing everything together in the “same” space).

I know Beat Kaufman has done some excellent videos and tutorials on this subject. I believe Joël Dollié also addresses this in his orchestral mixing videos.

What I try to do is use early reflections (“ER”) for the spatialization (EAReverb2 and Melda MReverb are perfect for this, as they allow you to really place a separate insert somewhere on a 2D “plane” / soundstage). Depending on the amount of baked-in (“recorded”) room and “in situ”-ness that’s already IN a sample, it may need more or less of this treatment. I do this for each insert / sample library separately.

Then, for groups of instruments (woodwinds, strings, brass, etc.), I have sends / groups where I put them through Seventh Heaven (“Sandor’s Hall”) or Relab VSR S24 (“warm hall”) to give them all a decent amount of glue/tails.

I tend to use pretty dry libraries like Chris Hein and XSample and this does work for me. I highly recommend Beat Kaufman’s approach.


It can absolutely be done with “normal” reverbs as well. Cory Pelizzari has two videos where he shows how he uses two instances of 7H for exactly this reason.

Specialized reverbs like VSS and dearVR Pro may be slightly more suited, or follow more specific binaural principles.


Thank you, great points. I am actually looking into the Seventh Heaven reverb, after watching Cory’s video. Also lots of cinematic composers seem to rave about how good it is. Apparently a simulation of a Bricasti hardware reverb, which seems to be some kind of holy grail for the Hollywood sound. :stuck_out_tongue:

I personally don’t find algorithmic reverbs to be convincing in this regard - but looking at some of the reverbs being mentioned I can see everyone’s goal is a little different - nothing wrong with that. The only time I’ve liked anything besides either convolution or VSS2 (which apparently is algorithmic) is when I use ReVibe with early reflections only on the VSL winds group master, because I was able to get a sound that filled in what the dry stage was missing in terms of density and size. But it also hits my favorite IR in Altiverb, and is placed appropriately. Works pretty well to force dual perspectives of immediacy and enough wet to make it sound like the rest of the orchestra. But I’ve tried this with several other algorithmic verbs and didn’t like the result. Down to personal taste, I guess. Also I have used Goldplate or Lustrous Plates on strings to age them a bit - just a tail that hangs without much definition, only time.


A few thoughts:

If you want a cinematic sound, you should combine a convolution reverb that has a scoring stage preset (Reverence, Altiverb, or others) with an algorithmic reverb that has a concert hall setting to extend the reverb tail.

Use reverb pre-delay settings to push instruments forward and backwards in the mix; refer to this post.

Use a stereo panner to move the instrument left and right while controlling the stereo image width (By default, Cubase uses a Stereo Balance. You have to right-click in the mixer to change the setting).

When a sound source is located on a side, it will reach one ear slightly earlier than the other ear. By knowing that, you can apply some psychoacoustic concepts:

    You can slightly push the instrument away from the microphone by reducing the proximity effect with an EQ cut around 150Hz.

    If you introduce a very small delay on one channel, the sound will be perceived as coming from the other side.

    If you use the built-in “Frequency” plugin in Cubase, you can cut some of the high frequencies on one channel, and the sound will be perceived as coming from the other channel. This simulates how high frequencies are naturally filtered by your head.
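
The one-channel delay trick (the Haas effect / interaural time difference) can be sketched in a few lines of Python. The sample rate and delay value here are just example numbers:

```python
import math

SR = 48000  # sample rate in Hz (example value)

def itd_pan(mono, delay_ms, toward="left"):
    """Pan by interaural time difference: delay one channel slightly and
    the image pulls toward the earlier (undelayed) side. Delays under
    ~1 ms read as position; much longer and it starts to sound like a slap."""
    n = int(round(delay_ms * 1e-3 * SR))
    delayed = [0.0] * n + list(mono)
    same = list(mono) + [0.0] * n   # pad so both channels stay equal length
    return (same, delayed) if toward == "left" else (delayed, same)

# Half a millisecond of delay on the right channel pulls a 440 Hz tone left:
mono = [math.sin(2 * math.pi * 440 * t / SR) for t in range(SR // 10)]
left, right = itd_pan(mono, 0.5, toward="left")
```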

For more info on sound perception, check out this clip


Oh wow, your posts are always so incredibly detailed and valuable, Medhat. That last one I have never thought about, but it is such a logical trick that I will definitely try it.

One thing you did not mention here, though, is high-frequency fall-off. Since high frequencies are so low in energy, every meter of distance between you and the instrument should attenuate the highest frequencies a bit more. So far I have been adjusting this with EQ/filtering…but is there a plugin dedicated to this purpose?

Essentially, with clever EQ/filtering of both channels independently, you should be able to stage the instrument forward/back and left/right based on frequency response alone, right?


You’re welcome :slight_smile:

Having taken both acoustics and psychoacoustics courses at the University of Montreal, I have never heard of that, and it doesn’t really make any sense from a scientific point of view… It’s quite the opposite, actually: The further you get away from something, the less you hear the low frequencies, and the less directional it becomes.

If you’re close to a fountain, you hear all the details and a wide stereo image. The further away you go, the less you hear the low frequencies, and the less you can pinpoint its exact location. If you’re very far, you’ll only hear the high frequencies, and it will sound like a mono source. Video game audio engines even program attenuation curves that cut low frequencies to simulate this effect.

High frequencies will get filtered out only if you have big obstacles like walls, and this will depend on the material’s absorption coefficient. That’s why, if you’re near a wall, and music is playing on the other side, you will likely hear the basses.
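
That wall behavior is essentially a lowpass filter: the bass gets through, the highs are absorbed. A crude one-pole lowpass sketch (the cutoff is a made-up stand-in for a real wall’s transmission curve):

```python
import math

def one_pole_lowpass(signal, cutoff_hz, sr=48000):
    """One-pole lowpass: a crude stand-in for a wall that lets
    bass through and absorbs the highs."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / sr)
    y, out = 0.0, []
    for x in signal:
        y = (1.0 - a) * x + a * y   # smooth the signal; fast wiggles average away
        out.append(y)
    return out

# Bass-like (slow) content passes almost unchanged...
print(round(one_pole_lowpass([1.0] * 2000, 100.0)[-1], 3))       # 1.0
# ...while fast (high-frequency) alternation is nearly silenced:
print(round(one_pole_lowpass([1.0, -1.0] * 1000, 100.0)[-1], 3))  # near zero
```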

These are two completely different concepts that shouldn’t be mixed up or confused :slight_smile:
As for plugins: way too overrated and unnecessary; just use any regular built-in EQ and cut at 150Hz, and you’ll instantly push the instrument further away. Of course, you should combine this with a reverb with the correct pre-delay settings mentioned above, according to the distance :wink:
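
The pre-delay/distance relation can be sanity-checked with simple arithmetic: pre-delay roughly corresponds to the extra path the first reflections travel compared to the direct sound, and that gap shrinks as the source moves away from the listener toward the walls. A quick sketch (the path lengths are made-up examples):

```python
SPEED_OF_SOUND = 343.0  # m/s in air at room temperature

def predelay_ms(extra_path_m):
    """Pre-delay as the extra travel time of the first reflections
    relative to the direct sound."""
    return extra_path_m / SPEED_OF_SOUND * 1000.0

# Close source, ~10 m of extra bounce path: a long gap reads as "near"
print(round(predelay_ms(10.0), 1))  # 29.2
# Distant source, only ~2 m of extra path: a short gap reads as "far"
print(round(predelay_ms(2.0), 1))   # 5.8
```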


Really? Perhaps I was thinking of light, whose energy fades with every meter from the source. I only know that high frequencies are very low in energy, so I assumed that they fall off the further you get from a source? :slight_smile:

Anyway, the problem I have in practice is with, for example, the new SWAM Solo Brass, which I love. But if I use a big hall reverb on it and then play, say, staccato parts…I still hear that crystal-clear focus in the high range, which I would never hear if the instruments were actually placed in a real hall, some distance away from me. :slight_smile:

Some composers have a wetter reverb mix on the long articulations, and a slightly drier reverb mix on the shorts to give them more bite.

Unless you’re doing concert work played by live players, in my book: as long as it sounds good, the rule is that there are no rules. :grin:
