Symphonic Music Production

When producing symphonic (orchestral) music with 100% virtual instruments, how do you get that “full” sound? I’m not necessarily talking about more volume, though it probably does have something to do with volume. I’ve listened to professional and non-professional productions, and I hear something that is missing from my mixes. The best way I can describe it is a “depth” that does come across as volume. But I know it’s not as simple as turning up the volume. Any thoughts?

I will share a link to a piece I was working on last night. I write music for concert band (wind band). Listen to this and please tell me what is missing. What am I doing wrong in the mixing process?



First of all, dynamics! fff moments won’t sound like fff unless p is really p. It’s easy to lose perspective on that when working on speakers or headphones, especially if you work at low volume levels to save your ears.

Second, reverb! Properly used reverb gives you more control over the perceived placement of instruments, essentially giving you 3D control, rather than plain 2D stereo. It also creates a “background,” as a frame of reference for the stage, allowing the listener to perceive the intended size and dynamics in a more natural and reliable manner.

You’ll probably need to use dynamics processing in the mastering chain to bring things up to “normal” levels by today’s standards, which is an art and science in itself, but it can be taken pretty far without obvious artifacts. The key here is to keep loud bass from pushing everything down. This can be achieved through various combinations of compression and saturation, sometimes in multiband configuration, and limiting. That said, this is the mastering stage, and although a REALLY loud master starts at the mix level, this is probably not the first thing you should worry about for orchestral music.

(I wouldn’t consider myself an authority on the matter, but I think I’m starting to get it right every now and then. :slight_smile: Hybrid orchestral example: )


Yes, after listening to the example that you posted, that is the sound I was referring to. Thanks for the response. You gave me somewhere to start with my research.


Hi Jonathan,
in my last projects I also experimented a lot with stereo widening and depth. In my opinion, and at my skill level, it’s a long road of learning to work with EQ, reverb, and width. I use Ozone Imager and OTT because they’re both free, but there are lots of other products I’ve heard of, e.g. Valhalla & co.

Take a listen to my new track:

Best Regards



I just learned about OTT about two days ago from a YouTube video. Of course, it was a video about mixing pop music. With a name like Over The Top, I never thought it could be used with orchestral music. Now I will have to go download them both and do some research.

How do you use it? Do you use it on individual tracks, section busses, or in the master chain?



Hi Jonathan,

That depends on what you want to do.
Try different things and listen for what you like most.
Mostly, I use these plugins on the master mix. In this project I also used an EQ on the Taiko track to filter out some higher frequencies.
I’d recommend Alex Moukala’s tutorials; he explains very well how to use these kinds of plugins.
Are you a composing beginner, too?



No, I have been composing and arranging for about 25 years. I am just very new to DAWs, virtual instruments, and processing. Back when I started in college, it was paper, pencil, and a piano. Then in the early-to-mid 90s, Finale came along. I messed around with GarageBand about 4 years ago, but nothing serious. I just got Logic Pro X this past summer and my first virtual instruments (VSL Special Editions) on Black Friday. Since then, I have been obsessed with learning as much as I can. I just think being able to produce high-quality mockups is amazing.


I wish you good luck and happy composing,
I always ask myself what Beethoven and Mozart would have done if they had had a DAW :confused:
But I’m sure in a couple of weeks it will be a lot of fun, and as you are very experienced, there should be no further problems :slight_smile:


I agree with @olofson: Dynamics are extremely important indeed. It’s psychoacoustics: we perceive dynamics better when we’re mixing at a lower volume because the ear acts as a compressor. The louder we mix, the less we perceive dynamics.

Check out the Fletcher–Munson curves; there’s a fantastic video that clearly explains them:

From my experience, I can tell you that programming a realistic fanfare with a ‘marching band’ sound is extremely difficult as most libraries are geared towards a film sound or big band sound. When I listened to the demo you posted, I noticed a couple of things:

  • The drums are a bit muddy and lack definition; that’s partly caused by noise build-up when samples are repeated. Check out this awesome tutorial by Anne-Kathrin Dern where she shows how to fix that.
  • The brass sound synthy and outdated. You should consider using a better and more modern sample library. Some things can be fixed in the mix but bad sounding samples will always sound bad, so working with high-end virtual instruments is the only way to go. :slight_smile:
  • The tubular bells are way too low; they should be much louder in the mix.
  • The tone of the glockenspiel sounds off and some harmonics are a bit too harsh in the high end; maybe try a better one, like the one from CinePerc, and use a dynamic EQ like FabFilter’s Pro-Q 3 to tame them.

Mixing is one thing, but another aspect that needs to be discussed here is Orchestration for virtual instruments.
Like you, I also come from a classical and jazz background and worked with a pencil and paper for years. When I started, I was writing in the DAW like I would write for live players; my mixes sounded super thin and the results were pretty bad, because musicians don’t have the same limitations you get with VST instruments. They’re really two completely different beasts.

  • We agree that nothing replaces live players. If you’re working with VST instruments, you need to adapt your writing to their strengths (if a library is good at slow tempos, don’t use it at a fast tempo; if it has a lyrical/orchestral sound, don’t use it in a big band context).
  • There isn’t a one-size-fits-all solution, unfortunately. Some libraries are great for slow passages, others are better for fast passages, for example. You need to either mix and match them to get the music you want to write, or adapt your writing to the libraries.
  • If you’re mixing VST libraries together (for example 2 string libraries for a fuller sound), just make sure to add reverb to the driest one so it matches the wetter one.
  • In the case of film scores, don’t hesitate to use the low end of the orchestra and double the basses with tubas, trombones, contrabassoons, or even synths for richer harmonic content and low end. (This is one of the mistakes I made for years that resulted in a very thin sound.)


  • If you’re looking for a typical film score sound, you need to group your instruments by family and add a convolution reverb to each family that has the impulse response of an actual scoring stage (this is VERY important so they all sound like they’re playing in the same space). You can use Altiverb, Reverence if you’re in Cubase, or any other convolution library that has a scoring stage IR preset. You adjust the instruments in the space by adjusting the reverb predelay settings to play with the depth:
    In the case of a scoring stage, it’s typically between 0 ms (if the instruments are in the back, right by the back wall of the scoring stage) and 40 ms (for the instruments closest to the Decca tree microphones).
  • Once the convolution reverbs are set up, you need to add a master algorithmic reverb that will extend the scoring stage’s natural reverb. I highly recommend Valhalla Room, a $50 reverb that’s used on many Hollywood scores.
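As a sanity check on that 0–40 ms range, here is a back-of-the-envelope model (my own assumption, not from the post above) that treats pre-delay as the time gap between the direct sound and the first back-wall reflection. With a stage depth of roughly 7 m, the numbers land right in the quoted range:

```python
# Toy model: the back-wall reflection travels an extra path of roughly
# twice the source-to-wall distance. An instrument sitting at the back
# wall has almost no gap (~0 ms); one right by the Decca tree, with the
# full stage depth behind it, has the largest gap.
# The 7 m stage depth below is an illustrative assumption.

SPEED_OF_SOUND_M_S = 343.0  # at roughly 20 degrees C

def predelay_ms(distance_to_back_wall_m: float) -> float:
    """Convert the reflection's extra travel distance to milliseconds."""
    extra_path_m = 2.0 * distance_to_back_wall_m
    return extra_path_m / SPEED_OF_SOUND_M_S * 1000.0

if __name__ == "__main__":
    for d in (0.0, 3.5, 6.9):  # back wall, mid-stage, front of a ~7 m stage
        print(f"{d:4.1f} m from back wall -> {predelay_ms(d):5.1f} ms pre-delay")
```

This is only a mental model for why larger pre-delay reads as “closer”; in practice you set the values by ear per instrument family.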

Hope this is useful


Thank you, Medhat,
This was very useful. I found many parallels to my work.

Thanks again,


Thank you. This information is very useful for me, as I’m new to this.


Yes, this is very helpful and I will watch the suggested tutorials.

Questions from your comments:

  1. Which brass library do you consider to be more modern? I only have two: VSL Special Editions and the brass included in NotePerformer 3. The brass and woodwinds in this demo were rendered with NotePerformer via Finale 26.

  2. I noticed that you mentioned CinePerc for percussion. What library do you like for woodwinds?

  3. I understand what you are saying regarding reverb. Does the master algorithmic reverb go on as a send for each family, or is it set up as a separate bus to which everything is sent?

Thank you very much, Medhat, for taking the time to help me. You have given me a lot to work on - and to consider purchasing.


There are many great libraries on the market, the trick is to choose them depending on the context (Music style, tempo, etc.). I highly recommend CineBrass Core + Pro and maybe layer them with Caspian.

Caspian isn’t the best library, but it has a very fast attack, and combined with CineSamples, it could help you get the typical “fanfare” sound. You’ll need to match the ambience of the rooms and tweak the note lengths to match the endings of the CineSamples patches, of course.

NotePerformer is nice for getting a general idea of the sound when you write for live players, but it is by no means sufficient if you want a realistic result.

Every VST library is unique: the legato speed and scripting work differently, the dynamics are different, etc., so you have to manually play in every part in a DAW to get a realistic performance, rather than relying on notation software playback or MIDI export. You need to manually shape the dynamics using modulation (CC1), adjust the phrasing intention with expression (CC11), and set track MIDI delay to make sure the phrasing timing is correct.
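To make the CC1/CC11 shaping concrete, here is a minimal sketch (my own illustration; the curve shape, tick resolution, and values are assumptions, not rules from this thread) that generates a crescendo as a list of controller events you could draw into a DAW’s CC lane:

```python
# Hypothetical helper: build (tick, cc_number, value) events for a ramp.
# A slightly eased curve often feels more natural than a straight line
# for dynamics (CC1), since loudness perception is nonlinear.

def cc_ramp(cc_number: int, start: int, end: int, ticks: int, steps: int = 16):
    """Return controller events ramping from start to end over `ticks`."""
    events = []
    for i in range(steps + 1):
        t = i / steps
        value = round(start + (end - start) * t ** 1.5)  # gentle ease-in
        events.append((round(t * ticks), cc_number, value))
    return events

# Two-bar crescendo at 960 ticks per quarter note: CC1 (dynamics) from
# roughly pp to ff, with CC11 (expression) adding a smaller swell on top.
dynamics = cc_ramp(cc_number=1, start=20, end=120, ticks=7680)
expression = cc_ramp(cc_number=11, start=90, end=115, ticks=7680)
print(dynamics[0], dynamics[-1])
```

The same idea applies to any library; only the ranges and curve shapes change per patch.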

I absolutely love Berlin Woodwinds by Orchestral Tools because it comes with different players so you can have an actual Flute 1, Flute 2, Flute 3 for example and have realistic divisi sections. It’s pricey but really worth it.

For the master reverb, it’s usually a separate FX track with a concert hall setting to which you send your signal to ‘glue’ the mix together.



I’m still struggling with this.

This might be a very good topic for a video tutorial.

And it should save CPU as well - right?

I’m by no means good at drawing, but I tried my best to draw this in Photoshop.
Hopefully, it will make things clearer. Basically, it’s the way sound behaves in two different spaces: a scoring stage vs. a concert hall.

This is because the acoustics of the two rooms are completely different (shape, material, size).
You’ll immediately understand what I meant by “it will glue everything together.”

As for resources: more plugins = more CPU required. Using aux tracks for sends will help you save processing power, since multiple tracks can be processed by the same plugin instance, without the need to insert one instance per track.
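The shared-instance trick works because a convolution reverb is linear: running the sum of all the send signals through one reverb gives the same result as running each track through its own instance and summing afterwards, at a fraction of the CPU cost. A toy demonstration (my own numbers, naive direct convolution instead of a real reverb):

```python
# Toy demonstration of why one shared aux reverb sounds identical to
# per-track reverb instances: convolution is linear.

def convolve(signal, impulse_response):
    """Naive direct convolution; real reverbs use FFT, but linearity holds."""
    out = [0.0] * (len(signal) + len(impulse_response) - 1)
    for i, s in enumerate(signal):
        for j, h in enumerate(impulse_response):
            out[i + j] += s * h
    return out

ir = [1.0, 0.5, 0.25]              # toy impulse response
track_a = [0.2, 0.0, -0.1, 0.3]    # toy send signals from two tracks
track_b = [0.1, 0.4, 0.0, -0.2]

# One shared instance on an aux bus vs. one instance per track:
shared_bus = convolve([a + b for a, b in zip(track_a, track_b)], ir)
per_track = [x + y for x, y in zip(convolve(track_a, ir), convolve(track_b, ir))]
assert all(abs(x - y) < 1e-12 for x, y in zip(shared_bus, per_track))
```

The only audible difference in practice comes from per-send levels, not from sharing the instance itself.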


I understand now, especially after trying it out on my concert band track. I can definitely hear the gluing effect.
