Mixing vs. velocity

Hello guys,

I hope I'm able to explain what I mean. When I start recording or playing my track, I can set the volume with the velocity or the modwheel - so let's assume there is a crescendo over the track.

What should the setting of the fader in my mixer be? Normally I leave it around 0 dB, and my mixing is mostly done with velocity.
Is this wrong? The other possibility would be to play all notes at the same velocity and create an automation that fades from maybe -6 to 0 dB over the track.
I'm a bit confused by all the possibilities.
I hope you can help me out here.

Thank you
Michael

1 Like

TL;DR: Generally speaking, the mixer faders should be used for mixing only (which may include solving technical problems, but not “musical” dynamics), whereas velocity and dynamics (modwheel) should be used for musical expression.

Sidenote: You could also use the mixer faders for balancing instruments in the mix, if they don’t come with the correct gains out of the box (usually the case when combining unrelated libraries, but some libraries don’t even have the instruments balanced in the first place), but I think it’s better to use the sampler output gain, mixer pre-gain adjustment or similar for that, so that “all faders at 0 dB” is the starting point when you get into actual mixing. Technically speaking, it doesn’t matter at all, though - unless you have non-linear plugins (compression, saturation, tape simulation, waveshapers…), which will of course be affected if you change the gain before them.
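For illustration, a minimal numpy sketch of that last point (the 440 Hz tone and tanh "saturator" are made-up stand-ins, not any particular plugin): a gain change after the non-linear stage is a pure level change, while the same gain change before it alters how hard the stage is driven, and therefore the tone.

```python
import numpy as np

def saturator(x):
    """A simple non-linear "tape-ish" stage (illustrative only)."""
    return np.tanh(x)

t = np.linspace(0, 1, 48000)
signal = 0.5 * np.sin(2 * np.pi * 440 * t)  # a plain 440 Hz tone

fader_db = -6.0
gain = 10 ** (fader_db / 20)  # dB to linear gain

# Gain AFTER the saturator: pure level change, the waveform shape
# (and thus the harmonic content) is untouched.
post = gain * saturator(signal)

# Gain BEFORE the saturator: the stage is driven more softly, so it
# also distorts less - the tone itself changes.
pre = saturator(gain * signal)

# Same fader move, different harmonic content:
print("post-fader peak:", post.max())
print("pre-gain peak:  ", pre.max())
```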

The dynamics implementation varies between instruments, but typically, short articulations will be velocity controlled, whereas long articulations are modwheel controlled. Some use both, for separate control of onset and sustain. These controls will generally switch/fade between sampled velocity layers, apply filters and whatnot, to create a realistic impression of actual dynamic playing. For most instruments, this is very different from just changing the volume!
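As a rough sketch of what that means under the hood - the noise "layers" and equal-power curve are assumptions for illustration, not any specific sampler's implementation:

```python
import numpy as np

# Stand-ins for two sampled dynamic layers of one sustained note
# (in a real library these would be actual recordings of soft/loud
# playing, differing in timbre, not just level).
layers = {"p": 0.2 * np.random.randn(48000),  # soft layer
          "f": 0.8 * np.random.randn(48000)}  # loud layer

def render_long(cc1):
    """Long articulation: modwheel (CC1, 0..127) crossfades the layers."""
    x = cc1 / 127.0
    # Equal-power crossfade keeps the perceived level steady mid-fade.
    return np.cos(x * np.pi / 2) * layers["p"] + np.sin(x * np.pi / 2) * layers["f"]

def render_short(velocity):
    """Short articulation: velocity *switches* layers rather than fading."""
    return layers["p"] if velocity < 64 else layers["f"]

quiet = render_long(20)   # mostly the soft layer: softer timbre, not just lower volume
loud = render_long(110)   # mostly the loud layer
```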

Some sample libraries also have two confusingly named controls (Dynamics and Expression, in the case of most/all Spitfire libs, for example), where one is the actual dynamics control (sampled layers etc), and the other is essentially just an extra volume control, for when the dynamic range of the sampled layers isn’t enough. The latter should only be used as a last resort, or for special effect, as it’s not very realistic.

Some libraries (some 8Dio ones, for example) allow these “true dynamics” and “secondary volume” controls to be linked, for increased dynamic range.

Other libraries (Orchestral Tools etc) have a Niente switch somewhere, which applies extra “fake” dynamic range, so that velocity/dynamics scales all the way down to silence, rather than the recorded level of the quietest sample layer.
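A minimal sketch of what such a Niente switch might compute; the 0.15 threshold is a made-up example value, and real implementations will differ:

```python
def niente_gain(dynamics, softest_layer=0.15):
    """Extra "fake" dynamic range below the quietest sampled layer.

    dynamics: 0.0..1.0 (e.g. CC1 / 127). Above `softest_layer` the
    sampled layers carry the dynamics, so no extra gain is applied;
    below it, the volume is faded linearly down to silence.
    """
    if dynamics >= softest_layer:
        return 1.0
    return dynamics / softest_layer  # 1.0 at the threshold, 0.0 at niente

print(niente_gain(0.5))    # 1.0  - sampled layers only
print(niente_gain(0.075))  # 0.5  - halfway down to silence
print(niente_gain(0.0))    # 0.0  - true niente
```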

2 Likes

I played a bit with EQ today, and the result was that my sounds - although I only cut the low end - sounded like a phone recording.
I'm still struggling a lot, because I still don't know how something SHOULD sound :slight_smile:
At the moment it's nothing more than trial and error :frowning:

2 Likes

Well, this all seems pretty abstract at first, and it’s not something you figure out overnight. In fact, no matter what level you’re on, the moment you stop learning is probably when it’s time to start considering a different career. :slight_smile:

Trial-and-error is pretty much the way to go here - and that’s true whether you’re just starting, or have been doing it for decades. Experience just allows you to make better qualified guesses, but few ever reach the level of being able to just listen and say “you need to cut 2 dB from 2300-2700 Hz” and actually nail it.

I think the key to learning effectively is to approach it in a systematic manner. Watch the frequencies, gains, Q values etc., and try to learn what things “feel” like. It speeds things up if you know the numbers behind a “boomy” sound or harsh sibilance, but what really matters is that you know how to pull up an EQ and, if all else fails, just sweep through the frequency range to find the problem.
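If you want to experiment with the sweep technique outside a DAW, here's a small Python sketch using the standard RBJ "cookbook" peaking filter (the noise input and frequency steps are just placeholders):

```python
import numpy as np
from scipy.signal import lfilter

def peaking_eq(x, fs, f0, gain_db, q=4.0):
    """RBJ "cookbook" peaking EQ band applied to signal x."""
    a = 10 ** (gain_db / 40)
    w0 = 2 * np.pi * f0 / fs
    alpha = np.sin(w0) / (2 * q)
    b = np.array([1 + alpha * a, -2 * np.cos(w0), 1 - alpha * a])
    den = np.array([1 + alpha / a, -2 * np.cos(w0), 1 - alpha / a])
    return lfilter(b / den[0], den / den[0], x)

fs = 48000
x = np.random.randn(fs)  # stand-in for the track you're hunting a problem in

# Sweep a narrow +12 dB boost across the range; whichever step makes the
# problem jump out is roughly where your cut belongs. In a DAW you'd drag
# a single EQ band instead, and audition each position by ear.
for f0 in (100, 200, 400, 800, 1600, 3200, 6400, 12800):
    boosted = peaking_eq(x, fs, f0, gain_db=12.0)
```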

As for how it “should” sound, the best answer is to check reference tracks that sound good, learn how those sound to you, through your gear, and try to figure out why it sounds like that, and what you need to do to match that sound with your own mixes. Visualization (analyzers and meters of various sorts) may help to put numbers on things, and make the concepts more clear in your mind, but keep in mind that ultimately, you should be using your ears when making decisions.
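As a toy example of putting numbers on such a comparison - the random signals are placeholders for your mix and the reference, and any real analyzer plugin does this better:

```python
import numpy as np

def spectrum_db(x, fs):
    """Average magnitude spectrum in dB - a crude single-shot "analyzer"."""
    spec = np.abs(np.fft.rfft(x * np.hanning(len(x))))
    freqs = np.fft.rfftfreq(len(x), 1 / fs)
    return freqs, 20 * np.log10(spec + 1e-12)

fs = 48000
my_mix = np.random.randn(fs)     # placeholder: load your mix here
reference = np.random.randn(fs)  # placeholder: load a reference track here

freqs, mine = spectrum_db(my_mix, fs)
_, ref = spectrum_db(reference, fs)
diff = mine - ref  # positive where your mix has more energy than the reference
```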

1 Like

Thank you so much, David @olofson

I realized that making music is not only putting some MIDI elements together in the DAW.
So for now, I'll keep listening to other tracks and try to identify the instruments and how they are EQ'd. It is in fact very interesting what impact each cut or pass has, even if it's only 1 dB in some range.
For my kind of work, I figured out that playing with simple high and low cuts is a good starting point.
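Just to illustrate what I mean, in Python terms - the 80 Hz and 12 kHz corners are arbitrary values to tweak by ear:

```python
import numpy as np
from scipy.signal import butter, sosfilt

fs = 48000
x = np.random.randn(fs)  # placeholder for an instrument track

# Gentle low cut (high-pass) at 80 Hz plus high cut (low-pass) at 12 kHz;
# both corner frequencies are arbitrary starting points.
low_cut = butter(2, 80, btype="highpass", fs=fs, output="sos")
high_cut = butter(2, 12000, btype="lowpass", fs=fs, output="sos")
y = sosfilt(high_cut, sosfilt(low_cut, x))
```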

Ah, one question: should EQ always be automated, or only in very difficult situations?

Thank you so much for your help
Michael

1 Like

EQ would usually be fixed, unless you’re using it for creative sound design or something. I often use automation on the Cubase strip EQ for more basic filter sweeps and the like, as it’s quick and easy to set up, and works fine as long as you don’t need resonance, extremely steep filters etc.
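A tiny sketch of what such an automated sweep boils down to: a lowpass whose cutoff follows a ramp. (One-pole filter for simplicity; the 200 Hz to 8 kHz range is just an example, and the Cubase strip EQ obviously works differently internally.)

```python
import numpy as np

def swept_lowpass(x, fs, f_start, f_end):
    """One-pole lowpass whose cutoff follows a linear automation ramp."""
    cutoff = np.linspace(f_start, f_end, len(x))
    # Per-sample smoothing coefficient from the instantaneous cutoff.
    g = 1.0 - np.exp(-2.0 * np.pi * cutoff / fs)
    y = np.zeros_like(x)
    state = 0.0
    for n in range(len(x)):
        state += g[n] * (x[n] - state)
        y[n] = state
    return y

fs = 48000
noise = np.random.randn(fs * 2)
opening = swept_lowpass(noise, fs, 200.0, 8000.0)  # a two-second opening sweep
```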

Note that it’s common to use different EQ, or entirely separate mixes, for the “same” instrument in a track - for example, vocals in the chorus vs. the verse, where you want to emphasize the different feel between the parts.

There may be a need for actual EQ automation if the problem itself is moving around. That’s not an unusual situation with vocals, especially if the singer has inconsistent microphone technique, or the microphone has a very pronounced proximity effect. Then you may need to “ride” the EQ to dynamically correct the proximity effect as the singer/mic distance changes in the recording.

Now, to complicate matters further (or make your job easier, once you know what you’re doing), there are EQs with dynamic bands (like my favorite, FabFilter Pro-Q 3), which allow you to avoid manual automation in many situations. For example, if you have an instrument or recording space with problematic resonances that only show up when playing certain notes, you can set up an EQ band to tame that, and then make it dynamic, so it only kicks in when the offending resonance builds up. It’s effectively the same thing as automating the EQ, but it can deal accurately with complex situations without all the work of tweaking automation curves.
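To make the idea concrete, here's a simplified sketch of the detection side of one dynamic band. The band edges, threshold and time constants are all made-up example values - this is the general technique, not FabFilter's actual algorithm:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def dynamic_band_gain(x, fs, f0, threshold_db, max_cut_db=-9.0):
    """Per-sample gain reduction for one dynamic EQ band (sketch).

    Detects energy in a band around f0 and returns a dB cut that only
    kicks in when that band exceeds `threshold_db`. A real dynamic EQ
    would apply this as a time-varying peaking filter; here we just
    compute the control signal.
    """
    band = butter(2, [f0 * 0.8, f0 * 1.25], btype="bandpass", fs=fs, output="sos")
    detector = sosfilt(band, x)

    # Simple envelope follower (fast attack, slow release).
    env = np.zeros_like(x)
    e = 1e-9
    attack, release = 0.005, 0.9995
    for n in range(len(x)):
        level = abs(detector[n])
        e = e + attack * (level - e) if level > e else e * release
        env[n] = e

    env_db = 20 * np.log10(env + 1e-9)
    over = np.clip(env_db - threshold_db, 0, None)
    return np.clip(-over, max_cut_db, 0)  # 0 dB until the resonance builds up

fs = 48000
x = np.random.randn(fs)  # placeholder for the problem recording
cut_db = dynamic_band_gain(x, fs, f0=300, threshold_db=-20)
```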