I think there are some inherent problems with “smart” articulations and properly expressive virtual instruments, no matter how you approach it…
One way or another, you need to indicate what exactly you want the instrument to do. Sure, velocity sensitivity, aftertouch, breath controllers, etc. can technically allow you to play expressively — but you have to get the velocity, aftertouch, modwheel, timing, etc. just right to actually get the articulation you intend. I believe the main reason for the popularity of pre-selecting articulations with key switches or similar is that it doesn’t really demand anything extra from the actual playing, so it makes the instruments directly accessible to anyone who can play a standard keyboard controller or similar.
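To make the “out-of-band” nature of key switches concrete, here is a minimal sketch in Python. The note numbers, articulation names, and playing range are purely illustrative (real libraries map these however they like); the point is that a switch key latches a state silently, so sounding notes never need to be reinterpreted:

```python
# Hypothetical keyswitch mapping: keys below the playing range select an
# articulation "out of band"; notes in the playing range use the last
# selection. Note numbers and names are illustrative, not any real library's.
KEYSWITCHES = {24: "legato", 25: "staccato", 26: "pizzicato"}  # C1, C#1, D1
PLAY_RANGE_START = 36  # C2 and above actually sound


class Instrument:
    def __init__(self):
        self.articulation = "legato"  # default until a switch is pressed

    def note_on(self, note, velocity):
        if note in KEYSWITCHES:
            # Inaudible: just latch the articulation for subsequent notes.
            self.articulation = KEYSWITCHES[note]
            return None
        if note >= PLAY_RANGE_START:
            # The decision was made before the note, so the engine can start
            # the right sample immediately, with no guessing or lookahead.
            return (note, velocity, self.articulation)
        return None


inst = Instrument()
inst.note_on(25, 100)        # keyswitch: selects staccato, makes no sound
print(inst.note_on(48, 96))  # -> (48, 96, 'staccato')
```

Because the selection happens strictly before the note, nothing about the player’s touch has to be “read” — which is exactly why this scheme asks so little of the performance itself.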
I like the BBC2 and the Touché, and I have no doubt I’ll love the Osmose — but using them is not very different from playing the cello or violin! They’re fun and easy to play around with for effect, but actually playing expressively, in a controlled fashion, requires a fair bit of focused training, and some techniques (like vibrato) can take years to master.
So, we’re starting to see technology that literally turns “synths” into infinitely expressive, real instruments — but how many will put in the time to learn to play them properly? TBH, the difficulty/results ratio has not been enough for me to seriously practice with the BBC2 or Touché. I spend that time on the violin, cello, and vocals instead, because they’re still much more inspiring, and learning those techniques feels like a safer investment somehow. If the Osmose is anything like I expect, it might be a game changer here, but even though it has more in common with the standard keyboard than prior attempts, I don’t think it can be played to its full potential without lots of training.
Finally, there is still the issue of interpreting input, and rendering it as audio, in the context of realistic virtual instruments. Sample libraries need to add substantial latency to have a chance of figuring out which articulations to select, as you can’t just switch samples at any random moment. Key switches sort of circumvent that, as they’re inaudible “out-of-band” information, allowing the player to make a final, unambiguous decision well before playing the actual notes. Physical modelling and similar approaches tend to generate sound on the fly, rather than “branching” within a limited set of samples, so that would seem to improve matters — but only so much. After all, the main reason sample libraries tend to have “delays” (even with key switches) is that real instruments have various audible onset phenomena going on, and those are really supposed to happen before the actual notes. The Osmose sensing the very moment you touch a key might help with this, but ordinary keyboards just don’t transmit any information at all until the key is fully down.
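To illustrate why “smart” articulation selection forces latency, here is a toy sketch (my own construction, not any real library’s logic): to choose between, say, a staccato and a sustain sample, the engine must know roughly how long the note will be, so each note-on is held back by a lookahead window to see whether its note-off arrives in time. The window length and articulation names are assumptions for the example:

```python
# Toy articulation classifier: each note-on is delayed by a lookahead window
# so the engine can peek at the (future) note-off before picking a sample.
# The 120 ms window and the staccato/sustain split are purely illustrative.
LOOKAHEAD_MS = 120


def classify_notes(events, lookahead_ms=LOOKAHEAD_MS):
    """events: time-sorted list of (time_ms, kind, note), kind in {'on','off'}.
    Returns [(decision_time_ms, note, articulation)] — note the decision can
    only be made lookahead_ms AFTER the key was struck: audible latency."""
    out = []
    for i, (t, kind, note) in enumerate(events):
        if kind != "on":
            continue
        # Peek ahead: does this note's note-off fall inside the window?
        off_in_window = any(
            k2 == "off" and n2 == note and t2 - t <= lookahead_ms
            for t2, k2, n2 in events[i + 1:]
        )
        art = "staccato" if off_in_window else "sustain"
        out.append((t + lookahead_ms, note, art))
    return out


events = [(0, "on", 60), (80, "off", 60),     # short note  -> staccato
          (200, "on", 62), (800, "off", 62)]  # long note   -> sustain
print(classify_notes(events))
# -> [(120, 60, 'staccato'), (320, 62, 'sustain')]
```

With a key switch, the `art` value would already be known at `t`, so the note could start immediately — which is the trade-off the paragraph above describes: out-of-band decisions cost playability, in-band guessing costs latency.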
So… Bottom line is, this is a much more complicated problem than it may seem at first, and most “solutions” just cause more problems than they solve.