I think there are some problems with “smart” articulations and properly expressive virtual instruments, no matter how you look at it…
One way or another, you need to indicate exactly what you want the instrument to do. Sure, velocity sensitivity, aftertouch, breath controllers etc. can technically allow you to play expressively, but you have to get the velocity, aftertouch, modwheel, timing etc. just right to actually get the articulation you intend. I believe the main reason pre-selecting articulations with key switches (or similar) is so popular is that it doesn’t really demand anything extra from the actual playing, so it makes the instruments directly accessible to anyone who can play a standard keyboard controller.
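For the curious, here’s a minimal Python sketch of how key switching typically works. The note numbers and articulation names are hypothetical, not taken from any particular library; the point is just that the selection happens outside the playing itself:

```python
# Minimal sketch of key-switch articulation selection. The note numbers
# and articulation names below are made up, not from any actual library.
KEYSWITCHES = {
    24: "sustain",    # C1
    25: "staccato",   # C#1
    26: "pizzicato",  # D1
    27: "tremolo",    # D#1
}

class KeyswitchInstrument:
    def __init__(self):
        self.articulation = "sustain"  # default until a switch is hit

    def note_on(self, note: int, velocity: int):
        if note in KEYSWITCHES:
            # Inaudible, "out-of-band": the switch makes no sound, it
            # just decides which samples the *following* notes will use.
            self.articulation = KEYSWITCHES[note]
        else:
            # Nothing about the playing itself has to encode the
            # articulation; velocity and timing can be whatever the
            # music calls for.
            print(f"note {note} vel {velocity} -> {self.articulation}")

inst = KeyswitchInstrument()
inst.note_on(25, 1)    # key switch: select staccato (velocity irrelevant)
inst.note_on(60, 96)   # plays a staccato C4
inst.note_on(64, 80)   # still staccato until another switch is pressed
```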
I like the BBC2 and the Touché, and I have no doubt I’ll love the Osmose, but learning to use them well is not very different from learning the cello or violin! They’re fun and easy to play around with for effect, but actually playing expressively, in a controlled fashion, requires a fair bit of focused training, and some techniques (like vibrato) can take years to master.
So, we’re starting to see technology that effectively turns “synths” into infinitely expressive, real instruments, but how many will put in the time to learn to play them properly? TBH, the difficulty-to-results ratio hasn’t been favorable enough for me to seriously practice with the BBC2 or Touché. I spend that time on the violin, cello, and vocals instead, because they’re still much more inspiring, and learning those techniques feels like a safer investment somehow. If the Osmose is anything like I expect, it might be a game changer here, but even though it has more in common with a standard keyboard than prior attempts, I don’t think it can be played to its full potential without lots of training.
Finally, there is still the issue of interpreting input and rendering it as audio in the context of realistic virtual instruments. Sample libraries need to add substantial latency to have a chance of figuring out which articulations to select, as you can’t just switch samples at any random moment. Key switches sort of circumvent that, as they’re inaudible “out-of-band” information, allowing the player to make final, unambiguous decisions well before playing the actual notes. Physical modelling and similar approaches generate sound on the fly, rather than “branching” within a limited set of samples, so that would seem to improve matters, but only up to a point. After all, the main reason sample libraries tend to have “delays” (even with key switches) is that real instruments have various audible onset phenomena going on, and those are really supposed to happen before the actual notes. The Osmose sensing the very moment you touch a key might help with this, but ordinary keyboards just don’t transmit any information at all until the key is fully down.
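To make the latency trade-off concrete, here’s a toy Python sketch, emphatically not any real engine’s algorithm: the 60 ms window, the staccato/sustain split, and all names are assumptions for illustration. The idea is that when the articulation depends on information arriving after the key-press (like note length), every onset has to be rendered late, whereas a key switch moves the decision ahead of time so the onset can sound immediately:

```python
# Toy model of articulation lookahead. Hypothetical throughout: the
# 60 ms window, the staccato/sustain distinction, the event format.
LOOKAHEAD_MS = 60.0  # how long the engine waits before committing a note

def render_plan(notes, keyswitched=False):
    """notes: list of (onset_ms, duration_ms) pairs, sorted by onset.
    Returns one (render_time_ms, articulation) pair per note."""
    plan = []
    for onset, duration in notes:
        if keyswitched:
            # The player already told us the articulation out-of-band,
            # so the note can sound the moment the key goes down.
            plan.append((onset, "as pre-selected"))
        else:
            # A "smart" engine can't know a note is short until the
            # note-off arrives, so it must hold every onset back by the
            # full decision window before choosing a sample.
            art = "staccato" if duration <= LOOKAHEAD_MS else "sustain"
            plan.append((onset + LOOKAHEAD_MS, art))
    return plan

if __name__ == "__main__":
    phrase = [(0, 40), (200, 500), (900, 50)]
    for t, art in render_plan(phrase):
        print(f"sounds at {t:6.1f} ms  ({art})")
```

Note that even in the key-switched case, the sample’s pre-note onset transient still has to come from somewhere; the switch only removes the guessing, not the physics.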
So… Bottom line is, this is a much more complicated problem than it may seem at first, and most “solutions” just cause more problems than they solve.