Well, as you know, I’m not particularly impressed with the tone of most modeled instruments so far, and I believe the reason they’re not “there” yet is that it’s essentially synth programming. Even with meticulous research, audio analysis, custom tools for deriving model parameters from recorded audio, etc., it’s a massive project to even come up with an “acceptable” generic instrument, let alone one that sounds like a really good real one.
I mean, after hundreds of years, we only have a rough idea how a violin actually works, and building really great sounding ones - even with the help of computers and modern science, as some luthiers have started doing - is still more art than science. To “fake” our way around that, we’d at least have to record impulse responses of the top, back and other significant parts of good violins, and feed those into the models, to avoid trying to model things we don’t fully understand, such as the acoustic properties of a good piece of spruce.
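For what it’s worth, “feeding the IRs into the model” basically amounts to a convolution. Here’s a minimal Python sketch of the idea, assuming a mono body IR in a WAV file; the file names and the decaying sawtooth standing in for the modeled string are placeholders, not a real instrument model:

```python
# Minimal sketch: run a modeled string signal through a *measured* violin-body
# impulse response, so the body acoustics come from a recording rather than
# from physics we don't fully understand. File names are hypothetical.
import numpy as np
from scipy.io import wavfile
from scipy.signal import fftconvolve

rate, body_ir = wavfile.read("violin_body_ir.wav")  # assumed mono IR recording
body_ir = body_ir.astype(np.float64)
body_ir /= np.max(np.abs(body_ir))                  # normalize the IR

# Stand-in for the modeled string: a decaying sawtooth at A4 (440 Hz).
t = np.arange(int(rate * 2.0)) / rate
string = (2.0 * ((440.0 * t) % 1.0) - 1.0) * np.exp(-3.0 * t)

# The convolution imposes the recorded body resonances on the string signal.
out = fftconvolve(string, body_ir)
out /= np.max(np.abs(out))
wavfile.write("string_through_body.wav", rate, out.astype(np.float32))
```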
Sample modeling partially avoids these problems by instead using the recorded tone, transients and the like from the real instrument, and only “modeling” the higher-level behavior and playing techniques of the instrument.
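To make that split concrete, here’s a toy Python sketch: the tone comes straight from a recorded sustain, while the “modeled” part is just a control layer (vibrato and a dynamics level here) applied on top. All the names and parameters are illustrative, not how any actual product does it:

```python
# Toy "sample modeling" split: recorded tone + modeled expression layer.
import numpy as np

def render(recorded_sustain, rate, vibrato_hz=5.0, vibrato_cents=20.0, dynamics=0.8):
    """Apply a modeled vibrato and dynamics level to a recorded sustain."""
    n = len(recorded_sustain)
    t = np.arange(n) / rate
    # Modeled control layer: vibrato as a time-varying resampling ratio...
    ratio = 2.0 ** (vibrato_cents * np.sin(2 * np.pi * vibrato_hz * t) / 1200.0)
    pos = np.clip(np.cumsum(ratio), 0, n - 2)  # fractional read position
    i = np.floor(pos).astype(int)
    frac = pos - i
    wobbled = (1 - frac) * recorded_sustain[i] + frac * recorded_sustain[i + 1]
    # ...plus a dynamics level a player would drive via CC, breath, etc.
    return dynamics * wobbled

# Example: a fake 1-second "recorded sustain" (in reality, a looped sample).
rate = 48000
t = np.arange(rate) / rate
out = render(np.sin(2 * np.pi * 440.0 * t), rate)
```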
Anyway, I’m not sure what the next step will be for the “mainstream,” but I’m leaning towards something along the lines of sample modeling. I suspect “true” modeling is just too much work to get right with the current tools and methods, and most of that work needs to be redone for every single instrument to be modeled. Meanwhile, traditional “brute force” deep sampling just doesn’t scale to the levels of control and detail we want.
As for playability and control, most modeling approaches are automatically more “real time,” whereas traditional deep sampling inherently suffers from the need to select samples up front, which is why it has to either depend on explicit “out-of-band” controls (keyswitches, CCs, …) or add latency to buy enough time to reliably interpret the input in clever ways.
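A toy illustration of that “out-of-band” problem, with made-up event tuples rather than any real MIDI API: the keyswitch has to arrive before the note it affects, because the sampler commits to a sample the instant the note-on lands:

```python
# Keyswitch-style articulation selection. The keyswitch note is silent and
# only changes state; the sample for a "real" note must be picked at note-on,
# with no knowledge of what comes next (hence keyswitches or added latency).
KEYSWITCHES = {24: "sustain", 25: "staccato", 26: "pizzicato"}  # C0, C#0, D0

def play_events(events):
    articulation = "sustain"  # default until a keyswitch changes it
    for kind, note, velocity in events:
        if kind == "note_on" and note in KEYSWITCHES:
            articulation = KEYSWITCHES[note]  # out-of-band control, no sound
        elif kind == "note_on":
            # The engine must commit here; it can't yet know whether the
            # note will turn out to be short, legato into the next, etc.
            print(f"trigger {articulation} sample: note {note}, velocity {velocity}")

play_events([
    ("note_on", 25, 100),  # keyswitch: staccato
    ("note_on", 60, 110),  # middle C, plays the staccato sample
    ("note_on", 24, 100),  # keyswitch: back to sustain
    ("note_on", 62, 90),   # D, plays the sustain sample
])
```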
On the downside, modeling demands more from the player and controllers (hello, breath controllers, Touché, Seaboard, Osmose, etc.!), and I’m not sure composers, orchestrators and the like are in general prepared, or even willing, to deal with that…
Are we going to see a market for “virtual studio musicians,” specializing in expressive playing of virtual instruments?
Or will there be “AI musicians” integrated into the instruments? If so, how will that be implemented, without reintroducing the dreaded latencies, keyswitches and whatnot that we were supposed to leave behind when moving beyond traditional sampling…?