This turned out to be a very long post. My apologies - thinking about these libraries, and how people talk about them, set me off.
None of these instruments will play themselves.
To me there’s a kind of tree of goals with virtual instruments of any kind.
If the goal is purely composing, with sounds that refer to the intended instrument sufficiently so that someone understands what is intended, then “point and shoot” libraries that make moves or gestures of some kind themselves, along with, say, Sibelius to tell the library what stunt the composer wants performed, are fine. One won’t get a deeply human result in the tableaux one designs - more like a bunch of chess pieces with photos of people stuck to them. But it wouldn’t make much sense to expect more. Consequently if one used a more responsive library in that setting (but didn’t use that aspect of the library), the result would be similar or maybe even worse - without pre-recorded gestures the whole thing would appear flat.
If the goal is to render a product and what one hears - not the concept of the music but the actual audio - is that product, then libraries that are more flexible (or instruments that are more playable) are what is called for, but that’s the smallest part of what is needed. The other part is the composer learning and then playing these instruments in the way that best uses their strengths and minimizes their weaknesses.

As someone who writes from the mind but within the context of what’s under my fingers, I can say that my writing has evolved from the time when I was using a TX81Z to an FB-01 to a Proteus to a JD-990 with an orchestra card to an S-750 or an e-64 with a library to the long slow rise into VI’s - and it’s evolved in no small way because what’s possible has improved so much. I had and still have a Yamaha physical-modeling synth that changed my life utterly because it allowed me to get closer to getting an instrument to depict what I wanted. And that is the romantic (in the philosophical sense) goal of my relationship with gear and libraries - for things to sound less wrong. For me to be able to ignore what’s not working because there’s lots that is.

And the reality of traditional sample-based libraries is: if a library is well-performed and well-recorded, and of a sufficient depth of dynamics, and then appropriately edited and programmed, then what we should be left with is essentially a series of realistic moments strung together with either more or less convincing transitions between them. If the result one gets is less than convincing, here are some of the things that can make it so:
On the part of the user:
Choosing incorrect dynamics
Choosing moves like crescendi that don’t exactly work in your context
Forcing, say, mp samples to perform a ff role and vice versa
Awkward or insufficient controller usage
Not taking the attack or decay into account when playing
Playing six-note chords on a 12-player section patch (implying 72 players)
Expecting the instrument to think musically for the composer
Unrealistic placement of instruments
Unnatural balance of instruments
Poor gain staging resulting in loss of fidelity
Misunderstanding of the instrument depicted by the patch
Poor orchestration choices
Using combination patches when specific sounds are needed
On the part of the library:
Poor ambience/too much ambience
Transposition of samples resulting in formant shift
Poor use of filtering in lieu of recording dynamics
Messy or inconsistent note attacks
Poor vibrato or cross-fade implementation
Not enough coverage of what the instrument or ensemble does
Baked-in combinations of instruments that lack nuance
A preconception of usage that interferes with flexibility
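A couple of the items above - recorded dynamics versus filtering, and cross-fade implementation - come down to how a library blends its dynamic layers under the mod wheel. As a hedged sketch (not any particular library’s actual method), an equal-power crossfade is one common way to keep loudness steady while blending, say, an mp layer into an f layer:

```python
import math

def equal_power_crossfade(cc1: int, low_gain: float = 1.0, high_gain: float = 1.0):
    """Map a mod-wheel value (0-127) to gains for two dynamic layers.

    Equal-power crossfading keeps perceived loudness roughly constant
    while one recorded dynamic fades into the next.
    """
    t = max(0, min(127, cc1)) / 127.0           # normalize to 0..1
    lo = math.cos(t * math.pi / 2) * low_gain   # fades out as CC1 rises
    hi = math.sin(t * math.pi / 2) * high_gain  # fades in as CC1 rises
    return lo, hi
```

With the wheel at bottom only the soft layer sounds; at the top only the loud one; in between, the squared gains always sum to 1, which is why a well-implemented crossfade doesn’t dip or bulge in level the way a naive linear one does.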
Every few years new things come out that purport to do something for us automatically so we don’t have to worry about it - and some of those things are useful (expression maps, auto-divisi, speed-based legato transitions), and some haven’t been done yet (automatic delay compensation for various legato styles, which requires integration with the host sequencer), but really the best that we can expect from the existing sample-playback technology is that some things are handled so we can have more time to focus on other things that need to be handled, like dynamics control. There is never a point where we don’t have to drive the instrument.

To be at that point is like engaging an arpeggiator that randomly generates notes and taking credit for conceiving of the note choice and placement, or telling an AI we want a major sprightly theme and then calling ourselves its composer because we made the initial request. I don’t want to have to do less - I want to be able to do more in an easier way.

I don’t want AudioModeling’s cello to play itself - I like playing; I’m a musician. I want it to use better IR’s for more wood sound, and model bow attacks better, and know how to dampen or resonate strings better, and do it with less CPU (I can dream), but I don’t want it to decide when I want vibrato (it doesn’t) or in what way I want to play sfz. That is on me. I didn’t spend that money on the VI so it would play itself but so I could play it in a way that other instruments won’t allow.
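To make the delay-compensation point concrete: a legato transition takes time before the target note speaks, and that time differs per legato style, so a note-on has to be sent early by a per-articulation amount - which is exactly why it needs host-sequencer integration. A minimal sketch, with made-up articulation names and delay values (no real library’s figures):

```python
# Hypothetical per-articulation transition delays, in milliseconds.
LEGATO_DELAY_MS = {
    "normal_legato": 120,
    "fast_legato": 60,
    "portamento": 250,
    "non_legato": 0,
}

def compensated_start(score_time_ms: float, articulation: str) -> float:
    """Shift a note-on earlier so the note *speaks* on the beat.

    A host would have to apply this per note, per articulation -
    a static track delay can't line up mixed legato styles.
    """
    delay = LEGATO_DELAY_MS.get(articulation, 0)
    return score_time_ms - delay

# A note written 1000 ms into the bar, played with portamento,
# must be transmitted 250 ms early:
print(compensated_start(1000, "portamento"))  # 750
```

The point of the sketch is the lookup, not the numbers: as soon as a phrase switches between legato styles, the offset changes note by note, which a fixed track delay cannot express.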
So one could go either the route of snapshot realism - good-sounding recordings of well-chosen articulations used as well as possible - or faithfulness to musical intent, where the instrument goes where you tell it, to the possible detriment of sounding as real as a recording of an instrument doing that thing.
But to provide a little bit of context, although I dearly love the AudioModeling solo strings, and how much fun they are to play, I also wish they sounded a bit better, which is why I’m also looking at a few other traditional solo instrument libraries - because sometimes a nice recording of the instrument doing things it does is all that’s needed.
In my shopping and research…I like the general character of the Cinematic Studio Strings solo instruments, although I’m not fully convinced that they would be as overall responsive or dry as I would like. But ultimately, since nothing is as responsive as the AudioModeling stuff (except potentially the SampleModeling instruments), perhaps what I’m really looking for is something that sounds realistic in a certain setting. If what one needs is a little bit of first chair, then a pre-recorded library with at the very least the ability to decide how much vibrato is present should fit the bill.
Spitfire has more thoroughly-sampled solo strings, but a) you have to want that sound and b) you have to be willing to program it a bit and c) you have to like their GUI (to a certain extent - see below).
The venerable VSL solo strings have a myriad of articulations and moves and are also dry as a bone. Not as responsive as some, but I’ve cheated with breath controller on many occasions. Probably the most programming required of any, but tons of moves and articulations, well-intonated, if perhaps a little restrained emotionally.
The first chairs that come with the 8Dio Anthology/Adagio are not much to write home about. I like the 8Dio sound but nothing they make feels particularly agile to me and I find their programming incomplete and their recordings sometimes shy of the right performance - and I find them utterly shameless about saying those shortcomings add to the “humanity” of their libraries. Honestly. I’d tell those humans to play it again.
My awareness of EastWest’s libraries ended at Hollywood Strings and Brass. I personally detest the PLAY engine and how it interacts with other instruments (mostly Kontakt, but VI Pro to a lesser extent) that also want to allocate RAM for streaming - and I find their programming sloppy and cumbersome. Other developers now offer far more, with greater nuance - though EastWest gets points for the first everything-we-make subscription, which is a fabulous starting point for many people if that’s all they use or (in my experience) if PLAY is quarantined to its own computer.
Someone with more direct experience with the other libraries (Chris Hein, Cinesamples, Orchestral Tools First Chairs, Emotional Cello and what have you) could offer insight on them, but they all seem to be only as good as the setting they’re used in - if you need the things they do, they can be part of a palette. None seem overly flexible to me, but I’m happy to be shown otherwise.
Also, to me a huge deal with solo strings is the kind of vibrato they record. I like having a good florid one available. Not everyone records that. Demos reveal it.
So regarding GUI’s - I will absolutely own that a good GUI and articulation approach can really turn my head - and that things get more use when they are good to work with, no matter how they sound. (I remember years ago when my two partners and I got a new giant plasma TV for video, and after setting it up and watching some things, we all felt like it made our music sound better. :D)

And I will also own that I bought the Spitfire SSO and then barely used it because I found the interface to be so tedious and the lack of all-encompassing (i.e. including legatos with other articulations) patches to be a great impediment to how I wanted to work with it. (I have other complaints about how I wanted the close mics to stand on their own more, but the library itself sounds pretty good.)

So then… I found out about MIDIKinetics’ Composer Tools Pro (US$79) template for Lemur. I’d built Lemur templates before for other libraries, but CTP is beautifully organized and very powerful, and has the added feature of being able to recall instrument-specific presets when that instrument’s track is enabled in your sequencer - and in Cubase it does it automatically without even pressing a “recall” button. So that’s great - but also the template comes with an app that converts Cubase expression maps into presets for CTP. So I went and bought Babylonwaves’ €49 expression map package that has virtually every available library in it, and have begun importing and implementing them as needed. So I load Kontakt, and load four Spitfire patches per instrument - the core techniques, decorative techniques, legato performance, and Sul G legato patches - and assign them all to one midi channel, and CTP now has a preset that already knows what they are called and has labeled keyswitches for everything (and they can be on keys or on buttons, however you prefer to look at it).
And I took ten minutes and assigned faders and buttons to the standard Spitfire functions, like mic levels, vibrato, tightness, expression and so on, and copied those assignments to every Spitfire preset (you can do them all at once), and now I have instruments I want to use, that allow me to make, say, mix adjustments, or purge a mic, for all articulations across four patches all at once by moving one virtual fader. And one can have any fader or switch send a default position when the preset is active, which means that when you select an instrument you’d been doing dynamics for you don’t have to wobble a controller in order to hear it. Which is brilliant. I’ve also built presets for the SampleModeling brass (took about five minutes), which make them much more usable than they already were. Next up for me to make will be presets for the AudioModeling woodwinds, which need a couple more fingers and limbs to control every parameter that needs it. And I’m not squinting at Kontakt’s tiny font anymore, so yay for that.
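At the MIDI level, what one of these presets boils down to is simple: named articulations mapped to keyswitch notes, and named functions mapped to CC numbers, plus default fader positions fired on recall. A hedged sketch of that idea - every note and CC number here is illustrative, not Spitfire’s or MIDIKinetics’ actual assignments:

```python
# Illustrative mappings only; real keyswitches and CCs vary per library.
KEYSWITCHES = {          # articulation name -> MIDI note number
    "long": 24,          # C0 in one common octave convention
    "staccato": 25,
    "legato": 26,
    "sul_g_legato": 27,
}
CC_FUNCTIONS = {         # function name -> MIDI CC number
    "dynamics": 1,       # mod wheel
    "vibrato": 21,
    "close_mic_level": 22,
}

def preset_events(articulation: str, **cc_defaults: int):
    """Build the MIDI events a preset recall would fire: one keyswitch
    note plus a default position for each assigned fader, so you don't
    have to wobble a controller to hear the instrument."""
    events = [("note_on", KEYSWITCHES[articulation], 127)]
    for name, value in cc_defaults.items():
        events.append(("cc", CC_FUNCTIONS[name], value))
    return events

# Recall the legato patch with the mod wheel parked at a sensible default:
print(preset_events("legato", dynamics=80))
```

Because all four patches share one MIDI channel, a single keyswitch or fader move reaches every articulation at once - which is exactly why one virtual fader can adjust mic levels across the whole instrument.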
This is what happens when I have a morning off and school has started so the house is quiet.