I think what it boils down to is a vague definition of “dry.” Even dry studio libraries have plenty of room sound in them, partly to create a full sound, partly - and this is the important bit - to create spatial definition. This is the kind of input a typical reverb actually needs to sound as we expect, and that kind of reverb setup will not magically fix a bone dry signal.
To further complicate matters, acoustic instruments are not omnidirectional point sources, so even in an anechoic chamber, important elements of the recording setup end up in the sound. Spot mics and pickups in particular can require quite aggressive EQ to sound anything like what the instrument sounds like to the naked ear, and even then they have a strange, surrealistic perspective - much like the raw signal from a modeled instrument. Large diaphragm condensers are certainly not immune to these phenomena either, and they tend to have plenty of character of their own as well. Finally, the “typical” orchestral sound also involves tube pre-amps, analog tape and whatnot, so we’re not exactly comparing the modeled instruments to clinical recordings here.
Anyway, back to the reverb issue: The sound from normal recordings and sample libraries - even dry studio ones - contains a lot more information than just direct sound from the instruments. Even in libraries with spot mics, you can usually hear a hint of the space. Where reverb terminology speaks of dry signal, early reflections, late reflections, tail etc., there’s a key part of the sound that’s assumed to either be undesired (for that in-your-face studio sound), or expected to be in the “dry” signal already, before it enters the reverb.
Meanwhile, synths, modeled instruments, recordings from anechoic chambers and the like - actual dry sounds - have absolutely none of this, so if you want it, you have to simulate that too, before the usual reverb chain.
What you want here is for the “dry” sound to already define the shape of the space immediately around the instrument, and where the instrument is located in that space.
I suppose one could think of it as the intimate relation between (actual) dry sound and “direct reflections” or something; the part of early reflections that are bouncing off objects close to the direct sound path, as opposed to sound bouncing off the back wall, ceiling, and more distant objects. These “direct reflections” mostly affect the color of the sound (comb filter effect), and the left/right phase alignment, which, incidentally, is what the brain primarily uses for determining where a sound is coming from.
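To make the comb filter effect concrete, here’s a minimal sketch (my own illustration, not from any particular plugin) of the frequency response you get when the direct sound sums with a single nearby reflection. A reflection whose path is about one meter longer than the direct path arrives roughly 3 ms late, which carves periodic notches and peaks into the spectrum:

```python
import numpy as np

def reflection_comb_response(freq_hz, delay_s, gain):
    """Magnitude response of direct sound plus one delayed reflection:
    y(t) = x(t) + gain * x(t - delay_s).
    In the frequency domain: H(f) = 1 + gain * exp(-j*2*pi*f*delay_s).
    Peaks occur where the reflection arrives in phase, notches where
    it arrives out of phase - the classic comb filter shape."""
    return abs(1 + gain * np.exp(-2j * np.pi * freq_hz * delay_s))

# Reflection path ~1 m longer than the direct path at ~343 m/s:
delay = 1.0 / 343.0  # ~2.9 ms

peak = reflection_comb_response(0.0, delay, 0.5)            # in phase: boost
notch = reflection_comb_response(1 / (2 * delay), delay, 0.5)  # out of phase: cut
print(round(peak, 3), round(notch, 3))  # 1.5 0.5
```

With a 0.5-gain reflection the response swings between +3.5 dB peaks and -6 dB notches, spaced 1/delay apart in frequency - that coloration is a big part of what “close to a wall” or “on a stage floor” sounds like.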
In practical terms, there are plugins that cover this part, or specialize in it, like VirtualSoundStage or dearVR. A reverb that can be set up for tight, space-defining early reflections can also be useful, as can stereo imaging tools like bx_stereomaker, khs Haas and the like. Some instruments (like SampleModeling and Infinite) have their own IR-based tools that can be used for this, either as a specific feature, or as part of the close mics simulation.
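The Haas-style imaging trick those tools use can be sketched in a few lines: delay one channel of a mono source by a fraction of a millisecond, and the brain reads the interaural time difference as direction. This is a toy illustration, not how any of the named plugins are actually implemented:

```python
import numpy as np

def haas_pan(mono, sample_rate, itd_s):
    """Place a mono signal in the stereo field by delaying one channel.
    Positive itd_s delays the right channel, pulling the image left;
    negative delays the left. Real interaural time differences top out
    around ~0.7 ms, so keep itd_s small to avoid an audible echo."""
    delay_samples = int(round(abs(itd_s) * sample_rate))
    delayed = np.concatenate([np.zeros(delay_samples), mono])
    padded = np.concatenate([mono, np.zeros(delay_samples)])
    if itd_s >= 0:
        left, right = padded, delayed  # right lags -> image shifts left
    else:
        left, right = delayed, padded
    return np.stack([left, right], axis=0)

sr = 48000
click = np.zeros(100)
click[0] = 1.0
stereo = haas_pan(click, sr, 0.0005)  # 0.5 ms ITD: image left of center
print(stereo.shape)  # (2, 124)
```

Combine a couple of these delayed copies at different gains and you’re already most of the way to the “direct reflections” idea above: position plus coloration, before any conventional reverb tail.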