Orchestrating with modeled instruments

While researching modeled instruments in the context of orchestral scoring, I ran into this thread, in which the original poster (Rohan De Livera) explains his process for orchestrating exclusively with modeled solo instruments, along with examples:

As it seems that the whole modeling thing is still pretty experimental, and even more so in classical style contexts, I figured it might be an idea to create a thread for the gathering of, and discussion around, this particular subject.

2 Likes

Sounds like too much of a headache to me. Seems like proof that live players are the way to go. :grin: I think just paying a real player to add into your sample mix is good enough for me, or using Dorico Pro with NotePerformer is realistic enough, and Steinberg is continually improving it, so just write the score and hopefully get some live players to perform it.

1 Like

Yeah, as it is, that’s pretty much what it comes down to: all that effort, for what is essentially still the old synthesis vs. sampling trade-off of expression vs. quality. And that is provided one has the “feel” and skill to create good performances one way or another, which is basically what one hires musicians for.

However, the quality and realism of modeled instruments is constantly improving, and if one can spend countless hours learning to play real instruments, why not virtual instruments? Of course, I can certainly see why people focused strictly on composing and orchestration are not willing to invest that kind of time in any sort of instrument, but other than that, there’s still a problem here: Even as someone crazy enough to pick up the violin, cello, and opera at 40+, I’m kind of reluctant to invest countless hours in learning to play expressive MIDI controllers at this point. The instruments don’t quite seem to be up to it yet - but there’s a bit of a Catch-22 situation here; how will we ever know what these instruments can actually do, if hardly anyone bothers learning to play them properly?

2 Likes

Really good points. For my part, I’m just a composer/musician and artist. Not that I’m afraid of technology; I’m just not interested in needing a computer science degree on top of my music education to be able to create. I just want to focus on the creation rather than the programming; sampling and all the rest is enough to keep my hands full!

2 Likes

I don’t see the difficulty here. I’ve always wanted better control, more dynamics and so on - I bought a VL1m years ago to pursue that, and even in its infancy that tech greatly inspired me. Since then I’ve always built patches, even with traditional sampled libraries, that use a breath controller, and when SampleModeling released their brass I was delighted. Other than percussion, guitar and world instruments and a few VSL and Spitfire sounds, virtually all of what I use is modeled instruments. I do a cartoon series, and there is nothing - nothing at all - as responsive, agile or flexible as modeled instruments. I’m on a deadline, of course, but I don’t feel like they slow me down - I feel like I can play in a muted bass trombone part, have it be funny, and be done, without editing. But this is something I’ve wanted for years, so it may be that I’m motivated to use them in ways that others might not be. I know what they will and won’t do, and supplement them where needed, and I love the result. And not a word from clients about how things sound except “great”. What would slow me down would be if things didn’t sound convincing or emotive, or if I had to change what I wanted to write because the library wouldn’t do it.

2 Likes

In related news, I acquired and installed SWAM Solo Strings yesterday. First impressions are pretty much what I expected: the “bite” from the bow is there if you just play realistically, and the sound is basically nice, but it seems overly “close-mic’ed” or something, so it will need some processing. Finally, the built-in vibrato still sounds very mechanical and “uninspired” to my ears, so I’ll be looking into other ways of doing that.

But first, controllers and setup…

The single best thing you can do for the vibrato is something you can’t do with traditional libraries. I have a TEControl breath controller that senses front/back and left/right head movement (nod and tilt), and I map the nod to vibrato speed. With a small head movement I can speed it up for intensity or slow it down. You could do it with faders too, of course. But do it.
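
If anyone wants to try the same mapping without dedicated routing in their DAW or controller software, here’s a minimal Python sketch of the idea using mido (with the python-rtmidi backend). The CC numbers are only placeholders - both the TEControl software and the instrument let you pick your own assignments - so treat this as a starting point, not a recipe:

```python
# Minimal sketch: rescale the controller's "nod" CC into a vibrato-rate CC.
# NOD_CC and VIB_RATE_CC are placeholders - set them to whatever your
# controller sends and whatever your instrument's vibrato speed is mapped to.
import mido

NOD_CC = 34        # assumed: nod axis as configured in the TEControl software
VIB_RATE_CC = 21   # assumed: vibrato-rate mapping in the modeled instrument

MIN_OUT, MAX_OUT = 30, 110  # keep the vibrato rate inside a musical range

def rescale(value, lo=MIN_OUT, hi=MAX_OUT):
    """Map an incoming 0-127 CC value into a narrower output range."""
    return lo + value * (hi - lo) // 127

with mido.open_input() as inport, mido.open_output() as outport:
    for msg in inport:
        if msg.type == 'control_change' and msg.control == NOD_CC:
            outport.send(msg.copy(control=VIB_RATE_CC, value=rescale(msg.value)))
        else:
            outport.send(msg)  # pass everything else through untouched
```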

1 Like

Been looking for something like this to use as a bow controller for modeled strings…

Doepfer actually has one - the A-198 - which has both position and pressure, but it’s intended for analog modular, and comes with a Eurorack interface module. There was a MIDI+CV version, R2M, but that’s no longer available, as the MCU they used went out of production. :-/

There is the Sensel Morph, though, which is apparently X-Y + pressure. Excellent!

Then it dawned on me… I have a Wacom pen screen! X-Y + pressure as well, but also adds 2D tilt sensing for the stylus. There seem to be some solutions for using these things for MIDI:

Update:
Unfortunately, it looks like most tablet-to-MIDI utilities are gone, abandonware, and/or Mac-only, but I found some potentially useful stuff on GitHub. At worst, I’ll have to roll my own, I suppose.
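
For the roll-my-own route, a bare-bones bridge wouldn’t have to be much more than this rough Python sketch. It assumes a Linux setup where the pen shows up as an evdev device (python-evdev for input, mido for output), and the device path and CC assignments are placeholders to be adjusted for your own tablet and instrument mappings:

```python
# Rough tablet-to-MIDI bridge sketch: read the Wacom pen's absolute axes via
# evdev and forward them as MIDI CCs. Device path and CC numbers are placeholders.
import mido
from evdev import InputDevice, ecodes

TABLET = InputDevice('/dev/input/event5')  # assumed: the Wacom pen device node

# Which pen axis drives which CC (all placeholder numbers):
AXIS_TO_CC = {
    ecodes.ABS_X: 11,         # pen X -> e.g. bow position / expression
    ecodes.ABS_PRESSURE: 2,   # pen pressure -> e.g. breath / bow pressure
    ecodes.ABS_TILT_X: 1,     # stylus tilt -> e.g. modulation
}

def to_cc_value(axis, raw):
    """Scale a raw axis reading into 0-127 using the range the driver reports."""
    info = TABLET.absinfo(axis)
    span = max(info.max - info.min, 1)
    return max(0, min(127, (raw - info.min) * 127 // span))

with mido.open_output() as outport:
    for event in TABLET.read_loop():
        if event.type == ecodes.EV_ABS and event.code in AXIS_TO_CC:
            outport.send(mido.Message('control_change',
                                      control=AXIS_TO_CC[event.code],
                                      value=to_cc_value(event.code, event.value)))
```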

1 Like

Maybe Expressive E’s Touché?

1 Like

Well, yes, I have one, and I’m going to try using it for bow control in modeled strings. Technically, its output could be translated to position + weight, but it only has a tiny fraction of the length of a violin bow, so I’m not sure “bowing mode” would work - but it might work in the “normal” dynamics mode.
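
To illustrate what that “position + weight” translation could look like, here’s a speculative Python/mido sketch: the two vertical shifts (whatever CCs the Touché’s Lié software is set to send; the numbers below are placeholders) are treated as a signed bow speed and integrated into a position CC, while their combined depth stands in for bow weight. It only updates when a CC arrives, so it’s a starting point rather than a finished bridge:

```python
# Speculative "position + weight" translation. TOP_CC/BOTTOM_CC are placeholders
# for whatever the Touché sends; BOW_POS_CC/BOW_WEIGHT_CC are placeholders for
# whatever the instrument is mapped to receive.
import mido

TOP_CC, BOTTOM_CC = 16, 17         # assumed: Touché vertical shifts
BOW_POS_CC, BOW_WEIGHT_CC = 20, 2  # assumed: instrument-side mappings
SPEED_SCALE = 0.05                 # how quickly a full shift "covers" the bow

position = 64.0   # running bow position: 0 = frog, 127 = tip
top = bottom = 0  # latest raw shift values

with mido.open_input() as inport, mido.open_output() as outport:
    for msg in inport:
        if msg.type != 'control_change':
            outport.send(msg)
            continue
        if msg.control == TOP_CC:
            top = msg.value
        elif msg.control == BOTTOM_CC:
            bottom = msg.value
        else:
            outport.send(msg)
            continue
        # Difference of the two shifts acts as signed speed, integrated into position.
        position = min(127.0, max(0.0, position + (top - bottom) * SPEED_SCALE))
        outport.send(mido.Message('control_change', control=BOW_POS_CC,
                                  value=int(position)))
        # Total shift depth stands in for bow weight/pressure.
        outport.send(mido.Message('control_change', control=BOW_WEIGHT_CC,
                                  value=min(127, top + bottom)))
```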

1 Like