How do you think string/orchestral libraries can improve?

Hey all, so being into sampling myself, I have been pondering a question. Like most of us, I have copious amounts of string samples that I’ve accumulated over the years. Everything from tacky digital samples all the way up to modern high-performance sampled instruments… I love how they tell a story of my orchestral journey when I look back.

I’m sure, though, that I’m not the only person who has noticed that the newer sample libraries coming out are starting to hit a plateau. The upper-class sample libraries that were around ten years ago have either been overtaken or are at the same level as newer libraries, so we can probably agree that it’s getting harder to distinguish between libraries other than by the room sound within a recording, which in itself isn’t bad.

I have been wondering what the next logical step is, though (other than getting rid of keyswitches, that is). What would we like to see in a library?

For me there’s one thing that stops these newer libraries from sounding exactly like a live orchestra… and that is mic bleed. I’ve emulated this concept in my DAW a few times and it’s crippled my computer :sweat_smile:. But it would be amazing to have a control that lets you side-chain your instruments very quietly into the background of your sound, so that the stereo field is emulated as well. (I can let you know how I emulated this if you’re interested.)
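To make the mic-bleed idea concrete, here’s a minimal offline sketch in Python/NumPy (not the actual DAW routing described later in the thread): every track gets a tiny, slightly delayed copy of every other track mixed in. The `bleed_gain` and `delay_ms` numbers are purely illustrative assumptions.

```python
# Minimal sketch of the mic-bleed idea: mix a small, slightly delayed
# copy of every other instrument into each track. This is NOT a real
# DAW routing; bleed_gain and delay_ms are illustrative guesses only.
import numpy as np

def add_mic_bleed(tracks, sr=48000, bleed_gain=0.03, delay_ms=12.0):
    """tracks: dict of name -> mono float array, all the same length."""
    delay = int(sr * delay_ms / 1000.0)
    out = {}
    for name, dry in tracks.items():
        bleed = np.zeros_like(dry)
        for other_name, other in tracks.items():
            if other_name == name:
                continue
            # delayed, heavily attenuated copy of the other instrument
            bleed[delay:] += bleed_gain * other[: len(other) - delay]
        out[name] = dry + bleed
    return out

# Example: three fake "sections" of quiet noise, one second each
sr = 48000
tracks = {s: np.random.randn(sr) * 0.1 for s in ("violins", "violas", "celli")}
with_bleed = add_mic_bleed(tracks, sr=sr)
```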

What do you think would be the one thing you’d like to see?

1 Like

I want to be able to control EVERYTHING. :stuck_out_tongue:
That is why I simply cannot see samples being the future, as a sample is like a static picture. You get perhaps 3 legato timings, and that’s it. I want to be able to shape any curve in music: legato, vibrato, glissando. Everything should be shapeable into any curve, or synced to a beat. For example, having the entire orchestra do a riff almost like a guitar slide and land on a beat! In essence, I want a superb controller connection that lets me shape the entire performance. It will probably not be a standard MIDI keyboard, but something like ROLI or another future controller built on MIDI 2.0.
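Just to illustrate the kind of curve shaping meant here, a rough sketch: generate a glissando-style pitch curve that resolves exactly on the next downbeat. The `gliss_curve` helper is hypothetical, and so is the instrument that would consume per-tick pitch offsets like this; no current library exposes it this way.

```python
# Illustrative only: a slide/glissando curve that "lands on a beat".
# Nothing here reflects an existing controller or library API.
import numpy as np

def gliss_curve(start_semitones, bpm=120, beats=2, rate_hz=200, shape=2.0):
    """Return (times, semitone offsets) sliding from start_semitones to 0,
    hitting 0 exactly on the downbeat `beats` beats from now."""
    duration = beats * 60.0 / bpm
    t = np.linspace(0.0, duration, int(duration * rate_hz))
    # ease-out curve: fast at first, settling onto the target pitch
    progress = 1.0 - (1.0 - t / duration) ** shape
    return t, start_semitones * (1.0 - progress)

times, offsets = gliss_curve(start_semitones=-12, bpm=90, beats=1)
print(offsets[0], offsets[-1])   # starts an octave below, ends exactly at 0.0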

2 Likes

I clicked on this post to say one thing:

Mic bleed.

But then I noticed you already highlighted it!

I was thinking about it just the other day though - wondering whether anyone has set up a template where they send a tiny amount of every instrument to every other instrument (if that makes sense?), and whether it sounds any more realistic… or just plain messy!

Beyond that, the only other thing I can think of that would REALLY improve orchestral sample libraries is the use of Artificial Intelligence to “interpret” how a line should be played and to recreate it as a real player would.

4 Likes

I have listened to a lot of rock/metal over the years, and I want to be able to play like that on orchestral sample libraries. Meaning, creating riffs with all those nuances, ornaments and details that make a great riff stand out on an electric guitar. Right now that is a complete impossibility with sample libraries. I get a bit annoyed when people say sample libraries sound so good that they can replace real players. Sure, if you only use simple long legato melody notes, or chords! :stuck_out_tongue:

But if you want to create a unique line, a completely original performance, then all those nuances a real player can add are necessary to fulfil your creative vision as a composer! :slight_smile:

3 Likes

There are plenty of libraries that do sound stunningly real and cover nearly every variation, Mikael, but I have found this only applies to soloist instruments. As soon as the ensemble gets bigger it becomes near impossible to include these unique variations, because the number of players smooths out the timbre, which is why you don’t get the detail with bigger sections, so I agree somewhat in that sense. I think this is why manufacturers are starting to become more niche, so that the library we buy becomes more specialised to a certain task. I should also imagine that a computer powerful enough to process the sheer number of samples it would take to make an ensemble like that work would be, at the moment, nearly impossible. We are talking hundreds of thousands of takes to get that sort of detail AND sound like a fresh performance.

2 Likes

My dream would be being able to write music notation with EVERY possible articulation and performance technique you can write in sheet-music form, and then have the software instrument translate it perfectly into a performance that is impossible to tell apart from a real one. :stuck_out_tongue:

Now, we are talking long-term future hopes here; I don’t believe anything like this will be coming in the next couple of years =/

1 Like

Not only with strings/orchestral instruments btw. But guitars, ethnic instruments…any instrument really! :stuck_out_tongue:

3 Likes

I don’t have a template, but this is the process you need to go through to get this sort of processing. First you need to set up a master aux track. Every other aux track you create needs to be fed into this one, which then routes to your stereo out.

Next you need an aux track for each section you have in your piece. In an orchestra you would include two per section (one for each side of the stage). Treat each section of the string family as its own section, as there are so many of them. I found it’s best to pass the second through the first to save on processing power. You should end up with around 18 aux tracks linked to the corresponding sections. Label and colour-code them clearly, as this is where it gets complicated.

Now create a further two aux sends and route these through your master aux… you now have 21 aux channels. These two aux channels are for late reflections; they will be labelled left verb and right verb.

Set up how much of each instrument you want sent to your aux tracks so that you get an even amount sent to each aux, and pan them around the stereo field from the section’s perspective. Now that you’ve done this, add a slapback delay to each aux track… the closer the aux is to the section in the stereo field, the more of a delay you’ll need. Anywhere between 8 and 37 ms for slapback. Also make sure it’s 100% wet. This will give you a more genuine stereo field.

Now, on your master aux, add a wet verb with no slapback.

Finally, add two different verbs with a high slapback on the stereo pair of auxes coming from the master aux that we spoke about earlier.

This is as close as you’ll get… plus this is much harder to type than to visually see :sweat_smile:
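Since this is indeed easier to see than to read, here’s a rough offline approximation of the signal flow in Python/NumPy. It is not the Logic template itself; the gains, delay times and the `toy_verb` stand-in for a reverb plugin are placeholder assumptions, just to show the routing order (section aux → slapback → master aux → verb, plus the two late-reflection verbs).

```python
# Offline approximation of the routing described above. NOT the Logic
# template; gains, delay times and the toy "verb" are placeholders.
import numpy as np

SR = 48000

def slapback(x, delay_ms, sr=SR):
    """100% wet single-tap delay (the 8-37 ms slapback on each section aux)."""
    d = int(sr * delay_ms / 1000.0)
    y = np.zeros_like(x)
    y[d:] = x[: len(x) - d]
    return y

def toy_verb(x, decay=0.3, taps=8, spacing_ms=29, sr=SR):
    """Stand-in for a reverb plugin: a few decaying echoes."""
    y = np.copy(x)
    for i in range(1, taps + 1):
        d = int(sr * spacing_ms * i / 1000.0)
        if d < len(x):
            y[d:] += (decay ** i) * x[: len(x) - d]
    return y

# Section auxes: each gets its own slapback, then all sum into the master aux.
sections = {
    "violins_1": (np.random.randn(SR) * 0.05, 9.0),    # (signal, slapback ms)
    "violas":    (np.random.randn(SR) * 0.05, 18.0),
    "celli":     (np.random.randn(SR) * 0.05, 30.0),
}
master = np.zeros(SR)
for sig, ms in sections.values():
    master += slapback(sig, ms)

# Master aux gets a wet verb; two extra auxes act as left/right late reflections.
master_out = toy_verb(master)
left_verb  = toy_verb(slapback(master, 45.0), decay=0.4)
right_verb = toy_verb(slapback(master, 55.0), decay=0.4)
```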

4 Likes

Yeah this definitely applies with everything in terms of quality and usability. Totally agree

1 Like

Wow! What DAW are you using? Would you mind sharing (or even selling) a template with this setup already in it?

1 Like

I’m using Logic. The template would change slightly each time you use it, but I’d be happy to make a template for everyone to use. I’ll post it on here in the next month. Really busy atm, so I’ll have to do it as I get around to it.

That would be awesome! No rush. Thanks for being willing to share it! :raised_hands:t5:

1 Like

Sampling will always have some limitations because samples are just snapshots. You can create larger and larger libraries with more articulation samples and more microphones, but that ultimately becomes harder to manage. Eventually, I predict that sound-modeling libraries will become increasingly sophisticated, enough so that your ears will not hear a difference between modeled and sampled snapshots. Then you may get closer to being able to “dial in” many articulations and room sounds.

3 Likes

I agree Ron, this is what I hope for. The snapshot aspect is what is bugging me a lot. I want “analog”-like flexibility and tonal/expressive variation with software instruments, just like real instruments in the acoustic world.

1 Like

To make the perfect library with every possible take and articulation… and every possible mic and angle… you’d end up with petabytes… or whatever… and you’d need a spermatozoon**** (super machine) to run it.

I believe we just need a basic 3 to 5 takes per mic: a good sampling for pitch and a good one for textures.

Everything else will be done by algorithms… so the perfect library should not be so heavy (maybe a terabyte), but it will have such an engine that it can create the possible variations in the moment, and you’ll be able to control their amount, so whenever you play it will sound slightly different, with human, real-world variation, as if real players were doing it.

It will also be possible to add a controllable amount of “accidents” (not referring to “accidentals”), meaning “human-like” errors and nuances in the playing… so it will be pleasurable to listen to… analogue, real-world-like harmonic frequencies that massage human hearing, as a real instrument does.
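As a tiny illustration of that “controllable amount of human variation” idea, here’s a sketch that nudges each note’s timing, pitch and dynamics by a random amount scaled by a single knob. The note format and the ranges are assumptions for illustration only, not any existing library’s engine.

```python
# Illustrative humanization sketch: one 'amount' knob scales random
# timing, tuning and dynamics drift. Not based on any real product.
import random

def humanize(notes, amount=0.5, seed=None):
    """notes: list of dicts with 'start' (s), 'pitch' (semitones), 'velocity' (0-127)."""
    rng = random.Random(seed)
    out = []
    for n in notes:
        out.append({
            "start":    n["start"] + rng.gauss(0.0, 0.015 * amount),   # timing drift
            "pitch":    n["pitch"] + rng.gauss(0.0, 0.05 * amount),    # slight detune
            "velocity": max(1, min(127, round(n["velocity"] + rng.gauss(0.0, 8 * amount)))),
        })
    return out

phrase = [{"start": i * 0.25, "pitch": 60 + i, "velocity": 90} for i in range(8)]
print(humanize(phrase, amount=0.8))  # a slightly different take every run
```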

Also, it will have AI to help the user develop ideas or fill spaces.

All this could be done right now; we have the technology. In another life I would do it… at this point, I would guess that the creation of this “library” is in the hands of some 15-year-old musician-programmer. Or it may even already be a reality and we don’t know it yet: there is a library that I believe has some of this incorporated, but I haven’t had the time to test it yet, though it looks like it: https://audiomodeling.com/solo-strings/swam-violin/

If anyone here has it, please review!

Good day to you all! :+1:

**** hahaha that was autocorrect at work, a typo… hahahaha it came out funny, but I didn’t mean that :slight_smile:

2 Likes

I agree that it’s not realistically possible to cover it all with traditional sampling technology. Articulations aren’t even “digital” in most cases; for example, violin bowing goes all the way from legato through ricochet, and there’s even a traditional technique where you start with light bounces and allow the bow to gradually settle. One cannot possibly record and manage all possible permutations of that!

I’m also of the opinion that if one believes pure physical modelling is a viable alternative for realistic results, one is either a genius of historical magnitude, or one does not fully understand the problem. We don’t even understand the more subtle details of what makes a good violin sound the way it does yet. How are we going to accurately model it…?

So, I think Aaron Venture and maybe some others are on to something here. Basically, record the sound from real instruments, but model the behavior. The best of both worlds, and realistically viable enough that we’re already hearing some quite impressive results.

I’m thinking that maybe the next step is to move from the time domain to the frequency domain, possibly with some abstraction layer such as “oscillator banks” (massively additive synthesis) or something like that, and then build resynthesis-based modelling algorithms on top of that.
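For what that could look like in its most naive form, here’s a toy resynthesis sketch: estimate the strongest partials of one frame with an FFT, then rebuild the sound from a bank of sine oscillators. Purely illustrative; a real system would have to track partials over time and actually model how they behave.

```python
# Toy additive resynthesis: analyse one frame's strongest partials,
# then rebuild it with a bank of sine oscillators. Illustrative only.
import numpy as np

SR = 48000

def analyse_partials(frame, sr=SR, n_partials=20):
    """Return (freqs, amps) of the strongest FFT bins in one windowed frame."""
    windowed = frame * np.hanning(len(frame))
    spectrum = np.fft.rfft(windowed)
    freqs = np.fft.rfftfreq(len(frame), 1.0 / sr)
    mags = np.abs(spectrum) / (len(frame) / 2)
    idx = np.argsort(mags)[-n_partials:]
    return freqs[idx], mags[idx]

def resynthesize(freqs, amps, duration, sr=SR):
    """Additive resynthesis: one sine oscillator per partial."""
    t = np.arange(int(duration * sr)) / sr
    out = np.zeros_like(t)
    for f, a in zip(freqs, amps):
        out += a * np.sin(2 * np.pi * f * t)
    return out

# Example: analyse a synthetic 'string-like' tone and rebuild it
t = np.arange(SR) / SR
tone = sum((0.5 / k) * np.sin(2 * np.pi * 220 * k * t) for k in range(1, 8))
freqs, amps = analyse_partials(tone[:4096])
rebuilt = resynthesize(freqs, amps, duration=1.0)
```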

I don’t know… I’m not a total stranger to coding or DSP, obviously, and I might have a proper go at this stuff eventually. :slight_smile:

1 Like

Yes, it might not be realistic today, but as we already have fantastic modeling for electric bass (Modo Bass) and piano (Pianoteq 6), which I personally have a very hard time telling apart from a “real instrument”, I still have hopes for this technology to evolve in the coming years. Perhaps I am too optimistic. :stuck_out_tongue:

1 Like

I think it’s theoretically possible to model any instrument accurately. “If it can happen, it can be modeled.” :wink:

The problem is that some are much more difficult than others, and every single one will need its own research, both on the real-instrument side and on the modeling side. If there’s enough interest from developers and users, it will happen, but I think less popular instruments are unlikely to be modeled properly, as long as humans need to do it.

Now, AI-assisted modeling might not be all that far off, and that might turn things around quite a bit…

1 Like