New Macs are about to come out

It’s important to remember that realtime systems differ significantly from basically everything else computers are used for, and unfortunately this problem space has very low priority outside the control engineering domain. Even where the hardware itself could theoretically do what you expect, it will most likely not be possible, because it was never a design criterion for the drivers and the OS.

For example, it’s perfectly possible to do rock-solid sub-millisecond-latency DSP on PC hardware - but only on a properly configured machine with an RTOS, and with drivers, software and plugins designed for the job. Any misbehaving component will break it.
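To put a number on “sub-millisecond”: latency in a block-based audio chain is basically buffer size divided by sample rate. A minimal sketch of that arithmetic, with illustrative figures rather than measurements:

```python
# Rough sketch of the audio buffer arithmetic behind sub-millisecond DSP.
# Sample rate and buffer sizes below are illustrative, not measured values.

def buffer_latency_ms(buffer_frames: int, sample_rate_hz: int) -> float:
    """One buffer's worth of audio expressed in milliseconds."""
    return 1000.0 * buffer_frames / sample_rate_hz

for frames in (32, 64, 128, 256):
    print(f"{frames:>4} frames @ 48 kHz -> "
          f"{buffer_latency_ms(frames, 48_000):.2f} ms per buffer")

# 32 frames @ 48 kHz is ~0.67 ms per buffer, so the whole chain (driver,
# host, plugins) has to finish every single block inside that window.
# One late buffer anywhere in the chain is an audible dropout.
```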

As for disk I/O (SSD or otherwise), the situation is even worse, as no part of those subsystems is designed to guarantee any sort of usable worst case latency in this context. A sampler needs its data within (depending on host configuration) a few milliseconds after you trigger a note, and no disk subsystem in existence can reliably provide that. They’re designed and optimized for low average latency, and high sustained throughput, which are not even particularly important to this use case.
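That’s also exactly why preload buffers exist: the preloaded head of each sample has to cover playback until the disk actually delivers the rest, so it’s sized against worst-case I/O latency, not the average. A back-of-the-envelope sketch, with assumed latency figures for illustration:

```python
# Back-of-the-envelope sketch of why samplers preload the start of every
# sample: the preloaded audio must cover playback until the stream arrives.
# The latency figures are assumptions for illustration, not measurements.

import math

SAMPLE_RATE_HZ = 44_100
BYTES_PER_FRAME = 2 * 3          # stereo, 24-bit

def preload_needed(worst_case_disk_latency_ms: float) -> tuple[int, int]:
    """Frames (and bytes) of preload needed to ride out one worst-case disk stall."""
    frames = math.ceil(SAMPLE_RATE_HZ * worst_case_disk_latency_ms / 1000.0)
    return frames, frames * BYTES_PER_FRAME

for latency_ms in (1, 5, 30, 100):   # typical SSD vs. occasional worst case
    frames, size = preload_needed(latency_ms)
    print(f"{latency_ms:>4} ms worst-case I/O -> preload {frames} frames "
          f"(~{size / 1024:.1f} KiB per voice)")

# A note triggered "now" plays from the preload; if the disk ever takes longer
# than the preload covers, the voice starves - which is why average latency
# doesn't matter here and worst-case latency does.
```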

That said, with fast SSDs and extreme bandwidth between all major components, you “should” be able to reduce the default preload buffer size a fair bit in Kontakt and the like - provided the samplers and libraries actually work correctly if you change those settings. I’ve not had much luck with that, as some libraries break in mysterious ways.
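For a sense of why anyone bothers shrinking the preload at all: that per-sample buffer gets multiplied by every mapped zone in a library, so it dominates the RAM footprint of big templates. A sketch with hypothetical numbers (the per-zone figure and zone count are assumptions, not actual Kontakt defaults):

```python
# Illustrative sketch of why preload size matters: multiply it by the number
# of mapped samples in a big library. Per-zone size and zone count are
# assumptions for illustration, not any sampler's actual defaults.

def library_preload_ram_gib(zones: int, preload_kib_per_zone: int) -> float:
    """Total RAM the preload buffers of one loaded library would occupy."""
    return zones * preload_kib_per_zone / (1024 * 1024)

ZONES = 50_000   # a large orchestral template, hypothetically

for preload_kib in (60, 24, 12, 6):
    print(f"{preload_kib:>3} KiB/zone x {ZONES} zones -> "
          f"{library_preload_ram_gib(ZONES, preload_kib):.1f} GiB of preload")

# Halving the preload roughly halves the RAM a template needs - but only if
# the library still streams reliably at the smaller setting, which is exactly
# the part that tends to break.
```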

So, in short, you’ll NOT do away with the preload buffers under any realistic circumstances, and even being able to reduce them is somewhere between “devs need to fix their libs” and “won’t work due to worst case I/O latency” - so I wouldn’t hold my breath.


Yeah, I totally agree, especially when it comes to memory latency. That system is actually very good for storage, because that’s what it’s optimised for, but data transfer across many thousands of samples just crushes it. This is definitely a bottleneck. The best it’s optimised for at the moment is around 0.6ms for most mainstream applications, which is actually great on paper… but as soon as you start to include real-time editing, that often slips to around 30ms, which isn’t advertised… sometimes it can get worse than that too. This is all down to optimisation, as you say; it’s a scary concept when you think that our entire industry is built on this.

Heard an interesting talk about file transfer when the latest iMac came out. They heavily focused on its backup speed and urged everyone to upgrade their LAN, just on the prospect of going native with their storage. Makes a lot of sense, but even the upgrade isn’t quite good enough.

Also had a chat the other day with an IT tutor I work alongside in the department, and he reckons Apple’s approach will soon make its way to the motherboard, creating a cube-shaped board with even shorter transfer distances, increasing efficiency again. Thought that was a fun concept that I’d love to see someone attempt.


How much will the M1 cost? Can we afford it? :sweat_smile:

The base model Mac mini is £700, I think.

I found a good comparison video of the new MBP with the M1 chip vs the MBP 16 with an Intel 6-core processor… very interesting.

Based on what I see, I think Apple will get rid of the Intel-based Macs within 3 years or so… it really doesn’t make any sense for them to keep them alive. Impressive stuff. Maybe that’s the reason why they stopped producing iPhones for a while :smiley:
