I have read a lot about this and also chatted with @Keith in this forum about the difference between PCIe and external audio interfaces on USB/Thunderbolt. Apparently PCIe has an advantage in direct RAM access. Anyone with experience in this field who can share some input on this topic, please? David, perhaps you know? @olofson
TL;DR: the computer/audio interface communication is only a small part of the problem, and I’m not even sure PCIe is worth the effort, unless you need massive channel counts. Go hardware DSP!
Well, I just had a harsh reminder of the flaky nature of USB audio the other day… The Scarlett glitched out, resulting in a few BSODs, after which I had to remove and reinstall the driver to get audio working at all again, and then reconfigure all applications, as they suddenly no longer recognized the interface.
That said, it’s normally pretty solid, as long as you don’t have other devices on the same root hub port, and I believe most of the issues are due to driver bugs.
As for latency; yes, there’s definitely a difference, as PCI/PCIe has a high speed connection directly to the computer chipset, rather than via USB or 1394 interfaces, which are a bit like network interfaces. Even if the USB/1394 devices may have DMA on the PCI/PCIe side (meaning they read and write data without CPU intervention), there’s still added buffering and a slower serial connection between the hub and the sound interface, which invariably adds latency.
Also note that there are HUGE differences in bandwidth between the generations/versions of USB and 1394/FireWire/Thunderbolt, and that directly affects latency, as we're talking block transfers here - not sample-by-sample. That said, audio is very low bandwidth by modern standards, so this doesn't have nearly as big an impact as one would think! 24 bit stereo at 48 kHz is just 2.3 Mbps, and even USB 2.0 can handle 480 Mbps, so that's about 7 microseconds for transferring 64 samples, corresponding to 1.33 ms of audio. With lots of channels, and/or extreme sample rates, bandwidth is of course going to matter more.
BTW, AFAIK, 1394 explicitly supports realtime devices, whereas USB does not, so 1394 should be better for audio in theory. Don’t know if that actually makes a difference, though, as long as you avoid sharing root hub ports with other devices (cameras, NICs, storage, …) via hubs.
Either way, as long as the hardware is built properly, and the drivers are solid, I don’t think this is the limiting factor. Glitches and dropouts are mostly caused by unreliable scheduling in the OS, and uneven CPU load in the applications/plugins, and for really low latencies, you start feeling the impact of the extremely inefficient interrupt handling of the x86 architecture, as well as cache misses and the RAM bandwidth bottleneck. Even cutting the hardware latency to literally zero would not fix any of that. Mainstream computers, and their operating systems, are simply not designed, or particularly well suited, for low latency audio.
So, if you want REALLY low latency (1 ms or so), without glitches, you need to bypass the computer entirely, and run the realtime processing on a DSP in the sound interface, the way that UAD, Pro Tools HD, and some others do. That’s the only type of solution that can deliver this reliably at this point.
What’s 1394, Thunderbolt?
When you mentioned drivers: I've heard that the actual optimization of the driver code is one of the biggest factors in latency today. RME still has the strongest reputation in the driver department, at least.
I am using an old audio interface made by Propellerhead (the guys who made Reason). I love the design of it, both the look and, more importantly, the function. But I often run into "USB" issues that make it necessary for me to pull and replug the USB cable. Problem is that then Core Audio re-initialises automatically in Logic and everything has to be reloaded!
I want low latency (< 4-5ms) but more importantly super high reliability. That is why I researched PCI-e as I might be investing in the new Mac Pro later this year in order to scale my business where the computer is currently a bottleneck.
IEEE 1394 is what Apple called FireWire, and Thunderbolt supersedes that standard.
Yeah… I don't know how much actual optimization there is to do, apart from just not building braindead hardware (don't get me started on that… ), but there are certainly chances of screwing things up in there, and some drivers are clearly doing just that.
Not sure if it’s really a USB thing, but I’ve indeed had the Scarlett just die on a few occasions. Fortunately, Cubase on Windoze is rather friendly with that, and can even restart the audio engine without reloading plugins.
I think 4-5 ms should be doable, and your chances might be better on MacOS than on Windows. However, it comes down to how the DAW application (in particular these days, how well it handles multithreading), plugins, sample libraries etc behave, and some are just not very good at distributing the CPU load evenly between buffers.
For example, Kontakt, with certain libraries (like most of the Spitfire orchestral libs), will cause glitches when triggering notes at low latency settings. In the case of Cubase, this hasn't been a huge issue for me, as it only impacts the "live" plugins (with realtime audio or MIDI input), whereas everything else has additional buffering, and there is always the option of using different settings when working with such libraries. Orchestral libs tend to have substantial latency anyway, compared to synths, so a few ms more or less is hardly noticeable.
But isn't the entire thing with Thunderbolt supposed to be "external PCI-e"? I am very sceptical that it can allow equally low latency, as it is external after all?? But you know this much better than me.
I only do "live" plugins, as you call it, as I record every single performance live, while all else is playing at the same time. Which is super hard on the CPU, and I get spikes from the old "one track chain is tied to one core" bug/issue in Logic. =/
Well, PCIe is already a serial interface, even for ports, registers, and interrupts, but we're talking microsecond latencies; about the same as for the old PCI interface. As for internal/external, once you have an asynchronous protocol (that is, you don't need to comply with a strict maximum roundtrip latency for things to work at all), the only difference that makes is the time it takes for the signals to physically travel through the wires - which is about 3 nanoseconds per meter of wire.
If solid low latency monitoring through plugins is critical, I’d definitely consider a DSP solution with good plugin support.
It’s certainly possible to get down to fractional millisecond latency on x86 hardware (I’ve done it, with RTOS + custom drivers, on standard PC hardware), but running an actual DAW like that is a pipe dream, unless we’re talking embedded turn-key solutions. It puts extreme requirements on applications and plugins, and the operating systems that can do it have features and capabilities you don’t even want in a desktop or server OS, so I don’t think that’s going to happen any time soon.
So in no way does PCIe vs USB audio interfaces have any difference on CPU and RAM use? Meaning no performance gain from PCIe?
Well, theoretically, PCIe should be best, followed by FireWire/Thunderbolt, and last, USB 2.0. It appears that one key difference is that USB 2.0 does not allow devices to initiate data transfers on their own, but need to wait for the host (PC) to set it up before they can start transferring. (Unlike for example PCIe NICs, where the driver sets up lots of descriptors with preallocated data buffers, and then the hardware can just fill these in via DMA, without driver interaction.) This is one area where I suspect there might be room for some interesting workaround hacks on the driver side with USB 2.0.
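The descriptor mechanism mentioned above can be illustrated with a toy model: the driver preallocates a ring of buffers, the "hardware" fills them without driver involvement, and the driver only picks up completed buffers afterwards. All the names here are illustrative, not a real driver API:

```python
# Toy model of a preallocated descriptor ring, as used by PCIe NIC
# drivers: buffers are set up in advance, the "hardware" fills them
# (standing in for DMA), and the driver consumes them asynchronously.
# Illustrative only -- not a real driver interface.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Descriptor:
    buffer: bytearray
    done: bool = False          # set by "hardware" when the buffer is filled

class DescriptorRing:
    def __init__(self, count: int, buf_size: int):
        # Driver preallocates all buffers up front.
        self.ring = [Descriptor(bytearray(buf_size)) for _ in range(count)]
        self.hw_index = 0       # next slot the hardware will fill
        self.drv_index = 0      # next slot the driver will consume

    def hw_fill(self, data: bytes) -> bool:
        """Hardware side: "DMA" data into the next free descriptor."""
        d = self.ring[self.hw_index]
        if d.done:              # ring full; hardware would stall or drop
            return False
        d.buffer[:len(data)] = data
        d.done = True
        self.hw_index = (self.hw_index + 1) % len(self.ring)
        return True

    def drv_consume(self) -> Optional[bytes]:
        """Driver side: pick up a completed buffer, then recycle the slot."""
        d = self.ring[self.drv_index]
        if not d.done:
            return None         # nothing ready yet
        data = bytes(d.buffer)
        d.done = False          # hand the slot back to the hardware
        self.drv_index = (self.drv_index + 1) % len(self.ring)
        return data

ring = DescriptorRing(count=4, buf_size=8)
ring.hw_fill(b"block-01")
ring.hw_fill(b"block-02")
print(ring.drv_consume())       # b'block-01'
print(ring.drv_consume())       # b'block-02'
```

The point of the scheme is that the "hardware" side never has to wait for the driver to set anything up per transfer, which is exactly what USB 2.0 devices cannot do, since the host has to initiate every transfer.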
Now, USB 3.1 apparently introduces “Streams,” allowing the host to set up multiple buffers in advance, which should even things out further. Just keep in mind that some “USB 3.1” devices just (correctly) state that they can be used with 3.1 hubs - not that they actually use any of the 3.1 features.
Either way, we’re still talking about microseconds here, so for all practical matters, it’s going to be mostly about the design of the hardware and drivers.
I see, it's just that I read on forums that people who changed from USB to PCIe cards have experienced that all pops/crackles disappeared, and some people argued that the 'queue' for CPU/RAM can sometimes produce pops/crackles on external audio interfaces even if your CPU/RAM has headroom.
Yes - and I’m not surprised. I’m considering going PCIe as well, hoping for a slight improvement in latency, but mostly just getting rid of the random stability issues. However, I really can’t tell if these “USB problems” are just issues with specific models, or if there really is that much difference between USB and PCIe in general.
I think we can conclude that PCIe is less likely to have any of the stability or latency issues often seen with USB - but even that is mostly speculation based on theory and anecdotal evidence.
I’d like to see a comprehensive test of a range of different sound interfaces on a number of different machines and operating systems, measuring lowest “usable” latency, lowest latency that’s rock solid over time, lowest latency under different CPU loads etc.
Well there is this thread I found of audio interfaces and latency, starting with what seems to be an extensive test: https://www.gearslutz.com/board/music-computers/618474-audio-interface-low-latency-performance-data-base.html
PS. It does not however focus on PCIe, just audio interfaces in general.
Awesome! Will study that later.
I think that’s the correct perspective. If there actually is a systematic difference between PCIe and USB solutions, that should be visible in the results - but it’s kind of a moot point anyway, unless the conclusion is simply that every PCIe interface is better than any USB interface.
One thing is for certain. If I indeed invest in the Mac Pro later this year (I’m waiting until their June conference to see if they announce any other viable alternative for me), I will be getting a new audio interface. And I will make sure it’s a damn good one!
I’m sort of in the market for an upgrade as well, but mostly because I want 4+ inputs (maybe even more, as I’ll soon have a proper hardware synth again as well), and better mic preamps, but I think that’s going to add up to a bunch of rack mount devices if I go down the PCIe route - and I’d have WAY more channels than I’ll ever need.
Do you have any “reasonably sized” PCIe solutions in mind?
I’m pretty sure I’m going RME if I choose PCIe. They still have an amazing reputation for optimized drivers both for efficiency and reliability.
However, the downside is that you also need to invest in a rack-mountable "dock"… I mean the actual pre-amps and inputs. Unless you want to deal with the simple breakout cable mess.
Yeah… These devices are really aimed at users who really need loads of channels. A bit overkill, but should certainly get the job done.
BTW, I'm kind of leaning towards RME as well, partly because they have serious Linux drivers too. (An online acquaintance of mine from the Linux audio days wrote those.)
It's crazy to say but… PCIe has also proven to be compatible long term. FireWire is gone, USB 2 is being phased out. Thunderbolt, well… I don't know really. But people are using the same PCIe audio interfaces today that they got 15 years ago!
David, read the reviews on this RME PCIe interface, it’s the one I am looking mainly at. Anecdotal of course as all reviews, but man they sell me on it!
On that note, I have a Layla20 lying around from back in the day. 20 bits, up to 50 kHz, 10+8 channels or something… That's PCI, though - not PCIe. And I'm pretty sure their drivers dropped support for those models long ago, as they've dropped the whole pro audio thing.