PreSonus Chief Tech Officer Bob Tudor on the Audio Engine in the StudioLive 32.4.2AI

For those who do not know Bob, he is the kind of guy who gets described with words like “genius” by people who do not throw such words around casually. When we asked about design changes, assuming they were in the hardware and preamps, Bob set us straight. The following is direct from his keyboard…

The components in the hardware are unchanged, but the layouts are improved, and the stronger power supply can improve the sound. Even using the same components, we can make incremental improvements from one generation to the next. The beauty of analog is that from one product to the next, it never sounds quite the same.

As an enterprise, we share a code base used in all our software: iPad, iPhone, embedded, DAW, and soon Notion-branded products. This was an initiative we started years ago. It is a hill to climb: combining the cultures, rewriting working products with zero visible gain, and so on. But in 2013, with the AI series, we “arrived”.

In the AI line, most of this sharing is done on the control side of the code related to networking, presets, mixer states, and so on, which is why the new AI series products are so powerful in that regard. By running an advanced OS (Linux) on an ARM core, we inherited a great deal of capability from Studio One’s code base, because we can cross-compile shared code.

Even though the DSP is 32-bit floating point like Gen 1, what is different is that we moved the coefficient calculations out of the DSP. With AI, we manage them in the ARM core running Linux, and in critical places we use doubles (64-bit values).

The coefficient generation therefore uses 64-bit data types in the AI series, whereas the Gen 1 mixer did everything in the DSP using 32-bit data types.

Another key area we improved was our control tapers. They were 8-bit in Gen 1, meaning there were 256 steps on a fader or a knob feeding a lookup table.

Now there are no steps or lookup tables. They are interpolated using floating-point values “on the fly”. This removes the choppier resolution and makes the knobs and faders behave more like analog controls.

Of course, a physical fader can only be scanned at 10 bits, but all the value storage in the AI mixers (pots, encoders, and what the user types in manually) is at very high granularity.

This is where people get confused: bit talk. There is the native bit depth of a processor, the size of a data type used in calculations and value ranges, and the number of bits in an audio sample.

For example, if you multiply a huge number by a huge number and then later divide by a huge number (we called this MulDiv in the old days), you get round-off errors unless you plan for it. It’s not that you need 64-bit audio samples to hold the end result; you just need a 64-bit data type in the intermediate formulae, or things get rounded. Exponential, logarithmic, and trigonometric functions can also benefit greatly from the added depth of 64-bit data types. And when they are used in hundreds of places in the mix, it can really help.

The resulting coefficients of a biquad, trig function, or other advanced math algorithm are sent to the DSP as 32-bit floats, but all the intermediate calculations have this higher mathematical headroom.

The audio engines in StudioLive and Studio One are not the same, but they do share some tricks. An embedded audio engine and a desktop audio engine are very different from one another: one is dynamic, and one is static and highly optimized. I wrote the DSP engine in the mixer myself, and it was a lot of fun to work on. It really is a different experience from writing a dynamic DAW audio engine that targets any computer out there.

However, Matthias Juwan (founder and software architect of Studio One) is a good friend, and I consulted heavily with him. It was refreshing for me to learn some new things from the younger crowd. Among his other strong contributions, we opted to use a plug-in interface he defined for all the DSP, which makes it easy to move code in and out of the mixer and Studio One. How channels are summed to busses and auxes, ramped, and so on, I wrote based on prior mixer experience, but I had a huge advantage in being able to consult with our experts in Germany. It’s a C++ object-oriented architecture, and it’s state of the art. It is probably the most advanced code running inside a hardware mixer.

The object-oriented design approach does not hurt performance, as many developers might believe. In fact, there are ways to use modern software design to outperform legacy assembly strategies, especially now that compilers have evolved to outperform human beings, particularly with regard to memory caching and pipelining on multi-core processors.

What makes it super special is that the same audio engine is shared among seven different products that are all shipping now: the four new AI speakers and the three new AI mixers. That was a very important decision, and it enables us to provide consistent sound and performance across everything AI. (Plus, it’s easier to improve and maintain one engine instead of seven.)

Also, we no longer soft-clip the channels digitally, because it’s not necessary when a mix engine is designed optimally. I believe that may also have changed the sound.

Those are the main changes. It is most accurate to say that the DNA of our German engineers has been merged with that of our mixer culture. We share tricks. We share code. We share methodology, all of which improves our technology on both ends.

One more great feature: all seven products can run in complete simulation on a Mac or Windows machine. A rendering of the top panel of each product is fully functional, and since our code is cross-compiled, we edit and improve it on a Mac or PC, then flip a switch and compile it for the embedded DSP. That’s how we built seven advanced DSP products in just two years.

And that’s how we will be able to make derivative DSP products rapidly. In other words, stay tuned. There’s more coming.