
If you are far enough down the geekiness rabbit hole that is audio recording, you may have seen a lot lately about hardware devices which claim to record in 32-bit float.
In theory, with the right devices, you may never need to touch a gain knob, or spend your entire recording time wondering if you should have hit that pad switch or turned the gain up a bit, or constantly worry about clipping, because these things have an astronomically insane potential dynamic range of 1528 dB!
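Where does that number come from? It's just the ratio between the largest and smallest normal values a 32-bit float can hold, expressed in dB. A quick back-of-the-envelope check in Python (my own sketch, not from any device's literature):

```python
import numpy as np

f32 = np.finfo(np.float32)
# Largest finite float32 value vs. smallest positive *normal* float32 value
# (denormals would stretch the range even further). Cast to Python floats
# (float64) so the ratio itself doesn't overflow float32.
print(20 * np.log10(float(f32.max) / float(f32.tiny)))  # ~1529 dB, i.e. the ~1528 dB figure you see quoted
```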
They accomplish this by having two sets of analog-to-digital converters, one with very low gain and one with very high gain. These two signals are combined somehow, so that even if something clipped one of the converters, the other converter should be running low enough to pass it cleanly. The opposite goes for avoiding quantization error: the high-gain converter keeps quiet signals well above the noise floor. There are numerous potential issues with this, but if it works, it has a massive psychological benefit for some.
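To make that concrete, here is a minimal sketch of how a dual-converter front end could stitch its two streams together. As far as I know the actual crossover logic in these recorders isn't published, so treat this as the general idea rather than how any specific box does it; the function name, gain offset, and threshold are all made up for illustration:

```python
import numpy as np

def combine_dual_adc(high_gain, low_gain, gain_offset_db=30.0, threshold=0.9):
    """Conceptual stitch of a dual-ADC front end.

    high_gain: samples from the high-gain converter (clips on loud peaks,
               but keeps quiet passages well above its noise floor)
    low_gain:  samples from the low-gain converter (never clips, noisier)
    gain_offset_db: how much hotter the high-gain path runs (assumed value)
    """
    scale = 10 ** (gain_offset_db / 20.0)   # linear gain difference between paths
    out = high_gain.astype(np.float64)
    # Wherever the high-gain path is at or near full scale, trust the
    # low-gain path instead, scaled up so the two paths match in level.
    clipped = np.abs(high_gain) >= threshold
    out[clipped] = low_gain[clipped] * scale
    return out
```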
In truth, we can only get around 120 dB of dynamic range out of the analog side because of thermal noise and other factors, so we really can get everything we need out of 24 bits. But for many, there's always that nagging feeling that our waveforms look too small, or the fear that one finger pluck too loud or too ambitious a snare hit will clip.
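The arithmetic behind that: each bit of a fixed-point format buys roughly 6.02 dB of dynamic range, so 24 bits already clears the ~120 dB analog ceiling with room to spare:

```python
# Each fixed-point bit is worth ~6.02 dB of dynamic range (20 * log10(2)).
for bits in (16, 24):
    print(f"{bits}-bit \u2248 {6.02 * bits:.0f} dB")
# 16-bit ≈ 96 dB
# 24-bit ≈ 144 dB -- already past what the analog front end can deliver
```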
There was a Reddit thread noting that many of these devices claiming 32-bit float only have a single chip in them, and are therefore not actually giving you anything useful by recording in 32-bit float: they are at best 24-bit converters, and now you just have to eat up more hard drive space to record the same thing you could have done at 24 bit. This kind of "32 bit" recalls the gory early days of digital, when so many products had what we called "marketing bits".
Dark Corner Studios had a recent video on their YouTube channel attempting to debunk the myths, but in it the representative from Zoom seemed to argue both that you need two converters for 32-bit float, because a microphone has a greater dynamic range than any single converter, and that they had a single converter with more dynamic range than a microphone, so you don't need two converters for 32-bit float.
The host showed an acoustic guitar recording experiment to back that up, but I don't think it did. Here is the video:
The claimed benefit of 32-bit float (aside from never having to touch the input gain on "real" 32-bit float converters) was that you could recover audio that would have clipped. But you can't.
In the example, he showed a squared-off waveform, then dragged the item level down until it was clear.
But this isn't a benefit exclusive to 32-bit float recording, nor was that single-chip converter giving any more resolution than 24 bit would have.
ANY sensible DAW converts whatever file bit depth you bring in to a higher-resolution internal format. REAPER processes at 64-bit float, for instance.
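Here is a minimal sketch of why that works, using NumPy float math as a stand-in for the DAW's internal processing. As long as the gain is applied in the float domain, pulling it back down is lossless; a true hard clip at the converter is not. The 440 Hz test tone and power-of-two gain are just my choices to keep the round trip bit-exact:

```python
import numpy as np

t = np.linspace(0, 1, 48000, endpoint=False)
signal = 0.5 * np.sin(2 * np.pi * 440 * t)   # a clean sine at -6 dBFS

# Gain applied in the float domain: samples go way past 1.0, but nothing
# is lost, so dividing the gain back out restores the original exactly.
# (128 is a power of two, so the multiply/divide only shifts exponents.)
boosted = signal * 128.0
recovered = boosted / 128.0
print(np.array_equal(recovered, signal))      # True

# A true hard clip (what happens at an overdriven converter) throws the
# information away; no amount of turning down brings it back.
clipped = np.clip(boosted, -1.0, 1.0)
print(np.allclose(clipped / 128.0, signal))   # False -- it's a square wave now
```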
I did a follow-up video where I showed that even a fixed-point 16-bit file, put in the DAW and cranked to all hell into complete square waves, could be turned down before the output to completely restore the undistorted, unclipped audio.
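And a sketch of that 16-bit experiment in code form, with random samples standing in for the audio file and a 1024x boost playing the part of "cranked to all hell":

```python
import numpy as np

# A stand-in for a 16-bit file: integer samples as they'd come off disk.
rng = np.random.default_rng(0)
pcm16 = rng.integers(-32768, 32767, size=48000, dtype=np.int16)

# What a DAW like REAPER does: promote to 64-bit float for processing.
x = pcm16.astype(np.float64) / 32768.0

# "Cranked to all hell" -- in the float domain the overs are preserved;
# they only *look* like a square wave on a meter clamped at 0 dBFS.
cranked = x * 1024.0          # ~+60 dB, a power of two so the math is exact

# Turn it back down before the output stage: bit-identical to the source.
restored = cranked / 1024.0
back_to_int = np.round(restored * 32768.0).astype(np.int16)
print(np.array_equal(back_to_int, pcm16))    # True
```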
Long and rambly as most of my videos are, skip to around 33 minutes for the 16-bit experiment.