
Re: dynamic range and sample bit depth



I concur that using two DACs would have problems. Using two 
DACs requires their relative contributions to be matched to 
better than the desired bit resolution of the overall 
system.  This is decidedly non-trivial, and is the reason 
that modern DACs are typically single-bit (but high bit 
rate) systems: This provides excellent linearity since 
there are no issues of matching bit weights.    
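
As a rough illustration of how tight that matching requirement
is, here is a small numerical sketch (Python/NumPy; the 0.1%
gain error is just an assumed figure, not data from any real
converter):

import numpy as np

rng = np.random.default_rng(0)
codes = rng.integers(0, 2**16, size=10_000)   # hypothetical 16-bit samples
high = codes >> 8                             # upper 8 bits -> DAC 1
low = codes & 0xFF                            # lower 8 bits -> DAC 2

ideal = high * 256 + low                      # perfectly matched recombination
actual = high * 256 * 1.001 + low             # 0.1% gain error on the high-bit DAC

print(np.max(np.abs(actual - ideal)))         # ~65 LSB of the 16-bit system

So even a 0.1% mismatch between the two analog gains produces
errors of tens of LSBs; to stay within one 16-bit LSB the gain
ratio would have to be accurate to better than one part in 2^16.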

Best regards,

Bob Masta

====================
On 4 Dec 2014 at 23:45, Michael Fischer wrote:

> Hi Alain,
> 
> What do you mean regarding splitting the signal? From my understanding,
> I see the following problems with this strategy:
> 
> A highpass/lowpass split does not seem very helpful to me, since you
> would have two DACs instead of one, which would more or less mean a
> 3 dB increase in the noise floor (both DACs contribute their own noise
> floor; I assume uncorrelated noise here, so the noise powers add).
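> 
> A quick numerical check of that 3 dB figure (a NumPy sketch, assuming
> equal, uncorrelated, white noise floors for the two hypothetical DACs):
> 
> import numpy as np
> rng = np.random.default_rng(1)
> a = rng.standard_normal(1_000_000)     # noise floor of DAC 1
> b = rng.standard_normal(1_000_000)     # noise floor of DAC 2, uncorrelated
> p_one = np.mean(a**2)                  # noise power of a single DAC
> p_sum = np.mean((a + b)**2)            # combined noise power after summation
> print(10 * np.log10(p_sum / p_one))    # ~3.01 dB = 10*log10(2)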
> 
> With a high/low-bit split I would expect distortion. You would generate
> distortion on both signal parts, much like digital clipping. I don't
> think these distortions would cancel completely when the two analog
> signals are added (in practice, I mean). Apart from that, the DAC
> carrying the higher amplitude would define the noise floor, so you
> might get no improvement at all, even in theory.
> 
> To me, the most promising approach is an analog attenuator after the
> DAC, if you don't want to decrease the SNR at low volumes.
> 
> Michael
> 
> 
> 
> On 04.12.14 10:47, Alain de Cheveigne wrote:
> > Hi Etienne,
> >
> > A potential problem is that if you switch the attenuator setting during sound output, you must synchronously change the scaling of the data you send to the DAC. I imagine that this must be hard to do without introducing a glitch.
> >
> > An alternative might be to split the digital data into two signals with different amplitudes (high and low bits, or lowpass & complement), output each via its own DAC, and add in the analog domain with appropriate attenuation.   Can anyone spot a flaw with this approach?
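> >
> > Concretely, for the high/low-bit variant, something like the following rough sketch (NumPy; an ideal pair of 8-bit DACs and a fixed 48 dB pad on the low-bit branch are assumptions):
> >
> > import numpy as np
> > fs = 48000
> > t = np.arange(fs) / fs
> > # 16-bit samples in offset binary (0..65535), here a 997 Hz tone
> > x = np.round(32768 + 0.9 * 32767 * np.sin(2 * np.pi * 997 * t)).astype(np.int64)
> >
> > high = x >> 8                     # upper 8 bits -> DAC 1
> > low = x & 0xFF                    # lower 8 bits -> DAC 2
> >
> > # "analog" recombination: DAC 2 attenuated by 48 dB (a factor of 256)
> > y = high.astype(float) + low.astype(float) / 256.0
> > print(np.allclose(y * 256.0, x))  # True -- exact with perfectly matched gains
> >
> > With perfectly matched gains the original samples come back exactly; the question is what happens when the two analog gains are not perfectly matched.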
> >
> > Alain
> >
> >
> > On 4 Dec 2014, at 09:10, Etienne Gaudrain <e.p.c.gaudrain@xxxxxxx> wrote:
> >
> >> Hi Dan,
> >>
> >> This is dodging the question a bit, but everything you mention is the very reason why one should not use digital attenuation, and should instead have an analog attenuator after the DAC. And I guess this is why many labs hold on to their good old TDT PA4 (where PA stands for programmable attenuator). The analog attenuator allows you to place your dynamic range where you want it to be in SPL without sacrificing any bits.
> >>
> >> But since Matlab and most sound cards nowadays support 24-bit audio and sampling frequencies up to 96 kHz, why not use these? If you are worried about space, you can also now store your files in FLAC directly from Matlab. Or was your worry that people don't pay attention to this when they design experiments for normal-hearing (NH) and hearing-impaired (HI) listeners?
> >>
> >> Cheers,
> >> -Etienne
> >>
> >>
> >> On 03/12/2014 19:05, Dan Goodman wrote:
> >>> Dear auditory list,
> >>>
> >>> I have been worried about an issue to do with sampling bit depth and dynamic range for a while, and I have not yet been able to find a definitive answer. Hopefully some of you may be able to shed some light on this.
> >>>
> >>> Essentially, the question revolves around the fact that a digital signal loses information when it is attenuated. For example, for a signal with 16 bits per sample, if you attenuate by 20*log10(2^8) = 48 dB then the output will only be using 8 bits per sample. Having listened to 8-bit sounds, I can say they are clearly of very poor quality. So, although it is often written that the 'dynamic range' of 16-bit sound is 96 dB (= 20*log10(2^16)), at even 48 dB of attenuation the quality becomes terribly poor.
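> >>>
> >>> To put rough numbers on this, a minimal sketch (NumPy; a 997 Hz tone and plain rounding to a 16-bit grid are assumed, and the figures are only meant to show the order of magnitude):
> >>>
> >>> import numpy as np
> >>> fs = 48000
> >>> t = np.arange(fs) / fs
> >>> x = np.sin(2 * np.pi * 997 * t)               # full-scale tone, range [-1, 1]
> >>>
> >>> def snr_after_16bit_rounding(sig):
> >>>     q = np.round(sig * 32767) / 32767         # requantise to the 16-bit grid
> >>>     err = q - sig
> >>>     return 10 * np.log10(np.mean(sig**2) / np.mean(err**2))
> >>>
> >>> print(snr_after_16bit_rounding(x))            # ~98 dB at full scale
> >>> print(snr_after_16bit_rounding(x / 2**8))     # ~50 dB after 48 dB of digital attenuation
> >>>
> >>> Attenuating after the DAC instead (an analog attenuator) scales the signal and the quantisation noise together, so the ~98 dB ratio is preserved.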
> >>>
> >>> So some questions:
> >>>
> >>> 1. How many bits per sample do we need for a high quality encoding of a sound without any attenuation?
> >>>
> >>> 2. How much dynamic range can we therefore get from standard audio systems using 16 or 24 bits per sample?
> >>>
> >>> 3. Are we routinely using more than this dynamic range in our experiments (and in musical recordings), and is this a problem for the results of, for example, studies mixing normal-hearing and hearing-impaired listeners?
> >>>
> >>> 4. Is there anything we can do about this?
> >>>
> >>> Some more details:
> >>>
> >>> The clearest thing I have managed to find on this subject so far is a paper by Bob Stuart of Meridian Audio (https://www.meridian-audio.com/meridian-uploads/ara/coding2.pdf). It concludes that if you have 19 bits per sample at a 52 kHz sample rate, you use dithering, and your audio system doesn't do any further processing on the sound, then at 90 dB attenuation from the maximum level you shouldn't hear any noise from the encoding (based on the extremes of measured hearing thresholds). This suggests that with 20-bit audio you can probably get 96 dB of high-quality dynamic range (see below for why I mention 20-bit audio).
> >>>
> >>> This doesn't take into account that many (most?) researchers are probably not dithering their signals. As far as I can tell, Matlab's wavplay and audioplayer functions do not use dithering, for example. So how much dynamic range are we getting without introducing noise if we don't use dithering? And are any of the commonly used packages for playing sounds doing this dithering?
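> >>>
> >>> For anyone who wants to see what dithering buys, here is a small sketch (NumPy; TPDF dither of +/- 1 LSB added before rounding to a 16-bit grid, with a 997 Hz test tone -- the dither amplitude and tone are just assumptions for the demo):
> >>>
> >>> import numpy as np
> >>> rng = np.random.default_rng(0)
> >>> fs = 48000
> >>> t = np.arange(fs) / fs
> >>> lsb = 1 / 32767                                  # one step of the 16-bit grid
> >>> x = 0.4 * lsb * np.sin(2 * np.pi * 997 * t)      # a tone whose peak is below 1 LSB
> >>>
> >>> plain = np.round(x / lsb) * lsb                  # requantise without dither
> >>> tpdf = (rng.random(fs) - rng.random(fs)) * lsb   # triangular (TPDF) dither
> >>> dithered = np.round((x + tpdf) / lsb) * lsb      # requantise with dither
> >>>
> >>> def tone_amplitude_in_lsb(y):                    # 997 Hz component, in LSBs
> >>>     return 2 * np.abs(np.mean(y * np.exp(-2j * np.pi * 997 * t))) / lsb
> >>>
> >>> print(tone_amplitude_in_lsb(plain))              # 0.0  -- the tone is simply gone
> >>> print(tone_amplitude_in_lsb(dithered))           # ~0.4 -- the tone survives, buried in noise
> >>>
> >>> Without dither a signal below half an LSB rounds to silence (and signals slightly above that turn into correlated distortion); with dither the requantisation error behaves like a benign noise floor and low-level detail is preserved.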
> >>>
> >>> Note: I mentioned 20-bit audio because I have read that 24-bit DACs only really use at most 22 bits of the signal, and due to thermal noise give about 20 bits noise-free. I worked with one system in the past that I was told allowed you to select which 22 bits were used (although this was hardware-specific and had to be coded at a very low level, not using standard audio APIs).
> >>>
> >>> I am very far from an expert on any of this, but it seems to me that we need to be using 24-bit audio and (very importantly) dithering, and if so we can probably get 96 dB of high-quality dynamic range. It is possible that in some experimental setups, especially if we're testing normal-hearing and hearing-impaired listeners across a wide range of sounds on the same system, we might be exceeding this. If so, is there anything we can do?
> >>>
> >>> Finally, any thoughts on the relevance for music / commercial audio? I guess it is much less of an issue there since the problems only seem to arise if you really push the limits of dynamic range.
> >>>
> >>> Thanks in advance,
> >>> Dan Goodman
> >>
> >> --
> >> Etienne Gaudrain, PhD
> >>
> >> UMCG, Afdeling KNO, BB20
> >> PO Box 30.001
> >> 9700 RB Groningen, NL
> >>
> >> Room P3.236
> >> Phone +31 5036 13290
> >> Skype egaudrain
> >>
> >> Centre de Recherche en Neurosciences de Lyon - CNRS UMR 5292
> >> Université Lyon 1
> >> 50 av. Tony Garnier
> >> 69366 Lyon Cedex 7, FR
> >>
> >> Note: emails to this address are limited to 10 MB. To send larger files, use egaudrain.cam@xxxxxxxxx.
> >

Bob Masta
 
            D A Q A R T A
Data AcQuisition And Real-Time Analysis
           www.daqarta.com
Scope, Spectrum, Spectrogram, Signal Generator
    Science with your sound card!