
Re: psychoacoustically driven temporal approximation



Not exactly the method you describe, but Josh McDermott and Eero Simoncelli have done some very compelling work on synthesizing sound textures from statistics of biologically plausible signal representations. Their method doesn't involve spectral weighting, per se, but it does synthesize sound based on a perceptual model (or from features/statistics that are plausibly accessible to auditory perception) rather than a signal model (such as additive synthesis), which sounds like the direction you are pursuing. 

McDermott, J. H., & Simoncelli, E. P. (2011). Sound Texture Perception via Statistics of the Auditory Periphery: Evidence from Sound Synthesis. Neuron, 71(5), 926–940. doi:10.1016/j.neuron.2011.06.032
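If a toy illustration helps: the snippet below is my own loose sketch of the general idea (measure statistics of a subband-envelope representation of a target sound, then impose them on noise), not their actual algorithm. The band edges, the choice of statistics (just envelope mean and standard deviation here), and the number of passes are arbitrary placeholders; their model uses a much richer statistic set computed over a cochlear-like filterbank.

# Loose illustration (NOT McDermott & Simoncelli's method): iteratively
# impose per-band envelope statistics measured from a target "texture"
# onto noise, so the synthesized sound shares those statistics.
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

fs = 16000

def subband(x, lo, hi):
    # Band-pass x between lo and hi Hz (4th-order Butterworth, zero-phase).
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

# Hypothetical band edges; a real model would use many gammatone-like channels.
edges = [(100, 400), (400, 1200), (1200, 3000), (3000, 7000)]

rng = np.random.default_rng(0)
t = np.arange(fs) / fs
target = rng.standard_normal(fs) * (1 + np.sin(2 * np.pi * 4 * t))  # AM-noise "texture"
synth = rng.standard_normal(fs)                                      # start from white noise

for _ in range(5):  # a few imposition passes; summing the bands re-mixes them
    bands = []
    for lo, hi in edges:
        t_env = np.abs(hilbert(subband(target, lo, hi)))
        s_band = subband(synth, lo, hi)
        s_env = np.abs(hilbert(s_band)) + 1e-12
        # Match this band's envelope mean and std to the target band's statistics.
        new_env = (s_env - s_env.mean()) / s_env.std() * t_env.std() + t_env.mean()
        bands.append(s_band * np.clip(new_env, 0, None) / s_env)
    synth = np.sum(bands, axis=0)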


Kelly

On Mar 5, 2014, at 4:01 AM, Joachim Thiemann <joachim.thiemann@xxxxxxxxx> wrote:

> Hello Alberto,
> 
> I'm not entirely sure, but this sounds a bit like my thoughts when I
> started my Ph.D. many years ago: can I synthesize an audio waveform
> from a perceptual representation of another audio signal?  (and what
> does that imply about that particular perceptual representation?)
> 
> My answer was to do so by iterative resynthesis: make a first good
> guess of the inverse perceptual transform, then correct for the error.
> The key point is that the correction needs to go in the right
> direction.  My perceptual transform was a set of sparsely sampled
> Hilbert envelopes of the outputs of a gammatone filterbank.
> 
> If you want you can have a look at my thesis "A Sparse Auditory
> Envelope Representation with Iterative Reconstruction for Audio
> Coding", linked from my homepage
> (http://jthiem.bitbucket.org/research.html), you can find the MATLAB
> code on that page too. Of course, in my thesis I refer to work that
> others have done in a similar vein.
> 
> Cheers,
> Joachim.
> 
> On 4 March 2014 12:05, JesterN Alberto Novello <jestern77@xxxxxxxx> wrote:
>> Hi all,
>> I'm trying to find a way to approximate the sample values of an audio
>> waveform in the time domain.
>> I want a method that takes care of approximating perceptually-relevant audio
>> bands better than others.
>> Basically a spectral-weighted temporal approximation method.
>> In my head it's not clear how to connect frequency components to specific
>> samples in the time domain.
>> Any DSP wizard out there with a good idea/papers?
>> Best regards
>> Alberto
>> 
> 
> -- 
> Joachim Thiemann :: http://jthiem.bitbucket.org ::
> http://signalsprocessed.blogspot.co
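For anyone who wants to experiment with the iterate-and-correct pattern Joachim describes above, here is a rough stand-in sketch: it reconstructs a waveform from STFT magnitudes only, by repeatedly inverting and re-imposing the known magnitudes (Griffin-Lim style). The STFT magnitude is just a convenient placeholder feature that fits in a few lines; it is not his sparse gammatone-envelope representation, and the frame size and iteration count are arbitrary.

# Stand-in sketch of "approximate inverse, then correct the error":
# reconstruct a waveform from STFT magnitudes alone (Griffin-Lim style).
# NOT the sparse gammatone-envelope transform from the thesis; the
# representation and all parameters here are placeholders.
import numpy as np
from scipy.signal import stft, istft

fs = 16000
rng = np.random.default_rng(0)
target = rng.standard_normal(fs) * np.hanning(fs)    # any reference waveform
_, _, Z = stft(target, fs=fs, nperseg=512)
mag = np.abs(Z)                                       # keep only the magnitudes

# First guess: invert with random phase, then iterate analysis/resynthesis,
# re-imposing the known magnitudes each pass so the error shrinks.
phase = np.exp(1j * rng.uniform(0.0, 2.0 * np.pi, mag.shape))
for _ in range(50):
    _, x = istft(mag * phase, fs=fs, nperseg=512)     # approximate inverse
    _, _, Z = stft(x, fs=fs, nperseg=512)             # re-analyze the guess
    phase = np.exp(1j * np.angle(Z))                  # keep phase, fix magnitude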

Kelly Fitz, Ph.D.
Principal Research Engineer
Signal Processing Research Department | Starkey Hearing Technologies