
*To*: AUDITORY@xxxxxxxxxxxxxxx
*Subject*: Simulating a sound field and two mics
*From*: Pradyumna S Upadrashta <prad@xxxxxxxxxxxxxxxx>
*Date*: Thu, 17 Apr 2003 09:51:00 -0500
*Delivery-date*: Thu Apr 17 11:08:44 2003
*Importance*: Normal
*In-reply-to*: <200304170401.h3H41DIw017754@prion.mcgill.ca>
*Organization*: Brain Sciences Center, VA Medical Center
*Reply-to*: prad@xxxxxxxxxxxxxxxx
*Sender*: AUDITORY Research in Auditory Perception <AUDITORY@xxxxxxxxxxxxxxx>

Dear Ki-Young,

Simulating your situation is straightforward. It is essentially the "forward problem" -- that is, you define a linear mixing matrix and write out the linear model that describes the activity at each microphone, given a specific source distribution. (I'm more familiar with these "forward" problems -- you provide the geometry -- and the corresponding "inverse" problems -- no unique solution without making assumptions -- from EEG/MEG analysis.)

If Vout(i) is the voltage measured at microphone i, then you can write the forward model as:

    Vout(i) = G1(i)*Y1 + G2(i)*Y2 + ... + Gn(i)*Yn + error

where G1, G2, ..., Gn are the individual "mixing matrices" for each source Yk, with n sources total; Vout(1) is the output measured at mic 1, and Vout(2) is the output measured at mic 2.

All you need now is a reasonable approximation of the geometry of your scenario, which determines the elements of the Gk's. It has to account for loss as a function of distance and medium. The loss with respect to distance is nonlinear: sound generically propagates as a spherical wavefront (longitudinal compression/rarefaction), so pressure amplitude falls off as 1/r and intensity as 1/r^2. But for a fixed geometry those gains are constants, which is what allows us to write the linear equation above. Of course the situation is more complex for different types of sound sources; for a good overview, look at the following:

http://www.kettering.edu/~drussell/Demos/rad2/mdq.html
http://www.kettering.edu/~drussell/Demos/radiation/radiation.html

The Yk are the activities (amplitudes) of the sources over time, i.e., time series of the amplitudes generated by the sources. The "error" term accounts for spurious background noise in the environment (e.g., air flow around the mic head, 60 Hz electrical hum, etc.). Recording the output of a single or double microphone in a "silent" room might give a good approximation of this term.

Hope that helps.
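As a minimal sketch of the forward model above -- assuming free-field propagation with a 1/r amplitude loss, ignoring propagation delay, and using made-up geometry and source signals purely for illustration -- the mixing could be simulated like this:

```python
# Sketch of the forward model Vout(i) = G1(i)*Y1 + ... + Gn(i)*Yn + error.
# Gains, positions, sample rate, and source signals are all illustrative
# assumptions; delays and room reflections are deliberately ignored.
import numpy as np

rng = np.random.default_rng(0)

fs = 16000                       # sample rate in Hz (assumed)
t = np.arange(fs) / fs           # one second of samples

# Two sources: a 440 Hz tone and broadband noise, positions in metres
src_pos = np.array([[1.0, 0.5], [2.0, -1.0]])
Y = np.vstack([np.sin(2 * np.pi * 440 * t),
               rng.standard_normal(t.size)])

# Two microphones a few tens of centimetres apart
mic_pos = np.array([[0.0, -0.15], [0.0, 0.15]])

# Mixing gains G[i, k] = 1 / r(i, k): pressure amplitude falls off as 1/r
r = np.linalg.norm(mic_pos[:, None, :] - src_pos[None, :, :], axis=2)
G = 1.0 / r

# Forward model: each mic output is a gain-weighted sum of the sources,
# plus an additive "error" term for background noise at the mic
noise_level = 0.01
Vout = G @ Y + noise_level * rng.standard_normal((2, t.size))
```

Replacing the 1/r gains with measured or modelled transfer functions (including delays) would make the same linear structure more realistic for a real room.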
Or at least, hope it doesn't confuse ;-) I might have missed something, since I did this rather quickly.

pradyumna

__________________________________
Pradyumna S. Upadrashta, PhD Student
prad@med.umn.edu
612-725-2000 x 1464

> ----- Original Message -----
> From: "Ki-Young Park" <pkyoung@EEINFO.KAIST.AC.KR>
> To: <AUDITORY@LISTS.MCGILL.CA>
> Sent: 13 April 2003 17:23
> Subject: correlation btw 2 signals incoming two ears from distributed sources
>
>> Dear all,
>>
>> I am working on speech recognition and enhancement, using two
>> signals from two microphones a moderate distance apart, say a few
>> tens of centimeters. I assume there is one speech source and
>> distributed noise sources all around a room instead of a
>> point source (and also additive noise).
>>
>> Are there publications on the correlation of the two signals
>> arriving at the two mics when there are distributed noise
>> sources around?
>>
>> And is there any way to simulate this situation?
>>
>> Any comments and references will be appreciated.
>>
>> Thank you.
