
Re: [AUDITORY] Visual references in sound localisation



The McGurk effect is *not* an example of visual dominance but of audio-visual integration.

 

Some studies on audio-visual localization:

 

Battaglia, P. W., Jacobs, R. A., & Aslin, R. N. (2003). Bayesian integration of visual and auditory signals for spatial localization. Journal of the Optical Society of America A: Optics, Image Science, and Vision, 20(7), 1391-1397. doi: 10.1364/josaa.20.001391

Arnott, S. R., & Goodale, M. A. (2006). Distorting visual space with sound. Vision Research, 46(10), 1553-1558. doi: 10.1016/j.visres.2005.11.020

Bowen, A. L., Ramachandran, R., Muday, J. A., & Schirillo, J. A. (2011). Visual signals bias auditory targets in azimuth and depth. Experimental Brain Research, 214(3), 403-414. doi: 10.1007/s00221-011-2838-1

Garcia, S. E., Jones, P. R., Reeve, E. I., Michaelides, M., Rubin, G. S., & Nardini, M. (2017). Multisensory cue combination after sensory loss: Audio-visual localization in patients with progressive retinal disease. Journal of Experimental Psychology: Human Perception and Performance, 43(4), 729-740. doi: 10.1037/xhp0000344

 

Best

 

Daniel

 

---------------------------------

Dr. Daniel Oberfeld-Twistel

Associate Professor

Johannes Gutenberg - Universitaet Mainz

Institute of Psychology

Experimental Psychology

Wallstrasse 3

55122 Mainz

Germany

 

Phone ++49 (0) 6131 39 39274

Fax   ++49 (0) 6131 39 39268

http://www.staff.uni-mainz.de/oberfeld/

https://www.facebook.com/WahrnehmungUndPsychophysikUniMainz

 

From: AUDITORY - Research in Auditory Perception [mailto:AUDITORY@xxxxxxxxxxxxxxx] On Behalf Of Les Bernstein
Sent: Sunday, February 25, 2018 6:52 PM
To: AUDITORY@xxxxxxxxxxxxxxx
Subject: Re: Visual references in sound localisation

 

I believe the question was about what would occur under the circumstances described and about the nature of visual influences. It's also important to recognize that the visual modality is not dominant per se. Rather, the weighting depends on the reliability (indexed by the variance) of the transduced information within each modality. One can manipulate those variances, and humans will generally weight the information from each modality in a near-optimal fashion, in inverse proportion to its variance. Think Alais and Burr.
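For readers who want to see the rule written out, here is a minimal sketch of the maximum-likelihood (inverse-variance) cue-combination model in the spirit of Alais and Burr; the function name and example numbers are purely illustrative, not taken from any particular study.

def combine_cues(mu_v, var_v, mu_a, var_a):
    """Combine a visual and an auditory location estimate.

    Each cue is weighted in inverse proportion to its variance, so the
    less reliable cue contributes less to the combined estimate.
    """
    w_v = (1.0 / var_v) / (1.0 / var_v + 1.0 / var_a)
    w_a = 1.0 - w_v
    mu_hat = w_v * mu_v + w_a * mu_a
    var_hat = 1.0 / (1.0 / var_v + 1.0 / var_a)  # always <= min(var_v, var_a)
    return mu_hat, var_hat

# Sharp vision, blurry audition: the combined estimate sits near the visual one
# ("ventriloquism"); blur the visual cue instead and audition takes over.
print(combine_cues(mu_v=0.0, var_v=1.0, mu_a=10.0, var_a=25.0))
print(combine_cues(mu_v=0.0, var_v=25.0, mu_a=10.0, var_a=1.0))

The point of the sketch is simply that neither modality "wins" by default; whichever estimate has the lower variance gets the larger weight.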

On 2/25/2018 12:50 AM, Kent Walker wrote:

When doing localization tests, best practice is to use visually opaque, acoustically transparent curtains. However, it is also best practice to provide respondents with visual references which they can use to respond.

 

Depending on the perceptual task, providing a reference stimulus with known location (visual & acoustic) can be extremely useful. 

 

In audio engineering, things get more interesting when visual and auditory cues are in different spatial locations. 

 

For example, in film sound mixing, dialogue is almost always mixed to the centre channel only, even when actors are visible at the left and right of the projected image. Technical limitations prevent the use of phantom sources to match the sound to the viewed location of the actors. The visual-auditory mismatch is generally not annoying or troublesome, and we perceive the dialogue as emanating from the visual location on the screen, not from the physical location of the loudspeaker. In large theatres the physical mismatch between the stimuli can be quite large, routinely 30 feet.

 

This is because in multimodal perception vision generally dominates (think McGurk effect).
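For what it's worth, the "phantom source" mentioned above would normally be produced with a stereo panning law. Below is a minimal sketch of the standard sine/cosine constant-power law; the function name and the position mapping are illustrative assumptions, not a description of any particular mixing console.

import math

def constant_power_pan(position):
    # position: -1.0 = hard left, 0.0 = centre, +1.0 = hard right
    theta = (position + 1.0) * math.pi / 4.0   # map to 0 .. pi/2
    return math.cos(theta), math.sin(theta)    # (left_gain, right_gain)

print(constant_power_pan(0.0))  # about (0.707, 0.707): centre phantom source, -3 dB per channel
print(constant_power_pan(1.0))  # about (0.0, 1.0): hard right

In practice, as noted above, film dialogue is simply routed to the centre loudspeaker and vision pulls the perceived location toward the actor anyway.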

 

On Feb 24, 2018 22:11, "Engel Alonso-Martinez, Isaac" <isaac.engel@xxxxxxxxxxxxxx> wrote:

Dear all,

 

I am interested in the impact of audible, visible reference sources on sound localisation tasks.

 

For instance, let's say that you are presented with two different continuous sounds (e.g., speech) coming from sources A and B, which are in different locations. While source A is clearly visible to you, B is not, and you are asked to estimate its location. Will source A act as a spatial reference, helping you make a more accurate estimate, or will it be a distraction and make the task more difficult?

 

If anyone can point to some literature on this, it would be greatly appreciated.

 

Kind regards,

Isaac Engel

 

 

--
Leslie R. Bernstein, Ph.D. | Professor
Depts. of Neuroscience and Surgery (Otolaryngology)| UConn School of Medicine

263 Farmington Avenue, Farmington, CT 06030-3401
Office: 860.679.4622 | Fax: 860.679.2495
