Re: An Auditory Illusion
Dear Auditory list,
> Since one table is red, the firing of RED and TABLE must be synchronized.
> Since the other table is blue, BLUE and TABLE must also be synchronized.
> Following the argument to completion implies that all four concepts must
> be synchronized. What we need is two separate instantiations of RED, one
> bound to TABLE, the other bound to BALL.
This isn't really a problem for the oscillatory framework. The
idea is the following: let the oscillator assembly corresponding to
TABLE oscillate at double the frequency, so that this assembly synchronizes
with both the BLUE assembly and the RED assembly. At the same time, the
synchronized assembly for BLUE TABLE and that for RED TABLE are
desynchronized from each other.
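As a toy illustration of this double-frequency scheme (a sketch with made-up spike times and hypothetical RED/BLUE/TABLE assemblies, not the 1990 model itself), one can check that a spike train at twice the base frequency coincides with each of two antiphase trains at the base frequency, while those two trains themselves never coincide:

```python
# Illustrative spike times in ms; the period and spike counts are assumptions.
PERIOD = 40.0   # assumed base oscillation period
N = 10          # number of base cycles simulated

red   = [n * PERIOD for n in range(N)]               # phase 0
blue  = [n * PERIOD + PERIOD / 2 for n in range(N)]  # antiphase to RED
table = [n * PERIOD / 2 for n in range(2 * N)]       # double frequency

def coincidences(a, b, tol=1e-6):
    """Number of spikes in train a that coincide with some spike in train b."""
    return sum(any(abs(x - y) < tol for y in b) for x in a)

assert coincidences(red, table) == len(red)    # TABLE fires with every RED spike
assert coincidences(blue, table) == len(blue)  # ... and with every BLUE spike
assert coincidences(red, blue) == 0            # RED and BLUE stay desynchronized
```

The double-frequency train thus carries a binding to both colors at once, without the two color assemblies ever being bound to each other.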
The above idea was used in an early paper on oscillatory associative memory
(Wang, Buhmann, and von der Malsburg, 1990; refs. below) to handle the
overlap problem, which is essentially the same as the problem above. But the
oscillator model used there proved too clumsy. A recent model by Brown and
Wang (1996) explicitly addressed the problem of "duplex perception" in
audition using the same idea. The Brown/Wang model, in contrast, is based on
relaxation oscillator networks, which have an elegant theory and attractive
computational properties behind them (see below).
Christian Kaernbach writes:
> It would really be better to ask people of the von der Malsburg group
> how they deal with this problem, but as far as I know their theory it
> works well even for a male voice at the left ear and a female voice
> at the right ear saying the same word ("happy"). The first ideas
> about synchronisation were associated with oscillations (e.g. 40 Hz
> in cat visual cortex), but this is no longer up to date. If people
> still model synchrony with oscillations, they use at least chaotic
> oscillators which show a multitude of limit cycles depending on their
My group has been working on oscillatory correlation for years. Oscillatory
correlation provides a rich representation, which appears sufficient for
a variety of perceptual tasks. But representing a solution is one thing;
computing the solution is another. So far the discussion has focused on the
former, but the latter often proves to be more difficult.
We have made considerable progress in computation using locally excitatory
globally inhibitory relaxation oscillators, which admit an elegant
computational theory. See Wang (1996) for an auditory account, and
Brown/Wang for modeling double vowel separation. For vision, a paper just
came out in the latest issue of Neural Computation (Wang and Terman, 1997).
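For readers unfamiliar with relaxation oscillators, here is a minimal
single-unit sketch: a forward-Euler integration of a Terman-Wang-style
oscillator with a cubic fast variable and a slow recovery variable. The
parameter values are illustrative assumptions of mine, not the ones used in
the cited papers.

```python
import math

def simulate(I=0.8, eps=0.02, gamma=6.0, beta=0.1, dt=0.01, steps=100_000):
    """Integrate one relaxation oscillator; return the fast-variable trace."""
    x, y = -2.0, 0.0
    xs = []
    for _ in range(steps):
        dx = 3 * x - x ** 3 + 2 - y + I                     # cubic fast dynamics
        dy = eps * (gamma * (1 + math.tanh(x / beta)) - y)  # slow recovery
        x += dt * dx
        y += dt * dy
        xs.append(x)
    return xs

xs = simulate()
# Count jumps from the silent branch (x < 0) to the active branch (x > 0):
jumps = sum(1 for a, b in zip(xs, xs[1:]) if a < 0 <= b)
assert jumps >= 2  # the unit cycles between silent and active phases
```

The separation of time scales (eps small) is what gives the fast jumps
between a silent and an active phase, and it is this jumping behavior that
the LEGION-style networks exploit for rapid synchronization and
desynchronization.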
The use of chaotic oscillators has not proven productive, presumably
because of the associated computational difficulty. Perhaps Dr. Kaernbach
can elaborate on the last statement in the above quote?
> Many models don't use oscillations at all. If you think
> about non-periodic synchrony, you can imagine many different ways how
> to implement perfect synchrony, partial synchrony, no synchrony,
> antisynchrony, and even net-like correlations that could very well
> code the above situations. Node MALE could well be in synchrony with
> node LEFT and with node HAPPY, and node FEMALE with node RIGHT and
> node HAPPY, without nodes MALE/LEFT and nodes RIGHT/FEMALE being in
> synchrony. Node HAPPY could, e.g. fire with a 60-Hz cycle, nodes
> RIGHT/FEMALE firing with approx. 30-Hz bursts and taking every even
> burst, and nodes MALE/LEFT taking every odd burst. And this
> explanation was on the basis of periodic oscillations. Nonperiodic
> synchrony could probably do even better. I am well aware that the
> correlation theory is not apt to solve all binding problems today.
> The point I wanted to make is that one can imagine several
> mechanisms how node based systems could deal with complicated binding
> problems. It is another job to prove that this is the way it is done.
The above point is similar to what we actually modeled. Again, imagining
is one thing, computing is quite another.
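The burst scheme quoted above can be checked with simple arithmetic. In this
sketch the node names come from the quote, and all timing numbers are
illustrative: HAPPY fires on every 60-Hz cycle, RIGHT/FEMALE take the even
cycles, and MALE/LEFT take the odd cycles.

```python
# Hypothetical spike times in ms for the quoted even/odd-burst scheme.
CYCLE = 1000.0 / 60.0  # one 60-Hz cycle

happy        = [k * CYCLE for k in range(12)]        # every cycle (60 Hz)
right_female = [k * CYCLE for k in range(0, 12, 2)]  # even bursts (approx. 30 Hz)
male_left    = [k * CYCLE for k in range(1, 12, 2)]  # odd bursts (approx. 30 Hz)

def shared(a, b, tol=1e-6):
    """Spikes of train a that coincide with some spike of train b."""
    return sum(any(abs(x - y) < tol for y in b) for x in a)

assert shared(right_female, happy) == len(right_female)  # bound to HAPPY
assert shared(male_left, happy) == len(male_left)        # bound to HAPPY
assert shared(right_female, male_left) == 0              # pairs never coincide
```

Representationally this works; the hard part, as argued above, is a network
mechanism that computes such a firing arrangement from the input itself.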
Perhaps a little history of oscillatory modeling could help here.
Much of it draws on visual modeling, where the experimental evidence is most
comprehensive (some comes from the auditory system too). The idea of using
oscillators for binding was first proposed by P. Milner in 1974 (one can
find close speculations in Hebb's well-known book of 1949), and was
systematically proposed and advocated by C. von der Malsburg since 1981.
Since the discovery of coherent oscillations in the visual cortex in the
late 1980s (Eckhorn's group published in 1988, and Singer's group in 1989),
perhaps more than 100 papers have been published on modeling oscillations
and on building systems for image analysis. A persistent stumbling block has
been the lack of a reliable computational mechanism for achieving rapid
synchronization based on local coupling (important for encoding topology),
and rapid desynchronization in the presence of multiple
organizations (streams in audition). The problem was recently resolved, and
thus there is ground for optimism about making substantial progress in this
direction. This again illustrates the importance of computational mechanisms.
The references listed below contain more comprehensive accounts.
Brown G.J. and Wang D.L. (1996): Modelling the perceptual segregation of
double vowels with a network of neural oscillators. Technical report
CS-96-06, Sheffield Univ. Computer Sci. Dept. (available by anonymous
FTP from the site "ftp.dcs.shef.ac.uk", followed by the commands:
"cd /share/spandh/pubs/brown" and "get bw-report96.ps.Z"). A revised
version to appear in Neural Networks.
Wang D.L. (1996): Primitive auditory segregation based on oscillatory
correlation. Cognitive Science, vol. 20, 409-456. (available on
my web "http://www.cis.ohio-state.edu/~dwang")
Wang D.L. and Terman D. (1997): Image segmentation based on oscillatory
correlation. Neural Computation, vol. 9, 805-836.