ASA 125th Meeting Ottawa 1993 May

2pMU9. Interactive computational auditory scene analysis: An environment for exploring auditory representations and groups.

Malcolm Crawford

Martin Cooke

Guy Brown

Dept. of Comput. Sci., Univ. of Sheffield, Regent Court, 211 Portobello St., Sheffield S1 4DP, England

Computational modeling of auditory scene analysis (ASA) offers a new paradigm for experimentation. It permits a novel approach to the development of theories of grouping and to the design of experimental stimuli. For example: (i) grouping algorithms can be implemented and validated against experimental data; (ii) experimental data can be analyzed to suggest possible representations in the auditory system and to test their conformance with expectations; (iii) computational implementation can expose deficiencies in current theory. Over the last 4 years, the Sheffield Auditory Group has developed a rich set of representations used for investigating computational ASA [G. J. Brown, ``Computational Auditory Scene Analysis: A Representational Approach,'' Ph.D. thesis, University of Sheffield (1992); M. P. Cooke, Modelling Auditory Processing and Organisation (Cambridge U.P., Cambridge, UK, 1993)]. These representations include computational maps for onsets, offsets, frequency transitions, and periodicities, in addition to higher-level symbolic representations of acoustic components. Recently, an environment has been created that brings together this diverse collection into a uniform framework for display, resynthesis, and experimentation. The environment supports experimental investigation and allows the ``debugging'' of stimulus selection. Further, it acts as a canvas onto which the results of auditory grouping can be drawn. It also serves as a tutorial in this increasingly complex field. The practical application of these points is illustrated in a case study that maps the path from stimulus generation to grouping by listeners or by machine.
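As a rough illustration of what one such computational map might look like, the following Python sketch computes a crude onset map from the smoothed envelopes of a band-pass filterbank. It is a minimal sketch for illustration only: the filterbank design, channel count, and smoothing constants are assumptions and do not reflect the Sheffield implementation or its auditory periphery model.

```python
# Illustrative sketch only: a simple onset map in the spirit of the
# representations described in the abstract.  The band-pass filterbank,
# channel count, and smoothing constants are assumed for illustration.
import numpy as np
from scipy.signal import butter, lfilter

def onset_map(signal, fs, n_channels=32, fmin=100.0, fmax=4000.0):
    """Return a (channels x samples) array of onset strength."""
    # Log-spaced band-pass channels as a crude stand-in for an auditory filterbank.
    centre_freqs = np.geomspace(fmin, fmax, n_channels)
    # Low-pass filter used to smooth each channel's envelope (cutoff ~100 Hz).
    b_env, a_env = butter(2, 100.0 / (fs / 2.0), btype="low")
    onsets = np.zeros((n_channels, len(signal)))
    for i, cf in enumerate(centre_freqs):
        lo = 0.8 * cf / (fs / 2.0)
        hi = min(1.2 * cf / (fs / 2.0), 0.99)
        b, a = butter(2, [lo, hi], btype="band")
        channel = lfilter(b, a, signal)
        envelope = lfilter(b_env, a_env, np.abs(channel))
        # Onset strength = half-wave rectified rate of change of the envelope.
        onsets[i] = np.maximum(np.diff(envelope, prepend=envelope[0]), 0.0)
    return onsets

# Usage: a 1-kHz tone switched on at 50 ms produces a ridge of onset energy
# in channels near 1 kHz at the onset time.
fs = 16000
t = np.arange(0, 0.2, 1.0 / fs)
tone = np.where(t >= 0.05, np.sin(2 * np.pi * 1000 * t), 0.0)
m = onset_map(tone, fs)
print(m.shape)  # (32, 3200); peak onset strength occurs near sample 0.05 * fs
```

Offset, frequency-transition, and periodicity maps could be sketched analogously (e.g., by rectifying the negative envelope derivative, or by autocorrelating each channel), which is the sense in which the environment described above collects a diverse family of such representations under one framework.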