[Apologies for cross-postings]
[Feel free to forward]
At the Research Center for Science and Technology for Art (CITAR - http://citar.ucp.pt) of the School of Arts of the Portuguese Catholic University (EA-UCP - http://www.artes.ucp.pt), we are happy to announce two research opportunities in the context of the research project
"A Computational Auditory Scene Analysis Framework for Sound Segregation in Music Signals",
funded by Portuguese national funds through FCT/MCTES (PIDDAC - Project Reference: PTDC/EIA-CCO/111050/2009).
(1) A 6-month (renewable up to 3 years) BI (Master) research fellowship (more detailed information at: http://www.eracareers.pt/opportunities/index.aspx?task=global&jobId=22297)
(2) A 3-month (renewable up to 6 months) BI research fellowship (more detailed information at: http://www.eracareers.pt/opportunities/index.aspx?task=showAnuncioOportunities&jobId=22298&idc=1)
This project will address the identification and segregation of sound events in "real-world" polyphonic music signals (including monaural audio signals). The goal is to individually characterize the different sound events comprising the polyphonic mixture, and to use this structured representation to improve the extraction of perceptually relevant information from complex audio and musical mixtures and to facilitate the manipulation of, and interaction with, complex music signals. The project will follow a Computational Auditory Scene Analysis (CASA) approach to modeling perceptual grouping in music listening. This approach is inspired by current knowledge of how listeners perceive sound events in music signals, be they music notes, harmonic textures, melodic contours, instruments or other types of events, and it requires a multidisciplinary approach to the problem.
Several software and hardware prototypes will be developed within the scope of the project. These should integrate sound analysis algorithms with human-computer interaction systems for the manipulation and evaluation of sound segregation (e.g. graphical, multi-touch, tangible, gesture-based, sonic or other types of natural user interfaces).
The project will also include the recording, mixing, annotation (e.g. metadata about each music piece, including MIDI information about the musical content) and storage of a music dataset to be used for testing and evaluating the software modules developed within the scope of the project. This dataset will include music pieces (covering various musical genres, instruments, etc.) played live by musicians in a recording studio.
The successful applicants will be mainly involved in developing the software modules for sound analysis/segregation and the user interfaces, as well as in creating the evaluation datasets.
The researchers will join an international research team (namely Dr. Luis Gustavo Martins - CITAR/EA-UCP, Dr. George Tzanetakis - UVic, Dr. Mathieu Lagrange - IRCAM, Dr. Fabien Gouyon - INESC Porto, Dr. Stephen McAdams - CIRMMT/McGill, Dr. Jorge Cardoso - FEUP, Dr. Aníbal Ferreira - FEUP) and will have the opportunity to collaborate on several funded research projects with a number of international partners.
The work will be conducted at the Research Center for Science and Technology for Art (CITAR) of the Universidade Católica Portuguesa - Centro Regional do Porto, Portugal, under the supervision of Dr. Luís Gustavo Martins.
Preferred start of contract: 4 April 2011
Location: Porto, Portugal
Working language: English
Detailed information about the available positions can be found at the URLs listed above.
For more information on the grants and the application procedure, please contact me directly by replying to this email.
Luís Gustavo Martins
Sound and Image Department
School of Arts of the Portuguese Catholic University