Call for Papers - IEEE Workshop on Broadcast and User-generated Content Recognition and Analysis (BRUREC) at ICME 2013 (jinyu han )


Subject: Call for Papers - IEEE Workshop on Broadcast and User-generated Content Recognition and Analysis (BRUREC) at ICME 2013
From:    jinyu han  <jinyuhan2008@xxxxxxxx>
Date:    Thu, 10 Jan 2013 23:42:19 -0800
List-Archive:<http://lists.mcgill.ca/scripts/wa.exe?LIST=AUDITORY>

Dear Auditory List,

On behalf of the BRUREC 2013 Committee, we would like to invite you to be part of the 1st IEEE International Workshop on Broadcast and User-generated Content Recognition and Analysis (BRUREC - http://www.BRUREC.org), held in conjunction with ICME 2013 (http://www.icme2013.org), July 15-19 in San Jose, California, USA.

A PDF call for papers is available at: https://sites.google.com/site/icmebrurec/CfPs_BRUREC.pdf?attredirects=0&d=1. If you have any enquiries, please get in touch with me (jhan@xxxxxxxx) or email BRUREC2013@xxxxxxxx. Please help us circulate this CfP widely and forgive any cross-postings.

Best regards,
Jinyu Han

----------------------------------------------------------------------------------------------
*************** FIRST CALL FOR PAPERS ***************
----------------------------------------------------------------------------------------------
*** 1st IEEE International Workshop on Broadcast and User-generated Content Recognition and Analysis (BRUREC) at ICME 2013 ***
----------------------------------------------------------------------------------------------
***** July 15-19, 2013 • Fairmont Hotel, San Jose, USA | http://www.BRUREC.org *****
----------------------------------------------------------------------------------------------

In the past decade, we have seen great advances in visual and acoustic content recognition and analysis. Audio fingerprinting, for example, has led to many successful commercial applications and has fundamentally changed the way people listen to, share, and store music. Meanwhile, research and development in visual content identification has reached a watershed, and large-scale commercial applications have started to emerge.
Automated Content Recognition (ACR) has found its way into consumer applications, and major Hollywood movie studios and TV networks have adopted ACR to track and manage their content at scale. While broadcast-quality video and audio analysis is already at an advanced stage, consumer-produced content analysis is not, and great potential lies in the connection between broadcast media and user-generated content. In the last few years, therefore, user-generated content has attracted increasing attention from both academia and industry.

This workshop aims to extend the ICME conference by focusing on algorithms, systems, applications, and standards for content recognition and analysis that can be applied across the video and audio domains. BRUREC will cover all aspects of multimedia data generated by broadcast media such as TV and radio, as well as user-generated content on platforms such as YouTube and Vimeo. The goal of the workshop is to bring together researchers and practitioners from both industry and academia working on content recognition and analysis across the video, audio, and multimedia domains, to discuss the latest advances, challenges, and unaddressed problems, and to exchange views and ideas on related technologies and applications, thereby advancing the state of the art in broadcast and user-generated content recognition and analysis.
---------------------------------------------------------------------
***** Topics include but are not limited to: *****
---------------------------------------------------------------------
Content recognition and analysis algorithms and techniques:

- Video and audio fingerprinting for content identification
- Segmentation and classification of audio and visual content
- Image classification and recognition
- Features and descriptors for video and audio content
- Audio and visual content clustering
- Large database indexing, matching, and search
- Machine learning for content classification
- Evaluation of content-based identification and classification

Content identification systems and applications:

- Automated Content Recognition (ACR)
- TV-centric content analysis and recognition
- Emerging standards related to visual and audio content identification
- Automatic content recognition from TV or radio
- Implementation of content recognition systems and services
- Content identification in mobile devices
- Other content-recognition-based applications (e.g., recommendation and ad targeting)

----------------------------------------------------------
********** Important Dates **********
----------------------------------------------------------
- Paper Submission: March 7, 2013
- Notification of Acceptance: April 15, 2013
- Camera-Ready Paper Due: April 30, 2013

----------------------------------------------------------
********** Organizers **********
----------------------------------------------------------
Jinyu Han
Gracenote, Inc.
jhan@xxxxxxxx

Gerald Friedland
ICSI, UC Berkeley
fractor@xxxxxxxx

Peter Dunker
Gracenote, Inc.
pdunker@xxxxxxxx


This message came from the mail archive
/var/www/postings/2013/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University