
Re: LAST CALL for Papers - ICME Workshop on Broadcast and User-generated Content Recognition and Analysis (BRUREC)



Dear all,

The deadline for the ICME workshop on Broadcast and User-generated Content
Recognition and Analysis (BRUREC) has been extended to March 14th. You can submit and
revise your paper until then. Please forgive any cross-listings.

Best regards,
BRUREC organizers.

On Fri, Mar 1, 2013 at 11:22 PM, jinyu han <jinyuhan2008@xxxxxxxxx> wrote:
> Dear all,
>
> This is just a reminder that the deadline for ICME BRUREC is one week
> from now. If you work on related research topics and missed the ICME
> regular paper deadline, please consider submitting your work to the
> workshop below. The page limit is the same as for ICME regular
> papers (6 pages). Accepted papers will be included in the conference
> proceedings and published on IEEE Xplore.
>
> Best regards,
> BRUREC organizers
>
> On Fri, Feb 22, 2013 at 9:51 PM, jinyu han <jinyuhan2008@xxxxxxxxx> wrote:
>> Dear list subscriber,
>>
>> Please help us circulate this CfP widely and forgive any cross-postings.
>>
>> The deadline for BRUREC (March 7th) is approaching. Both short papers (up to 4
>> pages) and long papers (up to 6 pages) are welcome. All
>> accepted papers will be included in the conference proceedings and
>> published by IEEE. Online submission is open at
>> https://cmt.research.microsoft.com/ICMEW2013. To make sure your paper
>> is submitted to BRUREC for review, please select the workshop track
>> '1st IEEE International Workshop on Broadcast and User-Generated
>> Content Recognition and Analysis (BRUREC)' when creating a new
>> submission.
>>
>>
>> ----------------------------------------------------------------------------------------------
>> *************** LAST CALL FOR PAPERS ***************
>> ----------------------------------------------------------------------------------------------
>>
>> ------------------------------------------------------------------------------------------------------------------------------
>> *** 1st IEEE International Workshop on Broadcast and User-generated
>> Content Recognition and Analysis (BRUREC) at ICME 2013***
>> ------------------------------------------------------------------------------------------------------------------------------
>> ***** July 15-19, 2013 • Fairmont Hotel, San Jose, USA |
>> http://www.BRUREC.org *****
>> ------------------------------------------------------------------------------------------------------------------------------
>>
>> In the past decade, we have seen great advances in the area of
>> visual and acoustic content recognition and analysis. Audio
>> fingerprinting, for example, has led to many successful commercial
>> applications and fundamentally changed the way people listen to, share,
>> and store music. Meanwhile, research and development in visual
>> content identification has reached a watershed, and large-scale
>> commercial applications have started to emerge. Automated Content
>> Recognition (ACR) has found its way into consumer applications,
>> and major Hollywood movie studios and TV
>> networks have adopted ACR to track and manage their content at large
>> scale. While broadcast-quality video and audio analysis is already at
>> an advanced stage, consumer-produced content analysis is not. Great
>> potential lies in the media connectivity between broadcast media and
>> user-generated content. Therefore, in the last few years,
>> user-generated content has attracted increasing attention from both
>> academia and industry.
>>
>> This workshop aims to extend the ICME conference by focusing on
>> algorithms, systems, applications, and standards for content
>> recognition and analysis that can be applied across the video and audio
>> domains. BRUREC will cover all aspects of multimedia data generated by
>> broadcast media such as TV and radio, as well as user-generated content
>> on platforms such as YouTube and Vimeo. The goal of the workshop is to
>> bring together researchers and practitioners from industry and academia
>> working on content recognition and analysis across the video, audio,
>> and multimedia domains to discuss the latest advances, challenges, and
>> unaddressed problems, and to exchange views and ideas on related
>> technologies and applications, thereby advancing the state of the art
>> of broadcast and user-generated content recognition and analysis.
>>
>> ---------------------------------------------------------------------
>> ***** Topics include but are not limited to: *****
>> ---------------------------------------------------------------------
>> Content recognition and analysis algorithms and techniques:
>>
>> - Video and audio fingerprinting for content identification
>> - Segmentation and classification of audio and visual content
>> - Image classification and recognition
>> - Features and descriptors for video and audio content
>> - Audio and visual content clustering
>> - Large database indexing, matching, and search
>> - Machine learning for content classification
>> - Evaluation of content-based identification and classification
>>
>> Content identification systems and applications:
>> - Automated Content Recognition (ACR)
>> - TV-centric content analysis and recognition
>> - Emerging standards related to visual and audio content identification
>> - Automatic content recognition from TV or radio
>> - Implementation of content recognition systems and services
>> - Content identification in mobile devices
>> - Other content recognition based applications (e.g., recommendation
>> and ad targeting)
>>
>> ----------------------------------------------------------
>> ********** Important Dates **********
>> ----------------------------------------------------------
>> -Paper Submission: March 7, 2013
>> -Notification of Acceptance: April 15, 2013
>> -Camera-Ready Paper Due: April 30, 2013
>>
>> ----------------------------------------------------------
>> ********** Organizers **********
>> ----------------------------------------------------------
>>
>> Jinyu Han
>> Gracenote, Inc.
>> jhan@xxxxxxxxxxxxx
>>
>> Gerald Friedland
>> ICSI, UC Berkeley
>> fractor@xxxxxxxxxxxxxxxxx
>>
>> Peter Dunker
>> Gracenote, Inc.
>> pdunker@xxxxxxxxxxxxx
>>
>> ------------------------------------------------------------------------
>> ********** Technical Program Committee **********
>> ------------------------------------------------------------------------
>>
>> Ching-Wei Chen (Gracenote, USA)
>> Jingdong Chen (Northwestern Polytechnical University, China)
>> Ngai-Man Cheung (Singapore University of Technology and Design, Singapore)
>> Oscar Celma (Gracenote, USA)
>> Trista Chen (Cognitive Networks, USA)
>> Roger Dannenberg (Carnegie Mellon University, USA)
>> Lixin Duan (SAP Research, Singapore)
>> Yuan Dong (Orange Labs, France Telecom China)
>> Zhiyao Duan (Northwestern University, USA)
>> Dan Ellis (Columbia University, USA)
>> Matthias Gruhne (Bach Technologies, Germany)
>> Peter Grosche (Huawei European Research Center, Germany)
>> Congcong Li (Google, USA)
>> Lie Lu (Dolby Labs, China)
>> Xiaofan Lin (A9.com, USA)
>> Gautham Mysore (Adobe, USA)
>> Bryan Pardo (Northwestern University, USA)
>> Regu Radhakrishnan (Dolby Labs, USA)
>> Paris Smaragdis (University of Illinois at Urbana-Champaign, USA)
>> George Tzanetakis (University of Victoria, Canada)
>> Junsong Yuan (Nanyang Technological University, Singapore)
>> Honggang Zhang (Beijing University of Posts and Telecommunications, China)