[AUDITORY] [Jobs] Four research posts at CVSSP, University of Surrey (deadline 30 April) (Mark Plumbley)


Subject: [AUDITORY] [Jobs] Four research posts at CVSSP, University of Surrey (deadline 30 April)
From:    Mark Plumbley  <0000005fa4625f04-dmarc-request@xxxxxxxx>
Date:    Mon, 26 Apr 2021 10:07:18 +0000

Dear List,

Please forward to interested candidates (deadline 30 April 2021). Apologies for cross-posting.

Best wishes,
Mark Plumbley

---

Join a new research partnership with the BBC at the Centre for Vision, Speech and Signal Processing (CVSSP), University of Surrey: ai4me.surrey.ac.uk

Four research posts available in Audio-Visual AI, Computer Vision and Audio:

Research Fellow B: https://jobs.surrey.ac.uk/Vacancy.aspx?ref=015321
Research Fellow A: https://jobs.surrey.ac.uk/vacancy.aspx?ref=015121
2x Research Software Engineer/Research Assistant: https://jobs.surrey.ac.uk/vacancy.aspx?ref=015621

Closing date for applications: 30 April 2021

This is an exciting opportunity for outstanding researchers in Computer Vision, Audio and Audio-Visual AI to join CVSSP as part of a major new five-year research partnership with the BBC to realise Future Personalised Media Experiences.

The goal of the research partnership is to realise future personalised content creation and delivery at scale, for the public at home or on the move. CVSSP research will address the key challenges of personalised content creation and rendering by advancing computer vision and audio-visual AI to transform captured 2D video into object-based media. Research will advance automatic online understanding, reconstruction and neural rendering of complex dynamic real-world scenes and events. This will enable a new generation of personalised media content that adapts to user requirements and interests. The new partnership with the BBC and creative industry partners will position the UK to lead future personalised media experiences.

The Centre for Vision, Speech and Signal Processing (CVSSP) at the University of Surrey is ranked first in the UK for computer vision. The centre leads ground-breaking research in audio-visual AI and machine perception for the benefit of people and society, through technological innovations in healthcare, security, entertainment, robotics and communications. Over the past two decades, CVSSP has pioneered advances in 3D and 4D computer vision, spatial audio and audio-visual AI which have enabled award-winning technologies for content production in TV, film, games and immersive entertainment.

BBC R&D (bbc.co.uk/rd) has a worldwide reputation for developments in media technology going back over 90 years, and has worked closely with CVSSP for over 20 years. It has pioneered the development of object-based media, working closely with programme-makers and technology teams across the BBC. Recent work has included object-based audio delivery across multiple synchronised devices for sports and drama, and AI for recognising wildlife in natural history programmes.

This is an opportunity for outstanding researchers to join a world-renowned research centre at the start of a major new five-year research partnership.

Research Fellow B

The Research Fellow B will be an experienced researcher with an excellent track record of publication in leading academic forums and of post-doctoral research leadership. The successful candidate will take an active role in leading the research programme, contributing novel machine learning approaches to real-world dynamic scene understanding and reconstruction from video, and co-supervising post-doctoral and PhD researchers.

Research Fellow A

The Research Fellow A will hold a PhD in computer vision, audio and/or audio-visual AI, with a track record of publication in leading academic forums. The successful candidate will contribute novel machine learning approaches advancing audio-visual AI to transform video of real-world scenes into object-based representations and neural rendering. The post-holder will collaborate with the team and project partners to realise personalised media experiences.

2x Research Software Engineer/Research Assistant

The Research Software Engineer/Research Assistant will have experience of research and software development in computer vision, audio and machine learning. The post-holder will support the research programme, contributing to research and technologies which enable the transformation of video into audio-visual objects, the production of personalised media experiences, and object-based audio-visual rendering.

All posts are at the core of a research team working together with the BBC, University and industry partners to realise personalised object-based media experiences at scale for offline content and live events. These posts will enable individuals to advance knowledge in computer vision, audio and machine learning, and to raise their own academic and research profile by joining Europe's largest research centre in this field. All posts will initially be offered for a fixed term, extendable for the 5-year duration of the partnership.

Informal enquiries are welcomed by Professor Adrian Hilton by email (a.hilton@xxxxxxxx).
--
Prof Mark D Plumbley
Head of School of Computer Science and Electronic Engineering
Email: Head-of-School-CSEE@xxxxxxxx

Professor of Signal Processing
University of Surrey, Guildford, Surrey, GU2 7XH, UK
Email: m.plumbley@xxxxxxxx

PA: Kelly Green
Email: k.d.green@xxxxxxxx


This message came from the mail archive
src/postings/2021/
maintained by:
DAn Ellis <dpwe@ee.columbia.edu>
Electrical Engineering Dept., Columbia University