Meet our team

AI4S team

Professor Mark Plumbley

Principal Investigator

My research concerns AI for Sound: using machine learning and signal processing for analysis and recognition of sounds. My focus is on detection, classification and separation of acoustic scenes and events, particularly real-world sounds, using methods such as deep learning, sparse representations and probabilistic models.

Dr Helen Cooper

Project Officer and Facilities Manager

Within CVSSP, I am responsible for the day-to-day management of research projects, as well as coordinating the lab facilities and ensuring that the researchers have the equipment and tools they need to do their research.

Dr Thomas Deacon

Research Fellow in Design Research for Sound Sensing

I am a design researcher with a PhD in Media and Arts Technology from Queen Mary University of London. I have worked on Extended Reality and Spatial Audio at the Royal College of Art and as a user-experience prototyper and researcher at the VR startup Gravity Sketch. My participatory approach aims to create intuitive, accessible, delightful, and meaningful experiences using new technologies.

Dr Emily Corrigan-Kavanagh

Research Fellow in Design Research

I am a design researcher, practitioner and academic, with special interests in happiness, wellbeing, service design, home, visual communication, design for augmented paper, art therapy techniques, creative research methods, and exploring/making sense of subjective experiences.

Dr Arshdeep Singh

Research Fellow in Machine Learning for Sound

My research focuses on designing low computational complexity learning-based frameworks for audio classification. My research interests include signal processing, audio scene analysis, dictionary learning, machine learning and compression of neural networks for efficient inference.

Gabriel Bibbó

Research Engineer

With my background in electronics and signal processing, I work at the intersection between development and production, focusing on software and hardware solutions for research problems related to sound and AI.

Haohe Liu

PhD Student

The goal of my research is to develop new methods for automatic labelling of sound environments and events in broadcast audio, assisting production staff to find and search through content, and helping the general public access archive content. I am also working closely with the BBC R&D Audio Team on putting our audio recognition algorithms into production, such as incorporating machine labels into the BBC sound effects library.

James King

PhD Student

I focus on AutoML (Automated Machine Learning), information-theoretic machine learning, graph theory and their applications to audio. My research centres on developing novel algorithms for analysing soundscapes using techniques from machine listening and spatial audio. I strive to create efficient methods that can be used by both academic researchers and industry professionals for automated problem-solving in challenging acoustic environments.

Alumni

Andres Fernandez

Research Engineer

With my background in computer science and music, I work at the intersection between development and production, focusing on software solutions for research problems related to sound and AI.

Dr Marc Green

Research Fellow in Machine Learning for Sound

My research is focused on environmental soundscapes, with a view to utilising techniques from machine listening and spatial audio in their analysis.

Dr Yin Cao

Research Fellow

My research covers audio and speech signal processing and acoustics.