AI4S team
Professor Mark Plumbley
Principal Investigator
My research concerns AI for Sound: using machine learning and signal processing for the analysis and recognition of sounds. My focus is on detection, classification and separation of acoustic scenes and events, particularly real-world sounds, using methods such as deep learning, sparse representations and probabilistic models.
Dr Helen Cooper
Project Officer and Facilities Manager
Within CVSSP, I am responsible for the day-to-day management of research projects, as well as coordinating the lab facilities and ensuring that researchers have the equipment and tools they need for their research.
Dr Thomas Deacon
Research Fellow in Design Research for Sound Sensing
I am a design researcher with a PhD in Media and Arts Technology from Queen Mary University of London. I have worked on Extended Reality and Spatial Audio at the Royal College of Art and as a user-experience prototyper and researcher at the VR startup Gravity Sketch. My participatory approach aims to create intuitive, accessible, delightful, and meaningful experiences using new technologies.
Dr Emily Corrigan-Kavanagh
Research Fellow in Design Research
I am a design researcher, practitioner and academic, with special interests in happiness, wellbeing, service design, home, visual communication, design for augmented paper, art therapy techniques, creative research methods, and exploring/making sense of subjective experiences.
Dr Arshdeep Singh
Research Fellow in Machine Learning for Sound
My research focuses on designing low-computational-complexity learning-based frameworks for audio classification. My research interests include signal processing, audio scene analysis, dictionary learning, machine learning and the compression of neural networks for efficient inference.
Gabriel Bibbó
Research Engineer
With my background in electronics and signal processing, I work at the intersection between development and production, focusing on software and hardware solutions for research problems related to sound and AI.
Haohe Liu
PhD Student
The goal of my research is to develop new methods for automatic labelling of sound environments and events in broadcast audio, assisting production staff to find and search through content, and helping the general public access archive content. I'm also working closely with the BBC R&D Audio Team on putting our audio recognition algorithms into production, such as incorporating machine labels into the BBC sound effects library.
James King
PhD Student
I focus on AutoML (Automated Machine Learning), information-theoretic machine learning, graph theory and their applications to audio. My research centres on developing novel algorithms for analysing soundscapes using techniques from machine listening and spatial audio. I strive to create efficient methods that can be used by both academic researchers and industry professionals for automated problem-solving in challenging acoustic environments.
Alumni
Andres Fernandez
Research Engineer
With my background in computer science and music, I work at the intersection between development and production, focusing on software solutions for research problems related to sound and AI.
Dr Marc Green
Research Fellow in Machine Learning for Sound
My research is focused on environmental soundscapes, with a view to utilising techniques from machine listening and spatial audio in their analysis.
Dr Yin Cao
Research Fellow
Research on acoustics and audio and speech signal processing.