SS DNAP IJCNN 2019 : Special Session on Deep Neural Audio Processing at IJCNN 2019
Call For Papers
In many research fields, Deep Neural methods have reached state-of-the-art performance, superseding approaches that had been popular for decades. This paradigm shift has also reached the audio processing field, where Deep Neural methods have had a major impact in several research areas. In speech recognition, the advent of Deep Neural Networks yielded significant reductions in word error rates on many popular tasks. Similar benefits have been registered in speaker recognition and computational paralinguistics (e.g., emotion and speaker state recognition). Deep Neural methods have also found application in fields where data-driven methods were previously scarcely employed, such as single- and multi-channel speech enhancement and dereverberation. These technologies have not remained confined to academic research: they are also at the core of popular commercial products such as Amazon Echo, Google Home, and Apple HomePod. Deep Neural audio processing has likewise reached the music research field, with applications in music information retrieval, automated music generation, and style transfer. Processing of environmental sounds has gained particular attention in recent years, with Deep Neural methods achieving state-of-the-art performance in tasks such as acoustic monitoring, audio tagging, acoustic scene understanding, and sound separation, detection, and identification.
Although neural methods have achieved state-of-the-art performance in the aforementioned research fields, several challenges remain open. The robustness of recognition systems to environmental noise has a long history and, despite recent advances, remains a topic worthy of investigation. Adversarial attacks have proven capable of fooling state-of-the-art recognition models, posing important security issues. Deep Neural methods usually need large amounts of data to reach state-of-the-art performance; in some application scenarios, however, the training data available is scarce, and techniques such as few-shot learning and transfer learning must be adopted. In other contexts, the computational and memory resources of the target device are limited, and neural models cannot be deployed without modification. Model compression and knowledge distillation techniques have therefore been gaining significant attention in recent years, since they reduce the computational and memory burden with little or no performance degradation.
In light of this analysis, it is of great interest for the scientific community to understand how, and to what extent, novel Deep Neural Network-based techniques can be efficiently employed in Digital Audio. The aim of the session is thus to focus on the most recent advancements in this field and on their applicability to Digital Audio problems. Encouraged by the constantly increasing success of the session's previous editions at IJCNN 2014 in Beijing (China), IJCNN 2015 in Killarney (Ireland), IJCNN 2016 in Vancouver (Canada), IJCNN 2017 in Anchorage (Alaska, USA), and IJCNN 2018 in Rio de Janeiro (Brazil), the proposers are motivated to renew and exceed this experience and to build, in the long term, a solid reference for the Digital Audio field within the Computational Intelligence community.
Topics include, but are not limited to:
• Computational audio analysis
• Deep learning algorithms in digital audio
• Knowledge distillation in digital audio applications
• Transfer learning and few-shot learning in audio applications
• Music information retrieval
• Music content analysis
• Speech and speaker analysis and classification
• Neural methods for music/speech generation and voice conversion
• Generative Adversarial Networks for audio analysis and synthesis
• Privacy-preserving computational speech processing
• Audio source separation using deep models
• Sound event detection
• Acoustic novelty detection
• Acoustic scene analysis
• End-to-end learning for digital audio applications
• Single and multi-channel audio enhancement with neural networks
• Audio processing robust to adversarial attacks
• Unsupervised methods for audio analysis
• Attention-based topologies
• Explainability in deep learning for audio processing
Manuscripts intended for the special session should be submitted via the IJCNN 2019 paper submission website as regular submissions. All papers submitted to special sessions will undergo the same peer-review procedure as regular papers. Accepted papers will be included in the regular conference proceedings.
Paper submission guidelines: https://www.ijcnn.org/paper-submission-guidelines