
SS DNAP IJCNN 2020 : Special Session on Deep Neural Audio Processing at IEEE WCCI/IJCNN 2020


Link: https://wcci2020.org/
 
When Jul 19, 2020 - Jul 24, 2020
Where Glasgow, UK
Submission Deadline Jan 15, 2020
Notification Due Mar 15, 2020
Final Version Due Apr 15, 2020
Categories: deep learning, digital audio processing, computational audio analysis, speech and music synthesis
 

Call For Papers

In the last decade, deep learning methods have revolutionized several research fields, from audio processing to computer vision, from natural language processing to electrical signal processing in smart grids. In the audio processing field, deep neural methods have achieved unprecedented performance gains in areas that were traditionally a prerogative of pure signal processing methods, such as single and multi-channel speech enhancement or music synthesis.
Despite this success, several challenges remain open in the application of deep neural networks to this research field. For example, robustness to environmental noise is still an open problem in complex scenarios, such as far-field speech recognition with multiple microphone arrays. Another key challenge is related to the increasing ability of neural methods to create high-quality fake audio. This capability undermines the security of speaker verification systems and facilitates the diffusion of fake news, thus demanding appropriate countermeasures and interpretable models. A further open problem is the large amount of data needed to train deep neural networks and achieve state-of-the-art performance. In some application scenarios, however, data is scarce and techniques such as few-shot learning, transfer learning, and data augmentation must be adopted. As a final example, unsupervised learning of good representations of speech signals is still an open problem due to the complexity of their structure.
It is thus of great interest to the scientific community to understand how and to what extent novel Deep Neural Network-based techniques can be efficiently employed in digital audio processing. The aim of the session is to provide a forum where scientists and practitioners from industry can discuss the most recent advancements in this field and their applicability to Digital Audio problems. Encouraged by the steadily growing success of the previous editions at IJCNN 2014 in Beijing (China), IJCNN 2015 in Killarney (Ireland), IJCNN 2016 in Vancouver (Canada), IJCNN 2017 in Anchorage (Alaska, USA), IJCNN 2018 in Rio de Janeiro (Brazil), and IJCNN 2019 in Budapest (Hungary), the proposers of this session are motivated to continue and build upon this experience and to establish, in the long term, a solid reference within the Computational Intelligence community for the Digital Audio field.

Topics include, but are not limited to:
• Deep learning algorithms in digital audio
• Single and multi-channel audio enhancement with deep neural networks
• Source separation, localization, and tracking using deep models
• Music information retrieval and analysis
• Automatic speech and speaker recognition
• Speech and speaker analysis and classification
• Speaker diarization
• Real/synthetic speech discrimination
• Privacy-preserving computational speech processing
• Sound event detection and acoustic scene analysis
• Deep generative methods for music and speech synthesis
• Voice conversion
• Deep learning for music style and timbre transfer
• Audio augmented reality
• Immersive audio synthesis with deep learning
• Deep methods for sound personalization
• Deep learning for digital audio effects
• Knowledge distillation in digital audio applications
• Transfer learning, few-shot learning in digital audio processing
• Robust audio processing against adversarial attacks
• Interpretability in Deep Learning for audio processing
• Life-long learning/Continual learning for audio processing
• Generative Adversarial Networks for audio applications
• Digital audio processing with Graph Neural Networks
• Unsupervised learning of audio representations
• Automated Machine Learning (AutoML) for digital audio applications

Manuscripts intended for the special session should be submitted via the paper submission website of IJCNN 2020 as regular submissions. All papers submitted to special sessions will be subject to the same peer-review procedure as regular papers. Accepted papers will be part of the regular conference proceedings.
Paper submission guidelines: https://wcci2020.org/calls/#call-for-papers
For more information, please contact the Special Session organizers:
• Emanuele Principi, Università Politecnica delle Marche, Ancona, Italy, e.principi@univpm.it
• Aurelio Uncini, Università La Sapienza, Rome, Italy, aurel@ieee.org
• Björn Schuller, Imperial College London, UK & University of Augsburg, Germany, schuller@IEEE.org
• Stefano Squartini, Università Politecnica delle Marche, Ancona, Italy, s.squartini@univpm.it
