
SS DNAP IJCNN 2020: Special Session on Deep Neural Audio Processing at IEEE WCCI/IJCNN 2020


Link: https://wcci2020.org/
 
When: Jul 19, 2020 - Jul 24, 2020
Where: Glasgow, UK
Submission Deadline: Jan 15, 2020
Notification Due: Mar 15, 2020
Final Version Due: Apr 15, 2020
Categories: deep learning, digital audio processing, computational audio analysis, speech and music synthesis
 

Call For Papers

In the last decade, deep learning methods have revolutionized several research fields, from audio processing to computer vision, from natural language processing to electrical signal processing in smart grids. In the audio processing field, deep neural methods have achieved unprecedented performance gains in areas that were traditionally a prerogative of pure signal processing methods, such as single and multi-channel speech enhancement or music synthesis.
Despite this success, several challenges remain open in the application of deep neural networks to this research field. For example, robustness to environmental noise is still an open problem in complex scenarios, such as far-field speech recognition with multiple microphone arrays. Another key challenge stems from the increasing ability of neural methods to create high-quality fake audio. This capability undermines the security of speaker verification systems and facilitates the diffusion of fake news, thus demanding appropriate countermeasures and interpretable models. A further open problem is the large amount of data required to train deep neural networks to state-of-the-art performance. In some application scenarios, however, data is scarce, and techniques such as few-shot learning, transfer learning, and data augmentation must be adopted. As a final example, unsupervised learning of good representations of speech signals is still an open problem due to the complexity of their structure.
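The data-scarcity point above is commonly addressed with waveform-level data augmentation. As a minimal illustrative sketch (not tied to any particular submission; the function names and parameters are hypothetical), two standard transformations are noise injection at a target signal-to-noise ratio and a time shift:

```python
import numpy as np

def add_noise(signal, snr_db, rng=None):
    """Add white Gaussian noise so the result has (approximately) the given SNR in dB."""
    rng = np.random.default_rng() if rng is None else rng
    signal_power = np.mean(signal ** 2)
    noise_power = signal_power / (10.0 ** (snr_db / 10.0))
    noise = rng.normal(0.0, np.sqrt(noise_power), size=signal.shape)
    return signal + noise

def time_shift(signal, shift):
    """Circularly shift the waveform by `shift` samples."""
    return np.roll(signal, shift)

# Example: augment a 1-second 440 Hz tone sampled at 16 kHz
sr = 16000
t = np.arange(sr) / sr
clean = np.sin(2 * np.pi * 440.0 * t)
noisy = add_noise(clean, snr_db=20, rng=np.random.default_rng(0))
shifted = time_shift(clean, 1600)  # shift by 100 ms
```

Augmentations of this kind enlarge a small training set by generating perturbed copies of each recording, which is one of the mitigation strategies the paragraph above mentions alongside few-shot and transfer learning.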
It is thus of great interest for the scientific community to understand how, and to what extent, novel deep-neural-network-based techniques can be efficiently employed in digital audio processing. The aim of the session is to provide a forum where scientists and industry practitioners can discuss the most recent advancements in this field and their applicability to digital audio problems. Encouraged by the growing success of its previous editions at IJCNN 2014 in Beijing (China), IJCNN 2015 in Killarney (Ireland), IJCNN 2016 in Vancouver (Canada), IJCNN 2017 in Anchorage (Alaska, USA), IJCNN 2018 in Rio de Janeiro (Brazil), and IJCNN 2019 in Budapest (Hungary), the proposers aim to build on that experience and establish, in the long term, a solid reference point for the digital audio field within the Computational Intelligence community.

Topics include, but are not limited to:
• Deep learning algorithms in digital audio
• Single and multi-channel audio enhancement with deep neural networks
• Source separation, localization, and tracking using deep models
• Music information retrieval and analysis
• Automatic speech and speaker recognition
• Speech and speaker analysis and classification
• Speaker diarization
• Real/synthetic speech discrimination
• Privacy-preserving computational speech processing
• Sound event detection and acoustic scene analysis
• Deep generative methods for music and speech synthesis
• Voice conversion
• Deep learning for music style and timbre transfer
• Audio augmented reality
• Immersive audio synthesis with deep learning
• Deep methods for sound personalization
• Deep learning for digital audio effects
• Knowledge distillation in digital audio applications
• Transfer learning and few-shot learning in digital audio processing
• Audio processing robust to adversarial attacks
• Interpretability in Deep Learning for audio processing
• Lifelong/continual learning for audio processing
• Generative Adversarial Networks for audio applications
• Digital audio processing with Graph Neural Networks
• Unsupervised learning of audio representations
• Automated Machine Learning (AutoML) for digital audio applications

Manuscripts intended for the special session should be submitted via the paper submission website of IJCNN 2020 as regular submissions. All papers submitted to special sessions will be subject to the same peer-review procedure as regular papers. Accepted papers will be part of the regular conference proceedings.
Paper submission guidelines: https://wcci2020.org/calls/#call-for-papers
For more information, please contact the Special Session organizers:
• Emanuele Principi, Università Politecnica delle Marche, Ancona, Italy, e.principi@univpm.it
• Aurelio Uncini, Università La Sapienza, Rome, Italy, aurel@ieee.org
• Björn Schuller, Imperial College London, UK & University of Augsburg, Germany, schuller@IEEE.org
• Stefano Squartini, Università Politecnica delle Marche, Ancona, Italy, s.squartini@univpm.it
