
SS DNAP IJCNN 2020 : Special Session on Deep Neural Audio Processing at IEEE WCCI/IJCNN 2020


Link: https://wcci2020.org/
 
When Jul 19, 2020 - Jul 24, 2020
Where Glasgow, UK
Submission Deadline Jan 15, 2020
Notification Due Mar 15, 2020
Final Version Due Apr 15, 2020
Categories: deep learning, digital audio processing, computational audio analysis, speech and music synthesis
 

Call For Papers

In the last decade, deep learning methods have revolutionized several research fields, from audio processing to computer vision, from natural language processing to electrical signal processing in smart grids. In the audio processing field, deep neural methods have achieved unprecedented performance gains in areas that were traditionally a prerogative of pure signal processing methods, such as single and multi-channel speech enhancement or music synthesis.
Despite this success, several challenges remain open in the application of deep neural networks to this research field. For example, robustness to environmental noise is still an open problem in complex scenarios, such as far-field speech recognition with multiple microphone arrays. Another key challenge is the increasing ability of neural methods to create high-quality fake audio. This capability undermines the security of speaker verification systems and facilitates the diffusion of fake news, thus demanding appropriate countermeasures and interpretable models. A further open problem is the large amount of data needed to train deep neural networks and achieve state-of-the-art performance; in some application scenarios data is scarce, and techniques such as few-shot learning, transfer learning, and data augmentation must be adopted. As a final example, unsupervised learning of good representations of speech signals is still an open problem due to the complexity of their structure.
It is thus of great interest for the scientific community to understand how, and to what extent, novel deep neural network-based techniques can be efficiently employed in digital audio processing. The aim of the session is to provide a forum where scientists and industry practitioners can discuss the most recent advancements in this field and their applicability to Digital Audio problems. Encouraged by the steadily growing success of the session at IJCNN 2014 in Beijing (China), IJCNN 2015 in Killarney (Ireland), IJCNN 2016 in Vancouver (Canada), IJCNN 2017 in Anchorage (Alaska, USA), IJCNN 2018 in Rio de Janeiro (Brazil), and IJCNN 2019 in Budapest (Hungary), the proposers aim to continue and surpass that experience and to build, in the long term, a solid reference within the Computational Intelligence community for the Digital Audio field.

Topics include, but are not limited to:
• Deep learning algorithms in digital audio
• Single and multi-channel audio enhancement with deep neural networks
• Source separation, localization, and tracking using deep models
• Music information retrieval and analysis
• Automatic speech and speaker recognition
• Speech and speaker analysis and classification
• Speaker diarization
• Real/synthetic speech discrimination
• Privacy-preserving computational speech processing
• Sound event detection and acoustic scene analysis
• Deep generative methods for music and speech synthesis
• Voice conversion
• Deep learning for music style and timbre transfer
• Audio augmented reality
• Immersive audio synthesis with deep learning
• Deep methods for sound personalization
• Deep learning for digital audio effects
• Knowledge distillation in digital audio applications
• Transfer learning and few-shot learning in digital audio processing
• Audio processing robust to adversarial attacks
• Interpretability in Deep Learning for audio processing
• Life-long learning/Continual learning for audio processing
• Generative Adversarial Networks for audio applications
• Digital audio processing with Graph Neural Networks
• Unsupervised learning of audio representations
• Automated Machine Learning (AutoML) for digital audio applications

Manuscripts intended for the special session should be submitted via the paper submission website of IJCNN 2020 as regular submissions. All papers submitted to special sessions will be subject to the same peer-review procedure as regular papers. Accepted papers will be part of the regular conference proceedings.
Paper submission guidelines: https://wcci2020.org/calls/#call-for-papers
For more information, please contact the Special Session organizers:
• Emanuele Principi, Università Politecnica delle Marche, Ancona, Italy, e.principi@univpm.it
• Aurelio Uncini, Università La Sapienza, Rome, Italy, aurel@ieee.org
• Björn Schuller, Imperial College London, UK & University of Augsburg, Germany, schuller@IEEE.org
• Stefano Squartini, Università Politecnica delle Marche, Ancona, Italy, s.squartini@univpm.it
