
Distributed Neural Networks@IJCNN 2016 : Special Session on Distributed Learning Algorithms for Neural Networks


Link: http://ispac.diet.uniroma1.it/ijcnn-2016-special-session-distributed-nn/
 
When: Jul 25, 2016 - Jul 29, 2016
Where: Vancouver, Canada
Submission Deadline: Jan 31, 2016
Notification Due: Mar 15, 2016
Final Version Due: Apr 15, 2016
Categories: artificial intelligence, neural networks, distributed systems, machine learning
 

Call For Papers

[Apologies if you receive multiple copies of this CFP]

--------------------------------------------------------------------------
Call for papers: IJCNN 2016 Special Session
DISTRIBUTED LEARNING ALGORITHMS FOR NEURAL NETWORKS
Vancouver, Canada, 25-29 July 2016
http://ispac.diet.uniroma1.it/ijcnn-2016-special-session-distributed-nn
--------------------------------------------------------------------------

Scope and motivations
--------------------------------------------------------------------------
In the era of big data and pervasive computing, datasets are commonly distributed over multiple, geographically distinct sources of information (e.g. distributed databases). In this setting, a major challenge is designing adaptive training algorithms that operate in a distributed fashion, with only partial or no reliance on a centralized authority. Distributed learning is indeed an important step toward handling inference in several research areas, including sensor networks, parallel and commodity computing, distributed optimization, and many others.

Based on the idea that all the aforementioned research fields share many fundamental questions and mechanisms, this special session aims to bring together advances in distributed training of neural networks. We welcome papers proposing novel algorithms and protocols for distributed training under multiple constraints, analyses of their theoretical aspects, and applications to clustering, regression, and classification over multiple data sources.
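
To illustrate the kind of fully decentralized training the session targets, the following minimal sketch (not part of the original call; the data, ring topology, step size, and iteration count are made up for illustration) trains a linear model on several agents by alternating local gradient steps with consensus averaging among neighbors, with no central server:

# Illustrative sketch: decentralized training via local gradient steps
# plus gossip/consensus averaging over a ring of agents.
import numpy as np

rng = np.random.default_rng(0)
n_agents, n_features, n_local = 4, 5, 50

# Each agent holds a private local dataset drawn from the same linear model.
w_true = rng.normal(size=n_features)
data = []
for _ in range(n_agents):
    X = rng.normal(size=(n_local, n_features))
    y = X @ w_true + 0.1 * rng.normal(size=n_local)
    data.append((X, y))

# Ring topology: each agent mixes with its two neighbors (doubly stochastic weights).
W = np.zeros((n_agents, n_agents))
for i in range(n_agents):
    W[i, i] = 0.5
    W[i, (i - 1) % n_agents] = 0.25
    W[i, (i + 1) % n_agents] = 0.25

models = np.zeros((n_agents, n_features))
step = 0.05
for _ in range(200):
    # 1) Local gradient step on each agent's private data (squared loss).
    grads = np.stack([X.T @ (X @ w - y) / n_local for (X, y), w in zip(data, models)])
    models = models - step * grads
    # 2) Consensus step: average parameters with neighbors only, no central authority.
    models = W @ models

print("max deviation from true weights:", np.abs(models - w_true).max())

The same local-update-plus-mixing pattern underlies many of the algorithms of interest here, with neural network gradients in place of the linear-model gradient and realistic communication constraints in place of the fixed ring.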

Topics
--------------------------------------------------------------------------
The topics of interest to be covered by this Special Session include, but are not limited to:
* Distributed algorithms for training neural networks and kernel methods
* Theoretical aspects of distributed learning (e.g. fundamental communication constraints)
* Learning on commodity computing architectures and parallel execution frameworks (e.g. MapReduce, Storm)
* Energy efficient distributed learning
* Distributed semi-supervised and active learning
* Novel results on distributed optimization for machine learning
* Cooperative and competitive multi-agent learning
* Learning in realistic wireless sensor networks
* Distributed systems with privacy concerns (e.g. healthcare systems)

Important dates
--------------------------------------------------------------------------
* Paper submission deadline: January 15, 2016
* Notification of paper acceptance: March 15, 2016
* Camera-ready deadline: April 15, 2016
* Conference: July 25-29, 2016

Further details
--------------------------------------------------------------------------
For additional details, please visit the special session's website, or contact one of the organizers:
Massimo Panella, Sapienza University of Rome (massimo [dot] panella [at] uniroma1 [dot] it).
Simone Scardapane, Sapienza University of Rome (simone [dot] scardapane [at] uniroma1 [dot] it).
