
AAAI22-W 2022 : AAAI-22 Workshop: Learning Network Architecture During Training


Link: https://www.cmu.edu/epp/patents/events/aaai22/
 
When Feb 28, 2022 - Feb 28, 2022
Where online
Submission Deadline Nov 26, 2021
Notification Due Dec 3, 2021
Final Version Due Dec 31, 2021
Categories    machine learning   neural networks   neural architecture search
 

Call For Papers

A fundamental problem in the use of artificial neural networks is that the first step is to guess the network architecture. Hand-tuning an architecture is very time-consuming and rarely yields an optimal result. Hyperparameters such as the number of layers, the number of nodes in each layer, the pattern of connectivity, and the presence and placement of elements such as memory cells, recurrent connections, and convolutional elements are all selected manually. If the architecture turns out to be ill-suited to the task, the user must repeatedly adjust it and retrain the network until an acceptable architecture has been obtained.

There is now a great deal of interest in finding better alternatives to this scheme. Options include pruning a trained network or training many networks automatically. In this workshop we focus on a contrasting approach: to learn the architecture during training. This topic encompasses forms of Neural Architecture Search (NAS) in which the performance properties of each architecture, after some training, are used to guide the selection of the next architecture to be tried. This topic also encompasses techniques that augment or alter a network as the network is trained. Examples of the latter include the Cascade Correlation algorithm and other methods that incrementally build or modify a neural network during training, as needed for the problem at hand.
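To make the scope concrete, the following is a minimal sketch, in NumPy, of the grow-during-training idea: a one-hidden-layer regression network that adds a hidden unit whenever the training loss plateaus. It is an illustrative toy under assumed settings (toy sine-wave data, tanh hidden units, a simple plateau test with made-up thresholds), not the Cascade Correlation algorithm itself, which additionally trains a pool of candidate units on a correlation objective and freezes their input weights.

# Illustrative only: grow a tiny regression network during training.
# Not Cascade Correlation -- no candidate pool, no weight freezing.
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: y = sin(3x), 1-D input (assumed for illustration).
X = rng.uniform(-1, 1, size=(200, 1))
y = np.sin(3 * X)

def init_unit(n_in):
    # Random input weights for one new hidden unit.
    return rng.normal(0, 0.5, size=(n_in, 1))

# Start with a single hidden unit.
W1 = init_unit(1)                   # input -> hidden, shape (1, h)
b1 = np.zeros((1, 1))
W2 = rng.normal(0, 0.5, (1, 1))     # hidden -> output, shape (h, 1)
b2 = np.zeros((1, 1))

lr = 0.05
prev_loss, patience, stall = np.inf, 200, 0

for step in range(5000):
    # Forward pass: tanh hidden layer, linear output.
    H = np.tanh(X @ W1 + b1)
    pred = H @ W2 + b2
    err = pred - y
    loss = np.mean(err ** 2)

    # Backward pass for mean-squared error, plain gradient descent.
    d_pred = 2 * err / len(X)
    dW2 = H.T @ d_pred
    db2 = d_pred.sum(0, keepdims=True)
    dH = d_pred @ W2.T * (1 - H ** 2)
    dW1 = X.T @ dH
    db1 = dH.sum(0, keepdims=True)
    W1 -= lr * dW1; b1 -= lr * db1
    W2 -= lr * dW2; b2 -= lr * db2

    # Grow the architecture when progress stalls.
    stall = stall + 1 if loss > prev_loss - 1e-5 else 0
    prev_loss = min(prev_loss, loss)
    if stall >= patience:
        W1 = np.hstack([W1, init_unit(X.shape[1])])   # new hidden unit
        b1 = np.hstack([b1, np.zeros((1, 1))])
        W2 = np.vstack([W2, np.zeros((1, 1))])        # output weight starts at 0
        stall = 0
        print(f"step {step}: loss {loss:.4f}, grew to {W1.shape[1]} hidden units")

print(f"final loss {loss:.4f} with {W1.shape[1]} hidden units")

One design point worth noting in this sketch: a newly added unit's output weight starts at zero, so growing the network never disturbs what has already been learned.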


Our goal is to build a stronger community of researchers exploring these methods, and to find synergies among these related approaches and alternatives. Eliminating the need to guess the right topology in advance of training is a prominent benefit of learning network architecture during training. Additional advantages are possible, including reduced computational cost to solve a problem, reduced time for the trained network to make predictions, reduced training-set size requirements, and avoidance of “catastrophic forgetting.” We would especially like to highlight approaches that are qualitatively different from current popular, but computationally intensive, NAS methods.

As deep learning problems become increasingly complex, network sizes must increase and other architectural decisions become critical to success. The deep learning community must often confront serious time and hardware constraints from suboptimal architectural decisions. The growing popularity of NAS methods demonstrates the community’s hunger for better ways of choosing or evolving network architectures that are well-matched to the problem at hand.

Topics include methods for learning network architecture during training, incrementally building neural networks during training, and new performance benchmarks for such methods. Novel approaches, preliminary results, and works in progress are encouraged.


Please see the workshop web page at https://www.cmu.edu/epp/patents/events/aaai22/

Related Resources

ICANN 2024   33rd International Conference on Artificial Neural Networks
AAAI 2023   The 37th AAAI Conference on Artificial Intelligence
JCICE 2024   2024 International Joint Conference on Information and Communication Engineering (JCICE 2024)
AAAI 2024   The 38th Annual AAAI Conference on Artificial Intelligence
ICDM 2024   24th Industrial Conference on Data Mining
CVIPPR 2024   2024 2nd Asia Conference on Computer Vision, Image Processing and Pattern Recognition
ECCV 2024   European Conference on Computer Vision
ACM-Ei/Scopus-CCISS 2024   2024 International Conference on Computing, Information Science and System (CCISS 2024)
MLANN 2024   2024 2nd Asia Conference on Machine Learning, Algorithms and Neural Networks (MLANN 2024)