DTL 2026 : The Fourth International Workshop on Deep and Transfer Learning
Link: https://mcna-conference.org/2026/Workshops/DTL2026/

Call For Papers
Co-located with the 9th International Conference on Modern Computing, Networking and Applications (MCNA2026)

Deep learning has fundamentally transformed computer science, serving as the backbone for breakthroughs in computer vision, natural language processing (NLP), speech recognition, and robotics. While the efficacy of deep and complex architectures, such as Transformers and Recurrent Convolutional Neural Networks, is undeniable, the field faces significant hurdles regarding computational cost, data dependency, and adaptability. As we move toward 2026, the focus has shifted from training models from scratch to adapting robust pre-trained models. Transfer Learning and Multi-task Learning have emerged as critical methodologies for exploiting available data, adapting previously learned knowledge to emerging domains with limited labeled datasets. Simultaneously, Deep Reinforcement Learning (DRL) continues to evolve, creating systems capable of autonomous decision-making and real-world adaptation through trial-and-error and reward optimization. Despite rapid progress, many challenges remain unsolved, particularly regarding resource efficiency, domain adaptation in dynamic environments, and the integration of generative capabilities.

The DTL 2026 Workshop aims to bring together researchers working at the intersection of Deep Learning, Reinforcement Learning, and Transfer Learning. We seek to bridge the gap between theory and practice by providing a platform for researchers and practitioners to discuss novel architectures, critique current theories, and share results on adapting models to new tasks efficiently.

Topics of Interest

We invite the submission of original papers on all topics related to Deep Learning, Deep Reinforcement Learning, and Transfer/Multi-task Learning. Given the current landscape, we have a special interest in the following areas:

- Deep Learning Foundations & Applications
- Deep Reinforcement Learning (DRL)
- Transfer and Multi-task Learning
- Federated Learning and Privacy-Preserving AI
- Resource-Efficient Deep Learning: Green AI, compression, and edge computing
- Robustness and Safety: dataset bias, concept drift, and adversarial robustness
- Systems Management: deep learning for network resource management
- Benchmarks, open-source packages, and reproducible research
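
As a purely illustrative example of the transfer-learning setting described above, the sketch below freezes a pre-trained backbone and fine-tunes only a new task head on a small labeled target dataset. It assumes PyTorch and torchvision are available; the ResNet-18 backbone, the 10-class target task, and the fine_tune_step helper are hypothetical choices for the example, not workshop requirements.

    import torch
    import torch.nn as nn
    from torchvision import models

    # Load a backbone pre-trained on ImageNet (the assumed source domain).
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

    # Freeze previously learned representations so only the new head adapts.
    for param in model.parameters():
        param.requires_grad = False

    # Replace the classifier with a head for the (hypothetical) 10-class target task.
    model.fc = nn.Linear(model.fc.in_features, 10)

    # Optimize only the new head's parameters during fine-tuning.
    optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
    criterion = nn.CrossEntropyLoss()

    def fine_tune_step(inputs: torch.Tensor, labels: torch.Tensor) -> float:
        """One gradient step on a small labeled target-domain batch."""
        model.train()
        optimizer.zero_grad()
        loss = criterion(model(inputs), labels)
        loss.backward()
        optimizer.step()
        return loss.item()

Variants of this recipe (progressively unfreezing layers, adapter modules, multi-task heads) are exactly the design space on which the workshop invites contributions.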