TX4Nets 2026: 3rd International Workshop on Trustworthy and eXplainable Artificial Intelligence for Networks

Link: https://sites.google.com/view/tx4nets2026
Call For Papers | |||||||||||||||
Artificial Intelligence (AI) is increasingly shaping the design, management, and optimization of communication networks, enabling advanced automation, improved service provisioning, and enhanced operational efficiency. Despite these advances, the adoption of AI-driven solutions in operational telecom networks remains cautious. Network operators continue to raise concerns regarding the reliability, transparency, and trustworthiness of Machine Learning (ML) and AI models, particularly when deployed in safety-critical and mission-critical infrastructures. The black-box nature of many AI techniques, coupled with limited guarantees on robustness, reliability, and accountability, remains a key barrier to their widespread adoption in real-world networking environments.
Building on the success of its previous editions, the Third International Workshop on Trustworthy and eXplainable Artificial Intelligence for Networks (TX4Nets) aims to foster research and innovation at the intersection of AI and networking, with a strong emphasis on trust, interpretability, and operational readiness. The workshop will bring together researchers, practitioners, and industry experts to discuss cutting-edge methodologies, practical solutions, and deployment experiences that enable transparent, reliable, secure, and efficient AI-driven network systems. TX4Nets focuses on the foundational pillars of trustworthy AI, including transparency, explainability, robustness, reliability, adaptability, security, data privacy, and computational efficiency, and investigates their impact on the automation and optimization of communication networks.

In addition to peer-reviewed technical sessions, the workshop will feature an invited keynote and a tutorial on Explainable AI for networks. The tutorial is specifically designed for non-experts in XAI, with the goal of lowering the entry barrier and promoting the practical adoption of explainability techniques in networking applications.

Topics of Interest

Researchers are encouraged to submit original research contributions in all major areas related to trustworthy and explainable AI for communication networks, including but not limited to:

- Trustworthy AI for communication networks, including network management, control, and automation
- Reliability, robustness, and safety of AI models for networking applications
- Explainable Artificial Intelligence (XAI) methods and frameworks for networks
- XAI for enhancing trust, transparency, and accountability in AI-driven network systems
- Human-in-the-Loop and human-centered AI for communication networks
- Concept-based, interpretable, and hybrid AI models for networking applications
- Behavioral verification, validation, and testing of AI models in networked systems
- Causal machine learning and causal reasoning for networking use cases
- Explainable Reinforcement Learning for network control and optimization
- Explainable generative AI for networking applications
- Fairness-aware and ethical AI, including fair resource allocation in communication networks
- Privacy-, security-, and resilience-aware AI for communication networks
- XAI for network security, intrusion detection, and threat mitigation
- XAI-driven network performance optimization
- XAI for Edge, Cloud, and Internet-of-Things (IoT) environments
- XAI for federated learning-based solutions in 5G/6G and future networks
- XAI for digital twin-based solutions in 5G/6G and future networks
- Case studies, real-world deployments, and experimental evaluations of trustworthy and explainable AI in networks
- Interoperability, standardization, and best practices for AI in communication networks
- Regulatory frameworks and compliance aspects for AI in communication networks