Posted by organizer: koo_ec

AI Safety 2024 : Special Issue for the Journal Frontiers in Robotics and AI on AI Safety: Safety Critical Systems


Link: https://www.frontiersin.org/research-topics/57900/ai-safety-safety-critical-systems
 
When N/A
Where N/A
Abstract Registration Due Nov 13, 2023
Submission Deadline Mar 12, 2024
Notification Due Apr 28, 2024
Final Version Due Jun 20, 2024
Categories: artificial intelligence, deep learning, safety, software, safe ai
 

Call For Papers

A robotic system is autonomous when it can operate in a real-world environment for an extended time without being controlled by humans. As artificial intelligence (AI) continues to advance, it is increasingly applied in safety-critical autonomous systems to perform complex tasks where failures can have catastrophic consequences. Examples of such safety-critical autonomous systems include self-driving cars, surgical robots, and unmanned aerial vehicles operating in urban environments.

Therefore, as AI technology becomes more pervasive, it is crucial to address the challenges associated with deploying AI in safety-critical systems. These systems must adhere to stringent safety requirements to ensure the well-being of individuals and the environment.

Despite the great success of AI, the use of Deep Learning models presents new dependability challenges, such as the lack of well-defined specifications, the black-box nature of the models, the high dimensionality of the data, and the over-confidence of neural networks on out-of-distribution data.
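
To make the last of these challenges concrete, the short Python sketch below (illustrative only; it is not part of the call, and the scikit-learn classifier and toy data are assumptions chosen for brevity) shows how a simple classifier can assign a near-certain probability to an input far outside its training distribution, because softmax/logistic confidence grows with distance from the decision boundary rather than with similarity to the training data.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy training data: two well-separated 2-D Gaussian blobs.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(-2.0, 0.5, size=(100, 2)),
               rng.normal(+2.0, 0.5, size=(100, 2))])
y = np.array([0] * 100 + [1] * 100)

clf = LogisticRegression().fit(X, y)

in_dist = np.array([[2.0, 2.0]])    # typical class-1 point
ood = np.array([[50.0, 50.0]])      # far from anything seen during training

print(clf.predict_proba(in_dist))   # high confidence, and justified
print(clf.predict_proba(ood))       # ~[0, 1]: near-certain, yet the model has
                                    # no evidence about this region of input space

Techniques such as calibrated uncertainty estimation and explicit out-of-distribution detection, which fall within the scope of this Research Topic, aim to flag such inputs rather than act on them with unwarranted confidence.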

To cope with such issues, a new topic has emerged: AI Safety. AI Safety is a multidisciplinary domain at the intersection of AI, Software Engineering, Safety Engineering, and Ethics. It is an essential and challenging field that aims to improve the safety of, and provide certifiable assurance for, safety-critical autonomous systems powered by AI. It involves mitigating the risks associated with AI failures, ensuring the robustness and resilience of AI algorithms, enabling human-AI collaboration, and addressing ethical concerns in critical domains.

This Research Topic aims to gather cutting-edge research, insights, and methodologies in the field of AI safety, focusing specifically on safety-critical systems. We invite original contributions in the form of research articles, survey papers, case studies and reviews that explore various aspects of AI safety for safety-critical systems.

The topics of interest include, but are not limited to:
• Risk assessment and management for AI in safety-critical systems
• Verification and validation techniques for AI-driven systems
• Explainability (interpretability) of AI models in safety-critical domains
• Robustness and resilience of AI algorithms and systems
• Human-AI interaction and collaboration in safety-critical settings
• Ethical considerations and responsible AI practices for safety-critical systems
• Regulatory frameworks and standards for AI safety in critical domains
• Case studies and practical applications of AI safety in real-world scenarios
