
ATRACC 2024 : AAAI Fall Symposium: AI Trustworthiness and Risk Assessment for Challenged Contexts


Link: https://sites.google.com/view/aaai-atracc
 
When Nov 7, 2024 - Nov 9, 2024
Where Arlington
Submission Deadline Aug 2, 2024
Notification Due Aug 16, 2024
Final Version Due Aug 30, 2024
Categories    artificial intelligence   ai trustworthiness   risk assessment
 

Call For Papers

Artificial intelligence (AI) has become a transformative technology with revolutionary impact in nearly every domain, from business operations to more challenging contexts such as civil infrastructure, healthcare, and military defense. AI systems built on large language and foundation/multi-modal models (LLFMs) have proven their value across human society, rapidly transforming traditional robotics and computational systems into intelligent systems with emergent, beneficial, and even unanticipated behaviors. However, the rapid embrace of AI in critical systems introduces new classes of error that raise risk and limit trustworthiness, and the design of AI-based critical systems requires that their trustworthiness be demonstrated. Such systems must therefore be assessed across many dimensions, by different parties (researchers, developers, regulators, customers, insurance companies, end-users, etc.) and for different reasons.

Whether we call it AI testing, validation, monitoring, assurance, or auditing, the fundamental goal is the same: to ensure that the AI performs well within its operational design and avoids unanticipated behaviors and unintended consequences. Such assessment begins in the early stages of research, development, analysis, design, and deployment. Trustworthy AI systems, and methods for assessing them, should therefore address full system-level functions as well as individual AI models, and they require systematic design during both training and development, ultimately providing assurance guarantees. At the theoretical and foundational level, such methods must go beyond explainability to deliver uncertainty estimates and formalisms that can bound the limits of the AI; find blind spots and edge cases; and incorporate testing for unintended use cases, such as adversarial testing and red teaming, in order to provide traceability and quantify risk. This level of assurance is critically important in contexts with highly risk-averse mandates, such as healthcare, essential civil systems including power and communications, military defense, and robotics that interact directly with the physical world.

The symposium track aims to create a platform for discussions and explorations that are expected to contribute to innovative solutions for quantitatively trustworthy AI. The track will last two and a half days and will feature keynote and invited talks from accomplished experts in the field of trustworthy AI, panel sessions, presentations of selected papers and student papers, and a poster session. Potential topics of interest include, but are not limited to:

- Assessment of non-functional requirements such as explainability, transparency, accountability, and privacy
- Methods that use data and knowledge to support system reliability requirements, quantify uncertainty, or mitigate over-generalization
- Approaches for verification and validation (V&V) of AI systems and quantitative AI and system performance indicators
- Methods and approaches for enhancing reasoning in LLFMs, e.g., causal reasoning techniques and outcome verification approaches
- Links between performance, trustworthiness, and trust, leveraging AI sciences, systems and software engineering, metrology, and methods from the social sciences and humanities
- Research on and architectures/frameworks for Mixture-Of-Experts (MoE) and Multi-Agent systems with an emphasis on robustness, reliability, and emergent behaviors in risk-averse contexts
- Evaluation of AI system vulnerabilities, risks, and impact, including adversarial approaches (prompt injection, data poisoning, etc.) and red teaming targeting LLFMs or multi-agent behaviors

Important Dates:
- August 2:  Paper submission deadline (submissions via EasyChair)
- August 16: Notification of paper status sent to authors
- August 30: Final accepted paper revisions due
- October 4: Deadline for Registration Refund Requests – Late Registration Rate Begins

Useful Links:
- ATRACC 2024 Paper Submission: https://easychair.org/my/conference?conf=fss24
- ATRACC 2024 Home Page: https://sites.google.com/view/aaai-atracc
- 2024 AAAI Fall Symposium Series: https://aaai.org/conference/fall-symposia/fss24/
