
SAIA 2024 : Symposium on Scaling AI Assessments - Tools, Ecosystems and Business Models


When Sep 30, 2024 - Oct 1, 2024
Where Cologne
Submission Deadline Jul 22, 2024
Notification Due Aug 19, 2024
Final Version Due Sep 9, 2024
Categories    artificial intelligence   AI   computer science   trustworthy ai

Call For Papers

This symposium aims to advance marketable AI assessments and audits for trustworthy AI. Papers and presentations are encouraged both from an operationalization perspective (including governance and business perspectives) and from an ecosystem-and-tools perspective (covering approaches from computer science). Topics include, but are not limited to:

Perspective: Operationalization of market-ready AI assessment
- Standardizing AI Assessments
- Risk and Vulnerability Evaluation
- Implementing Regulatory Requirements
- Business Models Based on AI Assessments

Perspective: Testing tools and implementation methods for trustworthy AI products
- Infrastructure and Automation
- Safeguarding and Assessment Methods
- Systematic Testing

Organization: Fraunhofer IAIS
Organizing Committee contact:

For further information please visit the symposium website:


Trustworthy AI is considered a key prerequisite for Artificial Intelligence (AI) applications. Especially against the background of European AI regulation, AI conformity assessment procedures are of particular importance, both for specific use cases and for general-purpose models. In non-regulated domains, too, the quality of AI systems is a decisive factor, as unintended behavior can lead to serious financial and reputational damage. As a result, there is a great need for AI audits and assessments, and a corresponding market can indeed be observed to be forming. At the same time, there are still technical and legal challenges in conducting the required assessments, and extensive practical experience in evaluating different AI systems is lacking. Overall, the first marketable/commercial AI assessment offerings are only just emerging, and a definitive, distinct procedure for AI quality assurance has not yet been established. Against this background, two needs stand out:

1. AI assessments require further operationalization, both at the level of governance and related processes and at the system/product level. Empirical research that tests and evaluates governance frameworks, assessment criteria, AI quality KPIs, and methodologies in practice for different AI use cases is still pending.

2. Conducting AI assessments in practice requires a testing ecosystem and tool support, as many quality KPIs cannot be computed without it. At the same time, automating such assessments is a prerequisite for making the corresponding business model scale.
