
RAIE 2026 : 4th International Workshop on Responsible AI Engineering


Link: https://conf.researchr.org/home/icse-2026/raie-2026
 
When Apr 12, 2026 - Apr 18, 2026
Where Rio de Janeiro, Brazil
Submission Deadline Oct 20, 2025
Notification Due Nov 24, 2025
Final Version Due Jan 26, 2026
Categories    responsible ai   ai safety   software engineering   ai engineering
 

Call For Papers

The rapid advancement of AI, particularly the release of large language models (LLMs) and their applications, has attracted significant global interest and raised substantial concerns about responsible AI and AI safety. While LLMs are impressive examples of AI models, it is the compound AI systems, which integrate these models with other key components for function and quality/risk control, that are ultimately deployed and have real-world impact. These AI systems, especially autonomous LLM agents and those involving multi-agent interaction, require careful system-level engineering to ensure responsible AI and AI safety.

In recent years, numerous regulations, principles, and guidelines for responsible AI and AI safety have been issued by governments, research organizations, and enterprises. However, they are typically very high-level and do not provide concrete guidance for technologists on how to implement responsible and safe AI. Developing responsible AI systems goes beyond fixing traditional software code “bugs” and providing theoretical guarantees for algorithms. New and improved software/AI engineering approaches are required to ensure that AI systems are trustworthy and safe throughout their entire lifecycle and trusted by those who use and rely on them.

Diversity and inclusion principles in AI are crucial for ensuring that the technology fairly represents and benefits all segments of society, preventing biases that can lead to discrimination and inequality. By incorporating diverse perspectives across the data, processes, systems, and governance of the AI ecosystem, AI systems can be more innovative, ethical, and effective in addressing the needs of diverse and especially under-represented users. This commitment to diversity and inclusion also supports responsible and ethical AI development by fostering transparency, accountability, and trustworthiness, thereby safeguarding against unintended harmful consequences and promoting societal well-being.

Achieving responsible AI engineering—building adequate software engineering tools to support the responsible development of AI systems—requires a comprehensive understanding of human expectations and the utilization context of AI systems. This workshop aims to bring together not only researchers and practitioners in software engineering and AI but also ethicists and experts from the social sciences and regulatory bodies, building a community that will tackle the engineering challenges practitioners face in developing responsible and safe AI systems. Traditional software engineering methods are not sufficient to address the unique challenges posed by advanced AI technologies. This workshop will provide valuable insights into how software engineering can evolve to meet these challenges, focusing on aspects such as requirements engineering, architecture and design, verification and validation, and operational processes like DevOps and AgentOps. By bringing together experts from various fields, the workshop aims to foster interdisciplinary collaboration that will drive the advancement of responsible AI and AI safety engineering practices.

The primary objectives of this workshop are to:

- Share cutting-edge software/AI engineering methods, techniques, tools, and real-world case studies that can help ensure responsible AI and AI safety.
- Facilitate discussions among researchers and practitioners from diverse fields, including software engineering, AI, ethics, social sciences, and regulatory bodies, to address the responsible AI and AI safety engineering challenges.
- Promote the development of new and improved software/AI engineering approaches to ensure AI systems are trustworthy and trusted throughout their lifecycle.

Topics of interest include, but are not limited to:

- Requirements engineering for responsible AI and AI safety
- Responsible-AI-by-design and AI-safety-by-design software architecture
- Verification and validation for responsible AI and AI safety
- DevOps, MLOps, LLMOps, AgentOps for ensuring responsible AI and AI safety
- Development processes for responsible and safe AI systems
- Responsible AI and AI safety evaluation tools and techniques
- Reproducibility and traceability of AI systems
- Trust and trustworthiness of AI systems
- Responsible AI and AI safety governance
- Diversity and inclusion in the responsible AI ecosystem: humans, data, processes/algorithms, systems, governance
- Operationalisation of laws (e.g., EU AI Act) and standards
- Evaluation of AI agent behaviors, reasoning reliability, consequences and risk analysis
- Human-AI interaction and collaboration reliability and user-centric evaluation of AI agents
- Human aspects of responsible AI and AI safety engineering
- Responsible AI and AI safety engineering for next-generation foundation model based AI systems (e.g., LLM-based agents)
- Case studies from high-priority domains (e.g., financial services, scientific discovery, health, environment, energy)

The workshop will be highly interactive, featuring invited keynotes, talks, and paper presentations on different topics in the area of responsible AI engineering.

Three types of contributions will be considered:

- A full research or experience paper of at most 8 pages;
- A short research or experience paper of at most 4 pages;
- An extended abstract of at most 5 pages, which is free of APC charges.

All submissions must be in English and in PDF format. Papers must not exceed the page limits listed above. RAIE’26 employs a single-anonymous review process.
