posted by user: csilvia

RSS-Uncertainty-AI 2026 : [CFP] Uncertainty in the Era of AI (RSS: Data Science & AI)


Link: https://academic.oup.com/rssdat/pages/call-for-papers-uncertainty-in-the-era-of-ai
 
When N/A
Where N/A
Submission Deadline May 31, 2026
Categories: AI, machine learning, data science
 

Call For Papers

Dear colleagues,
The editors of RSS: Data Science and Artificial Intelligence (https://academic.oup.com/rssdat) invite submissions for an upcoming Special Issue: 'Uncertainty in the Era of AI'.
Deadline for submissions: 31 May 2026
The call:
The ability to define, measure, and propagate uncertainty is a key element of robust data science. This topic has been extensively studied in both classical statistics and probabilistic machine learning, and is a core aspect for safety-critical applications in engineering, medicine, public policy, and more.
In the era of large-scale AI systems, questions around uncertainty are once again proving crucial, and remain, in important ways, open and unresolved. Although classical definitions and measures of uncertainty remain relevant, there are specific features of modern AI that pose new or increased challenges. High-dimensional models operating on massive non-i.i.d. datasets strain traditional techniques, generative objectives prioritize plausibility over correctness, and interactive and multi-agent systems compound uncertainty across sequential dialogues and multiple agents.

Yet the challenge is not solely mathematical. As AI integrates into high-stakes professional domains such as healthcare, law, and education, experts face forms of uncertainty that resist reduction to percentages. A legal professional assessing subtle aspects of human actions, or a teacher evaluating the cultural meaning of a literary interpretation, is navigating hermeneutic uncertainty, where the challenge is contested interpretation. When AI systems force these nuanced judgments into probability scores, they risk atrophying the very professional expertise they are meant to augment.
This call for papers advocates for a dual-track evolution. We need new theoretical frameworks capable of rigorous uncertainty definition and quantification within the contexts of large-scale generative, human-interactive, and multi-agent systems. Simultaneously, we need participatory and sociotechnical approaches that allow professional communities to define how uncertainty is communicated, moving toward dynamic, context-aware expressions of doubt that preserve human agency.
We invite submissions that advance statistical theory and that bridge the divide with professional practice. We seek work that expands our notion of uncertainty, and work that treats uncertainty not merely as a technical barrier to be overcome but as a condition that makes professional learning and judgment possible.
We encourage contributions from statisticians, AI researchers, ethicists, and domain experts (e.g., clinicians, legal scholars) addressing the following themes:
New Taxonomies of Uncertainty: Developing new definitions of uncertainty adapted to the landscape of modern AI. These include definitions that move beyond the standard statistical distinctions to encompass ethical, cultural, and contextual uncertainties central to professional judgment.
Epistemic vs. Hermeneutic Uncertainty: Distinguishing between "what we don't know" (epistemic) and "what is open to interpretation" (hermeneutic). How can AI systems signal the latter without falsely quantifying it?
Methodological Innovations: Novel methods for quantifying uncertainty in generative models trained on near-universal datasets, including metrics for semantic uncertainty, and frameworks for tracking reliability across interactive and multi-agent systems.
Visualization & Uncertainty Communication: Moving beyond standard confidence intervals through innovations in visual analytics. We seek designs that help users navigate high-dimensional spaces, signal not only statistical uncertainty but also when outputs are technically sound yet open to interpretation, and link uncertainty to downstream decisions.
Professional Practice & Sense-Making: Participatory frameworks and empirical studies evaluating how uncertainty communication impacts the judgment, accuracy, and agency of human experts in collaborative workflows.
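To make the "metrics for semantic uncertainty" theme above concrete, here is a minimal, hedged sketch of one common idea: sample several generations from a model, cluster them by meaning, and take the entropy of the cluster distribution. Everything here is illustrative; in particular, the exact-match `meaning_of` function is a stand-in for a real semantic-equivalence check (e.g., one based on natural language inference), and the function name is our own.

```python
import math
from collections import Counter


def semantic_entropy(samples, meaning_of=lambda s: s.strip().lower()):
    """Toy semantic-uncertainty estimate.

    Cluster sampled generations by a meaning function, then return the
    entropy (in nats) of the resulting cluster distribution.  Zero means
    the samples all agree in meaning; higher values mean the model's
    answers are semantically spread out.

    `meaning_of` here is a crude placeholder (case/whitespace-insensitive
    exact match); a practical system would substitute a learned
    equivalence judgment.
    """
    clusters = Counter(meaning_of(s) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in clusters.values())


# Samples that agree up to surface form collapse to one cluster:
agree = semantic_entropy(["Paris", "paris ", "PARIS"])      # -> 0.0
# An even split over two meanings gives entropy log(2):
split = semantic_entropy(["Paris", "London"])               # -> ~0.693
```

Note that this captures disagreement among sampled outputs, a proxy for the model's uncertainty about meaning rather than about exact wording; it says nothing about the hermeneutic uncertainty the call distinguishes, which by definition resists this kind of quantification.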
More details can be found at: https://academic.oup.com/rssdat/pages/call-for-papers-uncertainty-in-the-era-of-ai
Silvia Chiappa (chiappa.silvia@gmail.com)
Neil Lawrence (ndl21@cam.ac.uk)
Sach Mukherjee (sach.mukherjee@mrc-bsu.cam.ac.uk)
Editors-in-Chief, RSS: Data Science and Artificial Intelligence

Related Resources

Theme Collection: Sovereign AI and Digit 2026   Call for Papers: Sovereign AI and Digital Sovereignty
Ei/Scopus-CMLDS 2026   2026 3rd International Conference on Computing, Machine Learning and Data Science (CMLDS 2026)
Cyber-AI 2026   The 2nd IEEE 2026 International Conference on Cybersecurity and AI-Based Systems (Scopus)
IEEE-ICECCS 2026   2026 IEEE International Conference on Electronics, Communications and Computer Science (ICECCS 2026)
AI Encyclopedia 2027   Call for Articles in Elsevier's new AI Encyclopedia
CNCIT 2026   2026 5th International Conference on Networks, Communications and Information Technology
Rev-AI 2026   The 2026 International Conference on Revolutionary Artificial Intelligence and Future Applications
AMLDS 2026   IEEE--2026 2nd International Conference on Advanced Machine Learning and Data Science
NGEN-AI 2026   The 2026 International Conference on Next Generation AI Systems | Scopus Indexed
AI in Social Sciences 2026   AI in Social Sciences (working title)