FLAIRS-36 ST XAI, Bias, and Trust 2023 : FLAIRS Special Track on Explainability, Bias, and Trust in Artificial Intelligence
Link: https://sites.google.com/view/flairs-35spectrackxaibiastrust/home
Call For Papers
Call for Papers: FLAIRS-36 Special Track on Explainability, Bias, and Trust
Abstract Due Date: February 6, 2023
Submission Due Date: February 13, 2023
Conference Dates: May 14-17, 2023
Conference Location: Clearwater Beach, Florida
Website: https://sites.google.com/view/flairs-35spectrackxaibiastrust/home
URL: https://www.flairs-36.info/call-for-papers

We are seeking submissions for the Explainability, Bias, and Trust special track at the 36th International FLAIRS Conference (https://www.flairs-36.info/home). This special track focuses on Explainability, Bias, and Trust in Artificial Intelligence systems. The goal of this track is to provide a venue for researchers to disseminate important and novel work in these areas and to bring such research to the diverse AI community that FLAIRS attracts.

As AI continues to flourish and impact an increasingly broad array of industries and everyday activities, it is important to develop systems that users trust. The blackbox nature of many AI systems as well as well-publicized cases of bias in machine learning models undermine users' trust in AI and lead to ethical and legal concerns. Explainable AI and bias detection and mitigation are active and growing areas of research designed to address these challenges.

Extended versions of select papers accepted to this track will be invited for consideration in a special issue of the International Journal on AI Tools (IJAIT).

Papers and contributions are encouraged for any work relating to AI and explainability, bias, or trust. Topics of interest may include (but are in no way limited to):

1. Detection and mitigation of bias in AI
2. Explainability of AI systems
3. Increasing trust in AI systems
4. Evaluating explainability and trust in AI
5. Support technologies useful for research in explainability, bias, and/or trust
6. Data sets of value in research in explainability, bias, and/or trust
7. Case studies of deployed systems involving explainability, bias, and/or trust

Questions regarding the track should be addressed to: Doug Talbert at dtalbert@tntech.edu or William Eberle at weberle@tntech.edu.