
FATED 2022 : Workshop on Fairness, Accountability, and Transparency in Educational Data


When Jul 27, 2022 - Jul 27, 2022
Where Durham, UK
Submission Deadline May 27, 2022
Notification Due Jun 24, 2022
Final Version Due Jul 8, 2022
Categories: fairness, education, machine learning, data science

Call For Papers

Workshop on Fairness, Accountability, and Transparency in Educational Data (FATED 2022)
to be held as part of the 15th International Conference on Educational Data Mining (EDM 2022)

July 27, 2022 - Hybrid (in-person in Durham, UK, and online)

-- Important Dates --

27 May 2022: Submission deadline for all types of papers

24 June 2022: Notification of acceptance

08 July 2022: Camera-ready submission

27 July 2022: Workshop, from approximately 8:30am-5pm GMT [in-person in Durham, England, and online]

All deadlines are 11:59pm, AoE time (Anywhere on Earth).

-- Workshop Aims and Scope --

FATED 2022 will bring an education-specific lens to fair machine learning topics and questions. We are particularly excited to spur discussion around what educational datasets should be used for comparison in the context of bias detection or correction algorithms; how to evaluate fairness and bias in educational tasks, including challenges that arise due to limited demographic information; and algorithmic bias detection and countermeasure design for education-specific tasks.

-- Workshop Topics --

In this workshop, we are interested in collecting contributions in the following areas:

Data Set Collection and Preparation: Shared datasets and benchmarks have been invaluable for progress in ML, but while an increasing number of educational datasets are available, there’s not yet consensus about what educational datasets are best for comparison in the context of bias detection or correction algorithms. Further, educational data can pose unique challenges when examining questions of fairness because student demographic information is highly protected by privacy laws.

Evaluation Protocol and Metric Formulation: There exist a wide variety of metrics and evaluation protocols for quantifying fairness and bias in ML. For particular educational data mining and machine learning tasks, which of these metrics are most appropriate? What are the pros and cons of different evaluation protocols for empirical research on fairness and bias across common types of educational machine learning and data mining tasks?
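To make the kinds of metrics at issue concrete, here is a minimal sketch (not part of the call itself; all data and names are illustrative) of two common group-fairness metrics for a binary predictor: the demographic parity difference and the equal-opportunity (true-positive-rate) difference across demographic groups.

```python
# Illustrative sketch of two group-fairness metrics for a binary classifier.
# All data and function names are hypothetical, not from the workshop.

def positive_rate(preds):
    """Fraction of predictions that are positive (label 1)."""
    return sum(preds) / len(preds)

def true_positive_rate(preds, labels):
    """Fraction of true positives among instances whose label is 1."""
    positives = [(p, y) for p, y in zip(preds, labels) if y == 1]
    return sum(p for p, _ in positives) / len(positives)

def demographic_parity_diff(preds, groups):
    """Max gap in positive prediction rate between demographic groups."""
    by_group = {}
    for p, g in zip(preds, groups):
        by_group.setdefault(g, []).append(p)
    rates = [positive_rate(ps) for ps in by_group.values()]
    return max(rates) - min(rates)

def equal_opportunity_diff(preds, labels, groups):
    """Max gap in true positive rate between demographic groups."""
    by_group = {}
    for p, y, g in zip(preds, labels, groups):
        by_group.setdefault(g, []).append((p, y))
    tprs = [true_positive_rate([p for p, _ in v], [y for _, y in v])
            for v in by_group.values()]
    return max(tprs) - min(tprs)

# Toy "pass prediction" data for students in two groups, A and B.
preds  = [1, 1, 0, 1, 0, 1, 0, 0]
labels = [1, 0, 0, 1, 1, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_diff(preds, groups))        # 0.5 (A: 3/4 vs B: 1/4)
print(equal_opportunity_diff(preds, labels, groups)) # 0.5 (A: 2/2 vs B: 1/2)
```

Even on this toy data the two metrics can disagree in general, which is precisely the kind of trade-off the workshop invites papers to examine for educational tasks.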

Detection and Countermeasure Design: We look forward to providing a venue for researchers to share their work on algorithmic bias detection and correction specifically in education-related contexts. We also invite discussion about what features of the questions that we address in educational machine learning and the datasets that we use pose particular challenges for detecting and/or addressing algorithmic bias.

The workshop will promote connections among researchers working on fair ML who want to engage with research questions related to education or educational data and researchers working on educational machine learning who want to be more attentive to questions of fairness. We hope this workshop provides an opportunity for exchange of ideas and communication among diverse stakeholders, including those working in both industry and academia.

-- Submission Details --

We invite submissions in the following categories:

Dataset Papers (2-4 pages): Short papers in which authors describe datasets (their own or others') that they believe should be widely used by fairness researchers in educational machine learning (EML). Papers should identify which EML questions the dataset is well suited for, explain why the dataset should be widely used by bias and fairness researchers, and provide specific documentation about its contents, including what demographic information it contains, how it was collected, and what procedures were taken to safeguard participants' privacy and rights. Submissions in this track combine elements of a datasheet documenting a dataset (Gebru et al. 2021) and go further by contextualizing the dataset for fairness research in EML/EDM. Papers should include a link to the dataset or information about how to obtain it if it is not publicly posted; datasets must be obtainable by researchers not involved in the original data collection.

Evaluation Protocol Papers (2-4 pages): Short papers on the importance of particular bias and fairness evaluation protocols (including metrics). We welcome papers that draw on published and/or newly presented research to argue for the importance of particular bias and fairness evaluation protocols specifically for EML tasks. Papers in this track may also elucidate a challenge related to fairness and EML, such as addressing issues of fairness given privacy restrictions on sharing student demographic data, or take a position on what is needed to address such issues.

Research Papers (4-8 pages): New or in-progress work related to algorithmic fairness and EML. Papers should be clearly placed with respect to the state of the art, state the contribution of the proposal in the domain of application, and describe the methodology in detail.

Reproducibility Papers (4-8 pages): These papers should replicate prior experiments on influential methods in the literature, preferably (though not necessarily) using the original source code, in order to further generalize and to validate or challenge previous work from a bias or fairness perspective (e.g., reproducing a prior method for success prediction, originally evaluated only on overall accuracy, to investigate how its estimates change within demographic groups).
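The parenthetical example above can be sketched in a few lines: re-score a success-prediction model's overall accuracy within each demographic group. This is a hypothetical illustration, not a prescribed method; the data and names are invented.

```python
# Hypothetical sketch: break a model's overall accuracy down by
# demographic group, as in the reproducibility example above.

from collections import defaultdict

def per_group_accuracy(preds, labels, groups):
    """Return {group: accuracy} for a set of predictions."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for p, y, g in zip(preds, labels, groups):
        total[g] += 1
        correct[g] += int(p == y)
    return {g: correct[g] / total[g] for g in total}

# Invented success-prediction outputs for students in groups A and B.
preds  = [1, 0, 1, 1, 0, 0]
labels = [1, 0, 1, 1, 1, 0]
groups = ["A", "A", "A", "B", "B", "B"]

overall = sum(int(p == y) for p, y in zip(preds, labels)) / len(preds)
print(overall)                                   # ≈ 0.833 overall
print(per_group_accuracy(preds, labels, groups)) # A: 1.0, B: ≈ 0.667
```

A respectable overall accuracy can mask a gap between groups, which is exactly what a reproducibility paper in this track would surface.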

Encore Papers (no page restrictions): This track is for recently published pieces relevant to the workshop. Authors submit an abstract and a link or PDF of the recently published work.

Authors of accepted papers in any track will be invited to present at a poster session, and presenters may be invited to give spotlight talks. Dataset, Evaluation Protocol, and Reproducibility papers will be included in the workshop proceedings. Authors of accepted papers on the Research track will have the option to include abstracts or full papers in the workshop proceedings; in either case, full papers will be linked on the workshop website. Abstracts of Encore papers will be published on the workshop website, and a link to the original publication will be included.

Page limits above do not include references; any number of additional pages with references may be used.

Submitted papers should be formatted using the EDM submission templates available here, and submitted to the FATED 2022 workshop on EasyChair.

-- Attending --

Registration will be managed by the EDM main conference organization. Registration has not yet opened.

-- Workshop Organizers --

Collin Lynch, North Carolina State University
Mirko Marras, University of Cagliari
Mykola Pechenizkiy, Eindhoven University of Technology
Anna Rafferty, Carleton College
Steve Ritter, Carnegie Learning
Vinitra Swamy, EPFL
Renzhe Yu, University of California, Irvine
