
SI Algo Bias & Fairness 2020 : Special Issue on Algorithmic Bias and Fairness in Search and Recommendation - Information Processing & Management (I.F. 3.892)


When N/A
Where N/A
Submission Deadline Nov 30, 2020
Notification Due Jan 15, 2021
Final Version Due Mar 15, 2021
Categories    recommender systems   information retrieval   fairness   bias

Call For Papers

Please accept our apologies if you receive multiple copies of this call.
Please forward it to interested colleagues and students.

*** Call for Papers ***
Special Issue on Algorithmic Bias and Fairness in Search and Recommendation
Information Processing & Management (Elsevier)
Impact Factor: 3.892

*** Important dates ***
- Manuscript submission due: November 30, 2020
- First round decision made: January 15, 2021
- Revised manuscript due: March 15, 2021
- Final decision made: May 15, 2021
- Final paper due: June 15, 2021

*** Aims and Scope ***
Search and recommendation algorithms play a primary role in helping individuals filter the overwhelming number of alternatives our daily life offers. Such automated intelligence is used on a myriad of platforms covering different domains, from e-commerce to education, from health to social media, and so on. Ongoing research is bringing search and recommendation algorithms closer and closer together, with search algorithms being personalized based on users' characteristics and recommendation algorithms being optimized for ranking quality. This convergence makes it possible to identify common challenges and shared priorities, which is essential to tailor these systems to the needs of our society.

Among the aspects that have received special attention in search and recommendation so far, the capability to uncover, characterize, and counteract data and algorithmic biases, while preserving the original level of accuracy, is proving to be prominent and timely. Both classes of algorithms are trained on historical data, which often conveys imbalances and inequalities. Such patterns in the training data may be captured and amplified in the results these algorithms return to users, leading to biased or even unfair decisions. This can happen when an algorithm systematically discriminates against users as individuals or as members of a legally protected class identified by common sensitive attributes.

Given the increasing adoption of systems empowered with search and recommendation capabilities, it is crucial to ensure that their decisions do not lead to biased or even discriminatory outcomes. Controlling the effects of popularity bias to improve the perceived quality of results, supporting consumers and providers with fair rankings and recommendations, and providing transparent results are examples of challenges that require attention. This special issue intends to bring together original research methods and applications that put people first, inspect social and ethical impacts, and strengthen public trust in search and recommendation technologies. The goal is to foster a community-wide dialogue on new research perspectives in this field.

*** Topics ***
We solicit different types of contributions (research papers, surveys, replicability and reproducibility studies, resource papers, systematic review articles) on algorithmic bias in search and recommendation, focused on, but not limited to, the following areas. If in doubt about suitability, please contact the Guest Editors.

Data Set Collection and Preparation:
- Managing imbalances and inequalities within data sets
- Devising collection pipelines that lead to fair and unbiased data sets
- Collecting data sets useful for studying potentially biased and unfair situations
- Designing procedures for creating synthetic data sets for research on bias and fairness
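
As a purely illustrative sketch of the last point, a synthetic interaction log with a controlled group imbalance might be generated as follows. All names, parameters, and the imbalance mechanism here are our own illustrative choices, not a procedure prescribed by this call:

```python
import random

def synthetic_interactions(n_users=100, n_items=50, n_draws=1000,
                           minority_share=0.2, minority_activity=0.5,
                           seed=42):
    """Generate a synthetic user-item interaction log in which a
    'minority' user group is both smaller and less active, so its
    preferences are under-represented in the resulting data set."""
    rng = random.Random(seed)
    # Assign each user a synthetic sensitive-group label.
    groups = {u: ("minority" if rng.random() < minority_share else "majority")
              for u in range(n_users)}
    events = []
    for _ in range(n_draws):
        u = rng.randrange(n_users)
        # Minority users interact less often, creating the imbalance.
        if groups[u] == "minority" and rng.random() > minority_activity:
            continue
        events.append((u, rng.randrange(n_items)))
    return groups, events

groups, events = synthetic_interactions()
frac = sum(1 for u, _ in events if groups[u] == "minority") / len(events)
print(frac)  # well below the 20% minority user share
```

Because the imbalance is injected by known parameters, such a data set gives researchers a ground truth against which bias-detection or mitigation methods can be checked.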

Countermeasure Design and Development:
- Conducting exploratory analyses that uncover biases
- Designing treatments that mitigate biases (e.g., popularity bias mitigation)
- Devising interpretable search and recommendation models
- Providing treatment procedures whose outcomes are easily interpretable
- Balancing inequalities among different groups of users or stakeholders
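
To make the popularity-bias example concrete, one simple family of treatments re-ranks candidates by penalizing highly popular items. The sketch below is a minimal, hypothetical illustration; the linear penalty and the `alpha` trade-off parameter are our own choices, not a method endorsed by this call:

```python
def rerank_with_popularity_penalty(scores, popularity, alpha=0.2):
    """Re-rank candidate items by discounting each relevance score
    with the item's normalized popularity. alpha trades accuracy
    against reduced popularity bias (alpha=0 keeps the original order)."""
    max_pop = max(popularity.values()) or 1
    adjusted = {item: s - alpha * (popularity[item] / max_pop)
                for item, s in scores.items()}
    return sorted(adjusted, key=adjusted.get, reverse=True)

scores = {"a": 0.9, "b": 0.85, "c": 0.6}        # model relevance scores
popularity = {"a": 1000, "b": 10, "c": 5}        # historical interaction counts
print(rerank_with_popularity_penalty(scores, popularity))  # ['b', 'a', 'c']
```

Here the very popular item "a" loses its top position to the nearly-as-relevant niche item "b", while the clearly less relevant "c" stays last, illustrating the accuracy/bias trade-off such treatments must balance.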

Evaluation Protocol and Metric Formulation:
- Conducting quantitative experimental studies on bias and unfairness
- Defining objective metrics that consider fairness and/or bias
- Formulating bias-aware protocols to evaluate existing algorithms
- Evaluating existing strategies in unexplored domains
- Comparative studies of existing evaluation protocols and strategies
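
As one hedged example of an objective metric in this space, exposure received by different item groups in a ranking can be compared with a position discount. The DCG-style log discount below is a common choice in the fairness literature, but the function and its names are our own illustrative sketch:

```python
import math

def group_exposure(ranking, item_group):
    """Sum rank-discounted exposure per item group: position k
    (0-based) contributes 1/log2(k+2), so top slots count most.
    A large gap between groups signals unequal exposure."""
    exposure = {}
    for k, item in enumerate(ranking):
        g = item_group[item]
        exposure[g] = exposure.get(g, 0.0) + 1.0 / math.log2(k + 2)
    return exposure

ranking = ["a", "b", "c", "d"]
item_group = {"a": "popular", "b": "popular", "c": "niche", "d": "niche"}
exp = group_exposure(ranking, item_group)
print(exp["popular"] - exp["niche"])  # positive: popular items get more exposure
```

Metrics of this kind can then be reported alongside accuracy measures to quantify how much exposure a mitigation strategy actually redistributes.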

Case Study Exploration:
- E-commerce platforms
- Educational environments
- Entertainment websites
- Healthcare systems
- Social media
- News platforms
- Digital libraries
- Job portals
- Dating platforms

*** Paper Submission and Review ***

Submitted papers must conform to the author guidelines available on the IPM journal website. Authors are required to submit their manuscripts online through the IPM submission system, selecting article type “SI: Algo Bias & Fairness”.

Submissions must represent original material that has not appeared elsewhere and is not under review for another refereed publication. If any portion of your submission has previously appeared in, or will appear in, a conference/workshop proceeding, you should state this at the time of submission, make sure that the submission references the conference publication, and supply a copy of the conference version(s). Please also provide a brief description of the differences between the submitted manuscript and the preliminary version(s). During the submission process, select the appropriate designation for each file to help the guest editors and reviewers differentiate between them.

Submissions will be evaluated by at least three independent reviewers on the basis of relevance for the special issue, novelty, clarity, originality, significance of contribution, technical quality, and quality of presentation. The editors reserve the right to reject without review any submissions deemed to be outside the scope of the special issue. Authors are welcome to contact the special issue editors with questions about scope before preparing a submission.

*** Guest Editors ***

Ludovico Boratto
Data Science and Big Data Analytics Research Group
Eurecat - Centre Tecnològic de Catalunya, Barcelona, Spain

Stefano Faralli
Unitelma Sapienza University of Rome, Rome, Italy

Mirko Marras
Department of Mathematics and Computer Science
University of Cagliari, Cagliari, Italy

Giovanni Stilo
Department of Information Engineering, Computer Science and Mathematics
University of L’Aquila, L’Aquila, Italy
