posted by organizer: claudiopomo

EvalRS 2023: Well-Rounded Recommender Systems For Real-World Deployments @ ACM SIGKDD


When Aug 7, 2023 - Aug 7, 2023
Where Long Beach, CA, USA
Submission Deadline May 23, 2023
Notification Due Jun 23, 2023
Final Version Due Jul 15, 2023
Categories    recommender systems   evaluation   frameworks for RS

Call For Papers


EvalRS 2023: Well-Rounded Recommender Systems For Real-World Deployments will be held
in conjunction with the 29th ACM SIGKDD Conference on Knowledge Discovery and
Data Mining (ACM SIGKDD 2023), onsite in Long Beach (CA), USA, 6-10 August 2023.

Full details are available online:

Submission deadline: 23 May 2023 AoE (extended)
Notification: 23 June 2023
Camera-ready: 15 July 2023
Workshop date: August 7th

Research papers and a 4-hour hackathon on RecSys evaluation.

The CFP is open (see below); the full program and finalized logistics will soon be announced on the official website:

Thanks to Mozilla AI's support, we will award monetary prizes for:
best paper
best student paper
best hackathon project

EvalRS aims to foster closer partnerships between the academic and industrial sectors regarding the well-rounded evaluation of recommender systems (RS).

The traditional approach to RS evaluation has been centered on accuracy metrics. However, EvalRS seeks to expand the scope of evaluation techniques beyond just accuracy, to encompass other vital aspects such as fairness, interpretability, and robustness. By bringing together experts from industry, academia, and government, EvalRS creates a forum for discussion and collaboration on the latest trends and challenges across a wide
range of domains.

EvalRS 2023 follows in the footsteps of the first edition, EvalRS 2022, which featured more than 150 participants and was conducted entirely in the open, with artifacts such as datasets, metrics, and evaluation code released back to the community. A review was published in Nature Machine Intelligence, emphasizing the first-of-its-kind nature of the workshop, in which theoretical considerations became practical contributions, as participants were asked to “live and breathe” the problem of evaluation through a data challenge.

We believe that the well-rounded evaluation of RS is, by nature, a multi-faceted and multi-disciplinary endeavor, and that the field as a whole has often been held back by the false dichotomy of
quantitative-and-scalable vs. qualitative-and-manual. The introduction of the hackathon promises to be an additional element of differentiation, making EvalRS 2023 a chance to present cutting-edge work on recommender systems, network with like-minded peers, learn by doing, and, why not, win a prize!

Topics of interest include, but are not limited to:
- Online vs offline evaluation - e.g. making offline evaluation more trustworthy and unbiased;
- Tools and frameworks for the evaluation of RS;
- Empirical studies on the evaluation of RS;
- Reports from real-world deployments - failures, successes, and surprises;
- New metrics and methodologies for evaluation, both quantitative and qualitative;
- Multi-dimensional evaluation, combining multiple recommendation quality factors;
- Multi-disciplinary investigation on ethical questions connected to the deployment and use of RS.

For EvalRS 2023, we encourage the submission of original contributions along our main topics. Submitted papers will be evaluated (single-blind) according to their originality, technical content, style, clarity, and relevance to the workshop. Papers must be original work and may not be under submission to another venue at the time of review.

Accepted papers will appear in the workshop proceedings (as with EvalRS 2022, we plan to use CEUR for the proceedings).

- Long research/position papers (8 pages, excl. references) or short research/position papers (4 pages, excl. references), presenting work in
progress, lessons learnt, positions, or emerging and future research issues and
directions on recommender systems evaluation.

- Extended abstracts (2 pages, excl. references), describing ongoing
projects or presenting already published results in the area.

Submissions must be in English, in PDF format, using the CEUR-WS two-column conference format available at:
or at: if an Overleaf template is preferred.

All papers will undergo a peer review process by at least two expert reviewers to ensure a high standard of quality. Referees will consider originality, significance, technical soundness, clarity of exposition, and relevance to the workshop’s topics.

Research papers should be submitted electronically as a single PDF file through the CMT submission system at the following link:

We plan to award monetary prizes to students and participants for outstanding paper contributions.

Organizers:

Federico Bianchi - Stanford
Patrick John Chia - Coveo
Ciro Greco - Bauplan
Gabriel Moreira - NVIDIA
Claudio Pomo - Politecnico di Bari
Davide Eynard - Mozilla AI
Fahd Husain - Mozilla AI
Jacopo Tagliabue - NYU, Bauplan

Related Resources

HealthRecSys 2024   The 6th Workshop on Health Recommender Systems co-located with ACM RecSys 2024
Learning 2024   Thirty-First International Conference on Learning
NLE Special Issue 2024   Natural Language Engineering- Special issue on NLP Approaches for Computational Analysis of Social Media Texts for Online Well-being and Social Order
RSsCI 2024   FLINS 2024 Special Session on Recommender systems supported by computational intelligence: emerging topics and applications
SURE 2024   Workshop on Strategic and Utility-aware REcommendations (@RecSys2024)
PCDS 2024   The 1st International Symposium on Parallel Computing and Distributed Systems
FAccTRec 2024   7th FAccTRec Workshop on Responsible Recommendation
ASPLOS 2025   The ACM International Conference on Architectural Support for Programming Languages and Operating Systems
DDECS 2024   27th International Symposium on Design and Diagnostics of Electronic Circuits and Systems