CLEF: Cross-Language Evaluation Forum



Event | When | Where | Deadline
CLEF 2020, 11th Conference and Labs of the Evaluation Forum | Sep 22-25, 2020 | Thessaloniki, Greece | Apr 28, 2020
CLEF 2019, Conference and Labs of the Evaluation Forum | Sep 9-12, 2019 | Lugano, Switzerland | May 10, 2019 (May 3, 2019)
CLEF 2017, Conference and Labs of the Evaluation Forum: Information Access Evaluation meets Multilinguality, Multimodality and Interaction | Sep 11-14, 2017 | Dublin, Ireland | Apr 28, 2017
CLEF 2016, Conference and Labs of the Evaluation Forum: Information Access Evaluation meets Multilinguality, Multimodality and Interaction | Sep 5-8, 2016 | Évora, Portugal | Apr 8, 2016
CLEF 2015, Conference and Labs of the Evaluation Forum | Sep 8-11, 2015 | Toulouse, France | Apr 12, 2015
CLEF 2014, Conference and Labs of the Evaluation Forum | Sep 15-18, 2014 | Sheffield, UK | Apr 28, 2014
CLEF 2013, Conference and Labs of the Evaluation Forum | Sep 23-26, 2013 | Valencia, Spain | Apr 28, 2013
CLEF 2012, Information Access Evaluation meets Multilinguality, Multimodality, and Visual Analytics | Sep 17-20, 2012 | Rome, Italy | Apr 30, 2012
CLEF 2011, Conference on Multilingual and Multimodal Information Access Evaluation | Sep 19-22, 2011 | Amsterdam, The Netherlands | May 1, 2011

Present CFP: 2020

With apologies for cross-posting.

CLEF 2020

22-25 September 2020, Thessaloniki, Greece
First call for papers

Important Dates (Time zone: Anywhere on Earth)
• Submission of All Papers: 28 April 2020
• Notification of Acceptance: 7 June 2020
• Camera Ready Copy due: 21 June 2020
• Conference: 22-25 September 2020

The CLEF Conference addresses all aspects of information access in any modality and language. The conference includes presentations of research papers and a series of workshops presenting the results of lab-based comparative evaluation benchmarks. CLEF 2020 in Thessaloniki is the 11th edition of the CLEF Conference series and the 21st year of the CLEF initiative as a forum for information retrieval (IR) evaluation. The conference has a clear focus on experimental IR as carried out within evaluation forums (e.g., CLEF Labs, TREC, NTCIR, FIRE, MediaEval, ROMIP, SemEval, and TAC), with special attention to the challenges of multimodality, multilinguality, and interactive search, also considering specific classes of users such as children, students, and impaired users in different tasks (e.g., academic, professional, or everyday-life). We invite paper submissions on significant new insights demonstrated on IR test collections, on the analysis of IR test collections and evaluation measures, and on concrete proposals to push the boundaries of the Cranfield-style evaluation paradigm.

All submissions to the CLEF main conference will be reviewed on the basis of relevance, originality, importance, and clarity. CLEF welcomes papers that describe rigorous hypothesis testing regardless of whether the results are positive or negative. CLEF also welcomes analyses of past runs, results, and data, as well as new data collections. Methods are expected to be written up so that they are reproducible by others, and the logic of the research design should be clearly described in the paper. The conference proceedings will be published in the Springer Lecture Notes in Computer Science (LNCS).

Relevant topics for the CLEF 2020 Conference include but are not limited to:
• Information Access in any language or modality: information retrieval, image retrieval, question answering, search interfaces and design, infrastructures, etc.
• Analytics for Information Retrieval: theoretical and practical results in the analytics field that are specifically targeted for information access data analysis, data enrichment, etc.
• User studies either based on lab studies or crowdsourcing.
• In-depth analysis of past results/runs, both statistical and fine-grained.
• Evaluation initiatives: conclusions, lessons learned, impact, and projection of any evaluation initiative after completing its cycle.
• Evaluation: methodologies, metrics, statistical and analytical tools, component based, user groups and use cases, ground-truth creation, impact of multilingual/multicultural/multimodal differences, etc.
• Technology transfer: economic impact/sustainability of information access approaches, deployment and exploitation of systems, use cases, etc.
• Interactive Information Retrieval evaluation: the interactive evaluation of information retrieval systems using user-centered methods, evaluation of novel search interfaces, novel interactive evaluation methods, simulation of interaction, etc.
• Specific application domains: Information access and its evaluation in application domains such as cultural heritage, digital libraries, social media, expert search, health information, legal documents, patents, news, books, plants, etc.
• New data collections: presentation of new data collections with potentially high impact on future research, specific collections from companies or labs, multilingual collections.
• Work on data from rare languages; collaborative and social data.

Authors are invited to electronically submit original papers, which have not been published and are not under consideration elsewhere, using the LNCS proceedings format.
Two types of papers are solicited:
• Long papers: 12 pages max (including references), intended to report complete research work.
• Short papers: 6 pages max (including references): position papers, new evaluation proposals, developments and applications, etc.

Papers will be peer-reviewed by three members of the program committee. Selection will be based on originality, clarity, and technical quality. Papers should be submitted in PDF format to the following address:

Conference Chairs:
• Evangelos Kanoulas, Univ. of Amsterdam, the Netherlands
• Theodora Tsikrika, Information Technologies Institute, CERTH, Thessaloniki, Greece
• Stefanos Vrochidis, Information Technologies Institute, CERTH, Thessaloniki, Greece
• Avi Arampatzis, Democritus University of Thrace, Greece

Program Chairs:
• Hideo Joho, University of Tsukuba, Japan
• Christina Lioma, University of Copenhagen, Denmark

Lab Chairs:
• Aurélie Névéol, LIMSI, CNRS, France
• Carsten Eickhoff, Brown University, USA

Lab Mentorship Chair:
• Lorraine Goeuriot, Université Grenoble Alpes, France

Proceedings Chairs:
• Linda Cappellato, University of Padua, Italy
• Nicola Ferro, University of Padua, Italy

