
SIGIR eCom Data Challenge 2019: eBay Data Challenge - High Accuracy Recall Task


Link: https://sigir-ecom.github.io/data-task.html
 
When Jul 25, 2019 - Jul 25, 2019
Where Paris, France
Submission Deadline TBD
Categories    ecommerce ir   information retrieval   data challenge   product search
 

Call For Papers

Call For Participation:

The 2019 SIGIR Workshop on eCommerce is hosting the High Accuracy Recall Task Data Challenge. The data is provided by eBay Search. SIGIR eCom is a full-day workshop taking place on Thursday, July 25, 2019, in conjunction with SIGIR 2019 in Paris, France. Challenge participants will have the opportunity to present their work at the workshop.

Challenge website: https://sigir-ecom.github.io/data-task.html

Important Dates:
Data Challenge opens: May 17, 2019
Final leaderboard: July 18, 2019
SIGIR eCom full-day workshop: July 25, 2019

Task Description:

This challenge targets a common problem in eCommerce search: identifying the items to show when using non-relevance sorts. Users of eCommerce search applications often sort by dimensions other than relevance, such as popularity, review score, price, distance, or recency. This is a notable difference from traditional information-oriented search, including web search, where documents are surfaced in relevance order.

Relevance ordering obviates the need for explicit relevant-or-not decisions on individual documents, and many well-studied search methodologies take advantage of this. Non-relevance sort orders are less well studied, but they raise a number of interesting research topics: evaluation metrics, ranking formulas, performance optimization, user experience, and more. These topics are discussed in the High Accuracy Recall Task paper, published at the SIGIR 2018 Workshop on eCommerce.

This search challenge focuses on the most basic aspect of this problem: identifying the items to include in the recall set when using non-relevance sorts. This is already a difficult problem, and it includes typical search challenges such as ambiguity and multiple query intents. A naive baseline is sketched below.
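To make the recall-set decision concrete, a simple baseline might mark a query-item pair as relevant when most query tokens appear in the item title. This is a hypothetical illustration only, not part of the challenge; the field names, tokenization, and 0.5 threshold are assumptions, and such a heuristic would struggle with exactly the ambiguity and multi-intent queries noted above.

def relevant(query: str, item_title: str, min_overlap: float = 0.5) -> bool:
    """Naive baseline: relevant if enough query tokens appear in the item title."""
    q_tokens = set(query.lower().split())
    t_tokens = set(item_title.lower().split())
    if not q_tokens:
        return False
    # Fraction of query tokens covered by the title.
    return len(q_tokens & t_tokens) / len(q_tokens) >= min_overlap

# Example: relevant("iphone 8 case", "Apple iPhone 8 Silicone Case - Black") -> True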

Participation and Data:

The data challenge is open to everyone.

The challenge data consists of a set of popular search queries and a sizable set of candidate documents. Challenge participants make a boolean relevant-or-not decision for each query-document pair. Human judgments are used to create labeled training and evaluation data for a subset of the query-document pairs. Submissions will be evaluated with the traditional F1 metric, which combines recall and precision.
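For reference, F1 over boolean relevance decisions is the harmonic mean of precision and recall. The sketch below is a minimal illustration, not the official evaluation script; how decisions are aggregated (per query or pooled) is an assumption, so consult the challenge website for the authoritative definition.

def f1_score(y_true: list[int], y_pred: list[int]) -> float:
    """F1 over boolean relevance labels: 1 = relevant, 0 = not relevant."""
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    if tp == 0:
        return 0.0
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    # Harmonic mean of precision and recall.
    return 2 * precision * recall / (precision + recall)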

Details about evaluation metrics and other aspects of the task can be found at the website: https://sigir-ecom.github.io/data-task.html

Related Resources

SIGIR 2025   The 48th International ACM SIGIR Conference on Research and Development in Information Retrieval
NLPA 2025   6th International Conference on Natural Language Processing and Applications
ACM SAC 2025   40th ACM/SIGAPP Symposium On Applied Computing
CRBL 2025   5th International Conference on Cryptography and Blockchain
CoUDP 2025   2025 International Conference on Urban Design and Planning (CoUDP 2025)
SPM 2025   12th International Conference on Signal, Image Processing and Multimedia
Ei/Scopus-IPCML 2025   2025 International Conference on Image Processing, Communications and Machine Learning (IPCML 2025)
NCWMC 2025   10th International Conference on Networks, Communications, Wireless and Mobile Computing
BDML 2025   2025 8th International Conference on Big Data and Machine Learning (BDML 2025)--ESCI
BioCreative9@IJCAI-2025   BioCreative IX Challenge and Workshop@IJCAI-2025: Large Language Models for Clinical and Biomedical NLP