
SIGIR Ecom Data Challenge 2019 : eBay Data Challenge- High Accuracy Recall Task


Link: https://sigir-ecom.github.io/data-task.html
 
When Jul 25, 2019 - Jul 25, 2019
Where Paris, France
Submission Deadline TBD
Categories    ecommerce ir   information retrieval   data challenge   product search
 


Call For Participation:

The 2019 SIGIR Workshop on eCommerce is hosting the High Accuracy Recall Task Data Challenge as part of the workshop. The data is provided by eBay search. SIGIR eCom is a full-day workshop taking place on Thursday, July 25, 2019 in conjunction with SIGIR 2019 in Paris, France. Challenge participants will have the opportunity to present their work at the workshop.

Challenge website: https://sigir-ecom.github.io/data-task.html

Important Dates:
Data Challenge opens: May 17, 2019
Final Leaderboard: July 18, 2019
SIGIR eCom Full-day Workshop: July 25, 2019

Task Description:

This challenge targets a common problem in eCommerce search: identifying the items to show when using non-relevance sorts. Users of eCommerce search applications often sort by dimensions other than relevance, such as popularity, review score, price, distance, or recency. This is a notable difference from traditional information-oriented search, including web search, where documents are surfaced in relevance order.

Relevance ordering obviates the need for explicit relevant-or-not decisions on individual documents, and many well-studied search methodologies take advantage of this. Non-relevance sort orders are less well studied, but they raise a number of interesting research topics, including evaluation metrics, ranking formulas, performance optimization, and user experience. These topics are discussed in the High Accuracy Recall Task paper, published at the SIGIR 2018 Workshop on eCommerce.

This search challenge focuses on the most basic aspect of this problem: identifying the items to include in the recall set when using non-relevance sorts. This is already a difficult problem, and it includes typical search challenges such as ambiguity and multiple query intents.
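As a rough illustration of the setting, the sketch below makes a boolean relevant-or-not decision per query-item pair to form a recall set, then orders that set by a non-relevance dimension (price). The field names and the is_relevant() predicate are hypothetical placeholders, not part of the official challenge data or any provided baseline.

```python
# Illustrative sketch only: item fields and is_relevant() are hypothetical,
# not part of the official challenge data or evaluation.

def is_relevant(query: str, item: dict) -> bool:
    # Placeholder boolean relevance decision; a participant's model would
    # score the query-item pair here instead of substring matching.
    return query.lower() in item["title"].lower()

def recall_set_sorted_by_price(query: str, candidates: list[dict]) -> list[dict]:
    # 1) Decide relevant-or-not for every candidate document (the recall set).
    recall_set = [item for item in candidates if is_relevant(query, item)]
    # 2) Order the recall set by a non-relevance dimension, e.g. ascending price.
    return sorted(recall_set, key=lambda item: item["price"])

items = [
    {"title": "USB-C charging cable 2m", "price": 7.99},
    {"title": "USB-C wall charger 30W", "price": 19.99},
    {"title": "HDMI cable 1m", "price": 5.49},
]
print(recall_set_sorted_by_price("usb-c", items))
```

The point of the sketch is the separation of concerns: the recall decision is boolean and independent of the final sort key, which is what this task asks participants to get right.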

Participation and Data:

The data challenge is open to everyone.

The challenge data consists of a set of popular search queries and a sizable set of candidate documents. Challenge participants make a boolean relevant-or-not decision for each query-document pair. Human judgments are used to create labeled training and evaluation data for a subset of the query-document pairs. Evaluation of submissions will be based on the traditional F1 metric, which combines recall and precision.
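For reference, F1 is the harmonic mean of precision and recall. The snippet below is a minimal sketch of how it could be computed for a single query's predicted recall set; the set-based framing is illustrative only, and the official evaluation details are those on the challenge website.

```python
def f1_score(predicted: set, relevant: set) -> float:
    # Precision: fraction of predicted items that are truly relevant.
    # Recall: fraction of relevant items that were predicted.
    tp = len(predicted & relevant)
    if tp == 0:
        return 0.0
    precision = tp / len(predicted)
    recall = tp / len(relevant)
    return 2 * precision * recall / (precision + recall)

# Example: 3 of 4 predicted items are relevant, out of 5 relevant items overall.
print(f1_score({"a", "b", "c", "d"}, {"a", "b", "c", "e", "f"}))  # ~0.667
```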

Details about evaluation metrics and other aspects of the task can be found at the website: https://sigir-ecom.github.io/data-task.html

Related Resources

SIGIR 2024   The 47th International ACM SIGIR Conference on Research and Development in Information Retrieval
ECNLPIR 2024   2024 European Conference on Natural Language Processing and Information Retrieval (ECNLPIR 2024)
DSIT 2024   2024 7th International Conference on Data Science and Information Technology (DSIT 2024)
AI & FL 2024   12th International Conference of Artificial Intelligence and Fuzzy Logic
HiPEAC SC 2024   HiPEAC Reproducibility Student Challenge
MLNLP 2024   5th International Conference on Machine Learning Techniques and NLP
ICBICC 2024   2024 International Conference on Big Data, IoT, and Cloud Computing (ICBICC 2024)
IJCTCM 2024   International Journal of Control Theory and Computer Modelling
BDCAT 2024   IEEE/ACM Int’l Conf. on Big Data Computing, Applications, and Technologies
ICONDATA 2024   6th International Conference on Data Science and Applications