
RevOpiD 2018 : Opinion Mining, Summarization and Diversification


Link: https://sites.google.com/view/revopid-2018/home
 
When: Jul 9, 2018 - Jul 12, 2018
Where: Baltimore, Maryland, USA
Submission Deadline: Apr 10, 2018
Notification Due: May 8, 2018
Final Version Due: May 19, 2018
Categories: information retrieval, opinion mining, summarization, natural language processing
 

Call For Papers

ACM Hypertext-2018 Workshop on Opinion Mining, Summarization and Diversification

Call for Papers and Participation in the Shared Task
Website: https://sites.google.com/view/revopid-2018
Contact email: aksingh.cse@iitbhu.ac.in

Submission Deadline: April 10, 2018

This workshop aims at uncovering diverse perspectives on defining opinions. How can opinions be better summarized on online forums, in web search results, or elsewhere? What relationships can be mapped between exchanges of opinions on the web? We invite submissions on all such relatively unexplored dynamics of opinion mining and modeling.

The workshop aims to cover the following themes, around which we invite submissions in the form of original work and progress reports:

* Review Opinion Diversification
* Opinion Modeling techniques
* Text and Sentiment Summarization
* Opinion summarization in ranking
* Exchange of opinions as network graphs
* Joint Topic Sentiment Modeling
* Phrase Embeddings
* Sentiment Normalization on a relative scale
* Paraphrase detection in opinionated text
* Factors affecting likeability of online reviews
* Fake review detection
* Sarcasm detection in online reviews
* Bias propagation on online forums
* Evaluation of opinion diversity
* Evaluation of representativeness and diversity in ranking
* Knowledge Representation methods for opinions


Shared Task

As part of the workshop, we will also be hosting a shared task on Review Opinion Diversification. The shared task aims to identify opinions from online product reviews. By identification of opinions, we do not just mean string matching against a predefined list: a system is rewarded equally whether it recognizes the opinion as ‘this product is cost-effective’, ‘this product is inexpensive’, or ‘this product is worth the money.’ We have an annotated dataset of 80+ products, with more than 10,000 reviews in total, each review labelled with its constituent opinions in the form of one opinion matrix per product.


Subtask A (Usefulness Ranking)

A supervised task to predict the helpfulness rating of product reviews from the review text. For a review that 3 users rated as helpful and 2 users rated as not helpful, the helpfulness rating will be 3/5.
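
For illustration only, a minimal sketch of how such a helpfulness target can be derived from vote counts, matching the 3-of-5 example above (the field names helpful_votes and total_votes are assumptions, not the official data schema):

def helpfulness_rating(helpful_votes: int, total_votes: int) -> float:
    """Fraction of users who found the review helpful, e.g. 3/5 = 0.6."""
    if total_votes == 0:
        return 0.0  # assumed convention for unvoted reviews; the task rules may differ
    return helpful_votes / total_votes

assert helpfulness_rating(3, 5) == 0.6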

Subtask B (Representativeness Ranking)

Subtask B judges a system on its ability to tell whether a given review R1 contains a given opinion O1 or not. While R1 can be easily identified by its Reviewer ID, opinions are not labeled with words. Instead, they are identified by the other reviews that they appear in. Therefore, we ask the participants to provide an opinion matrix as output, which we will evaluate using several verified metrics.
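
As a rough illustration (the official submission format is specified on the workshop website), an opinion matrix for one product can be pictured as a binary reviews-by-opinions table; the reviewer IDs and opinion count below are made up:

import numpy as np

reviewer_ids = ["A1", "A2", "A3"]   # reviews of one product, keyed by Reviewer ID
num_opinions = 4                    # opinions discovered for that product

# matrix[i, j] = 1 if review i contains opinion j, else 0
matrix = np.zeros((len(reviewer_ids), num_opinions), dtype=int)
matrix[0, 0] = 1   # review A1 expresses opinion 0
matrix[1, 0] = 1   # opinion 0 is identified by also appearing in review A2
matrix[2, 2] = 1   # review A3 expresses opinion 2

print(matrix)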

Subtask C (Exhaustive Coverage Ranking)

This subtask aims at producing, for each product, the top-k reviews from a set of reviews such that the selected top-k reviews act as a summary of all the opinions expressed in the review set.
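
For illustration, a minimal greedy baseline sketch of this coverage objective, assuming each review has already been mapped to the set of opinion IDs it expresses (the names reviews and k are illustrative, not part of the official task kit):

def top_k_coverage(reviews: dict, k: int) -> list:
    """Greedily pick k reviews that together cover as many opinions as possible.

    reviews: mapping from reviewer ID to the set of opinion IDs in that review.
    Returns the selected reviewer IDs in the order they were picked.
    """
    selected, covered = [], set()
    candidates = dict(reviews)
    for _ in range(min(k, len(candidates))):
        # Pick the review that adds the most not-yet-covered opinions.
        best_id = max(candidates, key=lambda rid: len(candidates[rid] - covered))
        if not candidates[best_id] - covered:
            break  # no remaining review adds new opinions
        selected.append(best_id)
        covered |= candidates.pop(best_id)
    return selected

# Toy example: with k = 2, reviews R1 and R3 together cover opinions 1-5.
demo = {"R1": {1, 2, 3}, "R2": {2, 3}, "R3": {4, 5}, "R4": {1, 5}}
print(top_k_coverage(demo, k=2))   # ['R1', 'R3']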

Data and Resources

The training, development and test data have been extracted and annotated from the Amazon SNAP Review Dataset and will be made available after registration.

Invitation

We invite participation from all researchers and practitioners. As is usual in shared tasks, the organizers rely on the honesty of participants who may have prior knowledge of part of the data that will eventually be used for evaluation not to use such knowledge unfairly. The only exception is the organizing team, whose members cannot submit a system. The organizing chair will serve as the authority for resolving any disputes concerning ethical issues or the completeness of system descriptions.

Timeline

Research Papers

Paper Submission Deadline: April 10, 2018
Notification of Acceptance: May 8, 2018
Camera-Ready Deadline: May 19, 2018
Conference Dates: July 9-12, 2018

Shared Task

Registration open: January 26, 2018
Release of Training Data: January 28, 2018
Dryrun: Release of Development Set: February 5, 2018
Dryrun: Submission on Development Set: February 20, 2018
Dryrun: Release of Scores: February 24, 2018
Registration Ends: March 8, 2018
Release of Test Set: March 10, 2018
Submission of Systems: March 17, 2018
System Results: March 25, 2018
System Description Paper Due: April 10, 2018
Notification of Acceptance: May 8, 2018
Camera-Ready Deadline: May 19, 2018
Conference Dates: July 9-12, 2018

See https://sites.google.com/view/revopid-2018 for more information.
