
SENSE 2017 : The first workshop on Sense, Concept and Entity Representations and their Applications, co-located with EACL 2017


When Apr 3, 2017 - Apr 4, 2017
Where Valencia, Spain
Submission Deadline Jan 16, 2017
Notification Due Feb 11, 2017
Final Version Due Feb 21, 2017
Categories    NLP

Call For Papers

-- apologies for cross-posting --

SENSE: The first workshop on Sense, Concept and Entity Representations and their Applications, co-located with EACL 2017 in Valencia, Spain

For more information please visit:

Important dates

Jan 16, 2017: Workshop paper due date
Feb 11, 2017: Notification of acceptance
Feb 21, 2017: Camera-ready papers due
April 3-4, 2017: Workshop dates (one day workshop; exact date to be announced)


Word embeddings have proven beneficial for a wide range of NLP applications, mainly for their generalisation power. However, word embeddings (or any other word vector representation) suffer from a major limitation: they conflate the different meanings of a word into a single representation and are consequently unable to accurately model the semantics of individual word senses. To deal with this issue, a line of research has tried to learn distinct representations for the individual meanings of words (i.e., word senses).
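The meaning conflation deficiency can be illustrated with a toy sketch (not part of the call; the word "bank", its two hypothetical sense vectors, and the neighbour words are all illustrative assumptions): a single word vector sits roughly between its senses, so it matches neither sense's neighbourhood as well as a dedicated sense vector would.

```python
import numpy as np

def cos(a, b):
    """Cosine similarity between two vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical 2-d sense vectors for the two meanings of "bank"
# (purely illustrative, not learned from data).
bank_finance = np.array([1.0, 0.0])   # financial-institution sense
bank_river   = np.array([0.0, 1.0])   # river-side sense
money = np.array([0.95, 0.05])        # neighbour of the financial sense
shore = np.array([0.05, 0.95])        # neighbour of the river sense

# A single word vector conflates both senses into one point,
# roughly the average of the two sense vectors.
bank_word = (bank_finance + bank_river) / 2

# Each sense vector is close to its own neighbourhood, while the
# conflated word vector is only moderately similar to either one.
sharp   = cos(bank_finance, money)   # high similarity
blurred = cos(bank_word, money)      # noticeably lower
```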

Existing sense representation techniques can be divided into two main categories: unsupervised and knowledge-based. Unsupervised techniques usually cluster the contexts in which a given word appears and obtain a representation for each cluster. Knowledge-based approaches, on the other hand, exploit lexical resources such as WordNet, OntoNotes, Wikipedia, BabelNet, or Freebase, and try to represent individual word senses as defined in these sense inventories. These works have already shown the potential of sense vector representations to overcome the meaning conflation deficiency of word representations. However, evaluations have mainly been carried out on semantic similarity benchmarks. Even though sense, concept and entity representations have already been used for NLP tasks such as text classification and knowledge base construction/completion, their true potential in these and other downstream tasks remains promising but unclear.
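The unsupervised strategy described above can be sketched in a few lines (a toy illustration, not from the call: the context vectors for occurrences of "bank" are made up, and the minimal k-means below stands in for whatever clustering a real system would use). Each cluster centroid then serves as one induced sense vector.

```python
import numpy as np

# Hypothetical context vectors, one per occurrence of the ambiguous
# word "bank" (in practice these would come from a real corpus).
contexts = np.array([
    [0.9, 0.1],  # "deposit money at the bank"
    [0.8, 0.2],  # "the bank raised interest rates"
    [0.1, 0.9],  # "fishing on the river bank"
    [0.2, 0.8],  # "the bank of the stream eroded"
])

def kmeans(X, k, iters=20, seed=0):
    """Minimal k-means; returns (centroids, cluster labels)."""
    rng = np.random.default_rng(seed)
    centroids = X[rng.choice(len(X), size=k, replace=False)]
    for _ in range(iters):
        # Assign each context to its nearest centroid.
        dists = np.linalg.norm(X[:, None] - centroids[None, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute each centroid as the mean of its cluster;
        # each centroid is one induced sense vector.
        centroids = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centroids, labels

sense_vectors, labels = kmeans(contexts, k=2)
# The two financial contexts end up in one cluster and the two
# river contexts in the other, yielding one vector per sense.
```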

The SENSE workshop aims at bringing together researchers in lexical semantics and NLP in general to investigate and discuss how to integrate sense, concept and entity representations into different NLP tasks and applications. The workshop is intended to cover topics including, but not limited to, the following:

- Utilizing sense/concept/entity representations in applications such as Machine Translation, Information Extraction or Retrieval, Word Sense Disambiguation, Entity Linking, Text Mining, Semantic Parsing, Knowledge Base Construction or Completion, etc.
- Exploration of the advantages/disadvantages of using sense representations over word representations.
- New evaluation benchmarks or comparison studies for sense vector representations.
- Development of new representation techniques (unsupervised, knowledge-based or hybrid).
- Compositionality of sense representations to learn representations of larger linguistic units such as phrases and sentences.
- Construction of sense representations for languages other than English, as well as multilingual representations.

Organisers

Jose Camacho-Collados, Sapienza University of Rome
Mohammad Taher Pilehvar, University of Cambridge

Program Committee

Eneko Agirre, University of the Basque Country
Claudio Delli Bovi, Sapienza University of Rome
Luis Espinosa-Anke, Pompeu Fabra University
Graeme Hirst, University of Toronto
Eduard Hovy, Carnegie Mellon University
Ignacio Iacobacci, Sapienza University of Rome
Richard Johansson, University of Gothenburg
David Jurgens, Stanford University
Omer Levy, University of Washington
Andrea Moro, Microsoft Research
Roberto Navigli, Sapienza University of Rome
Arvind Neelakantan, University of Massachusetts Amherst
Luis Nieto Piña, University of Gothenburg
Siva Reddy, University of Edinburgh
Horacio Saggion, Pompeu Fabra University
Hinrich Schütze, University of Munich
Piek Vossen, University of Amsterdam
Torsten Zesch, University of Duisburg-Essen
Jianwen Zhang, Microsoft Research

Submission information

This workshop will accept both long and short papers. Long papers may contain up to eight pages of content and two pages for references, while short papers may contain up to four pages of content and two pages for references. Long papers should include original and complete research. Short papers may include a focused contribution, an interesting application, a negative result, an opinion on the current state and prospects of the field, or work in progress, including hints for future work on the development of new representation techniques or on the integration of current techniques into downstream applications. All submissions must be in PDF format and must follow the EACL 2017 guidelines.

For more information about submissions, please visit the workshop website:
