LDQ 2017 : 4th Workshop on Linked Data Quality
Call For Papers
Theme and Topics
In recent years, the Linked Data paradigm has emerged as a simple mechanism for employing the Web for data and knowledge integration, allowing information to be published and exchanged in an interoperable way. This is confirmed by the growth of Linked Data on the Web, where more than 10,000 datasets are currently provided. However, data quality remains a crippling problem: inaccurate, inconsistent or incomplete data strongly affects the results derived from it, leading to unreliable conclusions.
These quality problems affect every application domain, be it scientific (e.g., life science, environment), governmental, or industrial. Moreover, assessing the quality of these datasets and making that assessment explicit to the publisher and/or consumer is a major challenge. Quality is key to the success of data on the Web, and poor quality is a major barrier to further industry adoption.
Although quality is an essential concept in Linked Data, few efforts currently standardize how data quality tracking and assurance should be performed; carried out iteratively, such tracking is essential for managing the quality of these datasets. Ensuring data quality in Linked Data is particularly challenging due to the openness of the Semantic Web, the diversity of the information, and the unbounded, dynamic set of autonomous data sources, publishers and consumers (legal and software agents). Thus there is a need not only for standardized concepts (e.g., vocabularies) but also for methodologies that make the assessment explicit.
The goal of the Workshop on Linked Data Quality is to raise awareness of quality issues in Linked Data and to promote approaches to assess, monitor, manage, maintain and improve Linked Data quality.
The workshop topics include, but are not limited to:
Quality modeling vocabularies, such as W3C’s DQV
Frameworks for quality testing and evaluation
Constraint languages and rules, such as SHACL, SPIN
Crowdsourcing data quality assessment
Quality assessment leveraging background knowledge
Assessing the quality evolution of Semantic Web Assets (Data, Services & Systems)
Refinement techniques for Linked Datasets
Methods and frameworks, e.g., linkage, alignment, cleaning, enrichment, correctness
Service/system quality improvement methods and frameworks
Methodologies and frameworks to plan, control, assure or improve the quality of Semantic Web Assets
Quality exploration and analysis interfaces
Developing, deploying and managing quality service ecosystems
Use-case driven quality management
Web Data and LOD quality benchmarks
Managing sustainability issues in services
Guarantee of service (availability, performance)
Systems for transparent management of open data
Quality of ontologies
Reputation and trustworthiness of Web resources
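To make the notion of an explicit, automatable quality assessment concrete, the following is a minimal sketch of one such metric: label completeness, the fraction of distinct subjects that carry an rdfs:label. The triple data, metric definition and all names here are illustrative assumptions, not part of this call.

```python
# Toy completeness metric over an in-memory list of triples.
# The data, the metric, and all identifiers are illustrative only.

RDFS_LABEL = "rdfs:label"  # assumed compact notation for the rdfs:label property

triples = [
    ("ex:Berlin", "rdf:type", RDFS_LABEL and "ex:City"),
    ("ex:Berlin", RDFS_LABEL, "Berlin"),
    ("ex:Paris", "rdf:type", "ex:City"),
    # ex:Paris has no label, which counts against completeness
]

def label_completeness(triples):
    """Fraction of distinct subjects that have at least one rdfs:label."""
    subjects = {s for s, _, _ in triples}
    labeled = {s for s, p, _ in triples if p == RDFS_LABEL}
    return len(labeled & subjects) / len(subjects) if subjects else 1.0

print(label_completeness(triples))  # 1 of 2 subjects labeled -> 0.5
```

Real assessment frameworks would express such metrics over full RDF graphs, e.g. with vocabularies like DQV for reporting or constraint languages like SHACL for validation, rather than over plain tuples as sketched here.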
We seek novel technical research papers in the context of Linked Data Quality. We accept a variety of contribution types, including but not limited to:
research papers (max 12 pages),
industry presentations and use cases (max 5 pages),
tools and services papers (max 5 pages),
position papers (max 3 pages)
Papers should be submitted in PDF format and formatted in the Springer Lecture Notes in Computer Science (LNCS) style. Please submit your paper via EasyChair at https://easychair.org/conferences/?conf=ldq2017. The author list does not need to be anonymized, as the review process is not double-blind. Submissions will be peer reviewed by three independent reviewers. Accepted papers must be presented at the workshop to be published in the proceedings. Proceedings will be published online in the CEUR Workshop Proceedings series. The best papers accepted for this workshop will be included in the supplementary proceedings of ESWC 2017, which will appear in the Springer LNCS series.