
DIACHRON 2016 : 2nd International Workshop on Preservation of Evolving Big Data


When: Mar 16, 2016
Where: Bordeaux, France
Submission Deadline: Dec 5, 2015
Notification Due: Jan 9, 2016
Final Version Due: Jan 23, 2016
Categories: big data, evolution

Call For Papers

In recent years, a vast and rapidly growing number of scientific, corporate, government and crowd-sourced knowledge bases have been published and curated independently on the Data Web in various forms, such as Linked Open Data (LOD) and Data APIs. This abundance of open Big Data sources, together with the need for Web-scale knowledge management, is transforming the Web into a vibrant information ecosystem. One of its main characteristics is its dynamicity: evolution is decentralized and spans multiple interrelated sources and data hubs. Traditional closed-world settings assume that data management and evolution take place within a well-defined, controlled environment where change operations and dependencies on the data can be easily monitored and handled. Web and digital preservation techniques, on the other hand, assume that preservation subjects, such as web pages, are plain digital assets that are collected (usually via a crawling mechanism) and archived individually for future reference. In contrast with both of these approaches, the Data Web requires revisiting and adjusting traditional closed-world data management techniques (temporal management and change detection, data archiving and preservation, data ingestion, integration and enrichment, data provenance and quality, and data visualization and exploratory analysis) to the characteristics of multi-curated knowledge bases.

First, novel techniques and systems must address the extremely high heterogeneity that results from the independent curation of evolving Data Web sources. Although the Semantic Web and the LOD paradigm provide common ontologies and vocabularies for various domains (biological, geospatial, statistical, etc.), the huge amount of available open data still exists in many different formats, from unstructured text and web tables to well-defined data APIs; moreover, it carries diverse and often ambiguous semantics and quality standards, resulting from different curation processes across sites, and undergoes very different kinds of structural and semantic change at arbitrary intervals and frequencies. Second, current big data systems must address the volume and performance issues of maintaining multiple copies of curated data that come from multiple sources and evolve along different timelines and at different granularities. Web data access and preservation are personalized at the user (curator) level, resulting in a blow-up of data replicated across sites. New techniques must take these different and varying user preferences into account in order to optimize the harvesting, integration, change detection, preservation and querying of evolving data.

Thus, new big data systems must inherently support temporal management of data objects at different granularity levels (dataset, entity, record); detection, modeling and querying of changes in data described in different source models and formats; machine learning techniques for time-aware entity resolution that account for provenance, quality factors and uncertainty in the data; efficient archiving and compression strategies that balance storage against query performance; preservation solutions that adapt to varying user preferences; resilience to varying data citation schemes; and guarantees of long-term sustainability, to name just a few requirements.

The second DIACHRON workshop aims to address the above challenges by providing a forum for Big Data researchers and practitioners to discuss, exchange and disseminate their work. Following the successful 1st DIACHRON workshop, we aim to extend our audience to the broader Big Data community in order to network and cross-fertilize on issues related to evolution and preservation.

Topics of interest include, but are not limited to:
Time representation in Big Data
Evolution representation and discovery
Provenance annotation in Big Data
Time-aware entity resolution and time-aware data integration
Efficient change detection, modeling and synchronization
Reasoning techniques for evolution detection
Quality assessment in evolving knowledge
Visualization and exploratory analysis of evolving Big Data
Visualizing trends, changes and paradigm shifts
User interfaces for presenting evolving knowledge
Querying techniques for Big Data archives
Efficient archiving techniques for Big Data
Preservation techniques for the Data Web
Benchmarking of Big Data archiving tools
Tools and demos
