
IML 2015 : Special Session on Incremental Machine Learning


When Nov 13, 2015 - Nov 15, 2015
Where Fukuoka, Japan
Submission Deadline Sep 7, 2015
Notification Due Sep 21, 2015
Final Version Due Oct 2, 2015
Categories    machine learning   incremental learning   data mining   pattern recognition

Call For Papers

The Special Session on Incremental Machine Learning (IML 2015) will be organized within the IEEE International Conference on Soft Computing and Pattern Recognition (SoCPaR 2015), to be held at Kyushu University, Fukuoka, Japan, during November 13-15, 2015.

Paper submission due (extended): Sept. 7, 2015

Please follow the paper submission guidelines available on the SoCPaR 2015 website.
Submit your paper to the submission site, and choose "SS2: Incremental Machine Learning".

The International Conference on Soft Computing and Pattern Recognition (SoCPaR 2015) is a major annual international conference that brings together researchers, engineers, developers and practitioners from academia and industry working in all interdisciplinary areas of soft computing and pattern recognition, to share their experience and to exchange and cross-fertilize their ideas.

All accepted papers fulfilling IEEE quality requirements will be published in IEEE Xplore. At least one author must register for every paper included in the conference proceedings. The proceedings will be delivered to participants during the conference.

Special Issues:
The organizers have arranged special issues on various topics with several international journals. Expanded versions of selected SoCPaR 2015 papers will be published in these special issues and in edited volumes.

Aims and Scope:
The size of available datasets is increasing rapidly, due in part to the growth of the Internet and of sensor networks. In many cases these databases are constantly evolving (data streams): their structure changes over time and new data arrive continuously. Sometimes the volume and rate of change of the data are so large that storing them in a database is impossible. This poses several unique problems that make standard data analysis methods obsolete. These databases are permanently online and grow with the arrival of new data, so efficient algorithms must work with a constant memory footprint, despite the evolution of the stream, since the entire database cannot be kept in memory.

Traditionally, most machine learning algorithms focus on batch learning from a static dataset or from a well-known distribution. However, batch algorithms take a long time to learn from large amounts of training data, and many of them are not suited to non-stationary distributions; only an analysis "on the fly" is possible. These processes, called "data stream analysis", have been the subject of numerous studies in recent years because of the large number of potential applications in many fields. Online incremental algorithms process a few examples at a time and extract the knowledge structure from continuous data in real time. The problem becomes even more difficult with high-dimensional data, unbalanced data or outliers. Indeed, the study of data streams is hard: computing and storage costs are high and the datasets involved are large. In data mining, the main challenges in the study of data streams are computing a condensed description of the stream properties and detecting changes in the stream structure.

To be autonomous, such algorithms must have several important characteristics. They must discover the underlying structure of the data without any prior knowledge, i.e. they should automatically learn the spatial and temporal structure of the data regardless of its type: vectors, symbolic data, more complex forms (text, graphs) or mixed data. They must adapt in real time to structural changes in the data over time ("concept drift"), without having to relearn everything from scratch. They must also be able to store and reuse knowledge in order to learn a new data structure similar to one already learned. This property is essential for data stream analysis.
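As a minimal illustration of this setting (the stream simulator, the drift point and the choice of scikit-learn's SGDClassifier are illustrative assumptions, not part of this call), the sketch below updates a classifier from one mini-batch at a time with a constant memory footprint, evaluates it prequentially (each new batch is scored before it is learned), and simulates an abrupt concept drift halfway through the stream.

import numpy as np
from sklearn.linear_model import SGDClassifier

rng = np.random.default_rng(0)
classes = np.array([0, 1])
# Constant step size so the model keeps adapting after the drift.
model = SGDClassifier(learning_rate="constant", eta0=0.05, random_state=0)

def next_batch(t, size=50):
    # Illustrative two-class stream; both class means shift at t = 100 (abrupt concept drift).
    shift = 3.0 if t >= 100 else 0.0
    y = rng.integers(0, 2, size)
    X = rng.normal(loc=y[:, None] * 2.0 + shift, scale=1.0, size=(size, 2))
    return X, y

for t in range(200):
    X, y = next_batch(t)
    if t > 0:
        # Prequential evaluation: score the incoming batch before learning from it.
        acc = model.score(X, y)
        if t % 50 == 0:
            print(f"batch {t}: accuracy on incoming data = {acc:.2f}")
    # Constant-memory update: only the current batch is held, never the full stream.
    model.partial_fit(X, y, classes=classes)

Right after the drift the accuracy on incoming batches drops, then recovers as the incremental updates move the decision boundary, without ever retraining on the past data.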
This special session aims to act as a forum for new ideas and paradigms in the field of automated incremental learning. It solicits theoretical and applied research papers on topics including, but not limited to, the following:

• Incremental Supervised Learning
• Incremental Unsupervised Learning
• Online Learning
• Autonomous Learning
• Concept Drift
• Model Selection
• Online Feature Selection
• Data Stream Clustering
• Distributed Clustering
• Consensus Clustering
• Incremental Probabilistic Models
• Active Learning
• Applications of Incremental Learning

Organizers:
Guénaël Cabanes, LIPN, Paris 13 University, Villetaneuse, France
Nicoleta Rogovschi, LIPADE, Paris Descartes University, Paris, France
Nistor Grozavu, LIPN, Paris 13 University, Villetaneuse, France
Younès Bennani, LIPN, Paris 13 University, Villetaneuse, France
