IML 2015 : Special Session on Incremental Machine Learning
Call For Papers
The Special Session on Incremental Machine Learning (IML 2015) will be organized within the IEEE International Conference on Soft Computing and Pattern Recognition (SoCPaR 2015), to be held at Kyushu University, Fukuoka, Japan, during November 13-15, 2015.
Paper submission due (extended): September 7, 2015
Please follow the paper submission guidelines available on the SoCPaR 2015 website.
Submit your paper via the submission site and choose "SS2: Incremental Machine Learning".
The International Conference on Soft Computing and Pattern Recognition (SoCPaR 2015) is a major annual international conference that brings together researchers, engineers, developers, and practitioners from academia and industry, working in all interdisciplinary areas of soft computing and pattern recognition, to share their experience and to exchange and cross-fertilize their ideas.
All accepted papers fulfilling IEEE quality requirements will be published in IEEE Xplore. At least one author must register for every paper included in the conference proceedings. Conference proceedings will be delivered to participants during the conference.
The organizers have successfully negotiated with several international journals to accommodate special issues on various topics. Expanded versions of selected SoCPaR 2015 papers will be published in special issues and edited volumes.
Aims and Scope:
The size of available datasets has been increasing rapidly, due in part to the growth of the Internet and of sensor networks. In many cases these databases evolve constantly (data streams): their structure changes over time and new data arrive continuously. Sometimes the rate and volume of arriving data are so large that it is impossible to store them in a database. This poses several unique problems that render standard data-analysis methods obsolete. Indeed, these databases are permanently online, growing as new data arrive. Efficient algorithms must therefore work with a constant memory footprint, whatever the evolution of the stream, since the entire database cannot be kept in memory.

Traditionally, most machine learning algorithms focus on batch learning from a static dataset or from a well-known distribution. However, batch algorithms take a long time to learn from large amounts of training data, and many of them are not suited to non-stationary distributions. Only an analysis "on the fly" is possible. These processes, called "data stream analysis", have been the subject of numerous studies in recent years, owing to the large number of potential applications in many fields. Online incremental algorithms process a few examples at a time and extract the knowledge structure from continuous data in real time. The problem becomes even more difficult with high-dimensional data, unbalanced data, or outliers. Indeed, the study of data streams is a hard problem: computing and storage costs are high and the datasets involved are large. In the field of data mining, the main challenges in data stream analysis are computing a condensed description of the stream's properties and detecting changes in the stream's structure.
To be autonomous, the algorithms must have several important characteristics. They must be able to discover the underlying structure of the data without any prior knowledge, i.e. the algorithm should automatically learn the spatial and temporal structure of the data regardless of its type: vectors, symbols, more complex forms (text, graphs), or mixtures of these. The algorithms must adapt to structural changes in the data over time ("concept drift") in real time, without having to relearn everything from scratch each time. They must also be able to store and reuse knowledge in order to learn a new data structure similar to one already learned. This property is essential for data stream analysis.
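As a purely illustrative sketch (the code and names below are our own, not part of the call), a constant-memory online estimator with exponential forgetting shows, in miniature, how an incremental algorithm can process one example at a time and track concept drift without storing the stream:

```python
# Illustrative sketch: a constant-memory online estimator.
# An exponentially weighted mean gradually forgets old data, so the
# estimate follows a shift in the stream ("concept drift") instead of
# averaging over the entire history. Memory use is O(1) per stream.

def ewm_stream(stream, alpha=0.1):
    """Process one example at a time; yield the current estimate."""
    mean = None
    for x in stream:
        mean = x if mean is None else (1 - alpha) * mean + alpha * x
        yield mean

# The stream's distribution shifts from 0.0 to 10.0 halfway through;
# the running estimate adapts to the new regime.
data = [0.0] * 50 + [10.0] * 50
estimates = list(ewm_stream(data))
```

The forgetting factor `alpha` controls the trade-off between stability (small `alpha`) and reactivity to drift (large `alpha`); real incremental learners face the same trade-off at the level of model structure rather than a single statistic.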
This special session aims to act as a forum for new ideas and paradigms in the field of automated incremental learning. It solicits theoretical and applied research papers on topics including, but not limited to, the following:
• Incremental Supervised Learning
• Incremental Unsupervised Learning
• Online Learning
• Autonomous Learning
• Concept Drift
• Model Selection
• Online Feature Selection
• Data Stream Clustering
• Distributed Clustering
• Consensus Clustering
• Incremental Probabilistic Models
• Active Learning
• Applications of Incremental Learning
Organizers:
Guénaël Cabanes, LIPN, Paris 13 University, Villetaneuse, France
Nicoleta Rogovschi, LIPADE, Paris Descartes University, Paris, France
Nistor Grozavu, LIPN, Paris 13 University, Villetaneuse, France
Younès Bennani, LIPN, Paris 13 University, Villetaneuse, France