
NEUCOM 2014 : Neurocomputing Special Issue: Advances in Learning with Label Noise


Link: http://www.journals.elsevier.com/neurocomputing/call-for-papers/special-issue-on-advances-in-learning-with-label-noise/
 
When N/A
Where N/A
Submission Deadline Mar 1, 2014
Notification Due Jul 15, 2014
Categories    machine learning
 

Call For Papers

Call for Papers: Neurocomputing Special Issue on "Advances in Learning with Label Noise"

AIMS AND SCOPE

Label noise is an important issue in classification. Obtaining completely reliable labels is both expensive and difficult, yet traditional classifiers expect a perfectly labelled training set. In real-world data sets, however, the available labels often contain mistakes. Mislabelling may occur for several reasons, including lack of information, hasty labelling by non-experts, the subjective nature of class memberships, expert errors, and communication problems. Furthermore, label noise may take several different forms: labelling errors may occur at random, may depend on particular values of the data features, or may be adversarial, and they may affect all classes equally or asymmetrically. A large body of literature on the effects of label noise shows that mislabelling may degrade classification performance, increase the complexity of the learned models, and impair pre-processing tasks such as feature selection.
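The distinction between noise types mentioned above can be made concrete with a small sketch. The helper below (a hypothetical illustration, not taken from any submission) injects either symmetric noise, where a corrupted label becomes a uniformly random other class, or asymmetric noise, where each class is flipped to a fixed confusion class:

```python
import numpy as np

def flip_labels(y, rate, rng, confusion=None):
    """Inject label noise into a vector of class labels y.

    rate: probability that each label is corrupted.
    confusion: optional dict mapping each class to the class it is
        flipped to (asymmetric, class-dependent noise); if None, a
        corrupted label becomes a uniformly random *other* class
        (symmetric, random noise).
    """
    y = np.asarray(y).copy()
    classes = np.unique(y)
    flip = rng.random(len(y)) < rate            # which labels to corrupt
    for i in np.flatnonzero(flip):
        if confusion is not None:
            y[i] = confusion[y[i]]              # deterministic confusion
        else:
            others = classes[classes != y[i]]
            y[i] = rng.choice(others)           # uniform over other classes
    return y

rng = np.random.default_rng(0)
clean = np.repeat([0, 1, 2], 100)
noisy_sym = flip_labels(clean, 0.2, rng)                        # random noise
noisy_asym = flip_labels(clean, 0.2, rng, {0: 1, 1: 2, 2: 0})   # asymmetric noise
```

Feature-dependent noise could be sketched similarly by making the flip probability a function of the instance's features rather than a constant rate.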

Many methods have been proposed to deal with label noise. Filter approaches aim to identify and remove mislabelled instances. Label noise sensitive algorithms deal with label noise during learning, by modelling the process of label corruption as part of modelling the data. Other methods have been modified to take label noise into account in an embedded fashion. The current literature on learning with label noise is a lively mixture of theoretical and experimental studies which clearly demonstrate both the complexity and the importance of the problem. Methods for dealing with mislabelled instances must be flexible enough to accommodate label uncertainty, yet constrained enough to guide the learning process in deciding when to trust the label and when to trust the classifier.
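As a minimal sketch of the filter approach mentioned above (an illustration under assumed design choices, not any particular published method), one can flag as suspect every instance whose cross-validated prediction disagrees with its given label, and remove those instances before final training:

```python
import numpy as np
from sklearn.model_selection import cross_val_predict
from sklearn.neighbors import KNeighborsClassifier

def filter_mislabelled(X, y, n_folds=5):
    """Classification filter: drop instances whose cross-validated
    prediction disagrees with their given label.

    Returns the cleaned (X, y) plus a boolean mask of suspect instances.
    The choice of a k-NN filter classifier is an arbitrary example.
    """
    pred = cross_val_predict(KNeighborsClassifier(), X, y, cv=n_folds)
    keep = pred == y
    return X[keep], y[keep], ~keep
```

Such filters illustrate the tension described above: an aggressive filter also removes genuinely hard but correctly labelled instances, while a lenient one lets mislabelled instances through.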

This special issue aims to stimulate new research in the area of learning with label noise by providing a forum for authors to report on new advances and findings in this problem area. Topics of interest include, but are not limited to:

- new methods to deal with label noise;
- new applications where label noise must be taken into account;
- theoretical results about learning in the presence of label noise;
- experimental results which provide insight about existing methods;
- dealing with different types of label noise (random, non-random, malicious, or adversarial);
- conditions for the consistency of classification in the presence of label noise;
- label noise in high dimensional small sample settings;
- the issue of model meta-parameters/order selection in the presence of label noise;
- feature selection and dimensionality reduction in the presence of label noise;
- label-noise aware classification algorithms in static and dynamic scenarios;
- on-line learning with label noise;
- learning with side information to counter label noise;
- model assessment in the presence of label noise in test data.


SUBMISSION OF MANUSCRIPTS

If you intend to contribute to this special issue, please send a title and abstract of your contribution to the guest editors.

Authors should prepare their manuscripts according to the Guide for Authors available at http://www.journals.elsevier.com/neurocomputing. All papers will be peer-reviewed following the Neurocomputing reviewing procedures. Authors must submit their papers electronically via the online manuscript submission system at http://ees.elsevier.com/neucom. To ensure that all manuscripts are correctly included in the special issue, it is important that authors select "SI: Learning with label noise" when they reach the "Article Type" step in the submission process.

For technical questions regarding the submission website, please contact the support office at Elsevier or the guest editors.

IMPORTANT DATES

Deadline of paper submission: 15 February 2014
Notification of acceptance: 15 July 2014

GUEST EDITORS

Benoît Frénay (Managing Guest Editor)
Université catholique de Louvain, Belgium
E-mail: benoit.frenay@uclouvain.be
Website: http://bfrenay.wordpress.com
Phone: +32 10 478133

Ata Kaban (Special Issue Guest Editor)
University of Birmingham, United Kingdom
E-mail: A.Kaban@cs.bham.ac.uk
Website: http://www.cs.bham.ac.uk/~axk
Phone: +44 121 41 42792
