
MET 2016: 1st International Workshop on Metamorphic Testing


Link: http://www.cs.montana.edu/met16/
 
When: May 16, 2016
Where: Austin, Texas
Submission Deadline: Jan 22, 2016
Notification Due: Feb 19, 2016
Final Version Due: Feb 26, 2016
Categories: metamorphic testing, test oracles, software testing
 

Call For Papers

*************** MET 2016 Call for Papers ******************

The 1st International Workshop on Metamorphic Testing (MET 2016) - http://www.cs.montana.edu/met16/

in conjunction with the 38th International Conference on Software Engineering (ICSE), Austin, TX, May 14-22, 2016


*IMPORTANT DATES*

Paper submissions due: January 22, 2016
Notification to authors: February 19, 2016
Camera ready copies due: February 26, 2016
Workshop: May 16, 2016


*SCOPE OF THE WORKSHOP*

Metamorphic testing (MT) is a testing technique that exploits relationships among the inputs and outputs of multiple executions of the program under test, known as Metamorphic Relations (MRs). MT has proven highly effective in testing programs that face the oracle problem, i.e., programs for which the correctness of individual outputs is difficult to determine. Since the introduction of MT in 1998, interest in this testing methodology has grown immensely, with numerous applications in domains such as machine learning, bioinformatics, computer graphics, simulation, search engines, decision support, cloud computing, databases, and compilers.
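To make the idea concrete, here is a minimal sketch (not part of the original call) of a metamorphic test in Python, using the classic relation sin(x) = sin(pi - x) for a sine implementation. The function names are illustrative only, and the setup assumes the implementation under test can be called directly:

import math

# Stand-in for the implementation under test; in practice this would be
# the (possibly faulty) routine whose individual outputs are hard to verify.
def program_under_test(x):
    return math.sin(x)

# Metamorphic relation: sin(x) == sin(pi - x). Even when the exact
# expected value of sin(x) is unknown (the oracle problem), two
# executions of the program can be checked against each other.
def check_mr(x, tol=1e-9):
    source_output = program_under_test(x)               # source test case
    followup_output = program_under_test(math.pi - x)   # follow-up test case
    assert abs(source_output - followup_output) <= tol, \
        f"MR violated at x={x}: {source_output} != {followup_output}"

if __name__ == "__main__":
    for x in [0.0, 0.5, 1.0, 2.0, -1.3]:
        check_mr(x)
    print("All metamorphic checks passed.")

A real metamorphic test suite would apply many such relations to many source inputs; a violation of any relation reveals a fault without ever computing the expected output directly.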

The First International Workshop on Metamorphic Testing (MET 2016) will bring together researchers and practitioners from academia and industry to discuss research results and experiences with MT. The ultimate goal of MET is to provide a platform for the discussion of novel ideas, new perspectives, new applications, and the state of research related to or inspired by MT.


*TOPICS OF INTEREST*

The topics of interest include, but are not limited to:

- Guidelines and techniques for the construction of MRs or MT test cases.
- Prioritization and minimization of MRs or MT test cases.
- Quality assessment mechanisms for MRs or MT test cases (e.g., metrics).
- Automated generation of likely MRs.
- Combination of MRs.
- Generation of source test cases.
- Formal methods involving MRs.
- Case studies and applications.
- Tools.
- Surveys.
- Empirical studies.
- Integration/comparison with other techniques.
- Novel applications, perspectives, or theories inspired by MT, which can be beyond conventional software testing topics.


*SUBMISSION AND PUBLICATION*

Authors are invited to submit original, previously unpublished research papers. Papers should be written in English, strictly following the ICSE 2016 formatting and submission instructions: http://2016.icse.cs.txstate.edu/formatInstr

The following types of submissions are accepted:

- Full research papers with a maximum length of 7 pages, including references and appendices.
- Short papers with a maximum length of 4 pages, including references and appendices.

Papers must be submitted in PDF format via the electronic submission system, which is available at [TBD]

Submitted papers will be evaluated by at least three members of an international program committee according to their rigor, significance, originality, technical quality, and exposition.

At least one author of each accepted paper must register for and participate in the workshop. Registration is subject to the terms, conditions, and procedures of the main ICSE conference; see http://2016.icse.cs.txstate.edu/

Accepted papers will be published in the ACM Digital Library.


*KEYNOTE SPEAKER*

Prof. T.Y. Chen, Swinburne University of Technology, Australia


*ORGANIZERS*

Upulee Kanewala, Montana State University, USA
Laura L. Pullum, Oak Ridge National Laboratory, USA
Sergio Segura, University of Seville, Spain
Dave Towey, The University of Nottingham Ningbo China, China
Zhi Quan (George) Zhou, University of Wollongong, Australia


*PROGRAM COMMITTEE (To be completed)*

James Bieman, Colorado State University, USA
Giovanni Denaro, University of Milano Bicocca, Italy
Phyllis Frankl, Polytechnic Institute of New York University, USA
Arnaud Gotlieb, Simula Research Laboratory, Norway
Mark Harman, University College London, UK
Robert M. Hierons, Brunel University, UK
Gail Kaiser, Columbia University, USA
F.-C. Kuo, Swinburne University of Technology, Australia
Mikael Lindvall, Fraunhofer Center for Experimental Software Engineering, USA
Huai Liu, RMIT University, Australia
Chris Murphy, University of Pennsylvania, USA
Alberto Núñez, Universidad Complutense de Madrid, Spain
T. H. Tse, The University of Hong Kong, Hong Kong
Xiaoyuan Xie, Wuhan University, China


*CONTACT*

Upulee Kanewala. Email: upulee.kanewala@cs.montana.edu
