
MET 2016: 1st International Workshop on Metamorphic Testing


Link: http://www.cs.montana.edu/met16/
 
When May 16, 2016 - May 16, 2016
Where Austin, Texas
Submission Deadline Jan 22, 2016
Notification Due Feb 19, 2016
Final Version Due Feb 26, 2016
Categories    metamorphic testing   test oracles   software testing
 

Call For Papers

*************** MET 2016 Call for Papers ******************

The 1st International Workshop on Metamorphic Testing (MET 2016) - http://www.cs.montana.edu/met16/

in conjunction with the 38th International Conference on Software Engineering (ICSE), Austin, TX, May 14-22, 2016


*IMPORTANT DATES*

Paper submissions due: January 22, 2016
Notification to authors: February 19, 2016
Camera ready copies due: February 26, 2016
Workshop: May 16, 2016


*SCOPE OF THE WORKSHOP*

Metamorphic testing (MT) is a testing technique that exploits relationships among the inputs and outputs of multiple executions of the program under test, known as metamorphic relations (MRs). MT has proven highly effective for testing programs that face the oracle problem, where the correctness of individual outputs is difficult to determine. Since the introduction of MT in 1998, interest in the methodology has grown immensely, with numerous applications in domains such as machine learning, bioinformatics, computer graphics, simulation, search engines, decision support, cloud computing, databases, and compilers.
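To illustrate the idea, here is a minimal sketch in Python (the function name and the chosen relation are only examples, not prescribed by the workshop). It tests a sine implementation, for which no practical oracle gives the exact expected value of an arbitrary input, against the MR sin(x) = sin(pi - x): the relation is checked on randomly generated source test cases without ever knowing the "correct" output of any individual execution.

import math
import random

def test_sine_metamorphic_relation(trials=100, tol=1e-9):
    # Example MR: sin(x) == sin(pi - x) for all x.
    # No reference oracle is needed; only the relation between the
    # source execution and the follow-up execution is checked.
    for _ in range(trials):
        x = random.uniform(-10.0, 10.0)           # source test case
        source_output = math.sin(x)               # source execution
        followup_output = math.sin(math.pi - x)   # follow-up execution
        assert abs(source_output - followup_output) < tol, (
            "MR violated for x=%r: %r vs %r" % (x, source_output, followup_output))

if __name__ == "__main__":
    test_sine_metamorphic_relation()
    print("Metamorphic relation held on all generated source test cases.")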

The First International Workshop on Metamorphic Testing (MET 2016) will bring together researchers and practitioners from academia and industry to discuss research results and experiences in MT. The ultimate goal of MET is to provide a platform for discussing novel ideas, new perspectives, new applications, and the state of research related to or inspired by MT.


*TOPICS OF INTEREST*

The topics of interest include, but are not limited to:

- Guidelines and techniques for the construction of MRs or MT test cases.
- Prioritization and minimization of MRs or MT test cases.
- Quality assessment mechanisms for MRs or MT test cases (e.g., metrics).
- Automated generation of likely MRs.
- Combination of MRs.
- Generation of source test cases.
- Formal methods involving MRs.
- Case studies and applications.
- Tools.
- Surveys.
- Empirical studies.
- Integration/comparison with other techniques.
- Novel applications, perspectives, or theories inspired by MT, which may go beyond conventional software testing topics.


*SUBMISSION AND PUBLICATION*

Authors are invited to submit original, previously unpublished research papers. Papers should be written in English, strictly following the ICSE 2016 formatting and submission instructions: http://2016.icse.cs.txstate.edu/formatInstr

The following types of submissions are accepted:

- Full research papers with a maximum length of 7 pages, including references and appendices.
- Short papers with a maximum length of 4 pages, including references and appendices.

Papers must be submitted in PDF format via the electronic submission system, which is available at [TBD]

Submitted papers will be evaluated by at least three members of an international program committee according to their rigor, significance, originality, technical quality, and exposition.

At least one author of each accepted paper must register for and participate in the workshop. Registration is subject to the terms, conditions, and procedures of the main ICSE conference, available at its website: http://2016.icse.cs.txstate.edu/

Accepted papers will be published in the ACM Digital Library.


*KEYNOTE SPEAKER*

Prof. T.Y. Chen, Swinburne University of Technology, Australia


*ORGANIZERS*

Upulee Kanewala, Montana State University, USA
Laura L. Pullum, Oak Ridge National Laboratory, USA
Sergio Segura, University of Seville, Spain
Dave Towey, The University of Nottingham Ningbo China, China
Zhi Quan (George) Zhou, University of Wollongong, Australia


*PROGRAM COMMITTEE (To be completed)*

James Bieman, Colorado State University, USA
Giovanni Denaro, University of Milano Bicocca, Italy
Phyllis Frankl, Polytechnic Institute of New York University, USA
Arnaud Gotlieb, Simula Research Laboratory, Norway
Mark Harman, University College London, UK
Robert M. Hierons, Brunel University London, UK
Gail Kaiser, Columbia University, USA
F.-C. Kuo, Swinburne University of Technology, Australia
Mikael Lindvall, Fraunhofer Center for Experimental Software Engineering, USA
Huai Liu, RMIT University, Australia
Chris Murphy, University of Pennsylvania, USA
Alberto Núñez, Universidad Complutense de Madrid, Spain
T. H. Tse, The University of Hong Kong, Hong Kong
Xiaoyuan Xie, Wuhan University, China


*CONTACT*

Upulee Kanewala. Email: upulee.kanewala@cs.montana.edu
