TAP: Tests and Proofs




Event | When | Where | Deadline (abstract deadline in parentheses)

TAP 2018 - 12th International Conference on Tests and Proofs
Jun 27, 2018 - Jun 29, 2018 | Toulouse, France | Mar 11, 2018 (Mar 5, 2018)

TAP 2016 - Tests & Proofs
Jul 5, 2016 - Jul 7, 2016 | Vienna, Austria | Feb 5, 2016 (Jan 29, 2016)

TAP 2015 - 9th International Conference on Tests & Proofs
Jul 20, 2015 - Jul 24, 2015 | L'Aquila, Italy | Feb 20, 2015 (Feb 13, 2015)

TAP 2014 - 8th International Conference on Tests and Proofs
Jul 24, 2014 - Jul 25, 2014 | York, United Kingdom | Mar 1, 2014 (Feb 25, 2014)

TAP 2011 - 5th International Conference on Tests & Proofs
Jun 30, 2011 - Jul 1, 2011 | Zurich | Feb 11, 2011 (Feb 4, 2011)

Present CFP: 2018

First Call for Papers

12th International Conference on Tests And Proofs
TAP 2018 Toulouse (France), 27-29 June 2018

Part of STAF 2018 held in Toulouse


Important Dates

Abstract: 5 March 2018 (Extended)
Paper: 11 March 2018 (Extended)
Notification: 9 April 2018
Camera-Ready Version: 23 April 2018
Conference: 27-29 June 2018

Aim and Scope

The TAP conference promotes research in verification and formal
methods that targets the interplay of proofs and testing: the
advancement of techniques of each kind and their combination, with the
ultimate goal of improving software and system dependability.

Research in verification has recently seen a steady convergence of
heterogeneous techniques and a growing synergy between the
traditionally distinct areas of testing (and dynamic analysis) and
proving (and static analysis). Formal techniques for counter-example
generation based on, for example, symbolic execution, SAT/SMT solving,
or model checking furnish evidence of the potential of combining tests
and proofs. The combination of predicate abstraction with testing-like
techniques based on exhaustive enumeration opens up perspectives for
novel ways of proving correctness. On the practical side, testing
offers cost-effective techniques for debugging specifications or
crucial parts of program proofs (such as invariants). Last but not
least, testing is indispensable when it comes to validating the
underlying assumptions of complex system models involving hardware
and/or system environments. Over the years, the research community has
increasingly come to accept that testing and proving are complementary
rather than mutually exclusive techniques.
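The complementarity sketched above can be made concrete with a small,
purely illustrative example (ours, not part of the CFP): a bounded,
testing-like exhaustive enumeration searches for a counter-example to
a conjectured property, producing exactly the kind of evidence that
can refute, or lend confidence to, a proof attempt. The conjecture
(Euler's polynomial n^2 + n + 41 is always prime) and the search bound
are illustrative assumptions.

```python
def is_prime(n):
    """Naive trial-division primality test (sufficient for small n)."""
    if n < 2:
        return False
    d = 2
    while d * d <= n:
        if n % d == 0:
            return False
        d += 1
    return True

def find_counterexample(conjecture, bound):
    """Testing-like exhaustive enumeration: return the first input in
    [0, bound) that falsifies `conjecture`, or None if all pass."""
    for n in range(bound):
        if not conjecture(n):
            return n
    return None

# Conjecture (illustrative): n^2 + n + 41 is prime for every n >= 0.
# It holds for n = 0..39, so small test suites may miss the flaw;
# a bounded exhaustive search finds the counter-example n = 40,
# where 40^2 + 40 + 41 = 1681 = 41^2.
euler = lambda n: is_prime(n * n + n + 41)
print(find_counterexample(euler, 100))  # -> 40
```

A passing search up to some bound is not a proof, but a failing one is
a definitive refutation; this asymmetry is one reason testing and
proving combine so naturally.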

The TAP conference aims to promote research in the intersection of
testing and proving by bringing together researchers and practitioners
from both areas of verification.

Topics of Interest

TAP's scope encompasses many aspects of verification technology,
including foundational work, tool development, and empirical
research. Its topics of interest center around the connection between
proofs (and other static techniques) and testing (and other dynamic
techniques). Papers are solicited on, but not limited to, the
following topics:

- Verification and analysis techniques combining proofs and tests
- Program proving with the aid of testing techniques
- Deductive techniques supporting the automated generation of test vectors and oracles
(theorem proving, model checking, symbolic execution, SAT/SMT solving, constraint logic programming, etc.)
- Deductive techniques supporting novel definitions of coverage criteria
- Program analysis techniques combining static and dynamic analysis
- Specification inference by deductive and dynamic methods
- Testing and runtime analysis of formal specifications
- Search-based techniques for proving and testing
- Verification of verification tools and environments
- Applications of test and proof techniques in new domains,
such as security, configuration management, learning
- Combined approaches of test and proof in the context of formal
certifications (Common Criteria, CENELEC, …)
- Case studies, tool and framework descriptions, and experience
reports about combining tests and proofs

Submission Instructions

TAP 2018 accepts papers of three kinds:

- Regular research papers: full submissions describing original
research, of up to 16 pages (excluding references).

- Tool demonstration papers: submissions describing the design and
implementation of an analysis/verification tool or framework, of up
to 8 pages (excluding references). The tool/framework described in
a tool demonstration paper should be available for public use.

- Short papers: submissions describing preliminary findings, proofs
of concept, and exploratory studies, of up to 6 pages (excluding
references).

We are planning to publish the proceedings in the Formal Methods subline
of Springer's LNCS series. Papers must be submitted in PDF format via
the EasyChair submission site.

Information about all committees can be found at https://tap18.lri.fr

Program Chairs:
- Catherine Dubois, ENSIIE, Évry, France
- Burkhart Wolff, Université Paris-Sud, Orsay, France

Program Committee

Bernhard Beckert, Karlsruhe Institute of Technology
Martina Seidl, Johannes Kepler University Linz
Achim D. Brucker, The University of Sheffield
Nikolai Kosmatov, CEA List, France
Martin Gogolla, University of Bremen
Arnaud Gotlieb, SIMULA Research Laboratory, Norway
Reiner Hähnle, TU Darmstadt
Alain Giorgetti, FEMTO-ST, University Franche-Comté
Jasmin Christian Blanchette, Vrije Universiteit Amsterdam
Chantal Keller, LRI, Université Paris-Sud
Angelo Gargantini, University of Bergamo
Bernhard K. Aichernig, TU Graz
Carlo A. Furia, Chalmers University of Technology
Helene Waeselynck, LAAS-CNRS, Toulouse
Rob Hierons, Brunel University London
Corina Pasareanu, CMU/NASA Ames Research Center
Moa Johansson, Chalmers University of Technology
Thierry Jéron, INRIA Rennes
Laura Kovacs, TU Wien
Karl Meinke, Royal Institute of Technology, Stockholm
Alexandre Petrenko, Computer Research Institute of Montreal
