TACAS: Tools and Algorithms for Construction and Analysis of Systems



Past:   Proceedings on DBLP

Event When Where Deadline
TACAS 2021 27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems
Mar 27, 2021 - Apr 1, 2021 Luxembourg Oct 15, 2020
TACAS 2020 Tools and Algorithms for Construction and Analysis of Systems
Apr 25, 2020 - Apr 30, 2020 Dublin, Ireland Oct 24, 2019
TACAS 2019 International Conference on Tools and Algorithms for the Construction and Analysis of Systems
Apr 8, 2019 - Apr 11, 2019 Prague, Czech Republic Nov 15, 2018 (Nov 7, 2018)
TACAS 2016 22nd International Conference on Tools and Algorithms for the Construction and Analysis of Systems
Apr 2, 2016 - Apr 8, 2016 Eindhoven, The Netherlands Oct 16, 2015 (Oct 9, 2015)
TACAS 2015 21st International Conference on Tools and Algorithms for the Construction and Analysis of Systems
Apr 11, 2015 - Apr 19, 2015 London, UK Oct 17, 2014 (Oct 10, 2014)
TACAS 2014 Tools and Algorithms for Construction and Analysis of Systems
Apr 5, 2014 - Apr 13, 2014 Grenoble Oct 11, 2013 (Oct 4, 2013)
TACAS 2013 19th International Conference on Tools and Algorithms for the Construction and Analysis of Systems (TACAS)
Mar 16, 2013 - Mar 24, 2013 Rome Oct 14, 2012 (Oct 7, 2012)
TACAS 2012 18th International Conference on Tools and Algorithms for the Construction and Analysis of Systems
Mar 24, 2012 - Apr 1, 2012 Tallinn, Estonia Oct 14, 2011 (Oct 7, 2011)
TACAS 2011 International Conference on Tools and Algorithms for the Construction and Analysis of Systems
Mar 26, 2011 - Apr 3, 2011 Saarbrücken, Germany Oct 8, 2010 (Oct 1, 2010)
TACAS 2010 16th International Conference on Tools and Algorithms for the Construction and Analysis of Systems
Mar 20, 2010 - Mar 28, 2010 Paphos, Cyprus Oct 8, 2009 (Oct 1, 2009)

Present CFP: 2021

27th International Conference on Tools and Algorithms for the Construction and Analysis of Systems

TACAS is a forum for researchers, developers and users interested in rigorously based tools and algorithms for the construction and analysis of systems. The conference aims to bridge the gaps between different communities with this common interest and to support them in their quest to improve the utility, reliability, flexibility and efficiency of tools and algorithms for building systems.

Theoretical papers with clear relevance for tool construction and analysis as well as tool descriptions and case studies with a conceptual message are all encouraged. The topics covered by the conference include, but are not limited to:

specification and verification techniques;
software and hardware verification;
analytical techniques for real-time, hybrid, or stochastic systems;
analytical techniques for safety, security, or dependability;
SAT and SMT solving;
theorem proving;
model checking;
static and dynamic program analysis;
abstraction techniques for modeling and verification;
compositional and refinement-based methodologies;
system construction and transformation techniques;
machine-learning techniques for synthesis and verification;
tool environments and tool architectures;
applications and case studies.

Paper submission

See the ETAPS 2021 joint call for papers. Submit your paper via the TACAS 2021 author interface of EasyChair.

The review process of TACAS 2021 is single-blind, with a rebuttal phase for selected papers.

Limit of 3 submissions: Each individual author is limited to a maximum of three TACAS submissions as an author or co-author. Authors of co-authored submissions are jointly responsible for respecting this policy. In case of violations, all submissions of this (co-)author will be desk-rejected.
Paper categories

TACAS accepts four types of submissions: research papers, case-study papers, regular tool papers, and tool demonstration papers.

Research papers clearly identify and justify a principled advance to the theoretical foundations for the construction and analysis of systems. Where applicable, they are supported by experimental validation.

Case study papers report on case studies, preferably in a real-world setting. They should provide information about the following aspects: the system being studied and the reasons why it is of interest, the goals of the study, the challenges the system poses to automated analysis/testing/synthesis, research methodologies and approaches used, the degree to which the goals were met, and how the results can be generalized to other problems and domains.

Regular tool papers present a new tool, a new tool component, or novel extensions to an existing tool, and are subject to an artifact submission requirement (see below). They should provide a short description of the theoretical foundations with relevant citations, and emphasize the design and implementation concerns, including software architecture and core data structures. A regular tool paper should give a clear account of the tool’s functionality, discuss the tool’s practical capabilities with reference to the type and size of problems it can handle, describe experience with realistic case studies, and where applicable, provide a rigorous experimental evaluation. Papers that present extensions to existing tools should clearly focus on the improvements or extensions with respect to previously published versions of the tool, preferably substantiated by data on enhancements in terms of resources and capabilities.

Tool demonstration papers focus on the usage aspects of tools and are also subject to the artifact submission requirement. Theoretical foundations and experimental evaluation are not required; however, a motivation as to why the tool is interesting and significant should be provided. Further, the paper should describe aspects such as the assumptions about the application domain and/or the extent of potential generality, demonstrate the tool workflow(s), explain integration and/or human interaction, and evaluate the tool's overall role and impact on the development process.

The length of research, case study, and regular tool papers is limited to 16 pp llncs.cls (excluding the bibliography). The length of tool demonstration papers is limited to 6 pp llncs.cls (excluding the bibliography).

Appendices going beyond the above page limits are not allowed! Additional (unlimited) appendices can be made available separately or as part of an extended version of the paper made available via arXiv, Zenodo, or a similar service, and cited in the paper. The reviewers are, however, not obliged to read such appendices.
Paper evaluation

All papers will be evaluated by the program committee (PC), coordinated by the PC chairs, aided by the case study chair for case study papers, and by the tools chair for regular tool papers and tool demonstration papers. All papers will be judged on novelty, significance, correctness, and clarity.

Reproducibility of results is of the utmost importance for the TACAS community. Therefore, we encourage all authors to include support for replicating the results of their papers. For theorems, this would mean providing proofs; for algorithms, this would mean including evidence of correctness and acceptable performance, either by a theoretical analysis or by experimentation; and for experiments, one should provide access to the artifacts used to generate the experimental data. Material that does not fit into the paper may be provided on a supplementary web site, with access appropriately enabled and license rights made clear. For example, the supplemental material for reviewing case-study papers and papers with experimental results could be classified as reviewer-confidential if necessary (e.g., if proprietary data are investigated or software is not open source). In general, TACAS encourages all authors to archive additional material and make it citable via DOI (e.g., via Zenodo or Figshare).
Artifact submission and evaluation

Regular tool papers and tool demonstration papers must be accompanied by an artifact, submitted together with the paper.

Exceptions to the compulsory artifact submission rule may be granted by the PC chairs, but only in cases where the tool cannot in any reasonable way be run by the AEC. In such cases, the authors should contact the PC chairs as soon as possible (at least 7 days prior to abstract submission), ask for an exception, and explain why it is needed. An example of a case where an exception can be negotiated is a tool that must be run in some very special environment, e.g., on special hardware that cannot be virtualised in any way. Note that license problems are generally not acceptable grounds for an exception. When an exception is granted, the authors should instead submit a detailed video showing their tool in action.

The artifact will be evaluated by the artifact evaluation committee (AEC) independently of the paper according to the following criteria:

consistency with and replicability of results in the paper,
documentation, and
ease of use.

The results of the artifact evaluation will be taken into account during discussion of the paper submission.

For research papers and case study papers, it is optional to submit an artifact together with the paper. If an artifact is provided at this stage, then it will be reviewed immediately by the AEC and the results of the evaluation can be taken into consideration during the paper reviewing and rebuttal phase. Alternatively, authors of accepted papers may submit an artifact after notification.

Detailed guidelines for preparation of artifacts and submission can be found here.
Posters and tool demonstrations

Subject to available space, authors of all accepted papers will be given the option to present their results in the form of a poster in addition to the talk. Moreover, again subject to available space, authors of regular tool papers and tool demonstration papers will be given the option to demonstrate their tool to conference participants in addition to giving their talk / presenting their poster. More information about the posters and demonstrations will be communicated to the authors concerned in due time.
Competition on software verification

TACAS 2021 hosts the 10th Competition on Software Verification, with the goal of evaluating technology transfer and comparing state-of-the-art software verifiers with respect to effectiveness and efficiency.
Program chairs

Jan Friso Groote (Technische Universiteit Eindhoven, The Netherlands)
Kim G. Larsen (Aalborg University, Denmark)
Case study chair

Tools chair

Competition chair

Program committee

Pedro R. D'Argenio (Universidad Nacional de Córdoba, Argentina)
Christel Baier (Technische Universität Dresden, Germany)
Dirk Beyer (Ludwig-Maximilians-Universität München, Germany)
Armin Biere (Johannes-Kepler-Universität Linz, Austria)
Valentina Castiglioni (Reykjavik University, Iceland)

Alessandro Cimatti (FBK-IRST, Italy)
Rance Cleaveland (University of Maryland, USA)
Yuxin Deng (East China Normal University, China)
Carla Ferreira (Universidade Nova de Lisboa, Portugal)
Goran Frehse (ENSTA ParisTech, France)

Susanne Graf (Verimag, CNRS, France)
Orna Grumberg (Technion, Israel)
Klaus Havelund (NASA Jet Propulsion Lab, USA)
Holger Hermanns (Universität des Saarlandes, Germany)
Peter Höfner (Australian National University and Data61, Australia)

Hossein Hojjat (Rochester Institute of Technology, USA)
Falk Howar (Technische Universität Dortmund, Germany)
David N. Jansen (Chinese Academy of Sciences, China)
Marcin Jurdziński (University of Warwick, United Kingdom)
Jeroen Keiren (Technische Universiteit Eindhoven, The Netherlands)

Sophia Knight (University of Minnesota Duluth, USA)
Laura Kovács (Technische Universität Wien, Austria)
Jan Křetínský (Technische Universität München, Germany)
Alphons Laarman (Universiteit Leiden, The Netherlands)
Xinxin Liu (Chinese Academy of Sciences, Beijing, China)

Mieke Massink (CNR-ISTI, Italy)
Radu Mateescu (Inria Grenoble, France)
Jun Pang (Université du Luxembourg, Luxembourg)
David Parker (University of Birmingham, United Kingdom)
Jaco van de Pol (Aarhus University, Denmark)

Natasha Sharygina (Università della Svizzera italiana, Switzerland)
Bernhard Steffen (Technische Universität Dortmund, Germany)
Jan Strejček (Masaryk University, Czech Republic)
Antti Valmari (University of Jyväskylä, Finland)
Björn Victor (Uppsala University, Sweden)

Sarah Winkler (Universität Innsbruck, Austria)
Artifact evaluation chairs

Artifact evaluation committee

Steering committee chair

Bernhard Steffen (Technische Universität Dortmund, Germany)
Steering committee

Dirk Beyer (Ludwig-Maximilians-Universität München, Germany)
Rance Cleaveland (University of Maryland, USA)
Holger Hermanns (Universität des Saarlandes, Germany)
Kim G. Larsen (Aalborg University, Denmark)
