RV: Runtime Verification




Event | When | Where | Deadline
RV 2019, The 19th International Conference on Runtime Verification | Oct 8, 2019 - Oct 11, 2019 | Porto | May 21, 2019
RV 2018, 18th International Conference on Runtime Verification | Nov 10, 2018 - Nov 13, 2018 | Limassol, Cyprus | Jul 8, 2018
RV 2016, 7th International Conference on Runtime Verification | Sep 23, 2016 - Sep 30, 2016 | Madrid, Spain | May 15, 2016 (May 8, 2016)
RV 2015, 6th International Conference on Runtime Verification | Sep 22, 2015 - Sep 25, 2015 | Vienna, Austria | Apr 19, 2015 (Apr 12, 2015)
RV 2014, Runtime Verification | Sep 22, 2014 - Sep 25, 2014 | Toronto | Apr 15, 2014 (Apr 8, 2014)
RV 2013, Fourth International Conference on Runtime Verification | Sep 24, 2013 - Sep 27, 2013 | Rennes, France | May 5, 2013 (Apr 28, 2013)
RV 2012, Runtime Verification | Sep 25, 2012 - Sep 28, 2012 | Istanbul, Turkey | Jun 3, 2012
RV 2011, 2nd International Conference on Runtime Verification | Sep 27, 2011 - Sep 30, 2011 | San Francisco, California, USA | Jun 5, 2011

Present CFP: 2019


Runtime verification is concerned with the monitoring and analysis of the runtime behaviour of software and hardware systems. Runtime verification techniques are crucial for system correctness, reliability, and robustness; they provide an additional level of rigor and effectiveness compared to conventional testing, and are generally more practical than exhaustive formal verification. Runtime verification can be used prior to deployment, for testing, verification, and debugging purposes, and after deployment for ensuring reliability, safety, and security and for providing fault containment and recovery as well as online system repair.
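As a minimal illustration of the idea (a sketch, not tied to any particular RV tool or specification language), a runtime monitor can be a small state machine that consumes a trace of observed events and reports violations of a safety property. Here the property and the event/resource names are hypothetical: "no resource is used after it has been closed".

```python
# Minimal runtime-monitor sketch: observe a stream of (event, resource)
# pairs and report violations of the illustrative safety property
# "no resource is used after it is closed".
# All names are for illustration only, not from any specific RV framework.

def monitor(trace):
    closed = set()      # resources that have been closed so far
    violations = []
    for i, (event, res) in enumerate(trace):
        if event == "close":
            closed.add(res)
        elif event == "use" and res in closed:
            # property violated: record trace position and resource
            violations.append((i, res))
    return violations

trace = [("open", "f"), ("use", "f"), ("close", "f"), ("use", "f")]
print(monitor(trace))  # -> [(3, 'f')]: the final "use" comes after "close"
```

In a deployed setting the same monitor would be fed events online, via program instrumentation or logging, rather than from a recorded list; on a violation it could raise an alarm or trigger recovery instead of merely recording it.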

Topics of interest to the conference include, but are not limited to:

specification languages for monitoring
monitor construction techniques
program instrumentation
logging, recording, and replay
combination of static and dynamic analysis
specification mining and machine learning over runtime traces
monitoring techniques for concurrent and distributed systems
runtime checking of privacy and security policies
metrics and statistical information gathering
program/system execution visualization
fault localization, containment, recovery and repair
dynamic type checking

Application areas of runtime verification include cyber-physical systems, safety/mission critical systems, enterprise and systems software, cloud systems, autonomous and reactive control systems, health management and diagnosis systems, and system security and privacy.

All papers and tutorials will appear in the conference proceedings in an LNCS volume. Submitted papers and tutorials must use the LNCS/Springer style.


Papers must be original work and must not be submitted for publication elsewhere. Papers must be written in English and submitted electronically (in PDF format) via the EasyChair submission page.


The page limits mentioned below include all text and figures but exclude references. Additional details omitted due to space limitations may be included in a clearly marked appendix, which will be reviewed at the reviewers' discretion but will not be included in the proceedings.
At least one author of each accepted paper and tutorial must attend RV 2019 to present.

Papers can be submitted in four categories: regular, short, tool demonstration, and benchmark papers. Papers in each category will be reviewed by at least three members of the Program Committee.

Regular Papers (up to 15 pages, not including references) should present original unpublished results. We welcome theoretical papers, system papers, papers describing domain-specific variants of RV, and case studies on runtime verification.
Short Papers (up to 6 pages, not including references) may present novel but not necessarily thoroughly worked out ideas, for example emerging runtime verification techniques and applications, or techniques and applications that establish relationships between runtime verification and other domains.
Tool Demonstration Papers (up to 8 pages, not including references) should present a new tool, a new tool component, or novel extensions to existing tools supporting runtime verification. The paper must include information on tool availability, maturity, and selected experimental results, and it should provide a link to a website containing the theoretical background and a user guide. Furthermore, we strongly encourage authors to make their tools and benchmarks available with their submission.
Benchmark Papers (up to 10 pages, not including references) are a new kind of RV paper that we are excited to invite: benchmark papers should describe a benchmark, suite of benchmarks, or benchmark generator useful for evaluating RV tools. Papers should explain what the benchmark consists of and its purpose (what the domain is), how to obtain and use the benchmark, and why the benchmark is useful to the broader RV community, and they may include any existing results produced using the benchmark. We are interested both in benchmarks pertaining to real-world scenarios and in those containing synthetic data designed to achieve interesting properties. Broader notions of benchmark, e.g. for generating specifications from data or diagnosing faults, are within scope. Finally, we encourage but do not require benchmarks that are tool-agnostic (especially those that have been used to evaluate multiple tools), labelled benchmarks with rigorous arguments for the correctness of the labels, and benchmarks that are demonstrably challenging with respect to state-of-the-art tools. Benchmark papers must be accompanied by an easily accessible and usable benchmark submission. Papers will be evaluated by a separate benchmark evaluation panel, which will assess the benchmark's relevance, clarity, and utility as communicated by the submitted paper.

The Program Committee of RV 2019 will give a best paper award, and a selection of accepted regular papers will be invited to appear in a special journal issue.

Tutorial track

Tutorials are two- to three-hour presentations on a selected topic. In addition, tutorial presenters will be offered the opportunity to publish a paper of up to 20 pages in the LNCS conference proceedings. A tutorial proposal must contain the subject of the tutorial, a proposed timeline, a note on previous similar tutorials (if applicable) and how this tutorial differs from them, and a biography of the presenter. The proposal must not exceed 2 pages. Tutorial proposals will be reviewed by the Program Committee.
Important dates

Abstract deadline: May 21, 2019
Paper and tutorial deadline: May 21, 2019
Paper and tutorial notification: July 1, 2019
Conference: October 8 - 11, 2019

Related Resources

AIAA 2019   9th International Conference on Artificial Intelligence, Soft Computing and Applications
CPP 2020   Certified Programs and Proofs
ICST 2020   13th IEEE Conference on Software Testing, Validation and Verification
SATRANH 2020   Special Issue of APPLIED SCIENCES on Static Analysis Techniques: Recent Advances and New Horizons
ATVA 2019   International Symposium on Automated Technology for Verification and Analysis
ICAITA 2020   9th International Conference on Advanced Information Technologies and Applications
CSIT 2020   7th International Conference on Computer Science and Information Technology
FormaliSE 2020   8th International Conference on Formal Methods in Software Engineering
MULTIPROG 2020   The Thirteenth International Workshop on Programmability and Architectures for Heterogeneous Multicores
CRIS 2020   6th International Conference on Cryptography and Information Security