QBFEVAL 2017 : QBFEVAL'17 - Competitive Evaluation of QBF Solvers


Link: http://www.qbflib.org/qbfeval17.php
 
When: Aug 28, 2017 - Sep 1, 2017
Where: Melbourne, Australia
Submission Deadline: May 22, 2017
 

Call For Papers

******************************************************************
QBFEVAL'17 - Competitive Evaluation of QBF Solvers
Call for Participation

A joint event with SAT 2017 - The 20th International Conference on
Theory and Applications of Satisfiability Testing,
28 August - 1 September 2017, Melbourne, Australia

******************************************************************


QBFEVAL'17 will be the 2017 competitive evaluation of QBF solvers,
and the twelfth evaluation of QBF solvers and instances overall.
QBFEVAL'17 will award solvers that stand out as particularly
effective on specific categories of QBF instances. The evaluation
will run on the computing infrastructure made available by StarExec.

We warmly encourage developers of QBF solvers to submit their tools,
even at an early stage of development, as long as they fulfill a few
simple requirements. We also welcome the submission of QBF formulas
to be used in the evaluation. Researchers considering QBF-based
techniques in their area (e.g., formal verification, planning,
knowledge reasoning) are invited to contribute by submitting QBF
instances of their research problems (see the requirements for
instances). The results of the evaluation will be a good indicator
of the current feasibility of QBF-based approaches and a stimulus
for developers of QBF solvers to further enhance their tools.
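
As a purely illustrative aside (not an official requirement of this
call), prenex-CNF instances in QBFLIB are commonly exchanged in the
QDIMACS format: a DIMACS "p cnf" header, one line per quantifier
block ("a" for universal, "e" for existential, outermost first), and
then the clauses, each terminated by 0. The Python sketch below
writes a toy true QBF, forall x . exists y . (x or -y) and (-x or y);
the helper name, file name, and formula are assumptions made up for
this example, not part of QBFEVAL'17.

# Minimal sketch: emitting a QDIMACS file for a toy QBF instance.
def write_qdimacs(path, num_vars, prefix, clauses):
    """prefix: list of (quantifier, [vars]) pairs, outermost first;
    clauses: lists of non-zero integer literals (DIMACS convention)."""
    with open(path, "w") as f:
        f.write(f"p cnf {num_vars} {len(clauses)}\n")
        for quant, block in prefix:  # 'a' = forall, 'e' = exists
            f.write(f"{quant} {' '.join(map(str, block))} 0\n")
        for clause in clauses:
            f.write(f"{' '.join(map(str, clause))} 0\n")

# x is variable 1, y is variable 2; the instance is true (set y = x).
write_qdimacs("toy.qdimacs", 2,
              [("a", [1]), ("e", [2])],
              [[1, -2], [-1, 2]])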

Details about solver and benchmark submission, tracks, and the
related rules are available at http://www.qbflib.org/qbfeval17.php

For questions, comments, and any other issues regarding QBFEVAL'17,
please get in touch at qbf17@qbflib.org.



Important Dates

Registration opens: April 1, 2017
Registration closes: May 22, 2017
Solvers and benchmarks due: May 30, 2017
Final results: presented at SAT'17


Organizing committee

Organization
Luca Pulina, University of Sassari
Martina Seidl, Johannes Kepler Universität Linz

Judges
Olaf Beyersdorff, University of Leeds
Daniel Le Berre, Université d'Artois
Martin Suda, Technische Universität Wien
Christoph Wintersteiger, Microsoft Research Limited
