5th Int. Workshop on Applied Verification for Continuous and Hybrid Systems

Part of ADHS | Oxford, UK | July 13, 2018 | https://cps-vo.org/group/arch

The workshop on applied verification for continuous and hybrid systems (ARCH) brings together researchers and practitioners to establish a curated set of benchmarks and test them in a friendly competition.

Call for Submissions

Verification of continuous and hybrid systems is of increasing importance due to new cyber-physical systems that are safety- or operation-critical. This workshop addresses verification techniques for continuous and hybrid systems with a special focus on the transfer from theory to practice. Topics include, but are not limited to:

  • Proposals for new benchmark problems (not necessarily yet solvable)
  • Tool presentations
  • Tool executions and evaluations based on ARCH benchmarks
  • Experience reports including open issues for industrial success
  • Reports on results of our friendly competition (separate call)

Researchers are welcome to submit examples, tools, and benchmarks that have already appeared in brief form but whose details were omitted. The online benchmark repository allows researchers to include modeling details, parameters, simulation results, etc. Submissions are encouraged, but not required, to include executable data (models, configuration files, code, etc.). It is not required to show that the benchmark has a solution; it suffices that the problem is described in enough detail that somebody else can try to solve it.
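For illustration only, the sketch below shows the kind of executable data a benchmark submission might include: a toy hybrid system (bouncing ball) with a simple simulation and the safety property to be verified. The model, parameters, and property are hypothetical examples chosen for this sketch and do not represent a required ARCH submission format.

```python
# Illustrative sketch of "executable data" for a toy benchmark:
# a bouncing-ball hybrid system, a simple simulation, and the
# safety property to be verified. Hypothetical example only.

def simulate_bouncing_ball(x0=5.0, v0=0.0, g=9.81, c=0.75,
                           dt=1e-3, t_end=10.0):
    """Simulate height x and velocity v; bounce (v := -c*v) when x reaches 0."""
    x, v, t = x0, v0, 0.0
    trace = [(t, x, v)]
    while t < t_end:
        # Continuous dynamics: x' = v, v' = -g (explicit Euler step).
        x += v * dt
        v -= g * dt
        t += dt
        # Discrete transition: guard x <= 0, reset v := -c * v.
        if x <= 0.0:
            x = 0.0
            v = -c * v
        trace.append((t, x, v))
    return trace

# Safety property (to be verified, not merely tested): the ball never
# exceeds its initial height, i.e. x(t) <= x0 for all t in [0, t_end].
trace = simulate_bouncing_ball()
assert all(x <= 5.0 + 1e-9 for _, x, _ in trace)
```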

Prize (tentative)

The paper with the most promising benchmark results receives a prize of 500 Euros, sponsored by Robert Bosch GmbH, Germany. The winner is preselected by the program committee and determined by audience vote.

General Submission Guidelines

Submissions consist of papers (ideally 3-8 pages) and optional files (e.g., models or traces) submitted through the ARCH'18 EasyChair web site. ARCH'18 will provide proceedings in the EasyChair EPiC series, indexed by DBLP. Detailed submission guidelines can be found here: submission instructions. Submissions receive at least 3 anonymous reviews, including one from industry and one from academia.

Benchmark papers: A zip archive with additional data (description details, model files, sample traces, code, known results, etc.) is to be submitted together with the extended abstract. Benchmarks can be academic or industrial, of small size or extensive case studies.

Evaluation Criteria for Benchmarks

While the review criteria for tool presentations, benchmark results, and experience reports are more general, benchmark proposals should address the following criteria:

  • Relevance: How typical is the benchmark for its application domain or academic topic? How important (scientifically or practically) are the phenomena it exhibits? Does the benchmark correspond to an existing real-world system?
  • Clarity: How easy is it to create a working model from the description? How clear is the specification of the properties to be verified?
  • Verification advantages: Can verification show properties of the benchmark that are difficult to obtain using other approaches (e.g., stochastic simulation)?

Important Dates

  • Submission deadline: April 06, 2018
  • Notification of acceptance: May 13, 2018
  • Final version: June 13, 2018
  • Workshop: July 13, 2018

PDF-Version of the Call

A PDF version of the call is available (content reduced to fit on a single page).

Organizers

Program chairs: Goran Frehse (Université Joseph Fourier - Verimag, France) and Matthias Althoff (Technische Universität München, Germany)
Publicity chair: Sergiy Bogomolov (Australian National University, Australia)
Evaluation chair: Taylor T. Johnson (Vanderbilt University, USA)

Program Committee (tentative)

Academia:
  • Pieter Collins (Maastricht Univ.)
  • Alexandre Donzé (UC Berkeley)
  • Ian Mitchell (Univ. British Columbia)
  • Sayan Mitra (Univ. Illinois Urbana-Champaign)
  • André Platzer (Carnegie Mellon Univ.)
  • Nacim Ramdani (Université d'Orléans)
  • Aditya Zutshi (UC Boulder)
  • Xin Chen (UC Boulder)
  • Sicun Gao (Massachusetts Institute of Technology)
  • Stanley Bak (Air Force Research Lab)

Industry:
  • Ajinkya Bhave (Siemens PLM)
  • Jyotirmoy Deshmukh (Toyota)
  • Luca Parolini (GE Global Research)
  • Alessandro Pinto (United Technologies)
  • Aaron Fifarek (LinQuest)
  • Jens Oehlerking (Bosch)
  • William Hung (Synopsys Inc.)
  • Olivier Bouissou (MathWorks)
  • Daniel Bryce (SIFT)