
Synthesis Competition 2014: Results

(This post replicates information that was previously included on the front page of this site. It does not contain new information, except for links to the written report on arXiv and in STTT.)

SYNTCOMP 2014 was a big success: 5 synthesis tools competed in 4 different categories, crunching on the 569 benchmark instances we collected for the first competition. The results have been presented at CAV (slides) and the SYNT workshop (slides). The written report, including descriptions of the benchmarks, tools, and results, is available on arXiv.
Update: the report on SYNTCOMP 2014 has now appeared in STTT (the International Journal on Software Tools for Technology Transfer).

The winners of the 4 tracks are:

Synthesis (Sequential):
AbsSynthe from R. Brenguier, G.A. Pérez, J.-F. Raskin, O. Sankur (Université Libre de Bruxelles)

Synthesis (Parallel):
Demiurge from R. Könighofer (TU Graz), M. Seidl (JKU Linz)

Realizability (Sequential):
Simple BDD Solver from L. Ryzhyk (University of Toronto, NICTA) and A. Walker (NICTA)

Realizability (Parallel):
Basil from R. Ehlers (University of Bremen, DFKI Bremen)

As part of the FLoC Olympic Games, every track winner received a silver Kurt Gödel medal, presented by Ed Clarke:

[Photo: VSL 2014, by Nadja Meister]
(from left to right: Leonid Ryzhyk, Rüdiger Ehlers, Ed Clarke, Guillermo A. Pérez, Martina Seidl, Swen Jacobs, Thomas Krennwallner)
(c) VSL / Nadja Meister

Detailed information on the first competition can be found here. If you are interested, you can have a look at our rules, and download a testing framework that contains a simple reference implementation of a synthesis tool handling the competition format, along with a set of benchmarks and a model checker for verifying results.

Benchmark collection for 2015 finished

We have finished collecting benchmarks for SYNTCOMP 2015. The 569 benchmark instances from last year are joined by more than 2000 new instances, including new and challenging instances of existing benchmark classes and 6 completely new benchmark sets.
We are currently working hard on testing and categorizing the benchmarks, and on a scheme for selecting benchmarks for the competition, with the goal of a good distribution across benchmarks from different problem sets and of different difficulty.

For testing purposes, the preliminary set of new benchmarks is available in the folder Benchmarks2015 of our Bitbucket repository. Note that some of the files still contain errors (such as declaring 2 outputs instead of 1), and many of them are very hard: in our test runs, none of last year's tools could solve them. A quick sanity check for the output-count errors is sketched below.
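
For tool authors who want to filter out such files automatically, here is a minimal sketch. It is a hypothetical helper, not part of the official SYNTCOMP framework, and it assumes the benchmarks are in ASCII AIGER format, where the header line "aag M I L O A" declares the number of outputs in its fifth field:

#!/usr/bin/env python3
"""Sanity-check ASCII AIGER (.aag) benchmark files for the expected
number of outputs. Hypothetical helper, not part of the official
SYNTCOMP testing framework."""

import sys

def check_outputs(path, expected=1):
    """Return True if the 'aag' header of `path` declares `expected` outputs."""
    with open(path) as f:
        header = f.readline().split()
    # ASCII AIGER header: aag <maxvar> <inputs> <latches> <outputs> <ands>
    if len(header) < 6 or header[0] != "aag":
        print(f"{path}: not an ASCII AIGER file")
        return False
    outputs = int(header[4])
    if outputs != expected:
        print(f"{path}: {outputs} outputs instead of {expected}")
        return False
    return True

if __name__ == "__main__":
    # Use a list (not a generator) so every offending file is reported.
    ok = all([check_outputs(p) for p in sys.argv[1:]])
    sys.exit(0 if ok else 1)

Running it as, for example, "python check_outputs.py Benchmarks2015/*.aag" would list all files whose header declares a number of outputs other than 1.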