
SYNTCOMP 2019: Call for Benchmarks and Solvers

We are preparing for the sixth annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2019 and SYNT 2019.

The competition will run mostly in the same way as in 2016–2018, but will now be managed and run on StarExec. If you plan to submit a benchmark or solver, please go to https://www.starexec.org/, create a user account, and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for safety specifications in AIGER format and for LTL specifications in TLSF, each separated into subtracks for realizability checking and synthesis, and into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks and with respect to the size of their solutions.
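
For readers who have not worked with the formats before, here is a small illustrative TLSF specification of a two-client arbiter. It is only a sketch, not a competition benchmark; the signal names are invented for this example, and the TLSF format description defines the authoritative syntax:

    INFO {
      TITLE:       "Simple arbiter"
      DESCRIPTION: "Two clients share a resource; grants must be mutually exclusive"
      SEMANTICS:   Mealy
      TARGET:      Mealy
    }

    MAIN {
      INPUTS {
        r0;
        r1;
      }
      OUTPUTS {
        g0;
        g1;
      }
      GUARANTEES {
        G !(g0 && g1);
        G (r0 -> F g0);
        G (r1 -> F g1);
      }
    }

A realizability checker only has to decide whether some implementation satisfying the guarantees exists (here it does, e.g., a round-robin strategy); a synthesis tool additionally has to produce such an implementation, whose size is what the quality ranking measures.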

* Call for Participation: SYNTCOMP tutorial @ ETAPS *

This year, we are organizing a tutorial on the state of the art of reactive synthesis and SYNTCOMP (see https://conf.researchr.org/track/etaps-2019/syntcompcamp-2019-tutorial). The target audience of this short tutorial (3.5 hours) is researchers interested in the state of the art regarding the theory and implementation of automatic synthesis of reactive systems. The tutorial is split into two parts: the first part gives background on the history and the theoretical foundations of reactive synthesis and SYNTCOMP; the second part goes in depth into one of the successful approaches and presents its implementation in the tool BoSy (see https://www.react.uni-saarland.de/tools/bosy/).

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, both in AIGER and in TLSF. New benchmarks can be included in SYNTCOMP 2019 if they are correctly uploaded to StarExec (i.e., they pass the syntax checks) by June 1, 2019. Contact us (see above) if you have any questions regarding the scope or format of benchmarks.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be able to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2019 if it is correctly uploaded to StarExec (i.e., it passes a test run) by June 1, 2019.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, please write to jacobs@cispa.saarland and guillermoalberto.perez@uantwerpen.be.

SYNTCOMP 2018 Results

The results of SYNTCOMP 2018 have been presented at the SYNT workshop and at FLoC 2018, and the slides of the SYNT presentation can be found here. The experiments can now be browsed on the web-frontend of our EDACC instance (experiments with “Legacy” in the name are based on older solvers running on this year’s benchmarks), and a more detailed analysis of the results is forthcoming.

As in previous years, the competition was split into a safety track, based on specifications in AIGER format, and an LTL track, with specifications in TLSF. In each of the tracks, we consider the two tasks of realizability checking and synthesis, and split the evaluation into sequential and parallel execution modes. Finally, for the synthesis tasks there is an additional ranking based not only on the quantity, but also on the quality of the solutions.

Here are the tools that had the best results in these categories:

Simple BDD Solver (solved 165 out of 234 problems in the AIGER/safety Realizability Track, sequential mode)

TermiteSAT (solved 179 out of 234 problems in the AIGER/safety Realizability Track, parallel mode)

SafetySynth (solved 154 out of 234 problems in the AIGER/safety Synthesis Track, sequential mode, and won the quality ranking in that mode with 224 points)

AbsSynthe (solved 156 out of 234 in the AIGER/safety Synthesis Track, parallel mode)

Demiurge (won the quality ranking in the AIGER/safety Synthesis Track, parallel mode, with 240 points)

Strix (solved most problems in all TLSF/LTL tracks and modes, and won the quality ranking in the TLSF/LTL Synthesis Track, with 413 points in sequential and 446 in parallel mode)

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!

SYNTCOMP 2018: Call for Benchmarks and Solvers

We are preparing for the fifth annual reactive synthesis competition, which will again be affiliated with CAV, and will run in time for the results to be presented at CAV and SYNT, which are both part of the Federated Logic Conference (FLoC) 2018.

The competition will run mostly in the same way as in 2016 and 2017: there are main tracks for safety specifications in AIGER format and for LTL specifications in TLSF, each separated into subtracks for realizability checking and synthesis, and evaluated separately in sequential and parallel execution modes. Like last year, we supply the synthesis format conversion tool SyFCo, which can convert TLSF into several other formats, including the input formats of a number of existing synthesis tools. Tools will be evaluated with respect to the number of solved benchmarks and with respect to the size of their solutions.

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, both in AIGER and in TLSF. We accept new benchmarks at any time; they can be included in SYNTCOMP 2018 if they are sent to us by June 1, 2018.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until May 25. As in previous years, we will allow updates of the solvers for some time after that (possibly after feedback from the organizers), at least until June 1. However, it would help the organizers to know in advance how many solvers to expect and who is seriously interested in participating. Thus, if you plan to enter one or more solvers into SYNTCOMP 2018, please let us know at your earliest convenience.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, write to jacobs@react.uni-saarland.de.

On behalf of the SYNTCOMP team,

Swen

SYNTCOMP 2017 Results

The results of SYNTCOMP 2017 have been presented at the SYNT workshop and at CAV 2017, and the experiments can be inspected in the web-frontend of our EDACC instance. An analysis of the results has been published in the proceedings of SYNT 2017.

As in previous years, the competition was split into a safety track, based on specifications in AIGER format, and an LTL track, with specifications in TLSF. In each of the tracks, we consider the two tasks of realizability checking and synthesis, and split the evaluation into sequential and parallel execution modes. Finally, for the synthesis tasks there is an additional ranking based not only on the quantity, but also on the quality of the solutions.

Here are the tools that had the best results in these categories:

Simple BDD Solver (solved 171 out of 234 problems in the AIGER/safety Realizability Track, sequential mode)

TermiteSAT (solved 186 out of 234 problems in the AIGER/safety Realizability Track, parallel mode)

SafetySynth (solved 155 out of 234 problems in the AIGER/safety Synthesis Track, sequential mode, and won the quality ranking in that mode with 236 points)

AbsSynthe (solved 169 out of 234 problems in the AIGER/safety Synthesis Track, parallel mode)

Demiurge (won the quality ranking in the AIGER/safety Synthesis Track, parallel mode, with 266 points)

Party (solved most problems in all TLSF/LTL tracks and modes, and won the quality ranking in the TLSF/LTL Synthesis Track, parallel mode, with 308 points)

BoSy (won the TLSF/LTL Synthesis Track, sequential mode, with 298 points)

Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!

SYNTCOMP 2017: Call for Benchmarks and Solvers

We are preparing for the fourth annual reactive synthesis competition, which will again be affiliated with CAV, and will run in time for the results to be presented at CAV 2017 and SYNT 2017.

The competition will run mostly in the same way as last year: there are main tracks for safety specifications in AIGER format and for LTL specifications in TLSF, each separated into subtracks for realizability checking and synthesis, and evaluated separately in sequential and parallel execution modes. Like last year, we supply the synthesis format conversion tool SyFCo, which can convert TLSF into several other formats, including the input formats of a number of existing synthesis tools.
The main difference from last year is that we will again have a quality metric, based on the size of the constructed implementations. Additionally, we will try to set up a track for GR(1) specifications. If you are interested in the latter, please contact me.

* Call for Benchmarks *

We are looking for new synthesis benchmarks to include in SYNTCOMP, both in AIGER and in TLSF. We accept new benchmarks at any time. New benchmarks can be included in SYNTCOMP 2017 if they are sent to us by May 7, 2017 (extended from the original deadline of April 30). If you send us a benchmark family for SYNTCOMP, we encourage you to also submit a paper describing the benchmark (including its motivation, properties, and possibly an analysis with existing tools) to the SYNT workshop.

* Call for Solvers *

Solvers for all tracks of the competition will be accepted until May 25. As in previous years, we will allow updates of the solvers for some time after that (possibly after feedback from the organizers), at least until June 1. However, it would help the organizers to know in advance how many solvers to expect and who is seriously interested in participating. Thus, if you plan to enter one or more solvers into SYNTCOMP 2017, please let us know at your earliest convenience.

* Communication *

If you are interested in submitting a solver, please have a look at the pages for Rules, Schedule, Submission, and FAQ. If you have any further questions or comments, write to jacobs@react.uni-saarland.de.

On behalf of the SYNTCOMP team,

Swen

SYNTCOMP 2016: Updated Schedule

We have received initial submissions for SYNTCOMP 2016 over the weekend and have made the following update to the schedule:

  • Tools for the AIGER track are still due on May 31. If you have not submitted an initial version, but still want to compete, contact me.
  • Tools for the TLSF track are now due on June 11. But again, if you have not submitted an initial version, please contact me *now* (since the schedule will be pretty tight after the extended deadline).

 

TLSF v1.1

We have just published a small update to the temporal logic synthesis format (TLSF). The main changes are:

  • additional sections for initial properties of the environment and the system, and for invariant properties of the environment (the existing sections have also been renamed, but the old names are still supported), and
  • support for user-defined enumeration types.

The changes are described in the new TLSF format description, and are supported by the first release of the synthesis format conversion tool (SyFCo).
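
As an illustration (a sketch only, not an excerpt from the format description), a specification that uses the new sections might look roughly like this, assuming the v1.1 section names INITIALLY and PRESET for the initial properties of the environment and the system, REQUIRE for environment invariants, and ASSUME, ASSERT, and GUARANTEE as the renamed forms of the old sections; the authoritative grammar, including the syntax for enumeration types, is given in the format description:

    INFO {
      TITLE:       "TLSF v1.1 section sketch"
      DESCRIPTION: "Illustrates the new initial and invariant sections"
      SEMANTICS:   Mealy
      TARGET:      Mealy
    }

    MAIN {
      INPUTS  { req; }
      OUTPUTS { grant; }

      INITIALLY { !req; }
      PRESET    { !grant; }
      REQUIRE   { req -> X !req; }
      ASSERT    { !(grant && X grant); }
      GUARANTEE { G (req -> F grant); }
    }

INITIALLY and PRESET constrain only the initial step, while the formulas in REQUIRE and ASSERT are interpreted as invariants, i.e., they are required to hold in every step.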

The Rules and Call for Benchmarks and Solvers pages have been updated accordingly.

SYNTCOMP 2015: Results

SYNTCOMP 2015 has been another big success, as witnessed by a greatly extended benchmark library and impressive improvements in most of the participating tools. The results have been presented at the SYNT workshop (slides) and are available on the web-frontend of our EDACC instance.

Most notably, the following tools had the best results in some of the categories:

Simple BDD Solver (solved 195 out of 250 problems in the Realizability Track)

AbsSynthe (solved 161 out of 239 problems in the Synthesis Track, sequential mode)

Demiurge (solved 180 out of 239 problems in the Synthesis Track, parallel mode)

Many thanks to all contributors of benchmarks and all participants!