We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2025 and SYNT 2025.
This year, the competition will run with a new setup!
From 2025 onward, the competition will be run on the French Très Grand Centre de Calcul du CEA (TGCC), https://www-hpc.cea.fr/tgcc-public/en/html/tgcc-public.html. Submitting your solver is now simpler: create a Docker image containing your solver and communicate the repository address of the image to the organizers.
In SYNTCOMP 2025, we will run the following tracks (based on the same rules and with the same specification formats as in previous years): – realizability and synthesis for LTL specifications in TLSF, – realizability for parity games in extended HOA, and – realizability for LTLf specifications in an extended version of the TLSF format.
Tools will be evaluated with respect to the number of solved benchmarks, and (for synthesis) with respect to the size of their solutions.
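For readers who have not seen the format before, the sketch below gives a rough idea of what a small specification in the basic TLSF variant looks like. It is a hypothetical two-client arbiter written for illustration only (the signal names r0, r1, g0, g1 are made up and this is not an official SYNTCOMP benchmark); the format documentation linked from the benchmark repository remains the authoritative reference.

```
INFO {
  TITLE:       "Illustrative two-client arbiter"
  DESCRIPTION: "Hypothetical example, not an official SYNTCOMP benchmark"
  SEMANTICS:   Mealy
  TARGET:      Mealy
}

MAIN {
  INPUTS {
    r0;
    r1;
  }
  OUTPUTS {
    g0;
    g1;
  }
  GUARANTEES {
    G (r0 -> F g0);
    G (r1 -> F g1);
    G (!(g0 && g1));
  }
}
```

Here r0 and r1 are environment-controlled requests, g0 and g1 are controller-controlled grants, and the guarantees ask that every request is eventually granted while the two grants are never issued simultaneously.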
Call for Benchmarks
We are looking for new synthesis benchmarks to include in SYNTCOMP, in any of the formats (TLSF, extended HOA). New benchmarks can be included in SYNTCOMP 2025 if they are correctly submitted to our benchmark repository https://github.com/SYNTCOMP/benchmarks by June 1, 2025. Please create a new issue, attach an archive with your new benchmarks with a sensible name, and make sure the archive also contains a readme.md or readme.pdf describing the benchmarks, relevant papers, etc. We will check if everything is OK, and then add it to the repository. Alternatively, you can create a pull request with your benchmarks.
Call for Solvers
Solvers for all tracks of the competition will be accepted until June 1. Your solver will be included in SYNTCOMP 2025 if: – the address of a working Docker image is communicated to us by June 1, 2025, and – a technical description of the solver is submitted with it (a PDF of 1-2 pages in LNCS format, please). A Docker image that should be used as the basis of your submission will be made available in the next 2-3 weeks.
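To give a rough idea of what such a submission might look like, here is a minimal, hypothetical Dockerfile sketch. Everything in it is a placeholder: the base image name syntcomp/base, the binary name my-solver, and the entry point are illustrative only, since the official base image and calling convention will be announced by the organizers.

```
# Hypothetical sketch of a SYNTCOMP 2025 submission image.
# Replace the placeholder base image with the official one once it is announced.
FROM syntcomp/base:2025

# Add your solver to the image (a prebuilt binary here; copying sources
# and adding a build step would work as well).
COPY my-solver /usr/local/bin/my-solver

# Placeholder entry point; the exact arguments and input/output
# conventions will be specified by the organizers.
ENTRYPOINT ["/usr/local/bin/my-solver"]
```

The image would then be pushed to a registry of your choice and its repository address communicated to us together with the technical description.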
The results of the 2024 edition of SYNTCOMP are in. First, we give a list of the winners for each one of the realizability tracks run this year: Parity games, LTL, and finite-word LTL. Then, we give more details about running times and scores as well as some graphs (showing only the best configuration per tool, i.e. the one that solved the most benchmarks). Finally, we link to the participants of the competition.
The data used comes from the StarExec jobs. All data, scripts, benchmarks, and (binaries of the) tools are archived as a Zenodo artifact here. Congratulations to the winners and thanks to all the participants!
Below, you will find information regarding the participating solvers, their
source code, license type, and links to their most recent reports or preferred
publications to be cited in case you use the tool in your academic work.
We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2024 and SYNT 2024.
The competition will run mostly in the same way as in previous years. If you plan on submitting a benchmark or solver, please go to StarExec, create a user, and contact us to be added to the SYNTCOMP space. There are main tracks for safety specifications in AIGER format, for LTL specifications in TLSF, for parity games in extended HOA, and:
New this year: The LTLf specification semantics have been updated so that the controller is expected to dictate the end of a trace. Please contact the organizers if this is not clear.
Each track is separated into subtracks for realizability checking and synthesis. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, in any of the formats (AIGER, TLSF, extended HOA). New benchmarks can be included in SYNTCOMP 2024 if they are correctly submitted to our benchmark repository by June 1, 2024. Please create a new issue, attach an archive with your new benchmarks with a sensible name, and make sure the archive also contains a readme.md or readme.pdf describing the benchmarks, relevant papers, etc. We will check if everything is OK, and then add it to the repository. Alternatively, you can create a pull request with your benchmarks.
Contact us (see above) if you have any questions regarding scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2024 if it is correctly uploaded into StarExec by June 1, 2024.
The results of the 2023 edition of SYNTCOMP are in. First, we give a list of the winners for each one of the tracks run this year: Parity games, LTL, and finite-word LTL. Then, we give more details about running times and scores as well as some graphs (showing only the best configuration per tool, i.e. the one that solved the most benchmarks). Finally, we link to the participants of the competition.
All graphs and tables were obtained as described here. The data used comes from the StarExec jobs. All data, scripts, benchmarks, and (binaries of the) tools are archived as a Zenodo artifact here. Congratulations to the winners and thanks to all the participants!
Below, you will find information regarding the participating solvers, their
source code, license type, and links to their most recent reports or preferred
publications to be cited in case you use the tool in your academic work.
We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2023 and SYNT 2023.
The competition will run mostly in the same way as in previous years. If you plan on submitting a benchmark or solver, please go to StarExec, create a user, and contact us to be added to the SYNTCOMP space. There are main tracks for safety specifications in AIGER format, for LTL specifications in TLSF, for parity games in extended HOA, and:
New this year: LTLf specifications in an extended version of the TLSF format! If you are interested, please contact us for more details on the exact semantics and the format we will be using for this track.
Each track is separated into subtracks for realizability checking and synthesis. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, in any of the formats (AIGER, TLSF, extended HOA). New benchmarks can be included in SYNTCOMP 2023 if they are correctly submitted to our benchmark repository by June 1, 2023. Please create a new issue, attach an archive with your new benchmarks with a sensible name, and make sure the archive also contains a readme.md or readme.pdf describing the benchmarks, relevant papers, etc. We will check if everything is OK, and then add it to the repository. Alternatively, you can create a pull request with your benchmarks.
Contact us (see above) if you have any questions regarding scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2023 if it is correctly uploaded into StarExec by June 1, 2023.
The results of the 2022 competition on Reactive Synthesis are in! As in
previous years, the competition was run on StarExec. This year, the competition
consisted of LTL tracks, with specifications in TLSF, and parity-game tracks,
with specifications given in extended HOA. It is worth mentioning that we have
removed the separation between so-called parallel and sequential tracks in
favor of having two rankings based on total Wallclock time vs. total CPU time
used by solvers. This year, though, the rankings happen to coincide. For more
information on this and other updates to the competition, you can take a look
at the most recent report about the
competition.
Rankings
We present first the results for the parity-game tracks and then move to the
LTL tracks.
Parity games: realizability
To summarize, the ranking of the participating tools is as follows.
1. Knor
2. Strix
3. ltlsynt
More details regarding each solver configuration are given in the table below
and the cactus plots that follow
(where only the best configuration per tool is shown).
The raw information can be fetched from the
StarExec
job.
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
| --- | --- | --- | --- | --- |
| Strix | hoa_realizability_parallel | 548 | 308532 | 124160 |
| Strix | hoa_realizability_sequential | 515 | 89584 | 89232 |
| ltlsynt | pgreal | 441 | 69416 | 69421 |
| Knor | real_ltl | 587 | 1722 | 1039 |
Parity games: synthesis
To summarize, the time-based ranking of the participating tools is the same as
for the realizability sub-track. More details regarding each solver
configuration are given in the table below and the cactus plots that follow
(where only the best configuration per tool is shown).
The raw information can be fetched from the StarExec
job.
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
| --- | --- | --- | --- | --- |
| Strix | hoa_synthesis_parallel | 470 | 159456 | 73077 |
| Strix | hoa_synthesis_sequential | 451 | 48321 | 47884 |
| ltlsynt | pgsynt | 334 | 69254 | 69250 |
| ltlsynt | pgsyntabc1 | 335 | 69386 | 69440 |
| Knor | synt_sym | 278 | 4839 | 4839 |
| Knor | synt_sym_abc | 278 | 5706 | 5401 |
| Knor | synt_tl | 480 | 1695 | 994 |
The quality ranking is quite different.
1. Strix
2. ltlsynt
3. Knor
More details regarding each solver
configuration are given in the table below.
| Solver | Configuration | Synthesized controllers | Score |
| --- | --- | --- | --- |
| Strix | hoa_synthesis_parallel | 199 | 385 |
| Strix | hoa_synthesis_sequential | 199 | 384 |
| ltlsynt | pgsynt | 204 | 321 |
| ltlsynt | pgsyntabc1 | 205 | 358 |
| Knor | synt_sym | 197 | 292 |
| Knor | synt_sym_abc | 197 | 340 |
| Knor | synt_tl | 201 | 268 |
LTL: realizability
To summarize, the ranking of the participating tools is as follows.
1. Strix
2. ltlsynt
3. Acacia bonsai
4. sdf
5. SPORE
6. Otus
More details regarding each solver configuration are given in the table below
and the cactus plots that follow
(where only the best configuration per tool is shown).
The raw information can be fetched from the
StarExec
job.
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
| --- | --- | --- | --- | --- |
| _acab_sb | best | 826 | 211030 | 86632 |
| SPORE | ltl-real-recpar-bdd-seq | 519 | 67800 | 67830 |
| SPORE | ltl-real-recpar-fbdd-32-nodecomp-seq | 715 | 71950 | 71960 |
| SPORE | ltl-real-recpar-fbdd-32-seq | 646 | 69159 | 69183 |
| SPORE | ltl-real-recpar-fbdd-64-nodecomp-seq | 715 | 72367 | 72309 |
| SPORE | ltl-real-recpar-fbdd-64-seq | 646 | 68257 | 68305 |
| Strix | ltl_real_acd_bfs | 892 | 43963 | 43974 |
| Strix | ltl_real_zlk_bfs | 893 | 39510 | 39518 |
| Strix | ltl_real_zlk_pq | 924 | 46340 | 46294 |
| Otus | otus-ltl-realizability-parallel-sylvan | 588 | 227382 | 57363 |
| Otus | otus-ltl-realizability-sequential-jbdd | 558 | 32009 | 32011 |
| sdf | real | 783 | 50466 | 40399 |
| sdf | real_p | 734 | 129252 | 72772 |
| ltlsynt | segrealacd | 831 | 67066 | 67064 |
| ltlsynt | segrealds | 800 | 55803 | 55810 |
| ltlsynt | segreallar | 828 | 57983 | 57956 |
LTL: synthesis
To summarize, the time-based ranking of the participating tools is as follows.
1. Strix
2. sdf
3. ltlsynt
4. Otus
More details regarding each solver
configuration are given in the table below and the cactus plots that follow
(where only the best configuration per tool is shown).
The raw information can be fetched from the StarExec
job.
| Solver | Configuration | Solved benchmarks | Total CPU time (s) | Total Wallclock time (s) |
| --- | --- | --- | --- | --- |
| Strix | ltl_synth_acd_bfs | 820 | 30218 | 30195 |
| Strix | ltl_synth_zlk_bfs | 818 | 23009 | 23012 |
| Strix | ltl_synth_zlk_pq | 845 | 29458 | 29380 |
| Otus | otus-ltl-synthesis-parallel-sylvan | 503 | 125103 | 31644 |
| Otus | otus-ltl-synthesis-sequential-jbdd | 490 | 27739 | 27705 |
| ltlsynt | segsyntacdabc | 709 | 50445 | 504463 |
| ltlsynt | segsyntdsabc | 689 | 46082 | 46196 |
| ltlsynt | segsyntlarabc | 703 | 49251 | 49337 |
| sdf | synt | 721 | 47772 | 37822 |
| sdf | synt_p | 666 | 112532 | 61328 |
For the quality ranking, ltlsynt and sdf swap places.
1. Strix
2. ltlsynt
3. sdf
4. Otus
More details regarding each solver
configuration are given in the table below.
| Solver | Configuration | Synthesized controllers | Score |
| --- | --- | --- | --- |
| Strix | ltl_synth_acd_bfs | 433 | 785 |
| Strix | ltl_synth_zlk_bfs | 432 | 780 |
| Strix | ltl_synth_zlk_pq | 430 | 772 |
| Otus | otus-ltl-synthesis-parallel-sylvan | 305 | 243 |
| Otus | otus-ltl-synthesis-sequential-jbdd | 298 | 238 |
| ltlsynt | segsyntacdabc | 375 | 500 |
| ltlsynt | segsyntdsabc | 377 | 540 |
| ltlsynt | segsyntlarabc | 371 | 487 |
| sdf | synt | 382 | 447 |
| sdf | synt_p | 354 | 400 |
Benchmarks and Solvers
The set of benchmarks used in 2022 can be fetched from
the usual
repository.
Below, you will find information regarding the participating solvers, their
source code, license type, and links to their most recent reports or preferred
publications to be cited in case you use the tool in your academic work.
We are preparing for the annual reactive synthesis competition. This year SYNTCOMP will be part of the FLoC’22 olympic games. The competition will run in time for the results to be presented during CAV 2022 and SYNT 2022.
The competition will run on StarExec, as in 2020 and 2021. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user, and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for – safety specifications in AIGER format, – LTL specifications in TLSF, and – parity games in Hanoi Omega-Automata (HOA) format, each separated into subtracks for realizability checking and synthesis. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in this year’s SYNTCOMP if they are correctly uploaded (as an issue) to the SYNTCOMP benchmark repository by July 1. Contact us (see above) if you have any questions regarding scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until July 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in this year’s SYNTCOMP if it is correctly uploaded (i.e. passes a test run) into StarExec by July 1.
The results of SYNTCOMP 2021 have been presented at the SYNT workshop, part of CAV 2021. The slides used for the SYNT presentation can be found here. The experiments can now be browsed on StarExec here (see the subspaces corresponding to each track).
This year, the competition consisted of LTL tracks, with specifications in TLSF, and parity-game tracks, with specifications in extended HOA. Since we had an increase in participants, this year we had sequential and parallel versions of the realizability and synthesis sub-tracks! Also worth mentioning is the reorganization of our benchmark library.
Winners
Knor won the parity-game synthesis track, solving 276/303 benchmarks. It also won the hard-parity-game realizability track by solving 216/217 benchmarks.
Strix placed first in the quality ranking for parity-game synthesis. In its LTL/TLSF version, Strix won the LTL sequential realizability track by solving 827/924 benchmarks; and placed first in the quality ranking for LTL sequential synthesis.
sdf won the LTL parallel realizability track, solving 730/942 benchmarks, and the LTL parallel synthesis track.
Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!
We are preparing for the annual reactive synthesis competition, which will run in time for the results to be presented at CAV 2021 and SYNT 2021.
The competition will run on StarExec, as in 2020. If you plan on submitting a benchmark or solver, please go to https://www.starexec.org/, create a user, and contact us (jacobs@cispa.saarland, guillermoalberto.perez@uantwerpen.be) to be added to the SYNTCOMP space. There are main tracks for – safety specifications in AIGER format, – LTL specifications in TLSF, and – parity games in Hanoi Omega-Automata (HOA) format, each separated into subtracks for realizability checking and synthesis, and into sequential and parallel execution modes. Tools will be evaluated with respect to the number of solved benchmarks, and with respect to the size of their solutions.
* Call for Benchmarks *
We are looking for new synthesis benchmarks to include in SYNTCOMP, in AIGER, TLSF, and HOA. New benchmarks can be included in SYNTCOMP 2021 if they are correctly uploaded (as an issue) to the SYNTCOMP benchmark repository by June 1, 2021. Contact us (see above) if you have any questions regarding scope or format of benchmarks.
* Call for Solvers *
Solvers for all tracks of the competition will be accepted until June 1. Because of our move to StarExec, you will be allowed to update your solver as many times as needed. Your solver will be included in SYNTCOMP 2021 if it is correctly uploaded (i.e. passes a test run) into StarExec by June 1, 2021.
The results of SYNTCOMP 2020 have been presented at the SYNT workshop, part of CAV 2020. The slides used for the SYNT presentation can be found here. The experiments can now be browsed on StarExec here (see the subspaces corresponding to each track).
This year, the competition was split into a safety track, based on specifications in AIGER format; an LTL track, with specifications in TLSF; and a parity-game track, with specifications in an extended HOA format. Just like last year, for the safety track we have only considered the realizability-checking task. Furthermore, we have only run the sequential mode of the resulting three categories. For the LTL track and the synthesis task, we have an additional ranking based not only on the quantity, but also on the quality of solutions.
Here are the tools that had the best results in these categories:
simpleBDDSolver (solved 186 out of 274 problems in the AIGER/safety Realizability Track)
Strix (solved 424 out of 434 problems in the TLSF/LTL Realizability track and won the quality ranking in the TLSF/LTL Synthesis Track)
Strix (solved all problems in the EHOA/parity game Realizability track and was the only participant in the synthesis track)
Congratulations to the winners, and many thanks to all contributors of benchmarks and all participants!