New Benchmarks
Wed, 12/14/2016 - 12:54pm
Dear All,
I am wondering whether we could propose benchmarks from papers not yet in ARCH if we do not have enough examples by Dec 15th.
If so, I would like to suggest the following criteria for a benchmark.
1) It should originally be published in a paper that is NOT authored by the developers of the tools in our competition.
2) Configurations (initial sets, unsafe sets) for different difficulty levels should be provided. Trivial safety checking tasks should be excluded.
3) An explanation of the behavior should be given, for example, where and why the model is hard for existing techniques.
Thanks,
Xin