Scoring Rubric

Scoring Overview

The competition is composed of 4 tasks, each of which is scored by the organizing team and invited judges. To advance in the competition, some tasks simply require demonstrated competency, while others are scored relative to the other teams.

Task Rubrics

Task 1

Teams submit models and generated code to the CPS-VO to demonstrate that they have the training and capability to read the documentation and run rudimentary experiments on the CPS-VO. Teams that successfully pass Task 1 will be invited to Task 2.

Rubric:

2: Good. Required experiments are performed on the CPS-VO.

1: Okay. Only basic experiments work.

0: Eek. Generated code does not work on the CPS-VO.
Task 2

Teams submit models and software that consume vehicle and sensor data while the vehicle is controlled by open-loop inputs, and that generate as output the coordinates of interesting objects detected by those sensors in real time*. Team code will be run in simulation, as well as run by the Competition Organizers on the CAT Vehicle. Teams that successfully pass Task 2 will be invited to Task 3.

Rubric, given M sensors available and N objects that can be detected:

4: Incredible. The team uses a cheap subset of the M sensors; object coordinates and limits are found; all or almost all objects are detected; the solution is elegant and robust.

3: Great. The team finds all or almost all objects; object coordinates are produced, possibly using defaults for the limits.

2: Good. The team finds many of the objects; there may be some false positives.

1: Mmmm. The solution runs, but the output does not make much sense.

0: Eek. The solution does not run, or is an invalid tutorial solution.
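
As an illustration of the data flow Task 2 asks for, the sketch below shows a minimal detection node in Python, assuming a ROS-based setup such as the CAT Vehicle simulator. The topic names (/catvehicle/front_laser_points, /detections) and the naive clustering are assumptions chosen for illustration; they are not a required interface or a recommended detection algorithm.

```python
#!/usr/bin/env python
# Hypothetical sketch of a Task 2 detector: consume one laser scanner,
# group nearby returns into crude clusters, and publish the coordinates
# of each cluster in real time. Topic names are assumptions.
import math

import rospy
from sensor_msgs.msg import LaserScan
from geometry_msgs.msg import PointStamped


class NaiveDetector(object):
    def __init__(self):
        # One PointStamped per detected object (assumed output topic).
        self.pub = rospy.Publisher('/detections', PointStamped, queue_size=10)
        # Assumed front-laser topic; substitute whichever of the M sensors you use.
        rospy.Subscriber('/catvehicle/front_laser_points', LaserScan, self.on_scan)

    def on_scan(self, scan):
        # Walk the scan and split it into runs of in-range returns.
        cluster = []
        for i, r in enumerate(scan.ranges):
            if scan.range_min < r < scan.range_max:
                angle = scan.angle_min + i * scan.angle_increment
                cluster.append((r * math.cos(angle), r * math.sin(angle)))
            elif cluster:
                self.publish_centroid(cluster, scan.header)
                cluster = []
        if cluster:
            self.publish_centroid(cluster, scan.header)

    def publish_centroid(self, points, header):
        msg = PointStamped()
        msg.header = header  # coordinates are reported in the laser frame
        msg.point.x = sum(p[0] for p in points) / len(points)
        msg.point.y = sum(p[1] for p in points) / len(points)
        self.pub.publish(msg)


if __name__ == '__main__':
    rospy.init_node('naive_detector')
    NaiveDetector()
    rospy.spin()
```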

Task 3

Teams submit models and software that consume vehicle and sensor data, control the velocity of the vehicle, and produce as output a Gazebo world file at the conclusion of their drive. Team code will be run in simulation, as well as run by the Competition Organizers on the CAT Vehicle. The top 4 scoring teams that participate in Task 3 will be invited to the Final Competition.

The Task 2 rubric applies, in addition to the metrics below:

4: Incredible. The 3D world file is an uncanny representation of the environment just driven through.

3: Great. It is easy to look at the world file and compare it to the environment.

2: Good. The detected objects show up, but it may be somewhat confusing to see which objects in the world file represent the objects in reality.

1: Mmmm. A lot of imagination is required, or the produced world file may be empty or invalid.

0: Eek. No output world is produced, or code does not run.
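
To make the expected Task 3 output concrete, the sketch below shows one hypothetical way to write detected object coordinates into a minimal Gazebo world (SDF) file at the end of a drive. The write_world helper and the construction_cone model URI are illustrative assumptions, not a prescribed format.

```python
# Hypothetical helper that dumps detected objects into a minimal Gazebo
# world file (SDF). The construction_cone model URI is an assumption;
# any model available to Gazebo could be substituted per object type.
def write_world(detections, path='detected.world'):
    """detections: iterable of (x, y) object coordinates in the world frame."""
    includes = []
    for i, (x, y) in enumerate(detections):
        includes.append(
            '    <include>\n'
            '      <uri>model://construction_cone</uri>\n'
            '      <name>object_{0}</name>\n'
            '      <pose>{1} {2} 0 0 0 0</pose>\n'
            '    </include>'.format(i, x, y))
    with open(path, 'w') as f:
        f.write('<?xml version="1.0"?>\n'
                '<sdf version="1.4">\n'
                '  <world name="default">\n'
                '    <include><uri>model://ground_plane</uri></include>\n'
                '    <include><uri>model://sun</uri></include>\n'
                + '\n'.join(includes) + '\n'
                '  </world>\n'
                '</sdf>\n')


if __name__ == '__main__':
    # Example: two objects detected at (5, 2) and (12, -3).
    write_world([(5.0, 2.0), (12.0, -3.0)])
```
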
Task 4

Teams will have an opportunity to modify and then re-run refinements of their Task 3 models on the CAT Vehicle in Tucson, AZ, over a period of 2-3 days. For safety reasons, the models and software must be validated through the simulation framework before they are executed on the vehicle.

Same rubric as Task 3.