Driving Benchmark Structure
---------------------------

The figure below shows the general structure of the driving
benchmark module.

![Benchmark_structure](img/benchmark_diagram.png)

>Figure: The general structure of the driving benchmark module.

The *driving benchmark* is the module responsible for evaluating a certain
*agent* in an *experiment suite*.

The *experiment suite* is an abstract module.
Thus, the user must define their own derivation
of the *experiment suite*. We already provide the CoRL 2017 suite and a simple
*experiment suite* for testing.
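
A derivation could look like the following sketch. Here `ExperimentSuite` and `Experiment` are minimal, self-contained stand-ins for the CARLA classes (so the example runs without a simulator), and the pose indices and weather ids are hypothetical:

```python
# Simplified stand-ins for the CARLA experiment suite classes.
class Experiment:
    def __init__(self, task_name, poses, weather):
        self.task_name = task_name
        self.poses = poses        # list of (start, end) pose tuples
        self.weather = weather    # condition identifier

class ExperimentSuite:
    """Abstract suite: derivations must implement build_experiments()."""
    def build_experiments(self):
        raise NotImplementedError

    def get_experiments(self):
        return self.build_experiments()

class BasicExperimentSuite(ExperimentSuite):
    """A tiny user-defined derivation with a single straight-driving task."""
    def build_experiments(self):
        poses = [(36, 40), (39, 35)]   # hypothetical pose indices
        weathers = [1, 3]              # hypothetical weather ids
        return [Experiment('straight', poses, w) for w in weathers]

suite = BasicExperimentSuite()
print(len(suite.get_experiments()))  # one experiment per weather
```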

The *experiment suite* is composed of a set of *experiments*.
Each *experiment* contains a *task* that consists of a set of navigation
episodes, represented by a set of *poses*.
These *poses* are tuples containing the start and end points of an
episode.
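
Concretely, a set of poses for a task might look like the following sketch (the index values are hypothetical; they refer to predefined start positions in a town):

```python
# Poses are (start, end) tuples indexing predefined player-start positions
# in a town. The index values below are hypothetical.
poses_task_straight = [(36, 40), (39, 35), (110, 114)]

# Each tuple defines one navigation episode: the agent starts at the first
# position and must reach the second.
start, end = poses_task_straight[0]
print('episode from position %d to position %d' % (start, end))
```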

The *experiments* are also associated with a *condition*. A
condition is represented by a [carla settings](carla_settings.md) object.
The conditions specify simulation parameters such as weather, sensor suite, number of
vehicles and pedestrians, etc.
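
As a rough sketch of what a condition bundles together (a plain dict stands in here for the CARLA settings object, and all parameter names and values are illustrative):

```python
# A condition groups simulation parameters; in CARLA this is a settings
# object, but a plain dict stands in so the sketch runs anywhere.
def make_condition(weather_id, vehicles, pedestrians):
    return {
        'WeatherId': weather_id,            # illustrative parameter names
        'NumberOfVehicles': vehicles,
        'NumberOfPedestrians': pedestrians,
    }

# One condition per weather, reused across all poses of a task.
conditions = [make_condition(w, vehicles=20, pedestrians=40)
              for w in (1, 3, 6)]
```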

The user should also derive an *agent* class. The *agent* is the active
part that will be evaluated on the driving benchmark.
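
The agent interface boils down to one method: the benchmark repeatedly hands the agent the current state and expects a vehicle control back. The sketch below mirrors that shape with simplified placeholder types (the class and argument names are assumptions, not the exact CARLA signatures):

```python
# Minimal stand-in for the agent interface evaluated by the benchmark.
class VehicleControl:
    def __init__(self):
        self.steer = 0.0
        self.throttle = 0.0
        self.brake = 0.0

class Agent:
    """Abstract agent: derivations must implement run_step()."""
    def run_step(self, measurements, sensor_data, directions, target):
        raise NotImplementedError

class ForwardAgent(Agent):
    """Trivial derivation that always drives straight ahead."""
    def run_step(self, measurements, sensor_data, directions, target):
        control = VehicleControl()
        control.throttle = 0.9
        return control
```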

The driving benchmark also contains two auxiliary modules.
The *recording module* is used to keep track of all measurements and
can be used to pause and continue a driving benchmark.
The [*metrics module*](benchmark_metrics.md) is used to compute the performance metrics
by using the recorded measurements.
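
The kind of computation the metrics module performs can be sketched as follows; the record format below is invented for illustration, not the module's actual schema:

```python
# Derive a per-task success rate from recorded episode results
# (invented record format, for illustration only).
records = [
    {'task': 'straight', 'success': True},
    {'task': 'straight', 'success': False},
    {'task': 'one_turn', 'success': True},
]

def success_rate(records, task):
    outcomes = [r['success'] for r in records if r['task'] == task]
    return sum(outcomes) / len(outcomes)

print(success_rate(records, 'straight'))  # 0.5
```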

Example: CoRL 2017
------------------

We already provide the CoRL 2017 experiment suite, which was used to benchmark the
agents for the [CoRL 2017 paper](http://proceedings.mlr.press/v78/dosovitskiy17a/dosovitskiy17a.pdf).

The CoRL 2017 experiment suite has the following composition:

* A total of 24 experiments for each CARLA town, containing:
    * A task for going straight.
    * A task for making a single turn.
    * A task for going to an arbitrary position.
    * A task for going to an arbitrary position with dynamic objects.
* Each task is composed of 25 poses that are repeated in 6 different weather conditions (Clear Noon, Heavy Rain Noon, Clear Sunset, After Rain Noon, Cloudy After Rain and Soft Rain Sunset).
* The entire experiment set has 600 episodes.
* The CoRL 2017 suite can take up to 24 hours to execute for Town01 and up to 15
hours for Town02, depending on the agent's performance.
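
The episode count above follows directly from the composition:

```python
# 4 tasks x 6 weather conditions = 24 experiments per town,
# each with 25 poses (i.e. 25 navigation episodes).
tasks = 4
weather_conditions = 6
poses_per_task = 25

experiments_per_town = tasks * weather_conditions
episodes = experiments_per_town * poses_per_task
print(experiments_per_town, episodes)  # 24 600
```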