Driving Benchmark Performance Metrics
------------------------------

This page explains the performance metrics module.
This module is used to compute a summary of results based on the actions
performed by the agent during the benchmark.

### Provided performance metrics

The driving benchmark performance metrics module provides the following performance metrics:

* **Percentage of Success**: The percentage of episodes (poses from tasks)
  that the agent completed successfully.
* **Average Completion**: The average distance towards the goal that the
  agent was able to travel.
* **Off Road Intersection**: The number of times the agent goes off the road.
  An intersection is only counted if the area of the vehicle outside
  the road is bigger than a *threshold*.
* **Other Lane Intersection**: The number of times the agent goes into the other
  lane. An intersection is only counted if the area of the vehicle on the
  other lane is bigger than a *threshold*.
* **Vehicle Collisions**: The number of collisions with vehicles that had
  an impact bigger than a *threshold*.
* **Pedestrian Collisions**: The number of collisions with pedestrians
  that had an impact bigger than a *threshold*.
* **General Collisions**: The number of collisions with all other
  objects with an impact bigger than a *threshold*.
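
As an illustration, the two task-level metrics could be aggregated from per-episode results roughly like this (a minimal sketch with hypothetical episode records, not the module's actual implementation):

```python
# Hypothetical per-episode results: whether the goal was reached and the
# fraction of the route distance the agent covered (1.0 == reached the goal).
episodes = [
    {'success': True,  'completion': 1.0},
    {'success': False, 'completion': 0.6},
    {'success': False, 'completion': 0.2},
    {'success': True,  'completion': 1.0},
]

# Percentage of Success: share of episodes the agent completed successfully.
percentage_of_success = 100.0 * sum(e['success'] for e in episodes) / len(episodes)

# Average Completion: mean distance travelled towards the goal.
average_completion = sum(e['completion'] for e in episodes) / len(episodes)

print(percentage_of_success)  # 50.0
print(average_completion)     # 0.7
```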
### Executing and Setting Parameters

The metrics are computed as the final step of the benchmark,
and a summary of the results is stored in a JSON file.
Internally it is executed as follows:

```python
metrics_object = Metrics(metrics_parameters)
summary_dictionary = metrics_object.compute(path_to_execution_log)
```
The compute function of the *Metrics* object
receives the full path to the execution log.
The *Metrics* class should be instantiated with some parameters.
The parameters are:

* **Threshold**: The threshold used by the metrics.
* **Frames Recount**: The number of frames that the agent needs to keep
  committing an infraction, after it starts, for it to be counted as
  another infraction.
* **Frames Skip**: The number of frames that are skipped after a collision
  or an intersection starts.
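
The way these two counters interact can be sketched with a toy per-frame infraction signal (an illustrative reimplementation under the assumptions above, not the module's actual code):

```python
def count_infractions(active, frames_skip, frames_recount):
    """Count infractions from a per-frame boolean infraction signal.

    When an infraction starts, the next `frames_skip` frames are ignored,
    and the agent must then keep committing the infraction for
    `frames_recount` consecutive frames before the same ongoing
    infraction is counted again.
    """
    count = 0
    i = 0
    n = len(active)
    while i < n:
        if active[i]:
            count += 1          # a new infraction starts here
            i += frames_skip    # ignore the frames right after it starts
            streak = 0
            # recount the ongoing infraction every `frames_recount` frames
            while i < n and active[i]:
                streak += 1
                if streak == frames_recount:
                    count += 1
                    streak = 0
                i += 1
        else:
            i += 1
    return count

# One continuous 8-frame infraction, skipping 2 frames, recounting every 3:
print(count_infractions([True] * 8, frames_skip=2, frames_recount=3))  # 3
```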
These parameters are defined as a property of the *Experiment Suite*
base class and can be redefined in your
[custom *Experiment Suite*](benchmark_creating/#defining-the-experiment-suite).

The default parameters are:

```python
@property
def metrics_parameters(self):
    """
    Property to return the parameters for the metrics module.
    Could be redefined depending on the needs of the user.
    """
    return {
        'intersection_offroad': {'frames_skip': 10,
                                 'frames_recount': 20,
                                 'threshold': 0.3
                                 },
        'intersection_otherlane': {'frames_skip': 10,
                                   'frames_recount': 20,
                                   'threshold': 0.4
                                   },
        'collision_other': {'frames_skip': 10,
                            'frames_recount': 20,
                            'threshold': 400
                            },
        'collision_vehicles': {'frames_skip': 10,
                               'frames_recount': 30,
                               'threshold': 400
                               },
        'collision_pedestrians': {'frames_skip': 5,
                                  'frames_recount': 100,
                                  'threshold': 300
                                  },
    }
```
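
For instance, stricter collision thresholds could be set by overriding the property in a custom *Experiment Suite*. The sketch below uses a hypothetical stand-in base class so it is self-contained; in CARLA you would subclass the real *Experiment Suite* base class from the driving benchmark package instead:

```python
# Hypothetical stand-in for the Experiment Suite base class, reduced to the
# one property relevant here; the real base class ships with CARLA.
class ExperimentSuite(object):

    @property
    def metrics_parameters(self):
        return {
            'collision_vehicles': {'frames_skip': 10,
                                   'frames_recount': 30,
                                   'threshold': 400},
        }


class MyExperimentSuite(ExperimentSuite):

    @property
    def metrics_parameters(self):
        # Start from the defaults and tighten the vehicle-collision threshold.
        parameters = super(MyExperimentSuite, self).metrics_parameters
        parameters['collision_vehicles']['threshold'] = 200
        return parameters


print(MyExperimentSuite().metrics_parameters['collision_vehicles']['threshold'])  # 200
```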