Driving Benchmark
===============

The *driving benchmark* module evaluates a driving controller (agent) and obtains metrics about its performance.

This module is mainly designed for:

* Users who develop autonomous driving agents and want to see how they perform in CARLA.

In this section you will learn:

* How to quickly get started and benchmark a trivial agent right away.
* The general [architecture of the driving benchmark module](benchmark_structure.md).
* [How to set up your agent and create your own set of experiments](benchmark_creating.md).
* The [performance metrics used](benchmark_metrics.md).

Getting Started
----------------

To familiarize yourself with the system, we provide a trivial agent that performs on a small set of experiments (Basic). To execute it, simply run:

    $ ./driving_benchmark_example.py

Keep in mind that, to run the command above, you need a CARLA simulator running at localhost on port 2000.

We also provide the same benchmark used in the [CoRL 2017 paper](http://proceedings.mlr.press/v78/dosovitskiy17a/dosovitskiy17a.pdf). The CoRL 2017 experiment suite can be run on the trivial agent with:

    $ ./driving_benchmark_example.py --corl-2017

This benchmark example can be further configured. Run the help command to see the available options:

    $ ./driving_benchmark_example.py --help

One of the available options lets you continue from a previous benchmark execution. For example, to continue a CoRL 2017 experiment with the log name "driving_benchmark_test", run:

    $ ./driving_benchmark_example.py --continue-experiment -n driving_benchmark_test --corl-2017

!!! note
    If the log name already exists and you don't set it to continue, another log will be created under a different name.
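To illustrate how these flags fit together on the command line, here is a minimal argparse sketch that re-creates the options mentioned in this guide. The exact flag names, defaults, and help strings here are assumptions for illustration; run the script with `--help` to see its actual interface.

```python
import argparse

def make_parser():
    # Hypothetical re-creation of the flags mentioned in this guide; the
    # real driving_benchmark_example.py may define additional options.
    parser = argparse.ArgumentParser(
        description='Run a trivial agent on a driving benchmark')
    parser.add_argument('--corl-2017', action='store_true',
                        help='run the CoRL 2017 experiment suite '
                             'instead of the Basic one')
    parser.add_argument('--continue-experiment', action='store_true',
                        help='resume a previous execution logged '
                             'under the same name')
    parser.add_argument('-n', '--log-name', default='test',
                        help='name under which results are logged')
    return parser

# Parse the same invocation shown above; argparse maps the dashed
# flags to underscore attribute names.
args = make_parser().parse_args(
    ['--continue-experiment', '-n', 'driving_benchmark_test', '--corl-2017'])
print(args.corl_2017, args.continue_experiment, args.log_name)
# → True True driving_benchmark_test
```

Note that boolean flags like `--corl-2017` take no value: their presence alone enables the option.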

When running the driving benchmark for the basic configuration, you should [expect these results](benchmark_creating/#expected-results).