Benchmarking your Agent
---------------------------

In this tutorial we show:

* [How to define a trivial agent with a forward going policy.](#defining-the-agent)
* [How to define a basic experiment suite.](#defining-the-experiment-suite)

#### Introduction

![Benchmark_structure](img/benchmark_diagram_small.png)

The driving benchmark is associated with two other modules:
the *agent* module, a controller that acts within
another module, the *experiment suite*.
Both modules are abstract classes that must be redefined by
the user.

Following this excerpt, there are two classes to be defined:
the ForwardAgent() and the BasicExperimentSuite().
After that, the benchmark is run with the "run_driving_benchmark" function.
The summary of the execution, the [performance metrics](benchmark_metrics.md), is stored
in a json file and printed to the screen.
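
As a rough sketch of how these pieces fit together, the call could look as
follows; the import path and the default arguments of `run_driving_benchmark`
are assumptions that may differ across CARLA versions:

    from carla.driving_benchmark import run_driving_benchmark

    # Instantiate the two user-defined classes and hand them to the runner
    # (assumed import path; remaining arguments are left at their defaults).
    agent = ForwardAgent()
    experiment_suite = BasicExperimentSuite('Town01')
    run_driving_benchmark(agent, experiment_suite)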

In this tutorial we are going to show how to define
a basic experiment suite and a trivial agent with a
forward-going policy.


#### Defining the Agent

The tested agent must inherit the base *Agent* class.
Let's start by deriving a simple forward agent:

    from carla.agent.agent import Agent
    from carla.client import VehicleControl

The agent's *run_step* function receives the following parameters:

* Target Position: The position and orientation of the target.

With all this information, the *run_step* function is expected
to return a [vehicle control message](measurements.md) containing
the steering value, throttle value, brake value, etc.
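
Putting this together, a minimal forward agent could look like the sketch
below; the exact *run_step* signature is an assumption based on the
parameters listed above:

    from carla.agent.agent import Agent
    from carla.client import VehicleControl


    class ForwardAgent(Agent):
        """A trivial agent that always drives forward."""

        def run_step(self, measurements, sensor_data, directions, target):
            # Ignore all inputs and command a constant throttle;
            # steering and brake keep their default value of zero.
            control = VehicleControl()
            control.throttle = 0.9
            return control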


#### Defining the Experiment Suite

To create a new experiment suite, take the following steps:

* Create your custom class by inheriting the ExperimentSuite base class.
* Define the test and train weather conditions to be used.
* Build the *Experiment* objects.

The defined set of experiments must derive the *ExperimentSuite* class
as in the following code excerpt:

    from carla.agent_benchmark.experiment import Experiment
    from carla.sensor import Camera
    from .experiment_suite import ExperimentSuite


    class BasicExperimentSuite(ExperimentSuite):

##### Define test and train weather conditions

The user must select the weathers to be used. One should select the set
of test weathers and the set of train weathers. This is defined as a
class property, as in the following example:
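
A sketch of what such a property could look like; the numeric weather ids
below are illustrative, not values mandated by the suite:

    @property
    def train_weathers(self):
        # Weather ids seen during training (illustrative values).
        return [1, 3, 6, 8]

    @property
    def test_weathers(self):
        # Weather ids held out for testing (illustrative values).
        return [4, 14]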

##### Building Experiments

The [experiments are composed of a *task* that is defined by a set of *poses*](benchmark_structure.md).
Let's start by selecting poses for one of the cities, Town01 for instance.
First of all, we need to see all the possible positions; for that, with
a CARLA simulator running in a terminal, run:

    python view_start_positions.py

![town01_positions](img/town01_positions.png)

> Figure 2: All the possible start positions for CARLA Town01.

Now let's choose, for instance, 105 as the start position and 29
as the end position. These two positions can be visualized by running:

    python view_start_positions.py --pos 105,29 --no-labels

![town01_positions](img/town01_109_29.png)

> Figure 3: A start and an end position.

Let's choose two more poses, one for going straight, another one for a simple turn.
Also, let's choose three poses for Town02:


![town01_positions](img/initial_positions.png)

> Figure 4: The defined poses for both CARLA towns. For the arbitrary position task,
the goal is far away from the start position, usually more than one turn.

We define each of these poses as a task. In addition, we also set
the number of dynamic objects for each of these tasks and repeat
the arbitrary position task to have it also defined with dynamic
objects. In the following code excerpt we show the final
defined positions and the number of dynamic objects for each task:

    # Define the start/end positions above as tasks
    poses_task0 = [[36, 40]]
    poses_task1 = [[138, 17]]
    poses_task2 = [[105, 29]]
    poses_task3 = [[105, 29]]
    # Concatenate all the tasks
    poses_tasks = [poses_task0, poses_task1, poses_task2, poses_task3]
    vehicles_tasks = [0, 0, 0, 20]
    pedestrians_tasks = [0, 0, 0, 50]

Keep in mind that a task is a set of episodes with start and end points.

Finally, by using the defined tasks, we can build the experiments
vector as we show in the following code excerpt:

    experiments_vector = []
    # The used weathers are the union of the test and train weathers
    for weather in used_weathers:

        for iteration in range(len(poses_tasks)):
            poses = poses_tasks[iteration]
            vehicles = vehicles_tasks[iteration]
            pedestrians = pedestrians_tasks[iteration]

            conditions = CarlaSettings()
            conditions.set(
                SendNonPlayerAgentsInfo=True,
                NumberOfVehicles=vehicles,
                NumberOfPedestrians=pedestrians,
                WeatherId=weather
            )
            # Add all the cameras that were set for these experiments
            conditions.add_sensor(camera)
            experiment = Experiment()
            experiment.set(
                Conditions=conditions,
                Poses=poses,
                Task=iteration,
                Repetitions=1
            )
            experiments_vector.append(experiment)
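
The `camera` object referenced in the loop above is defined earlier in the
suite. A minimal sketch of such a sensor definition, with illustrative field
of view, resolution and mounting values:

    # A single RGB camera attached to the vehicle (values are illustrative).
    camera = Camera('CameraRGB')
    camera.set(FOV=100)
    camera.set_image_size(800, 600)
    camera.set_position(2.0, 0.0, 1.4)
    camera.set_rotation(-15.0, 0, 0)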

The full code can be found in [basic_experiment_suite.py](LINK).

Driving Benchmark Performance Metrics
------------------------------

This page explains the performance metrics module.
This module is used to compute a summary of results based on the agent's
actions when completing experiments.


### Provided performance metrics

The metrics module provides the following performance metrics:

* **Percentage of Success**: The percentage of episodes (poses from tasks)
that the agent successfully completed.

* **Average Completion**: The average distance towards the goal that the
agent was able to travel.

* **Off Road Intersection**: The number of times the agent goes out of the road.
The intersection is only counted if the area of the vehicle outside
of the road is bigger than a *threshold*.

* **Other Lane Intersection**: The number of times the agent goes to the other
lane. The intersection is only counted if the area of the vehicle on the
other lane is bigger than a *threshold*.

* **Vehicle Collisions**: The number of collisions with vehicles that had
an impact bigger than a *threshold*.

* **Pedestrian Collisions**: The number of collisions with pedestrians
that had an impact bigger than a *threshold*.

* **General Collisions**: The number of collisions with all other
objects with an impact bigger than a *threshold*.

### Executing and Setting Parameters

The metrics are computed as the final step of the benchmark
and are returned by the [benchmark_agent()](benchmark_creating.md) function,
which also stores a summary of the results in a json file.
Internally the module is executed as follows:

    metrics_object = Metrics(metrics_parameters)
    summary_dictionary = metrics_object.compute(path_to_execution_log)
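
Assuming the returned summary is a plain, JSON-serializable dictionary (an
assumption, consistent with it also being written to a json file), it can be
inspected directly:

    import json

    # Pretty-print the computed performance metrics summary.
    print(json.dumps(summary_dictionary, indent=2))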

The Metrics' *compute* function receives the full path to the execution log,
while the Metrics class itself is instantiated with the metric parameters.

The parameters are:

* **Threshold**: The threshold used by the metrics.
* **Frames Recount**: After an infraction, the number of frames the agent
needs to keep committing the infraction for it to be counted as another
infraction.
* **Frames Skip**: The number of frames that are skipped after a collision
or an intersection starts.

These parameters are defined as properties of the *Experiment Suite*
base class and can be redefined in your
[custom *Experiment Suite*](benchmark_creating.md/#defining-the-experiment-suite).

The default parameters are:

    @property
    def metrics_parameters(self):
        """
        Property to return the parameters for the metrics module.
        Could be redefined depending on the needs of the user.
        """
        return {


The driving benchmark module is made to evaluate a driving controller (*agent*) and obtain
metrics about its performance.

This module is mainly designed for:

* Users that are developing autonomous driving agents and want
to see how they perform in CARLA.

In this section you will learn:

* How to quickly get started and benchmark a trivial agent right away.
* The general implementation [architecture of the driving
benchmark module](benchmark_structure.md).
* [How to set up your agent and create your
own set of experiments](benchmark_creating.md) for challenging your agents.
* The [performance metrics used](benchmark_metrics.md).

To get started quickly, run the provided example:

    $ ./driving_benchmark_example.py

Keep in mind that to run the command above, you need a CARLA simulator
running at localhost on port 2000.
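
For reference, a typical way to start such a server is shown below; the map
path and flags are common for CARLA 0.8.x binaries but may vary across
versions:

    $ ./CarlaUE4.sh /Game/Maps/Town01 -carla-server -benchmark -fps=10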

We already provide the same benchmark used in the CoRL
2017 paper; it can be selected with the `--corl-2017` flag shown below.

Run the help command to see the options available:

    $ ./driving_benchmark_example.py --help

One of the options available is to continue
from a previous benchmark execution. For example,
to continue an experiment in CoRL2017 with a log name of "driving_benchmark_test", run:

    $ ./driving_benchmark_example.py --continue-experiment -n driving_benchmark_test --corl-2017

! Note: if the log name already exists and you don't set it to continue, it
will create another log with a number appended.

When running the driving benchmark for the basic configuration
you should [expect the following results](benchmark_creating.md/#expected-results)
to be printed on the terminal in roughly 5 minutes,
depending on your machine configuration.

Driving Benchmark Structure
-------------------

Figure 1 shows the general structure of the driving
benchmark module.

The *driving benchmark* is the module responsible for evaluating a certain
*agent* in an *experiment suite*.

The *experiment suite* is an abstract module.
Thus, the user must define their own derivation
of the *experiment suite* to be tested
on an agent. We already provide the CoRL2017 suite and a simple
*experiment suite* for testing.

The *experiment suite* is composed of a set of *experiments*.
Each *experiment* contains a *task* that consists of a set of navigation
episodes, represented by a set of *poses*.
These *poses* are tuples containing the start and end points of an
episode. For instance, the pose [105, 29] defines an episode that starts
at position 105 and ends at position 29.

The *experiments* are also associated with a *condition*. A
condition is represented by a [carla settings](carla_settings.md) object.
The conditions specify simulation parameters such as weather, sensor suite,
number of vehicles and pedestrians, etc.
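
A small sketch of such a condition, using the same calls shown in the
tutorial on creating your benchmark; the import path and the concrete values
are illustrative:

    from carla.settings import CarlaSettings

    # A condition: one weather with some dynamic objects present.
    conditions = CarlaSettings()
    conditions.set(
        SendNonPlayerAgentsInfo=True,
        NumberOfVehicles=20,
        NumberOfPedestrians=50,
        WeatherId=1
    )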

The user should also derive an *agent* class. The *agent* is the active
part which will be evaluated by the driving benchmark module.

The driving benchmark also contains two auxiliary modules.
The *recording module* is used to keep track of all measurements and
can be used to pause and continue a driving benchmark.
The [*metrics module*](benchmark_metrics.md) is used to compute the performance metrics
by using the recorded measurements.


Example: CORL 2017
----------------------

We already provide the CoRL 2017 experiment suite used to benchmark the
agents for the [CoRL 2017 paper](http://proceedings.mlr.press/v78/dosovitskiy17a/dosovitskiy17a.pdf).

The CoRL 2017 experiment suite has the following composition:

* A total of 24 experiments for each CARLA town containing:
    * A task for going straight.
    * A task for making a single turn.
    * A task for going to an arbitrary position.
    * A task for going to an arbitrary position with dynamic objects.
* Each task is composed of 25 poses and is repeated in 6 different weathers (Clear
Noon, Heavy Rain Noon, Clear Sunset, After Rain Noon, Cloudy After Rain and
Soft Rain Sunset), which yields the 24 experiments (4 tasks × 6 weathers).
* The entire experiment set has 600 episodes (24 experiments × 25 poses).
* The CoRL 2017 benchmark can take up to 24 hours to execute for Town01 and up to 15
hours for Town02, depending on the agent's performance.

* [How to build on Linux](how_to_build_on_linux.md)
* [How to build on Windows](how_to_build_on_windows.md)

<h3> Driving Benchmark </h3>

* [Quick Start](benchmark_start.md)
* [General Structure](benchmark_structure.md)
* [Creating Your Benchmark](benchmark_creating.md)