Merge pull request #341 from carla-simulator/some_benchmark_fixes

Some benchmark fixes
Néstor Subirón 2018-04-20 16:30:19 +02:00 committed by GitHub
commit 140500a6f8
47 changed files with 5872 additions and 830 deletions


@ -1,78 +0,0 @@
<h1>CARLA Benchmark</h1>
Running the Benchmark
---------------------
The "carla" api provides a basic benchmarking system, that allows making several
tests on a certain agent. We already provide the same benchmark used in the CoRL
2017 paper. By running this benchmark you can compare the results of your agent
to the results obtained by the agents show in the paper.
Besides the requirements of the CARLA client, the benchmark package also needs
the future package:
$ sudo pip install future
By default, the benchmark tests an agent that just goes straight.
To run the benchmark you need a server running. For a default localhost server
on port 2000, you just need to run
$ ./run_benchmark.py
or
$ python run_benchmark.py
Run the help command to see the available options:
$ ./run_benchmark.py --help
Benchmarking your Agent
-----------------------
The benchmark works by calling three lines of code
```python
corl = CoRL2017(city_name=args.city_name, name_to_save=args.log_name)
agent = Manual(args.city_name)
results = corl.benchmark_agent(agent, client)
```
This excerpt is executed in the
[run_benchmark.py](https://github.com/carla-simulator/carla/blob/master/PythonClient/run_benchmark.py)
example.
First, a *benchmark* object is defined, in this case a CoRL2017 benchmark. This
object is used to benchmark a certain Agent.
On the second line of our sample code, an object of the Manual class is
instantiated. This class inherits from the Agent base class that is used by the
*benchmark* object. To be benchmarked, an Agent subclass must redefine the
*run_step* function as it is done in the following excerpt:
```python
def run_step(self, measurements, sensor_data, target):
"""
Function to run a control step in the CARLA vehicle.
:param measurements: object of the Measurements type
:param sensor_data: images list object
:param target: target position of Transform type
:return: an object of the control type.
"""
control = VehicleControl()
control.throttle = 0.9
return control
```
The function receives measurements from the world, sensor data and a target
position. With this, the function must return a control to the car, *i.e.*
steering value, throttle value, brake value, etc.
The [measurements](measurements.md), [target](measurements.md),
[sensor_data](cameras_and_sensors.md) and [control](measurements.md) types are
described in the documentation.
Creating your Benchmark
-----------------------
> TODO


@ -0,0 +1,189 @@
We show the results for the test and train weathers when
[running the simple example](benchmark_creating/#expected-results) for Town01.
The following results should be printed to the screen after running the
example.
----- Printing results for training weathers (Seen in Training) -----
Percentage of Successful Episodes
Weather: Clear Noon
Task: 0 -> 1.0
Task: 1 -> 0.0
Task: 2 -> 0.0
Task: 3 -> 0.0
Average Between Weathers
Task 0 -> 1.0
Task 1 -> 0.0
Task 2 -> 0.0
Task 3 -> 0.0
Average Percentage of Distance to Goal Travelled
Weather: Clear Noon
Task: 0 -> 0.9643630125892909
Task: 1 -> 0.6794216252808839
Task: 2 -> 0.6593855166486696
Task: 3 -> 0.6646695325122313
Average Between Weathers
Task 0 -> 0.9643630125892909
Task 1 -> 0.6794216252808839
Task 2 -> 0.6593855166486696
Task 3 -> 0.6646695325122313
Avg. Kilometers driven before a collision to a PEDESTRIAN
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Avg. Kilometers driven before a collision to a VEHICLE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.11491704214531683
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.11491704214531683
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.22983408429063365
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> 0.12350085985904342
Task 2 -> 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> 0.12350085985904342
Task 2 -> 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Avg. Kilometers driven before invading the OPPOSITE LANE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
----- Printing results for test weathers (Unseen in Training) -----
Percentage of Successful Episodes
Weather: Clear Noon
Task: 0 -> 1.0
Task: 1 -> 0.0
Task: 2 -> 0.0
Task: 3 -> 0.0
Average Between Weathers
Task 0 -> 1.0
Task 1 -> 0.0
Task 2 -> 0.0
Task 3 -> 0.0
Average Percentage of Distance to Goal Travelled
Weather: Clear Noon
Task: 0 -> 0.9643630125892909
Task: 1 -> 0.6794216252808839
Task: 2 -> 0.6593855166486696
Task: 3 -> 0.6646695325122313
Average Between Weathers
Task 0 -> 0.9643630125892909
Task 1 -> 0.6794216252808839
Task 2 -> 0.6593855166486696
Task 3 -> 0.6646695325122313
Avg. Kilometers driven before a collision to a PEDESTRIAN
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Avg. Kilometers driven before a collision to a VEHICLE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.11491704214531683
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.11491704214531683
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.22983408429063365
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> 0.12350085985904342
Task 2 -> 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> 0.12350085985904342
Task 2 -> 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Avg. Kilometers driven before invading the OPPOSITE LANE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365


@ -0,0 +1,187 @@
We show the results for the test and train weathers when
[running the simple example](benchmark_creating/#expected-results) for Town02.
The following results should be printed to the screen after running the
example.
----- Printing results for training weathers (Seen in Training) -----
Percentage of Successful Episodes
Weather: Clear Noon
Task: 0 -> 1.0
Task: 1 -> 0.0
Task: 2 -> 0.0
Task: 3 -> 0.0
Average Between Weathers
Task 0 -> 1.0
Task 1 -> 0.0
Task 2 -> 0.0
Task 3 -> 0.0
Average Percentage of Distance to Goal Travelled
Weather: Clear Noon
Task: 0 -> 0.8127653637426329
Task: 1 -> 0.10658303206448155
Task: 2 -> -0.20448736444348714
Task: 3 -> -0.20446966646041384
Average Between Weathers
Task 0 -> 0.8127653637426329
Task 1 -> 0.10658303206448155
Task 2 -> -0.20448736444348714
Task 3 -> -0.20446966646041384
Avg. Kilometers driven before a collision to a PEDESTRIAN
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Avg. Kilometers driven before a collision to a VEHICLE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> 0.019641485501456352
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> 0.019641485501456352
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> 0.03856641710143665
Task 2 -> 0.03928511962584409
Task 3 -> 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> 0.03856641710143665
Task 2 -> 0.03928511962584409
Task 3 -> 0.039282971002912705
Avg. Kilometers driven before invading the OPPOSITE LANE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
----- Printing results for test weathers (Unseen in Training) -----
Percentage of Successful Episodes
Weather: Clear Noon
Task: 0 -> 1.0
Task: 1 -> 0.0
Task: 2 -> 0.0
Task: 3 -> 0.0
Average Between Weathers
Task 0 -> 1.0
Task 1 -> 0.0
Task 2 -> 0.0
Task 3 -> 0.0
Average Percentage of Distance to Goal Travelled
Weather: Clear Noon
Task: 0 -> 0.8127653637426329
Task: 1 -> 0.10658303206448155
Task: 2 -> -0.20448736444348714
Task: 3 -> -0.20446966646041384
Average Between Weathers
Task 0 -> 0.8127653637426329
Task 1 -> 0.10658303206448155
Task 2 -> -0.20448736444348714
Task 3 -> -0.20446966646041384
Avg. Kilometers driven before a collision to a PEDESTRIAN
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Avg. Kilometers driven before a collision to a VEHICLE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> 0.019641485501456352
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> 0.019641485501456352
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> 0.03856641710143665
Task 2 -> 0.03928511962584409
Task 3 -> 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> 0.03856641710143665
Task 2 -> 0.03928511962584409
Task 3 -> 0.039282971002912705
Avg. Kilometers driven before invading the OPPOSITE LANE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705

241
Docs/benchmark_creating.md Normal file

@ -0,0 +1,241 @@
Benchmarking your Agent
---------------------------
In this tutorial we show:
* [How to define a trivial agent with a forward going policy.](#defining-the-agent)
* [How to define a basic experiment suite.](#defining-the-experiment-suite)
#### Introduction
![Benchmark_structure](img/benchmark_diagram_small.png)
The driving benchmark is associated with two other modules:
the *agent* module, a controller which performs in
the other module, the *experiment suite*.
Both modules are abstract classes that must be redefined by
the user.
The following code excerpt is
an example of how to apply a driving benchmark:
# We instantiate a forward agent, a simple policy that just sets
# the throttle to 0.9 and the steering to zero
agent = ForwardAgent()
# We instantiate an experiment suite. Basically a set of experiments
# that are going to be evaluated on this benchmark.
experiment_suite = BasicExperimentSuite(city_name)
# Now actually run the driving benchmark.
# Besides the agent and experiment suite we should send
# the city name (Town01, Town02), the log name and other options
run_driving_benchmark(agent, experiment_suite, city_name,
log_name, continue_experiment,
host, port)
Following this excerpt, there are two classes to be defined:
ForwardAgent() and BasicExperimentSuite().
After that, the benchmark can be run with the "run_driving_benchmark" function.
The summary of the execution, the [performance metrics](benchmark_metrics.md), is stored
in a json file and printed to the screen.
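Putting these pieces together, a minimal self-contained script might look like the sketch below. It is only an illustration: the argument handling and the default host/port are assumptions, but the imports match the packages added in this change (`carla.agent`, `carla.driving_benchmark` and `carla.driving_benchmark.experiment_suites`).
```python
import argparse

from carla.agent import ForwardAgent
from carla.driving_benchmark import run_driving_benchmark
from carla.driving_benchmark.experiment_suites import BasicExperimentSuite

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Trivial driving benchmark run')
    parser.add_argument('-c', '--city-name', default='Town01')
    parser.add_argument('-n', '--log-name', default='driving_benchmark_test')
    args = parser.parse_args()

    # A trivial forward-going policy and a basic set of experiments.
    agent = ForwardAgent()
    experiment_suite = BasicExperimentSuite(args.city_name)

    # Assumes a CARLA server listening on localhost:2000.
    run_driving_benchmark(agent, experiment_suite, args.city_name,
                          args.log_name, continue_experiment=False,
                          host='127.0.0.1', port=2000)
```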
#### Defining the Agent
The tested agent must inherit the base *Agent* class.
Let's start by deriving a simple forward agent:
from carla.agent.agent import Agent
from carla.client import VehicleControl
class ForwardAgent(Agent):
To have its performance evaluated, the ForwardAgent derived class _must_
redefine the *run_step* function as it is done in the following excerpt:
def run_step(self, measurements, sensor_data, directions, target):
"""
Function to run a control step in the CARLA vehicle.
"""
control = VehicleControl()
control.throttle = 0.9
return control
This function receives the following parameters:
* [Measurements](measurements.md): the entire state of the world received
by the client from the CARLA Simulator. These measurements contain the agent position, orientation,
dynamic objects information, etc.
* [Sensor Data](cameras_and_sensors.md): The measured data from defined sensors,
such as Lidars or RGB cameras.
* Directions: Information from the high level planner. Currently the planner sends
a high level command from the following set: STRAIGHT, RIGHT, LEFT, NOTHING.
* Target Position: The position and orientation of the target.
With all this information, the *run_step* function is expected
to return a [vehicle control message](measurements.md), containing:
steering value, throttle value, brake value, etc.
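As an illustration of how these inputs can be used, the hedged sketch below modulates the throttle with the current forward speed from the measurements; the 10 m/s target speed and the brake value are arbitrary assumptions, not part of the benchmark.
```python
from carla.agent.agent import Agent
from carla.client import VehicleControl


class SpeedLimitedAgent(Agent):
    """Hypothetical agent that drives straight but caps its speed."""

    def run_step(self, measurements, sensor_data, directions, target):
        control = VehicleControl()
        # forward_speed is reported by the player measurements (assumed m/s).
        speed = measurements.player_measurements.forward_speed
        if speed < 10.0:  # arbitrary target speed
            control.throttle = 0.9
        else:
            control.throttle = 0.0
            control.brake = 0.3
        return control
```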
#### Defining the Experiment Suite
To create an Experiment Suite class you need to perform
the following steps:
* Create your custom class by inheriting the ExperimentSuite base class.
* Define the test and train weather conditions to be used.
* Build the *Experiment* objects.
##### Definition
The defined set of experiments must derive the *ExperimentSuite* class
as in the following code excerpt:
from carla.driving_benchmark.experiment import Experiment
from carla.sensor import Camera
from carla.settings import CarlaSettings
from .experiment_suite import ExperimentSuite
class BasicExperimentSuite(ExperimentSuite):
##### Define test and train weather conditions
The user must select the weathers to be used: one set of test weathers
and one set of train weathers. These are defined as
class properties, as in the following example:
@property
def train_weathers(self):
return [1]
@property
def test_weathers(self):
return [1]
##### Building Experiments
The [experiments are composed of a *task* that is defined by a set of *poses*](benchmark_structure.md).
Let's start by selecting poses for one of the cities; let's take Town01, for instance.
First of all, we need to see all the possible positions. For that, with
a CARLA simulator running, run the following in a terminal:
python view_start_positions.py
![town01_positions](img/town01_positions.png)
Now let's choose, for instance, 140 as the start position and 134
as the end position. These two positions can be visualized by running:
python view_start_positions.py --pos 140,134 --no-labels
![town01_positions](img/town01_140_134.png)
Let's choose two more poses, one for going straight and another one for a simple turn.
Let's also choose three poses for Town02:
![town01_positions](img/initial_positions.png)
>Figure: The poses used in this basic *Experiment Suite* example. A pose is
a tuple with the start and end positions of a goal-directed episode. Start positions are
shown in blue, end positions in red. Left: straight poses,
where the goal is straight ahead of the start position. Middle: one-turn
poses, where the goal is one turn away from the start point. Right: arbitrary poses,
where the goal is far away from the start position, usually more than one turn.
We define each of these pose sets as a task. In addition, we also set
the number of dynamic objects for each of these tasks and repeat
the arbitrary-position task to have it also defined with dynamic
objects. In the following code excerpt we show the final
defined positions and the number of dynamic objects for each task:
# Define the start/end position below as tasks
poses_task0 = [[7, 3]]
poses_task1 = [[138, 17]]
poses_task2 = [[140, 134]]
poses_task3 = [[140, 134]]
# Concatenate all the tasks
poses_tasks = [poses_task0, poses_task1, poses_task2, poses_task3]
# Add dynamic objects to tasks
vehicles_tasks = [0, 0, 0, 20]
pedestrians_tasks = [0, 0, 0, 50]
Finally, by using the defined tasks, we can build the experiments
vector as shown in the following code excerpt:
experiments_vector = []
# The weathers used are the union of the test and train weathers
for weather in used_weathers:
for iteration in range(len(poses_tasks)):
poses = poses_tasks[iteration]
vehicles = vehicles_tasks[iteration]
pedestrians = pedestrians_tasks[iteration]
conditions = CarlaSettings()
conditions.set(
SendNonPlayerAgentsInfo=True,
NumberOfVehicles=vehicles,
NumberOfPedestrians=pedestrians,
WeatherId=weather
)
# Add all the cameras that were set for this experiment
conditions.add_sensor(camera)
experiment = Experiment()
experiment.set(
Conditions=conditions,
Poses=poses,
Task=iteration,
Repetitions=1
)
experiments_vector.append(experiment)
The full code can be found at [basic_experiment_suite.py](https://github.com/carla-simulator/carla/blob/master/PythonClient/carla/driving_benchmark/experiment_suites/basic_experiment_suite.py)
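For a quick sanity check of the suite you just defined, you could iterate over the experiments it produces; the sketch below assumes the BasicExperimentSuite built above for Town01 and only prints what each experiment contains.
```python
from carla.driving_benchmark.experiment_suites import BasicExperimentSuite

suite = BasicExperimentSuite('Town01')
for experiment in suite.get_experiments():
    # Each experiment carries a task id, its poses and the weather condition.
    print(experiment.task, experiment.poses, experiment.Conditions.WeatherId)
```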
#### Expected Results
First you need a CARLA simulator running with a [fixed time-step](configuring_the_simulation/#fixed-time-step),
so that the results you obtain will be more or less reproducible.
For that you should run the CARLA simulator as:
./CarlaUE4.sh /Game/Maps/<Town_name> -windowed -world-port=2000 -benchmark -fps=10
The example presented in this tutorial can be executed for Town01 as:
./driving_benchmark_example.py -c Town01
You should expect these results: [town01_basic_forward_results](benchmark_basic_results_town01)
For Town02:
./driving_benchmark_example.py -c Town02
You should expect these results: [town02_basic_forward_results](benchmark_basic_results_town02)

97
Docs/benchmark_metrics.md Normal file

@ -0,0 +1,97 @@
Driving Benchmark Performance Metrics
------------------------------
This page explains the performance metrics module.
This module is used to compute a summary of results based on the actions
performed by the agent during the benchmark.
### Provided performance metrics
The driving benchmark performance metrics module provides the following performance metrics:
* **Percentage of Success**: The percentage of episodes (poses from tasks)
that the agent successfully completed.
* **Average Completion**: The average distance towards the goal that the
agent was able to travel.
* **Off Road Intersection**: The number of times the agent goes off the road.
The intersection is only counted if the area of the vehicle outside
of the road is bigger than a *threshold*.
* **Other Lane Intersection**: The number of times the agent goes to the other
lane. The intersection is only counted if the area of the vehicle on the
other lane is bigger than a *threshold*.
* **Vehicle Collisions**: The number of collisions with vehicles that had
an impact bigger than a *threshold*.
* **Pedestrian Collisions**: The number of collisions with pedestrians
that had an impact bigger than a *threshold*.
* **General Collisions**: The number of collisions with all other
objects with an impact bigger than a *threshold*.
### Executing and Setting Parameters
The metrics are computed as the final step of the benchmark,
and a summary of the results is stored in a json file.
Internally this is executed as follows:
```python
metrics_object = Metrics(metrics_parameters)
summary_dictionary = metrics_object.compute(path_to_execution_log)
```
The Metrics compute function
receives the full path to the execution log.
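The returned summary is a nested dictionary, indexed first by metric name, then by weather id, with one value per task. The sketch below is a hedged illustration of walking that structure: the nesting matches how the results printer reports it, while the exact metric key names are an assumption.
```python
# summary_dictionary comes from metrics_object.compute(path_to_execution_log)
for metric, per_weather in summary_dictionary.items():
    print('Metric:', metric)
    for weather, task_values in per_weather.items():
        print('  Weather id:', weather)
        for task, value in enumerate(task_values):
            print('    Task', task, '->', value)
```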
The Metrics class should be instantiated with some parameters.
The parameters are:
* **Threshold**: The threshold used by the metrics.
* **Frames Recount**: The number of frames the agent needs to keep
committing an infraction, after it starts, for
it to be counted as another infraction.
* **Frames Skip**: The number of frames that are
skipped after a collision or an intersection starts.
These parameters are defined as a property of the *Experiment Suite*
base class and can be redefined in your
[custom *Experiment Suite*](benchmark_creating/#defining-the-experiment-suite).
The default parameters are:
@property
def metrics_parameters(self):
"""
Property to return the parameters for the metrics module
Could be redefined depending on the needs of the user.
"""
return {
'intersection_offroad': {'frames_skip': 10,
'frames_recount': 20,
'threshold': 0.3
},
'intersection_otherlane': {'frames_skip': 10,
'frames_recount': 20,
'threshold': 0.4
},
'collision_other': {'frames_skip': 10,
'frames_recount': 20,
'threshold': 400
},
'collision_vehicles': {'frames_skip': 10,
'frames_recount': 30,
'threshold': 400
},
'collision_pedestrians': {'frames_skip': 5,
'frames_recount': 100,
'threshold': 300
},
}
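To change any of these defaults you can override the property in your own experiment suite. The sketch below only tightens the off-road threshold as an illustration; the class name and the 0.2 value are arbitrary assumptions.
```python
from carla.driving_benchmark.experiment_suites import BasicExperimentSuite


class StricterExperimentSuite(BasicExperimentSuite):
    """Hypothetical suite that counts off-road intersections more strictly."""

    @property
    def metrics_parameters(self):
        # Start from the defaults of the base class and tighten only the
        # off-road threshold (0.2 is an arbitrary example value).
        parameters = super(StricterExperimentSuite, self).metrics_parameters
        parameters['intersection_offroad']['threshold'] = 0.2
        return parameters
```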

69
Docs/benchmark_start.md Normal file

@ -0,0 +1,69 @@
Driving Benchmark
===============
The *driving benchmark* module is made
to evaluate a driving controller (agent) and obtain
metrics about its performance.
This module is mainly designed for:
* Users that work developing autonomous driving agents and want
to see how they perform in CARLA.
In this section you will learn:
* How to quickly get started and benchmark a trivial agent right away.
* The general implementation [architecture of the driving
benchmark module](benchmark_structure.md).
* [How to set up your agent and create your
own set of experiments](benchmark_creating.md).
* The [performance metrics used](benchmark_metrics.md).
Getting Started
----------------
As a way to familiarize yourself with the system we
provide a trivial agent performing in a small
set of experiments (Basic). To execute it, simply
run:
$ ./driving_benchmark_example.py
Keep in mind that, to run the command above, you need a CARLA simulator
running at localhost and on port 2000.
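For reference, one way to launch such a server with a fixed time-step, as recommended in the [Expected Results](benchmark_creating/#expected-results) section (Town01 is just an example map), is:
$ ./CarlaUE4.sh /Game/Maps/Town01 -windowed -world-port=2000 -benchmark -fps=10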
We already provide the same benchmark used in the [CoRL
2017 paper](http://proceedings.mlr.press/v78/dosovitskiy17a/dosovitskiy17a.pdf).
The CoRL 2017 experiment suite can be run on a trivial agent by
running:
$ ./driving_benchmark_example.py --corl-2017
This benchmark example can be further configured.
Run the help command to see the available options:
$ ./driving_benchmark_example.py --help
One of the available options is to continue
from a previous benchmark execution. For example,
to continue an experiment on the CoRL2017 suite with a log name of "driving_benchmark_test", run:
$ ./driving_benchmark_example.py --continue-experiment -n driving_benchmark_test --corl-2017
!!! note
If the log name already exists and you don't set it to continue, it
will create another log under a different name.
When running the driving benchmark for the basic configuration
you should [expect these results](benchmark_creating/#expected-results).


@ -0,0 +1,66 @@
Driving Benchmark Structure
-------------------
The figure below shows the general structure of the driving
benchmark module.
![Benchmark_structure](img/benchmark_diagram.png)
>Figure: The general structure of the driving benchmark module.
The *driving benchmark* is the module responsible for evaluating a certain
*agent* in an *experiment suite*.
The *experiment suite* is an abstract module.
Thus, the user must define their own derivation
of the *experiment suite*. We already provide the CoRL2017 suite and a simple
*experiment suite* for testing.
The *experiment suite* is composed of a set of *experiments*.
Each *experiment* contains a *task* that consists of a set of navigation
episodes, represented by a set of *poses*.
These *poses* are tuples containing the start and end points of an
episode.
The *experiments* are also associated with a *condition*. A
condition is represented by a [carla settings](carla_settings.md) object.
The conditions specify simulation parameters such as: weather, sensor suite, number of
vehicles and pedestrians, etc.
The user should also derive an *agent* class. The *agent* is the active
part which will be evaluated in the driving benchmark.
The driving benchmark also contains two auxiliary modules.
The *recording module* is used to keep track of all measurements and
can be used to pause and continue a driving benchmark.
The [*metrics module*](benchmark_metrics.md) is used to compute the performance metrics
by using the recorded measurements.
Example: CoRL 2017
----------------------
We already provide the CoRL 2017 experiment suite used to benchmark the
agents for the [CoRL 2017 paper](http://proceedings.mlr.press/v78/dosovitskiy17a/dosovitskiy17a.pdf).
The CoRL 2017 experiment suite has the following composition:
* A total of 24 experiments for each CARLA town containing:
* A task for going straight.
* A task for making a single turn.
* A task for going to an arbitrary position.
* A task for going to an arbitrary position with dynamic objects.
* Each task is composed of 25 poses that are repeated in 6 different weathers (Clear Noon, Heavy Rain Noon, Clear Sunset, After Rain Noon, Cloudy After Rain and Soft Rain Sunset).
* The entire experiment set has 600 episodes.
* The CoRL 2017 suite can take up to 24 hours to execute for Town01 and up to 15
hours for Town02, depending on the agent's performance.
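As a quick consistency check of these numbers, the snippet below reproduces the counts stated above (4 tasks per town, 6 weathers, 25 poses per task):
```python
tasks = 4
weathers = 6
poses_per_task = 25

experiments_per_town = tasks * weathers                    # 24 experiments
episodes_per_town = experiments_per_town * poses_per_task  # 600 episodes
print(experiments_per_town, episodes_per_town)
```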

Binary image files added (Docs/img/town01_140_134.png and four others; previews not shown).


@ -15,11 +15,17 @@
* [How to build on Linux](how_to_build_on_linux.md)
* [How to build on Windows](how_to_build_on_windows.md)
<h3> Driving Benchmark </h3>
* [Quick Start](benchmark_start.md)
* [General Structure](benchmark_structure.md)
* [Creating Your Benchmark](benchmark_creating.md)
* [Computed Performance Metrics](benchmark_metrics.md)
<h3>Advanced topics</h3>
* [CARLA settings](carla_settings.md)
* [Simulator keyboard input](simulator_keyboard_input.md)
* [Benchmark](benchmark.md)
* [Running without display and selecting GPUs](carla_headless.md)
* [How to link Epic's Automotive Materials](epic_automotive_materials.md)


@ -0,0 +1,2 @@
from .forward_agent import ForwardAgent
from .agent import Agent


@ -0,0 +1,24 @@
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
# @author: german,felipecode
from __future__ import print_function
import abc
class Agent(object):
def __init__(self):
self.__metaclass__ = abc.ABCMeta
@abc.abstractmethod
def run_step(self, measurements, sensor_data, directions, target):
"""
Function to be redefined by an agent.
:param The measurements like speed, the image data and a target
:returns A carla Control object, with the steering/gas/brake for the agent
"""


@ -0,0 +1,15 @@
from carla.agent.agent import Agent
from carla.client import VehicleControl
class ForwardAgent(Agent):
"""
Simple derivation of Agent Class,
A trivial agent agent that goes straight
"""
def run_step(self, measurements, sensor_data, directions, target):
control = VehicleControl()
control.throttle = 0.9
return control


@ -1,38 +0,0 @@
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
# @author: german,felipecode
from __future__ import print_function
import abc
from carla.planner.planner import Planner
class Agent(object):
def __init__(self, city_name):
self.__metaclass__ = abc.ABCMeta
self._planner = Planner(city_name)
def get_distance(self, start_point, end_point):
path_distance = self._planner.get_shortest_path_distance(
[start_point.location.x, start_point.location.y, 22]
, [start_point.orientation.x, start_point.orientation.y, 22]
, [end_point.location.x, end_point.location.y, 22]
, [end_point.orientation.x, end_point.orientation.y, 22])
# We calculate the timeout based on the distance
return path_distance
@abc.abstractmethod
def run_step(self, measurements, sensor_data, target):
"""
Function to be redefined by an agent.
:param The measurements like speed, the image data and a target
:returns A carla Control object, with the steering/gas/brake for the agent
"""


@ -1,377 +0,0 @@
#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import csv
import datetime
import math
import os
import abc
import logging
from builtins import input as input_data
from carla.client import VehicleControl
def sldist(c1, c2):
return math.sqrt((c2[0] - c1[0])**2 + (c2[1] - c1[1])**2)
class Benchmark(object):
"""
The Benchmark class, controls the execution of the benchmark by an
Agent class.
The benchmark class must be inherited
"""
def __init__(
self,
city_name,
name_to_save,
continue_experiment=False,
save_images=False
):
self.__metaclass__ = abc.ABCMeta
self._city_name = city_name
self._base_name = name_to_save
self._dict_stats = {'exp_id': -1,
'rep': -1,
'weather': -1,
'start_point': -1,
'end_point': -1,
'result': -1,
'initial_distance': -1,
'final_distance': -1,
'final_time': -1,
'time_out': -1
}
self._dict_rewards = {'exp_id': -1,
'rep': -1,
'weather': -1,
'collision_gen': -1,
'collision_ped': -1,
'collision_car': -1,
'lane_intersect': -1,
'sidewalk_intersect': -1,
'pos_x': -1,
'pos_y': -1
}
self._experiments = self._build_experiments()
# Create the log files and get the names
self._suffix_name, self._full_name = self._create_log_record(name_to_save, self._experiments)
# Get the line for the experiment to be continued
self._line_on_file = self._continue_experiment(continue_experiment)
self._save_images = save_images
self._image_filename_format = os.path.join(
self._full_name, '_images/episode_{:s}/{:s}/image_{:0>5d}.jpg')
def run_navigation_episode(
self,
agent,
carla,
time_out,
target,
episode_name):
measurements, sensor_data = carla.read_data()
carla.send_control(VehicleControl())
t0 = measurements.game_timestamp
t1 = t0
success = False
measurement_vec = []
frame = 0
distance = 10000
while(t1 - t0) < (time_out * 1000) and not success:
measurements, sensor_data = carla.read_data()
control = agent.run_step(measurements, sensor_data, target)
logging.info("Controller is Inputting:")
logging.info('Steer = %f Throttle = %f Brake = %f ',
control.steer, control.throttle, control.brake)
carla.send_control(control)
# measure distance to target
if self._save_images:
for name, image in sensor_data.items():
image.save_to_disk(self._image_filename_format.format(
episode_name, name, frame))
curr_x = 1e2 * measurements.player_measurements.transform.location.x
curr_y = 1e2 * measurements.player_measurements.transform.location.y
measurement_vec.append(measurements.player_measurements)
t1 = measurements.game_timestamp
distance = sldist([curr_x, curr_y],
[target.location.x, target.location.y])
logging.info('Status:')
logging.info(
'[d=%f] c_x = %f, c_y = %f ---> t_x = %f, t_y = %f',
float(distance), curr_x, curr_y, target.location.x,
target.location.y)
if distance < 200.0:
success = True
frame += 1
if success:
return 1, measurement_vec, float(t1 - t0) / 1000.0, distance
return 0, measurement_vec, time_out, distance
def benchmark_agent(self, agent, carla):
if self._line_on_file == 0:
# The fixed name considering all the experiments being run
with open(os.path.join(self._full_name,
self._suffix_name), 'w') as ofd:
w = csv.DictWriter(ofd, self._dict_stats.keys())
w.writeheader()
with open(os.path.join(self._full_name,
'details_' + self._suffix_name), 'w') as rfd:
rw = csv.DictWriter(rfd, self._dict_rewards.keys())
rw.writeheader()
start_task = 0
start_pose = 0
else:
(start_task, start_pose) = self._get_pose_and_task(self._line_on_file)
logging.info(' START ')
for experiment in self._experiments[start_task:]:
positions = carla.load_settings(
experiment.conditions).player_start_spots
for pose in experiment.poses[start_pose:]:
for rep in range(experiment.repetitions):
start_point = pose[0]
end_point = pose[1]
carla.start_episode(start_point)
logging.info('======== !!!! ==========')
logging.info(' Start Position %d End Position %d ',
start_point, end_point)
path_distance = agent.get_distance(
positions[start_point], positions[end_point])
euclidean_distance = \
sldist([positions[start_point].location.x, positions[start_point].location.y],
[positions[end_point].location.x, positions[end_point].location.y])
time_out = self._calculate_time_out(path_distance)
# running the agent
(result, reward_vec, final_time, remaining_distance) = \
self.run_navigation_episode(
agent, carla, time_out, positions[end_point],
str(experiment.Conditions.WeatherId) + '_'
+ str(experiment.id) + '_' + str(start_point)
+ '.' + str(end_point))
# compute stats for the experiment
self._write_summary_results(
experiment, pose, rep, euclidean_distance,
remaining_distance, final_time, time_out, result)
self._write_details_results(experiment, rep, reward_vec)
if(result > 0):
logging.info('+++++ Target achieved in %f seconds! +++++',
final_time)
else:
logging.info('----- Timeout! -----')
return self.get_all_statistics()
def _write_summary_results(self, experiment, pose, rep,
path_distance, remaining_distance,
final_time, time_out, result):
self._dict_stats['exp_id'] = experiment.id
self._dict_stats['rep'] = rep
self._dict_stats['weather'] = experiment.Conditions.WeatherId
self._dict_stats['start_point'] = pose[0]
self._dict_stats['end_point'] = pose[1]
self._dict_stats['result'] = result
self._dict_stats['initial_distance'] = path_distance
self._dict_stats['final_distance'] = remaining_distance
self._dict_stats['final_time'] = final_time
self._dict_stats['time_out'] = time_out
with open(os.path.join(self._full_name, self._suffix_name), 'a+') as ofd:
w = csv.DictWriter(ofd, self._dict_stats.keys())
w.writerow(self._dict_stats)
def _write_details_results(self, experiment, rep, reward_vec):
with open(os.path.join(self._full_name,
'details_' + self._suffix_name), 'a+') as rfd:
rw = csv.DictWriter(rfd, self._dict_rewards.keys())
for i in range(len(reward_vec)):
self._dict_rewards['exp_id'] = experiment.id
self._dict_rewards['rep'] = rep
self._dict_rewards['weather'] = experiment.Conditions.WeatherId
self._dict_rewards['collision_gen'] = reward_vec[
i].collision_other
self._dict_rewards['collision_ped'] = reward_vec[
i].collision_pedestrians
self._dict_rewards['collision_car'] = reward_vec[
i].collision_vehicles
self._dict_rewards['lane_intersect'] = reward_vec[
i].intersection_otherlane
self._dict_rewards['sidewalk_intersect'] = reward_vec[
i].intersection_offroad
self._dict_rewards['pos_x'] = reward_vec[
i].transform.location.x
self._dict_rewards['pos_y'] = reward_vec[
i].transform.location.y
rw.writerow(self._dict_rewards)
def _create_log_record(self, base_name, experiments):
"""
This function creates the log files for the benchmark.
"""
suffix_name = self._get_experiments_names(experiments)
full_name = os.path.join('_benchmarks_results',
base_name + '_'
+ self._get_details() + '/')
folder = os.path.dirname(full_name)
if not os.path.isdir(folder):
os.makedirs(folder)
# Make a date file: to show when this was modified,
# the number of times the experiments were run
now = datetime.datetime.now()
open(os.path.join(full_name, now.strftime("%Y%m%d%H%M")),'w').close()
return suffix_name, full_name
def _continue_experiment(self, continue_experiment):
if self._experiment_exist():
if continue_experiment:
line_on_file = self._get_last_position()
else:
# Ask question, to avoid mistaken override situations
answer = input_data("The experiment was already found in the files"
+ ", Do you want to continue (y/n)? \n"
)
if answer == 'Yes' or answer == 'y':
line_on_file = self._get_last_position()
else:
line_on_file = 0
else:
line_on_file = 0
return line_on_file
def _experiment_exist(self):
return os.path.isfile(self._full_name)
def _get_last_position(self):
with open(os.path.join(self._full_name, self._suffix_name)) as f:
return sum(1 for _ in f)
# To be redefined on subclasses on how to calculate timeout for an episode
@abc.abstractmethod
def _calculate_time_out(self, distance):
pass
@abc.abstractmethod
def _get_details(self):
"""
Get details
:return: a string with name and town of the subclass
"""
@abc.abstractmethod
def _build_experiments(self):
"""
Returns a set of experiments to be evaluated
Must be redefined in an inherited class.
"""
@abc.abstractmethod
def get_all_statistics(self):
"""
Get the statistics of the evaluated experiments
:return:
"""
@abc.abstractmethod
def _get_pose_and_task(self, line_on_file):
"""
Parse the experiment depending on number of poses and tasks
"""
@abc.abstractmethod
def plot_summary_train(self):
"""
returns the summary for the train weather/task episodes
"""
@abc.abstractmethod
def plot_summary_test(self):
"""
returns the summary for the test weather/task episodes
"""
@staticmethod
def _get_experiments_names(experiments):
name_cat = 'w'
for experiment in experiments:
name_cat += str(experiment.Conditions.WeatherId) + '.'
return name_cat


@ -1,205 +0,0 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import numpy as np
import math
import os
sldist = lambda c1, c2: math.sqrt((c2[0] - c1[0])**2 + (c2[1] - c1[1])**2)
flatten = lambda l: [item for sublist in l for item in sublist]
def get_colisions(selected_matrix, header):
count_gen = 0
count_ped = 0
count_car = 0
i = 1
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_gen')]
- selected_matrix[(i-10), header.index('collision_gen')]) > 40000:
count_gen += 1
i += 20
i += 1
i = 1
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_car')]
- selected_matrix[(i-10), header.index('collision_car')]) > 40000:
count_car += 1
i += 30
i += 1
i = 1
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_ped')]
- selected_matrix[i-5, header.index('collision_ped')]) > 30000:
count_ped += 1
i += 100
i += 1
return count_gen, count_car, count_ped
def get_distance_traveled(selected_matrix, header):
prev_x = selected_matrix[0, header.index('pos_x')]
prev_y = selected_matrix[0, header.index('pos_y')]
i = 1
acummulated_distance = 0
while i < selected_matrix.shape[0]:
x = selected_matrix[i, header.index('pos_x')]
y = selected_matrix[i, header.index('pos_y')]
# Here we define a maximum distance per tick, in this case 8 meters or 288 km/h
if sldist((x, y), (prev_x, prev_y)) < 800:
acummulated_distance += sldist((x, y), (prev_x, prev_y))
prev_x = x
prev_y = y
i += 1
return float(acummulated_distance)/float(100*1000)
def get_out_of_road_lane(selected_matrix, header):
count_road = 0
count_lane = 0
i = 0
while i < selected_matrix.shape[0]:
# print selected_matrix[i,6]
if (selected_matrix[i, header.index('sidewalk_intersect')]
- selected_matrix[(i-10), header.index('sidewalk_intersect')]) > 0.3:
count_road += 1
i += 20
if i >= selected_matrix.shape[0]:
break
if (selected_matrix[i, header.index('lane_intersect')]
- selected_matrix[(i-10), header.index('lane_intersect')]) > 0.4:
count_lane += 1
i += 20
i += 1
return count_lane, count_road
def compute_summary(filename, dynamic_episodes):
# Separate the PATH and the basename
path = os.path.dirname(filename)
base_name = os.path.basename(filename)
f = open(filename, "rb")
header = f.readline()
header = header.split(',')
header[-1] = header[-1][:-2]
f.close()
f = open(os.path.join(path, 'details_' + base_name), "rb")
header_details = f.readline()
header_details = header_details.split(',')
header_details[-1] = header_details[-1][:-2]
f.close()
data_matrix = np.loadtxt(open(filename, "rb"), delimiter=",", skiprows=1)
# Corner Case: The presented test just had one episode
if data_matrix.ndim == 1:
data_matrix = np.expand_dims(data_matrix, axis=0)
tasks = np.unique(data_matrix[:, header.index('exp_id')])
all_weathers = np.unique(data_matrix[:, header.index('weather')])
reward_matrix = np.loadtxt(open(os.path.join(
path, 'details_' + base_name), "rb"), delimiter=",", skiprows=1)
metrics_dictionary = {'average_completion': {w: [0.0]*len(tasks) for w in all_weathers},
'intersection_offroad': {w: [0.0]*len(tasks) for w in all_weathers},
'intersection_otherlane': {w: [0.0]*len(tasks) for w in all_weathers},
'collision_pedestrians': {w: [0.0]*len(tasks) for w in all_weathers},
'collision_vehicles': {w: [0.0]*len(tasks) for w in all_weathers},
'collision_other': {w: [0.0]*len(tasks) for w in all_weathers},
'average_fully_completed': {w: [0.0]*len(tasks) for w in all_weathers},
'average_speed': {w: [0.0]*len(tasks) for w in all_weathers},
'driven_kilometers': {w: [0.0]*len(tasks) for w in all_weathers}
}
for t in tasks:
task_data_matrix = data_matrix[
data_matrix[:, header.index('exp_id')] == t]
weathers = np.unique(task_data_matrix[:, header.index('weather')])
for w in weathers:
t = int(t)
task_data_matrix = data_matrix[np.logical_and(data_matrix[:, header.index(
'exp_id')] == t, data_matrix[:, header.index('weather')] == w)]
task_reward_matrix = reward_matrix[np.logical_and(reward_matrix[:, header_details.index(
'exp_id')] == float(t), reward_matrix[:, header_details.index('weather')] == float(w))]
km_run = get_distance_traveled(
task_reward_matrix, header_details)
metrics_dictionary['average_fully_completed'][w][t] = sum(
task_data_matrix[:, header.index('result')])/task_data_matrix.shape[0]
metrics_dictionary['average_completion'][w][t] = sum(
(task_data_matrix[:, header.index('initial_distance')]
- task_data_matrix[:, header.index('final_distance')])
/ task_data_matrix[:, header.index('initial_distance')]) \
/ len(task_data_matrix[:, header.index('final_distance')])
metrics_dictionary['driven_kilometers'][w][t]= km_run
metrics_dictionary['average_speed'][w][t]= km_run/ \
((sum(task_data_matrix[:, header.index('final_time')]))/3600.0)
if list(tasks).index(t) in set(dynamic_episodes):
lane_road = get_out_of_road_lane(
task_reward_matrix, header_details)
colisions = get_colisions(task_reward_matrix, header_details)
metrics_dictionary['intersection_offroad'][
w][t] = lane_road[0]/km_run
metrics_dictionary['intersection_otherlane'][
w][t] = lane_road[1]/km_run
metrics_dictionary['collision_pedestrians'][
w][t] = colisions[2]/km_run
metrics_dictionary['collision_vehicles'][
w][t] = colisions[1]/km_run
metrics_dictionary['collision_other'][
w][t] = colisions[0]/km_run
return metrics_dictionary


@ -0,0 +1 @@
from .driving_benchmark import run_driving_benchmark


@ -0,0 +1,317 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import abc
import logging
import math
import time
from carla.client import VehicleControl
from carla.client import make_carla_client
from carla.driving_benchmark.metrics import Metrics
from carla.planner.planner import Planner
from carla.settings import CarlaSettings
from carla.tcp import TCPConnectionError
from . import results_printer
from .recording import Recording
def sldist(c1, c2):
return math.sqrt((c2[0] - c1[0]) ** 2 + (c2[1] - c1[1]) ** 2)
class DrivingBenchmark(object):
"""
The Benchmark class controls the execution of the benchmark, interfacing
an Agent class with an experiment suite.
The benchmark class must be used together with a class that defines
all the experiments to be run by the agent
"""
def __init__(
self,
city_name='Town01',
name_to_save='Test',
continue_experiment=False,
save_images=False,
distance_for_success=2.0
):
self.__metaclass__ = abc.ABCMeta
self._city_name = city_name
self._base_name = name_to_save
# The minimum distance for arriving at the goal point in
# order to consider it a success
self._distance_for_success = distance_for_success
# The object used to record the benchmark and to be able to continue afterwards
self._recording = Recording(name_to_save=name_to_save,
continue_experiment=continue_experiment,
save_images=save_images
)
# We have a default planner instantiated that produces high level commands
self._planner = Planner(city_name)
def benchmark_agent(self, experiment_suite, agent, client):
"""
Function to benchmark the agent.
It first checks the log file for this benchmark;
if it exists, it continues from the experiment where it stopped.
Args:
experiment_suite
agent: an agent object with the run_step function implemented.
client:
Return:
A dictionary with all the metrics computed from the
agent running the set of experiments.
"""
# Instantiate a metric object that will be used to compute the metrics for
# the benchmark afterwards.
metrics_object = Metrics(experiment_suite.metrics_parameters,
experiment_suite.dynamic_tasks)
# Function return the current pose and task for this benchmark.
start_pose, start_experiment = self._recording.get_pose_and_experiment(
experiment_suite.get_number_of_poses_task())
logging.info('START')
for experiment in experiment_suite.get_experiments()[int(start_experiment):]:
positions = client.load_settings(
experiment.conditions).player_start_spots
self._recording.log_start(experiment.task)
for pose in experiment.poses[start_pose:]:
for rep in range(experiment.repetitions):
start_index = pose[0]
end_index = pose[1]
client.start_episode(start_index)
# Print information on
logging.info('======== !!!! ==========')
logging.info(' Start Position %d End Position %d ',
start_index, end_index)
self._recording.log_poses(start_index, end_index,
experiment.Conditions.WeatherId)
# Calculate the initial distance for this episode
initial_distance = \
sldist(
[positions[start_index].location.x, positions[start_index].location.y],
[positions[end_index].location.x, positions[end_index].location.y])
time_out = experiment_suite.calculate_time_out(
self._get_shortest_path(positions[start_index], positions[end_index]))
# running the agent
(result, reward_vec, control_vec, final_time, remaining_distance) = \
self._run_navigation_episode(
agent, client, time_out, positions[end_index],
str(experiment.Conditions.WeatherId) + '_'
+ str(experiment.task) + '_' + str(start_index)
+ '.' + str(end_index))
# Write the general status of the just ran episode
self._recording.write_summary_results(
experiment, pose, rep, initial_distance,
remaining_distance, final_time, time_out, result)
# Write the details of this episode.
self._recording.write_measurements_results(experiment, rep, pose, reward_vec,
control_vec)
if result > 0:
logging.info('+++++ Target achieved in %f seconds! +++++',
final_time)
else:
logging.info('----- Timeout! -----')
start_pose = 0
self._recording.log_end()
return metrics_object.compute(self._recording.path)
def get_path(self):
"""
Returns the path where the log was saved.
"""
return self._recording.path
def _get_directions(self, current_point, end_point):
"""
Method that returns the directions (high level command) to reach a certain goal
"""
directions = self._planner.get_next_command(
(current_point.location.x,
current_point.location.y, 0.22),
(current_point.orientation.x,
current_point.orientation.y,
current_point.orientation.z),
(end_point.location.x, end_point.location.y, 0.22),
(end_point.orientation.x, end_point.orientation.y, end_point.orientation.z))
return directions
def _get_shortest_path(self, start_point, end_point):
"""
Calculates the shortest path between two points considering the road network
"""
return self._planner.get_shortest_path_distance(
[
start_point.location.x, start_point.location.y, 0.22], [
start_point.orientation.x, start_point.orientation.y, 0.22], [
end_point.location.x, end_point.location.y, end_point.location.z], [
end_point.orientation.x, end_point.orientation.y, end_point.orientation.z])
def _run_navigation_episode(
self,
agent,
client,
time_out,
target,
episode_name):
"""
Run one episode of the benchmark (Pose) for a certain agent.
Args:
agent: the agent object
client: an object of the carla client to communicate
with the CARLA simulator
time_out: the time limit to complete this episode
target: the target to reach
episode_name: The name for saving images of this episode
"""
# Send an initial command.
measurements, sensor_data = client.read_data()
client.send_control(VehicleControl())
initial_timestamp = measurements.game_timestamp
current_timestamp = initial_timestamp
# The vector containing all measurements produced on this episode
measurement_vec = []
# The vector containing all controls produced on this episode
control_vec = []
frame = 0
distance = 10000
success = False
while (current_timestamp - initial_timestamp) < (time_out * 1000) and not success:
# Read data from server with the client
measurements, sensor_data = client.read_data()
# The directions to reach the goal are calculated.
directions = self._get_directions(measurements.player_measurements.transform, target)
# Agent process the data.
control = agent.run_step(measurements, sensor_data, directions, target)
# Send the control commands to the vehicle
client.send_control(control)
# save images if the flag is activated
self._recording.save_images(sensor_data, episode_name, frame)
current_x = measurements.player_measurements.transform.location.x
current_y = measurements.player_measurements.transform.location.y
logging.info("Controller is Inputting:")
logging.info('Steer = %f Throttle = %f Brake = %f ',
control.steer, control.throttle, control.brake)
current_timestamp = measurements.game_timestamp
# Get the distance travelled until now
distance = sldist([current_x, current_y],
[target.location.x, target.location.y])
# Write status of the run on verbose mode
logging.info('Status:')
logging.info(
'[d=%f] c_x = %f, c_y = %f ---> t_x = %f, t_y = %f',
float(distance), current_x, current_y, target.location.x,
target.location.y)
# Check if the target has been reached
if distance < self._distance_for_success:
success = True
# Increment the vectors and append the measurements and controls.
frame += 1
measurement_vec.append(measurements.player_measurements)
control_vec.append(control)
if success:
return 1, measurement_vec, control_vec, float(
current_timestamp - initial_timestamp) / 1000.0, distance
return 0, measurement_vec, control_vec, time_out, distance
def run_driving_benchmark(agent,
experiment_suite,
city_name='Town01',
log_name='Test',
continue_experiment=False,
host='127.0.0.1',
port=2000
):
while True:
try:
with make_carla_client(host, port) as client:
# Hack to fix issue 310: we force a reset so it does not get
# the positions on the first server reset.
client.load_settings(CarlaSettings())
client.start_episode(0)
# We instantiate the driving benchmark, that is the engine used to
# benchmark an agent. The instantiation starts the log process, sets
benchmark = DrivingBenchmark(city_name=city_name,
name_to_save=log_name + '_'
+ type(experiment_suite).__name__
+ '_' + city_name,
continue_experiment=continue_experiment)
# This function performs the benchmark. It returns a dictionary summarizing
# the entire execution.
benchmark_summary = benchmark.benchmark_agent(experiment_suite, agent, client)
print("")
print("")
print("----- Printing results for training weathers (Seen in Training) -----")
print("")
print("")
results_printer.print_summary(benchmark_summary, experiment_suite.train_weathers,
benchmark.get_path())
print("")
print("")
print("----- Printing results for test weathers (Unseen in Training) -----")
print("")
print("")
results_printer.print_summary(benchmark_summary, experiment_suite.test_weathers,
benchmark.get_path())
break
except TCPConnectionError as error:
logging.error(error)
time.sleep(1)


@ -8,9 +8,21 @@ from carla.settings import CarlaSettings
class Experiment(object):
"""
Experiment defines a certain task, under given conditions.
A task is associated with a set of poses, each containing a start and end pose.
Conditions are associated with a CarlaSettings object and describe the following:
Number of Vehicles
Number of Pedestrians
Weather
Random Seed of the agents, describing their behaviour.
"""
def __init__(self):
self.Id = ''
self.Task = 0
self.Conditions = CarlaSettings()
self.Poses = [[]]
self.Repetitions = 1
@ -21,9 +33,12 @@ class Experiment(object):
raise ValueError('Experiment: no key named %r' % key)
setattr(self, key, value)
if self.Repetitions != 1:
raise NotImplementedError()
@property
def id(self):
return self.Id
def task(self):
return self.Task
@property
def conditions(self):


@ -0,0 +1,2 @@
from .basic_experiment_suite import BasicExperimentSuite
from .corl_2017 import CoRL2017


@ -0,0 +1,83 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
from __future__ import print_function
from carla.driving_benchmark.experiment import Experiment
from carla.sensor import Camera
from carla.settings import CarlaSettings
from .experiment_suite import ExperimentSuite
class BasicExperimentSuite(ExperimentSuite):
@property
def train_weathers(self):
return [1]
@property
def test_weathers(self):
return [1]
def build_experiments(self):
"""
Creates the whole set of experiment objects.
The experiments created depend on the selected town.
"""
# We check the town and, based on that, define the town-related parameters.
# The size of the vector is related to the number of tasks; inside each
# task there are also multiple poses (start and end positions).
if self._city_name == 'Town01':
poses_tasks = [[[7, 3]], [[138, 17]], [[140, 134]], [[140, 134]]]
vehicles_tasks = [0, 0, 0, 20]
pedestrians_tasks = [0, 0, 0, 50]
else:
poses_tasks = [[[4, 2]], [[37, 76]], [[19, 66]], [[19, 66]]]
vehicles_tasks = [0, 0, 0, 15]
pedestrians_tasks = [0, 0, 0, 50]
# We set the camera
# This single RGB camera is used on every experiment
camera = Camera('CameraRGB')
camera.set(FOV=100)
camera.set_image_size(800, 600)
camera.set_position(2.0, 0.0, 1.4)
camera.set_rotation(-15.0, 0, 0)
# Based on the parameters, creates a vector with experiment objects.
experiments_vector = []
for weather in self.weathers:
for iteration in range(len(poses_tasks)):
poses = poses_tasks[iteration]
vehicles = vehicles_tasks[iteration]
pedestrians = pedestrians_tasks[iteration]
conditions = CarlaSettings()
conditions.set(
SendNonPlayerAgentsInfo=True,
NumberOfVehicles=vehicles,
NumberOfPedestrians=pedestrians,
WeatherId=weather
)
# Add all the cameras that were set for this experiment
conditions.add_sensor(camera)
experiment = Experiment()
experiment.set(
Conditions=conditions,
Poses=poses,
Task=iteration,
Repetitions=1
)
experiments_vector.append(experiment)
return experiments_vector
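
As a rough illustration of how such a suite is consumed (a sketch only, relying on the `ExperimentSuite` accessors defined further down in this commit):

```python
# Illustrative only; uses the accessors of the ExperimentSuite base class.
from carla.driving_benchmark.experiment_suites import BasicExperimentSuite

suite = BasicExperimentSuite('Town01')

print(len(suite.get_experiments()))      # 4: one experiment per task for the single weather
print(suite.weathers)                    # {1}: train and test weathers coincide here
print(suite.dynamic_tasks)               # [3]: only the last task adds vehicles and pedestrians
print(suite.get_number_of_poses_task())  # 1: a single start/end pose pair per task
```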

View File

@ -8,63 +8,21 @@
from __future__ import print_function
import os
from .benchmark import Benchmark
from .experiment import Experiment
from carla.driving_benchmark.experiment import Experiment
from carla.sensor import Camera
from carla.settings import CarlaSettings
from .metrics import compute_summary
from carla.driving_benchmark.experiment_suites.experiment_suite import ExperimentSuite
class CoRL2017(Benchmark):
class CoRL2017(ExperimentSuite):
def get_all_statistics(self):
@property
def train_weathers(self):
return [1, 3, 6, 8]
summary = compute_summary(os.path.join(
self._full_name, self._suffix_name), [3])
return summary
def plot_summary_train(self):
self._plot_summary([1.0, 3.0, 6.0, 8.0])
def plot_summary_test(self):
self._plot_summary([4.0, 14.0])
def _plot_summary(self, weathers):
"""
We plot the summary of the testing for the set selected weathers.
The test weathers are [4,14]
"""
metrics_summary = compute_summary(os.path.join(
self._full_name, self._suffix_name), [3])
for metric, values in metrics_summary.items():
print('Metric : ', metric)
for weather, tasks in values.items():
if weather in set(weathers):
print(' Weather: ', weather)
count = 0
for t in tasks:
print(' Task ', count, ' -> ', t)
count += 1
print(' AvG -> ', float(sum(tasks)) / float(len(tasks)))
def _calculate_time_out(self, distance):
"""
Function to return the timeout (in milliseconds) that is calculated based on the distance to the goal.
This is the same timeout as used in the CoRL paper.
"""
return ((distance / 100000.0) / 10.0) * 3600.0 + 10.0
@property
def test_weathers(self):
return [4, 14]
def _poses_town01(self):
"""
@ -128,10 +86,12 @@ class CoRL2017(Benchmark):
_poses_navigation()
]
def _build_experiments(self):
def build_experiments(self):
"""
Creates the whole set of experiment objects.
The experiments created depend on the selected Town.
"""
# We set the camera
@ -139,13 +99,10 @@ class CoRL2017(Benchmark):
camera = Camera('CameraRGB')
camera.set(FOV=100)
camera.set_image_size(800, 600)
camera.set_position(2.0, 0.0, 1.4)
camera.set_rotation(-15.0, 0, 0)
weathers = [1, 3, 6, 8, 4, 14]
if self._city_name == 'Town01':
poses_tasks = self._poses_town01()
vehicles_tasks = [0, 0, 0, 20]
@ -157,7 +114,7 @@ class CoRL2017(Benchmark):
experiments_vector = []
for weather in weathers:
for weather in self.weathers:
for iteration in range(len(poses_tasks)):
poses = poses_tasks[iteration]
@ -166,13 +123,10 @@ class CoRL2017(Benchmark):
conditions = CarlaSettings()
conditions.set(
SynchronousMode=True,
SendNonPlayerAgentsInfo=True,
NumberOfVehicles=vehicles,
NumberOfPedestrians=pedestrians,
WeatherId=weather,
SeedVehicles=123456789,
SeedPedestrians=123456789
WeatherId=weather
)
# Add all the cameras that were set for this experiment
@ -182,22 +136,9 @@ class CoRL2017(Benchmark):
experiment.set(
Conditions=conditions,
Poses=poses,
Id=iteration,
Task=iteration,
Repetitions=1
)
experiments_vector.append(experiment)
return experiments_vector
def _get_details(self):
# Function to get automatic information from the experiment for writing purposes
return 'corl2017_' + self._city_name
def _get_pose_and_task(self, line_on_file):
"""
Returns the pose and task this experiment is at, based on the line it was on
in the log file.
"""
# We assume that the number of poses is constant
return int(line_on_file / len(self._experiments)), line_on_file % 25

View File

@ -0,0 +1,102 @@
# Subclasses redefine how the timeout for an episode is calculated
import abc
class ExperimentSuite(object):
def __init__(self, city_name):
self._city_name = city_name
self._experiments = self.build_experiments()
def calculate_time_out(self, path_distance):
"""
Function to return the timeout, in milliseconds,
that is calculated based on the distance to the goal.
This is the same timeout as used in the CoRL paper.
"""
return ((path_distance / 1000.0) / 10.0) * 3600.0 + 10.0
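
A small worked example of the formula above, under the assumption that `path_distance` is given in metres: a 700 m path yields a timeout of 262, i.e. the travel time at an average speed of 10 km/h plus a constant buffer of 10. This value is later compared against the elapsed episode time that the benchmark derives from the game timestamps (see the division by 1000.0 earlier in this diff).

```python
# Worked example (illustrative numbers), assuming path_distance is in metres.
path_distance = 700.0
time_out = ((path_distance / 1000.0) / 10.0) * 3600.0 + 10.0
# (0.7 km / 10 km/h) * 3600 + 10 = 252 + 10 = 262
print(time_out)  # 262.0
```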
def get_number_of_poses_task(self):
"""
Get the number of poses a task has for this benchmark
"""
# Warning: assumes that all tasks have the same size
return len(self._experiments[0].poses)
def get_experiments(self):
"""
Getter for the experiment set.
"""
return self._experiments
@property
def dynamic_tasks(self):
"""
Returns the tasks that contain dynamic obstacles
"""
dynamic_tasks = set()
for exp in self._experiments:
if exp.conditions.NumberOfVehicles > 0 or exp.conditions.NumberOfPedestrians > 0:
dynamic_tasks.add(exp.task)
return list(dynamic_tasks)
@property
def metrics_parameters(self):
"""
Property to return the parameters for the metric module
Could be redefined depending on the needs of the user.
"""
return {
'intersection_offroad': {'frames_skip': 10,
'frames_recount': 20,
'threshold': 0.3
},
'intersection_otherlane': {'frames_skip': 10,
'frames_recount': 20,
'threshold': 0.4
},
'collision_other': {'frames_skip': 10,
'frames_recount': 20,
'threshold': 400
},
'collision_vehicles': {'frames_skip': 10,
'frames_recount': 30,
'threshold': 400
},
'collision_pedestrians': {'frames_skip': 5,
'frames_recount': 100,
'threshold': 300
},
}
@property
def weathers(self):
weathers = set(self.train_weathers)
weathers.update(self.test_weathers)
return weathers
@abc.abstractmethod
def build_experiments(self):
"""
Returns a set of experiments to be evaluated
Must be redefined in an inherited class.
"""
@abc.abstractproperty
def train_weathers(self):
"""
Return the weathers that are considered as training conditions
"""
@abc.abstractproperty
def test_weathers(self):
"""
Return the weathers that are considered as testing conditions
"""

View File

@ -0,0 +1,335 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import numpy as np
import math
import os
sldist = lambda c1, c2: math.sqrt((c2[0] - c1[0]) ** 2 + (c2[1] - c1[1]) ** 2)
flatten = lambda l: [item for sublist in l for item in sublist]
class Metrics(object):
"""
The metrics class is made to take the driving measurements
and calculate some specific performance metrics.
"""
def __init__(self, parameters, dynamic_tasks):
"""
Args:
parameters: A dictionary with the parameters used to determine how to count infractions
dynamic_tasks: A list of all the dynamic tasks (those that contain dynamic objects)
"""
self._parameters = parameters
self._parameters['dynamic_tasks'] = dynamic_tasks
def _divide_by_episodes(self, measurements_matrix, header):
"""
Divides the measurements matrix into different episodes.
Args:
measurements_matrix: The full measurements matrix
header: The header from the measurements matrix
"""
# Read the previous values for position zero
prev_start = measurements_matrix[0, header.index('start_point')]
prev_end = measurements_matrix[0, header.index('end_point')]
prev_exp_id = measurements_matrix[0, header.index('exp_id')]
# Start at the position 1.
i = 1
prev_i_position = 0
episode_matrix_metrics = []
while i < measurements_matrix.shape[0]:
current_start = measurements_matrix[i, header.index('start_point')]
current_end = measurements_matrix[i, header.index('end_point')]
current_exp_id = measurements_matrix[i, header.index('exp_id')]
# If there is a change in the position it means it is a new episode for sure.
if (current_start != prev_start and current_end != prev_end) \
or current_exp_id != prev_exp_id:
episode_matrix_metrics.append(measurements_matrix[prev_i_position:i, :])
prev_i_position = i
prev_start = current_start
prev_end = current_end
prev_exp_id = current_exp_id
i += 1
episode_matrix_metrics.append(measurements_matrix[prev_i_position:-1, :])
return episode_matrix_metrics
def _get_collisions(self, selected_matrix, header):
"""
Get the number of collisions for pedestrians, vehicles or other
Args:
selected_matrix: The matrix with the summary of all the experiments
header: The header, used to know the positions of the details
"""
count_collisions_general = 0
count_collisions_pedestrian = 0
count_collisions_vehicle = 0
i = 1
# Computing general collisions
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_other')]
- selected_matrix[
(i - self._parameters['collision_other']['frames_skip']), header.index(
'collision_other')]) > \
self._parameters['collision_other']['threshold']:
count_collisions_general += 1
i += self._parameters['collision_other']['frames_recount']
i += 1
i = 1
# Computing collisions for vehicles
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_vehicles')]
- selected_matrix[
(i - self._parameters['collision_vehicles']['frames_skip']), header.index(
'collision_vehicles')]) > \
self._parameters['collision_vehicles']['threshold']:
count_collisions_vehicle += 1
i += self._parameters['collision_vehicles']['frames_recount']
i += 1
i = 1
# Computing the collisions for pedestrians
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_pedestrians')]
- selected_matrix[i - self._parameters['collision_pedestrians']['frames_skip'],
header.index('collision_pedestrians')]) > \
self._parameters['collision_pedestrians']['threshold']:
count_collisions_pedestrian += 1
i += self._parameters['collision_pedestrians']['frames_recount']
i += 1
return count_collisions_general, count_collisions_vehicle, count_collisions_pedestrian
def _get_distance_traveled(self, selected_matrix, header):
"""
Compute the total distance travelled
Args:
selected_matrix: The matrix with the summary of all the experiments
header: The header, used to know the positions of the details
"""
prev_x = selected_matrix[0, header.index('pos_x')]
prev_y = selected_matrix[0, header.index('pos_y')]
i = 1
accumulated_distance = 0
while i < selected_matrix.shape[0]:
x = selected_matrix[i, header.index('pos_x')]
y = selected_matrix[i, header.index('pos_y')]
accumulated_distance += sldist((x, y), (prev_x, prev_y))
prev_x = x
prev_y = y
i += 1
return accumulated_distance / (1000.0)
def _get_out_of_road_lane(self, selected_matrix, header):
"""
Check for the situations where the agent goes out of the road.
Args:
selected_matrix: The matrix with the summary of all the experiments
header: The header, used to know the positions of the details
"""
count_sidewalk_intersect = 0
count_lane_intersect = 0
i = 0
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('intersection_offroad')]
- selected_matrix[(i - self._parameters['intersection_offroad']['frames_skip']),
header.index('intersection_offroad')]) \
> self._parameters['intersection_offroad']['threshold']:
count_sidewalk_intersect += 1
i += self._parameters['intersection_offroad']['frames_recount']
if i >= selected_matrix.shape[0]:
break
if (selected_matrix[i, header.index('intersection_otherlane')]
- selected_matrix[(i - self._parameters['intersection_otherlane']['frames_skip']),
header.index('intersection_otherlane')]) \
> self._parameters['intersection_otherlane']['threshold']:
count_lane_intersect += 1
i += self._parameters['intersection_otherlane']['frames_recount']
i += 1
return count_lane_intersect, count_sidewalk_intersect
def compute(self, path):
"""
Compute a dictionary containing the following metrics
* Off Road Intersection: The number of times the agent goes out of the road.
The intersection is only counted if the area of the vehicle outside
of the road is bigger than a *threshold*.
* Other Lane Intersection: The number of times the agent goes to the other
lane. The intersection is only counted if the area of the vehicle on the
other lane is bigger than a *threshold*.
* Vehicle Collisions: The number of collisions with vehicles that have
an impact bigger than a *threshold*.
* Pedestrian Collisions: The number of collisions with pedestrians
that have an impact bigger than a threshold.
* General Collisions: The number of collisions with all other
objects.
Args:
path: Path where the log files are.
"""
with open(os.path.join(path, 'summary.csv'), "rU") as f:
header = f.readline()
header = header.split(',')
header[-1] = header[-1][:-1]
with open(os.path.join(path, 'measurements.csv'), "rU") as f:
header_metrics = f.readline()
header_metrics = header_metrics.split(',')
header_metrics[-1] = header_metrics[-1][:-1]
result_matrix = np.loadtxt(os.path.join(path, 'summary.csv'), delimiter=",", skiprows=1)
# Corner Case: The presented test just had one episode
if result_matrix.ndim == 1:
result_matrix = np.expand_dims(result_matrix, axis=0)
tasks = np.unique(result_matrix[:, header.index('exp_id')])
all_weathers = np.unique(result_matrix[:, header.index('weather')])
measurements_matrix = np.loadtxt(os.path.join(path, 'measurements.csv'), delimiter=",",
skiprows=1)
metrics_dictionary = {'episodes_completion': {w: [0] * len(tasks) for w in all_weathers},
'intersection_offroad': {w: [[] for i in range(len(tasks))] for w in
all_weathers},
'intersection_otherlane': {w: [[] for i in range(len(tasks))] for w in
all_weathers},
'collision_pedestrians': {w: [[] for i in range(len(tasks))] for w in
all_weathers},
'collision_vehicles': {w: [[] for i in range(len(tasks))] for w in
all_weathers},
'collision_other': {w: [[] for i in range(len(tasks))] for w in
all_weathers},
'episodes_fully_completed': {w: [0] * len(tasks) for w in
all_weathers},
'average_speed': {w: [0] * len(tasks) for w in all_weathers},
'driven_kilometers': {w: [0] * len(tasks) for w in all_weathers}
}
for t in range(len(tasks)):
experiment_results_matrix = result_matrix[
result_matrix[:, header.index('exp_id')] == tasks[t]]
weathers = np.unique(experiment_results_matrix[:, header.index('weather')])
for w in weathers:
experiment_results_matrix = result_matrix[
np.logical_and(result_matrix[:, header.index(
'exp_id')] == tasks[t], result_matrix[:, header.index('weather')] == w)]
experiment_metrics_matrix = measurements_matrix[
np.logical_and(measurements_matrix[:, header_metrics.index(
'exp_id')] == float(tasks[t]),
measurements_matrix[:, header_metrics.index('weather')] == float(
w))]
metrics_dictionary['episodes_fully_completed'][w][t] = \
experiment_results_matrix[:, header.index('result')].tolist()
metrics_dictionary['episodes_completion'][w][t] = \
((experiment_results_matrix[:, header.index('initial_distance')]
- experiment_results_matrix[:, header.index('final_distance')])
/ experiment_results_matrix[:, header.index('initial_distance')]).tolist()
# Now we divide the experiment metrics matrix
episode_experiment_metrics_matrix = self._divide_by_episodes(
experiment_metrics_matrix, header_metrics)
count = 0
for episode_experiment_metrics in episode_experiment_metrics_matrix:
km_run_episodes = self._get_distance_traveled(
episode_experiment_metrics, header_metrics)
metrics_dictionary['driven_kilometers'][w][t] += km_run_episodes
metrics_dictionary['average_speed'][w][t] = \
km_run_episodes / (experiment_results_matrix[count,
header.index(
'final_time')] / 3600.0)
count += 1
lane_road = self._get_out_of_road_lane(
episode_experiment_metrics, header_metrics)
metrics_dictionary['intersection_otherlane'][
w][t].append(lane_road[0])
metrics_dictionary['intersection_offroad'][
w][t].append(lane_road[1])
if tasks[t] in set(self._parameters['dynamic_tasks']):
collisions = self._get_collisions(episode_experiment_metrics,
header_metrics)
metrics_dictionary['collision_pedestrians'][
w][t].append(collisions[2])
metrics_dictionary['collision_vehicles'][
w][t].append(collisions[1])
metrics_dictionary['collision_other'][
w][t].append(collisions[0])
else:
metrics_dictionary['collision_pedestrians'][
w][t].append(0)
metrics_dictionary['collision_vehicles'][
w][t].append(0)
metrics_dictionary['collision_other'][
w][t].append(0)
return metrics_dictionary
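
A hedged sketch of how this class can be used on its own: the parameters and dynamic tasks come from an experiment suite, and the path points at the folder a previous benchmark run wrote under `_benchmarks_results` (folder name illustrative).

```python
# Sketch only; assumes a completed run already wrote summary.csv and measurements.csv.
from carla.driving_benchmark.metrics import Metrics
from carla.driving_benchmark.experiment_suites import BasicExperimentSuite

suite = BasicExperimentSuite('Town01')
metrics = Metrics(suite.metrics_parameters, suite.dynamic_tasks)

# compute() reads summary.csv and measurements.csv from the run folder and
# returns the per-weather, per-task dictionary built above.
path = '_benchmarks_results/basic_example_BasicExperimentSuite_Town01'  # illustrative
metrics_summary = metrics.compute(path)
```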

View File

@ -0,0 +1,244 @@
import csv
import datetime
import os
class Recording(object):
def __init__(self
, name_to_save
, continue_experiment
, save_images
):
self._dict_summary = {'exp_id': -1,
'rep': -1,
'weather': -1,
'start_point': -1,
'end_point': -1,
'result': -1,
'initial_distance': -1,
'final_distance': -1,
'final_time': -1,
'time_out': -1
}
self._dict_measurements = {'exp_id': -1,
'rep': -1,
'weather': -1,
'start_point': -1,
'end_point': -1,
'collision_other': -1,
'collision_pedestrians': -1,
'collision_vehicles': -1,
'intersection_otherlane': -1,
'intersection_offroad': -1,
'pos_x': -1,
'pos_y': -1,
'steer': -1,
'throttle': -1,
'brake': -1
}
# Just in case it is the first time and there is no benchmark results folder
if not os.path.exists('_benchmarks_results'):
os.mkdir('_benchmarks_results')
# Generate the full path for the log files
self._path = os.path.join('_benchmarks_results'
, name_to_save
)
# Check for continuation of the experiment; also returns the last line, used for test purposes.
# If you don't want to continue, it will create a new path name with a number appended.
self._path, _ = self._continue_experiment(continue_experiment)
self._create_log_files()
# A log file with a date: it shows when the last access was and logs what was tested,
now = datetime.datetime.now()
self._internal_log_name = os.path.join(self._path, 'log_' + now.strftime("%Y%m%d%H%M"))
open(self._internal_log_name, 'w').close()
# Store the save-images flag, and also store the filename format used for saving images
self._save_images = save_images
self._image_filename_format = os.path.join(
self._path, '_images/episode_{:s}/{:s}/image_{:0>5d}.jpg')
@property
def path(self):
return self._path
def log_poses(self, start_index, end_index, weather_id):
with open(self._internal_log_name, 'a+') as log:
log.write(' Start Poses (%d %d ) on weather %d \n ' %
(start_index, end_index, weather_id))
def log_poses_finish(self):
with open(self._internal_log_name, 'a+') as log:
log.write('Finished Task')
def log_start(self, id_experiment):
with open(self._internal_log_name, 'a+') as log:
log.write('Start Task %d \n' % id_experiment)
def log_end(self):
with open(self._internal_log_name, 'a+') as log:
log.write('====== Finished Entire Benchmark ======')
def write_summary_results(self, experiment, pose, rep,
path_distance, remaining_distance,
final_time, time_out, result):
"""
Method to record the summary of an episode (pose) execution
"""
self._dict_summary['exp_id'] = experiment.task
self._dict_summary['rep'] = rep
self._dict_summary['weather'] = experiment.Conditions.WeatherId
self._dict_summary['start_point'] = pose[0]
self._dict_summary['end_point'] = pose[1]
self._dict_summary['result'] = result
self._dict_summary['initial_distance'] = path_distance
self._dict_summary['final_distance'] = remaining_distance
self._dict_summary['final_time'] = final_time
self._dict_summary['time_out'] = time_out
with open(os.path.join(self._path, 'summary.csv'), 'a+') as ofd:
w = csv.DictWriter(ofd, self._dict_summary.keys())
w.writerow(self._dict_summary)
def write_measurements_results(self, experiment, rep, pose, reward_vec, control_vec):
"""
Method to record the measurements, sensors,
controls and status of the entire benchmark.
"""
with open(os.path.join(self._path, 'measurements.csv'), 'a+') as rfd:
rw = csv.DictWriter(rfd, self._dict_measurements.keys())
for i in range(len(reward_vec)):
self._dict_measurements['exp_id'] = experiment.task
self._dict_measurements['rep'] = rep
self._dict_measurements['start_point'] = pose[0]
self._dict_measurements['end_point'] = pose[1]
self._dict_measurements['weather'] = experiment.Conditions.WeatherId
self._dict_measurements['collision_other'] = reward_vec[
i].collision_other
self._dict_measurements['collision_pedestrians'] = reward_vec[
i].collision_pedestrians
self._dict_measurements['collision_vehicles'] = reward_vec[
i].collision_vehicles
self._dict_measurements['intersection_otherlane'] = reward_vec[
i].intersection_otherlane
self._dict_measurements['intersection_offroad'] = reward_vec[
i].intersection_offroad
self._dict_measurements['pos_x'] = reward_vec[
i].transform.location.x
self._dict_measurements['pos_y'] = reward_vec[
i].transform.location.y
self._dict_measurements['steer'] = control_vec[
i].steer
self._dict_measurements['throttle'] = control_vec[
i].throttle
self._dict_measurements['brake'] = control_vec[
i].brake
rw.writerow(self._dict_measurements)
def _create_log_files(self):
"""
Just create the log files and add the necessary headers to them.
"""
if not self._experiment_exist():
os.mkdir(self._path)
with open(os.path.join(self._path, 'summary.csv'), 'w') as ofd:
w = csv.DictWriter(ofd, self._dict_summary.keys())
w.writeheader()
with open(os.path.join(self._path, 'measurements.csv'), 'w') as rfd:
rw = csv.DictWriter(rfd, self._dict_measurements.keys())
rw.writeheader()
def _continue_experiment(self, continue_experiment):
"""
Get the line in the file for the experiment.
If continue_experiment is false and the experiment exists, generate a new file path
"""
def get_non_existent_path(f_name_path):
"""
Get the path to a filename which does not exist by incrementing path.
"""
if not os.path.exists(f_name_path):
return f_name_path
filename, file_extension = os.path.splitext(f_name_path)
i = 1
new_f_name = "{}-{}{}".format(filename, i, file_extension)
while os.path.exists(new_f_name):
i += 1
new_f_name = "{}-{}{}".format(filename, i, file_extension)
return new_f_name
# start the new path as the same one as before
new_path = self._path
# if the experiment exists
if self._experiment_exist():
# If you want to continue just get the last position
if continue_experiment:
line_on_file = self._get_last_position()
else:
# Get a new non_conflicting path name
new_path = get_non_existent_path(new_path)
line_on_file = 1
else:
line_on_file = 1
return new_path, line_on_file
def save_images(self, sensor_data, episode_name, frame):
"""
Save an image during the experiment
"""
if self._save_images:
for name, image in sensor_data.items():
image.save_to_disk(self._image_filename_format.format(
episode_name, name, frame))
def get_pose_and_experiment(self, number_poses_task):
"""
Based on the line in the log file, return the current pose and experiment.
If the line is zero, create new log files.
"""
# Warning: assumes that all tasks have the same size
line_on_file = self._get_last_position() - 1
if line_on_file == 0:
return 0, 0
else:
return line_on_file % number_poses_task, line_on_file // number_poses_task
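
A small worked example of the arithmetic above, with illustrative numbers: 25 poses per task and a summary file that already holds a header plus 27 result rows resumes at pose 2 of experiment 1.

```python
# Worked example (illustrative numbers) of get_pose_and_experiment.
number_poses_task = 25
lines_in_summary = 1 + 27               # header + result rows, as counted by _get_last_position()
line_on_file = lines_in_summary - 1     # 27
pose = line_on_file % number_poses_task         # 2
experiment = line_on_file // number_poses_task  # 1
```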
def _experiment_exist(self):
return os.path.exists(self._path)
def _get_last_position(self):
"""
Get the last position in the experiment summary file.
With this you are able to continue from there.
Returns:
int, the position
"""
# Try to open; if the file is not found, return zero
try:
with open(os.path.join(self._path, 'summary.csv')) as f:
return sum(1 for _ in f)
except IOError:
return 0
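
To make the continuation behaviour concrete, a hedged sketch (folder names are illustrative): with `continue_experiment=False` an existing folder is never reused and a numbered sibling is created instead, while `continue_experiment=True` reopens the same folder so the benchmark can resume from `summary.csv`.

```python
# Sketch of the continuation behaviour; names are illustrative.
from carla.driving_benchmark.recording import Recording

# First run: creates _benchmarks_results/ContinuationSketch.
rec1 = Recording(name_to_save='ContinuationSketch',
                 continue_experiment=False, save_images=False)

# Second run without continuation: gets a fresh, numbered folder,
# e.g. _benchmarks_results/ContinuationSketch-1.
rec2 = Recording(name_to_save='ContinuationSketch',
                 continue_experiment=False, save_images=False)

# Reopening with continuation: keeps the original folder, and
# get_pose_and_experiment() resumes from the last line of summary.csv.
rec3 = Recording(name_to_save='ContinuationSketch',
                 continue_experiment=True, save_images=False)
```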

View File

@ -0,0 +1,124 @@
import os
import numpy as np
import json
def print_summary(metrics_summary, weathers, path):
"""
We print the summary of the testing for the selected set of weathers.
We take the raw data and print it the way it was described in the CoRL 2017 paper
"""
# Improve readability by adding a weather dictionary
weather_name_dict = {1: 'Clear Noon', 3: 'After Rain Noon',
6: 'Heavy Rain Noon', 8: 'Clear Sunset',
4: 'Cloudy After Rain', 14: 'Soft Rain Sunset'}
# First we write the entire dictionary to the benchmark folder.
with open(os.path.join(path, 'metrics.json'), 'w') as fo:
fo.write(json.dumps(metrics_summary))
# Second, we print the metrics that only need to be averaged
metrics_to_average = [
'episodes_fully_completed',
'episodes_completion'
]
# We compute the number of episodes based on size of average completion
number_of_episodes = len(list(metrics_summary['episodes_fully_completed'].items())[0][1])
for metric in metrics_to_average:
if metric == 'episodes_completion':
print ("Average Percentage of Distance to Goal Travelled ")
else:
print ("Percentage of Successful Episodes")
print ("")
values = metrics_summary[metric]
metric_sum_values = np.zeros(number_of_episodes)
for weather, tasks in values.items():
if weather in set(weathers):
print(' Weather: ', weather_name_dict[weather])
count = 0
for t in tasks:
# if isinstance(t, np.ndarray) or isinstance(t, list):
if t == []:
print(' Metric Not Computed')
else:
print(' Task:', count, ' -> ', float(sum(t)) / float(len(t)))
metric_sum_values[count] += (float(sum(t)) / float(len(t))) * 1.0 / float(
len(weathers))
count += 1
print (' Average Between Weathers')
for i in range(len(metric_sum_values)):
print(' Task ', i, ' -> ', metric_sum_values[i])
print ("")
infraction_metrics = [
'collision_pedestrians',
'collision_vehicles',
'collision_other',
'intersection_offroad',
'intersection_otherlane'
]
# We need to collect the total number of kilometers for each task
for metric in infraction_metrics:
values_driven = metrics_summary['driven_kilometers']
values = metrics_summary[metric]
metric_sum_values = np.zeros(number_of_episodes)
summed_driven_kilometers = np.zeros(number_of_episodes)
if metric == 'collision_pedestrians':
print ('Avg. Kilometers driven before a collision to a PEDESTRIAN')
elif metric == 'collision_vehicles':
print('Avg. Kilometers driven before a collision to a VEHICLE')
elif metric == 'collision_other':
print('Avg. Kilometers driven before a collision to a STATIC OBSTACLE')
elif metric == 'intersection_offroad':
print('Avg. Kilometers driven before going OUTSIDE OF THE ROAD')
else:
print('Avg. Kilometers driven before invading the OPPOSITE LANE')
# print (zip(values.items(), values_driven.items()))
for items_metric, items_driven in zip(values.items(), values_driven.items()):
weather = items_metric[0]
tasks = items_metric[1]
tasks_driven = items_driven[1]
if weather in set(weathers):
print(' Weather: ', weather_name_dict[weather])
count = 0
for t, t_driven in zip(tasks, tasks_driven):
# if isinstance(t, np.ndarray) or isinstance(t, list):
if t == []:
print('Metric Not Computed')
else:
if sum(t) > 0:
print(' Task ', count, ' -> ', t_driven / float(sum(t)))
else:
print(' Task ', count, ' -> more than', t_driven)
metric_sum_values[count] += float(sum(t))
summed_driven_kilometers[count] += t_driven
count += 1
print (' Average Between Weathers')
for i in range(len(metric_sum_values)):
if metric_sum_values[i] == 0:
print(' Task ', i, ' -> more than ', summed_driven_kilometers[i])
else:
print(' Task ', i, ' -> ', summed_driven_kilometers[i] / metric_sum_values[i])
print ("")
print("")
print("")

View File

@ -1,7 +1,7 @@
0.0,0.0,-3811.000000
0.0,0.0,-0.3811000000
0.000000,0.000000,0.0
1.000000,1.000000,1.000000
-1643.022,-1643.022,0.000
-16.43022,-16.43022,0.000
49, 41
0,0 0,40 40
0,40 0,0 40

View File

@ -1,7 +1,7 @@
544.000000,-10748.000000,-22.000000
5.4400,-107.48000,-0.22000000
0.000000,0.000000,0.000000
1.000000,1.000000,1.000000
-1643.022,-1643.022,0.000
-16.43022,-16.43022,0.000
25, 25
0,10 0,24 14
0,24 0,10 14

View File

@ -14,8 +14,9 @@ class CityTrack(object):
def __init__(self, city_name):
# These values are fixed for every city.
self._node_density = 50.0
self._pixel_density = 16.43
self._pixel_density = 0.1643
self._map = CarlaMap(city_name, self._pixel_density, self._node_density)

View File

@ -137,7 +137,6 @@ class Converter(object):
"""
rotation = np.array([world[0], world[1], world[2]])
rotation *= 1e2 # meters to centimeters.
rotation = rotation.dot(self._worldrotation)
relative_location = [rotation[0] + self._worldoffset[0] - self._mapoffset[0],

View File

@ -50,11 +50,13 @@ class Planner(object):
def get_next_command(self, source, source_ori, target, target_ori):
"""
Computes the full plan and returns the next command,
:param source: source position
:param source_ori: source orientation
:param target: target position
:param target_ori: target orientation
:return: a command ( Straight,Lane Follow, Left or Right)
Args
source: source position
source_ori: source orientation
target: target position
target_ori: target orientation
Returns
a command ( Straight,Lane Follow, Left or Right)
"""
track_source = self._city_track.project_node(source)

View File

@ -8,26 +8,11 @@
import argparse
import logging
import time
from carla.benchmarks.agent import Agent
from carla.benchmarks.corl_2017 import CoRL2017
from carla.client import make_carla_client, VehicleControl
from carla.tcp import TCPConnectionError
class Manual(Agent):
"""
Sample redefinition of the Agent,
An agent that goes straight
"""
def run_step(self, measurements, sensor_data, target):
control = VehicleControl()
control.throttle = 0.9
return control
from carla.driving_benchmark import run_driving_benchmark
from carla.driving_benchmark.experiment_suites import CoRL2017
from carla.driving_benchmark.experiment_suites import BasicExperimentSuite
from carla.agent import ForwardAgent
if __name__ == '__main__':
@ -58,13 +43,23 @@ if __name__ == '__main__':
metavar='C',
default='Town01',
help='The town that is going to be used on benchmark'
+ '(needs to match active town in server, options: Town01 or Town02)')
+ '(needs to match active town in server, options: Town01 or Town02)')
argparser.add_argument(
'-n', '--log_name',
metavar='T',
default='test',
help='The name of the log file to be created by the benchmark'
)
)
argparser.add_argument(
'--corl-2017',
action='store_true',
help='If you want to benchmark the corl-2017 instead of the Basic one'
)
argparser.add_argument(
'--continue-experiment',
action='store_true',
help='If you want to continue the experiment with the same name'
)
args = argparser.parse_args()
if args.debug:
@ -75,20 +70,23 @@ if __name__ == '__main__':
log_level = logging.WARNING
logging.basicConfig(format='%(levelname)s: %(message)s', level=log_level)
logging.info('listening to server %s:%s', args.host, args.port)
while True:
try:
with make_carla_client(args.host, args.port) as client:
corl = CoRL2017(city_name=args.city_name, name_to_save=args.log_name)
agent = Manual(args.city_name)
results = corl.benchmark_agent(agent, client)
corl.plot_summary_test()
corl.plot_summary_train()
# We instantiate a forward agent, a simple policy that just sets
# the acceleration to 0.9 and the steering to zero
agent = ForwardAgent()
break
# We instantiate an experiment suite. Basically a set of experiments
# that are going to be evaluated on this benchmark.
if args.corl_2017:
experiment_suite = CoRL2017(args.city_name)
else:
print (' WARNING: running the basic driving benchmark, to run for CoRL 2017'
' experiment suites, you should run'
' python driving_benchmark_example.py --corl-2017')
experiment_suite = BasicExperimentSuite(args.city_name)
except TCPConnectionError as error:
logging.error(error)
time.sleep(1)
# Now actually run the driving_benchmark
run_driving_benchmark(agent, experiment_suite, args.city_name,
args.log_name, args.continue_experiment,
args.host, args.port)

View File

@ -5,7 +5,8 @@ from setuptools import setup
setup(
name='carla_client',
version='0.8.1',
packages=['carla', 'carla.benchmarks', 'carla.planner'],
packages=['carla', 'carla.driving_benchmark', 'carla.agent',
'carla.driving_benchmark.experiment_suites', 'carla.planner'],
license='MIT License',
description='Python API for communicating with the CARLA server.',
url='https://github.com/carla-simulator/carla',

View File

@ -7,7 +7,7 @@
import logging
import random
import unit_tests
import suite
import carla
@ -18,7 +18,7 @@ from carla.settings import CarlaSettings
from carla.util import make_connection
class _BasicTestBase(unit_tests.CarlaServerTest):
class _BasicTestBase(suite.CarlaServerTest):
def run_carla_client(self, carla_settings, number_of_episodes, number_of_frames, use_autopilot_control=None):
with make_connection(CarlaClient, self.args.host, self.args.port, timeout=15) as client:
logging.info('CarlaClient connected, running %d episodes', number_of_episodes)

View File

@ -4,6 +4,7 @@
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
class CarlaServerTest(object):
def __init__(self, args):
self.args = args

View File

@ -26,7 +26,7 @@ import carla
from carla.tcp import TCPConnectionError
from carla.util import StopWatch
from unit_tests import CarlaServerTest
from suite import CarlaServerTest
# Modified by command-line args.
LOGGING_TO_FILE = False
@ -74,7 +74,7 @@ def iterate_tests():
strip_ext = lambda f: os.path.splitext(os.path.basename(f))[0]
is_valid = lambda obj: inspect.isclass(obj) and issubclass(obj, interface)
folder = os.path.join(os.path.dirname(__file__), 'unit_tests')
folder = os.path.join(os.path.dirname(__file__), 'suite')
modules = glob.glob(os.path.join(folder, "*.py"))
for module_name in set(strip_ext(m) for m in modules if not m.startswith('_')):

File diff suppressed because it is too large

View File

@ -0,0 +1,3 @@
weather,time_out,result,final_time,end_point,final_distance,exp_id,rep,start_point,initial_distance
3,335314,0,335314,29,171.3219381575824,3,0,105,280.44944447968976
3,243.6346,0,243.6346,130,215.56398248559435,3,0,27,174.94691018267446

View File

@ -0,0 +1,26 @@
import unittest
from carla.driving_benchmark.experiment_suites.experiment_suite import ExperimentSuite
from carla.driving_benchmark.experiment_suites.basic_experiment_suite import BasicExperimentSuite
from carla.driving_benchmark.experiment_suites.corl_2017 import CoRL2017
class testExperimentSuite(unittest.TestCase):
def test_init(self):
base_class = ExperimentSuite('Town01')
subclasses_instantiated = [obj('Town01') for obj in ExperimentSuite.__subclasses__()]
def test_properties(self):
all_classes = [obj('Town01') for obj in ExperimentSuite.__subclasses__()]
print (all_classes)
for exp_suite in all_classes:
print(exp_suite.__class__)
print(exp_suite.dynamic_tasks)
print(exp_suite.weathers)

View File

@ -0,0 +1,190 @@
import os
import numpy as np
import unittest
from carla.driving_benchmark.metrics import Metrics
from carla.driving_benchmark.recording import Recording
def sum_matrix(matrix):
# One-line trick to sum all the entries of a matrix
return sum(sum(matrix, []))
class testMetrics(unittest.TestCase):
def __init__(self, *args, **kwargs):
super(testMetrics, self).__init__(*args, **kwargs)
self._metrics_parameters = {
'intersection_offroad': {'frames_skip': 10, # Check intersection always with 10 frames tolerance
'frames_recount': 20,
'threshold': 0.3
},
'intersection_otherlane': {'frames_skip': 10, # Check intersection always with 10 frames tolerance
'frames_recount': 20,
'threshold': 0.4
},
'collision_other': {'frames_skip': 10,
'frames_recount': 20,
'threshold': 400
},
'collision_vehicles': {'frames_skip': 10,
'frames_recount': 30,
'threshold': 400
},
'collision_pedestrians': {'frames_skip': 5,
'frames_recount': 100,
'threshold': 300
},
'dynamic_episodes': [3]
}
def _generate_test_case(self, poses_to_test):
recording = Recording(name_to_save='TestMetric'
, continue_experiment=False, save_images=True
)
from carla.driving_benchmark.experiment import Experiment
from carla.carla_server_pb2 import Measurements
from carla.carla_server_pb2 import Control
for pose in poses_to_test:
experiment = Experiment()
recording.write_summary_results(experiment=experiment, pose=pose, rep=1,
path_distance=200, remaining_distance=0,
final_time=0.2, time_out=49, result=1)
reward_vec = [Measurements().player_measurements for x in range(25)]
control_vec = [Control() for x in range(25)]
recording.write_measurements_results(experiment=experiment,
rep=1, pose=pose, reward_vec=reward_vec,
control_vec=control_vec)
return recording._path
def test_init(self):
# Metric should instantiate with parameters
Metrics(self._metrics_parameters,[3])
def test_divide_by_episodes(self):
metrics_obj = Metrics(self._metrics_parameters,[3])
poses_to_test = [[24, 32], [34, 36], [54, 67]]
path = self._generate_test_case(poses_to_test)
# We start by reading the summary header file and the measurements header file.
with open(os.path.join(path, 'summary.csv'), "r") as f:
header = f.readline()
header = header.split(',')
header[-1] = header[-1][:-2]
with open(os.path.join(path,'measurements.csv'), "r") as f:
header_metrics = f.readline()
header_metrics = header_metrics.split(',')
header_metrics[-1] = header_metrics[-1][:-2]
result_matrix = np.loadtxt(os.path.join(path, 'summary.csv'), delimiter=",", skiprows=1)
# Corner Case: The presented test just had one episode
if result_matrix.ndim == 1:
result_matrix = np.expand_dims(result_matrix, axis=0)
tasks = np.unique(result_matrix[:, header.index('exp_id')])
all_weathers = np.unique(result_matrix[:, header.index('weather')])
measurements_matrix = np.loadtxt(os.path.join(path, 'measurements.csv'), delimiter=",", skiprows=1)
episodes = metrics_obj._divide_by_episodes(measurements_matrix,header_metrics)
self.assertEqual(len(episodes),3)
def test_compute(self):
# This is the last one; generate many cases, including corner cases, to be tested.
metrics_obj = Metrics(self._metrics_parameters,[3])
# Let's start by testing a general file, not from a real run.
# The case is basically an empty case
poses_to_test = [[24, 32], [34, 36], [54, 67]]
path = self._generate_test_case(poses_to_test)
summary_dict = metrics_obj.compute(path)
number_of_colisions_vehicles = sum_matrix(summary_dict['collision_vehicles'][1.0])
number_of_colisions_general = sum_matrix(summary_dict['collision_other'][1.0])
number_of_colisions_pedestrians = sum_matrix(summary_dict['collision_pedestrians'][1.0])
number_of_intersection_offroad = sum_matrix(summary_dict['intersection_offroad'][1.0])
number_of_intersection_otherlane = sum_matrix(summary_dict['intersection_otherlane'][1.0])
self.assertEqual(number_of_colisions_vehicles, 0)
self.assertEqual(number_of_colisions_general, 0)
self.assertEqual(number_of_colisions_pedestrians, 0)
self.assertEqual(number_of_intersection_offroad, 0)
self.assertEqual(number_of_intersection_otherlane, 0)
# Now let's run a collision test on a premade file
path = 'test/unit_tests/test_data/testfile_collisions'
summary_dict = metrics_obj.compute(path)
number_of_colisions_vehicles = sum_matrix(summary_dict['collision_vehicles'][3.0])
number_of_colisions_general = sum_matrix(summary_dict['collision_other'][3.0])
number_of_colisions_pedestrians = sum_matrix(summary_dict['collision_pedestrians'][3.0])
number_of_intersection_offroad = sum_matrix(summary_dict['intersection_offroad'][3.0])
number_of_intersection_otherlane = sum_matrix(summary_dict['intersection_otherlane'][3.0])
self.assertEqual(number_of_colisions_vehicles, 2)
self.assertEqual(number_of_colisions_general, 9)
self.assertEqual(number_of_colisions_pedestrians, 0)
self.assertEqual(number_of_intersection_offroad, 1)
self.assertEqual(number_of_intersection_otherlane, 3)

View File

@ -0,0 +1,235 @@
import unittest
from carla.driving_benchmark.recording import Recording
class testRecording(unittest.TestCase):
def test_init(self):
import os
"""
The recording should have a reasonable full name
"""
recording = Recording(name_to_save='Test1'
, continue_experiment=False, save_images=True
)
_ = open(os.path.join(recording._path,'summary.csv'), 'r')
_ = open(os.path.join(recording._path, 'measurements.csv'), 'r')
# There should be three files in any newly created case
self.assertEqual(len(os.listdir(recording._path)), 3)
def test_write_summary_results(self):
import os
from carla.driving_benchmark.experiment import Experiment
recording = Recording(name_to_save='Test1'
, continue_experiment=False, save_images=True
)
recording.write_summary_results( experiment=Experiment(), pose=[24,32], rep=1,
path_distance=200, remaining_distance=0,
final_time=0.2, time_out=49, result=1)
with open(os.path.join(recording._path, 'summary.csv'), 'r') as f:
header = f.readline().split(',')
# Assert that the header really is a header
self.assertIn('exp_id', header)
self.assertEqual(len(header), len(recording._dict_summary))
# Assert that there is something written in the row
written_row = f.readline().split(',')
# Assert that the number of columns is correct
self.assertEqual(len(written_row), len(recording._dict_summary))
def test_write_measurements_results(self):
import os
from carla.driving_benchmark.experiment import Experiment
from carla.carla_server_pb2 import Measurements
from carla.carla_server_pb2 import Control
recording = Recording(name_to_save='Test1'
, continue_experiment=False, save_images=True
)
reward_vec = [Measurements().player_measurements for x in range(20)]
control_vec = [Control() for x in range(25)]
recording.write_measurements_results(experiment=Experiment(),
rep=1, pose=[24, 32], reward_vec=reward_vec,
control_vec=control_vec)
with open(os.path.join(recording._path, 'measurements.csv'), 'r') as f:
header = f.readline().split(',')
# Assert that the header really is a header
self.assertIn('exp_id', header)
self.assertEqual(len(header), len(recording._dict_measurements))
# Assert that there is something written in the row
written_row = f.readline().split(',')
# Assert that the number of columns is correct
self.assertEqual(len(written_row), len(recording._dict_measurements))
def test_continue_experiment(self):
recording = Recording( name_to_save='Test1'
, continue_experiment=False, save_images=True
)
# A freshly created case should return 1 (only the header has been written)
self.assertEqual(recording._continue_experiment(True)[1], 1)
# If you don't want to continue, it should also return one
self.assertEqual(recording._continue_experiment(False)[1], 1)
from carla.driving_benchmark.experiment import Experiment
recording.write_summary_results(experiment=Experiment(), pose=[24, 32], rep=1,
path_distance=200, remaining_distance=0,
final_time=0.2, time_out=49, result=1)
recording.write_summary_results(experiment=Experiment(), pose=[24, 32], rep=1,
path_distance=200, remaining_distance=0,
final_time=0.2, time_out=49, result=1)
# After writing two experiments it should return 3, so you can start writing at position 3
self.assertEqual(recording._continue_experiment(True)[1], 3)
# If you don't want to continue, it should also return one
self.assertEqual(recording._continue_experiment(False)[1], 1)
def test_get_pose_and_experiment(self):
recording = Recording( name_to_save='Test1'
, continue_experiment=False, save_images=True
)
from carla.driving_benchmark.experiment import Experiment
pose, experiment = recording.get_pose_and_experiment(25)
# A starting experiment should return zero, zero
self.assertEqual(pose, 0)
self.assertEqual(experiment, 0)
recording.write_summary_results( experiment=Experiment(), pose=[24,32], rep=1,
path_distance=200, remaining_distance=0,
final_time=0.2, time_out=49, result=1)
recording.write_summary_results( experiment=Experiment(), pose=[24,32], rep=1,
path_distance=200, remaining_distance=0,
final_time=0.2, time_out=49, result=1)
pose, experiment = recording.get_pose_and_experiment(25)
self.assertEqual(pose, 2)
self.assertEqual(experiment, 0)
for i in range(23):
recording.write_summary_results(experiment=Experiment(), pose=[24, 32], rep=1,
path_distance=200, remaining_distance=0,
final_time=0.2, time_out=49, result=1)
pose, experiment = recording.get_pose_and_experiment(25)
self.assertEqual(pose, 0)
self.assertEqual(experiment, 1)
for i in range(23):
recording.write_summary_results(experiment=Experiment(), pose=[24, 32], rep=1,
path_distance=200, remaining_distance=0,
final_time=0.2, time_out=49, result=1)
pose, experiment = recording.get_pose_and_experiment(25)
self.assertEqual(pose, 23)
self.assertEqual(experiment, 1)
def test_get_pose_and_experiment_corner(self):
from carla.driving_benchmark.experiment import Experiment
recording = Recording( name_to_save='Test1'
, continue_experiment=False, save_images=True
)
pose, experiment = recording.get_pose_and_experiment(1)
# A starting experiment should return zero, zero
self.assertEqual(pose, 0)
self.assertEqual(experiment, 0)
pose, experiment = recording.get_pose_and_experiment(2)
self.assertEqual(pose, 0)
self.assertEqual(experiment, 0)
recording.write_summary_results( experiment=Experiment(), pose=[24, 32], rep=1,
path_distance=200, remaining_distance=0,
final_time=0.2, time_out=49, result=1)
pose, experiment = recording.get_pose_and_experiment(1)
print (pose, experiment)
self.assertEqual(pose, 0)
self.assertEqual(experiment, 1)
pose, experiment = recording.get_pose_and_experiment(2)
print (pose, experiment)
# With one result written and two poses per task, the pose should be one
self.assertEqual(pose, 1)
self.assertEqual(experiment, 0)
pose, experiment = recording.get_pose_and_experiment(3)
print (pose, experiment)
# With one result written and three poses per task, the pose should still be one
self.assertEqual(pose, 1)
self.assertEqual(experiment, 0)
recording.write_summary_results( experiment=Experiment(), pose=[24, 32], rep=1,
path_distance=200, remaining_distance=0,
final_time=0.2, time_out=49, result=1)
pose, experiment = recording.get_pose_and_experiment(2)
self.assertEqual(pose, 0)
self.assertEqual(experiment, 1)
pose, experiment = recording.get_pose_and_experiment(3)
self.assertEqual(pose, 2)
self.assertEqual(experiment, 0)
if __name__ == '__main__':
unittest.main()

View File

@ -40,14 +40,14 @@ def view_start_positions(args):
number_of_player_starts = len(scene.player_start_spots)
if number_of_player_starts > 100: # WARNING: unsafe way to check for city, see issue #313
image = mpimg.imread("carla/planner/Town01.png")
carla_map = CarlaMap('Town01', 16.53, 50)
carla_map = CarlaMap('Town01', 0.1653, 50)
else:
image = mpimg.imread("carla/planner/Town02.png")
carla_map = CarlaMap('Town02', 16.53, 50)
carla_map = CarlaMap('Town02', 0.1653, 50)
_, ax = plt.subplots(1)
fig, ax = plt.subplots(1)
ax.imshow(image)
@ -65,14 +65,18 @@ def view_start_positions(args):
pixel = carla_map.convert_to_pixel([scene.player_start_spots[position].location.x,
scene.player_start_spots[position].location.y,
scene.player_start_spots[position].location.z])
circle = Circle((pixel[0], pixel[1]), 12, color='r', label='A point')
ax.add_patch(circle)
if not args.no_labels:
plt.text(pixel[0], pixel[1], str(position), size='x-small')
plt.axis('off')
plt.show()
fig.savefig('town_positions.pdf', orientation='landscape', bbox_inches='tight')
def main():
argparser = argparse.ArgumentParser(description=__doc__)

View File

@ -13,13 +13,17 @@ pages:
- 'Measurements': 'measurements.md'
- 'Cameras and sensors': 'cameras_and_sensors.md'
- 'F.A.Q.': 'faq.md'
- Driving Benchmark:
- 'Quick Start': 'benchmark_start.md'
- 'General Structure': 'benchmark_structure.md'
- 'Creating Your Benchmark': 'benchmark_creating.md'
- 'Computed Performance Metrics': 'benchmark_metrics.md'
- Building from source:
- 'How to build on Linux': 'how_to_build_on_linux.md'
- 'How to build on Windows': 'how_to_build_on_windows.md'
- Advanced topics:
- 'CARLA Settings': 'carla_settings.md'
- 'Simulator keyboard input': 'simulator_keyboard_input.md'
- 'Benchmark': 'benchmark.md'
- 'Running without display and selecting GPUs': 'carla_headless.md'
- "How to link Epic's Automotive Materials": 'epic_automotive_materials.md'
- Contributing:
@ -31,6 +35,11 @@ pages:
- 'How to add assets': 'how_to_add_assets.md'
- 'CARLA design': 'carla_design.md'
- 'CarlaServer documentation': 'carla_server.md'
- Appendix:
- 'Driving Benchmark Sample Results Town01': 'benchmark_basic_results_town01.md'
- 'Driving Benchmark Sample Results Town02': 'benchmark_basic_results_town02.md'
markdown_extensions:
- admonition