Remove outdated documentation

This commit is contained in:
nsubiron 2018-12-13 16:56:24 +01:00
parent a8b4d0ffa1
commit e6eb1e69e9
21 changed files with 0 additions and 1291 deletions

View File

@ -1,189 +0,0 @@
We show the results for test and train weathers when
[running the simple example](benchmark_creating/#expected-results) for Town01.
The following results should be printed on the screen after running the
example.
----- Printing results for training weathers (Seen in Training) -----
Percentage of Successful Episodes
Weather: Clear Noon
Task: 0 -> 1.0
Task: 1 -> 0.0
Task: 2 -> 0.0
Task: 3 -> 0.0
Average Between Weathers
Task 0 -> 1.0
Task 1 -> 0.0
Task 2 -> 0.0
Task 3 -> 0.0
Average Percentage of Distance to Goal Travelled
Weather: Clear Noon
Task: 0 -> 0.9643630125892909
Task: 1 -> 0.6794216252808839
Task: 2 -> 0.6593855166486696
Task: 3 -> 0.6646695325122313
Average Between Weathers
Task 0 -> 0.9643630125892909
Task 1 -> 0.6794216252808839
Task 2 -> 0.6593855166486696
Task 3 -> 0.6646695325122313
Avg. Kilometers driven before a collision to a PEDESTRIAN
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Avg. Kilometers driven before a collision to a VEHICLE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.11491704214531683
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.11491704214531683
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.22983408429063365
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> 0.12350085985904342
Task 2 -> 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> 0.12350085985904342
Task 2 -> 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Avg. Kilometers driven before invading the OPPOSITE LANE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
----- Printing results for test weathers (Unseen in Training) -----
Percentage of Successful Episodes
Weather: Clear Noon
Task: 0 -> 1.0
Task: 1 -> 0.0
Task: 2 -> 0.0
Task: 3 -> 0.0
Average Between Weathers
Task 0 -> 1.0
Task 1 -> 0.0
Task 2 -> 0.0
Task 3 -> 0.0
Average Percentage of Distance to Goal Travelled
Weather: Clear Noon
Task: 0 -> 0.9643630125892909
Task: 1 -> 0.6794216252808839
Task: 2 -> 0.6593855166486696
Task: 3 -> 0.6646695325122313
Average Between Weathers
Task 0 -> 0.9643630125892909
Task 1 -> 0.6794216252808839
Task 2 -> 0.6593855166486696
Task 3 -> 0.6646695325122313
Avg. Kilometers driven before a collision to a PEDESTRIAN
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Avg. Kilometers driven before a collision to a VEHICLE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.11491704214531683
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.11491704214531683
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> 0.22983408429063365
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> 0.12350085985904342
Task 2 -> 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> 0.12350085985904342
Task 2 -> 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Avg. Kilometers driven before invading the OPPOSITE LANE
Weather: Clear Noon
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365
Average Between Weathers
Task 0 -> more than 0.04316352371637994
Task 1 -> more than 0.12350085985904342
Task 2 -> more than 0.2400373917146113
Task 3 -> more than 0.22983408429063365

View File

@ -1,187 +0,0 @@
We show the results for test and train weathers when
[running the simple example](benchmark_creating/#expected-results) for Town02.
The following results should be printed on the screen after running the
example.
----- Printing results for training weathers (Seen in Training) -----
Percentage of Successful Episodes
Weather: Clear Noon
Task: 0 -> 1.0
Task: 1 -> 0.0
Task: 2 -> 0.0
Task: 3 -> 0.0
Average Between Weathers
Task 0 -> 1.0
Task 1 -> 0.0
Task 2 -> 0.0
Task 3 -> 0.0
Average Percentage of Distance to Goal Travelled
Weather: Clear Noon
Task: 0 -> 0.8127653637426329
Task: 1 -> 0.10658303206448155
Task: 2 -> -0.20448736444348714
Task: 3 -> -0.20446966646041384
Average Between Weathers
Task 0 -> 0.8127653637426329
Task 1 -> 0.10658303206448155
Task 2 -> -0.20448736444348714
Task 3 -> -0.20446966646041384
Avg. Kilometers driven before a collision to a PEDESTRIAN
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Avg. Kilometers driven before a collision to a VEHICLE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> 0.019641485501456352
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> 0.019641485501456352
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> 0.03856641710143665
Task 2 -> 0.03928511962584409
Task 3 -> 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> 0.03856641710143665
Task 2 -> 0.03928511962584409
Task 3 -> 0.039282971002912705
Avg. Kilometers driven before invading the OPPOSITE LANE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
----- Printing results for test weathers (Unseen in Training) -----
Percentage of Successful Episodes
Weather: Clear Noon
Task: 0 -> 1.0
Task: 1 -> 0.0
Task: 2 -> 0.0
Task: 3 -> 0.0
Average Between Weathers
Task 0 -> 1.0
Task 1 -> 0.0
Task 2 -> 0.0
Task 3 -> 0.0
Average Percentage of Distance to Goal Travelled
Weather: Clear Noon
Task: 0 -> 0.8127653637426329
Task: 1 -> 0.10658303206448155
Task: 2 -> -0.20448736444348714
Task: 3 -> -0.20446966646041384
Average Between Weathers
Task 0 -> 0.8127653637426329
Task 1 -> 0.10658303206448155
Task 2 -> -0.20448736444348714
Task 3 -> -0.20446966646041384
Avg. Kilometers driven before a collision to a PEDESTRIAN
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Avg. Kilometers driven before a collision to a VEHICLE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> 0.019641485501456352
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> 0.019641485501456352
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> 0.03856641710143665
Task 2 -> 0.03928511962584409
Task 3 -> 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> 0.03856641710143665
Task 2 -> 0.03928511962584409
Task 3 -> 0.039282971002912705
Avg. Kilometers driven before invading the OPPOSITE LANE
Weather: Clear Noon
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705
Average Between Weathers
Task 0 -> more than 0.0071004936693366055
Task 1 -> more than 0.03856641710143665
Task 2 -> more than 0.03928511962584409
Task 3 -> more than 0.039282971002912705

View File

@ -1,241 +0,0 @@
Benchmarking your Agent
---------------------------
In this tutorial we show:
* [How to define a trivial agent with a forward going policy.](#defining-the-agent)
* [How to define a basic experiment suite.](#defining-the-experiment-suite)
#### Introduction
![Benchmark_structure](img/benchmark_diagram_small.png)
The driving benchmark is associated with two other modules:
the *agent* module, a controller that performs inside
another module, the *experiment suite*.
Both modules are abstract classes that must be redefined by
the user.
The following code excerpt is
an example of how to apply a driving benchmark:
# We instantiate a forward agent, a simple policy that just sets
# the throttle to 0.9 and the steering to zero.
agent = ForwardAgent()

# We instantiate an experiment suite, basically a set of experiments
# that are going to be evaluated on this benchmark.
experiment_suite = BasicExperimentSuite(city_name)

# Now actually run the driving benchmark. Besides the agent and the
# experiment suite, we pass the city name (Town01 or Town02), the log
# name, whether to continue a previous execution, and the host/port.
run_driving_benchmark(agent, experiment_suite, city_name,
                      log_name, continue_experiment,
                      host, port)
Following this excerpt, there are two classes to be defined:
ForwardAgent() and BasicExperimentSuite().
After that, the benchmark can be run with the "run_driving_benchmark" function.
The summary of the execution, the [performance metrics](benchmark_metrics.md), is stored
in a json file and printed to the screen.
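For reference, a minimal script tying these pieces together could look like the sketch below. The import paths and the argument parsing are assumptions based on the layout of the deprecated PythonClient; the actual script shipped with CARLA is "driving_benchmark_example.py".

```python
# Sketch of a minimal benchmark runner. The import paths below are
# assumptions based on the deprecated PythonClient layout and may differ.
import argparse

from carla.agent import ForwardAgent
from carla.driving_benchmark import run_driving_benchmark
from carla.driving_benchmark.experiment_suites import BasicExperimentSuite


def main():
    argparser = argparse.ArgumentParser(description='Run a trivial agent on the basic suite.')
    argparser.add_argument('-c', '--city-name', default='Town01', help='Town01 or Town02')
    argparser.add_argument('--host', default='localhost')
    argparser.add_argument('--port', default=2000, type=int)
    args = argparser.parse_args()

    agent = ForwardAgent()
    experiment_suite = BasicExperimentSuite(args.city_name)

    # The summary (performance metrics) is stored in a json log
    # and printed to the screen.
    run_driving_benchmark(agent, experiment_suite, args.city_name,
                          'basic_example', False,
                          args.host, args.port)


if __name__ == '__main__':
    main()
```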
#### Defining the Agent
The tested agent must inherit the base *Agent* class.
Let's start by deriving a simple forward agent:
from carla.agent.agent import Agent
from carla.client import VehicleControl
class ForwardAgent(Agent):
To have its performance evaluated, the ForwardAgent derived class _must_
redefine the *run_step* function as it is done in the following excerpt:
def run_step(self, measurements, sensor_data, directions, target):
    """
    Function to run a control step in the CARLA vehicle.
    """
    control = VehicleControl()
    control.throttle = 0.9
    return control
This function receives the following parameters:
* [Measurements](index.md)<!-- @todo -->: the entire state of the world received
by the client from the CARLA Simulator. These measurements contain the agent position, orientation,
dynamic objects information, etc.
* [Sensor Data](cameras_and_sensors.md): The measured data from defined sensors,
such as Lidars or RGB cameras.
* Directions: Information from the high-level planner. Currently the planner sends
a high-level command from the following set: STRAIGHT, RIGHT, LEFT, NOTHING.
* Target Position: The position and orientation of the target.
With all this information, the *run_step* function is expected
to return a [vehicle control message](index.md)<!-- @todo -->, containing:
steering value, throttle value, brake value, etc.
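As an illustration only, a slightly more explicit version of the trivial *run_step* could set every control field; the field names below (steer, brake, hand_brake, reverse) are taken from the 0.8.x VehicleControl message and should be checked against your client version.

```python
def run_step(self, measurements, sensor_data, directions, target):
    """
    Trivial control step: drive forward at a fixed throttle, ignoring
    measurements, sensors, directions and target (sketch only).
    """
    control = VehicleControl()
    control.steer = 0.0
    control.throttle = 0.9
    control.brake = 0.0
    control.hand_brake = False
    control.reverse = False
    return control
```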
#### Defining the Experiment Suite
To create an Experiment Suite class you need to perform
the following steps:
* Create your custom class by inheriting the ExperimentSuite base class.
* Define the test and train weather conditions to be used.
* Build the *Experiment* objects.
##### Definition
The defined set of experiments must derive the *ExperimentSuite* class
as in the following code excerpt:
from carla.driving_benchmark.experiment import Experiment
from carla.sensor import Camera
from carla.settings import CarlaSettings
from .experiment_suite import ExperimentSuite
class BasicExperimentSuite(ExperimentSuite):
##### Define test and train weather conditions
The user must select the weathers to be used: a set
of test weathers and a set of train weathers. These are defined as
class properties, as in the following example:
@property
def train_weathers(self):
    return [1]

@property
def test_weathers(self):
    return [1]
##### Building Experiments
The [experiments are composed of a *task* that is defined by a set of *poses*](benchmark_structure.md).
Let's start by selecting poses for one of the cities, Town01 for instance.
First of all, we need to see all the possible positions. For that, with
a CARLA simulator running in a terminal, run:
python view_start_positions.py
![town01_positions](img/town01_positions.png)
Now let's choose, for instance, 140 as the start position and 134
as the end position. These two positions can be visualized by running:
python view_start_positions.py --pos 140,134 --no-labels
![town01_positions](img/town01_140_134.png)
Let's choose two more poses, one for going straight and another one for a simple turn.
Let's also choose three poses for Town02:
![town01_positions](img/initial_positions.png)
>Figure: The poses used in this basic *Experiment Suite* example. A pose is
a tuple with the start and end positions of a goal-directed episode. Start positions are
shown in blue, end positions in red. Left: straight poses,
where the goal is straight ahead of the start position. Middle: one-turn
episodes, where the goal is one turn away from the start point. Right: arbitrary-position
episodes, where the goal is far away from the start position, usually more than one turn away.
We define each of these poses as a task. We also set
the number of dynamic objects for each of these tasks and repeat
the arbitrary-position task so that it is also defined with dynamic
objects. In the following code excerpt we show the final
defined positions and the number of dynamic objects for each task:
# Define the start/end position below as tasks
poses_task0 = [[7, 3]]
poses_task1 = [[138, 17]]
poses_task2 = [[140, 134]]
poses_task3 = [[140, 134]]
# Concatenate all the tasks
poses_tasks = [poses_task0, poses_task1, poses_task2, poses_task3]
# Add dynamic objects to tasks
vehicles_tasks = [0, 0, 0, 20]
pedestrians_tasks = [0, 0, 0, 50]
Finally, by using the defined tasks, we can build the experiments
vector, as shown in the following code excerpt:
experiments_vector = []
# The weathers used are the union of the test and train weathers.
for weather in used_weathers:
    for iteration in range(len(poses_tasks)):
        poses = poses_tasks[iteration]
        vehicles = vehicles_tasks[iteration]
        pedestrians = pedestrians_tasks[iteration]

        conditions = CarlaSettings()
        conditions.set(
            SendNonPlayerAgentsInfo=True,
            NumberOfVehicles=vehicles,
            NumberOfPedestrians=pedestrians,
            WeatherId=weather
        )
        # Add all the cameras that were set for this experiment.
        conditions.add_sensor(camera)

        experiment = Experiment()
        experiment.set(
            Conditions=conditions,
            Poses=poses,
            Task=iteration,
            Repetitions=1
        )
        experiments_vector.append(experiment)
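The camera referenced by `conditions.add_sensor(camera)` has to be created before the loop, and in the actual suite this loop lives inside a `build_experiments` method that returns the vector. The condensed sketch below is only an approximation: the camera parameters are illustrative, and `self.weathers` is assumed to be the union of train and test weathers provided by the base class.

```python
def build_experiments(self):
    """Sketch of a build_experiments method for a basic suite."""
    # One RGB camera attached to the vehicle; these parameters are
    # illustrative, not the values used by the official suites.
    camera = Camera('CameraRGB')
    camera.set(FOV=100)
    camera.set_image_size(800, 600)
    camera.set_position(2.0, 0.0, 1.4)
    camera.set_rotation(-15.0, 0.0, 0.0)

    poses_tasks = [[[7, 3]], [[138, 17]], [[140, 134]], [[140, 134]]]
    vehicles_tasks = [0, 0, 0, 20]
    pedestrians_tasks = [0, 0, 0, 50]

    experiments_vector = []
    # self.weathers is assumed to be the union of train and test weathers.
    for weather in self.weathers:
        for iteration in range(len(poses_tasks)):
            conditions = CarlaSettings()
            conditions.set(
                SendNonPlayerAgentsInfo=True,
                NumberOfVehicles=vehicles_tasks[iteration],
                NumberOfPedestrians=pedestrians_tasks[iteration],
                WeatherId=weather)
            conditions.add_sensor(camera)

            experiment = Experiment()
            experiment.set(
                Conditions=conditions,
                Poses=poses_tasks[iteration],
                Task=iteration,
                Repetitions=1)
            experiments_vector.append(experiment)
    return experiments_vector
```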
The full code can be found at [basic_experiment_suite.py](https://github.com/carla-simulator/carla/blob/master/Deprecated/PythonClient/carla/driving_benchmark/experiment_suites/basic_experiment_suite.py)
#### Expected Results
First you need a CARLA simulator running with a [fixed time-step](configuring_the_simulation/#fixed-time-step),
so that the results you obtain will be more or less reproducible.
For that you should run the CARLA simulator as:
./CarlaUE4.sh /Game/Maps/<Town_name> -windowed -world-port=2000 -benchmark -fps=10
The example presented in this tutorial can be executed for Town01 as:
./driving_benchmark_example.py -c Town01
You should expect these results: [town01_basic_forward_results](benchmark_basic_results_town01)
For Town02:
./driving_benchmark_example.py -c Town02
You should expect these results: [town02_basic_forward_results](benchmark_basic_results_town02)

View File

@ -1,97 +0,0 @@
Driving Benchmark Performance Metrics
------------------------------
This page explains the performance metrics module.
This module is used to compute a summary of results based on the actions
performed by the agent during the benchmark.
### Provided performance metrics
The driving benchmark performance metrics module provides the following performance metrics:
* **Percentage of Success**: The percentage of episodes (poses from tasks)
that the agent successfully completed.
* **Average Completion**: The average distance towards the goal that the
agent was able to travel.
* **Off Road Intersection**: The number of times the agent goes off the road.
The intersection is only counted if the area of the vehicle outside
of the road is bigger than a *threshold*.
* **Other Lane Intersection**: The number of times the agent goes to the other
lane. The intersection is only counted if the area of the vehicle on the
other lane is bigger than a *threshold*.
* **Vehicle Collisions**: The number of collisions with vehicles that had
an impact bigger than a *threshold*.
* **Pedestrian Collisions**: The number of collisions with pedestrians
that had an impact bigger than a *threshold*.
* **General Collisions**: The number of collisions with all other
objects with an impact bigger than a *threshold*.
### Executing and Setting Parameters
The metrics are computed as the final step of the benchmark,
and a summary of the results is stored in a json file.
Internally this is executed as follows:
```python
metrics_object = Metrics(metrics_parameters)
summary_dictionary = metrics_object.compute(path_to_execution_log)
```
The Metrics `compute` function
receives the full path to the execution log.
The Metrics class should be instantiated with some parameters.
The parameters are:
* **Threshold**: The threshold used by the metrics.
* **Frames Recount**: The number of frames the agent needs to keep
committing an infraction, after the first one, for it to be counted
as another infraction.
* **Frames Skip**: The number of frames that are
skipped after a collision or an intersection starts.
These parameters are defined as a property of the *Experiment Suite*
base class and can be redefined in your
[custom *Experiment Suite*](benchmark_creating/#defining-the-experiment-suite).
The default parameters are:
@property
def metrics_parameters(self):
    """
    Property to return the parameters for the metrics module
    Could be redefined depending on the needs of the user.
    """
    return {
        'intersection_offroad': {'frames_skip': 10,
                                 'frames_recount': 20,
                                 'threshold': 0.3
                                 },
        'intersection_otherlane': {'frames_skip': 10,
                                   'frames_recount': 20,
                                   'threshold': 0.4
                                   },
        'collision_other': {'frames_skip': 10,
                            'frames_recount': 20,
                            'threshold': 400
                            },
        'collision_vehicles': {'frames_skip': 10,
                               'frames_recount': 30,
                               'threshold': 400
                               },
        'collision_pedestrians': {'frames_skip': 5,
                                  'frames_recount': 100,
                                  'threshold': 300
                                  },
    }
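If you only need to tweak one of these values, a custom suite can start from the defaults and modify them. The class below is hypothetical; it assumes the BasicExperimentSuite from the previous tutorial and that the base property returns a fresh dictionary on every access, as shown above.

```python
class StrictExperimentSuite(BasicExperimentSuite):
    """Hypothetical suite that counts off-road intersections more strictly."""

    @property
    def metrics_parameters(self):
        # Take the defaults from the base class and lower the off-road
        # area threshold from 0.3 to 0.1.
        params = super(StrictExperimentSuite, self).metrics_parameters
        params['intersection_offroad']['threshold'] = 0.1
        return params
```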

View File

@ -1,69 +0,0 @@
Driving Benchmark
===============
The *driving benchmark* module is made
to evaluate a driving controller (agent) and obtain
metrics about its performance.
This module is mainly designed for:
* Users who develop autonomous driving agents and want
to see how they perform in CARLA.
In this section you will learn:
* How to quickly get started and benchmark a trivial agent right away.
* The general implementation [architecture of the driving
benchmark module](benchmark_structure.md).
* [How to set up your agent and create your
own set of experiments](benchmark_creating.md).
* The [performance metrics used](benchmark_metrics.md).
Getting Started
----------------
As a way to familiarize yourself with the system, we
provide a trivial agent performing in a small
set of experiments (Basic). To execute it, simply
run:
$ ./driving_benchmark_example.py
Keep in mind that, to run the command above, you need a CARLA simulator
running at localhost and on port 2000.
We already provide the same benchmark used in the [CoRL
2017 paper](http://proceedings.mlr.press/v78/dosovitskiy17a/dosovitskiy17a.pdf).
The CoRL 2017 experiment suite can be run with a trivial agent by
running:
$ ./driving_benchmark_example.py --corl-2017
This benchmark example can be further configured.
Run the help command to see the available options.
$ ./driving_benchmark_example.py --help
One of the available options is to continue
from a previous benchmark execution. For example,
to continue a CoRL 2017 experiment with the log name "driving_benchmark_test", run:
$ ./driving_benchmark_example.py --continue-experiment -n driving_benchmark_test --corl-2017
!!! note
If the log name already exists and you don't set it to continue, another
log will be created under a different name.
When running the driving benchmark for the basic configuration
you should [expect these results](benchmark_creating/#expected-results).

View File

@ -1,66 +0,0 @@
Driving Benchmark Structure
-------------------
The figure below shows the general structure of the driving
benchmark module.
![Benchmark_structure](img/benchmark_diagram.png)
>Figure: The general structure of the agent benchmark module.
The *driving benchmark* is the module responsible for evaluating a certain
*agent* in an *experiment suite*.
The *experiment suite* is an abstract module.
Thus, the user must define their own derivation
of *experiment suite*. We already provide the CoRL2017 suite and a simple
*experiment suite* for testing.
The *experiment suite* is composed of a set of *experiments*.
Each *experiment* contains a *task* that consists of a set of navigation
episodes, represented by a set of *poses*.
These *poses* are tuples containing the start and end points of an
episode.
The *experiments* are also associated with a *condition*. A
condition is represented by a [carla settings](carla_settings.md) object.
The conditions specify simulation parameters such as weather, sensor suite, number of
vehicles and pedestrians, etc.
The user should also derive an *agent* class. The *agent* is the active
part which will be evaluated on the driving benchmark.
The driving benchmark also contains two auxiliary modules.
The *recording module* is used to keep track of all measurements and
can be used to pause and continue a driving benchmark.
The [*metrics module*](benchmark_metrics.md) is used to compute the performance metrics
by using the recorded measurements.
Example: CORL 2017
----------------------
We already provide the CoRL 2017 experiment suite used to benchmark the
agents for the [CoRL 2017 paper](http://proceedings.mlr.press/v78/dosovitskiy17a/dosovitskiy17a.pdf).
The CoRL 2017 experiment suite has the following composition:
* A total of 24 experiments for each CARLA town containing:
* A task for going straight.
* A task for making a single turn.
* A task for going to an arbitrary position.
* A task for going to an arbitrary position with dynamic objects.
* Each task is composed of 25 poses that are repeated in 6 different weathers (Clear Noon, Heavy Rain Noon, Clear Sunset, After Rain Noon, Cloudy After Rain and Soft Rain Sunset).
* The entire experiment set has 600 episodes.
* The CoRL 2017 suite can take up to 24 hours to execute for Town01 and up to 15
hours for Town02, depending on the agent's performance.
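To benchmark an agent against this suite, it is passed to run_driving_benchmark just like the basic suite. The sketch below assumes the CoRL2017 class name and import paths used in the deprecated PythonClient:

```python
# Sketch: running the CoRL 2017 suite on a trivial agent.
# Import paths are assumptions based on the deprecated PythonClient layout.
from carla.agent import ForwardAgent
from carla.driving_benchmark import run_driving_benchmark
from carla.driving_benchmark.experiment_suites import CoRL2017

city_name = 'Town01'
agent = ForwardAgent()
experiment_suite = CoRL2017(city_name)
run_driving_benchmark(agent, experiment_suite, city_name,
                      'corl2017_forward', False,
                      'localhost', 2000)
```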

View File

@ -1,47 +0,0 @@
CARLA Design
============
> _This document is a work in progress and might be incomplete._
CARLA is composed of the following modules:
* Client side
- Python client API: "Deprecated/PythonClient/carla"
* Server side
- CarlaUE4 Unreal Engine project: "Unreal/CarlaUE4"
- Carla plugin for Unreal Engine: "Unreal/CarlaUE4/Plugins/Carla"
- CarlaServer: "Util/CarlaServer"
!!! tip
Documentation for the C++ code can be generated by running
[Doxygen](http://www.doxygen.org) in the main folder of CARLA project.
Python client API
-----------------
The client API provides a Python module for communicating with the CARLA server.
In the folder "Deprecated/PythonClient", we provide several examples for scripting a CARLA
client using the "carla" module.
CarlaUE4 Unreal Engine project
------------------------------
The Unreal project "CarlaUE4" contains all the assets and scenes for generating
the CARLA binary. It uses the tools provided by the Carla plugin to assemble the
cities and behavior of the agents in the scene.
Carla plugin for Unreal Engine
------------------------------
The Carla plugin contains all the functionality of CARLA. We tried to keep this
functionality separated from the assets, so the functionality in this plugin can
be used as much as possible in any Unreal project.
It uses "CarlaServer" library for the networking communication.
CarlaServer
-----------
External library for the networking communications.
See ["CarlaServer"](carla_server.md) for implementation details.

View File

@ -1,114 +0,0 @@
<h1>CARLA Server</h1>
Build
-----
Some scripts are provided for building and testing CarlaServer on Linux
$ ./Setup.sh
$ make
$ make check
The setup script downloads and compiles all the required dependencies. The
Makefile calls CMake to build CarlaServer and installs it under "Util/Install".
Protocol
--------
All messages are prepended by a 32-bit unsigned integer (little-endian)
indicating the size of the incoming message.
Three consecutive ports are used:
* world-port (default 2000)
* measurements-port = world-port + 1
* control-port = world-port + 2
Each of these ports has an associated thread that sends/reads data
asynchronously.
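For illustration, reading one length-prefixed message from any of these ports can be done with plain sockets, as in the sketch below; this is not part of the provided client module, just a direct translation of the framing described above.

```python
import struct


def read_message(sock):
    """Sketch: read one length-prefixed message from a CARLA server socket.

    Every message is preceded by its size as a 32-bit unsigned
    little-endian integer.
    """
    (length,) = struct.unpack('<I', _read_exact(sock, 4))
    return _read_exact(sock, length)


def _read_exact(sock, n):
    data = b''
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise RuntimeError('connection closed by the server')
        data += chunk
    return data
```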
<h4>World thread</h4>
Server reads one, writes one. Always protobuf messages.
[client] RequestNewEpisode
[server] SceneDescription
[client] EpisodeStart
[server] EpisodeReady
...repeat...
<h4>Measurements thread</h4>
Server only writes; first the measurements message, then the bulk of raw images.
[server] Measurements
[server] raw images
...repeat...
Every image is an array of
[frame_number, width, height, type, FOV, color[0], color[1], ...]
of types
[uint64, uint32, uint32, uint32, float32, uint32, uint32, ...]
where FOV is the horizontal field of view of the camera as float, each color is
an [FColor][fcolorlink] (BGRA) as stored in Unreal Engine, and the possible
types of images are
type = 0 None (RGB without any post-processing)
type = 1 SceneFinal (RGB with post-processing present at the scene)
type = 2 Depth (Depth Map)
type = 3 SemanticSegmentation (Semantic Segmentation)
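A sketch of how one of these raw image buffers could be decoded in Python, assuming the same little-endian byte order as the message size prefix:

```python
import struct


def parse_raw_image(buf):
    """Sketch: decode one raw image following the layout above."""
    # Header: uint64 frame_number, uint32 width, uint32 height,
    #         uint32 type, float32 FOV -- 24 bytes in total.
    frame_number, width, height, image_type, fov = struct.unpack_from('<QIIIf', buf, 0)
    # The remainder is width * height BGRA pixels, one uint32 per pixel.
    pixels = memoryview(buf)[24:24 + width * height * 4]
    return frame_number, width, height, image_type, fov, pixels
```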
The measurements message is explained in detail [here](measurements.md).
[fcolorlink]: https://docs.unrealengine.com/latest/INT/API/Runtime/Core/Math/FColor/index.html "FColor API Documentation"
<h4>Control thread</h4>
Server only reads; the client sends a Control message every frame.
[client] Control
...repeat...
In the synchronous mode, the server halts execution each frame until the Control
message is received.
C API
-----
The library is encapsulated behind a single include file in C,
["carla/carla_server.h"][carlaserverhlink].
This file contains the basic interface for reading and writing messages to the
client, hiding the networking and multi-threading part. Most of the functions
have a time-out parameter and block until the corresponding asynchronous
operation is completed or the time-out is met. Set a time-out of 0 to get a
non-blocking call.
A CarlaServer instance is created with `carla_make_server()` and should be
destroyed after use with `carla_server_free(ptr)`.
[carlaserverhlink]: https://github.com/carla-simulator/carla/blob/master/Util/CarlaServer/include/carla/carla_server.h
Design
------
The C API takes care of dispatching requests to the corresponding server.
There are three asynchronous servers, each running on its own thread.
![CarlaServer design](img/carlaserver.svg)
Conceptually there are two servers, the _World Server_ and the _Agent Server_.
The _World Server_ controls the initialization of episodes. A new episode is
started every time the World Server receives a RequestNewEpisode
message. Once the episode is ready, the World Server launches the Agent Server.
The _Agent Server_ has two threads, one for sending the streaming of the
measurements and another for receiving the control. Both agent threads
communicate with the main thread through a lock-free double-buffer to speed up
the streaming of messages and images.
The encoding of the messages (protobuf) and the networking operations are
executed asynchronously.

View File

@ -1,65 +0,0 @@
<h1>CARLA Settings</h1>
> _This document is a work in progress and might be incomplete._
!!! important
This document still refers to the 0.8.X API (stable version). The
proceedings stated here may not apply to latest versions, 0.9.0 or later.
Latest versions introduced significant changes in the API, we are still
working on documenting everything, sorry for the inconvenience.
CarlaSettings.ini
-----------------
CARLA reads its settings from a "CarlaSettings.ini" file. This file controls
most aspects of the simulation, and it is loaded every time a new episode is
started (every time the level is loaded).
Settings are loaded following the hierarchy below, with values later in the
hierarchy overriding earlier values.
1. `{CarlaFolder}/Unreal/CarlaUE4/Config/CarlaSettings.ini`.
2. File provided by command-line argument `-carla-settings="Path/To/CarlaSettings.ini"`.
3. Other command-line arguments like `-carla-port`.
4. Settings file sent by the client on every new episode.
Take a look at the [CARLA Settings example][settingslink].
[settingslink]: https://github.com/carla-simulator/carla/blob/master/Docs/Example.CarlaSettings.ini
Weather presets
---------------
The weather and lighting conditions can be chosen from a set of predefined
settings. To select one, set the `WeatherId` key in CarlaSettings.ini. The
following presets are available
* 0 - Default
* 1 - ClearNoon
* 2 - CloudyNoon
* 3 - WetNoon
* 4 - WetCloudyNoon
* 5 - MidRainyNoon
* 6 - HardRainNoon
* 7 - SoftRainNoon
* 8 - ClearSunset
* 9 - CloudySunset
* 10 - WetSunset
* 11 - WetCloudySunset
* 12 - MidRainSunset
* 13 - HardRainSunset
* 14 - SoftRainSunset
E.g., to choose the weather to be hard-rain at noon, add to CarlaSettings.ini
```
[CARLA/LevelSettings]
WeatherId=6
```
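The same preset can also be selected from a Python client by setting WeatherId on the CarlaSettings object that is sent when requesting a new episode. The sketch below follows the 0.8.x client API; the surrounding episode setup is omitted.

```python
from carla.settings import CarlaSettings

# Sketch: select the HardRainNoon preset from the client side.
settings = CarlaSettings()
settings.set(
    NumberOfVehicles=20,
    NumberOfPedestrians=40,
    WeatherId=6)
# The settings object is then sent to the server when starting a new
# episode, e.g. with client.load_settings(settings).
```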
Simulator command-line options
------------------------------
* `-carla-settings="Path/To/CarlaSettings.ini"` Load settings from the given INI file. See Example.CarlaSettings.ini.
* `-carla-port=N` Listen for client connections at port N, streaming port is set to N+1.
* `-carla-no-hud` Do not display the HUD by default.

View File

@ -1,101 +0,0 @@
<h1>Connecting a Python client</h1>
![Running CARLA with client](img/client_window.png)
The power of the CARLA simulator resides in its ability to be controlled
programmatically with an external client. This client can control most
aspects of the simulation, from the environment to the duration of each episode; it can
retrieve data from different sensors and send control instructions to the
player vehicle.
Deprecated/PythonClient contents
--------------------------------
In the release package, inside the _"Deprecated/PythonClient"_ folder, we
provide the Python API module together with some use examples.
File or folder | Description
------------------------ | ------------
carla/ | Contains the "carla" module, the Python API for communicating with the simulator.
client_example.py | Basic usage example of the "carla" module.
manual_control.py | A GUI client in which the vehicle can be controlled manually.
point_cloud_example.py | Usage example for converting depth images into a point cloud in world coordinates.
run_benchmark.py | Run the CoRL'17 benchmark with a trivial agent.
view_start_positions.py | Show all the possible start positions in a map.
!!! note
If you are building CARLA from source, the Python code is inside the
_"Deprecated/PythonClient"_ folder in the CARLA repository. Bear in mind
that the `master` branch contains latest fixes and changes that might be
incompatible with the release version. Consider using the `stable` branch.
Install dependencies
--------------------
We recommend using Python 3.5, but all the Python code in the "carla" module and
given examples is also compatible with Python 2.7.
Install the dependencies with "pip" using the requirements file provided
$ pip install -r Deprecated/PythonClient/requirements.txt
Running the client example
--------------------------
The "client_example.py" script contains a basic usage example for using the
"carla" module. We recommend taking a look at the source-code of this script if
you plan to familiarize with the CARLA Python API.
<h4>Launching the client</h4>
The script tries to connect to a CARLA simulator instance running in _server
mode_. Now we are going to launch the script with "autopilot" enabled
$ ./client_example.py --autopilot
The script will now repeatedly try to connect to the server. Since we haven't
started the simulator yet, it will keep printing an error until we launch the
server.
!!! note
By default CARLA uses the ports 2000, 2001, and 2002. Make sure to have
these ports available.
<h4>Launching the simulator in server mode</h4>
To launch the CARLA simulator in **server mode** we just need to pass the
`-carla-server` argument
$ ./CarlaUE4.sh -carla-server
Once the map is loaded, the vehicle should start driving around controlled by
the Python script.
!!! important
Before you start running your own experiments, it is important to know the
details for running the simulator at **fixed time-step** for achieving
maximum speed and repeatability. We will cover this in the next item
[Configuring the simulation](configuring_the_simulation.md).
<h4>Saving images to disk</h4>
Now you can stop the client script and relaunch it with different options. For
instance now we are going to save to disk the images of the two cameras that the
client attaches to the vehicle
$ ./client_example.py --autopilot --images-to-disk
And _"_out"_ folder should have appeared in your working directory containing each
captured frame as PNG.
![Saved images to disk](img/saved_images_to_disk.png)
You can see all the available options in the script's help
$ ./client_example.py --help
<h4>Running other examples</h4>
The other scripts present in the _"Deprecated/PythonClient"_ folder run in a
similar fashion. We recommend now launching _"manual_control.py"_ for a GUI
interface implemented with PyGame.

Binary file not shown.

Before

Width:  |  Height:  |  Size: 169 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 66 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 172 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 650 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 1.5 MiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 93 KiB

Binary file not shown.

Before

Width:  |  Height:  |  Size: 320 KiB

View File

@ -9,8 +9,6 @@
* [Getting started](getting_started.md)
* [Python API overview](python_api_overview.md)
<!-- * [Running the simulator](running_simulator_standalone.md) -->
<!-- * [Connecting a Python client](connecting_the_client.md) -->
* [Configuring the simulation](configuring_the_simulation.md)
<!-- * [Measurements](measurements.md) -->
* [Cameras and sensors](cameras_and_sensors.md)
@ -21,21 +19,11 @@
* [How to build on Linux](how_to_build_on_linux.md)
* [How to build on Windows](how_to_build_on_windows.md)
<h3> Driving Benchmark </h3>
* [Quick Start](benchmark_start.md)
* [General Structure](benchmark_structure.md)
* [Creating Your Benchmark](benchmark_creating.md)
* [Computed Performance Metrics](benchmark_metrics.md)
<h3>Advanced topics</h3>
* [CARLA settings](carla_settings.md)
* [Python API reference](python_api.md)
<!-- * [Simulator keyboard input](simulator_keyboard_input.md) -->
* [Running without display and selecting GPUs](carla_headless.md)
* [Running in a Docker](carla_docker.md)
* [How to link Epic's Automotive Materials](epic_automotive_materials.md)
<h3>Contributing</h3>
@ -47,8 +35,6 @@
<h3>Development</h3>
* [Map customization](map_customization.md)
<!-- * [CARLA design](carla_design.md) -->
<!-- * [CarlaServer documentation](carla_server.md) -->
* [Build system](build_system.md)
<h3>Art guidelines</h3>

View File

@ -1,59 +0,0 @@
<h1>Running the CARLA simulator in standalone mode</h1>
Inside the downloaded package you should find a shell script called
`CarlaUE4.sh`; this script launches the CARLA simulator.
!!! tip
Although this tutorial focuses on Linux, all the commands work as well in
Windows. Just replace all the occurrences of `./CarlaUE4.sh` by
`CarlaUE4.exe`.
Run this script without arguments to launch CARLA simulator in standalone mode
with default settings
$ ./CarlaUE4.sh
This launches the simulator window in full-screen, and you should now be able
to drive around the city using the WASD keys, and Q to toggle reverse
gear. See ["Keyboard input"](simulator_keyboard_input.md) for the complete list
of key-bindings.
![Simulator window](img/simulator_window.png)
We currently have two scenarios available, _Town01_ and _Town02_. You may want
to take a look at _Town02_ now; you can do so by running the script with
$ ./CarlaUE4.sh /Game/Maps/Town02
All the parameters, like the number of other vehicles, pedestrians, and weather
conditions, can be controlled when launching the simulation. These parameters are
set in a _"CarlaSettings.ini"_ file that is passed to the simulator either as a
command-line parameter or when connecting with a Python client. This file
controls all the variables of the CARLA simulator, from server settings to
attaching sensors to the vehicle. We will cover all of these later; for now we will
just change some visible aspects of the standalone mode. For a detailed
description of how the settings work, see the ["CARLA Settings"](carla_settings.md)
section.
Open the file _"Example.CarlaSettings.ini"_ in a text editor, search for the
following keys and modify their values
```ini
NumberOfVehicles=60
NumberOfPedestrians=60
WeatherId=3
```
Now run the simulator passing the settings file as argument with
$ ./CarlaUE4.sh -carla-settings=Example.CarlaSettings.ini
Now the simulation should have more vehicles and pedestrians, and a
different weather preset.
!!! tip
You can launch the simulator in windowed mode by using the argument
`-windowed`, and control the window size with `-ResX=N` and `-ResY=N`.
In the next item of this tutorial we show how to control the simulator with a
Python client.

View File

@ -1,26 +0,0 @@
<h1>CARLA Simulator keyboard input</h1>
The following key bindings are available during game play at the simulator
window. Note that vehicle controls are only available when running in
_standalone mode_.
Key | Action
---------------:|:----------------
`W` | Throttle
`S` | Brake
`A` `D` | Steer
`Q` | Toggle reverse gear
`Space` | Hand-brake
`P` | Toggle autopilot
`←` `→` `↑` `↓` | Move camera
`PgUp` `PgDn` | Zoom in and out
`Mouse Wheel` | Zoom in and out
`Tab` | Toggle on-board camera
`F11` | Toggle fullscreen
`R` | Restart level
`G` | Toggle HUD
`C` | Change weather/lighting
`Enter` | Jump
`F` | Use the force
`T` | Reset vehicle rotation
`Alt+F4` | Quit

View File

@ -8,24 +8,15 @@ pages:
- Quick start:
- 'Getting started': 'getting_started.md'
- 'Python API overview': 'python_api_overview.md'
# - 'Running the simulator': 'running_simulator_standalone.md'
# - 'Connecting a Python client': 'connecting_the_client.md'
- 'Configuring the simulation': 'configuring_the_simulation.md'
# - 'Measurements': 'measurements.md'
- 'Cameras and sensors': 'cameras_and_sensors.md'
- 'F.A.Q.': 'faq.md'
- Driving Benchmark:
- 'Quick Start': 'benchmark_start.md'
- 'General Structure': 'benchmark_structure.md'
- 'Creating Your Benchmark': 'benchmark_creating.md'
- 'Computed Performance Metrics': 'benchmark_metrics.md'
- Building from source:
- 'How to build on Linux': 'how_to_build_on_linux.md'
- 'How to build on Windows': 'how_to_build_on_windows.md'
- Advanced topics:
- 'CARLA Settings': 'carla_settings.md'
- 'Python API reference': 'python_api.md'
# - 'Simulator keyboard input': 'simulator_keyboard_input.md'
- 'Running without display and selecting GPUs': 'carla_headless.md'
- 'Running in a Docker': 'carla_docker.md'
- "How to link Epic's Automotive Materials": 'epic_automotive_materials.md'
@ -35,17 +26,10 @@ pages:
- 'Code of conduct': 'CODE_OF_CONDUCT.md'
- Development:
- 'Map customization': 'map_customization.md'
# - 'CARLA design': 'carla_design.md'
# - 'CarlaServer documentation': 'carla_server.md'
- 'Build system': 'build_system.md'
- Art guidelines:
- 'How to add assets': 'how_to_add_assets.md'
- 'How to model vehicles': 'how_to_model_vehicles.md'
- Appendix:
- 'Driving Benchmark Sample Results Town01': 'benchmark_basic_results_town01.md'
- 'Driving Benchmark Sample Results Town02': 'benchmark_basic_results_town02.md'
markdown_extensions:
- admonition