Merge master into Docs_NewMap
|
@ -1,189 +0,0 @@
|
|||
We show the results for test and train weathers when
|
||||
[running the simple example](benchmark_creating/#expected-results) for Town01.
|
||||
The following results should be printed to the screen after running the
|
||||
example.
|
||||
|
||||
----- Printing results for training weathers (Seen in Training) -----
|
||||
|
||||
|
||||
Percentage of Successful Episodes
|
||||
|
||||
Weather: Clear Noon
|
||||
Task: 0 -> 1.0
|
||||
Task: 1 -> 0.0
|
||||
Task: 2 -> 0.0
|
||||
Task: 3 -> 0.0
|
||||
Average Between Weathers
|
||||
Task 0 -> 1.0
|
||||
Task 1 -> 0.0
|
||||
Task 2 -> 0.0
|
||||
Task 3 -> 0.0
|
||||
|
||||
Average Percentage of Distance to Goal Travelled
|
||||
|
||||
Weather: Clear Noon
|
||||
Task: 0 -> 0.9643630125892909
|
||||
Task: 1 -> 0.6794216252808839
|
||||
Task: 2 -> 0.6593855166486696
|
||||
Task: 3 -> 0.6646695325122313
|
||||
Average Between Weathers
|
||||
Task 0 -> 0.9643630125892909
|
||||
Task 1 -> 0.6794216252808839
|
||||
Task 2 -> 0.6593855166486696
|
||||
Task 3 -> 0.6646695325122313
|
||||
|
||||
Avg. Kilometers driven before a collision to a PEDESTRIAN
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
|
||||
Avg. Kilometers driven before a collision to a VEHICLE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> 0.11491704214531683
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> 0.11491704214531683
|
||||
|
||||
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> 0.22983408429063365
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> 0.22983408429063365
|
||||
|
||||
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> 0.12350085985904342
|
||||
Task 2 -> 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> 0.12350085985904342
|
||||
Task 2 -> 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
|
||||
Avg. Kilometers driven before invading the OPPOSITE LANE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
----- Printing results for test weathers (Unseen in Training) -----
|
||||
|
||||
|
||||
Percentage of Successful Episodes
|
||||
|
||||
Weather: Clear Noon
|
||||
Task: 0 -> 1.0
|
||||
Task: 1 -> 0.0
|
||||
Task: 2 -> 0.0
|
||||
Task: 3 -> 0.0
|
||||
Average Between Weathers
|
||||
Task 0 -> 1.0
|
||||
Task 1 -> 0.0
|
||||
Task 2 -> 0.0
|
||||
Task 3 -> 0.0
|
||||
|
||||
Average Percentage of Distance to Goal Travelled
|
||||
|
||||
Weather: Clear Noon
|
||||
Task: 0 -> 0.9643630125892909
|
||||
Task: 1 -> 0.6794216252808839
|
||||
Task: 2 -> 0.6593855166486696
|
||||
Task: 3 -> 0.6646695325122313
|
||||
Average Between Weathers
|
||||
Task 0 -> 0.9643630125892909
|
||||
Task 1 -> 0.6794216252808839
|
||||
Task 2 -> 0.6593855166486696
|
||||
Task 3 -> 0.6646695325122313
|
||||
|
||||
Avg. Kilometers driven before a collision to a PEDESTRIAN
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
|
||||
Avg. Kilometers driven before a collision to a VEHICLE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> 0.11491704214531683
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> 0.11491704214531683
|
||||
|
||||
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> 0.22983408429063365
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> 0.22983408429063365
|
||||
|
||||
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> 0.12350085985904342
|
||||
Task 2 -> 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> 0.12350085985904342
|
||||
Task 2 -> 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
|
||||
Avg. Kilometers driven before invading the OPPOSITE LANE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.04316352371637994
|
||||
Task 1 -> more than 0.12350085985904342
|
||||
Task 2 -> more than 0.2400373917146113
|
||||
Task 3 -> more than 0.22983408429063365
|
||||
|
||||
|
||||
|
|
@ -1,187 +0,0 @@
|
|||
We show the results for test and train weathers when
|
||||
[running the simple example](benchmark_creating/#expected-results) for Town02.
|
||||
The following results should be printed to the screen after running the
|
||||
example.
|
||||
|
||||
|
||||
----- Printing results for training weathers (Seen in Training) -----
|
||||
|
||||
|
||||
Percentage of Successful Episodes
|
||||
|
||||
Weather: Clear Noon
|
||||
Task: 0 -> 1.0
|
||||
Task: 1 -> 0.0
|
||||
Task: 2 -> 0.0
|
||||
Task: 3 -> 0.0
|
||||
Average Between Weathers
|
||||
Task 0 -> 1.0
|
||||
Task 1 -> 0.0
|
||||
Task 2 -> 0.0
|
||||
Task 3 -> 0.0
|
||||
|
||||
Average Percentage of Distance to Goal Travelled
|
||||
|
||||
Weather: Clear Noon
|
||||
Task: 0 -> 0.8127653637426329
|
||||
Task: 1 -> 0.10658303206448155
|
||||
Task: 2 -> -0.20448736444348714
|
||||
Task: 3 -> -0.20446966646041384
|
||||
Average Between Weathers
|
||||
Task 0 -> 0.8127653637426329
|
||||
Task 1 -> 0.10658303206448155
|
||||
Task 2 -> -0.20448736444348714
|
||||
Task 3 -> -0.20446966646041384
|
||||
|
||||
Avg. Kilometers driven before a collision to a PEDESTRIAN
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
||||
|
||||
Avg. Kilometers driven before a collision to a VEHICLE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
||||
|
||||
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> 0.019641485501456352
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> 0.019641485501456352
|
||||
|
||||
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> 0.03856641710143665
|
||||
Task 2 -> 0.03928511962584409
|
||||
Task 3 -> 0.039282971002912705
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> 0.03856641710143665
|
||||
Task 2 -> 0.03928511962584409
|
||||
Task 3 -> 0.039282971002912705
|
||||
|
||||
Avg. Kilometers driven before invading the OPPOSITE LANE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
----- Printing results for test weathers (Unseen in Training) -----
|
||||
|
||||
|
||||
Percentage of Successful Episodes
|
||||
|
||||
Weather: Clear Noon
|
||||
Task: 0 -> 1.0
|
||||
Task: 1 -> 0.0
|
||||
Task: 2 -> 0.0
|
||||
Task: 3 -> 0.0
|
||||
Average Between Weathers
|
||||
Task 0 -> 1.0
|
||||
Task 1 -> 0.0
|
||||
Task 2 -> 0.0
|
||||
Task 3 -> 0.0
|
||||
|
||||
Average Percentage of Distance to Goal Travelled
|
||||
|
||||
Weather: Clear Noon
|
||||
Task: 0 -> 0.8127653637426329
|
||||
Task: 1 -> 0.10658303206448155
|
||||
Task: 2 -> -0.20448736444348714
|
||||
Task: 3 -> -0.20446966646041384
|
||||
Average Between Weathers
|
||||
Task 0 -> 0.8127653637426329
|
||||
Task 1 -> 0.10658303206448155
|
||||
Task 2 -> -0.20448736444348714
|
||||
Task 3 -> -0.20446966646041384
|
||||
|
||||
Avg. Kilometers driven before a collision to a PEDESTRIAN
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
||||
|
||||
Avg. Kilometers driven before a collision to a VEHICLE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
||||
|
||||
Avg. Kilometers driven before a collision to a STATIC OBSTACLE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> 0.019641485501456352
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> 0.019641485501456352
|
||||
|
||||
Avg. Kilometers driven before going OUTSIDE OF THE ROAD
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> 0.03856641710143665
|
||||
Task 2 -> 0.03928511962584409
|
||||
Task 3 -> 0.039282971002912705
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> 0.03856641710143665
|
||||
Task 2 -> 0.03928511962584409
|
||||
Task 3 -> 0.039282971002912705
|
||||
|
||||
Avg. Kilometers driven before invading the OPPOSITE LANE
|
||||
Weather: Clear Noon
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
||||
Average Between Weathers
|
||||
Task 0 -> more than 0.0071004936693366055
|
||||
Task 1 -> more than 0.03856641710143665
|
||||
Task 2 -> more than 0.03928511962584409
|
||||
Task 3 -> more than 0.039282971002912705
|
|
@ -1,241 +0,0 @@
|
|||
Benchmarking your Agent
|
||||
---------------------------
|
||||
|
||||
|
||||
In this tutorial we show:
|
||||
|
||||
* [How to define a trivial agent with a forward going policy.](#defining-the-agent)
|
||||
* [How to define a basic experiment suite.](#defining-the-experiment-suite)
|
||||
|
||||
|
||||
#### Introduction
|
||||
|
||||

|
||||
|
||||
The driving benchmark is associated with two other modules:
the *agent* module, a controller that performs in
another module, the *experiment suite*.
Both modules are abstract classes that must be redefined by
the user.
|
||||
|
||||
The following code excerpt is
an example of how to run a driving benchmark:
|
||||
|
||||
# We instantiate a forward agent, a simple policy that just sets
# the throttle to 0.9 and the steering to zero
|
||||
agent = ForwardAgent()
|
||||
|
||||
# We instantiate an experiment suite. Basically a set of experiments
|
||||
# that are going to be evaluated on this benchmark.
|
||||
experiment_suite = BasicExperimentSuite(city_name)
|
||||
|
||||
# Now actually run the driving benchmark.
# Besides the agent and the experiment suite, we also pass the
# city name (Town01 or Town02), the log name and the connection options
|
||||
run_driving_benchmark(agent, experiment_suite, city_name,
|
||||
log_name, continue_experiment,
|
||||
host, port)
|
||||
|
||||
|
||||
|
||||
Following this excerpt, there are two classes to be defined:
ForwardAgent() and BasicExperimentSuite().
After that, the benchmark can be run with the "run_driving_benchmark" function.
|
||||
The summary of the execution, the [performance metrics](benchmark_metrics.md), is stored
in a JSON file and printed to the screen.
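
For context, a minimal complete script built around this excerpt might look like the sketch below. The import paths are an assumption based on the deprecated _PythonClient_ layout; the authoritative version is `driving_benchmark_example.py` in the repository.

```python
import argparse

# Import paths assumed from Deprecated/PythonClient; verify against
# driving_benchmark_example.py in your CARLA checkout.
from carla.agent.forward_agent import ForwardAgent
from carla.driving_benchmark import run_driving_benchmark
from carla.driving_benchmark.experiment_suites import BasicExperimentSuite

if __name__ == '__main__':
    parser = argparse.ArgumentParser(description='Benchmark a trivial forward agent')
    parser.add_argument('-c', '--city-name', default='Town01')
    parser.add_argument('-n', '--log-name', default='driving_benchmark_test')
    parser.add_argument('--continue-experiment', action='store_true')
    parser.add_argument('--host', default='localhost')
    parser.add_argument('--port', type=int, default=2000)
    args = parser.parse_args()

    agent = ForwardAgent()
    experiment_suite = BasicExperimentSuite(args.city_name)

    run_driving_benchmark(agent, experiment_suite, args.city_name,
                          args.log_name, args.continue_experiment,
                          args.host, args.port)
```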
|
||||
|
||||
|
||||
|
||||
|
||||
#### Defining the Agent
|
||||
|
||||
The tested agent must inherit the base *Agent* class.
|
||||
Let's start by deriving a simple forward agent:
|
||||
|
||||
from carla.agent.agent import Agent
|
||||
from carla.client import VehicleControl
|
||||
|
||||
class ForwardAgent(Agent):
|
||||
|
||||
|
||||
To have its performance evaluated, the ForwardAgent derived class _must_
|
||||
redefine the *run_step* function as it is done in the following excerpt:
|
||||
|
||||
def run_step(self, measurements, sensor_data, directions, target):
|
||||
"""
|
||||
Function to run a control step in the CARLA vehicle.
|
||||
"""
|
||||
control = VehicleControl()
|
||||
control.throttle = 0.9
|
||||
return control
|
||||
|
||||
|
||||
This function receives the following parameters:
|
||||
|
||||
* [Measurements](index.md)<!-- @todo -->: the entire state of the world received
|
||||
by the client from the CARLA Simulator. These measurements contain the agent's position, orientation,
dynamic objects information, etc.
|
||||
* [Sensor Data](cameras_and_sensors.md): The measured data from defined sensors,
|
||||
such as Lidars or RGB cameras.
|
||||
* Directions: Information from the high level planner. Currently the planner sends
|
||||
a high level command from the following set: STRAIGHT, RIGHT, LEFT, NOTHING.
|
||||
* Target Position: The position and orientation of the target.
|
||||
|
||||
With all this information, the *run_step* function is expected
|
||||
to return a [vehicle control message](index.md)<!-- @todo -->, containing:
|
||||
steering value, throttle value, brake value, etc.
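
As an illustration of how the control message fields are filled, a slightly less trivial *run_step* could also react to the planner directions. The direction codes and numeric values below are placeholders, not part of the documented API:

```python
def run_step(self, measurements, sensor_data, directions, target):
    """
    Illustrative control step: go forward, slow down on turns.
    """
    control = VehicleControl()
    control.throttle = 0.9
    control.steer = 0.0
    control.brake = 0.0
    # React to the high level command; the codes for LEFT/RIGHT are assumed,
    # check carla.planner for the actual constants of your CARLA version.
    if directions in (3.0, 4.0):
        control.throttle = 0.5
    return control
```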
|
||||
|
||||
|
||||
|
||||
#### Defining the Experiment Suite
|
||||
|
||||
|
||||
To create an Experiment Suite class you need to perform
|
||||
the following steps:
|
||||
|
||||
* Create your custom class by inheriting the ExperimentSuite base class.
|
||||
* Define the test and train weather conditions to be used.
|
||||
* Build the *Experiment* objects.
|
||||
|
||||
|
||||
|
||||
##### Definition
|
||||
|
||||
|
||||
The defined set of experiments must derive the *ExperimentSuite* class
|
||||
as in the following code excerpt:
|
||||
|
||||
from carla.driving_benchmark.experiment import Experiment
|
||||
from carla.sensor import Camera
|
||||
from carla.settings import CarlaSettings
|
||||
|
||||
from .experiment_suite import ExperimentSuite
|
||||
|
||||
|
||||
class BasicExperimentSuite(ExperimentSuite):
|
||||
|
||||
##### Define test and train weather conditions
|
||||
|
||||
The user must select the weathers to be used. One should select the set
|
||||
of test weathers and the set of train weathers. This is defined as a
|
||||
class property as in the following example:
|
||||
|
||||
@property
|
||||
def train_weathers(self):
|
||||
return [1]
|
||||
@property
|
||||
def test_weathers(self):
|
||||
return [1]
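
As a reference point, a suite closer to the CoRL 2017 setup separates training and testing weathers; the ids below are the ones that suite uses (treat them as an example, not a requirement):

```python
@property
def train_weathers(self):
    # Example ids: Clear Noon, After Rain Noon, Heavy Rain Noon, Clear Sunset
    return [1, 3, 6, 8]

@property
def test_weathers(self):
    # Example ids: Cloudy After Rain, Soft Rain Sunset
    return [4, 14]
```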
|
||||
|
||||
|
||||
##### Building Experiments
|
||||
|
||||
The [experiments are composed of a *task* that is defined by a set of *poses*](benchmark_structure.md).
Let's start by selecting poses for one of the cities, for instance Town01.
First of all, we need to see all the possible positions. For that, with
a CARLA simulator running in a terminal, run:
|
||||
|
||||
python view_start_positions.py
|
||||
|
||||

|
||||
|
||||
|
||||
Now let's choose, for instance, 140 as the start position and 134
as the end position. These two positions can be visualized by running:
|
||||
|
||||
python view_start_positions.py --pos 140,134 --no-labels
|
||||
|
||||

|
||||
|
||||
Let's choose two more poses, one for going straight and another one for a simple turn.
Let's also choose three poses for Town02:
|
||||
|
||||
|
||||

|
||||
>Figure: The poses used on this basic *Experiment Suite* example. Poses are
|
||||
a tuple of start and end position of a goal-directed episode. Start positions are
|
||||
shown in Blue, end positions in Red. Left: Straight poses,
|
||||
where the goal is just straight away from the start position. Middle: One turn
|
||||
episode, where the goal is one turn away from the start point. Right: arbitrary position,
|
||||
the goal is far away from the start position, usually more than one turn.
|
||||
|
||||
|
||||
We define each of these pose sets as a task. In addition, we set
the number of dynamic objects for each of these tasks and repeat
the arbitrary position task so that it is also defined with dynamic
objects. The following code excerpt shows the final
defined poses and the number of dynamic objects for each task:
|
||||
|
||||
# Define the start/end position below as tasks
|
||||
poses_task0 = [[7, 3]]
|
||||
poses_task1 = [[138, 17]]
|
||||
poses_task2 = [[140, 134]]
|
||||
poses_task3 = [[140, 134]]
|
||||
# Concatenate all the tasks
|
||||
poses_tasks = [poses_task0, poses_task1, poses_task2, poses_task3]
|
||||
# Add dynamic objects to tasks
|
||||
vehicles_tasks = [0, 0, 0, 20]
|
||||
pedestrians_tasks = [0, 0, 0, 50]
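
The experiment-building excerpt further below also references a `camera` sensor that is added to the conditions. A minimal sketch of how such a camera could be defined with the 0.8.X sensor API is shown here; the attribute values are illustrative:

```python
from carla.sensor import Camera

# A forward-facing RGB camera; values below are illustrative.
camera = Camera('CameraRGB')
camera.set(FOV=100)
camera.set_image_size(800, 600)
camera.set_position(2.0, 0.0, 1.4)
camera.set_rotation(-15.0, 0.0, 0.0)
```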
|
||||
|
||||
|
||||
Finally, using the defined tasks, we can build the experiments
vector as shown in the following code excerpt:
|
||||
|
||||
|
||||
experiments_vector = []
|
||||
# The weathers used are the union of the test and train weathers
|
||||
for weather in used_weathers:
|
||||
for iteration in range(len(poses_tasks)):
|
||||
poses = poses_tasks[iteration]
|
||||
vehicles = vehicles_tasks[iteration]
|
||||
pedestrians = pedestrians_tasks[iteration]
|
||||
|
||||
conditions = CarlaSettings()
|
||||
conditions.set(
|
||||
SendNonPlayerAgentsInfo=True,
|
||||
NumberOfVehicles=vehicles,
|
||||
NumberOfPedestrians=pedestrians,
|
||||
WeatherId=weather
|
||||
|
||||
)
|
||||
# Add all the cameras that were set for this experiment
|
||||
conditions.add_sensor(camera)
|
||||
experiment = Experiment()
|
||||
experiment.set(
|
||||
Conditions=conditions,
|
||||
Poses=poses,
|
||||
Task=iteration,
|
||||
Repetitions=1
|
||||
)
|
||||
experiments_vector.append(experiment)
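
In an *ExperimentSuite* derivation this vector is normally built inside and returned from the `build_experiments` method, roughly as sketched below (the method name is taken from the linked basic_experiment_suite.py; double check it against your CARLA version):

```python
def build_experiments(self):
    # ... the excerpt above that fills experiments_vector goes here ...
    return experiments_vector
```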
|
||||
|
||||
|
||||
|
||||
|
||||
|
||||
The full code can be found at [basic_experiment_suite.py](https://github.com/carla-simulator/carla/blob/master/Deprecated/PythonClient/carla/driving_benchmark/experiment_suites/basic_experiment_suite.py)
|
||||
|
||||
|
||||
|
||||
#### Expected Results
|
||||
|
||||
First you need a CARLA simulator running with a [fixed time-step](configuring_the_simulation/#fixed-time-step),
so the results you obtain will be more or less reproducible.
For that, you should run the CARLA simulator as:
|
||||
|
||||
./CarlaUE4.sh /Game/Maps/<Town_name> -windowed -world-port=2000 -benchmark -fps=10
|
||||
|
||||
The example presented in this tutorial can be executed for Town01 as:
|
||||
|
||||
./driving_benchmark_example.py -c Town01
|
||||
|
||||
You should expect these results: [town01_basic_forward_results](benchmark_basic_results_town01)
|
||||
|
||||
For Town02:
|
||||
|
||||
./driving_benchmark_example.py -c Town02
|
||||
|
||||
You should expect these results: [town02_basic_forward_results](benchmark_basic_results_town02)
|
||||
|
||||
|
||||
|
|
@ -1,97 +0,0 @@
|
|||
|
||||
Driving Benchmark Performance Metrics
|
||||
------------------------------
|
||||
|
||||
This page explains the performance metrics module.
|
||||
This module is used to compute a summary of results based on the actions
|
||||
performed by the agent during the benchmark.
|
||||
|
||||
|
||||
### Provided performance metrics
|
||||
|
||||
The driving benchmark performance metrics module provides the following performance metrics:
|
||||
|
||||
* **Percentage of Success**: The percentage of episodes (poses from tasks),
|
||||
that the agent successfully completed.
|
||||
|
||||
* **Average Completion**: The average distance towards the goal that the
|
||||
agent was able to travel.
|
||||
|
||||
* **Off Road Intersection**: The number of times the agent goes out of the road.
|
||||
The intersection is only counted if the area of the vehicle outside
|
||||
of the road is bigger than a *threshold*.
|
||||
|
||||
* **Other Lane Intersection**: The number of times the agent goes to the other
|
||||
lane. The intersection is only counted if the area of the vehicle on the
|
||||
other lane is bigger than a *threshold*.
|
||||
|
||||
* **Vehicle Collisions**: The number of collisions with vehicles that had
|
||||
an impact bigger than a *threshold*.
|
||||
|
||||
* **Pedestrian Collisions**: The number of collisions with pedestrians
|
||||
that had an impact bigger than a *threshold*.
|
||||
|
||||
* **General Collisions**: The number of collisions with all other
|
||||
objects with an impact bigger than a *threshold*.
|
||||
|
||||
|
||||
### Executing and Setting Parameters
|
||||
|
||||
The metrics are computed as the final step of the benchmark,
and a summary of the results is stored in a JSON file.
Internally it is executed as follows:
|
||||
|
||||
```python
|
||||
metrics_object = Metrics(metrics_parameters)
|
||||
summary_dictionary = metrics_object.compute(path_to_execution_log)
|
||||
```
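
The returned summary is a plain Python dictionary, so it can be inspected directly. A generic way to print it, without assuming specific metric names, is:

```python
# Print every computed metric and its recorded values; the exact key names
# depend on the metrics configured for the benchmark run.
for metric_name, values in summary_dictionary.items():
    print(metric_name, ':', values)
```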
|
||||
|
||||
The Metrics' compute function
receives the full path to the execution log.
The Metrics class should be instantiated with some parameters.
The parameters are:
|
||||
|
||||
* **Threshold**: The threshold used by the metrics.
|
||||
* **Frames Recount**: The number of frames the agent needs to keep
committing the infraction, after it starts, for
it to be counted as another infraction.
* **Frames Skip**: The number of frames that are
skipped after a collision or an intersection starts.
|
||||
|
||||
These parameters are defined as a property of the *Experiment Suite*
base class and can be redefined in your
[custom *Experiment Suite*](benchmark_creating/#defining-the-experiment-suite).
|
||||
|
||||
The default parameters are:
|
||||
|
||||
|
||||
@property
|
||||
def metrics_parameters(self):
|
||||
"""
|
||||
Property to return the parameters for the metrics module
|
||||
Could be redefined depending on the needs of the user.
|
||||
"""
|
||||
return {
|
||||
|
||||
'intersection_offroad': {'frames_skip': 10,
|
||||
'frames_recount': 20,
|
||||
'threshold': 0.3
|
||||
},
|
||||
'intersection_otherlane': {'frames_skip': 10,
|
||||
'frames_recount': 20,
|
||||
'threshold': 0.4
|
||||
},
|
||||
'collision_other': {'frames_skip': 10,
|
||||
'frames_recount': 20,
|
||||
'threshold': 400
|
||||
},
|
||||
'collision_vehicles': {'frames_skip': 10,
|
||||
'frames_recount': 30,
|
||||
'threshold': 400
|
||||
},
|
||||
'collision_pedestrians': {'frames_skip': 5,
|
||||
'frames_recount': 100,
|
||||
'threshold': 300
|
||||
},
|
||||
|
||||
}
|
|
@ -1,69 +0,0 @@
|
|||
Driving Benchmark
|
||||
===============
|
||||
|
||||
The *driving benchmark* module is made
|
||||
to evaluate a driving controller (agent) and obtain
|
||||
metrics about its performance.
|
||||
|
||||
This module is mainly designed for:
|
||||
|
||||
* Users who work on developing autonomous driving agents and want
|
||||
to see how they perform in CARLA.
|
||||
|
||||
In this section you will learn:
|
||||
|
||||
* How to quickly get started and benchmark a trivial agent right away.
* The general implementation [architecture of the driving
benchmark module](benchmark_structure.md).
* [How to set up your agent and create your
own set of experiments](benchmark_creating.md).
* The [performance metrics used](benchmark_metrics.md).
|
||||
|
||||
|
||||
|
||||
|
||||
Getting Started
|
||||
----------------
|
||||
|
||||
As a way to familiarize yourself with the system we
|
||||
provide a trivial agent performing in a small
|
||||
set of experiments (Basic). To execute it, simply
|
||||
run:
|
||||
|
||||
|
||||
$ ./driving_benchmark_example.py
|
||||
|
||||
|
||||
Keep in mind that, to run the command above, you need a CARLA simulator
|
||||
running at localhost and on port 2000.
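
For reference, the simulator is typically launched in fixed time-step mode, as described in the [expected results](benchmark_creating/#expected-results) section, for example:

    $ ./CarlaUE4.sh /Game/Maps/Town01 -windowed -world-port=2000 -benchmark -fps=10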
|
||||
|
||||
|
||||
We already provide the same benchmark used in the [CoRL
|
||||
2017 paper](http://proceedings.mlr.press/v78/dosovitskiy17a/dosovitskiy17a.pdf).
|
||||
The CoRL 2017 experiment suite can be run with a trivial agent by
|
||||
running:
|
||||
|
||||
$ ./driving_benchmark_example.py --corl-2017
|
||||
|
||||
This benchmark example can be further configured.
Run the help command to see the available options.
|
||||
|
||||
$ ./driving_benchmark_example.py --help
|
||||
|
||||
One of the available options is to continue
from a previous benchmark execution. For example,
to continue a CoRL 2017 experiment with a log name of "driving_benchmark_test", run:
|
||||
|
||||
$ ./driving_benchmark_example.py --continue-experiment -n driving_benchmark_test --corl-2017
|
||||
|
||||
|
||||
!!! note
|
||||
If the log name already exists and you don't set it to continue, it
|
||||
will create another log under a different name.
|
||||
|
||||
When running the driving benchmark for the basic configuration
|
||||
you should [expect these results](benchmark_creating/#expected-results).
|
||||
|
||||
|
||||
|
||||
|
|
@ -1,66 +0,0 @@
|
|||
|
||||
Driving Benchmark Structure
|
||||
-------------------
|
||||
|
||||
The figure below shows the general structure of the driving
|
||||
benchmark module.
|
||||
|
||||
|
||||
|
||||

|
||||
>Figure: The general structure of the driving benchmark module.
|
||||
|
||||
|
||||
The *driving benchmark* is the module responsible for evaluating a certain
|
||||
*agent* in an *experiment suite*.
|
||||
|
||||
The *experiment suite* is an abstract module.
|
||||
Thus, the user must define their own derivation
|
||||
of *experiment suite*. We already provide the CoRL2017 suite and a simple
|
||||
*experiment suite* for testing.
|
||||
|
||||
The *experiment suite* is composed of a set of *experiments*.
|
||||
Each *experiment* contains a *task* that consists of a set of navigation
|
||||
episodes, represented by a set of *poses*.
|
||||
These *poses* are tuples containing the start and end points of an
|
||||
episode.
|
||||
|
||||
The *experiments* are also associated with a *condition*. A
|
||||
condition is represented by a [carla settings](carla_settings.md) object.
|
||||
The conditions specify simulation parameters such as: weather, sensor suite, number of
|
||||
vehicles and pedestrians, etc.
|
||||
|
||||
|
||||
The user should also derive an *agent* class. The *agent* is the active
part which will be evaluated on the driving benchmark.
|
||||
|
||||
The driving benchmark also contains two auxiliary modules.
|
||||
The *recording module* is used to keep track of all measurements and
|
||||
can be used to pause and continue a driving benchmark.
|
||||
The [*metrics module*](benchmark_metrics.md) is used to compute the performance metrics
|
||||
by using the recorded measurements.
|
||||
|
||||
|
||||
|
||||
|
||||
Example: CORL 2017
|
||||
----------------------
|
||||
|
||||
We already provide the CoRL 2017 experiment suite used to benchmark the
|
||||
agents for the [CoRL 2017 paper](http://proceedings.mlr.press/v78/dosovitskiy17a/dosovitskiy17a.pdf).
|
||||
|
||||
The CoRL 2017 experiment suite has the following composition:
|
||||
|
||||
* A total of 24 experiments for each CARLA town containing:
|
||||
* A task for going straight.
|
||||
* A task for making a single turn.
|
||||
* A task for going to an arbitrary position.
|
||||
* A task for going to an arbitrary position with dynamic objects.
|
||||
* Each task is composed of 25 poses that are repeated in 6 different weathers (Clear Noon, Heavy Rain Noon, Clear Sunset, After Rain Noon, Cloudy After Rain and Soft Rain Sunset).
|
||||
* The entire experiment set has 600 episodes.
|
||||
* The CoRL 2017 benchmark can take up to 24 hours to execute for Town01 and up to 15
|
||||
hours for Town02 depending on the agent performance.
|
||||
|
||||
|
||||
|
||||
|
|
@ -1,194 +1,170 @@
|
|||
<h1>Cameras and sensors</h1>
|
||||
|
||||
!!! important
|
||||
This document still refers to the 0.8.X API (stable version); this API is
currently located under _"Deprecated/PythonClient"_. The procedures stated
here may not apply to the latest versions, 0.9.0 or later. The latest versions
introduced significant changes in the API; we are still working on
documenting everything, sorry for the inconvenience.
|
||||

|
||||
|
||||
!!! important
|
||||
Since version 0.8.0 the positions of the sensors are specified in meters
|
||||
instead of centimeters. Always relative to the vehicle.
|
||||
Sensors are a special type of actor able to measure and stream data. All the
|
||||
sensors have a `listen` method that registers the callback function that will
|
||||
be called each time the sensor produces a new measurement. Sensors are typically
|
||||
attached to vehicles and produce data either each simulation update, or when a
|
||||
certain event is registered.
|
||||
|
||||
Cameras and sensors can be added to the player vehicle by defining them in the
|
||||
settings sent by the client on every new episode. This can be done either by
|
||||
filling a `CarlaSettings` Python class ([client_example.py][clientexamplelink])
|
||||
or by loading an INI settings file ([CARLA Settings example][settingslink]).
|
||||
The following Python excerpt shows how you would typically attach a sensor to a
|
||||
vehicle; in this case we are adding a dashboard HD camera to the vehicle.
|
||||
|
||||
This document describes the details of the different cameras/sensors currently
|
||||
available as well as the resulting images produced by them.
|
||||
```py
|
||||
# Find the blueprint of the sensor.
|
||||
blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
|
||||
# Modify the attributes of the blueprint to set image resolution and field of view.
|
||||
blueprint.set_attribute('image_size_x', '1920')
|
||||
blueprint.set_attribute('image_size_y', '1080')
|
||||
blueprint.set_attribute('fov', '110')
|
||||
# Provide the position of the sensor relative to the vehicle.
|
||||
transform = carla.Transform(carla.Location(x=0.8, z=1.7))
|
||||
# Tell the world to spawn the sensor, don't forget to attach it to your vehicle actor.
|
||||
sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle)
|
||||
# Subscribe to the sensor stream by providing a callback function, this function is
|
||||
# called each time a new image is generated by the sensor.
|
||||
sensor.listen(lambda data: do_something(data))
|
||||
```
|
||||
|
||||
Although we plan to extend the sensor suite of CARLA in the near future, at the
|
||||
moment there are four different sensors available.
|
||||
Note that each sensor has a different set of attributes and produces different
|
||||
type of data. However, the data produced by a sensor comes always tagged with a
|
||||
**frame number** and a **transform**. The frame number is used to identify the
|
||||
frame at which the measurement took place, the transform gives you the
|
||||
transformation in world coordinates of the sensor at that same frame.
|
||||
|
||||
* [Camera: Scene final](#camera-scene-final)
|
||||
* [Camera: Depth map](#camera-depth-map)
|
||||
* [Camera: Semantic segmentation](#camera-semantic-segmentation)
|
||||
* [Ray-cast based lidar](#ray-cast-based-lidar)
|
||||
Most sensor data objects, like images and lidar measurements, have a function
|
||||
for saving the measurements to disk.
|
||||
|
||||
!!! note
|
||||
The images are sent by the server as a BGRA array of bytes. The provided
|
||||
Python client retrieves the images in this format, it's up to the users to
|
||||
parse the images and convert them to the desired format. There are some
|
||||
examples in the Deprecated/PythonClient folder showing how to parse the
|
||||
images.
|
||||
This is the list of sensors currently available:
|
||||
|
||||
There is a fourth post-processing effect available for cameras, _None_, which
provides a view of the scene with no effect, not even scene lighting; we
will skip this one in the following descriptions.
|
||||
* [sensor.camera.rgb](#sensorcamerargb)
|
||||
* [sensor.camera.depth](#sensorcameradepth)
|
||||
* [sensor.camera.semantic_segmentation](#sensorcamerasemantic_segmentation)
|
||||
* [sensor.lidar.ray_cast](#sensorlidarray_cast)
|
||||
* [sensor.other.collision](#sensorothercollision)
|
||||
* [sensor.other.lane_detector](#sensorotherlane_detector)
|
||||
|
||||
We provide a tool to convert raw depth and semantic segmentation images in bulk
|
||||
to a more human readable palette of colors. It can be found at
|
||||
["Util/ImageConverter"][imgconvlink]. Alternatively, they can also be converted
|
||||
using the functions in the `carla.image_converter` Python module.
|
||||
sensor.camera.rgb
|
||||
-----------------
|
||||
|
||||
Note that all the sensor data comes with a _frame number_ stamp; this _frame
number_ matches the one received in the measurements. This is especially useful
when running the simulator in asynchronous mode to synchronize sensor data on
the client side.
|
||||

|
||||
|
||||
[clientexamplelink]: https://github.com/carla-simulator/carla/blob/master/Deprecated/PythonClient/client_example.py
|
||||
[settingslink]: https://github.com/carla-simulator/carla/blob/master/Docs/Example.CarlaSettings.ini
|
||||
[imgconvlink]: https://github.com/carla-simulator/carla/tree/master/Util/ImageConverter
|
||||
The "RGB" camera acts as a regular camera capturing images from the scene.
|
||||
|
||||
Camera: Scene final
|
||||
-------------------
|
||||
| Blueprint attribute | Type | Default | Description |
|
||||
| ------------------- | ---- | ------- | ----------- |
|
||||
| `image_size_x` | int | 800 | Image width in pixels |
|
||||
| `image_size_y` | int | 600 | Image height in pixels |
|
||||
| `fov` | float | 90.0 | Field of view in degrees |
|
||||
| `enable_postprocess_effects` | bool | True | Whether the post-process effects in the scene affect the image |
|
||||
|
||||

|
||||
|
||||
The "scene final" camera provides a view of the scene after applying some
|
||||
post-processing effects to create a more realistic feel. These are actually
|
||||
stored in the Level, in an actor called [PostProcessVolume][postprolink] and not
|
||||
in the Camera. We use the following post process effects:
|
||||
If `enable_postprocess_effects` is enabled, a set of post-process effects is
applied to the image to create a more realistic feel:
|
||||
|
||||
* **Vignette** Darkens the border of the screen.
|
||||
* **Grain jitter** Adds a bit of noise to the render.
|
||||
* **Bloom** Intense lights burn the area around them.
|
||||
* **Auto exposure** Modifies the image gamma to simulate the eye adaptation to
darker or brighter areas.
|
||||
* **Lens flares** Simulates the reflection of bright objects on the lens.
|
||||
* **Depth of field** Blurs objects near or very far away from the camera.
|
||||
|
||||
[postprolink]: https://docs.unrealengine.com/latest/INT/Engine/Rendering/PostProcessEffects/
|
||||
This sensor produces [`carla.Image`](python_api.md#carlaimagecarlasensordata)
|
||||
objects.
|
||||
|
||||
<h6>Python</h6>
|
||||
| Sensor data attribute | Type | Description |
|
||||
| --------------------- | ---- | ----------- |
|
||||
| `frame_number` | int | Frame count when the measurement took place |
|
||||
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
||||
| `width` | int | Image width in pixels |
|
||||
| `height` | int | Image height in pixels |
|
||||
| `fov` | float | Field of view in degrees |
|
||||
| `raw_data` | bytes | Array of BGRA 32-bit pixels |
|
||||
|
||||
```py
|
||||
camera = carla.sensor.Camera('MyCamera', PostProcessing='SceneFinal')
|
||||
camera.set(FOV=90.0)
|
||||
camera.set_image_size(800, 600)
|
||||
camera.set_position(x=0.30, y=0, z=1.30)
|
||||
camera.set_rotation(pitch=0, yaw=0, roll=0)
|
||||
sensor.camera.depth
|
||||
-------------------
|
||||
|
||||
carla_settings.add_sensor(camera)
|
||||

|
||||
|
||||
The "Depth" camera provides a view over the scene codifying the distance of each
|
||||
pixel to the camera (also known as **depth buffer** or **z-buffer**).
|
||||
|
||||
| Blueprint attribute | Type | Default | Description |
|
||||
| ------------------- | ---- | ------- | ----------- |
|
||||
| `image_size_x` | int | 800 | Image width in pixels |
|
||||
| `image_size_y` | int | 600 | Image height in pixels |
|
||||
| `fov` | float | 90.0 | Field of view in degrees |
|
||||
|
||||
This sensor produces [`carla.Image`](python_api.md#carlaimagecarlasensordata)
|
||||
objects.
|
||||
|
||||
| Sensor data attribute | Type | Description |
|
||||
| --------------------- | ---- | ----------- |
|
||||
| `frame_number` | int | Frame count when the measurement took place |
|
||||
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
||||
| `width` | int | Image width in pixels |
|
||||
| `height` | int | Image height in pixels |
|
||||
| `fov` | float | Field of view in degrees |
|
||||
| `raw_data` | bytes | Array of BGRA 32-bit pixels |
|
||||
|
||||
|
||||
The image codifies the depth in 3 channels of the RGB color space, from less to
|
||||
more significant bytes: R -> G -> B. The actual distance in meters can be
|
||||
decoded with
|
||||
|
||||
```
|
||||
normalized = (R + G * 256 + B * 256 * 256) / (256 * 256 * 256 - 1)
|
||||
in_meters = 1000 * normalized
|
||||
```
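
A sketch of this decoding for a whole image, assuming the BGRA byte layout described above and using numpy (which is not part of the CARLA API), could look like this:

```python
import numpy as np

def depth_to_meters(image):
    """Convert a depth image (BGRA raw_data) to per-pixel distance in meters."""
    array = np.frombuffer(image.raw_data, dtype=np.uint8)
    array = array.reshape((image.height, image.width, 4)).astype(np.float32)
    b, g, r = array[:, :, 0], array[:, :, 1], array[:, :, 2]
    normalized = (r + g * 256.0 + b * 256.0 * 256.0) / (256.0 ** 3 - 1)
    return 1000.0 * normalized  # the far plane is set at 1000 meters
```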
|
||||
|
||||
<h6>CarlaSettings.ini</h6>
|
||||
sensor.camera.semantic_segmentation
|
||||
-----------------------------------
|
||||
|
||||
```ini
|
||||
[CARLA/Sensor/MyCamera]
|
||||
SensorType=CAMERA
|
||||
PostProcessing=SceneFinal
|
||||
ImageSizeX=800
|
||||
ImageSizeY=600
|
||||
FOV=90
|
||||
PositionX=0.30
|
||||
PositionY=0
|
||||
PositionZ=1.30
|
||||
RotationPitch=0
|
||||
RotationRoll=0
|
||||
RotationYaw=0
|
||||
```
|
||||

|
||||
|
||||
Camera: Depth map
|
||||
-----------------
|
||||
|
||||

|
||||
|
||||
The "depth map" camera provides an image with 24 bit floating precision point
|
||||
codified in the 3 channels of the RGB color space. The order from less to more
|
||||
significant bytes is R -> G -> B.
|
||||
|
||||
| R | G | B | int24 | |
|
||||
|----------|----------|----------|----------|------------|
|
||||
| 00000000 | 00000000 | 00000000 | 0 | min (near) |
|
||||
| 11111111 | 11111111 | 11111111 | 16777215 | max (far) |
|
||||
|
||||
Our max render distance (far) is 1km.
|
||||
|
||||
1. To decode our depth, first we get the int24.
|
||||
|
||||
R + G*256 + B*256*256
|
||||
|
||||
2. Then normalize it in the range [0, 1].
|
||||
|
||||
Ans / ( 256*256*256 - 1 )
|
||||
|
||||
3. And finally multiply by the units that we want to get. We have set the far plane at 1000 metres.
|
||||
|
||||
Ans * far
|
||||
|
||||
The generated "depth map" images are usually converted to a logarithmic
|
||||
grayscale for display. A point cloud can also be extracted from depth images as
|
||||
seen in "Deprecated/PythonClient/point_cloud_example.py".
|
||||
|
||||
<h6>Python</h6>
|
||||
|
||||
```py
|
||||
camera = carla.sensor.Camera('MyCamera', PostProcessing='Depth')
|
||||
camera.set(FOV=90.0)
|
||||
camera.set_image_size(800, 600)
|
||||
camera.set_position(x=0.30, y=0, z=1.30)
|
||||
camera.set_rotation(pitch=0, yaw=0, roll=0)
|
||||
|
||||
carla_settings.add_sensor(camera)
|
||||
```
|
||||
|
||||
<h6>CarlaSettings.ini</h6>
|
||||
|
||||
```ini
|
||||
[CARLA/Sensor/MyCamera]
|
||||
SensorType=CAMERA
|
||||
PostProcessing=Depth
|
||||
ImageSizeX=800
|
||||
ImageSizeY=600
|
||||
FOV=90
|
||||
PositionX=0.30
|
||||
PositionY=0
|
||||
PositionZ=1.30
|
||||
RotationPitch=0
|
||||
RotationRoll=0
|
||||
RotationYaw=0
|
||||
```
|
||||
|
||||
Camera: Semantic segmentation
|
||||
-----------------------------
|
||||
|
||||

|
||||
|
||||
The "semantic segmentation" camera classifies every object in the view by
|
||||
The "Semantic Segmentation" camera classifies every object in the view by
|
||||
displaying it in a different color according to the object class. E.g.,
|
||||
pedestrians appear in a different color than vehicles.
|
||||
|
||||
| Blueprint attribute | Type | Default | Description |
|
||||
| ------------------- | ---- | ------- | ----------- |
|
||||
| `image_size_x` | int | 800 | Image width in pixels |
|
||||
| `image_size_y` | int | 600 | Image height in pixels |
|
||||
| `fov` | float | 90.0 | Field of view in degrees |
|
||||
|
||||
This sensor produces [`carla.Image`](python_api.md#carlaimagecarlasensordata)
|
||||
objects.
|
||||
|
||||
| Sensor data attribute | Type | Description |
|
||||
| --------------------- | ---- | ----------- |
|
||||
| `frame_number` | int | Frame count when the measurement took place |
|
||||
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
||||
| `width` | int | Image width in pixels |
|
||||
| `height` | int | Image height in pixels |
|
||||
| `fov` | float | Field of view in degrees |
|
||||
| `raw_data` | bytes | Array of BGRA 32-bit pixels |
|
||||
|
||||
The server provides an image with the tag information **encoded in the red
|
||||
channel**. A pixel with a red value of x displays an object with tag x. The
|
||||
following tags are currently available:
|
||||
|
||||
Value | Tag
|
||||
-----:|:-----
|
||||
0 | None
|
||||
1 | Buildings
|
||||
2 | Fences
|
||||
3 | Other
|
||||
4 | Pedestrians
|
||||
5 | Poles
|
||||
6 | RoadLines
|
||||
7 | Roads
|
||||
8 | Sidewalks
|
||||
9 | Vegetation
|
||||
10 | Vehicles
|
||||
11 | Walls
|
||||
12 | TrafficSigns
|
||||
|
||||
| Value | Tag | Converted color |
|
||||
| -----:|:------------ | --------------- |
|
||||
| 0 | Unlabeled | ( 0, 0, 0) |
|
||||
| 1 | Building | ( 70, 70, 70) |
|
||||
| 2 | Fence | (190, 153, 153) |
|
||||
| 3 | Other | (250, 170, 160) |
|
||||
| 4 | Pedestrian | (220, 20, 60) |
|
||||
| 5 | Pole | (153, 153, 153) |
|
||||
| 6 | Road line | (157, 234, 50) |
|
||||
| 7 | Road | (128, 64, 128) |
|
||||
| 8 | Sidewalk | (244, 35, 232) |
|
||||
| 9 | Vegetation | (107, 142, 35) |
|
||||
| 10 | Car | ( 0, 0, 142) |
|
||||
| 11 | Wall | (102, 102, 156) |
|
||||
| 12 | Traffic sign | (220, 220, 0) |
|
||||
|
||||
This is implemented by tagging every object in the scene beforehand (either at
begin play or on spawn). The objects are classified by their relative file path
|
||||
|
@ -202,91 +178,102 @@ _"Unreal/CarlaUE4/Content/Static/Pedestrians"_ folder it's tagged as pedestrian.
|
|||
and its corresponding filepath check inside `GetLabelByFolderName()`
|
||||
function in "Tagger.cpp".
|
||||
|
||||
<h6>Python</h6>
|
||||
|
||||
```py
|
||||
camera = carla.sensor.Camera('MyCamera', PostProcessing='SemanticSegmentation')
|
||||
camera.set(FOV=90.0)
|
||||
camera.set_image_size(800, 600)
|
||||
camera.set_position(x=0.30, y=0, z=1.30)
|
||||
camera.set_rotation(pitch=0, yaw=0, roll=0)
|
||||
|
||||
carla_settings.add_sensor(camera)
|
||||
```
|
||||
|
||||
<h6>CarlaSettings.ini</h6>
|
||||
|
||||
```ini
|
||||
[CARLA/Sensor/MyCamera]
|
||||
SensorType=CAMERA
|
||||
PostProcessing=SemanticSegmentation
|
||||
ImageSizeX=800
|
||||
ImageSizeY=600
|
||||
FOV=90
|
||||
PositionX=0.30
|
||||
PositionY=0
|
||||
PositionZ=1.30
|
||||
RotationPitch=0
|
||||
RotationRoll=0
|
||||
RotationYaw=0
|
||||
```
|
||||
|
||||
Ray-cast based Lidar
|
||||
--------------------
|
||||
sensor.lidar.ray_cast
|
||||
---------------------
|
||||
|
||||

|
||||
|
||||
A rotating Lidar implemented with ray-casting. The points are computed by adding
|
||||
a laser for each channel distributed in the vertical FOV, then the rotation is
|
||||
simulated computing the horizontal angle that the Lidar rotated this frame, and
|
||||
doing a ray-cast for each point that each laser was supposed to generate this
|
||||
frame; `PointsPerSecond / (FPS * Channels)`.
|
||||
This sensor simulates a rotating Lidar implemented using ray-casting. The points
|
||||
are computed by adding a laser for each channel distributed in the vertical FOV,
|
||||
then the rotation is simulated computing the horizontal angle that the Lidar
|
||||
rotated this frame, and doing a ray-cast for each point that each laser was
|
||||
supposed to generate this frame; `points_per_second / (FPS * channels)`.
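
For example, with the default attributes (56000 points per second, 32 channels) and a simulator running at a fixed 10 FPS, each laser casts `56000 / (10 * 32) = 175` points per frame.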
|
||||
|
||||
Each frame the server sends a packet with all the points generated during a
|
||||
`1/FPS` interval. During this interval the physics is not updated, so all the
points in a packet reflect the same "static picture" of the scene.
|
||||
| Blueprint attribute | Type | Default | Description |
|
||||
| -------------------- | ---- | ------- | ----------- |
|
||||
| `channels` | int | 32 | Number of lasers |
|
||||
| `range` | float | 1000 | Maximum measurement distance in meters |
|
||||
| `points_per_second` | int | 56000 | Points generated by all lasers per second |
|
||||
| `rotation_frequency` | float | 10.0 | Lidar rotation frequency in Hz |
|
||||
| `upper_fov` | float | 10.0 | Angle in degrees of the uppermost laser |
| `lower_fov` | float | -30.0 | Angle in degrees of the lowermost laser |
|
||||
|
||||
The received `LidarMeasurement` object contains the following information
|
||||
This sensor produces
|
||||
[`carla.LidarMeasurement`](python_api.md#carlalidarmeasurementcarlasensordata)
|
||||
objects.
|
||||
|
||||
Key | Type | Description
|
||||
-------------------------- | ---------- | ------------
|
||||
horizontal_angle | float | Angle in XY plane of the lidar this frame (in degrees).
|
||||
channels | uint32 | Number of channels (lasers) of the lidar.
|
||||
point_count_by_channel | uint32 | Number of points per channel captured this frame.
|
||||
point_cloud | PointCloud | Captured points this frame.
|
||||
| Sensor data attribute | Type | Description |
|
||||
| -------------------------- | ---------- | ----------- |
|
||||
| `frame_number` | int | Frame count when the measurement took place |
|
||||
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
||||
| `horizontal_angle` | float | Angle in XY plane of the lidar this frame (in degrees) |
|
||||
| `channels` | int | Number of channels (lasers) of the lidar |
|
||||
| `get_point_count(channel)` | int | Number of points per channel captured this frame |
|
||||
| `raw_data` | bytes | Array of 32-bits floats (XYZ of each point) |
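
A common client-side operation is to view this raw buffer as an N x 3 array of points. A sketch using numpy (not part of the CARLA API) follows; it assumes the XYZ-only layout described above, newer CARLA versions may append extra fields per point:

```python
import numpy as np

def lidar_to_array(lidar_measurement):
    """Reinterpret the raw float buffer as an (N, 3) array of XYZ points."""
    points = np.frombuffer(lidar_measurement.raw_data, dtype=np.float32)
    return np.reshape(points, (-1, 3))
```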
|
||||
|
||||
<h6>Python</h6>
|
||||
The object also acts as a Python list of `carla.Location`
|
||||
|
||||
```py
|
||||
lidar = carla.sensor.Lidar('MyLidar')
|
||||
lidar.set(
|
||||
Channels=32,
|
||||
Range=50,
|
||||
PointsPerSecond=100000,
|
||||
RotationFrequency=10,
|
||||
UpperFovLimit=10,
|
||||
LowerFovLimit=-30)
|
||||
lidar.set_position(x=0, y=0, z=1.40)
|
||||
lidar.set_rotation(pitch=0, yaw=0, roll=0)
|
||||
|
||||
carla_settings.add_sensor(lidar)
|
||||
for location in lidar_measurement:
|
||||
print(location)
|
||||
```
|
||||
|
||||
<h6>CarlaSettings.ini</h6>
|
||||
A Lidar measurement contains a packet with all the points generated during a
|
||||
`1/FPS` interval. During this interval the physics is not updated so all the
|
||||
points in a measurement reflect the same "static picture" of the scene.
|
||||
|
||||
```ini
|
||||
[CARLA/Sensor/MyLidar]
|
||||
SensorType=LIDAR_RAY_CAST
|
||||
Channels=32
|
||||
Range=50
|
||||
PointsPerSecond=100000
|
||||
RotationFrequency=10
|
||||
UpperFOVLimit=10
|
||||
LowerFOVLimit=-30
|
||||
PositionX=0
|
||||
PositionY=0
|
||||
PositionZ=1.40
|
||||
RotationPitch=0
|
||||
RotationYaw=0
|
||||
RotationRoll=0
|
||||
```
|
||||
!!! tip
|
||||
Running the simulator at a
[fixed time-step](configuring_the_simulation.md#fixed-time-step) it is
possible to tune the horizontal angle of each measurement. By adjusting the
frame rate and the rotation frequency it is possible, for instance, to get a
360 view each measurement.
|
||||
|
||||
sensor.other.collision
|
||||
----------------------
|
||||
|
||||
This sensor, when attached to an actor, registers an event each time the
actor collides against something in the world. This sensor does not have any
configurable attribute.
|
||||
|
||||
This sensor produces a
|
||||
[`carla.CollisionEvent`](python_api.md#carlacollisioneventcarlasensordata)
|
||||
object for each collision registered.
|
||||
|
||||
| Sensor data attribute | Type | Description |
|
||||
| ---------------------- | ----------- | ----------- |
|
||||
| `frame_number` | int | Frame count when the measurement took place |
|
||||
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
||||
| `actor` | carla.Actor | Actor that measured the collision ("self" actor) |
|
||||
| `other_actor` | carla.Actor | Actor against whom we collide |
|
||||
| `normal_impulse` | carla.Vector3D | Normal impulse result of the collision |
|
||||
|
||||
Note that several collision events might be registered during a single
|
||||
simulation update.
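
Attaching and listening to this sensor follows the same pattern shown for the RGB camera at the top of this document. A brief sketch, with `world` and `my_vehicle` as in that earlier example:

```python
# Spawn a collision sensor attached to an existing vehicle actor.
blueprint = world.get_blueprint_library().find('sensor.other.collision')
collision_sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=my_vehicle)

# The callback receives a carla.CollisionEvent for every registered collision.
collision_sensor.listen(
    lambda event: print('collision with', event.other_actor.type_id))
```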
|
||||
|
||||
sensor.other.lane_detector
|
||||
--------------------------
|
||||
|
||||
> _This sensor is a work in progress, currently very limited._
|
||||
|
||||
This sensor, when attached to an actor, registers an event each time the
actor crosses a lane marking. This sensor is somewhat special, as it works fully
on the client-side. The lane detector uses the road data of the active map to
|
||||
determine whether a vehicle is invading another lane. This information is based
|
||||
on the OpenDrive file provided by the map, therefore it is subject to the
|
||||
fidelity of the OpenDrive description. In some places there might be
|
||||
discrepancies between the lanes visible by the cameras and the lanes registered
|
||||
by this sensor.
|
||||
|
||||
This sensor does not have any configurable attribute.
|
||||
|
||||
This sensor produces a
|
||||
[`carla.LaneInvasionEvent`](python_api.md#carlalaneinvasioneventcarlasensordata)
|
||||
object for each lane marking crossed by the actor.
|
||||
|
||||
| Sensor data attribute | Type | Description |
|
||||
| ----------------------- | ----------- | ----------- |
|
||||
| `frame_number` | int | Frame count when the measurement took place |
|
||||
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
||||
| `actor` | carla.Actor | Actor that invaded another lane ("self" actor) |
|
||||
| `crossed_lane_markings` | carla.LaneMarking list | List of lane markings that have been crossed |
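
As with the collision sensor, attaching and listening follows the usual pattern; a brief sketch, with `world` and `my_vehicle` as in the camera example at the top of this document:

```python
# Spawn a lane detector attached to an existing vehicle actor.
blueprint = world.get_blueprint_library().find('sensor.other.lane_detector')
lane_sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=my_vehicle)

# The callback receives a carla.LaneInvasionEvent per crossing.
lane_sensor.listen(
    lambda event: print('crossed', len(event.crossed_lane_markings), 'lane markings'))
```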
|
||||
|
|
|
@ -1,47 +0,0 @@
|
|||
CARLA Design
|
||||
============
|
||||
|
||||
> _This document is a work in progress and might be incomplete._
|
||||
|
||||
CARLA is composed of the following modules:
|
||||
|
||||
* Client side
|
||||
- Python client API: "Deprecated/PythonClient/carla"
|
||||
* Server side
|
||||
- CarlaUE4 Unreal Engine project: "Unreal/CarlaUE4"
|
||||
- Carla plugin for Unreal Engine: "Unreal/CarlaUE4/Plugins/Carla"
|
||||
- CarlaServer: "Util/CarlaServer"
|
||||
|
||||
!!! tip
|
||||
Documentation for the C++ code can be generated by running
|
||||
[Doxygen](http://www.doxygen.org) in the main folder of CARLA project.
|
||||
|
||||
Python client API
|
||||
-----------------
|
||||
|
||||
The client API provides a Python module for communicating with the CARLA server.
|
||||
In the folder "Deprecated/PythonClient", we provide several examples for scripting a CARLA
|
||||
client using the "carla" module.
|
||||
|
||||
CarlaUE4 Unreal Engine project
|
||||
------------------------------
|
||||
|
||||
The Unreal project "CarlaUE4" contains all the assets and scenes for generating
|
||||
the CARLA binary. It uses the tools provided by the Carla plugin to assemble the
|
||||
cities and behavior of the agents in the scene.
|
||||
|
||||
Carla plugin for Unreal Engine
|
||||
------------------------------
|
||||
|
||||
The Carla plugin contains all the functionality of CARLA. We tried to keep this
|
||||
functionality separated from the assets, so the functionality in this plugin can
|
||||
be used as much as possible in any Unreal project.
|
||||
|
||||
It uses "CarlaServer" library for the networking communication.
|
||||
|
||||
CarlaServer
|
||||
-----------
|
||||
|
||||
External library for the networking communications.
|
||||
|
||||
See ["CarlaServer"](carla_server.md) for implementation details.
|
|
@ -1,114 +0,0 @@
|
|||
<h1>CARLA Server</h1>
|
||||
|
||||
Build
|
||||
-----
|
||||
|
||||
Some scripts are provided for building and testing CarlaServer on Linux:
|
||||
|
||||
$ ./Setup.sh
|
||||
$ make
|
||||
$ make check
|
||||
|
||||
The setup script downloads and compiles all the required dependencies. The
|
||||
Makefile calls CMake to build CarlaServer and installs it under "Util/Install".
|
||||
|
||||
Protocol
|
||||
--------
|
||||
|
||||
All the messages are prepended by a 32-bit unsigned integer (little-endian)
|
||||
indicating the size of the coming message.
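
As an illustration, reading one such length-prefixed message from a connected TCP socket in Python could look like the sketch below; the provided Python client already implements this logic, so this is only to clarify the framing:

```python
import struct

def read_message(sock):
    """Read one length-prefixed message from a connected socket."""
    header = _read_exactly(sock, 4)
    (length,) = struct.unpack('<L', header)  # 32-bit little-endian length
    return _read_exactly(sock, length)

def _read_exactly(sock, n):
    data = b''
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise RuntimeError('connection closed')
        data += chunk
    return data
```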
|
||||
|
||||
Three consecutive ports are used:
|
||||
|
||||
* world-port (default 2000)
|
||||
* measurements-port = world-port + 1
|
||||
* control-port = world-port + 2
|
||||
|
||||
each of these ports has an associated thread that sends/reads data
|
||||
asynchronously.
|
||||
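As a rough illustration of the framing described above, a client could read one length-prefixed message like this (a sketch only; decoding the protobuf payload is not shown):

```py
# Sketch: read one length-prefixed message from a connected TCP socket.
# The 4-byte little-endian unsigned header carries the payload size.
import struct

def recv_exactly(sock, n):
    data = b''
    while len(data) < n:
        chunk = sock.recv(n - len(data))
        if not chunk:
            raise ConnectionError('socket closed while reading')
        data += chunk
    return data

def read_message(sock):
    (size,) = struct.unpack('<I', recv_exactly(sock, 4))
    return recv_exactly(sock, size)  # raw protobuf payload, decoded elsewhere
```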
|
||||
<h4>World thread</h4>
|
||||
|
||||
Server reads one, writes one. Always protobuf messages.
|
||||
|
||||
[client] RequestNewEpisode
|
||||
[server] SceneDescription
|
||||
[client] EpisodeStart
|
||||
[server] EpisodeReady
|
||||
...repeat...
|
||||
|
||||
<h4>Measurements thread</h4>
|
||||
|
||||
Server only writes: first the measurements message, then the bulk of raw images.
|
||||
|
||||
[server] Measurements
|
||||
[server] raw images
|
||||
...repeat...
|
||||
|
||||
Every image is an array of
|
||||
|
||||
[frame_number, width, height, type, FOV, color[0], color[1], ...]
|
||||
|
||||
of types
|
||||
|
||||
[uint64, uint32, uint32, uint32, float32, uint32, uint32, ...]
|
||||
|
||||
where FOV is the horizontal field of view of the camera as float, each color is
|
||||
an [FColor][fcolorlink] (BGRA) as stored in Unreal Engine, and the possible
|
||||
types of images are
|
||||
|
||||
type = 0 None (RGB without any post-processing)
|
||||
type = 1 SceneFinal (RGB with post-processing present at the scene)
|
||||
type = 2 Depth (Depth Map)
|
||||
type = 3 SemanticSegmentation (Semantic Segmentation)
|
||||
|
||||
The measurements message is explained in detail [here](measurements.md).
|
||||
|
||||
[fcolorlink]: https://docs.unrealengine.com/latest/INT/API/Runtime/Core/Math/FColor/index.html "FColor API Documentation"
|
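A sketch of how one of these raw image buffers could be decoded, following the layout above (numpy is used here only for convenience and is not part of the protocol):

```py
# Sketch: decode one raw image buffer as described above. Header layout:
# uint64 frame_number, uint32 width, uint32 height, uint32 type, float32 FOV,
# followed by width*height BGRA pixels of 4 bytes each.
import struct
import numpy as np

HEADER_FMT = '<QIIIf'                      # little-endian, matches the field list
HEADER_SIZE = struct.calcsize(HEADER_FMT)  # 24 bytes

def decode_image(buf):
    frame_number, width, height, image_type, fov = struct.unpack_from(HEADER_FMT, buf)
    pixels = np.frombuffer(buf, dtype=np.uint8, offset=HEADER_SIZE)
    bgra = pixels.reshape((height, width, 4))  # FColor is stored as BGRA
    return frame_number, image_type, fov, bgra
```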
||||
|
||||
<h4>Control thread</h4>
|
||||
|
||||
Server only reads; the client sends a Control message every frame.
|
||||
|
||||
[client] Control
|
||||
...repeat...
|
||||
|
||||
In the synchronous mode, the server halts execution each frame until the Control
|
||||
message is received.
|
||||
|
||||
C API
|
||||
-----
|
||||
|
||||
The library is encapsulated behind a single include file in C,
|
||||
["carla/carla_server.h"][carlaserverhlink].
|
||||
|
||||
This file contains the basic interface for reading and writing messages to the
|
||||
client, hiding the networking and multi-threading part. Most of the functions
|
||||
have a time-out parameter and block until the corresponding asynchronous
|
||||
operation is completed or the time-out is met. Set a time-out of 0 to get a
|
||||
non-blocking call.
|
||||
|
||||
A CarlaServer instance is created with `carla_make_server()` and should be
|
||||
destroyed after use with `carla_server_free(ptr)`.
|
||||
|
||||
[carlaserverhlink]: https://github.com/carla-simulator/carla/blob/master/Util/CarlaServer/include/carla/carla_server.h
|
||||
|
||||
Design
|
||||
------
|
||||
|
||||
The C API takes care of dispatching the request to the corresponding server.
|
||||
There are three asynchronous servers, each of them running on its own thread.
|
||||
|
||||

|
||||
|
||||
Conceptually there are two servers, the _World Server_ and the _Agent Server_.
|
||||
The _World Server_ controls the initialization of episodes. A new episode is
|
||||
started every time the World Server receives a RequestNewEpisode
|
||||
message. Once the episode is ready, the World Server launches the Agent Server.
|
||||
The _Agent Server_ has two threads, one for sending the streaming of the
|
||||
measurements and another for receiving the control. Both agent threads
|
||||
communicate with the main thread through a lock-free double-buffer to speed up
|
||||
the streaming of messages and images.
|
||||
|
||||
The encoding of the messages (protobuf) and the networking operations are
|
||||
executed asynchronously.
|
|
@ -1,65 +0,0 @@
|
|||
<h1>CARLA Settings</h1>
|
||||
|
||||
> _This document is a work in progress and might be incomplete._
|
||||
|
||||
!!! important
|
||||
This document still refers to the 0.8.X API (stable version). The
|
||||
procedures stated here may not apply to the latest versions, 0.9.0 or later.
|
||||
Latest versions introduced significant changes in the API; we are still
|
||||
working on documenting everything, sorry for the inconvenience.
|
||||
|
||||
CarlaSettings.ini
|
||||
-----------------
|
||||
|
||||
CARLA reads its settings from a "CarlaSettings.ini" file. This file controls
|
||||
most aspects of the simulation, and it is loaded every time a new episode is
|
||||
started (every time the level is loaded).
|
||||
|
||||
Settings are loaded following the hierarchy below, with values later in the
|
||||
hierarchy overriding earlier values.
|
||||
|
||||
1. `{CarlaFolder}/Unreal/CarlaUE4/Config/CarlaSettings.ini`.
|
||||
2. File provided by command-line argument `-carla-settings="Path/To/CarlaSettings.ini"`.
|
||||
3. Other command-line arguments like `-carla-port`.
|
||||
4. Settings file sent by the client on every new episode.
|
||||
|
||||
Take a look at the [CARLA Settings example][settingslink].
|
||||
|
||||
[settingslink]: https://github.com/carla-simulator/carla/blob/master/Docs/Example.CarlaSettings.ini
|
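Item 4 of the hierarchy above means that a client can also send its own settings when it requests a new episode. A minimal sketch with the 0.8.X Python client might look like the following; the exact keys and helper classes are taken from the bundled examples and should be double-checked against the "carla" module:

```py
# Sketch only: build the settings on the client side and send them when
# starting a new episode (assumes the 0.8.X "carla" module is importable).
from carla.client import make_carla_client
from carla.settings import CarlaSettings

settings = CarlaSettings()
settings.set(
    NumberOfVehicles=30,
    NumberOfPedestrians=40,
    WeatherId=6)  # HardRainNoon, see the preset list below

with make_carla_client('localhost', 2000) as client:
    client.load_settings(settings)  # overrides the server-side values
    client.start_episode(0)         # start at player position 0
```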
||||
|
||||
Weather presets
|
||||
---------------
|
||||
|
||||
The weather and lighting conditions can be chosen from a set of predefined
|
||||
settings. To select one, set the `WeatherId` key in CarlaSettings.ini. The
|
||||
following presets are available
|
||||
|
||||
* 0 - Default
|
||||
* 1 - ClearNoon
|
||||
* 2 - CloudyNoon
|
||||
* 3 - WetNoon
|
||||
* 4 - WetCloudyNoon
|
||||
* 5 - MidRainyNoon
|
||||
* 6 - HardRainNoon
|
||||
* 7 - SoftRainNoon
|
||||
* 8 - ClearSunset
|
||||
* 9 - CloudySunset
|
||||
* 10 - WetSunset
|
||||
* 11 - WetCloudySunset
|
||||
* 12 - MidRainSunset
|
||||
* 13 - HardRainSunset
|
||||
* 14 - SoftRainSunset
|
||||
|
||||
E.g., to set the weather to hard rain at noon, add the following to CarlaSettings.ini
|
||||
|
||||
```
|
||||
[CARLA/LevelSettings]
|
||||
WeatherId=6
|
||||
```
|
||||
|
||||
Simulator command-line options
|
||||
------------------------------
|
||||
|
||||
* `-carla-settings="Path/To/CarlaSettings.ini"` Load settings from the given INI file. See Example.CarlaSettings.ini.
|
||||
* `-carla-port=N` Listen for client connections at port N, streaming port is set to N+1.
|
||||
* `-carla-no-hud` Do not display the HUD by default.
|
|
@ -4,8 +4,6 @@ Before you start running your own experiments there are few details to take into
|
|||
account at the time of configuring your simulation. In this document we cover
|
||||
the most important ones.
|
||||
|
||||
For the full list of settings please see [CARLA Settings](carla_settings.md).
|
||||
|
||||
Fixed time-step
|
||||
---------------
|
||||
|
||||
|
@ -13,11 +11,11 @@ The time-step is the _simulation-time_ elapsed between two steps of the
|
|||
simulation. In video-games, this _simulation-time_ is almost always adjusted to
|
||||
real time for better realism. This is achieved by having a **variable
|
||||
time-step** that adjusts the simulation to keep up with real-time. In
|
||||
simulations however, it is better to detach the _simulation-time_ from real-
|
||||
time, and let the simulation run as fast as possible using a **fixed time-
|
||||
step**. Doing so, we are not only able to simulate longer periods in less time,
|
||||
but also gain repeatability by reducing the float-point arithmetic errors that a
|
||||
variable time-step introduces.
|
||||
simulations however, it is better to detach the _simulation-time_ from
|
||||
real-time, and let the simulation run as fast as possible using a **fixed
|
||||
time-step**. Doing so, we are not only able to simulate longer periods in less
|
||||
time, but also gain repeatability by reducing the float-point arithmetic errors
|
||||
that a variable time-step introduces.
|
||||
|
||||
CARLA can be run in both modes.
|
||||
|
||||
|
@ -33,13 +31,67 @@ The simulation runs as fast as possible, simulating the same time increment on
|
|||
each step. To run the simulator this way you need to pass two parameters in the
|
||||
command-line, one to enable the fixed time-step mode, and the second to specify
|
||||
the FPS of the simulation (i.e. the inverse of the time step). For instance, to
|
||||
run the simulation at a fixed time-step of 0.2 seconds we execute
|
||||
run the simulation at a fixed time-step of 0.1 seconds we execute
|
||||
|
||||
$ ./CarlaUE4.sh -benchmark -fps=5
|
||||
$ ./CarlaUE4.sh -benchmark -fps=10
|
||||
|
||||
It is important to note that this mode can only be enabled when launching the
|
||||
simulator since this is actually a feature of Unreal Engine.
|
||||
|
||||
!!! important
|
||||
**Do not decrease the frame-rate below 10 FPS.**<br>
|
||||
Our settings are adjusted to clamp the physics engine to a minimum of 10
|
||||
FPS. If the game tick falls below this, the physics engine will still
|
||||
simulate 10 FPS. In that case, things dependent on the game's delta time are
|
||||
no longer in sync with the physics engine.
|
||||
Ref. [#695](https://github.com/carla-simulator/carla/issues/695)
|
||||
|
||||
|
||||
Changing the map
|
||||
----------------
|
||||
|
||||
The map can be selected by passing the path to the map as the first argument when
|
||||
launching the simulator
|
||||
|
||||
```sh
|
||||
# Linux
|
||||
./CarlaUE4.sh /Game/Carla/Maps/Town01
|
||||
```
|
||||
|
||||
```cmd
|
||||
rem Windows
|
||||
CarlaUE4.exe /Game/Carla/Maps/Town01
|
||||
```
|
||||
|
||||
The path "/Game/" maps to the Content folder of our repository in
|
||||
"Unreal/CarlaUE4/Content/".
|
||||
|
||||
Running off-screen
|
||||
------------------
|
||||
|
||||
In Linux, you can force the simulator to run off-screen by setting the
|
||||
environment variable `DISPLAY` to empty
|
||||
|
||||
```sh
|
||||
# Linux
|
||||
DISPLAY= ./CarlaUE4.sh
|
||||
```
|
||||
|
||||
This launches the simulator without a simulator window; of course, you can still
|
||||
connect to it normally and run the example scripts. Note that with this method,
|
||||
in multi-GPU environments, it's not possible to select the GPU that the
|
||||
simulator will use for rendering. To do so, follow the instructions in
|
||||
[Running without display and selecting GPUs](carla_headless.md).
|
||||
|
||||
Other command-line options
|
||||
--------------------------
|
||||
|
||||
* `-carla-port=N` Listen for client connections at port N, streaming port is set to N+1.
|
||||
* `-quality-level={Low,Epic}` Change graphics quality level, "Low" mode runs significantly faster.
|
||||
* [Full list of UE4 command-line arguments][ue4clilink].
|
||||
|
||||
[ue4clilink]: https://docs.unrealengine.com/en-US/Programming/Basics/CommandLineArguments
|
||||
|
||||
<!-- Disabled for now...
|
||||
|
||||
Synchronous vs Asynchronous mode
|
||||
|
|
|
@ -1,101 +0,0 @@
|
|||
<h1>Connecting a Python client</h1>
|
||||
|
||||

|
||||
|
||||
The power of CARLA simulator resides in its ability to be controlled
|
||||
programmatically with an external client. This client can control most of the
|
||||
aspects of the simulation, from the environment to the duration of each episode; it can
|
||||
retrieve data from different sensors, and send control instructions to the
|
||||
player vehicle.
|
||||
|
||||
Deprecated/PythonClient contents
|
||||
--------------------------------
|
||||
|
||||
In the release package, inside the _"Deprecated/PythonClient"_ folder, we
|
||||
provide the Python API module together with some use examples.
|
||||
|
||||
File or folder | Description
|
||||
------------------------ | ------------
|
||||
carla/ | Contains the "carla" module, the Python API for communicating with the simulator.
|
||||
client_example.py | Basic usage example of the "carla" module.
|
||||
manual_control.py | A GUI client in which the vehicle can be controlled manually.
|
||||
point_cloud_example.py | Usage example for converting depth images into a point cloud in world coordinates.
|
||||
run_benchmark.py | Run the CoRL'17 benchmark with a trivial agent.
|
||||
view_start_positions.py | Show all the possible start positions in a map.
|
||||
|
||||
!!! note
|
||||
If you are building CARLA from source, the Python code is inside the
|
||||
_"Deprecated/PythonClient"_ folder in the CARLA repository. Bear in mind
|
||||
that the `master` branch contains latest fixes and changes that might be
|
||||
incompatible with the release version. Consider using the `stable` branch.
|
||||
|
||||
Install dependencies
|
||||
--------------------
|
||||
|
||||
We recommend using Python 3.5, but all the Python code in the "carla" module and
|
||||
given examples is also compatible with Python 2.7.
|
||||
|
||||
Install the dependencies with "pip" using the requirements file provided
|
||||
|
||||
$ pip install -r Deprecated/PythonClient/requirements.txt
|
||||
|
||||
Running the client example
|
||||
--------------------------
|
||||
|
||||
The "client_example.py" script contains a basic usage example for using the
|
||||
"carla" module. We recommend taking a look at the source-code of this script if
|
||||
you plan to familiarize with the CARLA Python API.
|
||||
|
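Before diving into the full script, here is a stripped-down sketch of the loop it implements; method names follow the bundled 0.8.X examples and should be checked against the actual module:

```py
# Sketch: connect, start an episode, then read sensor data and send a control
# command every frame (here we simply echo the server's autopilot suggestion).
from carla.client import make_carla_client
from carla.settings import CarlaSettings

with make_carla_client('localhost', 2000) as client:
    scene = client.load_settings(CarlaSettings())
    client.start_episode(0)  # index into scene.player_start_spots
    for _ in range(300):
        measurements, sensor_data = client.read_data()
        client.send_control(measurements.player_measurements.autopilot_control)
```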
||||
<h4>Launching the client</h4>
|
||||
|
||||
The script tries to connect to a CARLA simulator instance running in _server
|
||||
mode_. Now we are going to launch the script with "autopilot" enabled
|
||||
|
||||
$ ./client_example.py --autopilot
|
||||
|
||||
The script will now try repeatedly to connect to the server; since we haven't
|
||||
started the simulator yet, it will keep printing an error until we launch the
|
||||
server.
|
||||
|
||||
!!! note
|
||||
By default CARLA uses the ports 2000, 2001, and 2002. Make sure to have
|
||||
these ports available.
|
||||
|
||||
<h4>Launching the simulator in server mode</h4>
|
||||
|
||||
To launch the CARLA simulator in **server mode**, we just need to pass the
|
||||
`-carla-server` argument
|
||||
|
||||
$ ./CarlaUE4.sh -carla-server
|
||||
|
||||
Once the map is loaded, the vehicle should start driving around controlled by
|
||||
the Python script.
|
||||
|
||||
!!! important
|
||||
Before you start running your own experiments, it is important to know the
|
||||
details for running the simulator at **fixed time-step** for achieving
|
||||
maximum speed and repeatability. We will cover this in the next item
|
||||
[Configuring the simulation](configuring_the_simulation.md).
|
||||
|
||||
<h4>Saving images to disk</h4>
|
||||
|
||||
Now you can stop the client script and relaunch it with different options. For
|
||||
instance, now we are going to save to disk the images of the two cameras that the
|
||||
client attaches to the vehicle
|
||||
|
||||
$ ./client_example.py --autopilot --images-to-disk
|
||||
|
||||
And _"_out"_ folder should have appeared in your working directory containing each
|
||||
captured frame as PNG.
|
||||
|
||||

|
||||
|
||||
You can see all the available options in the script's help
|
||||
|
||||
$ ./client_example.py --help
|
||||
|
||||
<h4>Running other examples</h4>
|
||||
|
||||
The other scripts present in the _"Deprecated/PythonClient"_ folder run in a
|
||||
similar fashion. We recommend now launching _"manual_control.py"_ for a GUI
|
||||
interface implemented with PyGame.
|
|
@ -1,21 +1,23 @@
|
|||
# Download
|
||||
|
||||
### Development [[Documentation](https://carla.readthedocs.io/en/latest/)]
|
||||
|
||||
> These are the versions of CARLA that are more frequently updated and have the latest
|
||||
> features. Keep in mind that the API and features in this channel can (and
|
||||
> probably will) change.
|
||||
|
||||
- [CARLA 0.9.1](https://github.com/carla-simulator/carla/releases/tag/0.9.1) -
|
||||
[[Blog post](http://carla.org/2018/11/16/release-0.9.1/)] - _Vehicle navigation, new waypoint-based API, maps creation, and more_
|
||||
- [CARLA 0.9.0](https://github.com/carla-simulator/carla/releases/tag/0.9.0) -
|
||||
[[Blog post](http://carla.org/2018/07/30/release-0.9.0/)] - _New API, multi-client multi-agent support_
|
||||
- [CARLA 0.8.4](https://github.com/carla-simulator/carla/releases/tag/0.8.4) -
|
||||
[[Blog post](http://carla.org/2018/06/18/release-0.8.4/)] - _Fixes And More!_
|
||||
- [CARLA 0.8.3](https://github.com/carla-simulator/carla/releases/tag/0.8.3) -
|
||||
[[Blog post](http://carla.org/2018/06/08/release-0.8.3/)] - _Now with bikes!_
|
||||
|
||||
### Stable [[Documentation](https://carla.readthedocs.io/en/stable/)]
|
||||
|
||||
> The most tested and robust release out there!
|
||||
|
||||
- [CARLA 0.8.2](https://github.com/carla-simulator/carla/releases/tag/0.8.2) -
|
||||
[[Blog post](http://carla.org/2018/04/23/release-0.8.2/)] - _Driving Benchmark_
|
||||
|
||||
### Development
|
||||
|
||||
> These are the versions of CARLA that are more frequently updated and have the latest features.
|
||||
Keep in mind that everything in this channel can (and probably will) change.
|
||||
|
||||
- [CARLA 0.9.1](https://github.com/carla-simulator/carla/releases/tag/0.9.1)
|
||||
- [CARLA 0.9.0](https://github.com/carla-simulator/carla/releases/tag/0.9.0) -
|
||||
[[Blog post](http://carla.org/2018/07/30/release-0.9.0/)] - _New API, multi-client multi-agent support_
|
||||
- [CARLA 0.8.4](https://github.com/carla-simulator/carla/releases/tag/0.8.4) -
|
||||
[[Blog post](http://carla.org/2018/06/18/release-0.8.4/)] - _Fixes And More!_
|
||||
- [CARLA 0.8.3](https://github.com/carla-simulator/carla/releases/tag/0.8.3) -
|
||||
[[Blog post](http://carla.org/2018/06/08/release-0.8.3/)] - _Now with bikes!_
|
||||
|
|
Docs/faq.md
|
@ -44,57 +44,12 @@ Once you open the project in the Unreal Editor, you can hit Play to test CARLA.
|
|||
<!-- ======================================================================= -->
|
||||
<details>
|
||||
<summary><h5 style="display:inline">
|
||||
Setup.sh fails to download content, can I skip this step?
|
||||
Can I connect to the simulator while running within Unreal Editor?
|
||||
</h5></summary>
|
||||
|
||||
It is possible to skip the download step by passing the `-s` argument to the
|
||||
setup script
|
||||
|
||||
$ ./Setup.sh -s
|
||||
|
||||
Bear in mind that if you do so, you are supposed to manually download and
|
||||
extract the content package yourself; check the last output of Setup.sh
|
||||
for instructions, or run
|
||||
|
||||
$ ./Update.sh -s
|
||||
|
||||
</details>
|
||||
|
||||
<!-- ======================================================================= -->
|
||||
<details>
|
||||
<summary><h5 style="display:inline">
|
||||
Can I run the server from within Unreal Editor?
|
||||
</h5></summary>
|
||||
|
||||
Yes, you can connect the Python client to a server running within Unreal Editor
|
||||
as if it was the standalone server.
|
||||
|
||||
Go to **"Unreal/CarlaUE4/Config/CarlaSettings.ini"** (this file should have been
|
||||
created by Setup.sh) and enable networking. If for whatever reason you don't
|
||||
have this file, just create it and add the following
|
||||
|
||||
```ini
|
||||
[CARLA/Server]
|
||||
UseNetworking=true
|
||||
```
|
||||
|
||||
Now when you hit Play the editor will hang until a client connects.
|
||||
|
||||
</details>
|
||||
|
||||
<!-- ======================================================================= -->
|
||||
<details>
|
||||
<summary><h5 style="display:inline">
|
||||
Why Unreal Editor hangs after hitting Play?
|
||||
</h5></summary>
|
||||
|
||||
This is most probably happening because CARLA is starting in server mode. Check
|
||||
your **"Unreal/CarlaUE4/Config/CarlaSettings.ini"** and set
|
||||
|
||||
```ini
|
||||
[CARLA/Server]
|
||||
UseNetworking=false
|
||||
```
|
||||
Yes, you can connect a Python client to a simulator running within Unreal
|
||||
Editor. Press the "Play" button and wait until the scene is loaded, at that
|
||||
point you can connect as you would with the standalone simulator.
|
||||
|
||||
</details>
|
||||
|
||||
|
@ -104,9 +59,9 @@ UseNetworking=false
|
|||
How can I create a binary version of CARLA?
|
||||
</h5></summary>
|
||||
|
||||
In Linux, the recommended way is to use the `Package.sh` script provided. This
|
||||
script makes a packaged version of the project, including the Python client.
|
||||
This is the script we use to make a release of CARLA for Linux.
|
||||
In Linux, the recommended way is to run `make package` in the project folder.
|
||||
This method makes a packaged version of the project, including the Python API
|
||||
modules. This is the method we use to make a release of CARLA for Linux.
|
||||
|
||||
Alternatively, it is possible to compile a binary version of CARLA within Unreal
|
||||
Editor, open the CarlaUE4 project, go to the menu "File -> Package Project", and
|
||||
|
@ -130,11 +85,11 @@ disable the "Use Less CPU When in Background" option.
|
|||
<!-- ======================================================================= -->
|
||||
<details>
|
||||
<summary><h5 style="display:inline">
|
||||
Is it possible to dump images from the CARLA server view?
|
||||
Is it possible to dump images from the CARLA simulator view?
|
||||
</h5></summary>
|
||||
|
||||
Yes, this is an Unreal Engine feature. You can dump the images of the server
|
||||
camera by running CARLA with
|
||||
Yes, this is an Unreal Engine feature. You can dump the images of the spectator
|
||||
camera (simulator view) by running CARLA with
|
||||
|
||||
$ ./CarlaUE4.sh -benchmark -fps=30 -dumpmovie
|
||||
|
||||
|
@ -148,62 +103,12 @@ Images are saved to "CarlaUE4/Saved/Screenshots/LinuxNoEditor".
|
|||
Fatal error: 'version.h' has been modified since the precompiled header.
|
||||
</h5></summary>
|
||||
|
||||
This happens from time to time due to Linux updates. It is possible to force a
|
||||
rebuild of all the project files with
|
||||
This happens from time to time due to Linux updates, and for that we have a
|
||||
special target in our Makefile
|
||||
|
||||
$ cd Unreal/CarlaUE4/
|
||||
$ make CarlaUE4Editor ARGS=-clean
|
||||
$ make hard-clean
|
||||
$ make CarlaUE4Editor
|
||||
|
||||
It takes a long time but fixes the issue. Sometimes a reboot is also needed.
|
||||
|
||||
</details>
|
||||
|
||||
<!-- ======================================================================= -->
|
||||
<details>
|
||||
<summary><h5 style="display:inline">
|
||||
Fatal error: 'carla/carla_server.h' file not found.
|
||||
</h5></summary>
|
||||
|
||||
This indicates that the CarlaServer dependency failed to compile.
|
||||
|
||||
Please follow the instructions at
|
||||
[How to build on Linux](http://carla.readthedocs.io/en/latest/how_to_build_on_linux/).
|
||||
|
||||
Make sure that the Setup script does print _"Success!"_ at the end
|
||||
|
||||
$ ./Setup.sh
|
||||
...
|
||||
...
|
||||
****************
|
||||
*** Success! ***
|
||||
****************
|
||||
|
||||
Then check if CarlaServer compiles without errors by running make
|
||||
|
||||
$ make
|
||||
|
||||
It should end printing something like
|
||||
|
||||
```
|
||||
[1/1] Install the project...
|
||||
-- Install configuration: "Release"
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++abi.so.1
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++abi.so.1.0
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++.so.1
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++.so.1.0
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++.so
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++abi.so
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/lib/libc++abi.a
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/lib/libboost_system.a
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/lib/libprotobuf.a
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/include/carla
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/include/carla/carla_server.h
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/lib/libcarlaserver.a
|
||||
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/bin/test_carlaserver
|
||||
-- Set runtime path of "Unreal/CarlaUE4/Plugins/Carla/CarlaServer/bin/test_carlaserver" to ""
|
||||
```
|
||||
|
||||
If so you can safely run Rebuild.sh.
|
||||
It takes a long time but fixes the issue.
|
||||
|
||||
</details>
|
||||
|
|
|
@ -10,59 +10,99 @@
|
|||
Welcome to CARLA! This tutorial provides the basic steps for getting started
|
||||
using CARLA.
|
||||
|
||||
CARLA consists mainly of two modules, the **CARLA Simulator** and the **CARLA
|
||||
Python API** module. The simulator does most of the heavy work, controls the
|
||||
logic, physics, and rendering of all the actors and sensors in the scene; it
|
||||
requires a machine with a dedicated GPU to run. The CARLA Python API is a module
|
||||
that you can import into your Python scripts; it provides an interface for
|
||||
controlling the simulator and retrieving data. With this Python API you can, for
|
||||
instance, control any vehicle in the simulation, attach sensors to it, and read
|
||||
back the data these sensors generate. Most of the aspects of the simulation are
|
||||
accessible from our Python API, and more will be in future releases.
|
||||
|
||||

|
||||
|
||||
<h2>How to run CARLA</h2>
|
||||
|
||||
First of all, download the latest release from our GitHub page and extract all
|
||||
the contents of the package in a folder of your choice.
|
||||
|
||||
<!-- Latest release button -->
|
||||
<p align="middle"><a href="https://github.com/carla-simulator/carla/blob/master/Docs/download.md" target="_blank" class="btn btn-neutral" title="Go to the latest CARLA release"><span class="icon icon-github"></span> Get the latest release</a></p>
|
||||
|
||||
Download the latest release from our GitHub page and extract all the contents of
|
||||
the package in a folder of your choice.
|
||||
|
||||
The release package contains the following
|
||||
|
||||
* The CARLA simulator.
|
||||
* The "carla" Python API module.
|
||||
* A few Python scripts with usage examples.
|
||||
|
||||
The simulator can be started by running `CarlaUE4.sh` on Linux, or
|
||||
`CarlaUE4.exe` on Windows. Unlike previous versions, the simulator now
|
||||
automatically starts in "server mode". That is, you can already start connecting
|
||||
your Python scripts to control the actors in the simulation.
|
||||
|
||||
CARLA requires two available TCP ports on your computer, by default 2000 and
|
||||
2001. Make sure you don't have a firewall or another application blocking those
|
||||
ports. Alternatively, you can manually change the port CARLA uses by launching
|
||||
the simulator with the command-line argument `-carla-port=N`, the second port
|
||||
will be automatically set to `N+1`.
|
||||
|
||||
!!! tip
|
||||
You can launch the simulator in windowed mode by using the argument
|
||||
`-windowed`, and control the window size with `-ResX=N` and `-ResY=N`.
|
||||
|
||||
#### Running the example script
|
||||
|
||||
Run the example script with
|
||||
The release package contains a precompiled version of the simulator, the Python
|
||||
API module, and some Python scripts with usage examples. In order to run our
|
||||
usage examples, you may need to install the following Python modules
|
||||
|
||||
```sh
|
||||
python example.py
|
||||
pip install --user pygame numpy
|
||||
```
|
||||
|
||||
If everything went well you should start seeing cars appearing in the scene.
|
||||
|
||||
_We strongly recommend taking a look at the example code to understand how it
|
||||
works, and modifying it at will. We'll soon have tutorials for writing your own
|
||||
scripts, but for now the examples are all we have._
|
||||
|
||||
#### Changing the map
|
||||
|
||||
By default, the simulator starts up in our _"Town01"_ map. The second map can be
|
||||
started by passing the path to the map as first argument when launching the
|
||||
simulator
|
||||
Let's start by running the simulator. Launch a terminal window and go to the
|
||||
folder you extracted CARLA to. Start the simulator with the following command
|
||||
|
||||
```sh
|
||||
# On Linux
|
||||
$ ./CarlaUE4.sh /Game/Carla/Maps/Town02
|
||||
# Linux
|
||||
./CarlaUE4.sh
|
||||
```
|
||||
|
||||
```cmd
|
||||
rem On Windows
|
||||
> CarlaUE4.exe /Game/Carla/Maps/Town02
|
||||
rem Windows
|
||||
CarlaUE4.exe
|
||||
```
|
||||
|
||||
This launches a window with a view over the city. This is the "spectator"
|
||||
view; you can fly around the city using the mouse and WASD keys, but you cannot
|
||||
interact with the world in this view. The simulator is now running as a server,
|
||||
waiting for a client app to connect and interact with the world.
|
||||
|
||||
!!! note
|
||||
CARLA requires two available TCP ports on your computer, by default 2000 and
|
||||
2001. Make sure you don't have a firewall or another application blocking
|
||||
those ports. Alternatively, you can manually change the port by launching
|
||||
the simulator with the command-line argument `-carla-port=N`, the second
|
||||
port will be automatically set to `N+1`.
|
||||
|
||||
Let's now add some life to the city. Open a new terminal window and execute
|
||||
|
||||
```sh
|
||||
python spawn_npc.py -n 80
|
||||
```
|
||||
|
||||
With this script we are adding 80 vehicles to the world driving in "autopilot"
|
||||
mode. Back in the simulator window, we should see these vehicles driving around
|
||||
the city. They will keep driving randomly until we stop the script. Let's leave
|
||||
them there for now.
|
||||
|
||||
Now, it's nice and sunny in CARLA, but that's not a very interesting driving
|
||||
condition. One of the cool features of CARLA is that you can control the weather
|
||||
and lighting conditions of the world. We'll now launch a script that dynamically
|
||||
controls the weather and time of day; open yet another terminal window and
|
||||
execute
|
||||
|
||||
```sh
|
||||
python dynamic_weather.py
|
||||
```
|
||||
|
||||
The city is now ready for us to drive; we can finally run
|
||||
|
||||
```sh
|
||||
python manual_control.py
|
||||
```
|
||||
|
||||
This should open a new window with a 3rd person view of a car; you can drive
|
||||
this car with the WASD/arrow keys. Press 'h' to see all the options available.
|
||||
|
||||

|
||||
|
||||
As you have noticed, we can connect as many scripts as we want to control the
|
||||
simulation and gather data. Even someone on a different computer can now jump
|
||||
into your simulation and drive along with you
|
||||
|
||||
```sh
|
||||
python manual_control.py --host=<your-ip-address-here>
|
||||
```
|
||||
|
||||
<br>
|
||||
Now that we have covered the basics, in the next section we'll take a look at some of
|
||||
the details of the Python API to help you write your own scripts.
|
||||
|
|
|
@ -8,8 +8,9 @@ Install the build tools and dependencies
|
|||
```
|
||||
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
|
||||
sudo apt-get update
|
||||
sudo apt-get install build-essential clang-5.0 lld-5.0 g++-7 ninja-build python python-pip python-dev libpng16-dev libtiff5-dev libjpeg-dev tzdata sed curl wget unzip autoconf libtool
|
||||
pip install --user setuptools nose2
|
||||
sudo apt-get install build-essential clang-5.0 lld-5.0 g++-7 cmake ninja-build python python-pip python-dev python3-dev python3-pip libpng16-dev libtiff5-dev libjpeg-dev tzdata sed curl wget unzip autoconf libtool
|
||||
pip2 install --user setuptools nose2
|
||||
pip3 install --user setuptools nose2
|
||||
```
|
||||
|
||||
To avoid compatibility issues between Unreal Engine and the CARLA dependencies,
|
||||
|
@ -23,8 +24,6 @@ sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/lib/llvm-5.0/bi
|
|||
sudo update-alternatives --install /usr/bin/clang clang /usr/lib/llvm-5.0/bin/clang 101
|
||||
```
|
||||
|
||||
[cmakelink]: https://cmake.org/download/
|
||||
|
||||
Build Unreal Engine
|
||||
-------------------
|
||||
|
||||
|
@ -61,7 +60,7 @@ Note that the `master` branch contains the latest fixes and features, for the
|
|||
latest stable code it may be best to switch to the `stable` branch.
|
||||
|
||||
Now you need to download the assets package; to do so, we provide a handy script
|
||||
that downloads and extracts the latest version (note that the package is >10GB,
|
||||
that downloads and extracts the latest version (note that this package is >3GB,
|
||||
this step might take some time depending on your connection)
|
||||
|
||||
```sh
|
||||
|
@ -77,19 +76,21 @@ export UE4_ROOT=~/UnrealEngine_4.19
|
|||
|
||||
You can also add this variable to your `~/.bashrc` or `~/.profile`.
|
||||
|
||||
Now that the environment is set up, you can run make to run different commands
|
||||
Now that the environment is set up, you can use make to run different commands
|
||||
and build the different modules
|
||||
|
||||
```sh
|
||||
make launch # Compiles CARLA and launches Unreal Engine's Editor.
|
||||
make package # Compiles CARLA and creates a packaged version for distribution.
|
||||
make help # Print all available commands.
|
||||
make launch # Compiles the simulator and launches Unreal Engine's Editor.
|
||||
make PythonAPI # Compiles the PythonAPI module necessary for running the Python examples.
|
||||
make package # Compiles everything and creates a packaged version able to run without UE4 editor.
|
||||
make help # Print all available commands.
|
||||
```
|
||||
|
||||
Updating CARLA
|
||||
--------------
|
||||
|
||||
Every new release of CARLA we release a new package with the latest changes in
|
||||
the CARLA assets. To download the latest version and recompile CARLA, run
|
||||
With every new release of CARLA, we also release a new package with the latest changes
|
||||
in the CARLA assets. To download the latest version and recompile CARLA, run
|
||||
|
||||
```sh
|
||||
make clean
|
||||
|
@ -97,3 +98,25 @@ git pull
|
|||
./Update.sh
|
||||
make launch
|
||||
```
|
||||
|
||||
- - -
|
||||
|
||||
<h2>Assets repository (development only)</h2>
|
||||
|
||||
Our 3D assets, models, and maps also have a
|
||||
[publicly available git repository][contentrepolink]. We regularly push the latest
|
||||
updates to this repository. However, using this version of the content is only
|
||||
recommended for developers, as we often have work-in-progress maps and models.
|
||||
|
||||
Handling this repository requires [git-lfs][gitlfslink] installed on your
|
||||
machine. Clone this repository to "Unreal/CarlaUE4/Content/Carla"
|
||||
|
||||
```sh
|
||||
git lfs clone https://bitbucket.org/carla-simulator/carla-content Unreal/CarlaUE4/Content/Carla
|
||||
```
|
||||
|
||||
It is recommended to clone with "git lfs clone" as this is significantly faster
|
||||
in older versions of git.
|
||||
|
||||
[contentrepolink]: https://bitbucket.org/carla-simulator/carla-content
|
||||
[gitlfslink]: https://git-lfs.github.com/
|
||||
|
|
|
@ -8,8 +8,7 @@
|
|||
<h3>Quick start</h3>
|
||||
|
||||
* [Getting started](getting_started.md)
|
||||
<!-- * [Running the simulator](running_simulator_standalone.md) -->
|
||||
<!-- * [Connecting a Python client](connecting_the_client.md) -->
|
||||
* [Python API tutorial](python_api_tutorial.md)
|
||||
* [Configuring the simulation](configuring_the_simulation.md)
|
||||
<!-- * [Measurements](measurements.md) -->
|
||||
* [Cameras and sensors](cameras_and_sensors.md)
|
||||
|
@ -20,21 +19,11 @@
|
|||
* [How to build on Linux](how_to_build_on_linux.md)
|
||||
* [How to build on Windows](how_to_build_on_windows.md)
|
||||
|
||||
<h3> Driving Benchmark </h3>
|
||||
|
||||
* [Quick Start](benchmark_start.md)
|
||||
* [General Structure](benchmark_structure.md)
|
||||
* [Creating Your Benchmark](benchmark_creating.md)
|
||||
* [Computed Performance Metrics](benchmark_metrics.md)
|
||||
|
||||
<h3>Advanced topics</h3>
|
||||
|
||||
* [CARLA settings](carla_settings.md)
|
||||
* [Python API](python_api.md)
|
||||
<!-- * [Simulator keyboard input](simulator_keyboard_input.md) -->
|
||||
* [Python API reference](python_api.md)
|
||||
* [Running without display and selecting GPUs](carla_headless.md)
|
||||
* [Running in a Docker](carla_docker.md)
|
||||
|
||||
* [How to link Epic's Automotive Materials](epic_automotive_materials.md)
|
||||
|
||||
<h3>Contributing</h3>
|
||||
|
@ -46,8 +35,6 @@
|
|||
<h3>Development</h3>
|
||||
|
||||
* [Map customization](map_customization.md)
|
||||
<!-- * [CARLA design](carla_design.md) -->
|
||||
<!-- * [CarlaServer documentation](carla_server.md) -->
|
||||
* [Build system](build_system.md)
|
||||
|
||||
<h3>Art guidelines</h3>
|
||||
|
|
|
@ -157,6 +157,5 @@ Postprocess Volume (Boundless) And Light Source to exist in the world.
|
|||
- CameraPostProcessParameters.AutoExposureBias: Darkens or brightens the final image towards a defined bias.
|
||||
|
||||
You can have as many different configurations saved in the project as you want
|
||||
and choose the configuration to apply while on the build, through the
|
||||
[settings file](carla_settings.md); or in the editor while building the level or
|
||||
testing.
|
||||
and choose which configuration to apply at runtime through the settings
|
||||
file, or in the editor while building or testing the level.
|
||||
|
|
|
@ -78,6 +78,7 @@
|
|||
|
||||
## `carla.ActorList`
|
||||
|
||||
- `find(id)`
|
||||
- `filter(wildcard_pattern)`
|
||||
- `__getitem__(pos)`
|
||||
- `__len__()`
|
||||
|
|
|
@ -0,0 +1,356 @@
|
|||
<h1>Python API tutorial</h1>
|
||||
|
||||
In this tutorial we introduce the basic concepts of the CARLA Python API, as
|
||||
well as an overview of its most important functionalities. The reference of all
|
||||
classes and methods available can be found at
|
||||
[Python API reference](python_api.md).
|
||||
|
||||
!!! note
|
||||
**This document applies only to the latest development version**. <br>
|
||||
The API has been significantly changed in the latest versions starting at
|
||||
0.9.0. We commonly refer to the new API as **0.9.X API** as opposed to
|
||||
the previous **0.8.X API**.
|
||||
|
||||
First of all, we need to introduce a few core concepts:
|
||||
|
||||
- **Actor:** Actor is anything that plays a role in the simulation and can be
|
||||
moved around; examples of actors are vehicles, pedestrians, and sensors.
|
||||
- **Blueprint:** Before spawning an actor you need to specify its attributes,
|
||||
and that's what blueprints are for. We provide a blueprint library with
|
||||
the definitions of all the actors available.
|
||||
- **World:** The world represents the currently loaded map and contains the
|
||||
functions for converting a blueprint into a living actor, among others. It
|
||||
also provides access to the road map and functions to change the weather
|
||||
conditions.
|
||||
|
||||
#### Connecting and retrieving the world
|
||||
|
||||
To connect to a simulator we need to create a "Client" object; to do so, we need
|
||||
to provide the IP address and port of a running instance of the simulator
|
||||
|
||||
```py
|
||||
import carla

client = carla.Client('localhost', 2000)
|
||||
```
|
||||
|
||||
The first recommended thing to do right after creating a client instance is
|
||||
setting its time-out. This time-out sets a time limit on all networking
|
||||
operations; if the time-out is not set, networking operations may block forever
|
||||
|
||||
```py
|
||||
client.set_timeout(10.0) # seconds
|
||||
```
|
||||
|
||||
Once we have the client configured we can directly retrieve the world
|
||||
|
||||
```py
|
||||
world = client.get_world()
|
||||
```
|
||||
|
||||
Typically we won't need the client object anymore; all the objects created by
|
||||
the world will connect to the IP and port provided if they need to. These
|
||||
operations are usually done in the background and are transparent to the user.
|
||||
|
||||
#### Blueprints
|
||||
|
||||
A blueprint contains the information necessary to create a new actor. For
|
||||
instance, if the blueprint defines a car, we can change its color here, if it
|
||||
defines a lidar, we can decide here how many channels the lidar will have. A
|
||||
blueprint also has an ID that uniquely identifies it and all the actor
|
||||
instances created with it. Examples of IDs are "vehicle.nissan.patrol" or
|
||||
"sensor.camera.depth".
|
||||
|
||||
The list of all available blueprints is kept in the **blueprint library**
|
||||
|
||||
```py
|
||||
blueprint_library = world.get_blueprint_library()
|
||||
```
|
||||
|
||||
The library allows us to find specific blueprints by ID, filter them with
|
||||
wildcards, or just choose one at random
|
||||
|
||||
```py
|
||||
# Find specific blueprint.
|
||||
collision_sensor_bp = blueprint_library.find('sensor.other.collision')
|
||||
# Choose a vehicle blueprint at random.
|
||||
vehicle_bp = random.choice(blueprint_library.filter('vehicle.bmw.*'))
|
||||
```
|
||||
|
||||
Some of the attributes of the blueprints can be modified while others are
|
||||
just read-only. For instance, we cannot modify the number of wheels of a vehicle
|
||||
but we can change its color
|
||||
|
||||
```py
|
||||
vehicles = blueprint_library.filter('vehicle.*')
|
||||
bikes = [x for x in vehicles if int(x.get_attribute('number_of_wheels')) == 2]
|
||||
for bike in bikes:
|
||||
bike.set_attribute('color', '255,0,0')
|
||||
```
|
||||
|
||||
Modifiable attributes also come with a list of recommended values
|
||||
|
||||
```py
|
||||
for attr in blueprint:
|
||||
if attr.is_modifiable:
|
||||
blueprint.set_attribute(attr.id, random.choice(attr.recommended_values))
|
||||
```
|
||||
|
||||
The blueprint system has been designed to make it easy for contributors to add their
|
||||
custom actors directly in Unreal Editor; we'll add a tutorial on this soon, stay tuned!
|
||||
|
||||
#### Spawning actors
|
||||
|
||||
Once we have the blueprint set up, spawning an actor is pretty straightforward
|
||||
|
||||
```py
|
||||
transform = Transform(Location(x=230, y=195, z=40), Rotation(yaw=180))
|
||||
actor = world.spawn_actor(blueprint, transform)
|
||||
```
|
||||
|
||||
The spawn actor function comes in two flavours, `spawn_actor` and
|
||||
`try_spawn_actor`. The former will raise an exception if the actor could not be
|
||||
spawned, while the latter will return `None` instead. The most typical cause of
|
||||
failure is a collision at the spawn point, meaning the actor does not fit at the spot
|
||||
we chose; probably another vehicle is in that spot or we tried to spawn into a
|
||||
static object.
|
||||
|
||||
To ease the task of finding a spawn location, each map provides a list of
|
||||
recommended transforms
|
||||
|
||||
```py
|
||||
spawn_points = world.get_map().get_spawn_points()
|
||||
```
|
||||
|
||||
We'll add more on the map object later in this tutorial.
|
||||
|
||||
Finally, the spawn functions have an optional argument that controls whether the
|
||||
actor is going to be attached to another actor. This is especially useful for
|
||||
sensors. In the next example, the camera remains rigidly attached to our vehicle
|
||||
during the rest of the simulation
|
||||
|
||||
```py
|
||||
camera = world.spawn_actor(camera_bp, relative_transform, attach_to=my_vehicle)
|
||||
```
|
||||
|
||||
Note that in this case, the transform provided is treated relative to the parent
|
||||
actor.
|
||||
|
||||
#### Handling actors
|
||||
|
||||
Once we have an actor alive in the world, we can move this actor around and
|
||||
check its dynamic properties
|
||||
|
||||
```py
|
||||
location = actor.get_location()
|
||||
location.z += 10.0
|
||||
actor.set_location(location)
|
||||
print(actor.get_acceleration())
|
||||
print(actor.get_velocity())
|
||||
```
|
||||
|
||||
We can even freeze an actor by disabling its physics simulation
|
||||
|
||||
```py
|
||||
actor.set_simulate_physics(False)
|
||||
```
|
||||
|
||||
And once we get tired of an actor we can remove it from the simulation with
|
||||
|
||||
```py
|
||||
actor.destroy()
|
||||
```
|
||||
|
||||
Note that actors are not cleaned up automatically when the Python script
|
||||
finishes; if we want to get rid of them, we need to destroy them explicitly.
|
||||
|
||||
!!! important
|
||||
**Known issue:** To improve performance, most of the methods send requests
|
||||
to the simulator asynchronously. The simulator queues each of these
|
||||
requests, but only has a limited amount of time each update to parse them.
|
||||
If we flood the simulator by calling "set" methods too often, e.g.
|
||||
set_transform, the requests will accumulate a significant lag.
|
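One simple client-side mitigation is to throttle how often such "set" methods are called. The helper below is a generic sketch, not part of the CARLA API:

```py
# Generic helper (not part of the CARLA API): limit how often "set" requests
# are sent so the simulator's request queue doesn't accumulate lag.
import time

class RateLimiter(object):
    def __init__(self, calls_per_second):
        self._min_interval = 1.0 / calls_per_second
        self._last_call = 0.0

    def ready(self):
        now = time.time()
        if now - self._last_call >= self._min_interval:
            self._last_call = now
            return True
        return False

limiter = RateLimiter(calls_per_second=20)
# Inside the main loop:
#     if limiter.ready():
#         actor.set_transform(new_transform)
```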
||||
|
||||
#### Vehicles
|
||||
|
||||
Vehicles are a special type of actor that provide a few extra methods. Apart
|
||||
from the handling methods common to all actors, vehicles can also be controlled
|
||||
by providing throttle, brake, and steer values
|
||||
|
||||
```py
|
||||
vehicle.apply_control(carla.VehicleControl(throttle=1.0, steer=-1.0))
|
||||
```
|
||||
|
||||
These are all the parameters of the VehicleControl object and their default
|
||||
values
|
||||
|
||||
```py
|
||||
carla.VehicleControl(
|
||||
throttle = 0.0
|
||||
steer = 0.0
|
||||
brake = 0.0
|
||||
hand_brake = False
|
||||
reverse = False
|
||||
manual_gear_shift = False
|
||||
gear = 0)
|
||||
```
|
||||
|
||||
Our vehicles also come with a handy autopilot
|
||||
|
||||
```py
|
||||
vehicle.set_autopilot(True)
|
||||
```
|
||||
|
||||
To clear up a common misconception: this autopilot
|
||||
control is purely hard-coded into the simulator and is not based at all on
|
||||
machine learning techniques.
|
||||
|
||||
Finally, vehicles also have a bounding box that encapsulates them
|
||||
|
||||
```py
|
||||
box = vehicle.bounding_box
|
||||
print(box.location) # Location relative to the vehicle.
|
||||
print(box.extent) # XYZ half-box extents in meters.
|
||||
```
|
||||
|
||||
#### Sensors
|
||||
|
||||
Sensors are actors that produce a stream of data. Sensors are such a key
|
||||
component of CARLA that they deserve their own documentation page, so here we'll
|
||||
limit ourselves to showing a small example of how sensors work
|
||||
|
||||
```py
|
||||
camera_bp = blueprint_library.find('sensor.camera.rgb')
|
||||
camera = world.spawn_actor(camera_bp, relative_transform, attach_to=my_vehicle)
|
||||
camera.listen(lambda image: image.save_to_disk('output/%06d.png' % image.frame_number))
|
||||
```
|
||||
|
||||
In this example we have attached a camera to a vehicle, and told the camera to
|
||||
save to disk each of the images that are going to be generated.
|
||||
|
||||
The full list of sensors and their measurements is explained in
|
||||
[Cameras and sensors](cameras_and_sensors.md).
|
||||
|
||||
#### Other actors
|
||||
|
||||
Apart from vehicles and sensors, there are a few other actors in the world. The
|
||||
full list can be requested from the world with
|
||||
|
||||
```py
|
||||
actor_list = world.get_actors()
|
||||
```
|
||||
|
||||
The actor list object returned has functions for finding, filtering, and
|
||||
iterating actors
|
||||
|
||||
```py
|
||||
# Find an actor by id.
|
||||
actor = actor_list.find(id)
|
||||
# Print the location of all the speed limit signs in the world.
|
||||
for speed_sign in actor_list.filter('traffic.speed_limit.*'):
|
||||
print(speed_sign.get_location())
|
||||
```
|
||||
|
||||
Among the actors you can find in this list are
|
||||
|
||||
* **Traffic lights** with a `state` property to check the light's current state (see the sketch after this list).
|
||||
* **Speed limit signs** with the speed codified in their type_id.
|
||||
* The **Spectator** actor that can be used to move the view of the simulator window.
|
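For instance, the state of every traffic light could be inspected as in the sketch below; the `carla.TrafficLightState` enum name is an assumption here and should be checked against the Python API reference:

```py
# Sketch: check which traffic lights are currently red. The enum name
# carla.TrafficLightState is an assumption; see python_api.md for your version.
for traffic_light in actor_list.filter('traffic.traffic_light*'):
    if traffic_light.state == carla.TrafficLightState.Red:
        print('Red light at %s' % traffic_light.get_location())
```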
||||
|
||||
#### Changing the weather
|
||||
|
||||
The lighting and weather conditions can be requested and changed with the world
|
||||
object
|
||||
|
||||
```py
|
||||
weather = carla.WeatherParameters(
|
||||
cloudyness=80.0,
|
||||
precipitation=30.0,
|
||||
sun_altitude_angle=70.0)
|
||||
|
||||
world.set_weather(weather)
|
||||
|
||||
print(world.get_weather())
|
||||
```
|
||||
|
||||
For convenience, we also provide a list of predefined weather presets that can
|
||||
be directly applied to the world
|
||||
|
||||
```py
|
||||
world.set_weather(carla.WeatherParameters.WetCloudySunset)
|
||||
```
|
||||
|
||||
The full list of presets can be found in the
|
||||
[WeatherParameters reference](python_api.md#carlaweatherparameters).
|
||||
|
||||
#### Map and waypoints
|
||||
|
||||
One of the key features of CARLA is that our roads are fully annotated. All our
|
||||
maps come accompanied by [OpenDrive](http://www.opendrive.org/) files that
|
||||
define the road layout. Furthermore, we provide a higher-level API for querying
|
||||
and navigating this information.
|
||||
|
||||
These objects were a recent addition to our API and are still in heavy
|
||||
development; we hope to make them much more powerful soon.
|
||||
|
||||
Let's start by getting the map of the current world
|
||||
|
||||
```py
|
||||
map = world.get_map()
|
||||
```
|
||||
|
||||
For starters, the map has a `name` attribute that matches the name of the
|
||||
currently loaded city, e.g. Town01. And, as we've seen before, we can also ask
|
||||
the map to provide a list of recommended locations for spawning vehicles,
|
||||
`map.get_spawn_points()`.
|
||||
|
||||
However, the real power of this map API becomes apparent when we introduce
|
||||
waypoints. We can tell the map to give us a waypoint on the road closest to our
|
||||
vehicle
|
||||
|
||||
```py
|
||||
waypoint = map.get_waypoint(vehicle.get_location())
|
||||
```
|
||||
|
||||
This waypoint's `transform` is located on a drivable lane, and it's oriented
|
||||
according to the road direction at that point.
|
||||
|
||||
Waypoints also have a function to query the "next" waypoints; this method returns
|
||||
a list of waypoints at a certain distance that can be accessed from this
|
||||
waypoint following the traffic rules. In other words, if a vehicle is placed at
|
||||
this waypoint, it returns the list of possible locations that this vehicle can drive
|
||||
to. Let's see a practical example
|
||||
|
||||
```py
|
||||
# Retrieve the closest waypoint.
|
||||
waypoint = map.get_waypoint(vehicle.get_location())
|
||||
|
||||
# Disable physics, in this example we're just teleporting the vehicle.
|
||||
vehicle.set_simulate_physics(False)
|
||||
|
||||
while True:
|
||||
# Find next waypoint 2 meters ahead.
|
||||
waypoint = random.choice(waypoint.next(2.0))
|
||||
# Teleport the vehicle.
|
||||
vehicle.set_transform(waypoint.transform)
|
||||
```
|
||||
|
||||
The map object also provides methods for generating waypoints in bulk all over
|
||||
the map at an approximate distance between them
|
||||
|
||||
```py
|
||||
waypoint_list = map.generate_waypoints(2.0)
|
||||
```
|
||||
|
||||
For routing purposes, it is also possible to retrieve a topology graph of the
|
||||
roads
|
||||
|
||||
```py
|
||||
waypoint_tuple_list = map.get_topology()
|
||||
```
|
||||
|
||||
This method returns a list of pairs (tuples) of waypoints; for each pair, the
|
||||
first element connects with the second one. Only the minimal set of waypoints
|
||||
needed to define the topology is generated by this method: one waypoint per lane
|
||||
for each road segment in the map.
|
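As a small example of what can be done with this output, the sketch below prints the approximate straight-line length of every segment in the topology:

```py
# Sketch: measure each topology segment as the straight-line distance between
# the pair of waypoints that defines it.
for start_wp, end_wp in waypoint_tuple_list:
    start = start_wp.transform.location
    end = end_wp.transform.location
    print('segment of ~%.1f m between %s and %s' % (start.distance(end), start, end))
```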
||||
|
||||
Finally, to allow access to the full road definition, the map object can be
|
||||
converted to OpenDrive format and saved to disk as such.
|
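A possible sketch of that last step follows; the `to_opendrive()` method name and the output filename are assumptions to be verified against the API reference:

```py
# Sketch: dump the road definition as an OpenDrive (.xodr) document.
# The method name is an assumption; check python_api.md for your version.
xodr_string = map.to_opendrive()  # full OpenDrive document as a string
with open('town01.xodr', 'w') as f:
    f.write(xodr_string)
```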
|
@ -3,7 +3,43 @@ CARLA Simulator
|
|||
|
||||
Thanks for downloading CARLA!
|
||||
|
||||
Execute "CarlaUE4.sh" to launch CARLA.
|
||||
http://carla.org/
|
||||
|
||||
How to run CARLA
|
||||
----------------
|
||||
|
||||
Launch a terminal in this folder and execute the simulator by running
|
||||
|
||||
$ ./CarlaUE4.sh
|
||||
|
||||
This will launch a window with a view over the city. This is the "spectator"
|
||||
view; you can fly around the city using the mouse and WASD keys, but you cannot
|
||||
interact with the world in this view. The simulator is now running as a server,
|
||||
waiting for a client app to connect and interact with the world.
|
||||
|
||||
Let's start by adding some life to the city. Open a new terminal window and
|
||||
execute
|
||||
|
||||
$ ./spawn_npc.py -n 80
|
||||
|
||||
This adds 80 vehicles to the world driving in "autopilot" mode. Back in the
|
||||
simulator window we should see these vehicles driving around the city. They will
|
||||
keep driving randomly until we stop the script. Let's leave them there for now.
|
||||
|
||||
Now, it's nice and sunny in CARLA, but that's not a very interesting driving
|
||||
condition. One of the cool features of CARLA is that you can control the weather
|
||||
and lighting conditions of the world. We'll now launch a script that dynamically
|
||||
controls the weather and time of day; open yet another terminal window and
|
||||
execute
|
||||
|
||||
$ ./dynamic_weather.py
|
||||
|
||||
The city is now ready for us to drive; we can finally run
|
||||
|
||||
$ ./manual_control.py
|
||||
|
||||
This should open a new window with a 3rd person view of a car; you can drive
|
||||
this car with the WASD/arrow keys. Press 'h' to see all the options available.
|
||||
|
||||
For more details and running options please refer to our online documentation
|
||||
|
||||
|
|
|
@ -1,59 +0,0 @@
|
|||
<h1>Running the CARLA simulator in standalone mode</h1>
|
||||
|
||||
Inside the downloaded package you should find a shell script called
|
||||
`CarlaUE4.sh`; this script launches the CARLA simulator.
|
||||
|
||||
!!! tip
|
||||
Although this tutorial focuses on Linux, all the commands work as well in
|
||||
Windows. Just replace all the occurrences of `./CarlaUE4.sh` by
|
||||
`CarlaUE4.exe`.
|
||||
|
||||
Run this script without arguments to launch CARLA simulator in standalone mode
|
||||
with default settings
|
||||
|
||||
$ ./CarlaUE4.sh
|
||||
|
||||
This launches the simulator window in full-screen, and you should now be able
|
||||
to drive around the city using the WASD keys, with Q toggling reverse
|
||||
gear. See ["Keyboard input"](simulator_keyboard_input.md) for the complete list
|
||||
of key-bindings.
|
||||
|
||||

|
||||
|
||||
We currently have two scenarios available, _Town01_ and _Town02_. You may now
|
||||
want to take a look at _Town02_; you can do so by running the script with
|
||||
|
||||
$ ./CarlaUE4.sh /Game/Maps/Town02
|
||||
|
||||
All the parameters like number of other vehicles, pedestrians, and weather
|
||||
conditions can be controlled when launching the simulation. These parameters are
|
||||
set in a _"CarlaSettings.ini"_ file that is passed to the simulator either as a
|
||||
command-line parameter or when connecting with a Python client. This file
|
||||
controls all the variables of the CARLA simulator, from server settings to
|
||||
attaching sensors to the vehicle; we will cover all these later, for now we will
|
||||
just change some visible aspects in the standalone mode. For a detailed
|
||||
description of how the settings work, see ["CARLA Settings"](carla_settings.md)
|
||||
section.
|
||||
|
||||
Open the file _"Example.CarlaSettings.ini"_ in a text editor, search for the
|
||||
following keys and modify their values
|
||||
|
||||
```ini
|
||||
NumberOfVehicles=60
|
||||
NumberOfPedestrians=60
|
||||
WeatherId=3
|
||||
```
|
||||
|
||||
Now run the simulator passing the settings file as argument with
|
||||
|
||||
$ ./CarlaUE4.sh -carla-settings=Example.CarlaSettings.ini
|
||||
|
||||
Now the simulation should have more vehicles and pedestrians, and a
|
||||
different weather preset.
|
||||
|
||||
!!! tip
|
||||
You can launch the simulator in windowed mode by using the argument
|
||||
`-windowed`, and control the window size with `-ResX=N` and `-ResY=N`.
|
||||
|
||||
In the next item of this tutorial we show how to control the simulator with a
|
||||
Python client.
|
|
@ -1,26 +0,0 @@
|
|||
<h1>CARLA Simulator keyboard input</h1>
|
||||
|
||||
The following key bindings are available during gameplay in the simulator
|
||||
window. Note that vehicle controls are only available when running in
|
||||
_standalone mode_.
|
||||
|
||||
Key | Action
|
||||
---------------:|:----------------
|
||||
`W` | Throttle
|
||||
`S` | Brake
|
||||
`A` `D` | Steer
|
||||
`Q` | Toggle reverse gear
|
||||
`Space` | Hand-brake
|
||||
`P` | Toggle autopilot
|
||||
`←` `→` `↑` `↓` | Move camera
|
||||
`PgUp` `PgDn` | Zoom in and out
|
||||
`Mouse Wheel` | Zoom in and out
|
||||
`Tab` | Toggle on-board camera
|
||||
`F11` | Toggle fullscreen
|
||||
`R` | Restart level
|
||||
`G` | Toggle HUD
|
||||
`C` | Change weather/lighting
|
||||
`Enter` | Jump
|
||||
`F` | Use the force
|
||||
`T` | Reset vehicle rotation
|
||||
`Alt+F4` | Quit
|
|
@ -20,12 +20,9 @@ namespace client {
    : _episode(std::move(episode)),
      _actors(std::make_move_iterator(actors.begin()), std::make_move_iterator(actors.end())) {}

  SharedPtr<Actor> ActorList::GetActor(actor_id_type const actor_id) const
  {
    for (auto &actor: _actors)
    {
      if (actor_id == actor.GetId())
      {
  SharedPtr<Actor> ActorList::Find(actor_id_type const actor_id) const {
    for (auto &actor : _actors) {
      if (actor_id == actor.GetId()) {
        return actor.Get(_episode, shared_from_this());
      }
    }

@ -27,6 +27,9 @@ namespace client {

  public:

    /// Find an actor by id.
    SharedPtr<Actor> Find(actor_id_type actor_id) const;

    /// Filters a list of Actor with type id matching @a wildcard_pattern.
    ActorList Filter(const std::string &wildcard_pattern) const;

@ -54,8 +57,6 @@ namespace client {
      return _actors.size();
    }

    SharedPtr<Actor> GetActor(actor_id_type const actor_id) const;

  private:

    friend class World;

@ -16,10 +16,10 @@ namespace detail {
  void ActorVariant::MakeActor(EpisodeProxy episode, SharedPtr<const client::ActorList> actor_list) const {
    auto const parent_id = GetParentId();
    SharedPtr<client::Actor> parent = nullptr;
    if ( (actor_list != nullptr) && (parent_id != 0) )
    {
      // in case we have an actor list as context, we are able to actually create the parent actor
      parent = actor_list->GetActor(parent_id);
    if ((actor_list != nullptr) && (parent_id != 0)) {
      // In case we have an actor list as context, we are able to actually
      // create the parent actor.
      parent = actor_list->Find(parent_id);
    }
    _value = detail::ActorFactory::MakeActor(
        episode,

@ -16,6 +16,7 @@ namespace detail {
#if __cplusplus >= 201703L // C++17
  inline
#endif
  // Please update documentation if you change this.
  uint8_t CITYSCAPES_PALETTE_MAP[][3u] = {
      { 0u, 0u, 0u},    // unlabeled = 0u,
      { 70u, 70u, 70u}, // building = 1u,

@ -114,7 +114,7 @@ except ImportError:

# ==============================================================================
# -- World ---------------------------------------------------------------------
# -- Global functions ----------------------------------------------------------
# ==============================================================================


@ -130,36 +130,47 @@ def get_actor_display_name(actor, truncate=250):
    return (name[:truncate-1] + u'\u2026') if len(name) > truncate else name


# ==============================================================================
# -- World ---------------------------------------------------------------------
# ==============================================================================


class World(object):
    def __init__(self, carla_world, hud):
        self.world = carla_world
        self.hud = hud
        self.world.on_tick(hud.on_world_tick)
        blueprint = self._get_random_blueprint()
        spawn_points = self.world.get_map().get_spawn_points()
        spawn_point = random.choice(spawn_points) if spawn_points else carla.Transform()
        blueprint.set_attribute('role_name', 'hero')
        self.vehicle = self.world.spawn_actor(blueprint, spawn_point)
        self.collision_sensor = CollisionSensor(self.vehicle, self.hud)
        self.lane_invasion_sensor = LaneInvasionSensor(self.vehicle, self.hud)
        self.camera_manager = CameraManager(self.vehicle, self.hud)
        self.camera_manager.set_sensor(0, notify=False)
        self.controller = None
        self.vehicle = None
        self.collision_sensor = None
        self.lane_invasion_sensor = None
        self.camera_manager = None
        self._weather_presets = find_weather_presets()
        self._weather_index = 0
        self.restart()
        self.world.on_tick(hud.on_world_tick)

    def restart(self):
        cam_index = self.camera_manager._index
        cam_pos_index = self.camera_manager._transform_index
        start_pose = self.vehicle.get_transform()
        start_pose.location.z += 2.0
        start_pose.rotation.roll = 0.0
        start_pose.rotation.pitch = 0.0
        blueprint = self._get_random_blueprint()
        # Keep same camera config if the camera manager exists.
        cam_index = self.camera_manager._index if self.camera_manager is not None else 0
        cam_pos_index = self.camera_manager._transform_index if self.camera_manager is not None else 0
        # Get a random vehicle blueprint.
        blueprint = random.choice(self.world.get_blueprint_library().filter('patrol'))
        blueprint.set_attribute('role_name', 'hero')

        self.destroy()
        self.vehicle = self.world.spawn_actor(blueprint, start_pose)
        if blueprint.has_attribute('color'):
            color = random.choice(blueprint.get_attribute('color').recommended_values)
            blueprint.set_attribute('color', color)
        # Spawn the vehicle.
        if self.vehicle is not None:
            spawn_point = self.vehicle.get_transform()
            spawn_point.location.z += 2.0
            spawn_point.rotation.roll = 0.0
            spawn_point.rotation.pitch = 0.0
            self.destroy()
            self.vehicle = self.world.try_spawn_actor(blueprint, spawn_point)
        while self.vehicle is None:
            spawn_points = self.world.get_map().get_spawn_points()
            spawn_point = random.choice(spawn_points) if spawn_points else carla.Transform()
            self.vehicle = self.world.try_spawn_actor(blueprint, spawn_point)
        # Set up the sensors.
        self.collision_sensor = CollisionSensor(self.vehicle, self.hud)
        self.lane_invasion_sensor = LaneInvasionSensor(self.vehicle, self.hud)
        self.camera_manager = CameraManager(self.vehicle, self.hud)

@ -192,13 +203,6 @@ class World(object):
            if actor is not None:
                actor.destroy()

    def _get_random_blueprint(self):
        bp = random.choice(self.world.get_blueprint_library().filter('vehicle'))
        if bp.has_attribute('color'):
            color = random.choice(bp.get_attribute('color').recommended_values)
            bp.set_attribute('color', color)
        return bp


# ==============================================================================
# -- KeyboardControl -----------------------------------------------------------

@ -541,8 +545,8 @@ class CameraManager(object):
        self._hud = hud
        self._recording = False
        self._camera_transforms = [
            carla.Transform(carla.Location(x=1.6, z=1.7)),
            carla.Transform(carla.Location(x=-5.5, z=2.8), carla.Rotation(pitch=-15))]
            carla.Transform(carla.Location(x=-5.5, z=2.8), carla.Rotation(pitch=-15)),
            carla.Transform(carla.Location(x=1.6, z=1.7))]
        self._transform_index = 1
        self._sensors = [
            ['sensor.camera.rgb', cc.Raw, 'Camera RGB'],

@ -715,8 +719,6 @@ def main():

    except KeyboardInterrupt:
        print('\nCancelled by user. Bye!')
    except Exception as error:
        logging.exception(error)


if __name__ == '__main__':

@ -148,8 +148,8 @@ void export_blueprint() {
  class_<cc::BlueprintLibrary, boost::noncopyable, boost::shared_ptr<cc::BlueprintLibrary>>("BlueprintLibrary", no_init)
    .def("find", +[](const cc::BlueprintLibrary &self, const std::string &key) -> cc::ActorBlueprint {
      return self.at(key);
    })
    .def("filter", &cc::BlueprintLibrary::Filter)
    }, (arg("id")))
    .def("filter", &cc::BlueprintLibrary::Filter, (arg("wildcard_pattern")))
    .def("__getitem__", +[](const cc::BlueprintLibrary &self, size_t pos) -> cc::ActorBlueprint {
      return self.at(pos);
    })
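
From Python, the updated blueprint-library bindings could be exercised roughly
as below. This is only a sketch; the host/port and the blueprint id are
assumptions chosen for illustration:

```python
import carla

client = carla.Client('localhost', 2000)              # assumed host/port
library = client.get_world().get_blueprint_library()

vehicles = library.filter('vehicle.*')                 # wildcard_pattern argument
blueprint = library.find('vehicle.ford.mustang')       # hypothetical id; raises if unknown
```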
@ -64,7 +64,8 @@ void export_world() {
  ;

  class_<cc::ActorList, boost::shared_ptr<cc::ActorList>>("ActorList", no_init)
    .def("filter", &cc::ActorList::Filter)
    .def("find", &cc::ActorList::Find, (arg("id")))
    .def("filter", &cc::ActorList::Filter, (arg("wildcard_pattern")))
    .def("__getitem__", &cc::ActorList::at)
    .def("__len__", &cc::ActorList::size)
    .def("__iter__", range(&cc::ActorList::begin, &cc::ActorList::end))
@ -0,0 +1,3 @@
[/Script/Engine.GameUserSettings]
FullscreenMode=2
Version=5

@ -306,7 +306,7 @@ void UActorBlueprintFunctionLibrary::MakeLidarDefinition(
  FActorVariation Range;
  Range.Id = TEXT("range");
  Range.Type = EActorAttributeType::Float;
  Range.RecommendedValues = { TEXT("5000.0") };
  Range.RecommendedValues = { TEXT("1000.0") };
  // Points per second.
  FActorVariation PointsPerSecond;
  PointsPerSecond.Id = TEXT("points_per_second");
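
For reference, a client could override these recommended values when
configuring the sensor blueprint; a minimal sketch, where the lidar blueprint
id, the host/port, and the concrete values are assumptions for illustration:

```python
import carla

client = carla.Client('localhost', 2000)              # assumed host/port
library = client.get_world().get_blueprint_library()

lidar_bp = library.find('sensor.lidar.ray_cast')       # assumed blueprint id
lidar_bp.set_attribute('range', '1000.0')              # matches the new recommended value
lidar_bp.set_attribute('points_per_second', '56000')   # attribute id defined above
```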

mkdocs.yml

@ -7,24 +7,16 @@ pages:
  - Home: 'index.md'
  - Quick start:
    - 'Getting started': 'getting_started.md'
    # - 'Running the simulator': 'running_simulator_standalone.md'
    # - 'Connecting a Python client': 'connecting_the_client.md'
    - 'Python API tutorial': 'python_api_tutorial.md'
    - 'Configuring the simulation': 'configuring_the_simulation.md'
    # - 'Measurements': 'measurements.md'
    - 'Cameras and sensors': 'cameras_and_sensors.md'
    - 'F.A.Q.': 'faq.md'
  - Driving Benchmark:
    - 'Quick Start': 'benchmark_start.md'
    - 'General Structure': 'benchmark_structure.md'
    - 'Creating Your Benchmark': 'benchmark_creating.md'
    - 'Computed Performance Metrics': 'benchmark_metrics.md'
  - Building from source:
    - 'How to build on Linux': 'how_to_build_on_linux.md'
    - 'How to build on Windows': 'how_to_build_on_windows.md'
  - Advanced topics:
    - 'CARLA Settings': 'carla_settings.md'
    - 'Python API': 'python_api.md'
    # - 'Simulator keyboard input': 'simulator_keyboard_input.md'
    - 'Python API reference': 'python_api.md'
    - 'Running without display and selecting GPUs': 'carla_headless.md'
    - 'Running in a Docker': 'carla_docker.md'
    - "How to link Epic's Automotive Materials": 'epic_automotive_materials.md'

@ -34,17 +26,10 @@ pages:
    - 'Code of conduct': 'CODE_OF_CONDUCT.md'
  - Development:
    - 'Map customization': 'map_customization.md'
    # - 'CARLA design': 'carla_design.md'
    # - 'CarlaServer documentation': 'carla_server.md'
    - 'Build system': 'build_system.md'
  - Art guidelines:
    - 'How to add assets': 'how_to_add_assets.md'
    - 'How to model vehicles': 'how_to_model_vehicles.md'
  - Appendix:
    - 'Driving Benchmark Sample Results Town01': 'benchmark_basic_results_town01.md'
    - 'Driving Benchmark Sample Results Town02': 'benchmark_basic_results_town02.md'

markdown_extensions:
  - admonition