@@ -86,12 +86,12 @@ Now the latest content for the project, known as `master` branch in the reposito
!!! Note
The `master` branch contains the latest fixes and features. Stable code is inside the `stable` branch, and it can be built by changing the branch. The same goes for previous CARLA releases. Always remember to check the current branch in git with `git branch`.
-
+#### Get assets
Only the assets package, the visual content, is yet to be downloaded. `\Util\ContentVersions.txt` contains the links to the assets for every CARLA version. These must be extracted in `Unreal\CarlaUE4\Content\Carla`. If the path doesn't exist, create it.
Download the **latest** assets to work with the current version of CARLA. When working with branches containing previous releases of CARLA, make sure to download the proper assets.
-
+#### Make CARLA
Go to the root CARLA folder, the one cloned from the repository. It is time to do the automatic build. The process may take a while, as it will download and install the necessary libraries. It might take 20-40 minutes, depending on hardware and internet connection. There are different make commands to build the different modules:
@@ -122,8 +122,9 @@ Now everything is ready to go and CARLA has been successfully built. Here is a b
| `make PythonAPI` | Builds the CARLA client. |
| `make package` | Builds CARLA and creates a packaged version for distribution. |
| `make clean` | Deletes all the binaries and temporary files generated by the build system. |
-| `make rebuild` | make clean and make launch both in one command. |
+| `make rebuild` | `make clean` and `make launch` both in one command. |
+
Keep reading this section to learn more about how to update CARLA, the build itself and some advanced configuration options.
Otherwise, visit the __First steps__ section to learn about CARLA:
diff --git a/Docs/how_to_control_vehicle_physics.md b/Docs/how_to_control_vehicle_physics.md
index d9ecfb889..24798176e 100644
--- a/Docs/how_to_control_vehicle_physics.md
+++ b/Docs/how_to_control_vehicle_physics.md
@@ -1,4 +1,4 @@
-
How to control vehicle physics
+# How to control vehicle physics
Physics properties can be tuned for vehicles and its wheels.
These changes are applied **only** on runtime, and values are set back to default ones when
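The read-modify-apply pattern can be pictured with a minimal stand-in object. In CARLA the real calls are `vehicle.get_physics_control()` and `vehicle.apply_physics_control()`; the class and fields below are illustrative only, not the actual API:

```python
from dataclasses import dataclass, replace

# Minimal stand-in for carla.VehiclePhysicsControl; names and defaults here
# are illustrative, not the real API surface.
@dataclass(frozen=True)
class PhysicsControl:
    mass: float = 1500.0          # kg
    drag_coefficient: float = 0.3

def tuned(control: PhysicsControl, **changes) -> PhysicsControl:
    # Changes apply only to the returned control: the defaults stay intact,
    # mirroring how CARLA restores default physics when the simulation restarts.
    return replace(control, **changes)

default = PhysicsControl()
heavy = tuned(default, mass=2000.0)
```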
diff --git a/Docs/how_to_generate_pedestrians_navigation.md b/Docs/how_to_generate_pedestrians_navigation.md
index 56427f88b..22203d7b5 100644
--- a/Docs/how_to_generate_pedestrians_navigation.md
+++ b/Docs/how_to_generate_pedestrians_navigation.md
@@ -1,4 +1,4 @@
-
How to generate the pedestrian navigation info
+# How to generate the pedestrian navigation info
### Introduction
Pedestrians need information about the map in a specific format in order to walk. The file that describes the map for navigation is a binary file with the `.BIN` extension, and it is saved in the **Nav** folder of the map. Each map needs a `.BIN` file with the same name as the map, so it can be loaded automatically with the map.
@@ -21,7 +21,9 @@ We have several types of meshes for navigation. The meshes need to be identified
| Grass | `Road_Grass` | Pedestrians can walk over these meshes but as a second option if no ground is found. |
| Road | `Road_Road`, `Road_Curb`, `Road_Gutter` or `Road_Marking` | Pedestrians won't be allowed to walk on it unless we specify some percentage of pedestrians that will be allowed. |
| Crosswalk | `Road_Crosswalk` | Pedestrians can cross the roads only through these meshes. |
-| Block | any other name | Pedestrians will avoid these meshes always (are obstacles like traffic lights, trees, houses...). |
+| Block | any other name | Pedestrians will always avoid these meshes (obstacles such as traffic lights, trees, houses...). |
+
+
For instance, all road meshes need to start with `Road_Road` e.g: `Road_Road_Mesh_1`, `Road_Road_Mesh_2`...
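A sketch of the prefix convention (plain Python, not part of any CARLA tool): pick the longest matching prefix, and treat anything else as a Block obstacle.

```python
# Prefixes taken from the table above; longest match wins so that e.g.
# "Road_Crosswalk_1" is not mistaken for a plain road mesh.
PREFIXES = ["Road_Crosswalk", "Road_Marking", "Road_Gutter",
            "Road_Grass", "Road_Road", "Road_Curb"]

def nav_group(mesh_name: str) -> str:
    for prefix in sorted(PREFIXES, key=len, reverse=True):
        if mesh_name.startswith(prefix):
            return prefix
    return "Block"  # any other name: avoided by pedestrians
```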
diff --git a/Docs/how_to_make_a_new_map.md b/Docs/how_to_make_a_new_map.md
index af4a26dca..f990e0832 100644
--- a/Docs/how_to_make_a_new_map.md
+++ b/Docs/how_to_make_a_new_map.md
@@ -1,7 +1,8 @@
-
How to create and import a new map
+# How to create and import a new map
![Town03](img/create_map_01.jpg)
+-----
## 1 Create a new map
Files needed:
@@ -15,6 +16,7 @@ tutorial.
The following steps will introduce the RoadRunner software for map creation. If the map is
created by other software, go to this [section](#3-importing-into-unreal).
+------
## 2 Create a new map with RoadRunner
RoadRunner is a powerful software from Vector Zero to create 3D scenes. Using RoadRunner is easy,
@@ -22,7 +24,7 @@ in a few steps you will be able to create an impressive scene. You can download
a trial of RoadRunner at VectorZero's web page.
-
+
Read VectorZero's RoadRunner [documentation][rr_docs] to install it and get started.
@@ -36,7 +38,7 @@ They also have very useful [tutorials][rr_tutorials] on how to use RoadRunner, c
!!! important
    Create the map centered around (0, 0).
-## 2.1 Validate the map
+#### 2.1 Validate the map
* Check that all connections and geometries seem correct.
@@ -51,8 +53,8 @@ button and export.
The _OpenDrive Preview Tool_ button lets you test the integrity of the current map.
If there is any error with map junctions, click on `Maneuver Tool`
and `Rebuild Maneuver Roads` buttons.
-
-## 2.2 Export the map
+
+#### 2.2 Export the map
After verifying that everything is correct, it is time to export the map to CARLA.
@@ -75,6 +77,7 @@ _check VectorZeros's [documentation][exportlink]._
[exportlink]: https://tracetransit.atlassian.net/wiki/spaces/VS/pages/752779356/Exporting+to+CARLA
+-------
## 3 Importing into Unreal
This section is divided into two. The first part shows how to import a map from RoadRunner
@@ -87,9 +90,9 @@ and the second part shows how to import a map from other software that generates
We have also created a new way to import assets into Unreal,
check this [`guide`](./asset_packages_for_dist.md)!
-### 3.1 Importing from RoadRunner
+#### 3.1 Importing from RoadRunner
-#### 3.1.1 Plugin Installation
+##### 3.1.1 Plugin Installation
RoadRunner provides a series of plugins that make the importing simpler.
@@ -100,7 +103,7 @@ RoadRunner provides a series of plugins that make the importing simpler.
3. Rebuild the plugin.
-##### Rebuild on Windows
+###### Rebuild on Windows
1. Generate project files.
@@ -108,7 +111,7 @@ RoadRunner provides a series of plugins that make the importing simpler.
2. Open the project and build the plugins.
-##### Rebuild on Linux
+###### Rebuild on Linux
```sh
> UE4_ROOT/GenerateProjectFiles.sh -project="carla/Unreal/CarlaUE4/CarlaUE4.uproject" -game -engine
@@ -118,7 +121,7 @@ Finally, restart Unreal Engine and make sure the checkbox is on for both plugins
![rr_ue_plugins](img/rr-ue4_plugins.png)
-#### 3.1.2 Importing
+##### 3.1.2 Importing
1. Import the _mapname.fbx_ file to a new folder under `/Content/Carla/Maps`
with the `Import` button.
@@ -144,7 +147,7 @@ The new map should now appear next to the others in the Unreal Engine _Content B
And that's it! The map is ready!
-### 3.2 Importing from the files
+#### 3.2 Importing from the files
This is the generic way to import maps into Unreal.
@@ -154,7 +157,7 @@ and paste it in the new level, otherwise, the map will be in the dark.
![ue_illumination](img/ue_illumination.png)
-#### 3.2.1 Binaries (.fbx)
+##### 3.2.1 Binaries (.fbx)
1. Import the _mapname.fbx_ file to a new folder under `/Content/Carla/Maps`
with the `Import` button. Make sure the following options are unchecked:
@@ -238,7 +241,7 @@ Content
![ue__semantic_segmentation](img/ue_ssgt.png)
-#### 3.2.2 OpenDRIVE (.xodr)
+##### 3.2.2 OpenDRIVE (.xodr)
1. Copy the `.xodr` file inside the `Content/Carla/Maps/OpenDrive` folder.
2. Open the Unreal level and drag the _Open Drive Actor_ inside the level.
@@ -248,6 +251,7 @@ It will read the level's name, search the Opendrive file with the same name and
And that's it! Now the road network information is loaded into the map.
+-------
## 4. Setting up traffic behavior
Once everything is loaded into the level, it is time to create traffic behavior.
@@ -261,7 +265,7 @@ Once everything is loaded into the level, it is time to create traffic behavior.
This will generate a bunch of _RoutePlanner_ and _VehicleSpawnPoint_ actors that make
it possible for vehicles to spawn and go in autopilot mode.
-## 4.1 Traffic lights and signs
+#### 4.1 Traffic lights and signs
To regulate the traffic, traffic lights and signs must be placed all over the map.
@@ -287,6 +291,7 @@ might need some tweaking and testing to fit perfectly into the city.
> _Example: Traffic Signs, Traffic lights and Turn based stop._
+----------
## 5 Adding pedestrian navigation areas
To make a navigable mesh for pedestrians, we use the _Recast & Detour_ library.
@@ -335,6 +340,7 @@ Then build RecastDemo. Follow their [instructions][buildrecastlink] on how to bu
Now pedestrians will be able to spawn randomly and walk on the selected meshes!
+----------
## Tips and Tricks
* Traffic light group controls which traffic light is active (green state) at each moment.
diff --git a/Docs/how_to_model_vehicles.md b/Docs/how_to_model_vehicles.md
index 75fa29870..845da8d61 100644
--- a/Docs/how_to_model_vehicles.md
+++ b/Docs/how_to_model_vehicles.md
@@ -1,8 +1,9 @@
-
How to model vehicles
+# How to model vehicles
-# 4-Wheeled Vehicles
+------------
+## 4-Wheeled Vehicles
-## Modelling
+#### Modelling
Vehicles must have a minimum of 10,000 and a maximum of 17,000 tris
approximately. We model the vehicles using the size and scale of actual cars.
@@ -35,7 +36,7 @@ The vehicle must be divided in 6 materials:
Put a rectangular plane with this size 29-12 cm, for the licence Plate.
We assign the license plate texture.
-## Nomenclature of Material
+#### Nomenclature of Material
* M(Material)_"CarName"_Bodywork(part of car)
@@ -49,7 +50,7 @@ The vehicle must be divided in 6 materials:
* M_"CarName"_LicencePlate
-## Textures
+#### Textures
The size of the textures is 2048x2048.
@@ -59,7 +60,7 @@ The size of the textures is 2048x2048.
* T_"CarName"_PartOfMaterial_orm (OcclusionRoughnessMetallic)
-* **EXEMPLE**:
+* **EXAMPLE**:
Type of car Tesla Model 3
TEXTURES
@@ -70,7 +71,7 @@ TEXTURES
MATERIAL
* M_Tesla3_BodyWork
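The naming scheme above can be captured with two tiny helpers (illustrative only, not part of any CARLA tooling):

```python
# Build texture and material names following the nomenclature described above.
def texture_name(car: str, part: str, kind: str) -> str:
    # kind: e.g. "orm" for OcclusionRoughnessMetallic, per the doc's examples
    return f"T_{car}_{part}_{kind}"

def material_name(car: str, part: str) -> str:
    return f"M_{car}_{part}"
```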
-## RIG
+#### RIG
The easiest way is to copy the "General4WheeledVehicleSkeleton" present in our project,
either by exporting it and copying it to your model or by creating your skeleton
Vehicle_Base: The origin point of the mesh, place it in the point (0,0,0) of the
* Wheel_Rear_Left: Set the joint's position in the middle of the Wheel.
-## LODs
+#### LODs
All vehicle LODs must be made in Maya or other 3D software. Because Unreal does
not generate LODs automatically, you can adjust the number of Tris to make a
diff --git a/Docs/python_api.md b/Docs/python_api.md
index 792162a12..076872652 100644
--- a/Docs/python_api.md
+++ b/Docs/python_api.md
@@ -1,3 +1,4 @@
+# Python API reference
## carla.Actor
CARLA defines actors as anything that plays a role in the simulation or can be moved around. That includes: pedestrians, vehicles, sensors and traffic signs (considering traffic lights as part of these). Actors are spawned in the simulation by [carla.World](#carla.World) and they need a [carla.ActorBlueprint](#carla.ActorBlueprint) to be created. These blueprints belong to a library provided by CARLA; find more about them [here](../bp_library/).
diff --git a/Docs/python_cookbook.md b/Docs/python_cookbook.md
index cf563d410..7468ca99a 100644
--- a/Docs/python_cookbook.md
+++ b/Docs/python_cookbook.md
@@ -1,5 +1,4 @@
-
-
Code recipes
+# Code recipes
This section contains a list of recipes that complement the [tutorial](../python_api_tutorial/)
and are used to illustrate the use of Python API methods.
diff --git a/Docs/recorder_and_playback.md b/Docs/recorder_and_playback.md
index f3ec90c4b..29d8baf1e 100644
--- a/Docs/recorder_and_playback.md
+++ b/Docs/recorder_and_playback.md
@@ -1,4 +1,4 @@
-
Recorder
+# Recorder
This is one of the advanced CARLA features. It allows recording and re-enacting a simulation, providing a complete log of the events that happened and a few queries to ease tracing and studying them.
To learn about the generated file and its specifics take a look at this [reference](recorder_binary_file_format.md).
@@ -59,12 +59,14 @@ client.replay_file("recording01.log", start, duration, camera)
| ---------- | ----------------------------------------------------- | ----- |
| `start` | Recording time in seconds to start the simulation at. | If positive, time will be considered from the beginning of the recording. If negative, it will be considered from the end. |
| `duration` | Seconds to playback. 0 is all the recording. | By the end of the playback, vehicles will be set to autopilot and pedestrians will stop. |
-| `camera` | ID of the actor that the camera will focus on. | By default the spectator will move freely. |
+| `camera` | ID of the actor that the camera will focus on. | By default the spectator will move freely. |
+
+
!!! Note
    These parameters allow recalling an event and then letting the simulation run free, as vehicles will be set to autopilot when the recording stops.
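As a back-of-the-envelope sketch (plain Python, not CARLA code), the `start` parameter resolves to an absolute time inside the recording like this:

```python
# Positive start counts from the beginning of the recording; negative counts
# from the end, per the parameter table above.
def resolve_start(start: float, recording_duration: float) -> float:
    if start >= 0:
        return start
    return recording_duration + start
```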
-
Setting a time factor
+#### Setting a time factor
The time factor will determine the playback speed.
@@ -75,7 +77,9 @@ client.set_replayer_time_factor(2.0)
```
| Parameters | Default | Fast motion | Slow motion |
| ------------- | ------- | ----------- | ----------- |
-| `time_factor` | __1.0__ | __>1.0__ | __<1.0__ |
+| `time_factor` | __1.0__ | __>1.0__ | __<1.0__ |
+
+
!!! Important
    Over 2.0, position interpolation is disabled and positions are just updated. Pedestrians' animations are not affected by the time factor.
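The effect on wall-clock playback time can be sketched in one line (illustrative arithmetic, not CARLA code): a factor above 1.0 compresses the playback, below 1.0 stretches it.

```python
# Wall-clock seconds needed to play back a recording at a given time factor.
def playback_seconds(recorded_seconds: float, time_factor: float) -> float:
    return recorded_seconds / time_factor
```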
@@ -135,7 +139,7 @@ Duration: 60.3753 seconds
---------------
##Queries
-
Collisions
+#### Collisions
In order to record collisions, vehicles must have a [collision detector](../ref_sensors#collision-detector) attached. The collisions registered by the recorder can be queried using arguments to filter the type of the actors involved in the collisions. For example, `h` identifies actors whose `role_name = hero`, usually assigned to vehicles managed by the user.
Currently, the actor types that can be used in the query are:
@@ -184,7 +188,7 @@ In this case, the playback showed this:
![collision](img/collision1.gif)
-
Blocked actors
+#### Blocked actors
This query is used to detect vehicles that were stuck during the recording. An actor is considered blocked if it does not move a minimum distance in a certain time. This definition is made by the user during the query:
@@ -195,7 +199,9 @@ client.show_recorder_actors_blocked("recording01.log", min_time, min_distance)
| Parameters | Description | Default |
| -------------- | --------------------------------------------------------- | ----- |
| `min_time` | Minimum seconds to move `min_distance`. | 30 secs. |
-| `min_distance` | Minimum centimeters to move to not be considered blocked. | 10 cm. |
+| `min_distance` | Minimum centimeters to move to not be considered blocked. | 10 cm. |
+
+
!!! Note
Take into account that vehicles are stopped at traffic lights sometimes for longer than expected.
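The blocked-actor criterion can be illustrated with a self-contained check (this is not the recorder's actual implementation, just a sketch of the definition above; positions are hypothetical `(t_seconds, x_cm, y_cm)` samples):

```python
import math

def is_blocked(samples, min_time=30.0, min_dist=10.0):
    """True if the actor stays within min_dist cm for at least min_time s."""
    for i, (t0, x0, y0) in enumerate(samples):
        for t1, x1, y1 in samples[i + 1:]:
            if math.hypot(x1 - x0, y1 - y0) >= min_dist:
                break                # moved enough: restart the clock
            if t1 - t0 >= min_time:
                return True          # stayed within min_dist for too long
    return False
```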
@@ -241,7 +247,9 @@ Some of the provided scripts in `PythonAPI/examples` facilitate the use of the r
| -------------- | ------------ |
| `-f` | Filename. |
| `-n` (optional)| Vehicles to spawn. Default is 10. |
-| `-t` (optional)| Duration of the recording. |
+| `-t` (optional)| Duration of the recording. |
+
+
* __start_replaying.py__: starts the playback of a recording. Starting time, duration and actor to follow can be set.
@@ -250,7 +258,9 @@ Some of the provided scripts in `PythonAPI/examples` facilitate the use of the r
| `-f` | Filename. |
| `-s` (optional)| Starting time. Default is 0. |
| `-d` (optional)| Duration. Default is all. |
-| `-c` (optional)| ID of the actor to follow. |
+| `-c` (optional)| ID of the actor to follow. |
+
+
* __show_recorder_file_info.py__: shows all the information in the recording file.
Two modes of detail: by default it only shows frames where some event is recorded. The second shows all information for all frames.
@@ -258,7 +268,9 @@ Two modes of detail: by default it only shows frames where some event is recorde
| Parameters | Description |
| -------------- | ------------ |
| `-f` | Filename. |
-| `-s` (optional)| Flag to show all details. |
+| `-s` (optional)| Flag to show all details. |
+
+
* __show_recorder_collisions.py__: shows recorded collisions between two actors of type __A__ and __B__ defined using a series of flags: `-t = vv` would show all collisions between vehicles.
@@ -274,8 +286,9 @@ Two modes of detail: by default it only shows frames where some event is recorde
| --------------- | ------------ |
| `-f` | Filename. |
| `-t` (optional) | Time to move `-d` before being considered blocked. |
-| `-d` (optional) | Distance to move to not be considered blocked. |
+| `-d` (optional) | Distance to move to not be considered blocked. |
+
---------------
Now it is time to experiment for a while. Use the recorder to play back a simulation, trace back events, and make changes to see new outcomes. Feel free to share your thoughts in the CARLA forum about this matter:
diff --git a/Docs/ref_sensors.md b/Docs/ref_sensors.md
index f80eb58fb..94841f901 100644
--- a/Docs/ref_sensors.md
+++ b/Docs/ref_sensors.md
@@ -1,4 +1,4 @@
-
Sensors' documentation
+# Sensors' documentation
* [__Collision detector__](#collision-detector)
* [__Depth camera__](#depth-camera)
@@ -23,7 +23,7 @@ To ensure that collisions with any kind of object are detected, the server creat
Collision detectors do not have any configurable attribute.
-
Output attributes:
+#### Output attributes
| Sensor data attribute | Type | Description |
| ---------------------- | ----------- | ----------- |
@@ -56,16 +56,18 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert
![ImageDepth](img/capture_depth.png)
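The depth value is encoded across the R, G and B channels of the image. A minimal standalone sketch of the decoding formula described in CARLA's docs (treat it as illustrative, not the library's code):

```python
# 24-bit depth split across R, G, B; the far plane sits at 1000 m.
def depth_meters(r: int, g: int, b: int) -> float:
    normalized = (r + g * 256 + b * 256 * 256) / (256 ** 3 - 1)
    return 1000.0 * normalized
```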
-
Basic camera attributes
+#### Basic camera attributes
| Blueprint attribute | Type | Default | Description |
| ------------------- | ---- | ------- | ----------- |
| `image_size_x` | int | 800 | Image width in pixels. |
| `image_size_y` | int | 600 | Image height in pixels. |
| `fov` | float | 90.0 | Horizontal field of view in degrees. |
-| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
+| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
-
Camera lens distortion attributes
+
+
+#### Camera lens distortion attributes
| Blueprint attribute | Type | Default | Description |
|--------------------------|-------|---------|-------------|
@@ -74,9 +76,11 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert
| `lens_k` | float | -1.0 | Range: [-inf, inf] |
| `lens_kcube` | float | 0.0 | Range: [-inf, inf] |
| `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] |
-| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
+| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
-
Output attributes
+
+
+#### Output attributes
| Sensor data attribute | Type | Description |
| --------------------- | ------------------------------------------------ | ----------- |
@@ -96,7 +100,7 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert
Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gnss) of its parent object. This is calculated by adding the metric position to an initial geo reference location defined within the OpenDRIVE map definition.
-
GNSS attributes
+#### GNSS attributes
| Blueprint attribute | Type | Default | Description |
| -------------------- | ---- | ------- | ----------- |
@@ -107,9 +111,11 @@ Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gns
| `noise_lon_bias` | float | 0.0 | Mean parameter in the noise model for longitude. |
| `noise_lon_stddev` | float | 0.0 | Standard deviation parameter in the noise model for longitude. |
| `noise_seed` | int | 0 | Initializer for a pseudorandom number generator. |
-| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
+| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
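The bias/standard-deviation attributes above describe a Gaussian noise model. A standalone sketch of what such a model looks like (the real implementation lives server-side; this only illustrates the parameters):

```python
import random

# Add seeded Gaussian noise with a mean (bias) and standard deviation to a
# true reading; the seed makes the noise reproducible, like noise_seed.
def noisy(value: float, bias: float, stddev: float, rng: random.Random) -> float:
    return value + rng.gauss(bias, stddev)
```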
-
Output attributes
+
+
+#### Output attributes
| Sensor data attribute | Type | Description |
| ---------------------- | ------------------------------------------------ | ----------- |
@@ -128,7 +134,7 @@ Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gns
Provides measures that an accelerometer, a gyroscope and a compass would retrieve for the parent object. The data is collected from the object's current state.
-
IMU attributes
+#### IMU attributes
| Blueprint attribute | Type | Default | Description |
| --------------------- | ---- | ------- | ----------- |
@@ -142,10 +148,11 @@ Provides measures that accelerometer, gyroscope and compass would retrieve for t
| `noise_gyro_stddev_y` | float | 0.0 | Standard deviation parameter in the noise model for the gyroscope (Y axis). |
| `noise_gyro_stddev_z` | float | 0.0 | Standard deviation parameter in the noise model for the gyroscope (Z axis). |
| `noise_seed` | int | 0 | Initializer for a pseudorandom number generator. |
-| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
+| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
+
-
Output attributes
+#### Output attributes
| Sensor data attribute | Type | Description |
| --------------------- | ------------------------------------------------ | ----------- |
@@ -174,7 +181,7 @@ This sensor does not have any configurable attribute.
!!! Important
This sensor works fully on the client-side.
-
Output attributes
+#### Output attributes
| Sensor data attribute | Type | Description |
| ----------------------- | ---------------------------------------------------------- | ----------- |
@@ -210,7 +217,7 @@ for location in lidar_measurement:
![LidarPointCloud](img/lidar_point_cloud.gif)
-
Lidar attributes
+#### Lidar attributes
| Blueprint attribute | Type | Default | Description |
| -------------------- | ---- | ------- | ----------- |
@@ -220,9 +227,11 @@ for location in lidar_measurement:
| `rotation_frequency` | float | 10.0 | Lidar rotation frequency. |
| `upper_fov` | float | 10.0 | Angle in degrees of the highest laser. |
| `lower_fov` | float | -30.0 | Angle in degrees of the lowest laser. |
-| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
+| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
-
Output attributes
+
+
+#### Output attributes
| Sensor data attribute | Type | Description |
| -------------------------- | ------------------------------------------------ | ----------- |
@@ -235,7 +244,7 @@ for location in lidar_measurement:
| `raw_data` | bytes | Array of 32-bit floats (XYZ of each point). |
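Since `raw_data` is a flat buffer of 32-bit floats, three per point, it can be unpacked with the standard library. A sketch, assuming little-endian packing (the buffer below is hypothetical, built only for illustration):

```python
import struct

# Unpack a flat float32 buffer into (x, y, z) tuples.
def to_points(raw_data: bytes):
    floats = struct.unpack(f"<{len(raw_data) // 4}f", raw_data)
    return [floats[i:i + 3] for i in range(0, len(floats), 3)]

# Hypothetical two-point buffer for demonstration:
buf = struct.pack("<6f", 1.0, 2.0, 3.0, 4.0, 5.0, 6.0)
```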
---------------
-##Obstacle detector
+## Obstacle detector
* __Blueprint:__ sensor.other.obstacle
* __Output:__ [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleDetectionEvent) per obstacle (unless `sensor_tick` says otherwise).
@@ -250,9 +259,11 @@ To ensure that collisions with any kind of object are detected, the server creat
| `hit_radius` | float | 0.5 | Radius of the trace. |
| `only_dynamics` | bool | false | If true, the trace will only consider dynamic objects. |
| `debug_linetrace` | bool | false | If true, the trace will be visible. |
-| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
+| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
-
Output attributes
+
+
+#### Output attributes
| Sensor data attribute | Type | Description |
| ---------------------- | ------------------------------------------------ | ----------- |
@@ -264,7 +275,7 @@ To ensure that collisions with any kind of object are detected, the server creat
| `distance` | float | Distance from `actor` to `other_actor`. |
---------------
-##Radar sensor
+## Radar sensor
* __Blueprint:__ sensor.other.radar
* __Output:__ [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) per step (unless `sensor_tick` says otherwise).
@@ -289,9 +300,11 @@ The provided script `manual_control.py` uses this sensor to show the points bein
| `points_per_second` | int | 1500 | Points generated by all lasers per second. |
| `range` | float | 100 | Maximum distance to measure/raycast in meters. |
| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
-| `vertical_fov` | float | 30 | Vertical field of view in degrees. |
+| `vertical_fov` | float | 30 | Vertical field of view in degrees. |
-
Output attributes
+
+
+#### Output attributes
| Sensor data attribute | Type | Description |
| ---------------------- | ---------------------------------------------------------------- | ----------- |
@@ -305,7 +318,7 @@ The provided script `manual_control.py` uses this sensor to show the points bein
| `velocity` | float | Velocity towards the sensor. |
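Each radar detection is expressed in polar form (depth, azimuth, altitude). Converting it to a point in the sensor's local frame is plain trigonometry; the helper below is an illustrative sketch (angles in radians), not CARLA code:

```python
import math

# Polar (depth, azimuth, altitude) -> Cartesian (x, y, z) in the sensor frame.
def radar_to_xyz(depth: float, azimuth: float, altitude: float):
    x = depth * math.cos(altitude) * math.cos(azimuth)
    y = depth * math.cos(altitude) * math.sin(azimuth)
    z = depth * math.sin(altitude)
    return x, y, z
```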
---------------
-##RGB camera
+## RGB camera
* __Blueprint:__ sensor.camera.rgb
* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
@@ -328,7 +341,7 @@ A value of 1.5 means that we want the sensor to capture data each second and a h
![ImageRGB](img/capture_scenefinal.png)
-
Basic camera attributes
+#### Basic camera attributes
| Blueprint attribute | Type | Default | Description |
|---------------------|-------|---------|-------------|
@@ -339,9 +352,11 @@ A value of 1.5 means that we want the sensor to capture data each second and a h
| `iso` | float | 1200.0 | The camera sensor sensitivity. |
| `gamma` | float | 2.2 | Target gamma value of the camera. |
| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
-| `shutter_speed` | float | 60.0 | The camera shutter speed in seconds (1.0 / s). |
+| `shutter_speed` | float | 60.0 | The camera shutter speed in seconds (1.0 / s). |
-
Camera lens distortion attributes
+
+
+#### Camera lens distortion attributes
| Blueprint attribute | Type | Default | Description |
|--------------------------|-------|---------|-------------|
@@ -350,9 +365,11 @@ A value of 1.5 means that we want the sensor to capture data each second and a h
| `lens_k` | float | -1.0 | Range: [-inf, inf] |
| `lens_kcube` | float | 0.0 | Range: [-inf, inf] |
| `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] |
-| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
+| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
-
Advanced camera attributes
+
+
+#### Advanced camera attributes
Since these effects are provided by UE, please make sure to check their documentation:
@@ -390,11 +407,13 @@ Since these effects are provided by UE, please make sure to check their document
| `tint` | float | 0.0 | White balance temperature tint. Adjusts cyan and magenta color ranges. This should be used along with the white balance Temp property to get accurate colors. Under some light temperatures, the colors may appear to be more yellow or blue. This can be used to balance the resulting color to look more natural. |
| `chromatic_aberration_intensity` | float | 0.0 | Scaling factor to control color shifting, more noticeable on the screen borders. |
| `chromatic_aberration_offset` | float | 0.0 | Normalized distance to the center of the image where the effect takes place. |
-| `enable_postprocess_effects` | bool | True | Post-process effects activation. |
+| `enable_postprocess_effects` | bool | True | Post-process effects activation. |
+
+
[AutomaticExposure.gamesetting]: https://docs.unrealengine.com/en-US/Engine/Rendering/PostProcessEffects/AutomaticExposure/index.html#gamesetting
-
Output attributes
+#### Output attributes
| Sensor data attribute | Type | Description |
| --------------------- | ------------------------------------------------ | ----------- |
@@ -433,7 +452,9 @@ The following tags are currently available:
| 9 | Vegetation | (107, 142, 35) |
| 10 | Car | ( 0, 0, 142) |
| 11 | Wall | (102, 102, 156) |
-| 12 | Traffic sign | (220, 220, 0) |
+| 12 | Traffic sign | (220, 220, 0) |
+
+
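The tag-to-color mapping can be sketched with a lookup over the palette (only the tags listed in the rows above are included here; the fallback color for unknown tags is an assumption of this sketch, not part of CARLA):

```python
# Fragment of the semantic segmentation palette from the table above.
PALETTE = {
    9:  (107, 142, 35),   # Vegetation
    10: (0, 0, 142),      # Car
    11: (102, 102, 156),  # Wall
    12: (220, 220, 0),    # Traffic sign
}

def colorize(tag: int):
    # Assumption: unrecognized tags fall back to black.
    return PALETTE.get(tag, (0, 0, 0))
```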
!!! Note
**Adding new tags**:
@@ -441,16 +462,18 @@ The following tags are currently available:
![ImageSemanticSegmentation](img/capture_semseg.png)
-
Basic camera attributes
+#### Basic camera attributes
| Blueprint attribute | Type | Default | Description |
| ------------------- | ---- | ------- | ----------- |
| `fov` | float | 90.0 | Horizontal field of view in degrees. |
| `image_size_x` | int | 800 | Image width in pixels. |
| `image_size_y` | int | 600 | Image height in pixels. |
-| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
+| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
-
Camera lens distortion attributes
+
+
+#### Camera lens distortion attributes
| Blueprint attribute | Type | Default | Description |
|------------------------- |------ |---------|-------------|
@@ -459,9 +482,11 @@ The following tags are currently available:
| `lens_k` | float | -1.0 | Range: [-inf, inf] |
| `lens_kcube` | float | 0.0 | Range: [-inf, inf] |
| `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] |
-| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
+| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
-
Output attributes
+
+
+#### Output attributes
| Sensor data attribute | Type | Description |
| --------------------- | ------------------------------------------------ | ----------- |
@@ -471,4 +496,6 @@ The following tags are currently available:
| `raw_data` | bytes | Array of BGRA 32-bit pixels. |
| `timestamp` | double | Simulation time of the measurement in seconds since the beginning of the episode. |
| `transform` | [carla.Transform](python_api.md#carla.Transform) | Location and rotation in world coordinates of the sensor at the time of the measurement. |
-| `width` | int | Image width in pixels. |
+| `width` | int | Image width in pixels. |
+
+
diff --git a/Docs/rendering_options.md b/Docs/rendering_options.md
index e432744c1..ef925c396 100644
--- a/Docs/rendering_options.md
+++ b/Docs/rendering_options.md
@@ -1,4 +1,4 @@
-
Rendering options
+# Rendering options
Before you start running your own experiments there are few details to take into
account at the time of configuring your simulation. In this document we cover
@@ -21,7 +21,7 @@ the most important ones.
---------------
##Graphics quality
-
Vulkan vs OpenGL
+#### Vulkan vs OpenGL
Vulkan is the default graphics API used by Unreal Engine and CARLA (if installed). It consumes more memory, but performs faster and makes for a better frame rate. However, it is quite experimental, especially in Linux, and it may lead to some issues.
For said reasons, there is the option to change to OpenGL simply by using a flag when running CARLA. The same flag works for both Linux and Windows:
@@ -32,7 +32,7 @@ cd carla && ./CarlaUE4.sh -opengl
When working with the build version of CARLA, it is Unreal Engine itself that needs to be set to use OpenGL. [Here][UEdoc] is documentation regarding different command line options for Unreal Engine.
[UEdoc]: https://docs.unrealengine.com/en-US/Programming/Basics/CommandLineArguments/index.html
-
Quality levels
+#### Quality levels
CARLA also allows for two different graphics quality levels: __Epic__, the default, and __Low__, which disables all post-processing and shadows and sets the drawing distance to 50m instead of infinite, making the simulation run significantly faster.
Low mode is not only used when precision is nonessential or there are technical limitations, but also to train agents under conditions with simpler data or regarding only close elements.
@@ -82,11 +82,11 @@ Unreal Engine needs for a screen in order to run, but there is a workaround for
The simulator launches but there is no available window. However, it can be connected to in the usual manner and scripts run the same way. For the sake of understanding, let's say that this mode tricks Unreal Engine into running with a fake screen.
-Off-screen vs no-rendering
+####Off-screen vs no-rendering
These may look similar but are indeed quite different, and it is important to understand the distinction between them to prevent misunderstandings. In off-screen mode, Unreal Engine works as usual and rendering is computed normally; the only difference is that there is no available display. In no-rendering mode, it is Unreal Engine itself that is told to skip rendering, so graphics are not computed at all. For these reasons, GPU sensors return data in off-screen mode, while no-rendering mode can be enabled at will.
-Setting off-screen mode
+####Setting off-screen mode
Right now this is __only possible in Linux while using OpenGL__ instead of Vulkan. Unreal Engine crashes when Vulkan is running off-screen, and this issue is yet to be fixed by Epic.
@@ -101,7 +101,7 @@ Note that this method, in multi-GPU environments, does not allow to choose the G
---------------
##Running off-screen using a preferred GPU
-Docker: recommended approach
+####Docker: recommended approach
The best way to run a headless CARLA and select the GPU is to [__run CARLA in a Docker__](../carla_docker).
This section contains an alternative tutorial, but this method is deprecated and its performance is much worse. However, it is kept here just in case, for those for whom Docker is not an option.
@@ -114,7 +114,7 @@ This section contains an alternative tutorial, but this method is deprecated and
!!! Warning
This tutorial is deprecated. To run headless CARLA, please [__run CARLA in a Docker__](../carla_docker).
-Requirements
+* __Requirements:__
This tutorial only works in Linux and makes it possible for a remote server with several graphics cards to use CARLA on all GPUs. It also applies to a desktop user trying to use CARLA with a GPU that is not plugged into any screen. To achieve that, the steps can be summarized as:
@@ -139,15 +139,15 @@ sudo apt install x11-xserver-utils libxrandr-dev
```
!!! Warning
    Make sure that the VNC version is compatible with Unreal Engine. The one above worked properly during the making of this tutorial.
+
-
-Configure the X
+* __Configure the X__
Generate an X config compatible with the installed Nvidia drivers and able to run without a display:
- sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024
+ sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024
-Emulate the virtual display
+* __Emulate the virtual display__
Run an Xorg. Here, number 7 is used, but it could be any free number:
@@ -162,17 +162,17 @@ If everything is working fine the following command will run glxinfo on Xserver
DISPLAY=:8 vglrun -d :7.0 glxinfo
!!! Important
- To run on other GPU, change the `7.X` pattern in the previous command. To set it to GPU 1: `DISPLAY=:8 vglrun -d :7.1 glxinfo`
+ To run on other GPU, change the `7.X` pattern in the previous command. To set it to GPU 1: `DISPLAY=:8 vglrun -d :7.1 glxinfo`
-Extra
+* __Extra__
To avoid the need for sudo when creating the `nohup Xorg`, go to `/etc/X11/Xwrapper.config` and change `allowed_users=console` to `allowed_users=anybody`.
It may be necessary to stop all Xorg servers before running `nohup Xorg`. The command for that may change depending on your system. Generally, for Ubuntu 16.04, use:
- sudo service lightdm stop
+ sudo service lightdm stop
-Running CARLA
+* __Running CARLA__
To run CARLA on a certain GPU in a certain `$CARLA_PATH` use the following command:
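To ground the no-rendering option discussed in this file: below is a minimal client-side sketch, assuming an already-connected `world` object exposing the `get_settings`/`apply_settings` calls and the `no_rendering_mode` field of the CARLA Python API. The helper name is illustrative, not part of CARLA.

```python
def set_no_rendering(world, enabled=True):
    """Toggle no-rendering mode on a connected CARLA world.

    When enabled, Unreal Engine skips computing graphics entirely,
    so GPU sensors such as cameras will not return data.
    """
    settings = world.get_settings()
    settings.no_rendering_mode = enabled
    world.apply_settings(settings)
    return settings
```

Off-screen mode, by contrast, needs no client call at all: rendering still happens, only the display is missing.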
diff --git a/Docs/simulation_time_and_synchrony.md b/Docs/simulation_time_and_synchrony.md
index 69a84e559..d4ddcb2e9 100644
--- a/Docs/simulation_time_and_synchrony.md
+++ b/Docs/simulation_time_and_synchrony.md
@@ -1,4 +1,4 @@
-Synchrony and time-step
+# Synchrony and time-step
This section deals with two concepts that are fundamental to fully comprehend CARLA and gain control over it to achieve the desired results. There are different configurations that define how time goes by in the simulation and how the server running said simulation works. The following sections will dive deep into these concepts:
@@ -22,7 +22,7 @@ The time-step can be fixed or variable depending on user preferences, and CARLA
!!! Note
After reading this section it would be a great idea to go for the following one, __Client-server synchrony__, especially the part about synchrony and time-step. Both are related concepts and affect each other when using CARLA.
-Variable time-step
+####Variable time-step
This is the default mode in CARLA. When the time-step is variable, the simulation time that goes by between steps will be the time that the server takes to compute them.
In order to set the simulation to a variable time-step, the code could look like this:
@@ -36,7 +36,7 @@ The provided script `PythonAPI/util/config.py` automatically sets time-step wit
cd PythonAPI/util && ./config.py --delta-seconds 0
```
-Fixed time-step
+####Fixed time-step
Going for a fixed time-step makes the server run a simulation where the elapsed time remains constant between steps. If it is set to 0.5 seconds, there will be two frames per simulated second.
Using the same time increment on each step is the best way to gather data from the simulation, as physics and sensor data will correspond to an easy-to-comprehend moment of the simulation. Also, if the server is fast enough, it makes it possible to simulate longer time periods in less real time.
@@ -52,7 +52,7 @@ Thus, the simulator will take twenty steps (1/0.05) to recreate one second of th
cd PythonAPI/util && ./config.py --delta-seconds 0.05
```
-Tips when recording the simulation
+####Tips when recording the simulation
CARLA has a [recorder feature](recorder_and_playback.md) that allows a simulation to be recorded and then reenacted. However, when looking for precision, some things need to be taken into account.
If the simulation ran with a fixed time-step, reenacting it will be easy, as the server can be set to the same time-step used in the original simulation. However, if the simulation used a variable time-step, things are a bit more complicated.
@@ -61,7 +61,7 @@ Secondly, the server can be forced to reproduce the exact same time-steps passin
Finally, there is also the floating-point arithmetic error that working with a variable time-step introduces. The simulation runs with a time-step equal to the real one, but real time is continuous while the simulation time-step is a float variable with decimal limitations. The time cropped from each step accumulates as an error and prevents the simulation from being a precise repetition of what happened.
-Time-step limitations
+####Time-step limitations
Physics must be computed within very low time-steps to be precise. The more time goes by, the more variables and chaos come into play, and the more defective the simulation will be.
CARLA uses up to 6 substeps to compute physics in every step, each with a maximum delta time of 0.016667s.
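The figures above imply a ceiling on the fixed time-step: 6 substeps of at most 0.016667s each cover roughly 0.1s. A quick sanity check, sketched as plain Python (the constant names are illustrative, not part of the CARLA API):

```python
MAX_SUBSTEPS = 6              # physics substeps CARLA computes per step
MAX_SUBSTEP_DELTA = 0.016667  # seconds, maximum delta time per substep

def time_step_is_precise(fixed_delta_seconds):
    """True if the physics substeps can cover the whole time-step."""
    return fixed_delta_seconds <= MAX_SUBSTEPS * MAX_SUBSTEP_DELTA

print(time_step_is_precise(0.05))  # 20 steps per simulated second: fine
print(time_step_is_precise(0.2))   # larger than ~0.1s: physics degrade
```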
@@ -96,7 +96,7 @@ cd PythonAPI/util && ./config.py --no-sync
```
It must be mentioned that synchronous mode can only be disabled using this script, not enabled. Enabling synchronous mode makes the server wait for a client tick, and using this script the user cannot send ticks when desired.
-Using synchronous mode
+####Using synchronous mode
The synchronous mode becomes especially relevant when running slow client applications, and when synchrony between different elements, such as sensors, is needed. If the client is too slow and the server does not wait for it, the amount of information received will be impossible to manage, and it can easily get mixed up. Similarly, if ten sensors are retrieving data and the server keeps sending information without waiting for all of them to finish with the previous frame, it would be impossible to know whether all the sensors are using data from the same moment in the simulation.
As a little extension to the previous code, in the following fragment, the client creates a camera sensor that puts the image data received in the current step in a queue and sends ticks to the server only after retrieving it from the queue. A more complex example regarding several sensors can be found [here][syncmodelink].
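The pattern described above, a callback that pushes sensor data into a queue and a client that ticks only after retrieving it, can be sketched independently of CARLA. Here a fake sensor stands in for the camera; `sensor.listen(callback)` mirrors the CARLA Python API, and `sensor.produce()` stands in for what the server would trigger after `world.tick()`:

```python
import queue

class FakeSensor:
    """Stand-in for a CARLA camera: invokes the registered callback per step."""
    def __init__(self):
        self._callback = None
        self.frame = 0

    def listen(self, callback):
        self._callback = callback

    def produce(self):
        # In CARLA, the server would call the callback after a tick.
        self.frame += 1
        self._callback({'frame': self.frame})

sensor_queue = queue.Queue()
sensor = FakeSensor()
sensor.listen(sensor_queue.put)  # callback puts the data in the queue

for _ in range(3):
    sensor.produce()                       # in CARLA: world.tick()
    data = sensor_queue.get(timeout=2.0)   # block until this step's data arrives
    print(data['frame'])
```

Blocking on `queue.get` before the next tick is what guarantees the client never races ahead of its own sensor data.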
@@ -139,7 +139,9 @@ The configuration of both concepts explained in this page, simulation time-step
| | __Fixed time-step__ | __Variable time-step__ |
| --- | --- | --- |
| __Synchronous mode__ | Client is in total control over the simulation and its information. | Risk of unreliable simulations. |
-| __Asynchronous mode__ | Good time references for information. Server runs as fast as possible. | Non easily repeatable simulations. |
+| __Asynchronous mode__ | Good time references for information. Server runs as fast as possible. | Non easily repeatable simulations. |
+
+
* __Synchronous mode + variable time-step:__ This is almost certainly an undesirable state. Physics cannot run properly when the time-step is bigger than 0.1s, and if the server needs to wait for the client to compute the steps, this is likely to happen. Simulation time and physics will then be out of synchrony and thus, the simulation is not reliable.
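The preferred cell of the table, synchronous mode plus fixed time-step, reduces to two settings fields. A sketch, assuming a connected `world` exposing the `get_settings`/`apply_settings` calls of the CARLA Python API (the helper function itself is illustrative):

```python
def make_synchronous(world, delta_seconds=0.05):
    """Fixed time-step + synchronous mode: the client controls the clock.

    The server advances only when the client calls world.tick(), and
    every step covers exactly delta_seconds of simulation time.
    """
    settings = world.get_settings()
    settings.synchronous_mode = True              # server waits for client ticks
    settings.fixed_delta_seconds = delta_seconds  # 1/0.05 = 20 steps per simulated second
    world.apply_settings(settings)
    return settings
```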
diff --git a/Docs/update_carla.md b/Docs/update_carla.md
index 8cdcbf7f0..42a26fe2f 100644
--- a/Docs/update_carla.md
+++ b/Docs/update_carla.md
@@ -1,4 +1,4 @@
-Update CARLA
+#Update CARLA
* [__Get latest binary release__](#get-latest-binary-release)
* [__Update Linux and Windows build__](#update-linux-and-windows-build)
@@ -18,7 +18,6 @@ CARLA forum
-
---------------
##Get latest binary release
@@ -46,20 +45,20 @@ Binary releases are prepackaged and thus, tied to a specific version of CARLA. I
The process of updating is quite similar and straightforward for both platforms:
-
+####Clean the build
Go to the CARLA main directory and delete the binaries and temporary files generated by the previous build:
```sh
git checkout master
make clean
```
-
+####Pull from origin
Get the current version from `master` in the CARLA repository:
```sh
git pull origin master
```
-
+####Download the assets
__Linux:__
```sh
@@ -74,7 +73,7 @@ __Windows:__
!!! Note
In order to work with the current content used by developers in the CARLA team, follow the get development assets section right below this one.
-
+####Launch the server
Run the editor with the spectator view to be sure that everything worked properly:
```sh
diff --git a/Docs/walker_bone_control.md b/Docs/walker_bone_control.md
index 369a872a4..caa8430c5 100644
--- a/Docs/walker_bone_control.md
+++ b/Docs/walker_bone_control.md
@@ -1,4 +1,4 @@
-
+# Walker Bone Control
In this tutorial we describe how to manually control and animate the
skeletons of walkers from the CARLA Python API. The reference of
diff --git a/PythonAPI/docs/bp_doc_gen.py b/PythonAPI/docs/bp_doc_gen.py
index 3d5ab4517..fac1c2e5f 100644
--- a/PythonAPI/docs/bp_doc_gen.py
+++ b/PythonAPI/docs/bp_doc_gen.py
@@ -101,7 +101,7 @@ class MarkdownFile:
def not_title(self, buf):
self._data = join([
-            self._data, '\n', self.list_depth(), '', '\n'])
+ self._data, '\n', self.list_depth(), '#', buf, '\n'])
def title(self, strongness, buf):
self._data = join([
diff --git a/PythonAPI/docs/doc_gen.py b/PythonAPI/docs/doc_gen.py
index b99a67c24..be9a0a77c 100755
--- a/PythonAPI/docs/doc_gen.py
+++ b/PythonAPI/docs/doc_gen.py
@@ -33,7 +33,7 @@ class MarkdownFile:
self._data = ""
self._list_depth = 0
self.endl = ' \n'
-
+
def data(self):
return self._data
@@ -70,6 +70,10 @@ class MarkdownFile:
def textn(self, buf):
self._data = join([self._data, self.list_depth(), buf, self.endl])
+ def first_title(self):
+ self._data = join([
+ self._data, '#Python API reference'])
+
def title(self, strongness, buf):
self._data = join([
self._data, '\n', self.list_depth(), '#' * strongness, ' ', buf, '\n'])
@@ -437,6 +441,7 @@ class Documentation:
def gen_body(self):
"""Generates the documentation body"""
md = MarkdownFile()
+ md.first_title()
for module_name in sorted(self.master_dict):
module = self.master_dict[module_name]
module_key = module_name
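The `first_title`/`title` pair added in this patch composes headings by concatenating strings into `self._data`. A condensed, self-contained sketch of that pattern; this mini class is a stand-in for the real `MarkdownFile`, with simplified signatures and without the list-depth handling:

```python
def join(parts):
    return ''.join(parts)

class MiniMarkdownFile:
    def __init__(self):
        self._data = ""

    def first_title(self, buf):
        # Top-level heading, emitted once at the start of the document.
        self._data = join([self._data, '#', buf])

    def title(self, strongness, buf):
        # 'strongness' is the heading level: 2 -> '##'.
        self._data = join([self._data, '\n', '#' * strongness, ' ', buf, '\n'])

md = MiniMarkdownFile()
md.first_title('Python API reference')
md.title(2, 'carla.Actor')
print(md._data)  # '#Python API reference\n## carla.Actor\n'
```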
diff --git a/mkdocs.yml b/mkdocs.yml
index 1a59ba32d..7c2b3f19b 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -43,7 +43,6 @@ nav:
- 'Generate pedestrian navigation': 'how_to_generate_pedestrians_navigation.md'
- "Link Epic's Automotive Materials": 'epic_automotive_materials.md'
- 'Map customization': 'dev/map_customization.md'
- - 'Running without display and selecting GPUs': 'carla_headless.md'
- How to... (content):
- 'Add assets': 'how_to_add_assets.md'
- 'Create and import a new map': 'how_to_make_a_new_map.md'