sergi-e/sidebar-bug (#2529)

* Fix indexes for all the documentation.

* Added instructions for development assets in linux build

* Table bottom-margin workaround
This commit is contained in:
sergi.e 2020-03-02 09:40:34 +01:00 committed by GitHub
parent 85b192530d
commit eae903e908
40 changed files with 291 additions and 223 deletions

View File

@ -1,4 +1,4 @@
Contributing to CARLA
# Contributing to CARLA
=====================
We are more than happy to accept contributions!
@ -10,8 +10,8 @@ How can I contribute?
* Improving documentation
* Code contributions
Reporting bugs
--------------
---------
## Reporting bugs
Use our [issue section][issueslink] on GitHub. Please check beforehand that the
issue is not already reported, and make sure you have read our
@ -21,8 +21,8 @@ issue is not already reported, and make sure you have read our
[docslink]: http://carla.readthedocs.io
[faqlink]: http://carla.readthedocs.io/en/latest/faq/
Feature requests
----------------
---------
## Feature requests
Please check first the list of [feature requests][frlink]. If it is not there
and you think it is a feature that might be interesting for users, please submit
@ -30,8 +30,8 @@ your request as a new issue.
[frlink]: https://github.com/carla-simulator/carla/issues?q=is%3Aissue+is%3Aopen+label%3A%22feature+request%22+sort%3Acomments-desc
Improving documentation
-----------------------
-------
## Improving documentation
If you feel something is missing in the documentation, please don't hesitate to
open an issue to let us know. Even better, if you think you can improve it
@ -61,8 +61,8 @@ Once you are done with your changes, please submit a pull-request.
> mkdocs serve
```
Code contributions
------------------
------------
## Code contributions
So you are considering making a code contribution, great! We love to have
contributions from the community.

View File

@ -1,4 +1,4 @@
<h1>Creating standalone asset packages for distribution</h1>
# Creating standalone asset packages for distribution
*Please note that we will use the term __assets__ to refer to both __props__ and __maps__.*

View File

@ -1,5 +1,5 @@
<h1>Blueprint Library</h1>
# Blueprint Library
The Blueprint Library ([`carla.BlueprintLibrary`](../python_api/#carlablueprintlibrary-class)) is a summary of all [`carla.ActorBlueprint`](../python_api/#carla.ActorBlueprint) and its attributes ([`carla.ActorAttribute`](../python_api/#carla.ActorAttribute)) available to the user in CARLA.
Here is an example code for printing all actor blueprints and their attributes:
@ -594,13 +594,6 @@ Check out our [blueprint tutorial](../python_api_tutorial/#blueprints).
- `object_type` (_String_)
- `role_name` (_String_)<sub>_ Modifiable_</sub>
- `sticky_control` (_Bool_)<sub>_ Modifiable_</sub>
- **<font color="#498efc">vehicle.ford.mustang</font>**
- **Attributes:**
- `color` (_RGBColor_)<sub>_ Modifiable_</sub>
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `role_name` (_String_)<sub>_ Modifiable_</sub>
- `sticky_control` (_Bool_)<sub>_ Modifiable_</sub>
- **<font color="#498efc">vehicle.gazelle.omafiets</font>**
- **Attributes:**
- `color` (_RGBColor_)<sub>_ Modifiable_</sub>
@ -632,7 +625,7 @@ Check out our [blueprint tutorial](../python_api_tutorial/#blueprints).
- `object_type` (_String_)
- `role_name` (_String_)<sub>_ Modifiable_</sub>
- `sticky_control` (_Bool_)<sub>_ Modifiable_</sub>
- **<font color="#498efc">vehicle.lincoln.mkz2017</font>**
- **<font color="#498efc">vehicle.lincoln.lincoln</font>**
- **Attributes:**
- `color` (_RGBColor_)<sub>_ Modifiable_</sub>
- `number_of_wheels` (_Int_)
@ -653,6 +646,13 @@ Check out our [blueprint tutorial](../python_api_tutorial/#blueprints).
- `object_type` (_String_)
- `role_name` (_String_)<sub>_ Modifiable_</sub>
- `sticky_control` (_Bool_)<sub>_ Modifiable_</sub>
- **<font color="#498efc">vehicle.mustang.mustang</font>**
- **Attributes:**
- `color` (_RGBColor_)<sub>_ Modifiable_</sub>
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `role_name` (_String_)<sub>_ Modifiable_</sub>
- `sticky_control` (_Bool_)<sub>_ Modifiable_</sub>
- **<font color="#498efc">vehicle.nissan.micra</font>**
- **Attributes:**
- `color` (_RGBColor_)<sub>_ Modifiable_</sub>

View File

@ -1,4 +1,4 @@
<h1>Building from source</h1>
# Building from source
* [How to build on Linux](how_to_build_on_linux.md)
* [How to build on Windows](how_to_build_on_windows.md)

View File

@ -1,4 +1,4 @@
<h1>Running CARLA in a Docker </h1>
# Running CARLA in a Docker
This tutorial is designed for:
@ -10,25 +10,27 @@ This tutorial was tested in Ubuntu 16.04 and using NVIDIA 396.37 drivers.
This method requires a version of NVIDIA drivers >=390.
## Docker Installation
---
## Docker Installation
!!! note
Docker requires sudo to run. Follow this guide to add users to the docker
group <https://docs.docker.com/install/linux/linux-postinstall/>
### Docker CE
#### Docker CE
For our tests we used the Docker CE version.
To install Docker CE we recommend using [this tutorial][tutoriallink]
[tutoriallink]: https://docs.docker.com/install/linux/docker-ce/ubuntu/#extra-steps-for-aufs
### NVIDIA-Docker2
#### NVIDIA-Docker2
To install nvidia-docker-2 we recommend using the "Quick Start"
section from the [nvidia-dockers github](https://github.com/NVIDIA/nvidia-docker).
## Getting it Running
---
## Getting it Running
Pull the CARLA image.

View File

@ -1,13 +1,13 @@
<h1>Coding standard</h1>
# Coding standard
General
=======
-------
## General
* Use spaces, not tabs.
* Avoid adding trailing whitespace as it creates noise in the diffs.
Python
------
-------
## Python
* Comments should not exceed 80 columns, code should not exceed 120 columns.
* All code must be compatible with Python 2.7, 3.5, and 3.6.
@ -19,8 +19,8 @@ Python
[pylintlink]: https://www.pylint.org/
[pep8link]: https://www.python.org/dev/peps/pep-0008/
C++
---
-------
## C++
* Comments should not exceed 80 columns, code may exceed this limit a bit on
rare occasions if it results in clearer code.

View File

@ -1,4 +1,4 @@
<h1>2nd. Actors and blueprints</h1>
# 2nd. Actors and blueprints
The actors in CARLA include almost everything playing a role in the simulation. That includes not only vehicles and walkers but also sensors, traffic signs, traffic lights and the spectator, the camera providing the simulation's point of view. They are crucial, so it is equally crucial to fully understand how to operate on them.
This section will cover the basics: from spawning up to destruction and their different types. However, the possibilities they present are almost endless. This is only the first step: to experiment further, take a look at the __How to's__ in this documentation and share doubts and ideas in the [CARLA forum](https://forum.carla.org/).
@ -21,7 +21,7 @@ This section will cover the basics: from spawning up to destruction and their di
These layouts allow the user to smoothly add new actors into the simulation. They are essentially already-made models with a series of attributes listed, some of which are modifiable and others are not: a vehicle's color, the number of channels in a lidar sensor, a camera's _fov_, a walker's speed. All of the modifiable ones can be changed at will. All the available blueprints are listed in the [blueprint library](bp_library.md) with their attributes and a tag to identify which can be set by the user.
<h4>Managing the blueprint library</h4>
#### Managing the blueprint library
There is a [carla.BlueprintLibrary](python_api.md#carla.BlueprintLibrary) class containing a list of [carla.ActorBlueprint](python_api.md#carla.ActorBlueprint) elements. It is the world object that provides access to an instance of it:
```py
@ -61,7 +61,7 @@ for attr in blueprint:
!!! Important
All along this section, many different functions and methods regarding actors will be covered. The Python API provides __[commands](python_api.md#command.SpawnActor)__ to apply batches of these common functions (such as spawning or destroying actors) in just one frame.
<h4>Spawning</h4>
#### Spawning
The world object is responsible for spawning actors and keeping track of those currently on scene. Besides a blueprint, spawning only requires a [carla.Transform](python_api.md#carla.Transform), which basically defines a location and rotation for the actor.
@ -105,7 +105,7 @@ for speed_sign in actor_list.filter('traffic.speed_limit.*'):
print(speed_sign.get_location())
```
<h4>Handling</h4>
#### Handling
Once an actor is spawned, handling is quite straightforward. The [carla.Actor](python_api.md#carla.Actor) class mostly consists of _get()_ and _set()_ methods to manage the actors around the map:
@ -128,7 +128,7 @@ Besides that, actors also have tags provided by their blueprints that are mostly
Most of the methods send requests to the simulator asynchronously. The simulator queues these, but has a limited amount of time each update to parse them. Flooding the simulator with lots of "set" methods will accumulate a significant lag.
<h4>Destruction</h4>
#### Destruction
To remove an actor from the simulation, the actor can get rid of itself and notify whether it succeeded in doing so:
@ -143,7 +143,7 @@ Actors are not destroyed when the Python script finishes, they remain and the wo
---------------
## Types of actors
<h4>Sensors</h4>
#### Sensors
Sensors are actors that produce a stream of data. They are so important and vast that they will be properly written about on their own section: [4th. Sensors and data](core_sensors.md).
So far, let's just take a look at a common sensor spawning routine:
@ -159,11 +159,11 @@ This example spawns a camera sensor, attaches it to a vehicle, and tells the ca
* Most of the sensors will be attached to a vehicle to gather information on its surroundings.
* Sensors __listen__ to data. When receiving it, they call a function described with __[Lambda expressions](https://docs.python.org/3/reference/expressions.html)__, so it is advisable to learn about these beforehand <small>(6.13 in the link provided)</small>.
<h4>Spectator</h4>
#### Spectator
This unique actor is the one element placed by Unreal Engine to provide an in-game point of view. It can be used to move the view of the simulator window.
<h4>Traffic signs and traffic lights</h4>
#### Traffic signs and traffic lights
So far, CARLA is only aware of some signs: stop, yield and speed limit, described with [carla.TrafficSign](python_api.md#carla.TrafficSign). Traffic lights are considered an inherited class named [carla.TrafficLight](python_api.md#carla.TrafficLight). __These cannot be found in the blueprint library__ and thus cannot be spawned. They set traffic conditions, so they are mindfully placed by developers.
@ -187,7 +187,7 @@ if traffic_light.get_state() == carla.TrafficLightState.Red:
* **Speed limit signs** with the speed codified in their type_id.
<h4>Vehicles</h4>
#### Vehicles
[carla.Vehicle](python_api.md#carla.Vehicle) is a special type of actor that provides improved physics control. This is achieved by applying four different types of controls:
@ -217,7 +217,7 @@ print(box.location) # Location relative to the vehicle.
print(box.extent) # XYZ half-box extents in meters.
```
<h4>Walkers</h4>
#### Walkers
[carla.Walker](python_api.md#carla.Walker) are moving actors and so work in quite a similar way to vehicles. Control over them is provided by controllers:

View File

@ -1,4 +1,4 @@
<h1>Core concepts</h1>
# Core concepts
This section summarizes the main features and modules in CARLA. While this page is just an overview, the rest of the information can be found in their respective pages, including fragments of code and in-depth explanations.
In order to learn everything about the different classes and methods in the API, take a look at the [Python API reference](python_api.md). There is also another reference named [Code recipes](python_cookbook.md) containing some of the most common fragments of code regarding different functionalities that could be especially useful during these first steps.
@ -17,13 +17,13 @@ In order to learn everything about the different classes and methods in the API,
---------------
## First steps
<h4>1st. World and client</h4>
#### 1st. World and client
The client is the module the user runs to ask for information or changes in the simulation. It communicates with the server using an IP address and a specific port. There can be many clients running at the same time, although managing multiple clients requires a full comprehension of CARLA in order to make things work properly.
The world is an object representing the simulation, an abstract layer containing the main methods to manage it: spawn actors, change the weather, get the current state of the world, etc. There is only one world per simulation, but it will be destroyed and replaced by a new one when the map is changed.
<h4>2nd. Actors and blueprints</h4>
#### 2nd. Actors and blueprints
In CARLA, an actor is anything that plays a role in the simulation. That includes:
* Vehicles.
@ -34,13 +34,13 @@ In CARLA, an actor is anything that plays a role in the simulation. That include
A blueprint is needed in order to spawn an actor. __Blueprints__ are a set of already-made actor layouts: models with animations and different attributes. Some of these attributes can be set by the user, others cannot. There is a library provided by CARLA containing all the available blueprints and the information regarding them. Visit the [Blueprint library](bp_library.md) to learn more about this.
<h4>3rd. Maps and navigation</h4>
#### 3rd. Maps and navigation
The map is the object representing the model of the world. There are many maps available, seven at the time of writing, and all of them use the OpenDRIVE 1.4 standard to describe the roads.
Roads, lanes and junctions are managed by the API to be accessed using different classes and methods. These are later used along with the waypoint class to provide vehicles with a navigation path.
Traffic signs and traffic lights have bounding boxes placed on the road that make vehicles aware of them and their current state in order to set traffic conditions.
<h4>4th. Sensors and data</h4>
#### 4th. Sensors and data
Sensors are one of the most important actors in CARLA and their use can be quite complex. A sensor is attached to a parent vehicle and follows it around, gathering information on its surroundings for the sake of learning. Sensors, as any other actor, have blueprints available in the [Blueprint library](bp_library.md) that correspond to the types available. Currently, these are:

View File

@ -1,4 +1,4 @@
<h1>3rd. Maps and navigation</h1>
# 3rd. Maps and navigation
After discussing the world and its actors, it is time to put everything in place and understand the map and how the actors navigate it.
@ -15,7 +15,7 @@ After discussing about the world and its actors, it is time to put everything in
Understanding the map in CARLA is equivalent to understanding the road. All of the maps have an OpenDRIVE file defining the road layout fully annotated. The way the [OpenDRIVE standard 1.4](http://www.opendrive.org/docs/OpenDRIVEFormatSpecRev1.4H.pdf) defines roads, lanes, junctions, etc. is extremely important. It determines the possibilities of the API and the reasoning behind many decisions made.
The Python API provides a higher level querying system to navigate these roads. It is constantly evolving to provide a wider set of tools.
<h4>Changing the map</h4>
#### Changing the map
This was briefly mentioned in [1st. World and client](core_world.md), so let's expand a bit on it: __To change the map, the world has to change too__. Everything will be rebooted and created from scratch, aside from the Unreal Editor itself.
Using `reload_world()` creates a new instance of the world with the same map while `load_world()` is used to change the current one:
@ -29,20 +29,21 @@ print(client.get_available_maps())
```
So far there are seven different maps available. Each of these has a specific structure or unique features that are useful for different purposes, so here is a brief summary:
Town | Summary
-- | --
__Town 01__ | As __Town 02__, a basic town layout with all "T junctions". These are the most stable.
__Town 02__ | As __Town 01__, a basic town layout with all "T junctions". These are the most stable.
__Town 03__ | The most complex town with a roundabout, unevenness, a tunnel. Essentially a medley.
__Town 04__ | An infinite loop in a highway.
__Town 05__ | Squared-grid town with cross junctions and a bridge.
__Town 06__ | Long highways with a lane exit and a [Michigan left](https://en.wikipedia.org/wiki/Michigan_left).
__Town 07__ | A rural environment with narrow roads, barely non traffic lights and barns.
|Town | Summary |
| -- | -- |
|__Town 01__ | As __Town 02__, a basic town layout with all "T junctions". These are the most stable.|
|__Town 02__ | As __Town 01__, a basic town layout with all "T junctions". These are the most stable.|
|__Town 03__ | The most complex town with a roundabout, unevenness, a tunnel. Essentially a medley.|
|__Town 04__ | An infinite loop in a highway.|
|__Town 05__ | Squared-grid town with cross junctions and a bridge.|
|__Town 06__ | Long highways with a lane exit and a [Michigan left](https://en.wikipedia.org/wiki/Michigan_left). |
|__Town 07__ | A rural environment with narrow roads, barely any traffic lights, and barns.|
<br>
Users can also [customize a map](dev/map_customization.md) or even [create a new map](how_to_make_a_new_map.md) to be used in CARLA. These are more advanced steps and have been developed in their own tutorials.
<h4>Lanes</h4>
#### Lanes
The different types of lane defined by the [OpenDRIVE standard 1.4](http://www.opendrive.org/docs/OpenDRIVEFormatSpecRev1.4H.pdf) are translated to the API in [carla.LaneType](python_api.md#carla.LaneType). The surrounding lane markings for each lane can also be accessed using [carla.LaneMarking](python_api.md#carla.LaneMarking).
A lane marking is defined by: a [carla.LaneMarkingType](python_api.md#carla.LaneMarkingType) and a [carla.LaneMarkingColor](python_api.md#carla.LaneMarkingColor), a __width__ to state thickness and a variable stating lane changing permissions with [carla.LaneChange](python_api.md#carla.LaneChange).
@ -58,7 +59,7 @@ left_lanemarking_type = waypoint.left_lane_marking.type()
lane_change = waypoint.lane_change
```
<h4>Junctions</h4>
#### Junctions
To ease managing junctions with OpenDRIVE, the [carla.Junction](python_api.md#carla.Junction) class provides a bounding box to state whether lanes or vehicles are inside of it.
There is also a method to get a pair of waypoints per lane determining the starting and ending point inside the junction boundaries for each lane:
@ -66,7 +67,7 @@ There is also a method to get a pair of waypoints per lane determining the start
waypoints_junc = my_junction.get_waypoints()
```
<h4>Waypoints</h4>
#### Waypoints
[carla.Waypoint](python_api.md#carla.Waypoint) objects are 3D-directed points that are prepared to mediate between the world and the OpenDRIVE definition of the road.
Each waypoint contains a [carla.Transform](python_api.md#carla.Transform) summarizing a point on the map inside a lane and the orientation of the lane. The variables `road_id`, `section_id`, `lane_id` and `s` translate this transform to the OpenDRIVE road and are used to create an __identifier__ of the waypoint.

View File

@ -1,4 +1,4 @@
<h1>4th. Sensors and data</h1>
# 4th. Sensors and data
The last step in this introduction to CARLA are sensors. They allow retrieving data from the surroundings and so are crucial to using CARLA as a learning environment for driving agents.
This page summarizes everything necessary to start handling sensors, including some basic information about the different types available and a step-by-step of their life cycle. The specifics for every sensor can be found in their [reference](ref_sensors.md).
@ -24,7 +24,7 @@ The class [carla.Sensor](python_api.md#carla.Sensor) defines a special type of a
Despite their differences, the way the user manages every sensor is quite similar.
<h4>Setting</h4>
#### Setting
As with every other actor, the first step is to find the proper blueprint in the library and set specific attributes to get the desired results. This is essential when handling sensors, as their capabilities depend on the way these are set. Their attributes are detailed in the [sensors' reference](ref_sensors.md).
@ -40,7 +40,7 @@ blueprint.set_attribute('fov', '110')
blueprint.set_attribute('sensor_tick', '1.0')
```
<h4>Spawning</h4>
#### Spawning
Sensors are also spawned like any other actor, only this time the two optional parameters, `attach_to` and `attachment_type`, are crucial. Sensors should be attached to another actor, usually a vehicle, to follow it around and gather information on its surroundings.
There are two types of attachment:
@ -55,7 +55,7 @@ sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle)
!!! Important
When spawning an actor with attachment, remember that its location should be relative to its parent, not global.
<h4>Listening</h4>
#### Listening
Every sensor has a [`listen()`](python_api.md#carla.Sensor.listen) method whose callback is called every time the sensor retrieves data.
This method has one argument: `callback`, which is a [lambda expression](https://www.w3schools.com/python/python_lambda.asp) of a function, defining what the sensor should do when data is retrieved.
@ -89,11 +89,11 @@ Sensor data differs a lot between sensor types, but it is always tagged with:
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode. |
| `transform` | carla.Transform | World reference of the sensor at the time of the measurement. |
<br>
---------------
## Types of sensors
<h4>Cameras</h4>
#### Cameras
These sensors take a shot of the world from their point of view and then use the helper class to alter this image and provide different types of information.
__Retrieve data:__ every simulation step.
@ -104,7 +104,8 @@ __Retrieve data:__ every simulation step.
| RGB | [carla.Image](python_api.md#carla.Image) | Provides clear vision of the surroundings. Looks like a normal photo of the scene. |
| Semantic segmentation | [carla.Image](python_api.md#carla.Image) | Renders elements in the field of view with a specific color according to their tags. |
<h4>Detectors</h4>
<br>
#### Detectors
Sensors that retrieve data when the parent object they are attached to registers a specific event in the simulation.
__Retrieve data:__ when triggered.
@ -115,7 +116,8 @@ __Retrieve data:__ when triggered.
| Lane invasion | [carla.LaneInvasionEvent](python_api.md#carla.LaneInvasionEvent) | Registers when its parent crosses a lane marking. |
| Obstacle | [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleDetectionEvent) | Detects possible obstacles ahead of its parent. |
<h4>Other</h4>
<br>
#### Other
This group gathers sensors with different functionalities: navigation, measuring physical properties of an object, and providing 2D and 3D models of the scene.
__Retrieve data:__ every simulation step.
@ -127,6 +129,7 @@ __Retrieve data:__ every simulation step.
| Lidar raycast | [carla.LidarMeasurement](python_api.md#carla.LidarMeasurement) | A rotating lidar retrieving a cloud of points to generate a 3D model of the surroundings. |
| Radar | [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) | 2D point map that models elements in sight and their movement relative to the sensor. |
<br>
---------------
That is a wrap on sensors and how they retrieve simulation data, and with it this introduction to CARLA is finished. However, there is still a lot to learn. Some of the different paths to follow now are listed here:

View File

@ -1,4 +1,4 @@
<h1>1st. World and client</h1>
# 1st. World and client
This is bound to be one of the first topics to learn about when entering CARLA. The client and the world are two of the fundamentals of CARLA, a necessary abstraction to operate the simulation and its actors.
This tutorial goes from defining the basics and creation of these elements to describing their possibilities without entering into more complex matters. If any doubt or issue arises during the reading, the [CARLA forum](https://forum.carla.org/) is there to solve them.
@ -21,7 +21,7 @@ Besides that, the client is also able to access other CARLA modules, features an
The __carla.Client__ class is explained thoroughly in the [PythonAPI reference](python_api.md#carla.Client).
<h4>Client creation</h4>
#### Client creation
Two things are needed: the IP address identifying the server and two TCP ports the client will be using to communicate with it. There is an optional third parameter, an `int` to set the number of working threads that by default is set to all (`0`). [This code recipe](python_cookbook.md#parse-client-creation-arguments) shows how to parse these as arguments when running the script.
@ -41,7 +41,7 @@ It is possible to have many clients connected, as it is common to have more than
!!! Note
Client and server have different `libcarla` modules. If the versions differ due to different origin commits, issues may arise. This will not normally be the case, but it can be checked using the `get_client_version()` and `get_server_version()` methods.
<h4>World connection</h4>
#### World connection
With the simulation running, a configured client can connect and retrieve the current world easily:
@ -58,7 +58,7 @@ world = client.load_world('Town01')
Every world object has an `id` or episode. Every time the client calls `load_world()` or `reload_world()` the previous one is destroyed and the new one is created from scratch without rebooting Unreal Engine, so this episode will change.
<h4>Other client utilities</h4>
#### Other client utilities
The main purpose of the client object is to get or change the world, and many times it is no longer used after that. However, this object is in charge of two other main tasks: accessing advanced CARLA features and applying command batches.
The list of features that are accessed from the client object is:
@ -87,7 +87,7 @@ This class acts as the major ruler of the simulation and its instance should be
In fact, some of the most important methods of this class are the _getters_. They summarize all the information the world has access to. More explicit information regarding the World class can be found in the [Python API reference](python_api.md#carla.World).
<h4>Actors</h4>
#### Actors
The world has different methods related to actors that allow it to:
@ -99,7 +99,7 @@ The world has different methods related with actors that allow it to:
Explanations on spawning will be conducted in the second step of this guide: [2nd. Actors and blueprints](core_actors.md), as it requires some understanding of the blueprint library, attributes, etc. Keep reading or visit the [Python API reference](python_api.md) to learn more about this matter.
<h4>Weather</h4>
#### Weather
The weather is not a class on its own, but a world setting. However, there is a helper class named [carla.WeatherParameters](python_api.md#carla.WeatherParameters) that allows the user to define a series of visual characteristics such as sun orientation, cloudiness, lightning, wind and much more. The changes can then be applied using the world as the following example does:
```py
@ -122,7 +122,7 @@ world.set_weather(carla.WeatherParameters.WetCloudySunset)
!!! Note
Changes in the weather do not affect physics. They are only visuals that can be captured by the camera sensors.
<h4>Debugging</h4>
#### Debugging
World objects have a public attribute that defines a [carla.DebugHelper](python_api.md#carla.DebugHelper) object. It allows for different shapes to be drawn during the simulation in order to trace the events happening. The following example would access the attribute to draw a red box at an actor's location and rotation.
@ -133,7 +133,7 @@ debug.draw_box(carla.BoundingBox(actor_snapshot.get_transform().location,carla.V
This example is extended in this [code recipe](python_cookbook.md#debug-bounding-box-recipe) to draw boxes for every actor in a world snapshot. Take a look at it and at the Python API reference to learn more about this.
<h4>World snapshots</h4>
#### World snapshots
A world snapshot contains the state of every actor in the simulation at a single frame, a sort of still image of the world with a time reference. This feature makes sure that all the information contained comes from the same simulation step without the need of using synchronous mode.
@ -157,7 +157,7 @@ for actor_snapshot in world_snapshot: #Get the actor and the snapshot informatio
actor_snapshot = world_snapshot.find(actual_actor.id) #Get an actor's snapshot
```
<h4>World settings</h4>
#### World settings
The world also has access to some advanced configurations for the simulation that determine rendering conditions, steps in the simulation time, and synchrony between clients and server. These are advanced concepts that are best left untouched by newcomers.
For the time being, let's say that CARLA by default runs with its best quality, a variable time-step, and asynchronously. The helper class is [carla.WorldSettings](python_api.md#carla.WorldSettings). To dive further into these matters, take a look at the __Advanced steps__ section of the documentation and read about [synchrony and time-step](simulation_time_and_synchrony.md) or [rendering options](rendering_options.md).

View File

@ -1,5 +1,5 @@
<h1>C++ Reference </h1>
# C++ Reference
We use Doxygen to generate the documentation of our C++ code:
[Libcarla/Source](http://carla.org/Doxygen/html/dir_b9166249188ce33115fd7d5eed1849f2.html)<br>

View File

@ -1,4 +1,4 @@
<h1>Build system</h1>
# Build system
> _This document is a work in progress, only the Linux build system is taken into account here._
@ -51,6 +51,7 @@ Two configurations:
| **Output** | headers and test exes | `libcarla_client.a` |
| **Required by** | Carla plugin | PythonAPI |
<br>
#### CarlaUE4 and Carla plugin
Both compiled at the same step with Unreal Engine build tool. They require the

View File

@ -1,4 +1,4 @@
<h1>How to add a new sensor</h1>
# How to add a new sensor
This tutorial explains the basics for adding a new sensor to CARLA. It provides
the necessary steps to implement a sensor in Unreal Engine 4 (UE4) and expose

View File

@ -1,4 +1,4 @@
<h1>How to make a release</h1>
# How to make a release
> _This document is meant for developers that want to publish a new release._

View File

@ -1,4 +1,4 @@
<h1>How to upgrade content</h1>
# How to upgrade content
Our content resides on a separate [Git LFS repository][contentrepolink]. As part
of our build system, we generate and upload a package containing the latest

View File

@ -1,4 +1,4 @@
<h1>Map customization</h1>
# Map customization
> _This document is a work in progress and might be incomplete._
@ -11,11 +11,11 @@ Creating a new map
this guide will suggest duplicating an existing level instead of creating
one from scratch.
<h4>Requirements</h4>
#### Requirements
- Checkout and build Carla from source on [Linux](../how_to_build_on_linux.md) or [Windows](../how_to_build_on_windows.md)
<h4>Creating</h4>
#### Creating
- Duplicate an existing map
- Remove everything you don't need from the map
@ -66,7 +66,7 @@ SplinemeshRepeater
!!! Bug
See [#35 SplineMeshRepeater loses its collider mesh](https://github.com/carla-simulator/carla/issues/35)
<h4>Standard use:</h4>
#### Standard use
SplineMeshRepeater "Content/Blueprints/SplineMeshRepeater" is a tool included in
the Carla Project to help build urban environments. It repeats and aligns a
@ -92,7 +92,7 @@ the lower point possible with the rest of the mesh pointing positive (Preferably
by the X axis)
<h4>Specific Walls (Dynamic material)</h4>
#### Specific Walls (Dynamic material)
The project folder "Content/Static/Walls" includes some specific assets
to be used with this SplineMeshRepeater with a series of special

@ -1,4 +1,4 @@
<h1>Documentation Standard</h1>
# Documentation Standard
This document will serve as a guide and example of some rules that need to be
followed in order to contribute to the documentation.
@ -6,8 +6,9 @@ followed in order to contribute to the documentation.
We use a mix of markdown and HTML tags to customize the documentation along with an
[`extra.css`](https://github.com/carla-simulator/carla/tree/master/Docs/extra.css) file.
Rules
-----
-------
## Rules
* Always leave an empty line between sections and at the end of the document.
* Writing should not exceed `100` columns, except for HTML-related content, markdown tables,
@ -20,7 +21,8 @@ Rules
* Use `------` underlining a Heading or `#` hierarchy to make headings and show them in the
navigation bar.
<h3>Exceptions:</h3>
--------
## Exceptions
* Documentation generated via python scripts like PythonAPI reference

@ -1,4 +1,4 @@
<h1>How to link Epic's Automotive Materials</h1>
# How to link Epic's Automotive Materials
!!! important
Since version 0.8.0 CARLA does not use Epic's _Automotive Materials_ by

View File

@ -1,4 +1,4 @@
<h1>F.A.Q.</h1>
# F.A.Q.
Some of the most common issues regarding CARLA installation and builds are listed below. More issues can be found in the project's [GitHub issues][issuelink]. If a doubt is not listed here, take a look at the forum and feel free to post it.
[issuelink]: https://github.com/carla-simulator/carla/issues?utf8=%E2%9C%93&q=label%3Aquestion+

@ -1,4 +1,4 @@
<h1>Quickstart</h1>
# Quick start
* [Requirements](#requirements)
* [Downloading CARLA](#downloading-carla)
@ -6,6 +6,7 @@
* Command-line options
* [Updating CARLA](#updating-carla)
* [Summary](#summary)
---------------
## Requirements
@ -63,7 +64,7 @@ A window will open, containing a view over the city. This is the "spectator" vie
!!! note
If the firewall or any other application is blocking the TCP ports needed, these can be manually changed by adding the argument `-carla-port=N` to the previous command, where `N` is the desired port. The second port will automatically be set to `N+1`.
<h4>Command-line options</h4>
#### Command-line options
There are some configuration options available when launching CARLA:

@ -1,4 +1,4 @@
<h1>How to add assets</h1>
# How to add assets
Adding a vehicle
----------------

@ -1,4 +1,4 @@
<h1>How to add friction triggers</h1>
# How to add friction triggers
*Friction Triggers* are box triggers that can be added on runtime and let users define
a different friction of the vehicles' wheels when being inside those type of triggers.

@ -1,4 +1,4 @@
<h1>Linux build</h1>
# Linux build
* [__Requirements__](#requirements):
* System specifics
@ -148,6 +148,8 @@ Run the script to get the assets:
```sh
./Update.sh
```
!!! Important
To get the assets that are still in development, visit the [Update CARLA](../update_carla#get-development-assets) page and read __Get development assets__.
<h4>Set the environment variable </h4>
@ -192,6 +194,7 @@ Now everything is ready to go and CARLA has been successfully built. Here is a b
| `make clean` | Deletes all the binaries and temporary files generated by the build system. |
| `make rebuild` | `make clean` and `make launch`, both in one command. |
<br>
Keep reading this section to learn more about how to update CARLA, the build itself and some advanced configuration options. Otherwise, visit the __First steps__ section to learn about CARLA:
<div class="build-buttons">
<!-- Latest release button -->

@ -1,4 +1,4 @@
<h1>Windows build</h1>
# Windows build
* [__Requirements__](#requirements):
* System specifics
* [__Necessary software__](#necessary-software):
@ -21,7 +21,7 @@ CARLA forum</a>
</div>
---
## Requirements
<h4>System specifics</h4>
#### System specifics
* __x64 system:__ The simulator should run in any Windows system currently available, as long as it is a 64-bit OS.
* __30GB disk space:__ Installing all the software needed and CARLA itself will require quite a lot of space, especially Unreal Engine. Make sure to have around 30/50GB of free disk space.
@ -29,7 +29,7 @@ CARLA forum</a>
* __Two TCP ports and good internet connection:__ 2000 and 2001 by default. Make sure that neither the firewall nor any other application is blocking these.
---
## Necessary software
<h4>Minor installations</h4>
#### Minor installations
Some software is needed for the build process; its installation is quite straightforward.
* [CMake](https://cmake.org/download/): Generates standard build files from simple configuration files.
@ -41,7 +41,7 @@ Some software is needed for the build process the installation of which is quite
Be sure that these programs are added to your [environment path](https://www.java.com/en/download/help/path.xml), so you can use them from your command prompt. The values to add are the paths to the _bin_ directories of each program.
<h4>Visual Studio 2017</h4>
#### Visual Studio 2017
Get the 2017 version from [here](https://developerinsider.co/download-visual-studio-2017-web-installer-iso-community-professional-enterprise/). **Community** is the free version. Two elements will be needed to set up the environment for the build process. These must be added when using the Visual Studio Installer:
@ -52,7 +52,7 @@ Get the 2017 version from [here](https://developerinsider.co/download-visual-stu
Having other Visual Studio versions may cause conflicts during the build process, even if these have been uninstalled (Visual Studio is not that good at getting rid of itself and erasing registry entries). To completely clean Visual Studio from the computer, run `.\InstallCleanup.exe -full`, found in `Program Files (x86)\Microsoft Visual Studio\Installer\resources\app\layout`. This may need admin permissions.
<h4>Unreal Engine 4.22</h4>
#### Unreal Engine 4.22
Go to the [Unreal Engine](https://www.unrealengine.com/download) site and download the Epic Games Launcher. In the _Library_ section, inside the _Engine versions_ panel, download any Unreal Engine 4.22.x version. After installing it, make sure to run it in order to be sure that everything was properly installed.
@ -60,12 +60,12 @@ Go to the [Unreal Engine](https://www.unrealengine.com/download) site and downlo
This note will only be relevant if issues arise during the build process and a manual build is required. Having VS2017 and UE4.22 installed, a **Generate Visual Studio project files** option should appear when right-clicking **.uproject** files. If this option is not available, something went wrong while installing Unreal Engine and it may need to be reinstalled. Create a simple Unreal Engine project to verify the installation.
---
# CARLA build
## CARLA build
!!! Important
Lots of things have happened so far. It is highly advisable to do a quick restart of the system.
<h4>Clone repository</h4>
#### Clone repository
<div class="build-buttons">
<!-- Latest release button -->
@ -86,12 +86,12 @@ Now the latest content for the project, known as `master` branch in the reposito
!!! Note
The `master` branch contains the latest fixes and features. Stable code is inside the `stable` branch, and it can be built by changing the branch. The same goes for previous CARLA releases. Always remember to check the current branch in git with `git branch`.
<h3>Get assets</h3>
#### Get assets
Only the assets package, the visual content, is yet to be downloaded. `\Util\ContentVersions.txt` contains the links to the assets for every CARLA version. These must be extracted in `Unreal\CarlaUE4\Content\Carla`. If the path doesn't exist, create it.
Download the **latest** assets to work with the current version of CARLA. When working with branches containing previous releases of CARLA, make sure to download the proper assets.
<h3>make CARLA</h3>
#### make CARLA
Go to the root CARLA folder, the one cloned from the repository. It is time to do the automatic build. The process may take a while; it will download and install the necessary libraries. It might take 20-40 minutes, depending on hardware and internet connection. There are different make commands to build the different modules:
@ -124,6 +124,7 @@ Now everything is ready to go and CARLA has been successfully built. Here is a b
| `make clean` | Deletes all the binaries and temporary files generated by the build system. |
| `make rebuild` | `make clean` and `make launch`, both in one command. |
<br>
Keep reading this section to learn more about how to update CARLA, the build itself and some advanced configuration options.
Otherwise, visit the __First steps__ section to learn about CARLA:
<div class="build-buttons">

@ -1,4 +1,4 @@
<h1>How to control vehicle physics</h1>
# How to control vehicle physics
Physics properties can be tuned for vehicles and their wheels.
These changes are applied **only** at runtime, and values are set back to the default ones when

@ -1,4 +1,4 @@
<h1>How to generate the pedestrian navigation info</h1>
# How to generate the pedestrian navigation info
### Introduction
Pedestrians need information about the map in a specific format in order to walk. The file that describes the map for navigation is a binary file with the `.BIN` extension, saved in the **Nav** folder of the map. Each map needs a `.BIN` file with the same name as the map, so it can be loaded automatically with the map.
@ -23,6 +23,8 @@ We have several types of meshes for navigation. The meshes need to be identified
| Crosswalk | `Road_Road`, `Road_Curb`, `Road_Gutter` or `Road_Marking` | Pedestrians can cross the roads only through these meshes. |
| Block | any other name | Pedestrians will always avoid these meshes (obstacles such as traffic lights, trees, houses...). |
<br>
For instance, all road meshes need to start with `Road_Road`, e.g.: `Road_Road_Mesh_1`, `Road_Road_Mesh_2`...
This nomenclature is used by RoadRunner when it exports the map, so we are just following the same convention.
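The prefix rules in the table above can be captured in a small helper. This is an illustrative sketch, not part of the CARLA tooling: the function and label strings are our own, and only the rows visible in this excerpt are modeled.

```python
# Classify a navigation mesh by its name prefix, following the RoadRunner
# naming convention described above. Illustrative sketch only.
CROSSWALK_PREFIXES = ("Road_Road", "Road_Curb", "Road_Gutter", "Road_Marking")

def classify_nav_mesh(name):
    # Pedestrians can cross roads only through crosswalk meshes.
    if name.startswith(CROSSWALK_PREFIXES):
        return "Crosswalk"
    # Any other name is treated as an obstacle that pedestrians avoid.
    return "Block"
```

For example, `Road_Road_Mesh_1` is classified as a crosswalk mesh, while an unrelated name such as `TrafficLight_3` falls into the obstacle category.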

@ -1,7 +1,8 @@
<h1>How to create and import a new map</h1>
# How to create and import a new map
![Town03](img/create_map_01.jpg)
-----
## 1 Create a new map
Files needed:
@ -15,6 +16,7 @@ tutorial.
The following steps will introduce the RoadRunner software for map creation. If the map is
created by other software, go to this [section](#3-importing-into-unreal).
------
## 2 Create a new map with RoadRunner
RoadRunner is a powerful piece of software from Vector Zero for creating 3D scenes. Using RoadRunner is easy,
@ -22,7 +24,7 @@ in a few steps you will be able to create an impressive scene. You can download
a trial of RoadRunner at VectorZero's web page.
<div class="vector-zero">
<a href="https://www.vectorzero.io/"><img src="./img/VectorZeroAndIcon.webp"/></a>
<a href="https://www.vectorzero.io/"><img src="../img/VectorZeroAndIcon.webp"/></a>
</div> <br>
Read VectorZero's RoadRunner [documentation][rr_docs] to install it and get started.
@ -36,7 +38,7 @@ They also have very useful [tutorials][rr_tutorials] on how to use RoadRunner, c
!!! important
Create the map centered around (0, 0).
## 2.1 Validate the map
#### 2.1 Validate the map
* Check that all connections and geometries seem correct.
@ -52,7 +54,7 @@ button and export.
If there is any error with map junctions, click on `Maneuver Tool`
and `Rebuild Maneuver Roads` buttons.
## 2.2 Export the map
#### 2.2 Export the map
After verifying that everything is correct, it is time to export the map to CARLA.
@ -75,6 +77,7 @@ _check VectorZeros's [documentation][exportlink]._
[exportlink]: https://tracetransit.atlassian.net/wiki/spaces/VS/pages/752779356/Exporting+to+CARLA
-------
## 3 Importing into Unreal
This section is divided into two. The first part shows how to import a map from RoadRunner
@ -87,9 +90,9 @@ and the second part shows how to import a map from other software that generates
We have also created a new way to import assets into Unreal,
check this [`guide`](./asset_packages_for_dist.md)!
### 3.1 Importing from RoadRunner
#### 3.1 Importing from RoadRunner
#### 3.1.1 Plugin Installation
##### 3.1.1 Plugin Installation
RoadRunner provides a series of plugins that make the importing simpler.
@ -100,7 +103,7 @@ RoadRunner provides a series of plugins that make the importing simpler.
3. Rebuild the plugin.
##### Rebuild on Windows
###### Rebuild on Windows
1. Generate project files.
@ -108,7 +111,7 @@ RoadRunner provides a series of plugins that make the importing simpler.
2. Open the project and build the plugins.
##### Rebuild on Linux
###### Rebuild on Linux
```sh
> UE4_ROOT/GenerateProjectFiles.sh -project="carla/Unreal/CarlaUE4/CarlaUE4.uproject" -game -engine
@ -118,7 +121,7 @@ Finally, restart Unreal Engine and make sure the checkbox is on for both plugins
![rr_ue_plugins](img/rr-ue4_plugins.png)
#### 3.1.2 Importing
##### 3.1.2 Importing
1. Import the _mapname.fbx_ file to a new folder under `/Content/Carla/Maps`
with the `Import` button.
@ -144,7 +147,7 @@ The new map should now appear next to the others in the Unreal Engine _Content B
And that's it! The map is ready!
### 3.2 Importing from the files
#### 3.2 Importing from the files
This is the generic way to import maps into Unreal.
@ -154,7 +157,7 @@ and paste it in the new level, otherwise, the map will be in the dark.
![ue_illumination](img/ue_illumination.png)
#### 3.2.1 Binaries (.fbx)
##### 3.2.1 Binaries (.fbx)
1. Import the _mapname.fbx_ file to a new folder under `/Content/Carla/Maps`
with the `Import` button. Make sure the following options are unchecked:
@ -238,7 +241,7 @@ Content
![ue__semantic_segmentation](img/ue_ssgt.png)
#### 3.2.2 OpenDRIVE (.xodr)
##### 3.2.2 OpenDRIVE (.xodr)
1. Copy the `.xodr` file inside the `Content/Carla/Maps/OpenDrive` folder.
2. Open the Unreal level and drag the _Open Drive Actor_ inside the level.
@ -248,6 +251,7 @@ It will read the level's name, search the Opendrive file with the same name and
And that's it! Now the road network information is loaded into the map.
-------
## 4. Setting up traffic behavior
Once everything is loaded into the level, it is time to create traffic behavior.
@ -261,7 +265,7 @@ Once everything is loaded into the level, it is time to create traffic behavior.
This will generate a bunch of _RoutePlanner_ and _VehicleSpawnPoint_ actors that make
it possible for vehicles to spawn and go in autopilot mode.
## 4.1 Traffic lights and signs
#### 4.1 Traffic lights and signs
To regulate the traffic, traffic lights and signs must be placed all over the map.
@ -287,6 +291,7 @@ might need some tweaking and testing to fit perfectly into the city.
> _Example: Traffic Signs, Traffic lights and Turn based stop._
----------
## 5 Adding pedestrian navigation areas
To make a navigable mesh for pedestrians, we use the _Recast & Detour_ library.<br>
@ -335,6 +340,7 @@ Then build RecastDemo. Follow their [instructions][buildrecastlink] on how to bu
Now pedestrians will be able to spawn randomly and walk on the selected meshes!
----------
## Tips and Tricks
* Traffic light group controls which traffic light is active (green state) at each moment.

@ -1,8 +1,9 @@
<h1>How to model vehicles</h1>
# How to model vehicles
# 4-Wheeled Vehicles
------------
## 4-Wheeled Vehicles
## Modelling
#### Modelling
Vehicles must have a minimum of 10,000 and a maximum of 17,000 tris approximately.
We model the vehicles using the size and scale of actual cars.
@ -35,7 +36,7 @@ The vehicle must be divided in 6 materials:
Put a rectangular plane of 29x12 cm for the licence plate.
We assign the license plate texture.
## Nomenclature of Material
#### Nomenclature of Material
* M(Material)_"CarName"_Bodywork(part of car)
@ -49,7 +50,7 @@ The vehicle must be divided in 6 materials:
* M_"CarName"_LicencePlate
## Textures
#### Textures
The size of the textures is 2048x2048.
@ -59,7 +60,7 @@ The size of the textures is 2048x2048.
* T_"CarName"_PartOfMaterial_orm (OcclusionRoughnessMetallic)
* **EXEMPLE**:
* **EXAMPLE**:
Type of car Tesla Model 3
TEXTURES
@ -70,7 +71,7 @@ TEXTURES
MATERIAL
* M_Tesla3_BodyWork
## RIG
#### RIG
The easiest way is to copy the "General4WheeledVehicleSkeleton" present in our project,
either by exporting it and copying it to your model or by creating your skeleton
@ -91,7 +92,7 @@ Vhehicle_Base: The origin point of the mesh, place it in the point (0,0,0) of th
* Wheel_Rear_Left: Set the joint's position in the middle of the Wheel.
## LODs
#### LODs
All vehicle LODs must be made in Maya or other 3D software. Because Unreal does
not generate LODs automatically, you can adjust the number of Tris to make a

@ -1,3 +1,4 @@
# Python API reference
## carla.Actor<a name="carla.Actor"></a>
CARLA defines actors as anything that plays a role in the simulation or can be moved around. That includes: pedestrians, vehicles, sensors and traffic signs (considering traffic lights as part of these). Actors are spawned in the simulation by [carla.World](#carla.World) and they need a [carla.ActorBlueprint](#carla.ActorBlueprint) to be created. These blueprints belong to a library provided by CARLA; find out more about them [here](../bp_library/).

@ -1,5 +1,4 @@
<h1> Code recipes </h1>
# Code recipes
This section contains a list of recipes that complement the [tutorial](../python_api_tutorial/)
and are used to illustrate the use of Python API methods.

@ -1,4 +1,4 @@
<h1> Recorder </h1>
# Recorder
This is one of the advanced CARLA features. It allows recording and re-enacting a simulation while providing a complete log of the events that happened, plus a few queries to ease tracing and studying them.
To learn about the generated file and its specifics take a look at this [reference](recorder_binary_file_format.md).
@ -61,10 +61,12 @@ client.replay_file("recording01.log", start, duration, camera)
| `duration` | Seconds to playback. 0 is all the recording. | By the end of the playback, vehicles will be set to autopilot and pedestrians will stop. |
| `camera` | ID of the actor that the camera will focus on. | By default the spectator will move freely. |
<br>
!!! Note
These parameters allow recalling an event and then letting the simulation run free, as vehicles will be set to autopilot when the recording stops.
<h4>Setting a time factor</h4>
#### Setting a time factor
The time factor will determine the playback speed.
@ -77,6 +79,8 @@ client.set_replayer_time_factor(2.0)
| ------------- | ------- | ----------- | ----------- |
| `time_factor` | __1.0__ | __>1.0__ | __<1.0__ |
<br>
!!! Important
Above 2.0, position interpolation is disabled and positions are just updated. Pedestrians' animations are not affected by the time factor.
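As a back-of-the-envelope illustration of what the factor means (our own arithmetic, not an API call): a segment of `sim_duration` simulated seconds plays back in roughly `sim_duration / time_factor` wall-clock seconds.

```python
def playback_seconds(sim_duration, time_factor):
    """Approximate wall-clock seconds to replay `sim_duration` simulated seconds."""
    if time_factor <= 0:
        raise ValueError("time_factor must be positive")
    return sim_duration / time_factor
```

So a factor of 2.0 halves the playback time of a 60-second recording, while 0.5 doubles it.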
@ -135,7 +139,7 @@ Duration: 60.3753 seconds
---------------
## Queries
<h4>Collisions</h4>
#### Collisions
In order to record collisions, vehicles must have a [collision detector](../ref_sensors#collision-detector) attached. The collisions registered by the recorder can be queried using arguments to filter the type of the actors involved in the collisions. For example, `h` identifies actors whose `role_name = hero`, usually assigned to vehicles managed by the user.
Currently, the actor types that can be used in the query are:
@ -184,7 +188,7 @@ In this case, the playback showed this:
![collision](img/collision1.gif)
<h4>Blocked actors</h4>
#### Blocked actors
This query is used to detect vehicles that were stuck during the recording. An actor is considered blocked if it does not move a minimum distance in a certain time. This definition is made by the user during the query:
@ -197,6 +201,8 @@ client.show_recorder_actors_blocked("recording01.log", min_time, min_distance)
| `min_time` | Minimum seconds to move `min_distance`. | 30 secs. |
| `min_distance` | Minimum centimeters to move to not be considered blocked. | 10 cm. |
<br>
!!! Note
Take into account that vehicles are sometimes stopped at traffic lights for longer than expected.
@ -243,6 +249,8 @@ Some of the provided scripts in `PythonAPI/examples` facilitate the use of the r
| `-n` <small>(optional)</small>| Vehicles to spawn. Default is 10. |
| `-t` <small>(optional)</small>| Duration of the recording. |
<br>
* __start_replaying.py__: starts the playback of a recording. Starting time, duration and actor to follow can be set.
| Parameters | Description |
@ -252,6 +260,8 @@ Some of the provided scripts in `PythonAPI/examples` facilitate the use of the r
| `-d` <small>(optional)</small>| Duration. Default is all. |
| `-c` <small>(optional)</small>| ID of the actor to follow. |
<br>
* __show_recorder_file_info.py__: shows all the information in the recording file.
Two modes of detail: by default it only shows frames where some event is recorded. The second shows all information for all frames.
@ -260,6 +270,8 @@ Two modes of detail: by default it only shows frames where some event is recorde
| `-f` | Filename. |
| `-s` <small>(optional)</small>| Flag to show all details. |
<br>
* __show_recorder_collisions.py__: shows recorded collisions between two actors of type __A__ and __B__ defined using a series of flags: `-t = vv` would show all collisions between vehicles.
| Parameters | Description |
@ -276,6 +288,7 @@ Two modes of detail: by default it only shows frames where some event is recorde
| `-t` <small>(optional)</small> | Time to move `-d` before being considered blocked. |
| `-d` <small>(optional)</small> | Distance to move to not be considered blocked. |
<br>
---------------
Now it is time to experiment for a while. Use the recorder to play back a simulation, trace back events, and make changes to see new outcomes. Feel free to have your say in the CARLA forum about this matter:

@ -1,4 +1,4 @@
<h1>Sensors' documentation</h1>
# Sensors' documentation
* [__Collision detector__](#collision-detector)
* [__Depth camera__](#depth-camera)
@ -23,7 +23,7 @@ To ensure that collisions with any kind of object are detected, the server creat
Collision detectors do not have any configurable attribute.
<h4>Output attributes: </h4>
#### Output attributes
| Sensor data attribute | Type | Description |
| ---------------------- | ----------- | ----------- |
@ -56,7 +56,7 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert
![ImageDepth](img/capture_depth.png)
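The raw depth image encodes distance across the three color channels. A minimal sketch of the decoding commonly used for CARLA depth images (our own reimplementation; `carla.ColorConverter` provides the official conversions):

```python
def depth_in_meters(r, g, b):
    # The three 8-bit channels form a 24-bit value; normalizing by the
    # maximum (256^3 - 1) and scaling by the far plane (1000 m) gives
    # the distance in meters.
    normalized = (r + g * 256 + b * 256 * 256) / (256 ** 3 - 1)
    return 1000.0 * normalized
```

A fully saturated pixel (255, 255, 255) thus maps to the 1000 m far plane, and a black pixel to 0 m.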
<h4>Basic camera attributes</h4>
#### Basic camera attributes
| Blueprint attribute | Type | Default | Description |
| ------------------- | ---- | ------- | ----------- |
@ -65,7 +65,9 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert
| `fov` | float | 90.0 | Horizontal field of view in degrees. |
| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
<h4>Camera lens distortion attributes</h4>
<br>
#### Camera lens distortion attributes
| Blueprint attribute | Type | Default | Description |
|--------------------------|-------|---------|-------------|
@ -76,7 +78,9 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert
| `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] |
| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
<h4>Output attributes</h4>
<br>
#### Output attributes
| Sensor data attribute | Type | Description |
| --------------------- | ------------------------------------------------ | ----------- |
@ -96,7 +100,7 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert
Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gnss) of its parent object. This is calculated by adding the metric position to an initial geo reference location defined within the OpenDRIVE map definition.
<h4>GNSS attributes</h4>
#### GNSS attributes
| Blueprint attribute | Type | Default | Description |
| -------------------- | ---- | ------- | ----------- |
@ -109,7 +113,9 @@ Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gns
| `noise_seed` | int | 0 | Initializer for a pseudorandom number generator. |
| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
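The paired `noise_*_bias` / `noise_*_stddev` attributes suggest each reading is perturbed by Gaussian noise with a constant bias. A minimal sketch of that model (our own illustration, not CARLA code):

```python
import random

def noisy_reading(true_value, bias, stddev, rng=random):
    # Gaussian perturbation with a constant bias, matching the
    # noise_*_bias / noise_*_stddev attribute pairs above.
    return true_value + rng.gauss(bias, stddev)
```

Averaged over many samples, the readings drift by roughly the bias while scattering with the given standard deviation.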
<h4>Output attributes</h4>
<br>
#### Output attributes
| Sensor data attribute | Type | Description |
| ---------------------- | ------------------------------------------------ | ----------- |
@ -128,7 +134,7 @@ Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gns
Provides the measurements that an accelerometer, a gyroscope and a compass would retrieve for the parent object. The data is collected from the object's current state.
<h4>IMU attributes</h4>
#### IMU attributes
| Blueprint attribute | Type | Default | Description |
| --------------------- | ---- | ------- | ----------- |
@ -144,8 +150,9 @@ Provides measures that accelerometer, gyroscope and compass would retrieve for t
| `noise_seed` | int | 0 | Initializer for a pseudorandom number generator. |
| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
<br>
<h4>Output attributes</h4>
#### Output attributes
| Sensor data attribute | Type | Description |
| --------------------- | ------------------------------------------------ | ----------- |
@ -174,7 +181,7 @@ This sensor does not have any configurable attribute.
!!! Important
This sensor works fully on the client-side.
<h4>Output attributes</h4>
#### Output attributes
| Sensor data attribute | Type | Description |
| ----------------------- | ---------------------------------------------------------- | ----------- |
@ -210,7 +217,7 @@ for location in lidar_measurement:
![LidarPointCloud](img/lidar_point_cloud.gif)
<h4>Lidar attributes</h4>
#### Lidar attributes
| Blueprint attribute | Type | Default | Description |
| -------------------- | ---- | ------- | ----------- |
@ -222,7 +229,9 @@ for location in lidar_measurement:
| `lower_fov` | float | -30.0 | Angle in degrees of the lowest laser. |
| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
<h4>Output attributes</h4>
<br>
#### Output attributes
| Sensor data attribute | Type | Description |
| -------------------------- | ------------------------------------------------ | ----------- |
@ -235,7 +244,7 @@ for location in lidar_measurement:
| `raw_data` | bytes | Array of 32-bits floats (XYZ of each point). |
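`raw_data` is a flat array of 32-bit floats, three per point. A sketch of unpacking it with the standard library (little-endian byte order is our assumption here; `numpy.frombuffer` is a common alternative):

```python
import struct

def unpack_points(raw_data):
    # Three 32-bit floats (x, y, z) per point; little-endian assumed.
    count = len(raw_data) // 12
    floats = struct.unpack("<%df" % (count * 3), raw_data[:count * 12])
    return [floats[i:i + 3] for i in range(0, count * 3, 3)]
```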
---------------
##Obstacle detector
## Obstacle detector
* __Blueprint:__ sensor.other.obstacle
* __Output:__ [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleDetectionEvent) per obstacle (unless `sensor_tick` says otherwise).
@ -252,7 +261,9 @@ To ensure that collisions with any kind of object are detected, the server creat
| `debug_linetrace` | bool | false | If true, the trace will be visible. |
| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
<h4>Output attributes</h4>
<br>
#### Output attributes
| Sensor data attribute | Type | Description |
| ---------------------- | ------------------------------------------------ | ----------- |
@ -264,7 +275,7 @@ To ensure that collisions with any kind of object are detected, the server creat
| `distance` | float | Distance from `actor` to `other_actor`. |
---------------
##Radar sensor
## Radar sensor
* __Blueprint:__ sensor.other.radar
* __Output:__ [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) per step (unless `sensor_tick` says otherwise).
@ -291,7 +302,9 @@ The provided script `manual_control.py` uses this sensor to show the points bein
| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
| `vertical_fov` | float | 30 | Vertical field of view in degrees. |
<h4>Output attributes</h4>
<br>
#### Output attributes
| Sensor data attribute | Type | Description |
| ---------------------- | ---------------------------------------------------------------- | ----------- |
@ -305,7 +318,7 @@ The provided script `manual_control.py` uses this sensor to show the points bein
| `velocity` | float | Velocity towards the sensor. |
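Each detection gives a depth plus azimuth and altitude angles, which can be converted to a local Cartesian point. A sketch of that conversion (the axis convention is our assumption; the provided `manual_control.py` performs a similar transformation to draw the points):

```python
import math

def detection_to_xyz(depth, azimuth, altitude):
    # Spherical-to-Cartesian conversion; angles in radians,
    # x forward, y right, z up (assumed convention).
    x = depth * math.cos(altitude) * math.cos(azimuth)
    y = depth * math.cos(altitude) * math.sin(azimuth)
    z = depth * math.sin(altitude)
    return x, y, z
```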
---------------
##RGB camera
## RGB camera
* __Blueprint:__ sensor.camera.rgb
* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
@ -328,7 +341,7 @@ A value of 1.5 means that we want the sensor to capture data each second and a h
![ImageRGB](img/capture_scenefinal.png)
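To illustrate the effect of `sensor_tick` (our own arithmetic, not an API call): a tick of 1.5 means captures happen at most every second and a half of simulated time.

```python
def capture_times(duration, sensor_tick):
    # Simulated timestamps at which a sensor with the given tick captures.
    times, t = [], 0.0
    while t < duration:
        times.append(t)
        t += sensor_tick
    return times
```

Over six simulated seconds with `sensor_tick = 1.5`, the sensor captures four frames, at 0, 1.5, 3 and 4.5 seconds.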
<h4>Basic camera attributes</h4>
#### Basic camera attributes
| Blueprint attribute | Type | Default | Description |
|---------------------|-------|---------|-------------|
@ -341,7 +354,9 @@ A value of 1.5 means that we want the sensor to capture data each second and a h
| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
| `shutter_speed` | float | 60.0 | The camera shutter speed in seconds (1.0 / s). |
<h4>Camera lens distortion attributes</h4>
<br>
#### Camera lens distortion attributes
| Blueprint attribute | Type | Default | Description |
|--------------------------|-------|---------|-------------|
@ -352,7 +367,9 @@ A value of 1.5 means that we want the sensor to capture data each second and a h
| `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] |
| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
<h4>Advanced camera attributes</h4>
<br>
#### Advanced camera attributes
Since these effects are provided by UE, please make sure to check their documentation:
@ -392,9 +409,11 @@ Since these effects are provided by UE, please make sure to check their document
| `chromatic_aberration_offset` | float | 0.0 | Normalized distance to the center of the image where the effect takes place. |
| `enable_postprocess_effects` | bool | True | Post-process effects activation. |
<br>
[AutomaticExposure.gamesetting]: https://docs.unrealengine.com/en-US/Engine/Rendering/PostProcessEffects/AutomaticExposure/index.html#gamesetting
<h4>Output attributes</h4>
#### Output attributes
| Sensor data attribute | Type | Description |
| --------------------- | ------------------------------------------------ | ----------- |
@ -435,13 +454,15 @@ The following tags are currently available:
| 11 | Wall | (102, 102, 156) |
| 12 | Traffic sign | (220, 220, 0) |
<br>
!!! Note
**Adding new tags**:
It requires some C++ coding. Add a new label to the `ECityObjectLabel` enum in "Tagger.h", and its corresponding filepath check inside `GetLabelByFolderName()` function in "Tagger.cpp".
![ImageSemanticSegmentation](img/capture_semseg.png)
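The tag is encoded in the red channel of the raw image, and the palette in the table above maps each tag to its display color. A sketch of that lookup, using only the two rows visible in this excerpt (the names and the black fallback are our own):

```python
# The tag is stored in the red channel of the raw segmentation image.
# Partial palette, taken from the tag table above.
PALETTE = {
    11: (102, 102, 156),  # Wall
    12: (220, 220, 0),    # Traffic sign
}

def tag_to_color(red):
    # Unknown tags fall back to black in this sketch.
    return PALETTE.get(red, (0, 0, 0))
```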
<h4>Basic camera attributes</h4>
#### Basic camera attributes
| Blueprint attribute | Type | Default | Description |
| ------------------- | ---- | ------- | ----------- |
@ -450,7 +471,9 @@ The following tags are currently available:
| `image_size_y` | int | 600 | Image height in pixels. |
| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
<h4>Camera lens distortion attributes</h4>
<br>
#### Camera lens distortion attributes
| Blueprint attribute | Type | Default | Description |
|------------------------- |------ |---------|-------------|
@ -461,7 +484,9 @@ The following tags are currently available:
| `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] |
| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
<h4>Output attributes</h4>
<br>
#### Output attributes
| Sensor data attribute | Type | Description |
| --------------------- | ------------------------------------------------ | ----------- |
@ -472,3 +497,5 @@ The following tags are currently available:
| `timestamp` | double | Simulation time of the measurement in seconds since the beginning of the episode. |
| `transform` | [carla.Transform](python_api.md#carla.Transform) | Location and rotation in world coordinates of the sensor at the time of the measurement. |
| `width` | int | Image width in pixels. |
<br>

@ -1,4 +1,4 @@
<h1>Rendering options</h1>
# Rendering options
Before you start running your own experiments there are a few details to take into
account when configuring your simulation. In this document we cover
@ -21,7 +21,7 @@ the most important ones.
---------------
## Graphics quality
<h4>Vulkan vs OpenGL</h4>
#### Vulkan vs OpenGL
Vulkan is the default graphics API used by Unreal Engine and CARLA (if installed). It consumes more memory, but performs faster and yields a better frame rate. However, it is quite experimental, especially on Linux, and it may lead to some issues.
For said reasons, there is the option to change to OpenGL simply by using a flag when running CARLA. The same flag works for both Linux and Windows:
@ -32,7 +32,7 @@ cd carla && ./CarlaUE4.sh -opengl
When working with the build version of CARLA, it is Unreal Engine itself that needs to be set to use OpenGL. [Here][UEdoc] is the documentation regarding the different command line options for Unreal Engine.
[UEdoc]: https://docs.unrealengine.com/en-US/Programming/Basics/CommandLineArguments/index.html
<h4>Quality levels</h4>
#### Quality levels
CARLA also allows for two different graphic quality levels, named __Epic__, the default, and __Low__, which disables all post-processing and shadows, sets the drawing distance to 50m instead of infinite, and makes the simulation run significantly faster.
Low mode is used not only when precision is nonessential or there are technical limitations, but also to train agents under conditions with simpler data or involving only close elements.
@ -82,11 +82,11 @@ Unreal Engine needs for a screen in order to run, but there is a workaround for
The simulator launches but there is no available window. However, it can be connected to in the usual manner and scripts run the same way. For the sake of understanding, let's say that this mode tricks Unreal Engine into rendering to a fake screen.
<h4>Off-screen vs no-rendering</h4>
#### Off-screen vs no-rendering
These may look similar, but are indeed quite different. It is important to understand the distinction between them to prevent misunderstandings. In off-screen mode, Unreal Engine is working and rendering as usual; the only difference is that there is no available display. In no-rendering mode, Unreal Engine itself is told to skip rendering, and thus graphics are not computed. For said reasons, GPU sensors return data in off-screen mode, and no-rendering mode can be enabled at will.
<h4>Setting off-screen mode</h4>
#### Setting off-screen mode
Right now this is __only possible in Linux while using OpenGL__ instead of Vulkan. Unreal Engine crashes when running off-screen with Vulkan, and this issue is yet to be fixed by Epic.
@ -101,7 +101,7 @@ Note that this method, in multi-GPU environments, does not allow to choose the G
---------------
## Running off-screen using a preferred GPU
<h4> Docker: recommended approach </h4>
#### Docker: recommended approach
The best way to run a headless CARLA and select the GPU is to [__run CARLA in a Docker__](../carla_docker).
This section contains an alternative tutorial, but this method is deprecated and performance is much worse. However, it is here just in case, for those for whom Docker is not an option.
@ -114,7 +114,7 @@ This section contains an alternative tutorial, but this method is deprecated and
!!! Warning
This tutorial is deprecated. To run headless CARLA, please [__run CARLA in a Docker__](../carla_docker).
<h6> Requirements </h6>
* __Requirements:__
This tutorial only works on Linux and makes it possible for a remote server with several graphics cards to use CARLA on all GPUs. It also applies to a desktop user trying to use CARLA with a GPU that is not plugged into any screen. To achieve that, the steps can be summarized as:
@ -141,13 +141,13 @@ sudo apt install x11-xserver-utils libxrandr-dev
Make sure that the VNC version is compatible with Unreal Engine. The one above worked properly during the making of this tutorial.
<h6>Configure the X</h6>
* __Configure the X__
Generate an X configuration compatible with the installed Nvidia drivers, able to run without a display:
sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024
<h6> Emulate the virtual display </h6>
* __Emulate the virtual display__
Run an Xorg server. Here, number 7 is used, but it could be labeled with any free number:
@ -164,7 +164,7 @@ If everything is working fine the following command will run glxinfo on Xserver
!!! Important
To run on another GPU, change the `7.X` pattern in the previous command. To set it to GPU 1: `DISPLAY=:8 vglrun -d :7.1 glxinfo`
<h6> Extra </h6>
* __Extra__
To remove the need for sudo when creating the `nohup Xorg`, go to `/etc/X11/Xwrapper.config` and change `allowed_users=console` to `allowed_users=anybody`.
@ -172,7 +172,7 @@ It may be needed to stop all Xorg servers before running `nohup Xorg`. The comma
sudo service lightdm stop
<h6> Running CARLA </h6>
* __Running CARLA__
To run CARLA on a certain `<gpu_number>` in a certain `$CARLA_PATH` use the following command:
@ -1,4 +1,4 @@
<h1>Synchrony and time-step</h1>
# Synchrony and time-step
This section deals with two concepts that are fundamental to fully comprehend CARLA and gain control over it to achieve the desired results. There are different configurations that define how time goes by in the simulation, and how the server running said simulation works. The following sections will dive deep into these concepts:
@ -22,7 +22,7 @@ The time-step can be fixed or variable depending on user preferences, and CARLA
!!! Note
After reading this section it would be a great idea to go for the following one, __Client-server synchrony__, especially the part about synchrony and time-step. Both are related concepts and affect each other when using CARLA.
<h4>Variable time-step</h4>
#### Variable time-step
This is the default mode in CARLA. When the time-step is variable, the simulation time that goes by between steps will be the time the server takes to compute them.
In order to set the simulation to a variable time-step the code could look like this:
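The fragment referenced above is elided from this hunk. A minimal sketch, assuming a client is already connected to a simulator running on `localhost:2000`:

```python
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

settings = world.get_settings()
settings.fixed_delta_seconds = 0.0  # 0.0 (or None) means variable time-step
world.apply_settings(settings)
```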
@ -36,7 +36,7 @@ The provided script `PythonAPI/util/config.py` automatically sets time-step wit
cd PythonAPI/util && ./config.py --delta-seconds 0
```
<h4>Fixed time-step</h4>
#### Fixed time-step
Going for a fixed time-step makes the server run a simulation where the elapsed time remains constant between steps. If it is set to 0.5 seconds, there will be two frames per simulated second.
Using the same time increment on each step is the best way to gather data from the simulation, as physics and sensor data will correspond to an easily comprehensible moment of the simulation. Also, if the server is fast enough, it makes it possible to simulate longer time periods in less real time.
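A minimal sketch of setting a fixed time-step through the Python API, again assuming a client already connected on `localhost:2000`:

```python
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

settings = world.get_settings()
settings.fixed_delta_seconds = 0.05  # 20 steps per simulated second
world.apply_settings(settings)
```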
@ -52,7 +52,7 @@ Thus, the simulator will take twenty steps (1/0.05) to recreate one second of th
cd PythonAPI/util && ./config.py --delta-seconds 0.05
```
<h4>Tips when recording the simulation</h4>
#### Tips when recording the simulation
CARLA has a [recorder feature](recorder_and_playback.md) that allows a simulation to be recorded and then reenacted. However, when looking for precision, some things need to be taken into account.
If the simulation ran with a fixed time-step, reenacting it will be easy, as the server can be set to the same time-step used in the original simulation. However, if the simulation used a variable time-step, things are a bit more complicated.
@ -61,7 +61,7 @@ Secondly, the server can be forced to reproduce the exact same time-steps passin
Finally, there is also the floating-point arithmetic error that working with a variable time-step introduces. The simulation runs with a time-step equal to the real one, but real time is continuous while the simulation time-step is a float variable with decimal limitations. The time cropped from each step is an error that accumulates and prevents a precise repetition of what has happened.
<h4>Time-step limitations</h4>
#### Time-step limitations
Physics must be computed within very low time-steps to be precise. The more time goes by, the more variables and chaos come into play, and so the more defective the simulation will be.
CARLA uses up to 6 substeps to compute physics in every step, each with a maximum delta time of 0.016667s.
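These two numbers bound the largest fixed time-step that keeps physics well-behaved, which can be checked with a quick computation:

```python
# Maximum simulation time advanced per step while keeping physics precise:
# up to 6 physics substeps, each covering at most 0.016667 s.
max_substeps = 6
max_substep_delta = 0.016667

max_fixed_delta_seconds = max_substeps * max_substep_delta
print(round(max_fixed_delta_seconds, 3))  # → 0.1
```

This matches the rule of thumb, stated further below, that physics cannot run properly with a time-step bigger than 0.1s.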
@ -96,7 +96,7 @@ cd PythonAPI/util && ./config.py --no-sync
```
It must be mentioned that synchronous mode can only be disabled using the script, not enabled. Enabling synchronous mode makes the server wait for a client tick, and when using this script the user cannot send ticks when desired.
<h4>Using synchronous mode</h4>
#### Using synchronous mode
Synchronous mode becomes especially relevant when running slow client applications, and when synchrony between different elements, such as sensors, is needed. If the client is too slow and the server does not wait for it, the amount of information received will be impossible to manage, and it can easily get mixed up. In a similar way, if there are ten sensors waiting to retrieve data and the server sends all this information without waiting for all of them to have the previous batch, it would be impossible to know whether all the sensors are using data from the same moment in the simulation.
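A minimal sketch of enabling synchronous mode from the client, assuming a connection on `localhost:2000` (pairing it with a fixed time-step is the usual choice):

```python
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

settings = world.get_settings()
settings.synchronous_mode = True     # server now waits for a client tick
settings.fixed_delta_seconds = 0.05  # usually paired with a fixed time-step
world.apply_settings(settings)

while True:
    world.tick()  # advance the simulation exactly one step
    # ...retrieve and process sensor data here...
```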
As a little extension to the previous code, in the following fragment, the client creates a camera sensor that puts the image data received in the current step in a queue and sends ticks to the server only after retrieving it from the queue. A more complex example regarding several sensors can be found [here][syncmodelink].
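The fragment itself is elided from this hunk, but the queueing logic can be sketched on its own. Here a hypothetical `FakeCamera` stands in for a CARLA camera (`carla.Sensor` exposes the same `listen(callback)` shape, and `world.tick()` plays the role of `tick()` below), so the pattern is runnable without a simulator:

```python
import queue

class FakeCamera:
    """Stand-in for carla.Sensor: invokes the registered callback once per tick."""
    def __init__(self):
        self._callback = None
        self._frame = 0

    def listen(self, callback):
        self._callback = callback

    def tick(self):  # in CARLA, world.tick() drives the sensor instead
        self._frame += 1
        self._callback(self._frame)

camera = FakeCamera()
image_queue = queue.Queue()
camera.listen(image_queue.put)  # same call shape as carla.Sensor.listen

frames = []
for _ in range(3):
    camera.tick()                     # stands in for world.tick()
    frames.append(image_queue.get())  # block until this step's data arrives

print(frames)  # → [1, 2, 3]
```

The key point is that the client only ticks again after the queue has yielded the data for the current step, so sensor data and simulation steps stay paired.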
@ -141,6 +141,8 @@ The configuration of both concepts explained in this page, simulation time-step
| __Synchronous mode__ | Client is in total control over the simulation and its information. | Risk of unreliable simulations. |
| __Asynchronous mode__ | Good time references for information. Server runs as fast as possible. | Simulations are not easily repeatable. |
<br>
* __Synchronous mode + variable time-step:__ This is almost certainly an undesirable state. Physics cannot run properly when the time-step is bigger than 0.1s, and if the server needs to wait for the client to compute the steps, this is likely to happen. Simulation time and physics will then not be in synchrony, and thus the simulation is not reliable.
* __Asynchronous mode + variable time-step:__ This is the default CARLA state. Client and server are asynchronous, but the simulation time flows according to real time. Reenacting the simulation needs to take into account floating-point arithmetic error and possible differences in time-steps between servers.
@ -1,4 +1,4 @@
<h1>Update CARLA</h1>
# Update CARLA
* [__Get latest binary release__](#get-latest-binary-release)
* [__Update Linux and Windows build__](#update-linux-and-windows-build)
@ -18,7 +18,6 @@ CARLA forum</a>
</p>
</div>
---------------
## Get latest binary release
@ -46,20 +45,20 @@ Binary releases are prepackaged and thus, tied to a specific version of CARLA. I
The process of updating is quite similar and straightforward for both platforms:
<h4>Clean the build</h4>
#### Clean the build
Go to the CARLA main directory and delete the binaries and temporaries generated by the previous build:
```sh
git checkout master
make clean
```
<h4>Pull from origin</h4>
#### Pull from origin
Get the current version from `master` in the CARLA repository:
```sh
git pull origin master
```
<h4>Download the assets</h4>
#### Download the assets
__Linux:__
```sh
@ -74,7 +73,7 @@ __Windows:__
!!! Note
In order to work with the current content used by developers in the CARLA team, follow the __Get development assets__ section right below this one.
<h4>Launch the server</h4>
#### Launch the server
Run the editor with the spectator view to be sure that everything worked properly:
```sh
@ -1,4 +1,4 @@
<h1>Walker Bone Control</h1>
# Walker Bone Control
In this tutorial we describe how to manually control and animate the
skeletons of walkers from the CARLA Python API. The reference of
@ -101,7 +101,7 @@ class MarkdownFile:
def not_title(self, buf):
self._data = join([
self._data, '\n', self.list_depth(), '<h1>', buf, '</h1>', '\n'])
self._data, '\n', self.list_depth(), '# ', buf, '\n'])
def title(self, strongness, buf):
self._data = join([
@ -70,6 +70,10 @@ class MarkdownFile:
def textn(self, buf):
self._data = join([self._data, self.list_depth(), buf, self.endl])
def first_title(self):
self._data = join([
self._data, '# Python API reference'])
def title(self, strongness, buf):
self._data = join([
self._data, '\n', self.list_depth(), '#' * strongness, ' ', buf, '\n'])
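For reference, the `title` method above joins the list depth, a run of hashes, a space, and the text. A standalone sketch of the same join (with a simplified `join` helper and an empty list depth, both assumptions made here for illustration):

```python
def join(parts):
    # simplified stand-in for the script's join helper
    return ''.join(parts)

def title(strongness, buf, data='', depth=''):
    # mirrors: join([data, '\n', depth, '#' * strongness, ' ', buf, '\n'])
    return join([data, '\n', depth, '#' * strongness, ' ', buf, '\n'])

print(repr(title(2, 'Snapshots')))  # → '\n## Snapshots\n'
```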
@ -437,6 +441,7 @@ class Documentation:
def gen_body(self):
"""Generates the documentation body"""
md = MarkdownFile()
md.first_title()
for module_name in sorted(self.master_dict):
module = self.master_dict[module_name]
module_key = module_name
@ -43,7 +43,6 @@ nav:
- 'Generate pedestrian navigation': 'how_to_generate_pedestrians_navigation.md'
- "Link Epic's Automotive Materials": 'epic_automotive_materials.md'
- 'Map customization': 'dev/map_customization.md'
- 'Running without display and selecting GPUs': 'carla_headless.md'
- How to... (content):
- 'Add assets': 'how_to_add_assets.md'
- 'Create and import a new map': 'how_to_make_a_new_map.md'