Final draft

This commit is contained in:
sergi-e 2020-03-02 14:35:50 +01:00 committed by bernat
parent 8802adbdc6
commit 48247f1ad9
59 changed files with 486 additions and 1954 deletions

View File

@ -1,7 +1,7 @@
# Recorder
This is one of the advanced CARLA features. It allows recording and reenacting a simulation, while providing a complete log of the events that happened and a few queries to ease tracing and studying them.
To learn about the generated file and its specifics take a look at this [reference](recorder_binary_file_format.md).
To learn about the generated file and its specifics take a look at this [reference](ref_recorder_binary_file_format.md).
* [__Recording__](#recording)
* [__Simulation playback__](#simulation-playback):
@ -12,7 +12,7 @@ To learn about the generated file and its specifics take a look at this [referen
* Blocked actors
* [__Sample Python scripts__](#sample-python-scripts)
---------------
---
## Recording
All the data is written in a binary file on the server side only. However, the recorder is managed using the [carla.Client](python_api.md#carla.Client).
@ -42,7 +42,7 @@ client.stop_recorder()
!!! Note
As an estimate: 1h recording with 50 traffic lights and 100 vehicles takes around 200MB in size.
---------------
---
## Simulation playback
A playback can be started at any point during a simulation just by specifying the file name.
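As a plain-Python sketch (no server required), the `start` and `duration` arguments of `client.replay_file` can be pictured as resolving to an absolute playback window. The helper below is purely illustrative, not part of the CARLA API, and the negative-start and zero-duration semantics assumed here should be checked against your CARLA version:

```py
# Illustrative only: resolves (start, duration) into an absolute [begin, end]
# window, assuming a negative `start` counts back from the end of the recording
# and a `duration` of 0 plays until the recording ends.
def replay_window(total, start, duration):
    begin = total + start if start < 0 else start
    begin = max(0.0, min(begin, total))          # clamp into the recording
    end = total if duration == 0 else min(begin + duration, total)
    return begin, end

# A 60-second recording, replayed from 10 seconds before the end until the end:
print(replay_window(60.0, -10.0, 0))
```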
@ -88,10 +88,10 @@ For instance, with a time factor of __20x__ traffic flow is easily appreciated:
![flow](img/RecorderFlow2.gif)
---------------
---
## Recorded file
The details of a recording can be retrieved using a simple API call. By default, it only retrieves those frames where an event was registered, but setting the parameter `show_all` would return all the information for every frame. The specifics on how the data is stored are detailed in the [recorder's reference](recorder_binary_file_format.md).
The details of a recording can be retrieved using a simple API call. By default, it only retrieves those frames where an event was registered, but setting the parameter `show_all` would return all the information for every frame. The specifics on how the data is stored are detailed in the [recorder's reference](ref_recorder_binary_file_format.md).
The following example would only retrieve remarkable events:
```py
@ -136,12 +136,12 @@ Frame 2351 at 60.3057 seconds
Frames: 2354
Duration: 60.3753 seconds
```
---------------
---
## Queries
#### Collisions
In order to record collisions, vehicles must have a [collision detector](../ref_sensors#collision-detector) attached. The collisions registered by the recorder can be queried using arguments to filter the type of the actors involved in the collisions. For example, `h` identifies actors whose `role_name = hero`, usually assigned to vehicles managed by the user.
In order to record collisions, vehicles must have a [collision detector](ref_sensors.md#collision-detector) attached. The collisions registered by the recorder can be queried using arguments to filter the type of the actors involved in the collisions. For example, `h` identifies actors whose `role_name = hero`, usually assigned to vehicles managed by the user.
Currently, the actor types that can be used in the query are:
* __h__ = Hero
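The flag vocabulary can be sketched as a small lookup. Only `h` (Hero) is shown above; the remaining codes in this dict are an assumption taken from the CARLA recorder documentation and should be verified against your CARLA version:

```py
# Assumed flag set for recorder collision queries (only "h" is confirmed by
# the text above; the rest are assumptions to be checked per CARLA version).
ACTOR_FLAGS = {
    "h": "Hero",
    "v": "Vehicle",
    "w": "Walker",
    "t": "Traffic light",
    "o": "Other",
    "a": "Any",
}

def describe_collision_query(type_a, type_b):
    # Readable description of a collision filter such as ("h", "a").
    if type_a not in ACTOR_FLAGS or type_b not in ACTOR_FLAGS:
        raise ValueError("unknown actor flag")
    return "{} vs {}".format(ACTOR_FLAGS[type_a], ACTOR_FLAGS[type_b])

# e.g. collisions between the ego vehicle and anything else:
print(describe_collision_query("h", "a"))
```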
@ -236,7 +236,7 @@ client.replay_file("col3.log", 34, 0, 173)
![accident](img/accident.gif)
---------------
---
## Sample Python scripts
Some of the provided scripts in `PythonAPI/examples` facilitate the use of the recorder:
@ -290,7 +290,7 @@ Two modes of detail: by default it only shows frames where some event is recorde
<br>
---------------
---
Now it is time to experiment for a while. Use the recorder to play back a simulation, trace back events, and make changes to see new outcomes. Feel free to have your say in the CARLA forum about this matter:
<div class="build-buttons">
<!-- Latest release button -->


@ -18,7 +18,7 @@ the most important ones.
!!! Important
Some of the command options shown below are not directly equivalent when using the CARLA packaged releases. Read the [Command line options](#command-line-options) section to learn more about this.
---------------
---
## Graphics quality
#### Vulkan vs OpenGL
@ -46,7 +46,7 @@ The images below show how do both modes look like and how to start the CARLA pac
!!! Important
The issue that made Epic mode show an abnormal whiteness has been fixed. If the problem persists delete `GameUserSettings.ini` as it is saving the previous settings. It will be generated again in the next run. __Ubuntu path:__ ` ~/.config/Epic/CarlaUE4/Saved/Config/LinuxNoEditor/` __Windows path:__ `<Package folder>\WindowsNoEditor\CarlaUE4\Saved\Config\WindowsNoEditor\`
---------------
---
## No-rendering mode
This mode completely disables rendering in the simulator; Unreal Engine will skip everything regarding graphics. It greatly facilitates simulating traffic and road behaviours at very high frequencies without the rendering overhead. To enable or disable no-rendering mode, the user can either change the world settings in a script or use the provided script `/PythonAPI/util/config.py`, which does the same thing automatically.
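A minimal sketch of the settings change described above. The `no_rendering_mode` field is the real `carla.WorldSettings` flag; the helper itself is illustrative and works on any settings-like object:

```py
# Toggle no-rendering mode on a world-settings object. The helper is generic
# on purpose so it can be exercised without a running CARLA server.
def set_no_rendering(settings, enabled=True):
    settings.no_rendering_mode = enabled
    return settings

# With a live server it would be used roughly as:
#   world.apply_settings(set_no_rendering(world.get_settings(), True))
```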
@ -75,7 +75,7 @@ cd PythonAPI/examples && ./no_rendering_mode.py
!!! Warning
In no-rendering mode, cameras and GPU sensors will return empty data. The GPU is not used, as Unreal Engine is not drawing any scene.
---------------
---
## Off-screen mode
Unreal Engine needs a screen in order to run, but there is a workaround that makes it possible to render on remote servers with no display, or on desktop machines with a GPU not connected to any screen.
@ -98,12 +98,12 @@ DISPLAY= ./CarlaUE4.sh -opengl
```
Note that this method, in multi-GPU environments, does not allow choosing the GPU that the simulator will use for rendering. To do so, read the following section.
---------------
---
## Running off-screen using a preferred GPU
#### Docker: recommended approach
The best way to run a headless CARLA and select the GPU is to [__run CARLA in a Docker__](../carla_docker).
The best way to run a headless CARLA and select the GPU is to [__run CARLA in a Docker__](build_docker.md).
This section contains an alternative tutorial, but the method is deprecated and its performance is much worse. It is kept here just in case, for those for whom Docker is not an option.
<details>
@ -112,7 +112,7 @@ This section contains an alternative tutorial, but this method is deprecated and
</h4></summary>
!!! Warning
This tutorial is deprecated. To run headless CARLA, please [__run CARLA in a Docker__](../carla_docker).
This tutorial is deprecated. To run headless CARLA, please [__run CARLA in a Docker__](build_docker.md).
* __Requirements:__
@ -183,7 +183,7 @@ To run CARLA on a certain `<gpu_number>` in a certain `$CARLA_PATH` use the foll
</details>
----------------
---
That is all there is to know about the different rendering options in CARLA.
Open CARLA and mess around for a while to make sure that everything is clear and yet, if there are any doubts, feel free to post these in the forum.


@ -12,7 +12,7 @@ This section deals with two concepts that are fundamental to fully comprehend CA
* Using synchronous mode
* [__Possible configurations__](#possible-configurations)
---------------
---
## Simulation time-step
The first and most essential concept to understand in this section is the difference between real time and simulation time. The simulated world has its own clock and time, conducted by the server. Between two steps of the simulation there is the time spent computing them (real time), and the time span that went by between those two moments of the simulation (simulated time). This latter is the time-step.
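The distinction can be illustrated with a toy calculation: with a fixed time-step, simulated time advances by the same delta every tick, regardless of how long each tick took to compute in real (wall-clock) time.

```py
# Toy illustration: simulated time depends only on tick count and the fixed
# delta, never on how fast the server computed those ticks.
def simulated_time(ticks, fixed_delta=0.05):
    # 0.05 s per tick corresponds to 20 simulated FPS.
    return ticks * fixed_delta

# 200 ticks at 0.05 s amount to 10 s of simulated time,
# whether computing them took 2 seconds or 2 minutes of real time.
print(simulated_time(200))
```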
@ -54,7 +54,7 @@ cd PythonAPI/util && ./config.py --delta-seconds 0.05
#### Tips when recording the simulation
CARLA has a [recorder feature](recorder_and_playback.md) that allows a simulation to be recorded and then reenacted. However, when looking for precision, some things need to be taken into account.
CARLA has a [recorder feature](adv_recorder.md) that allows a simulation to be recorded and then reenacted. However, when looking for precision, some things need to be taken into account.
If the simulation ran with a fixed time-step, reenacting it will be easy, as the server can be set to the same time-step used in the original simulation. However, if the simulation used a variable time-step, things are a bit more complicated.
Firstly, if the server reenacting the simulation also runs with a variable time-step, the time-steps will be different from the original one, as logic cycles differ from time to time. The information will then be interpolated using the recorded data.
Secondly, the server can be forced to reproduce the exact same time-steps, passing them one by one. It must be mentioned, though, that since those time-steps were the result of the original simulation running as fast as possible, and the time now taken to represent them will mostly be different, the simulation is bound to be reproduced with odd time fluctuations. The simulated steps are the same, but the real time between them changes.
@ -72,7 +72,7 @@ Being these a maximum of 6, `6*0.016667 = 0.1`. If the time-step is greater ther
__Do not use a time-step greater than 0.1s.__<br>
As explained above, the physics will not be representative for the simulation. The original issue can be found here: Ref. [#695](https://github.com/carla-simulator/carla/issues/695)
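The rule above can be expressed as a quick check: physics substeps are capped at 6, each of at most 0.016667 s, so a time-step above `6 * 0.016667 ≈ 0.1` s can no longer be fully covered by substeps and the physics stop being representative.

```py
# Check whether a fixed time-step stays within the physics substep budget
# (a maximum of 6 substeps of 0.016667 s each, as stated above).
MAX_SUBSTEPS = 6
MAX_SUBSTEP_DELTA = 0.016667

def physics_reliable(delta_seconds):
    return delta_seconds <= MAX_SUBSTEPS * MAX_SUBSTEP_DELTA

print(physics_reliable(0.05))  # a 20 FPS fixed step is within budget
print(physics_reliable(0.2))   # above 0.1 s: physics no longer reliable
```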
----------------
---
## Client-server synchrony
CARLA is built over a client-server architecture. This has been previously stated: the server runs the simulation and the client retrieves information and demands changes in the world. But how do these two elements communicate?
@ -81,7 +81,7 @@ By default, CARLA runs in __asynchronous mode__, meaning that the server runs th
!!! Note
In a multiclient architecture, only one client should send the tick. The server would react to receiving many ticks as if they were all coming from the same client, and thus take one step per tick.
<h4>Setting synchronous mode</h4>
#### Setting synchronous mode
Changing between synchronous and asynchronous mode is just a matter of a boolean state. In the following example, there is the code to make the simulation run on synchronous mode:
```py
@ -131,7 +131,7 @@ world_snapshot = world.wait_for_tick()
world.on_tick(lambda world_snapshot: do_something(world_snapshot))
```
----------------
---
## Possible configurations
The configuration of both concepts explained in this page, simulation time-step and client-server synchrony, leads to different types of simulation and results. Here is a brief summary of the possibilities and the reasoning behind them:
@ -155,7 +155,7 @@ The configuration of both concepts explained in this page, simulation time-step
__In synchronous mode, always use a fixed time-step__. If the server has to wait for the user to compute the following step, and it is using a variable time-step, the simulation world will use time-steps too big for the physics to be reliable. This issue is better explained in the __time-step limitations__ section.
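The recommended pairing can be sketched as one settings change. The field names `synchronous_mode` and `fixed_delta_seconds` are the real `carla.WorldSettings` attributes used elsewhere in these docs; the helper itself is illustrative and works on any settings-like object:

```py
# Enable synchronous mode together with a fixed time-step (0.05 s = 20 FPS is
# an assumed default here, not a CARLA requirement).
def configure_synchronous(settings, delta=0.05):
    settings.synchronous_mode = True
    settings.fixed_delta_seconds = delta  # never leave this None in sync mode
    return settings

# With a live server:
#   world.apply_settings(configure_synchronous(world.get_settings()))
#   world.tick()  # the client is now in charge of advancing the simulation
```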
----------------
---
That is all there is to know about the roles of simulation time and client-server synchrony in CARLA.
Open CARLA and mess around for a while to make sure that everything is clear and yet, if there are any doubts, feel free to post these in the forum.


@ -1,6 +1,6 @@
# Blueprint Library
The Blueprint Library ([`carla.BlueprintLibrary`](../python_api/#carlablueprintlibrary-class)) is a summary of all [`carla.ActorBlueprint`](../python_api/#carla.ActorBlueprint) and its attributes ([`carla.ActorAttribute`](../python_api/#carla.ActorAttribute)) available to the user in CARLA.
The Blueprint Library ([`carla.BlueprintLibrary`](python_api.md#carlablueprintlibrary-class)) is a summary of all [`carla.ActorBlueprint`](python_api.md#carla.ActorBlueprint) and its attributes ([`carla.ActorAttribute`](python_api.md#carla.ActorAttribute)) available to the user in CARLA.
Here is an example code for printing all actor blueprints and their attributes:
```py
@ -11,7 +11,7 @@ for blueprint in blueprints:
print(' - {}'.format(attr))
```
Check out our [blueprint tutorial](../python_api_tutorial/#blueprints).
Check out our [introduction to blueprints](core_actors.md).
### controller
- **<font color="#498efc">controller.ai.walker</font>**


@ -9,7 +9,7 @@ CARLA forum</a>
</p>
</div>
------
---
## System requirements
<!-- ======================================================================= -->
<details>
@ -32,7 +32,7 @@ CARLA forum</a>
</details>
------
---
## Linux build
<!-- ======================================================================= -->
<details>
@ -40,7 +40,7 @@ CARLA forum</a>
"CarlaUE4.sh" script does not appear when downloading from GitHub.
</h5></summary>
There is no `CarlaUE4.sh` script in the source version of CARLA. Follow the [build instructions](../how_to_build_on_linux) to build CARLA from source. To directly get the `CarlaUE4.sh` script, follow the [quick start instructions](../getting_started).
There is no `CarlaUE4.sh` script in the source version of CARLA. Follow the [build instructions](build_linux.md) to build CARLA from source. To directly get the `CarlaUE4.sh` script, follow the [quick start instructions](start_quickstart.md).
</details>
<!-- ======================================================================= -->
@ -61,7 +61,7 @@ CARLA forum</a>
Other specific reasons for a system to show conflicts with CARLA may occur. Please post these on the forum so the team can learn more about them.
</details>
------
---
## Windows build
<!-- ======================================================================= -->
@ -70,7 +70,7 @@ Other specific reasons for a system to show conflicts with CARLA may occur. Plea
"CarlaUE4.exe" does not appear when downloading from GitHub.
</h5></summary>
There is no `CarlaUE4.exe` executable in the source version of CARLA. Follow the [build instructions](../how_to_build_on_windows) to build CARLA from source. To directly get the `CarlaUE4.exe`, follow the [quick start instructions](../getting_started).
There is no `CarlaUE4.exe` executable in the source version of CARLA. Follow the [build instructions](build_windows.md) to build CARLA from source. To directly get the `CarlaUE4.exe`, follow the [quick start instructions](start_quickstart.md).
</details>
@ -156,7 +156,7 @@ This may happen, especially when building for the very first time. Just click on
</details>
------
---
## Running CARLA
<!-- ======================================================================= -->
<details>
@ -221,7 +221,7 @@ Copy the file named `zlib.dll` in the directory of the script.
A 32-bit Python version creates conflicts when trying to run a script. Uninstall it and leave only the required 64-bit Python 3.
</details>
------
---
## Other
<!-- ======================================================================= -->
<details>
@ -248,4 +248,4 @@ A 32-bit Python version is creating conflicts when trying to run a script. Unins
Open the CarlaUE4 project, go to the menu "File -> Package Project", and select a platform. This takes a while, but it should generate a packaged version of CARLA to execute without Unreal Editor.
</details>
----
---


@ -16,7 +16,7 @@
* Set the environment variable
* make CARLA
The build process can be quite long and tedious. This documentation tries to make things clear and provides for a **[F.A.Q.](../faq)** with solutions for the most common starting issues. However, the CARLA forum is open for anybody to post unexpected issues, doubts or suggestions. There is a specific section for installation issues on Linux. Feel free to login and become part of the community.
The build process can be quite long and tedious. This documentation tries to make things clear and provides a **[F.A.Q.](build_faq.md)** with solutions for the most common starting issues. However, the CARLA forum is open for anybody to post unexpected issues, doubts or suggestions. There is a specific section for installation issues on Linux. Feel free to log in and become part of the community.
<div class="build-buttons">
<!-- Latest release button -->
@ -26,7 +26,7 @@ CARLA forum</a>
</p>
</div>
---------------
---
## Requirements
<h4>System specifics</h4>
@ -36,7 +36,7 @@ CARLA forum</a>
* __Two TCP ports and good internet connection:__ 2000 and 2001 by default. Be sure that neither the firewall nor any other application is blocking these.
<h4>Dependencies</h4>
#### Dependencies
CARLA needs many dependencies to run. Some of them are built automatically during this process, such as *Boost.Python*. Others are binaries that should be installed before starting the build (*cmake*, *clang*, different versions of *Python* and much more). In order to do so, run the commands below in a terminal window.
@ -72,14 +72,14 @@ sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/lib/llvm-7/bin/
sudo update-alternatives --install /usr/bin/clang clang /usr/lib/llvm-7/bin/clang 170
```
-------------------
---
## GitHub
First of all, a [GitHub](https://github.com/) account will be needed, as CARLA content is organized in different repositories there. Also, [git](https://git-scm.com/book/en/v2/Getting-Started-Installing-Git) will be used in this build guide to provide the commands to be run in a terminal.
In order to access the Unreal Engine repositories, which are set to private, create an [Unreal Engine](https://www.unrealengine.com/en-US/feed) account and connect it to a GitHub account. To do so, there is a section in Unreal Engine's profile settings under the name of __Connected accounts__. [Here](https://www.unrealengine.com/en-US/blog/updated-authentication-process-for-connecting-epic-github-accounts) is a brief explanation just in case.
-------------------
---
## Unreal Engine
The current version of CARLA runs on __Unreal Engine 4.22__ only, so the following steps will be downloading this version and building it. The path is irrelevant, but for the sake of this tutorial, installation will be done under `~/UnrealEngine_4.22`. If the path chosen differs, remember to change it accordingly when running the commands on terminal.
@ -105,7 +105,7 @@ cd ~/UnrealEngine_4.22/Engine/Binaries/Linux && ./UE4Editor
If anything goes wrong, it is related to Unreal Engine and there is not much CARLA can do about it. However, checking the guide mentioned above or visiting the [build documentation](https://wiki.unrealengine.com/Building_On_Linux) provided by Unreal Engine could be helpful.
-----------
---
## CARLA build
The system should be ready to start building CARLA. Just for clarity, a brief summary so far:
@ -117,7 +117,7 @@ The system should be ready to start building CARLA. Just for clarity, a brief su
!!! Note
Optionally you can download aria2 (with `sudo apt-get install aria2`) so the following commands run a bit faster.
<h4>Clone repository</h4>
#### Clone repository
<div class="build-buttons">
<!-- Latest release button -->
@ -137,7 +137,7 @@ Now the latest content for the project, known as `master` branch in the reposito
!!! Note
The `master` branch contains the latest fixes and features. Stable code is inside the `stable` branch, and it can be built by changing the branch. The same goes for previous CARLA releases. Always remember to check the current branch in git with `git branch`.
<h4>Get assets</h4>
#### Get assets
Only the assets package, the visual content, is yet to be downloaded. These are stored separately to make the repository a bit lighter. CARLA cannot be built without the assets, so there is a script that downloads and extracts the latest content version (this package is >3GB, so it might take some time depending on the internet connection).
Get into the root carla folder. The path should correspond with the repository just cloned:
@ -149,10 +149,10 @@ Run the script to get the assets:
./Update.sh
```
!!! Important
To get the assets still in development visit the [Update CARLA](../update_carla#get-development-assets) page and read __Get development assets__.
To get the assets still in development visit the [Update CARLA](build_update.md#get-development-assets) page and read __Get development assets__.
<h4>Set the environment variable </h4>
#### Set the environment variable
For CARLA to find the Unreal Engine 4.22 installation folder, an environment variable needs to be set.
@ -162,7 +162,7 @@ export UE4_ROOT=~/UnrealEngine_4.22
This variable should be added to `~/.bashrc` or `~/.profile` to set it persistently session-wide. Otherwise, it will only be accessible for the current shell.
<h4>make CARLA</h4>
#### make CARLA
The last step is to finally build CARLA. There are different `make` commands to build the different modules. All of them run in the root CARLA folder:


@ -9,7 +9,7 @@ in the client-side.
The goal is to be able to call Unreal Engine's functions from a separate Python
process.
![modules](../img/modules.png)
![modules](img/modules.png)
In Linux, we compile CARLA and all the dependencies with clang-7.0 and C++14
standard. We however link against different runtime C++ libraries depending on
@ -52,6 +52,7 @@ Two configurations:
| **Required by** | Carla plugin | PythonAPI |
<br>
#### CarlaUE4 and Carla plugin
Both compiled at the same step with Unreal Engine build tool. They require the


@ -18,10 +18,10 @@ CARLA forum</a>
</p>
</div>
---------------
---
## Get latest binary release
Binary releases are prepackaged and thus, tied to a specific version of CARLA. In order to get the latest, erase the previous one and follow the procedure stated in the [quick start installation](../getting_started/quickstart). In the CARLA repository, releases are listed in __Development__ and there is also a highly experimental __Nightly build__ containing the current state of CARLA up to date:
Binary releases are prepackaged and thus tied to a specific version of CARLA. In order to get the latest, erase the previous one and follow the procedure stated in the [quick start installation](start_quickstart.md). In the CARLA repository, releases are listed in __Development__ and there is also a highly experimental __Nightly build__ containing the current state of CARLA up to date:
<div class="build-buttons">
<!-- Latest release button -->
@ -37,7 +37,7 @@ Binary releases are prepackaged and thus, tied to a specific version of CARLA. I
</p>
</div>
-------------------
---
## Update Linux and Windows build
!!! Important
@ -80,7 +80,7 @@ Run the editor with the spectator view to be sure that everything worked properl
make launch
```
-------------------
---
## Get development assets
The 3D assets, models, and maps also have a [public git repository][contentrepolink] where the CARLA team regularly pushes the latest updates. However, using this version of the content is only recommended to developers, as it may contain unfinished maps and/or models.


@ -10,7 +10,7 @@
* Get assets
* make CARLA
The build process can be quite long and tedious. This documentation tries to make things clear and provides for a **[F.A.Q.](../faq)** with solutions for the most common starting issues. However, the CARLA forum is open for anybody to post unexpected issues, doubts or suggestions. There is a specific section for installation issues on Windows. Feel free to login and become part of the community.
The build process can be quite long and tedious. This documentation tries to make things clear and provides a **[F.A.Q.](build_faq.md)** with solutions for the most common starting issues. However, the CARLA forum is open for anybody to post unexpected issues, doubts or suggestions. There is a specific section for installation issues on Windows. Feel free to log in and become part of the community.
<div class="build-buttons">
<!-- Latest release button -->
@ -19,6 +19,7 @@ The build process can be quite long and tedious. This documentation tries to mak
CARLA forum</a>
</p>
</div>
---
## Requirements
#### System specifics
@ -27,9 +28,11 @@ CARLA forum</a>
* __30GB disk space:__ Installing all the software needed and CARLA itself will require quite a lot of space, especially Unreal Engine. Make sure to have around 30/50GB of free disk space.
* __An adequate GPU:__ CARLA aims for realistic simulations, so the server needs at least a 4GB GPU. A dedicated GPU is highly recommended for machine learning.
* __Two TCP ports and good internet connection:__ 2000 and 2001 by default. Be sure that neither the firewall nor any other application is blocking these.
---
## Necessary software
#### Minor installations
Some software is needed for the build process, and its installation is quite straightforward.
* [CMake](https://cmake.org/download/): Generates standard build files from simple configuration files.


@ -1,4 +0,0 @@
# Building from source
* [How to build on Linux](how_to_build_on_linux.md)
* [How to build on Windows](how_to_build_on_windows.md)


@ -1,106 +0,0 @@
<h1>Running CARLA without Display and Selecting GPUs</h1>
!!! note
See [#225](https://github.com/carla-simulator/carla/issues/225) for an
alternative method.
This tutorial is designed for
* Remote server users that have several nvidia graphical cards and want to
effectively use CARLA on all GPUs.
* Desktop users who want to use the GPU that is not plugged on the screen for
rendering CARLA.
In this tutorial you will learn:
* How to configure your server to have nvidia rendering without a
display attached.
* How to use VNC + VGL to simulate a display connected to any GPU you have in
your machine.
* And finally, how to run CARLA in this environment.
This tutorial was tested in Ubuntu 16.04 and using NVIDIA 384.11 drivers.
## Preliminaries
A few things need to be working in your server beforehand: the latest NVIDIA
drivers, OpenGL, VirtualGL (VGL), and TurboVNC 2.11.
<h4>NVIDIA Drivers</h4>
Download and install [NVIDIA-drivers][nvidialink] with typical tutorials.
[nvidialink]: http://www.nvidia.es/Download/index.aspx
<h4>OpenGL</h4>
OpenGL is necessary for VirtualGL. Normally OpenGL can be installed through
apt.
sudo apt-get install freeglut3-dev mesa-utils
<h4>VGL</h4>
Follow this tutorial and install vgl:
[Installing VGL](https://virtualgl.org/vgldoc/2_2_1/#hd004001)
<h4>TurboVNC</h4>
Follow the tutorial below to install TurboVNC 2.11:
[Installing TurboVNC](https://cdn.rawgit.com/TurboVNC/turbovnc/2.1.1/doc/index.html#hd005001)
!!! warning
Take care on which VNC you install as it may not be compatible with
Unreal. The one above was the only one that worked for me.
<h4>Extra Packages</h4>
These extra packages were necessary to make Unreal work.
sudo apt install x11-xserver-utils libxrandr-dev
<h4>Configure your X</h4>
You must generate an X configuration compatible with your NVIDIA card and able
to run without a display. For that, the following command worked:
sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024
## Emulating The Virtual Display
Run your own Xorg. Here I use number 7, but it could be labeled with any free
number
sudo nohup Xorg :7 &
Run an auxiliary remote VNC-Xserver. This will create a virtual display "8".
/opt/TurboVNC/bin/vncserver :8
If everything is working fine the following command should run smoothly.
DISPLAY=:8 vglrun -d :7.0 glxinfo
Note: this will run glxinfo on Xserver 7, device 0, meaning you are selecting
GPU 0 on your machine. To run on another GPU, such as GPU 1, run:
DISPLAY=:8 vglrun -d :7.1 glxinfo
<h3> Extra </h3>
If you want to disable the need for sudo when creating the 'nohup Xorg', go to
the '/etc/X11/Xwrapper.config' file and change 'allowed_users=console' to
'allowed_users=anybody'.
It may be needed to stop all Xorg servers before running nohup Xorg. The command
for that could change depending on your system. Generally for Ubuntu 16.04 you
should use:
sudo service lightdm stop
## Running CARLA
Now, finally, to run CARLA on a certain gpu_number placed in a certain
$CARLA_PATH, run.
DISPLAY=:8 vglrun -d :7.<gpu_number> $CARLA_PATH/CarlaUE4/Binaries/Linux/CarlaUE4


@ -1,228 +0,0 @@
<h1>Configuring the simulation</h1>
Before you start running your own experiments, there are a few details to take
into account when configuring your simulation. In this document we cover
the most important ones.
Changing the map
----------------
The map can be changed from the Python API with
```py
world = client.load_world('Town01')
```
this creates an empty world with default settings. The list of currently
available maps can be retrieved with
```py
print(client.get_available_maps())
```
To reload the world using the current active map, use
```py
world = client.reload_world()
```
Graphics Quality
----------------
<h4>Vulkan vs OpenGL</h4>
Vulkan _(if installed)_ is the default graphics API used by Unreal Engine and CARLA on Linux.
It consumes more memory but performs faster.
On the other hand, OpenGL is less memory consuming but performs slower than Vulkan.
!!! note
Vulkan is an experimental build so it may have some bugs when running the simulator.
OpenGL API can be selected with the flag `-opengl`.
```sh
> ./CarlaUE4.sh -opengl
```
<h4>Quality levels</h4>
Currently, there are two levels of quality, `Low` and `Epic` _(default)_. The image below shows
how the simulator has to be started with the appropriate flag in order to set a quality level
and the difference between qualities.
![](img/epic_quality_capture.png) | ![](img/low_quality_capture.png)
:-------------------------:|:-------------------------:
`./CarlaUE4.sh -quality-level=Epic` | `./CarlaUE4.sh -quality-level=Low`
**Low mode runs significantly faster**, ideal for users that don't rely on quality precision.
!!! important
The issue that made quality levels show an abnormal whiteness has been fixed. If the problem persists delete `GameUserSettings.ini` as it is saving the previous settings. It will be generated again in the next run. __Ubuntu path:__ ` ~/.config/Epic/CarlaUE4/Saved/Config/LinuxNoEditor/` __Windows path:__ `<Package folder>\WindowsNoEditor\CarlaUE4\Saved\Config\WindowsNoEditor\`
Running off-screen
------------------
In Linux, you can force the simulator to run off-screen by setting the
environment variable `DISPLAY` to empty
!!! important
**DISPLAY= only works with OpenGL**<br>
Unreal Engine currently crashes when Vulkan is used when running
off-screen. Therefore the `-opengl` flag must be added to force the engine to
use OpenGL instead. We hope that this issue is addressed by Epic in the near
future.
```sh
# Linux
DISPLAY= ./CarlaUE4.sh -opengl
```
This launches the simulator without a window; of course, you can still
connect to it normally and run the example scripts. Note that with this method,
in multi-GPU environments, it is not possible to select the GPU that the
simulator will use for rendering. To do so, follow the instructions in
[Running without display and selecting GPUs](carla_headless.md).
No-rendering mode
-----------------
It is possible to completely disable rendering in the simulator by enabling
_no-rendering mode_ in the world settings. This way it is possible to simulate
traffic and road behaviours at very high frequencies without the rendering
overhead. Note that in this mode, cameras and other GPU-based sensors return
empty data.
```py
settings = world.get_settings()
settings.no_rendering_mode = True
world.apply_settings(settings)
```
Fixed time-step
---------------
The time-step is the _simulation-time_ elapsed between two steps of the
simulation. In video-games, this _simulation-time_ is almost always adjusted to
real time for better realism. This is achieved by having a **variable
time-step** that adjusts the simulation to keep up with real-time. In
simulations however, it is better to detach the _simulation-time_ from
real-time, and let the simulation run as fast as possible using a **fixed
time-step**. Doing so, we are not only able to simulate longer periods in less
time, but also gain repeatability by reducing the floating-point arithmetic
errors that a variable time-step introduces.
CARLA can be run in both modes.
<h4>Variable time-step</h4>
The simulation tries to keep up with real-time. To do so, the time-step is
slightly adjusted each update. Simulations are not repeatable. By default, the
simulator starts in this mode, but it can be re-enabled at any time with:
```py
settings = world.get_settings()
settings.fixed_delta_seconds = None
world.apply_settings(settings)
```
<h4>Fixed time-step</h4>
The simulation runs as fast as possible, simulating the same time increment on
each step. To enable this mode set a fixed delta seconds in the world settings.
For instance, to run the simulation at a fixed time-step of 0.05 seconds (20
FPS), apply the following settings:
```py
settings = world.get_settings()
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)
```
!!! important
    **Do not decrease the frame-rate below 10 FPS.**<br>
    Our settings are adjusted to clamp the physics engine to a minimum of 10
    FPS. If the game tick falls below this, the physics engine will still
    simulate at 10 FPS. In that case, things dependent on the game's delta time
    are no longer in sync with the physics engine.
    Ref. [#695](https://github.com/carla-simulator/carla/issues/695)
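The trade-off is easy to quantify. A quick back-of-the-envelope calculation, in plain Python with no simulator involved, shows what a given fixed delta implies in terms of frame-rate and number of steps:

```py
fixed_delta_seconds = 0.05  # the time-step used in the example above

# Simulated frames per simulated second:
fps = 1.0 / fixed_delta_seconds
print(fps)  # 20.0

# Steps needed to simulate one hour of simulation-time:
steps = round(3600 / fixed_delta_seconds)
print(steps)  # 72000
```

If the server can compute each step in less than 0.05 s of wall-clock time, the simulation runs faster than real time; repeatability comes from every step using exactly the same delta.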
Synchronous mode
----------------
!!! important
    **Always run the simulator at a fixed time-step when using the synchronous
    mode**. Otherwise, the physics engine will try to recompute at once all the
    time spent waiting for the client; this usually results in inconsistent or
    not very realistic physics.
The client-simulator communication can be synchronized by using the _synchronous
mode_. When the synchronous mode is enabled, the simulation is halted each
update until a _tick_ message is received.
This is very useful when dealing with slow client applications, as the simulator
waits until the client is ready to continue. This mode can also be used to
synchronize data among sensors by waiting until all the data is received. Note
that data coming from GPU-based sensors (cameras) is usually generated with a
delay of a couple of frames with respect to data coming from CPU-based sensors.
The synchronous mode can be enabled at any time in the world settings.
```py
# Example: Synchronizing a camera with synchronous mode.
import queue

settings = world.get_settings()
settings.synchronous_mode = True
world.apply_settings(settings)
camera = world.spawn_actor(blueprint, transform)
image_queue = queue.Queue()
camera.listen(image_queue.put)
while True:
world.tick()
image = image_queue.get()
```
For a more complex scenario synchronizing data from several sensors, take a look
at the example [synchronous_mode.py][syncmodelink].
[syncmodelink]: https://github.com/carla-simulator/carla/blob/master/PythonAPI/examples/synchronous_mode.py
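The per-tick pattern above generalizes to several sensors: keep one queue per sensor and, after each `world.tick()`, block on every queue so the step only completes once all the data for that frame has arrived. The bookkeeping can be sketched without the simulator, with a minimal stand-in playing the role of `carla.Sensor` (names here are illustrative):

```py
import queue

class FakeSensor:
    """Stand-in for carla.Sensor: remembers the callback given to listen()."""
    def listen(self, callback):
        self._callback = callback
    def produce(self, data):          # the simulator would call this internally
        self._callback(data)

def attach_queue(sensor):
    """Create a queue and make the sensor push its data into it."""
    q = queue.Queue()
    sensor.listen(q.put)
    return q

camera, lidar = FakeSensor(), FakeSensor()
queues = [attach_queue(camera), attach_queue(lidar)]

# One simulation step: after tick(), each sensor delivers its measurement.
camera.produce('image_0')
lidar.produce('points_0')

# Block until every sensor has delivered data for this frame.
frame_data = [q.get(timeout=2.0) for q in queues]
print(frame_data)  # ['image_0', 'points_0']
```

With the real API, `attach_queue` would be applied to each spawned sensor and the draining loop placed right after `world.tick()`.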
Command-line options
--------------------------
!!! important
    Some of the command-line options are not available in `Linux` due to the "Shipping" build.
    Therefore, the [`config.py`][configlink] script is needed to configure the simulation.
[configlink]: https://github.com/carla-simulator/carla/blob/master/PythonAPI/util/config.py
Some configuration examples:
```sh
> ./config.py --no-rendering # Disable rendering
> ./config.py --map Town05 # Change map
> ./config.py --weather ClearNoon # Change weather
...
```
To check all the available configurations, run the following command:
```sh
> ./config.py --help
```
Commands directly available:
* `-carla-rpc-port=N` Listen for client connections at port N; the streaming port is set to N+1 by default.
* `-carla-streaming-port=N` Specify the port for sensor data streaming; use 0 to get a random unused port.
* `-quality-level={Low,Epic}` Change graphics quality level.
* [Full list of UE4 command-line arguments][ue4clilink] (note that many of these won't work in the release version).
Example:
```sh
> ./CarlaUE4.sh -carla-rpc-port=3000
```
[ue4clilink]: https://docs.unrealengine.com/en-US/Programming/Basics/CommandLineArguments
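As noted above, the streaming port defaults to the RPC port plus one unless set explicitly. A tiny helper states the rule; the function name is illustrative and not part of any CARLA tool:

```py
def resolve_ports(rpc_port, streaming_port=None):
    """Return (rpc, streaming) ports; streaming defaults to rpc + 1.

    A streaming port of 0 asks for a random unused port.
    """
    if streaming_port is None:
        streaming_port = rpc_port + 1
    return rpc_port, streaming_port

print(resolve_ports(3000))     # (3000, 3001), matching the example above
print(resolve_ports(2000, 0))  # (2000, 0): random unused streaming port
```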
View File
@ -1,5 +1,6 @@
# Contributor Covenant Code of Conduct
---
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
@ -9,6 +10,7 @@ size, disability, ethnicity, gender identity and expression, level of experience
education, socio-economic status, nationality, personal appearance, race,
religion, or sexual identity and orientation.
---
## Our Standards
Examples of behavior that contributes to creating a positive environment
@ -31,6 +33,7 @@ Examples of unacceptable behavior by participants include:
* Other conduct which could reasonably be considered inappropriate in a
professional setting
---
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
@ -43,6 +46,7 @@ that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
---
## Scope
This Code of Conduct applies both within project spaces and in public spaces
@ -52,6 +56,7 @@ address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
---
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
@ -65,6 +70,7 @@ Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
---
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
View File
@ -1,12 +1,12 @@
# Coding standard
-------
---
## General
* Use spaces, not tabs.
* Avoid adding trailing whitespace as it creates noise in the diffs.
-------
---
## Python
* Comments should not exceed 80 columns, code should not exceed 120 columns.
@ -19,7 +19,7 @@
[pylintlink]: https://www.pylint.org/
[pep8link]: https://www.python.org/dev/peps/pep-0008/
-------
---
## C++
* Comments should not exceed 80 columns, code may exceed this limit a bit in
View File
@ -1,5 +1,4 @@
# Contributing to CARLA
=====================
We are more than happy to accept contributions!
@ -10,7 +9,7 @@ How can I contribute?
* Improving documentation
* Code contributions
---------
---
## Reporting bugs
Use our [issue section][issueslink] on GitHub. Please check before that the
@ -21,7 +20,7 @@ issue is not already reported, and make sure you have read our
[docslink]: http://carla.readthedocs.io
[faqlink]: http://carla.readthedocs.io/en/latest/faq/
---------
---
## Feature requests
Please check first the list of [feature requests][frlink]. If it is not there
@ -30,7 +29,7 @@ your request as a new issue.
[frlink]: https://github.com/carla-simulator/carla/issues?q=is%3Aissue+is%3Aopen+label%3A%22feature+request%22+sort%3Acomments-desc
-------
---
## Improving documentation
If you feel something is missing in the documentation, please don't hesitate to
@ -61,7 +60,7 @@ Once you are done with your changes, please submit a pull-request.
> mkdocs serve
```
------------
---
## Code contributions
So you are considering making a code contribution, great! We love to have
@ -96,7 +95,7 @@ the current documentation if you feel confident enough.
#### Coding standard
Please follow the current [coding standard](coding_standard.md) when submitting
Please follow the current [coding standard](cont_coding_standard.md) when submitting
new code.
#### Pull-requests
View File
@ -6,7 +6,7 @@ followed in order to contribute to the documentation.
We use a mix of markdown and HTML tags to customize the documentation along with an
[`extra.css`](https://github.com/carla-simulator/carla/tree/master/Docs/extra.css) file.
-------
---
## Rules
@ -21,7 +21,7 @@ We use a mix of markdown and HTML tags to customize the documentation along with
* Use `------` underlining a Heading or `#` hierarchy to make headings and show them in the
navigation bar.
--------
---
## Exceptions
* Documentation generated via python scripts like PythonAPI reference
View File
@ -16,7 +16,7 @@ This section will cover the basics: from spawning up to destruction and their di
* Vehicles
* Walkers
---------------
---
## Blueprints
These layouts allow the user to smoothly add new actors into the simulation. They basically are already-made models with a series of attributes listed, some of which are modifiable and others are not: vehicle color, amount of channels in a lidar sensor, _fov_ in a camera, a walker's speed. All of these can be changed at will. All the available blueprints are listed in the [blueprint library](bp_library.md) with their attributes and a tag to identify which can be set by the user.
@ -53,9 +53,9 @@ for attr in blueprint:
blueprint.set_attribute(attr.id, random.choice(attr.recommended_values))
```
!!! Note
Users can create their own vehicles, take a look at the tutorials in __How to... (content)__ to learn on that. Contributors can [add their new content to CARLA](dev/how_to_upgrade_content.md).
Users can create their own vehicles, take a look at the tutorials in __How to... (content)__ to learn on that. Contributors can [add their new content to CARLA](tuto_D_contribute_assets.md).
---------------
---
## Actor life cycle
!!! Important
@ -141,7 +141,7 @@ Actors are not destroyed when the Python script finishes, they remain and the wo
!!! Important
Destroying an actor blocks the simulator until the process finishes.
---------------
---
## Types of actors
#### Sensors
@ -222,7 +222,7 @@ print(box.extent) # XYZ half-box extents in meters.
[carla.Walker](python_api.md#carla.Walker) are moving actors and so, work in quite a similar way as vehicles do. Control over them is provided by controllers:
* __[carla.WalkerControl](python_api.md#carla.Walker)__: to move the pedestrian around with a certain direction and speed. It also allows them to jump.
* __[carla.WalkerBoneControl](python_api.md#carla.Walker)__: provides control over the specific bones of the 3D model. The skeleton structure and how to control it is summarized in this __[How to](walker_bone_control.md)__.
* __[carla.WalkerBoneControl](python_api.md#carla.Walker)__: provides control over the specific bones of the 3D model. The skeleton structure and how to control it is summarized in this __[How to](tuto_G_control_walker_skeletons.md)__.
Walkers can be AI controlled. They do not have an autopilot mode, but there is another actor, [carla.WalkerAIController](python_api.md#carla.WalkerAIController) that, when spawned attached to a walker, can move them around:
@ -242,12 +242,12 @@ ai_controller.stop()
```
When a walker reaches the target location, they will automatically walk to another random point. If the target point is not reachable, then they reach the closest point of the area where they are.
For a more advanced reference on how to use this, take a look at [this recipe](python_cookbook.md#walker-batch-recipe) where a lot of walkers is spawned and set to wander around using batches.
For a more advanced reference on how to use this, take a look at [this recipe](ref_code_recipes.md#walker-batch-recipe) where a lot of walkers is spawned and set to wander around using batches.
!!! Note
To **destroy the pedestrians**, the AI controller needs to be stopped first and then, both actor and controller should be destroyed.
---------------
---
That is a wrap as regarding actors in CARLA.
The next step should be learning more about the map, roads and traffic in CARLA. Keep reading to learn more or visit the forum to post any doubts or suggestions that have come to mind during this reading:
<div text-align: center>
View File
@ -1,7 +1,7 @@
# Core concepts
This section summarizes the main features and modules in CARLA. While this page is just an overview, the rest of the information can be found in their respective pages, including fragments of code and in-depth explanations.
In order to learn everything about the different classes and methods in the API, take a look at the [Python API reference](python_api.md). There is also another reference named [Code recipes](python_cookbook.md) containing some of the most common fragments of code regarding different functionalities that could be specially useful during these first steps.
In order to learn everything about the different classes and methods in the API, take a look at the [Python API reference](python_api.md). There is also another reference named [Code recipes](ref_code_recipes.md) containing some of the most common fragments of code regarding different functionalities that could be specially useful during these first steps.
* [__First steps__](#first-steps)
* 1st. World and client
@ -14,7 +14,7 @@ In order to learn everything about the different classes and methods in the API,
**This documentation refers to CARLA 0.9.X**. <br>
The API changed significantly from previous versions (0.8.X). There is another documentation regarding those versions that can be found [here](https://carla.readthedocs.io/en/stable/getting_started/).
---------------
---
## First steps
#### 1st. World and client
@ -55,7 +55,7 @@ Sensors are one of the most important actors in CARLA and their use can be quite
Sensors wait for some event to happen to gather data and then call for a function defining what they should do. Depending on which, sensors retrieve different types of data in different ways and their usage varies substantially.
---------------
---
## Advanced steps
Some more complex elements and features in CARLA are listed here to make newcomers familiar with their existence. However it is highly encouraged to first take a closer look to the pages regarding the first steps in order to learn the basics.
@ -65,7 +65,7 @@ Some more complex elements and features in CARLA are listed here to make newcome
- **Simulation time and synchrony:** Everything regarding the simulation time and how the server runs the simulation depending on clients.
- **Traffic manager:** This module is in charge of every vehicle set to autopilot mode. It conducts the traffic in the city for the simulation to look like a real urban environment.
---------------
---
That sums up the basics necessary to understand CARLA.
However, these broad strokes are just a big picture of the system. The next step should be learning more about the world of the simulation and the clients connecting to it. Keep reading to learn more or visit the forum to post any doubts or suggestions that have come to mind during this reading:
View File

@ -9,7 +9,7 @@ After discussing about the world and its actors, it is time to put everything in
* Waypoints
* [__Map navigation__](#map-navigation)
---------------
---
## The map
Understanding the map in CARLA is equivalent to understanding the road. All of the maps have an OpenDRIVE file defining the road layout fully annotated. The way the [OpenDRIVE standard 1.4](http://www.opendrive.org/docs/OpenDRIVEFormatSpecRev1.4H.pdf) defines roads, lanes, junctions, etc. is extremely important. It determines the possibilities of the API and the reasoning behind many decisions made.
@ -40,7 +40,8 @@ So far there are seven different maps available. Each of these has a specific st
|__Town 07__ | A rural environment with narrow roads, barely any traffic lights and barns.|
<br>
Users can also [customize a map](dev/map_customization.md) or even [create a new map](how_to_make_a_new_map.md) to be used in CARLA. These are more advanced steps and have been developed in their own tutorials.
Users can also [customize a map](tuto_A_map_customization.md) or even [create a new map](tuto_A_map_creation.md) to be used in CARLA. These are more advanced steps and have been developed in their own tutorials.
#### Lanes
@ -103,7 +104,7 @@ while True:
Waypoints can also find their equivalent at the center of an adjacent lane (if said lane exists) using `get_right_lane()` and `get_left_lane()`. This is useful to find the next waypoint on a neighbour lane to then perform a lane change:
---------------
---
## Map Navigation
The instance of the map is provided by the world. Once it is retrieved, it provides access to different methods that will be useful to create routes and make vehicles roam around the city and reach goal destinations:
@ -144,7 +145,7 @@ my_geolocation = map.transform_to_geolocation(vehicle.transform)
info_map = map.to_opendrive()
```
---------------
---
That is a wrap as regarding maps and navigation around the cities in CARLA.
The next step should be learning more about sensors, the different types and the data they retrieve. Keep reading to learn more or visit the forum to post any doubts or suggestions that have come to mind during this reading:
<div text-align: center>
View File
@ -13,7 +13,7 @@ This page summarizes everything necessary to start handling sensors including so
* Detectors
* Other
---------------
---
## Sensors step-by-step
The class [carla.Sensor](python_api.md#carla.Sensor) defines a special type of actor able to measure and stream data.
@ -90,7 +90,8 @@ Sensor data differs a lot between sensor types, but it is always tagged with:
| `transform` | carla.Transform | World reference of the sensor at the time of the measurement. |
<br>
---------------
---
## Types of sensors
#### Cameras
@ -105,6 +106,7 @@ __Retrieve data:__ every simulation step.
| Semantic segmentation | [carla.Image](python_api.md#carla.Image) | Renders elements in the field of view with a specific color according to their tags. |
<br>
#### Detectors
Sensors that retrieve data when a parent object they are attached to registers a specific event in the simulation.
@ -117,6 +119,7 @@ __Retrieve data:__ when triggered.
| Obstacle | [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleEvent) | Detects possible obstacles ahead of its parent. |
<br>
#### Other
This group gathers sensors with different functionalities: navigation, measure physical properties of an object and provide 2D and 3D models of the scene.
@ -130,7 +133,8 @@ __Retrieve data:__ every simulation step.
| Radar | [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) | 2D point map that models elements in sight and their movement regarding the sensor. |
<br>
---------------
---
That is a wrap on sensors and how they retrieve simulation data, and thus the introduction to CARLA is finished. However there is yet a lot to learn. Some of the different paths to follow now are listed here:
* __Gain some practise__: if diving alone in CARLA is still frightening, it may be a good idea to try some of the code recipes provided in this documentation and combine them with the example scripts or some ideas of your own.
@ -138,7 +142,7 @@ That is a wrap on sensors and how do these retrieve simulation data and thus, th
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="../python_cookbook" target="_blank" class="btn btn-neutral" title="Code recipes">
<a href="ref_code_recipes.md" target="_blank" class="btn btn-neutral" title="Code recipes">
Code recipes</a>
</p>
</div>
@ -148,7 +152,7 @@ Code recipes</a>
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="../simulation_time_and_synchrony" target="_blank" class="btn btn-neutral" title="Synchrony and time-step">
<a href="adv_synchrony_timestep.md" target="_blank" class="btn btn-neutral" title="Synchrony and time-step">
Synchrony and time-step</a>
</p>
</div>
@ -158,7 +162,7 @@ Synchrony and time-step</a>
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="../python_api" target="_blank" class="btn btn-neutral" title="Python API reference">
<a href="python_api.md" target="_blank" class="btn btn-neutral" title="Python API reference">
Python API reference</a>
</p>
</div>
View File
@ -13,7 +13,8 @@ This tutorial goes from defining the basics and creation of these elements to de
* Weather
* World snapshots
* Settings
---------------
---
## The client
Clients are one of the main elements in the CARLA architecture. Using these, the user can connect to the server, retrieve information from the simulation and command changes. That is done via scripts where the client identifies itself and connects to the world to then operate with the simulation.
@ -23,7 +24,7 @@ The __carla.Client__ class is explained thoroughly in the [PythonAPI reference](
#### Client creation
Two things are needed: The IP address identifying it and two TCP ports the client will be using to communicate with the server. There is an optional third parameter, an `int` to set the working threads that by default is set to all (`0`). [This code recipe](python_cookbook.md#parse-client-creation-arguments) shows how to parse these as arguments when running the script.
Two things are needed: The IP address identifying it and two TCP ports the client will be using to communicate with the server. There is an optional third parameter, an `int` to set the working threads that by default is set to all (`0`). [This code recipe](ref_code_recipes.md#parse-client-creation-arguments) shows how to parse these as arguments when running the script.
```py
client = carla.Client('localhost', 2000)
@ -64,7 +65,7 @@ The main purpose of the client object is to get or change the world and many tim
The list of features that are accessed from the client object are:
* __Traffic manager:__ this module is in charge of every vehicle set to autopilot to recreate an urban environment.
* __[Recorder](recorder_and_playback.md):__ allows to reenact a previous simulation using the information stored in the [snapshots]() summarizing the simulation state per frame.
* __[Recorder](adv_recorder.md):__ allows to reenact a previous simulation using the information stored in the [snapshots]() summarizing the simulation state per frame.
As far as batches are concerned, the latest sections in the Python API describe the [available commands](python_api.md#command.ApplyAngularVelocity). These are common functions that have been prepared to be executed in batches or lots so that they are applied during the same step of the simulation.
The following example would destroy all the vehicles contained in `vehicles_list` at once:
@ -74,7 +75,7 @@ client.apply_batch([carla.command.DestroyActor(x) for x in vehicles_list])
The method `apply_batch_sync()` is only available when running CARLA in [synchronous mode]() and allows to return a __command.Response__ per command applied.
---------------
---
## The world
This class acts as the major ruler of the simulation and its instance should be retrieved by the client. It does not contain the model of the world itself (that is part of the [Map](core_map.md) class), but rather is an anchor for the simulation. Most of the information and general settings can be accessed from this class, for example:
@ -131,7 +132,7 @@ debug = world.debug
debug.draw_box(carla.BoundingBox(actor_snapshot.get_transform().location,carla.Vector3D(0.5,0.5,2)),actor_snapshot.get_transform().rotation, 0.05, carla.Color(255,0,0,0),0)
```
This example is extended in this [code recipe](python_cookbook.md#debug-bounding-box-recipe) to draw boxes for every actor in a world snapshot. Take a look at it and at the Python API reference to learn more about this.
This example is extended in this [code recipe](ref_code_recipes.md#debug-bounding-box-recipe) to draw boxes for every actor in a world snapshot. Take a look at it and at the Python API reference to learn more about this.
#### World snapshots
@ -160,9 +161,9 @@ actor_snapshot = world_snapshot.find(actual_actor.id) #Get an actor's snapshot
#### World settings
The world also has access to some advanced configurations for the simulation that determine rendering conditions, steps in the simulation time and synchrony between clients and server. These are advanced concepts that do better if untouched by newcomers.
For the time being let's say that CARLA by default runs in with its best quality, with a variable time-step and asynchronously. The helper class is [carla.WorldSettings](python_api.md#carla.WorldSettings). To dive further in this matters take a look at the __Advanced steps__ section of the documentation and read about [synchrony and time-step](simulation_time_and_synchrony.md) or [rendering_options.md](../rendering_options).
For the time being let's say that CARLA by default runs in with its best quality, with a variable time-step and asynchronously. The helper class is [carla.WorldSettings](python_api.md#carla.WorldSettings). To dive further in this matters take a look at the __Advanced steps__ section of the documentation and read about [synchrony and time-step](adv_synchrony_timestep.md) or [rendering_options.md](adv_rendering_options.md).
---------------
---
That is a wrap on the world and client objects, the very first steps in CARLA.
The next step should be learning more about actors and blueprints to give life to the simulation. Keep reading to learn more or visit the forum to post any doubts or suggestions that have come to mind during this reading:
<div text-align: center>
View File
@ -1,7 +0,0 @@
<h1>CARLA Development</h1>
* [Map customization](map_customization.md)
* [Build system](build_system.md)
* [How to add a new sensor](how_to_add_a_new_sensor.md)
* [How to upgrade content](how_to_upgrade_content.md)
* [How to make a release](how_to_make_a_release.md)
View File
@ -1,48 +0,0 @@
# Download
### Nightly build
> This is an automated build with the latest changes pushed to our "master"
> branch. It contains the very last fixes and features that will be part of the
> next release, but also some experimental changes. Use at your own risk!
- [CARLA Nightly Build](http://carla-assets-internal.s3.amazonaws.com/Releases/Linux/Dev/CARLA_Latest.tar.gz)
### Development [[Documentation](https://carla.readthedocs.io/en/latest/)]
> These are the version of CARLA, more frequently updated and with the latest
> features. Keep in mind that the API and features in this channel can (and
> probably will) change.
- [CARLA 0.9.7](https://github.com/carla-simulator/carla/releases/tag/0.9.7)
- [CARLA 0.9.6](https://github.com/carla-simulator/carla/releases/tag/0.9.6)
- [CARLA 0.9.5](https://github.com/carla-simulator/carla/releases/tag/0.9.5)
- [CARLA 0.9.4](https://github.com/carla-simulator/carla/releases/tag/0.9.4)
- [CARLA 0.9.3](https://github.com/carla-simulator/carla/releases/tag/0.9.3)
- [CARLA 0.9.2](https://github.com/carla-simulator/carla/releases/tag/0.9.2)
- [CARLA 0.9.1](https://github.com/carla-simulator/carla/releases/tag/0.9.1)
- [CARLA 0.9.0](https://github.com/carla-simulator/carla/releases/tag/0.9.0)
- [CARLA 0.8.4](https://github.com/carla-simulator/carla/releases/tag/0.8.4)
- [CARLA 0.8.3](https://github.com/carla-simulator/carla/releases/tag/0.8.3)
### Stable [[Documentation](https://carla.readthedocs.io/en/stable/)]
> The most tested and robust release out there!
- [CARLA 0.8.2](https://github.com/carla-simulator/carla/releases/tag/0.8.2)
- - -
### Docker
All the versions are also available to pull from DockerHub
```sh
docker pull carlasim/carla:X.X.X
```
Use tag "latest" for the nightly build
```sh
docker pull carlasim/carla:latest
```
View File
@ -1,134 +0,0 @@
<h1>Getting started with CARLA</h1>
![Welcome to CARLA](img/welcome.png)
!!! important
This tutorial refers to the latest development versions of CARLA, 0.9.0 or
later. For the documentation of the stable version please switch to the
[stable branch](https://carla.readthedocs.io/en/stable/getting_started/).
Welcome to CARLA! This tutorial provides the basic steps for getting started
using CARLA.
CARLA consists mainly of two modules, the **CARLA Simulator** and the **CARLA
Python API** module. The simulator does most of the heavy work, controls the
logic, physics, and rendering of all the actors and sensors in the scene; it
requires a machine with a dedicated GPU to run. The CARLA Python API is a module
that you can import into your Python scripts, it provides an interface for
controlling the simulator and retrieving data. With this Python API you can, for
instance, control any vehicle in the simulation, attach sensors to it, and read
back the data these sensors generate. Most of the aspects of the simulation are
accessible from our Python API, and more will be in future releases.
![CARLA Modules](img/carla_modules.png)
<h2>How to run CARLA</h2>
First of all, download the latest release from our GitHub page and extract all
the contents of the package in a folder of your choice.
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="https://github.com/carla-simulator/carla/blob/master/Docs/download.md" target="_blank" class="btn btn-neutral" title="Go to the latest CARLA release">
<span class="icon icon-github"></span> Get the latest release</a>
</p>
<!-- Nightly build button -->
<p>
<a href="http://carla-assets-internal.s3.amazonaws.com/Releases/Linux/Dev/CARLA_Latest.tar.gz" target="_blank" class="btn btn-neutral" title="Go to the nightly CARLA build">
<span class="icon fa-cloud-download"></span> Get the nightly build</a>
</p>
</div>
The release package contains a precompiled version of the simulator, the Python
API module, and some Python scripts with usage examples. In order to run our
usage examples, you may need to install the following Python modules
```sh
pip install --user pygame numpy
```
Let's start by running the simulator. Launch a terminal window and go to the
folder you extracted CARLA to. Start the simulator with the following command:
_Linux:_
```sh
./CarlaUE4.sh
```
_Windows:_
```cmd
CarlaUE4.exe
```
This launches a window with a view over the city. This is the "spectator"
view, you can fly around the city using the mouse and WASD keys, but you cannot
interact with the world in this view. The simulator is now running as a server,
waiting for a client app to connect and interact with the world.
!!! note
CARLA requires two available TCP ports on your computer, by default 2000 and
2001. Make sure you don't have a firewall or another application blocking
those ports. Alternatively, you can manually change the port by launching
the simulator with the command-line argument `-carla-port=N`, the second
port will be automatically set to `N+1`.
Let's see a few examples of these clients. Open a new terminal and navigate to
the `PythonAPI/examples` folder, where our python clients are located:
```sh
cd PythonAPI/examples
```
Let's add now some life to the city by running:
```sh
python tm_spawn_npc.py
```
Which will create an appropriate amount of cars given the specs of your machine.
Alternatively, you can use the `-n <NUMBER_OF_VEHICLES>` and/or
`-w <NUMBER_OF_WALKERS>` flag to choose how many actors you want to create.
!!! note
We still support the old `spawn_npc.py` script, even if it will be removed
soon. This one uses the old and simple autopilot mode.
With this script we are adding vehicles to the world driving in "autopilot"
mode. Back to the simulator window we should see these vehicles driving around
the city. They will keep driving randomly until we stop the script. Let's leave
them there for now.
Now, it's nice and sunny in CARLA, but that's not a very interesting driving
condition. One of the cool features of CARLA is that you can control the weather
and lighting conditions of the world. We'll launch now a script that dynamically
controls the weather and time of the day, open yet another terminal window and
execute
```sh
python dynamic_weather.py
```
The city is now ready for us to drive, we can finally run
```sh
python manual_control.py
```
This should open a new window with a 3rd person view of a car, you can drive
this car with the WASD/arrow keys. Press 'h' to see all the options available.
![manual_control.py](img/manual_control.png)
As you have noticed, we can connect as many scripts as we want to control the
simulation and gather data. Even someone with a different computer can jump now
into your simulation and drive along with you
```sh
python manual_control.py --host=<your-ip-address-here>
```
<br>
Now that we have covered the basics, the next section looks at some details of
the Python API to help you write your own scripts.

View File

@ -1,13 +1,13 @@
<h1>CARLA Documentation</h1>
# CARLA Documentation
Welcome to the CARLA documentation.
This page contains the index with a brief explanation of each section for clarity.
Feel free to explore the documentation on your own, however, here are a few tips for newcomers:
* __Install CARLA__: visit the [Quickstart installation](../dev/quickstart) to get the CARLA releases or make the build for a desired platform.
* __Start using CARLA__: there is a section titled [First steps](../core_concepts), highly recommended for newcomers.
* __Doubts on the API__: there is a handy [Python API reference](../python_api) to consult classes and methods.
* __Install CARLA__: visit the [Quickstart installation](start_quickstart.md) to get the CARLA releases or make the build for a desired platform.
* __Start using CARLA__: there is a section titled [First steps](core_concepts.md), highly recommended for newcomers.
* __Doubts on the API__: there is a handy [Python API reference](python_api.md) to consult classes and methods.
Besides that, there is also the CARLA forum where the community gathers to share issues, suggestions and solutions:
<div class="build-buttons">
@ -15,51 +15,52 @@ Besides that, there is also the CARLA forum where the community gathers to share
CARLA forum</a>
</div>
!!! important
!!! Important
This is documentation for CARLA 0.9.0 or later. Previous documentation is in the [stable branch](https://carla.readthedocs.io/en/stable/).
---------------
<h3>Getting started</h3>
---
## Getting started
<p style="padding-left:30px;line-height:1.8">
<a href="../getting_started/introduction"><b>
<a href="../start_introduction"><b>
Introduction
</b></a>
— Capabilities and intentions behind the project.
<a href="../getting_started/quickstart"><b>
<a href="../start_quickstart"><b>
Quickstart installation
</b></a>
— Get the CARLA releases.
</p>
<h3>Building CARLA</h3>
## Building CARLA
<p style="padding-left:30px;line-height:1.8">
<a href="../how_to_build_on_linux"><b>
<a href="../build_linux"><b>
Linux build
</b></a>
— Make the build on Linux.
<a href="../how_to_build_on_windows"><b>
<a href="../build_windows"><b>
Windows build
</b></a>
— Make the build on Windows.
<a href="../update_carla"><b>
<a href="../build_update"><b>
Update CARLA
</b></a>
— Get up to date with the latest content.
<a href="../dev/build_system"><b>
<a href="../build_system"><b>
Build system
</b></a>
— Learn about the build and how it is made.
<a href="../carla_docker"><b>
<a href="../build_docker"><b>
Running in a Docker
</b></a>
— Run CARLA using a container solution.
<a href="../faq"><b>
<a href="../build_faq"><b>
F.A.Q.
</b></a>
— Some of the most frequent issues for newcomers.
</p>
<h3>First steps</h3>
## First steps
<p style="padding-left:30px;line-height:1.8">
<a href="../core_concepts"><b>
Core concepts
@ -78,36 +79,36 @@ CARLA forum</a>
</b></a>
— Discover the different maps and how to move around.
<a href="../core_sensors"><b>
(broken) 4th. Sensors and data
4th. Sensors and data
</b></a>
— Retrieve simulation data using sensors.
<h3>Advanced steps</h3>
## Advanced steps
<p style="padding-left:30px;line-height:1.8">
<a href="../recorder_and_playback"><b>
<a href="../adv_recorder"><b>
Recorder
</b></a>
— Store all the events in a simulation and play it again.
<a href="../rendering_options"><b>
<a href="../adv_rendering_options"><b>
Rendering options
</b></a>
— Different settings, from quality to no-render or off-screen runs.
<a href="../simulation_time_and_synchrony"><b>
<a href="../adv_synchrony_timestep"><b>
Synchrony and time-step
</b></a>
— Client-server communication and simulation time.
<a href="../traffic_manager"><b>
(broken) Traffic manager
</b></a>
<b>
(soon) Traffic manager
</b>
— Feature to handle autopilot vehicles and emulate traffic.
<h3>References</h3>
## References
<p style="padding-left:30px;line-height:1.8">
<a href="../python_api"><b>
Python API reference
</b></a>
— Classes and methods in the Python API.
<a href="../python_cookbook"><b>
<a href="../ref_code_recipes"><b>
Code recipes
</b></a>
— Code fragments commonly used.
@ -115,95 +116,95 @@ CARLA forum</a>
Blueprint library
</b></a>
— Blueprints provided to spawn actors.
<a href="../cpp_reference"><b>
<a href="../ref_cpp"><b>
C++ reference
</b></a>
— Classes and methods in CARLA C++.
<a href="../recorder_binary_file_format"><b>
<a href="../ref_recorder_binary_file_format"><b>
Recorder binary file format
</b></a>
— Detailed explanation of the recorder file format.
<a href="../ref_sensors"><b>
(broken) Sensors reference
Sensors reference
</b></a>
— Everything about sensors and the data they retrieve.
<h3>Tutorials — General</h3>
## Tutorials — General
<p style="padding-left:30px;line-height:1.8">
<a href="../how_to_add_friction_triggers"><b>
<a href="../tuto_G_add_friction_triggers"><b>
Add friction triggers
</b></a>
— Define dynamic box triggers for wheels.
<a href="../how_to_control_vehicle_physics"><b>
<a href="../tuto_G_control_vehicle_physics"><b>
Control vehicle physics
</b></a>
— Set runtime changes on a vehicle physics.
<a href="../walker_bone_control"><b>
<a href="../tuto_G_control_walker_skeletons"><b>
Control walker skeletons
</b></a>
— Skeleton and animation for walkers explained.
<h3>Tutorials — Assets</h3>
## Tutorials — Assets
<p style="padding-left:30px;line-height:1.8">
<a href="../dev/how_to_upgrade_content"><b>
Contribute with new assets
</b></a>
— Add new content to CARLA.
<a href="../how_to_add_assets"><b>
<a href="../tuto_A_import_assets"><b>
Import new assets
</b></a>
— Use personal assets in CARLA.
<a href="../dev/map_customization"><b>
Map customization
</b></a>
— Edit an existing map.
<a href="../how_to_make_a_new_map"><b>
<a href="../tuto_A_map_creation"><b>
Map creation
</b></a>
— Guidelines to create a new map.
<a href="../asset_packages_for_dist"><b>
<a href="../tuto_A_map_customization"><b>
Map customization
</b></a>
— Edit an existing map.
<a href="../tuto_A_standalone_packages"><b>
Standalone asset packages
</b></a>
— Import assets into Unreal Engine and prepare them for package distribution.
<a href="../epic_automotive_materials"><b>
Use Automotive materials
<a href="../tuto_A_epic_automotive_materials"><b>
Use Epic's Automotive materials
</b></a>
— Apply Epic's set of Automotive materials to vehicles for a more realistic painting.
<a href="../how_to_model_vehicles"><b>
<a href="../tuto_A_vehicle_modelling"><b>
Vehicle modelling
</b></a>
— Guidelines to create a new vehicle for CARLA.
<h3>Tutorials — Developers</h3>
## Tutorials — Developers
<p style="padding-left:30px;line-height:1.8">
<a href="../dev/how_to_add_a_new_sensor"><b>
<a href="../tuto_D_contribute_assets"><b>
Contribute with new assets
</b></a>
— Add new content to CARLA.
<a href="../tuto_D_create_sensor"><b>
Create a sensor
</b></a>
— The basics on how to add a new sensor to CARLA.
<a href="../dev/how_to_make_a_release"><b>
<a href="../tuto_D_make_release"><b>
Make a release
</b></a>
— For developers who want to publish a release.
<a href="../bp_library"><b>
Pedestrian navigation physics
<a href="../tuto_D_generate_pedestrian_navigation"><b>
Generate pedestrian navigation
</b></a>
— Generate the information needed for walkers to navigate a map.
<h3>Contributing</h3>
## Contributing
<p style="padding-left:30px;line-height:1.8">
<a href="../CONTRIBUTING"><b>
General guidelines
<a href="../cont_contribution_guidelines"><b>
Contribution guidelines
</b></a>
— The different ways to contribute to CARLA.
<a href="../coding_standard"><b>
Coding standard
</b></a>
— Guidelines to write proper code.
<a href="../doc_standard"><b>
Documentation standard
</b></a>
— Guidelines to write proper documentation.
<a href="../CODE_OF_CONDUCT"><b>
<a href="../cont_code_of_conduct"><b>
Code of conduct
</b></a>
— Some standards for CARLA, rights and duties for contributors.
<a href="../cont_coding_standard"><b>
Coding standard
</b></a>
— Guidelines to write proper code.
<a href="../cont_doc_standard"><b>
Documentation standard
</b></a>
— Guidelines to write proper documentation.

View File

@ -1,203 +0,0 @@
<h1>Measurements</h1>
!!! important
Since version 0.8.0 the measurements received by the client are in SI
units. All locations have been converted to `meters` and speeds to
`meters/second`.
Every frame the server sends a package with the measurements and images gathered
to the client. This document describes the details of these measurements.
Time-stamps
-----------
Every frame is described by three different counters/time-stamps
Key | Type | Units | Description
-------------------------- | --------- | ------------ | ------------
frame | uint64 | | Frame number (it is **not** restarted on each episode).
platform_timestamp | uint32 | milliseconds | Time-stamp of the current frame, as given by the OS.
game_timestamp | uint32 | milliseconds | In-game time-stamp, elapsed since the beginning of the current episode.
In real-time mode, the elapsed time between two time steps should be similar
for both platform and game time-stamps. When run in fixed time-step mode, the
game time-stamp increments in constant steps (delta=1/FPS) while the platform
time-stamp keeps the actual time elapsed.
Player measurements
-------------------
Key | Type | Units | Description
-------------------------- | ----------- | ------ | ------------
transform                  | Transform   |        | World transform of the player (contains a location and a rotation) with respect to the vehicle's mesh pivot.
bounding_box | BoundingBox | | Bounding box of the player.
acceleration | Vector3D | m/s^2 | Current acceleration of the player.
forward_speed | float | m/s | Forward speed of the player.
collision_vehicles | float | kg*m/s | Collision intensity with other vehicles.
collision_pedestrians | float | kg*m/s | Collision intensity with pedestrians.
collision_other | float | kg*m/s | General collision intensity (everything else but pedestrians and vehicles).
intersection_otherlane | float | | Percentage of the vehicle invading other lanes.
intersection_offroad | float | | Percentage of the vehicle off-road.
autopilot_control | Control | | Vehicle's autopilot control that would apply this frame.
<h4>Transform</h4>
The transform contains the location and rotation of the player.
Key | Type | Units | Description
-------------------------- | ---------- | ------- | ------------
location | Vector3D | m | World location.
orientation *[deprecated]* | Vector3D | | Orientation in Cartesian coordinates.
rotation | Rotation3D | degrees | Pitch, roll, and yaw.
<h4>BoundingBox</h4>
Contains the transform and extent of a bounding box.
Key | Type | Units | Description
-------------------------- | ---------- | ------- | ------------
transform | Transform | | Transform of the bounding box relative to the vehicle.
extent | Vector3D | m | Radii dimensions of the bounding box (half-box).
<h4>Collision</h4>
Collision variables keep an accumulation of all the collisions occurred during
this episode. Every collision contributes proportionally to the intensity of the
collision (norm of the normal impulse between the two colliding objects).
Three different counts are kept (pedestrians, vehicles, and other). Colliding
objects are classified based on their tag (same as for semantic segmentation).
!!! Bug
See [#13 Collisions are not annotated when vehicle's speed is low](https://github.com/carla-simulator/carla/issues/13)
Collisions are not annotated if the vehicle is not moving (<1 km/h) to avoid
annotating undesired collisions due to mistakes in the AI of non-player agents.
<h4>Lane/off-road intersection</h4>
The lane intersection measures the percentage of the vehicle invading the
opposite lane. The off-road intersection measures the percentage of the vehicle
outside the road.
These values are computed intersecting the bounding box of the vehicle (as a 2D
rectangle) against the map image of the city. These images are generated in the
editor and serialized for runtime use. You can find them too in the release
package under the folder "RoadMaps".
<h4>Autopilot control</h4>
The `autopilot_control` measurement contains the control values that the in-game
autopilot system would apply as if it were controlling the vehicle.
This is the same structure used to send the vehicle control to the server.
Key | Type | Description
-------------------------- | --------- | ------------
steer | float | Steering angle between [-1.0, 1.0] (*)
throttle | float | Throttle input between [ 0.0, 1.0]
brake | float | Brake input between [ 0.0, 1.0]
hand_brake | bool | Whether the hand-brake is engaged
reverse | bool | Whether the vehicle is in reverse gear
To activate the autopilot from the client, send this `autopilot_control` back
to the server. Note that you can modify it before sending it back.
```py
measurements, sensor_data = carla_client.read_data()
control = measurements.player_measurements.autopilot_control
# modify here control if wanted.
carla_client.send_control(control)
```
(*) The actual steering angle depends on the vehicle used. The default Mustang
has a maximum steering angle of 70 degrees (this can be checked in the vehicle's
front wheel blueprint).
![Mustang Steering Angle](img/steering_angle_mustang.png)
Non-player agents info
----------------------
!!! important
Since version 0.8.0 the player vehicle is not sent in the list of non-player
agents.
To receive info of every non-player agent in the scene every frame you need to
activate this option in the settings file sent by the client at the beginning of
the episode.
```ini
[CARLA/Server]
SendNonPlayerAgentsInfo=true
```
If enabled, the server attaches a list of agents to the measurements package
every frame. Each of these agents has a unique id that identifies it, and
belongs to one of the following classes:
* Vehicle
* Pedestrian
* Traffic light
* Speed limit sign
Each of them can be accessed in Python by checking if the agent object has the
field enabled
```python
measurements, sensor_data = client.read_data()
for agent in measurements.non_player_agents:
agent.id # unique id of the agent
if agent.HasField('vehicle'):
agent.vehicle.forward_speed
agent.vehicle.transform
agent.vehicle.bounding_box
```
<h6>Vehicle</h6>
Key | Type | Description
------------------------------- | --------- | ------------
id | uint32 | Agent ID
vehicle.forward_speed | float | Forward speed of the vehicle in m/s, is the linear speed projected to the forward vector of the chassis of the vehicle
vehicle.transform | Transform | Agent-to-world transform
vehicle.bounding_box.transform | Transform | Transform of the bounding box relative to the vehicle
vehicle.bounding_box.extent | Vector3D | Radii dimensions of the bounding box in meters
<h6>Pedestrian</h6>
Key | Type | Description
--------------------------------- | --------- | ------------
id | uint32 | Agent ID
pedestrian.forward_speed | float | Forward speed of the pedestrian in m/s
pedestrian.transform | Transform | Agent-to-world transform
pedestrian.bounding_box.transform | Transform | Transform of the bounding box relative to the pedestrian
pedestrian.bounding_box.extent | Vector3D | Radii dimensions of the bounding box in meters (*)
<small>(*) At this point every pedestrian is assumed to have the same
bounding-box size.</small>
<h6>Traffic light</h6>
Key | Type | Description
---------------------------- | --------- | ------------
id | uint32 | Agent ID
traffic_light.transform | Transform | Agent-to-world transform
traffic_light.state | enum | Traffic light state; `GREEN`, `YELLOW`, or `RED`
<h6>Speed limit sign</h6>
Key | Type | Description
---------------------------- | --------- | ------------
id | uint32 | Agent ID
speed_limit_sign.transform | Transform | Agent-to-world transform
speed_limit_sign.speed_limit | float | Speed limit in m/s
<h4>Transform and bounding box</h4>
The transform defines the location and orientation of the agent. The transform
of the bounding box is given relative to the vehicle. The box extent gives the
radii dimensions of the bounding box of the agent.
![Vehicle Bounding Box](img/vehicle_bounding_box.png)

View File

@ -1,6 +1,6 @@
#Python API reference
## carla.Actor<a name="carla.Actor"></a>
CARLA defines actors as anything that plays a role in the simulation or can be moved around. That includes: pedestrians, vehicles, sensors and traffic signs (considering traffic lights as part of these). Actors are spawned in the simulation by [carla.World](#carla.World) and they need for a [carla.ActorBlueprint](#carla.ActorBlueprint) to be created. These blueprints belong into a library provided by CARLA, find more about them [here](../bp_library/).
CARLA defines actors as anything that plays a role in the simulation or can be moved around. That includes: pedestrians, vehicles, sensors and traffic signs (considering traffic lights as part of these). Actors are spawned in the simulation by [carla.World](#carla.World), and a [carla.ActorBlueprint](#carla.ActorBlueprint) is needed to create them. These blueprints belong to a library provided by CARLA; find more about them [here](bp_library.md).
<h3>Instance Variables</h3>
- <a name="carla.Actor.attributes"></a>**<font color="#f8805a">attributes</font>** (_dict_)
@ -10,7 +10,7 @@ Identifier for this actor. Unique during a given episode.
- <a name="carla.Actor.parent"></a>**<font color="#f8805a">parent</font>** (_[carla.Actor](#carla.Actor)_)
Actors may be attached to a parent actor that they will follow around. This is that parent actor.
- <a name="carla.Actor.semantic_tags"></a>**<font color="#f8805a">semantic_tags</font>** (_list(int)_)
A list of semantic tags provided by the blueprint listing components for this actor. E.g. a traffic light could be tagged with "pole" and "traffic light". These tags are used by the semantic segmentation sensor. Find more about this and other sensors [here](../cameras_and_sensors/#sensor.camera.semantic_segmentation).
A list of semantic tags provided by the blueprint listing components for this actor. E.g. a traffic light could be tagged with "pole" and "traffic light". These tags are used by the semantic segmentation sensor. Find more about this and other sensors [here](ref_sensors.md#semantic-segmentation-camera).
- <a name="carla.Actor.type_id"></a>**<font color="#f8805a">type_id</font>** (_str_)
The identifier of the blueprint this actor was based on, e.g. "vehicle.ford.mustang".
@ -226,20 +226,20 @@ Returns the velocity vector registered for an actor in that tick.
---
## carla.AttachmentType<a name="carla.AttachmentType"></a>
Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is specially useful for cameras and sensors. [Here](../python_cookbook/#attach-sensors-recipe) is a brief recipe in which we can see how sensors can be attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is especially useful for cameras and sensors. [Here](ref_code_recipes.md#attach-sensors-recipe) is a brief recipe in which we can see how sensors can be attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
<h3>Instance Variables</h3>
- <a name="carla.AttachmentType.Rigid"></a>**<font color="#f8805a">Rigid</font>**
With this fixed attachment, the object follows its parent position strictly.
- <a name="carla.AttachmentType.SpringArm"></a>**<font color="#f8805a">SpringArm</font>**
An attachment that expands or retracts depending on camera situation. SpringArms are an Unreal Engine component so [check this out](../python_cookbook/#attach-sensors-recipe) to learn some more about them.
An attachment that expands or retracts depending on camera situation. SpringArms are an Unreal Engine component so [check this out](ref_code_recipes.md#attach-sensors-recipe) to learn some more about them.
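For instance, a hypothetical chase camera could be attached to an already spawned vehicle as follows; the offset values and helper name are an arbitrary example, assuming `world` and `vehicle` come from a connected client session:

```py
# Chase-camera offset behind and above the vehicle, in meters.
CAMERA_OFFSET = {'x': -5.5, 'z': 2.5}

def attach_chase_camera(world, vehicle):
    import carla  # needs the CARLA package and a live simulator

    blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
    transform = carla.Transform(carla.Location(**CAMERA_OFFSET))
    return world.spawn_actor(
        blueprint, transform,
        attach_to=vehicle,
        attachment_type=carla.AttachmentType.SpringArm)
```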
---
## carla.BlueprintLibrary<a name="carla.BlueprintLibrary"></a>
A class that contains the blueprints provided for actor spawning. Its main application is to return [carla.ActorBlueprint](#carla.ActorBlueprint) objects needed to spawn actors. Each blueprint has an identifier and attributes that may or may not be modifiable. The library is automatically created by the server and can be accessed through [carla.World](#carla.World).
[Here](../bp_library/) is a reference containing every available blueprint and its specifics.
[Here](bp_library.md) is a reference containing every available blueprint and its specifics.
<h3>Methods</h3>
- <a name="carla.BlueprintLibrary.__getitem__"></a>**<font color="#7fb800">\__getitem__</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**pos**</font>)
@ -269,7 +269,7 @@ Returns the blueprint corresponding to that identifier.
---
## carla.BoundingBox<a name="carla.BoundingBox"></a>
Helper class defining a box location and its dimensions that will later be used by [carla.DebugHelper](#carla.DebugHelper) or a [carla.Client](#carla.Client) to draw shapes and detect collisions. Bounding boxes normally act for object colliders. Check out this [recipe](../python_cookbook/#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
Helper class defining a box location and its dimensions that will later be used by [carla.DebugHelper](#carla.DebugHelper) or a [carla.Client](#carla.Client) to draw shapes and detect collisions. Bounding boxes normally act as object colliders. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
<h3>Instance Variables</h3>
- <a name="carla.BoundingBox.location"></a>**<font color="#f8805a">location</font>** (_[carla.Location](#carla.Location)_)
@ -316,7 +316,7 @@ Parses the location and extent of the bounding box to string.
## carla.Client<a name="carla.Client"></a>
The Client connects CARLA to the server which runs the simulation. Both server and client contain a CARLA library (libcarla) with some differences that allow communication between them. Many clients can be created and each of these will connect to the RPC server inside the simulation to send commands. The simulation runs server-side. Once the connection is established, the client will only receive data retrieved from the simulation. Walkers are the exception. The client is in charge of managing pedestrians so, if you are running a simulation with multiple clients, some issues may arise. For example, if you spawn walkers through different clients, collisions may happen, as each client is only aware of the ones it is in charge of.
The client also has a recording feature that saves all the information of a simulation while running it. This allows the server to replay it at will to obtain information and experiment with it. [Here](recorder_and_playback.md) is some information about how to use this recorder.
The client also has a recording feature that saves all the information of a simulation while running it. This allows the server to replay it at will to obtain information and experiment with it. [Here](adv_recorder.md) is some information about how to use this recorder.
<h3>Methods</h3>
- <a name="carla.Client.__init__"></a>**<font color="#7fb800">\__init__</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**host**=127.0.0.1</font>, <font color="#00a6ed">**port**=2000</font>, <font color="#00a6ed">**worker_threads**=0</font>)
@ -410,7 +410,7 @@ If you want to see only collisions between a vehicles and a walkers, use for `ca
- `category1` (_single char_) Character variable specifying a first type of actor involved in the collision.
- `category2` (_single char_) Character variable specifying the second type of actor involved in the collision.
- <a name="carla.Client.show_recorder_file_info"></a>**<font color="#7fb800">show_recorder_file_info</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**filename**</font>, <font color="#00a6ed">**show_all**=False</font>)
The information saved by the recorder will be parsed and shown in your terminal as text (frames, times, events, state, positions...). The information shown can be specified by using the `show_all` parameter. [Here](recorder_binary_file_format.md) is some more information about how to read the recorder file.
The information saved by the recorder will be parsed and shown in your terminal as text (frames, times, events, state, positions...). The information shown can be specified by using the `show_all` parameter. [Here](ref_recorder_binary_file_format.md) is some more information about how to read the recorder file.
- **Parameters:**
- `filename` (_str_) Name or absolute path of the file recorded, depending on your previous choice.
- `show_all` (_bool_) When true, will show all the details per frame (traffic light states, positions of all actors, orientation and animation data...), but by default it will only show a summary.
@ -468,7 +468,7 @@ Initializes a color, black by default.
---
## carla.ColorConverter<a name="carla.ColorConverter"></a>
Class that defines conversion patterns that can be applied to a [carla.Image](#carla.Image) in order to show information provided by [carla.Sensor](#carla.Sensor). Depth conversions cause a loss of accuracy, as sensors detect depth as <b>float</b> that is then converted to a grayscale value between 0 and 255. Take a look a this [recipe](../python_cookbook/#converted-image-recipe) to see an example of how to create and save image data for <b>sensor.camera.semantic_segmentation</b>.
Class that defines conversion patterns that can be applied to a [carla.Image](#carla.Image) in order to show information provided by [carla.Sensor](#carla.Sensor). Depth conversions cause a loss of accuracy, as sensors detect depth as <b>float</b> that is then converted to a grayscale value between 0 and 255. Take a look at this [recipe](ref_code_recipes.md#converted-image-recipe) to see an example of how to create and save image data for <b>sensor.camera.semantic_segmentation</b>.
<h3>Instance Variables</h3>
- <a name="carla.ColorConverter.CityScapesPalette"></a>**<font color="#f8805a">CityScapesPalette</font>**
@ -483,7 +483,7 @@ No changes applied to the image.
---
## carla.DebugHelper<a name="carla.DebugHelper"></a>
Helper class part of [carla.World](#carla.World) that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Check out this [recipe](../python_cookbook/#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
Helper class part of [carla.World](#carla.World) that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
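As a minimal, hypothetical example, the bounding box of any actor could be outlined in the world for a few seconds (the helper name and default duration are assumptions for illustration):

```py
# How long the debug shape stays on screen, in seconds.
DEFAULT_LIFE_TIME = 5.0

def draw_actor_box(world, actor, life_time=DEFAULT_LIFE_TIME):
    import carla  # needs a live simulator

    transform = actor.get_transform()
    box = actor.bounding_box
    box.location += transform.location  # bounding boxes are actor-relative
    world.debug.draw_box(box, transform.rotation,
                         thickness=0.1,
                         color=carla.Color(255, 0, 0),
                         life_time=life_time)
```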
<h3>Methods</h3>
- <a name="carla.DebugHelper.draw_point"></a>**<font color="#7fb800">draw_point</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**location**</font>, <font color="#00a6ed">**size**=0.1f</font>, <font color="#00a6ed">**color**=(255,0,0)</font>, <font color="#00a6ed">**life_time**=-1.0f</font>)
@ -817,7 +817,7 @@ Type 381.
---
## carla.LaneChange<a name="carla.LaneChange"></a>
Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every [carla.Waypoint](#carla.Waypoint) according to the OpenDRIVE file. In this [recipe](../python_cookbook/#lanes-recipe) the user creates a waypoint for a current vehicle position and learns which turns are permitted.
Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every [carla.Waypoint](#carla.Waypoint) according to the OpenDRIVE file. In this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a waypoint for a current vehicle position and learns which turns are permitted.
<h3>Instance Variables</h3>
- <a name="carla.LaneChange.NONE"></a>**<font color="#f8805a">NONE</font>**
@ -876,7 +876,7 @@ White by default.
---
## carla.LaneMarkingType<a name="carla.LaneMarkingType"></a>
Class that defines the lane marking types accepted by OpenDRIVE 1.4. Take a look at this [recipe](../python_cookbook/#lanes-recipe) where the user creates a [carla.Waypoint](#carla.Waypoint) for a vehicle location and retrieves from it the information about adjacent lane markings.
Class that defines the lane marking types accepted by OpenDRIVE 1.4. Take a look at this [recipe](ref_code_recipes.md#lanes-recipe) where the user creates a [carla.Waypoint](#carla.Waypoint) for a vehicle location and retrieves from it the information about adjacent lane markings.
__Note on double types:__ Lane markings are defined under the OpenDRIVE standard that determines whether a line will be considered "BrokenSolid" or "SolidBroken". For each road there is a center lane marking, defined from left to right regarding the lane's directions. The rest of the lane markings are defined in order from the center lane to the closest outside of the road.
<h3>Instance Variables</h3>
@ -895,7 +895,7 @@ __Note on double types:__ Lane markings are defined under the OpenDRIVE standard
---
## carla.LaneType<a name="carla.LaneType"></a>
Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standards define the road information. For instance in this [recipe](../python_cookbook/#lanes-recipe) the user creates a [carla.Waypoint](#carla.Waypoint) for the current location of a vehicle and uses it to get the current and adjacent lane types.
Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standard defines the road information. For instance, in this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a [carla.Waypoint](#carla.Waypoint) for the current location of a vehicle and uses it to get the current and adjacent lane types.
<h3>Instance Variables</h3>
- <a name="carla.LaneType.NONE"></a>**<font color="#f8805a">NONE</font>**
@ -1231,7 +1231,7 @@ Time register of the frame at which this measurement was taken given by the OS i
## carla.TrafficLight<a name="carla.TrafficLight"></a>
<div style="padding-left:30px;margin-top:-20px"><small><b>Inherited from _[carla.TrafficSign](#carla.TrafficSign)_</b></small></div></p><p>A traffic light actor, considered a specific type of traffic sign. As traffic lights will mostly appear at junctions, they belong to a group which contains the different traffic lights in it. Inside the group, traffic lights are differentiated by their pole index.
Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually. Take a look at this [recipe](../python_cookbook/#traffic-lights-recipe) to learn how to do so.
Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to learn how to do so.
<h3>Instance Variables</h3>
- <a name="carla.TrafficLight.state"></a>**<font color="#f8805a">state</font>** (_[carla.TrafficLightState](#carla.TrafficLightState)_)
@ -1288,7 +1288,7 @@ Sets a given time (in seconds) for the yellow light to be active.
---
## carla.TrafficLightState<a name="carla.TrafficLightState"></a>
All possible states for traffic lights. These can either change at a specific time step or be changed manually. Take a look at this [recipe](../python_cookbook/#traffic-lights-recipe) to see an example.
All possible states for traffic lights. These can either change at a specific time step or be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to see an example.
<h3>Instance Variables</h3>
- <a name="carla.TrafficLightState.Green"></a>**<font color="#f8805a">Green</font>**
@ -1658,7 +1658,7 @@ Sets a speed for the walker in meters per second.
---
## carla.WalkerBoneControl<a name="carla.WalkerBoneControl"></a>
This class grants bone-specific manipulation for walkers. The skeletons of walkers have been unified for clarity, and the transforms applied to each bone are always relative to its parent. Take a look [here](walker_bone_control.md) to learn more on how to create a walker and define its movement.
This class grants bone-specific manipulation for walkers. The skeletons of walkers have been unified for clarity, and the transforms applied to each bone are always relative to its parent. Take a look [here](tuto_G_control_walker_skeletons.md) to learn more on how to create a walker and define its movement.
<h3>Instance Variables</h3>
- <a name="carla.WalkerBoneControl.bone_transforms"></a>**<font color="#f8805a">bone_transforms</font>** (_list([name,transform])_)
@ -1956,7 +1956,7 @@ The client tells the server to block calling thread until a **<font color="#7fb8
---
## carla.WorldSettings<a name="carla.WorldSettings"></a>
The simulation has some advanced configuration options that are contained in this class and can be managed using [carla.World](#carla.World) and its methods. These allow the user to choose between client-server synchrony/asynchrony, activation of "no rendering mode" and whether the simulation should run with a fixed or variable time-step. Check [this](../configuring_the_simulation/) out if you want to learn about it.
The simulation has some advanced configuration options that are contained in this class and can be managed using [carla.World](#carla.World) and its methods. These allow the user to choose between client-server synchrony/asynchrony, activation of "no rendering mode" and whether the simulation should run with a fixed or variable time-step. Check [this](adv_synchrony_timestep.md) out if you want to learn about it.
<h3>Instance Variables</h3>
- <a name="carla.WorldSettings.synchronous_mode"></a>**<font color="#f8805a">synchronous_mode</font>** (_bool_)

View File

@ -1,688 +0,0 @@
<h1>Python API tutorial</h1>
In this tutorial we introduce the basic concepts of the CARLA Python API, as
well as an overview of its most important functionalities. The reference of all
classes and methods available can be found at
[Python API reference](python_api.md).
!!! note
    **This document applies only to the latest development version**. <br>
    The API has been significantly changed in the latest versions starting at
    0.9.0. We commonly refer to the new API as **0.9.X API** as opposed to
    the previous **0.8.X API**.
First of all, we need to introduce a few core concepts:
- **Actor:** An actor is anything that plays a role in the simulation and can be
moved around; examples of actors are vehicles, pedestrians, and sensors.
- **Blueprint:** Before spawning an actor you need to specify its attributes,
and that's what blueprints are for. We provide a blueprint library with
the definitions of all the actors available.
- **World:** The world represents the currently loaded map and contains the
functions for converting a blueprint into a living actor, among others. It
also provides access to the road map and functions to change the weather
conditions.
#### Connecting and retrieving the world
To connect to a simulator we need to create a "Client" object; to do so we need
to provide the IP address and port of a running instance of the simulator
```py
client = carla.Client('localhost', 2000)
```
The first recommended thing to do right after creating a client instance is
setting its time-out. This time-out sets a time limit to all networking
operations; if the time-out is not set, networking operations may block forever
```py
client.set_timeout(10.0) # seconds
```
Once we have the client configured we can directly retrieve the world
```py
world = client.get_world()
```
Typically we won't need the client object anymore, all the objects created by
the world will connect to the IP and port provided if they need to. These
operations are usually done in the background and are transparent to the user.
Changing the map
----------------
The map can be changed from the Python API with
```py
world = client.load_world('Town01')
```
this creates an empty world with default settings. The list of currently
available maps can be retrieved with
```py
print(client.get_available_maps())
```
To reload the world using the current active map, use
```py
world = client.reload_world()
```
#### Blueprints
A blueprint contains the information necessary to create a new actor. For
instance, if the blueprint defines a car, we can change its color here; if it
defines a lidar, we can decide here how many channels the lidar will have. A
blueprint also has an ID that uniquely identifies it and all the actor
instances created with it. Examples of IDs are "vehicle.nissan.patrol" or
"sensor.camera.depth".
The list of all available blueprints is kept in the [**blueprint library**](bp_library.md)
```py
blueprint_library = world.get_blueprint_library()
```
The library allows us to find specific blueprints by ID, filter them with
wildcards, or just choose one at random
```py
# Find specific blueprint.
collision_sensor_bp = blueprint_library.find('sensor.other.collision')
# Choose a vehicle blueprint at random.
vehicle_bp = random.choice(blueprint_library.filter('vehicle.bmw.*'))
```
Some of the attributes of the blueprints can be modified, while some others are
read-only. For instance, we cannot modify the number of wheels of a vehicle,
but we can change its color
```py
vehicles = blueprint_library.filter('vehicle.*')
bikes = [x for x in vehicles if int(x.get_attribute('number_of_wheels')) == 2]
for bike in bikes:
    bike.set_attribute('color', '255,0,0')
```
Modifiable attributes also come with a list of recommended values
```py
for attr in blueprint:
    if attr.is_modifiable:
        blueprint.set_attribute(attr.id, random.choice(attr.recommended_values))
```
The blueprint system has been designed to make it easy for contributors to add their
custom actors directly in Unreal Editor; we'll add a tutorial on this soon, stay tuned!
#### Spawning actors
Once we have the blueprint set up, spawning an actor is pretty straightforward
```py
transform = Transform(Location(x=230, y=195, z=40), Rotation(yaw=180))
actor = world.spawn_actor(blueprint, transform)
```
The spawn actor function comes in two flavours, [`spawn_actor`](python_api.md#carla.World.spawn_actor) and
[`try_spawn_actor`](python_api.md#carla.World.try_spawn_actor).
The former will raise an exception if the actor could not be spawned,
the latter will return `None` instead. The most typical cause of
failure is a collision at the spawn point, meaning the actor does not fit at the spot
we chose; probably another vehicle is in that spot or we tried to spawn into a
static object.
To ease the task of finding a spawn location, each map provides a list of
recommended transforms
```py
spawn_points = world.get_map().get_spawn_points()
```
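Combining `try_spawn_actor` with these recommended points gives a simple retry pattern. The helper below is our own sketch, not part of the API:

```py
def try_spawn_with_retries(world, blueprint, spawn_points):
    """Try each recommended spawn point until one is free; None if all fail."""
    for transform in spawn_points:
        actor = world.try_spawn_actor(blueprint, transform)
        if actor is not None:
            return actor
    return None

# Typical usage with the objects from the snippets above:
# vehicle = try_spawn_with_retries(world, vehicle_bp, spawn_points)
```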
We'll add more on the map object later in this tutorial.
Finally, the spawn functions have an optional argument that controls whether the
actor is going to be attached to another actor. This is especially useful for
sensors. In the next example, the camera remains rigidly attached to our vehicle
during the rest of the simulation
```py
camera = world.spawn_actor(camera_bp, relative_transform, attach_to=my_vehicle)
```
Note that in this case, the transform provided is treated relative to the parent
actor.
#### Handling actors
Once we have an actor alive in the world, we can move this actor around and
check its dynamic properties
```py
location = actor.get_location()
location.z += 10.0
actor.set_location(location)
print(actor.get_acceleration())
print(actor.get_velocity())
```
We can even freeze an actor by disabling its physics simulation
```py
actor.set_simulate_physics(False)
```
And once we get tired of an actor we can remove it from the simulation with
```py
actor.destroy()
```
Note that actors are not cleaned up automatically when the Python script
finishes; if we want to get rid of them, we need to destroy them explicitly.
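A minimal cleanup sketch: keep track of every actor you spawn and remove them in a `finally` block so they disappear even if the script raises. The helper is ours, not part of the API:

```py
def destroy_all(actors):
    """Call destroy() on every actor in the list, returning how many succeeded."""
    destroyed = 0
    for actor in actors:
        if actor is not None and actor.destroy():
            destroyed += 1
    return destroyed

# actor_list = []
# try:
#     actor_list.append(world.spawn_actor(blueprint, transform))
#     ...  # simulation loop
# finally:
#     destroy_all(actor_list)
```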
!!! important
    **Known issue:** To improve performance, most of the methods send requests
    to the simulator asynchronously. The simulator queues each of these
    requests, but only has a limited amount of time each update to parse them.
    If we flood the simulator by calling "set" methods too often, e.g.
    `set_transform`, the requests will accumulate, causing a significant lag.
#### Vehicles
Vehicles are a special type of actor that provide a few extra methods. Apart
from the handling methods common to all actors, vehicles can also be controlled
by providing throttle, brake, and steer values
```py
vehicle.apply_control(carla.VehicleControl(throttle=1.0, steer=-1.0))
```
These are all the parameters of the [`VehicleControl`](python_api.md#carla.VehicleControl)
object and their default values
```py
carla.VehicleControl(
    throttle = 0.0,
    steer = 0.0,
    brake = 0.0,
    hand_brake = False,
    reverse = False,
    manual_gear_shift = False,
    gear = 0)
```
Also, physics control properties can be tuned for vehicles and their wheels
```py
vehicle.apply_physics_control(carla.VehiclePhysicsControl(max_rpm = 5000.0, center_of_mass = carla.Vector3D(0.0, 0.0, 0.0), torque_curve=[[0,400],[5000,400]]))
```
These properties are controlled through a
[`VehiclePhysicsControl`](python_api.md#carla.VehiclePhysicsControl) object,
which also contains a property to control each wheel's physics through a
[`WheelPhysicsControl`](python_api.md#carla.WheelPhysicsControl) object.
```py
carla.VehiclePhysicsControl(
    torque_curve,
    max_rpm,
    moi,
    damping_rate_full_throttle,
    damping_rate_zero_throttle_clutch_engaged,
    damping_rate_zero_throttle_clutch_disengaged,
    use_gear_autobox,
    gear_switch_time,
    clutch_strength,
    mass,
    drag_coefficient,
    center_of_mass,
    steering_curve,
    wheels)
```
Where:
- *torque_curve*: Curve that indicates the torque measured in Nm for a specific revolutions
per minute of the vehicle's engine
- *max_rpm*: The maximum revolutions per minute of the vehicle's engine
- *moi*: The moment of inertia of the vehicle's engine
- *damping_rate_full_throttle*: Damping rate when the throttle is maximum.
- *damping_rate_zero_throttle_clutch_engaged*: Damping rate when the throttle is zero
with clutch engaged
- *damping_rate_zero_throttle_clutch_disengaged*: Damping rate when the throttle is zero
with clutch disengaged
- *use_gear_autobox*: If true, the vehicle will have automatic transmission
- *gear_switch_time*: Switching time between gears
- *clutch_strength*: The clutch strength of the vehicle. Measured in Kgm^2/s
- *final_ratio*: The fixed ratio from transmission to wheels.
- *forward_gears*: List of [`GearPhysicsControl`](python_api.md#carla.GearPhysicsControl) objects.
- *mass*: The mass of the vehicle measured in Kg
- *drag_coefficient*: Drag coefficient of the vehicle's chassis
- *center_of_mass*: The center of mass of the vehicle
- *steering_curve*: Curve that indicates the maximum steering for a specific forward speed
- *wheels*: List of [`WheelPhysicsControl`](python_api.md#carla.WheelPhysicsControl) objects.
```py
carla.WheelPhysicsControl(
    tire_friction,
    damping_rate,
    max_steer_angle,
    radius,
    max_brake_torque,
    max_handbrake_torque,
    position)
```
Where:
- *tire_friction*: Scalar value that indicates the friction of the wheel.
- *damping_rate*: The damping rate of the wheel.
- *max_steer_angle*: The maximum angle in degrees that the wheel can steer.
- *radius*: The radius of the wheel in centimeters.
- *max_brake_torque*: The maximum brake torque in Nm.
- *max_handbrake_torque*: The maximum handbrake torque in Nm.
- *position*: The position of the wheel.
```py
carla.GearPhysicsControl(
    ratio,
    down_ratio,
    up_ratio)
```
Where:
- *ratio*: The transmission ratio of this gear.
- *down_ratio*: The level of RPM (in relation to MaxRPM) where the gear autobox initiates shifting down.
- *up_ratio*: The level of RPM (in relation to MaxRPM) where the gear autobox initiates shifting up.
Our vehicles also come with a handy autopilot
```py
vehicle.set_autopilot(True)
```
A common misconception we should clarify: this autopilot control is purely
hard-coded into the simulator, and it is not based at all on machine learning
techniques.
Finally, vehicles also have a bounding box that encapsulates them
```py
box = vehicle.bounding_box
print(box.location) # Location relative to the vehicle.
print(box.extent) # XYZ half-box extents in meters.
```
#### Sensors
Sensors are actors that produce a stream of data. Sensors are such a key
component of CARLA that they deserve their own documentation page, so here we'll
limit ourselves to showing a small example of how sensors work
```py
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera = world.spawn_actor(camera_bp, relative_transform, attach_to=my_vehicle)
camera.listen(lambda image: image.save_to_disk('output/%06d.png' % image.frame))
```
In this example we have attached a camera to a vehicle, and told the camera to
save to disk each of the images that are going to be generated.
The full list of sensors and their measurements is explained in
[Cameras and sensors](core_sensors.md).
#### Other actors
Apart from vehicles and sensors, there are a few other actors in the world. The
full list can be requested from the world with
```py
actor_list = world.get_actors()
```
The actor list object returned has functions for finding, filtering, and
iterating actors
```py
# Find an actor by id.
actor = actor_list.find(id)
# Print the location of all the speed limit signs in the world.
for speed_sign in actor_list.filter('traffic.speed_limit.*'):
    print(speed_sign.get_location())
```
Among the actors you can find in this list are
* **Traffic lights** with a [`state`](python_api.md#carla.TrafficLight.state) property
to check the light's current state.
* **Speed limit signs** with the speed codified in their type_id.
* The **Spectator** actor that can be used to move the view of the simulator window.
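For instance, the speed of a limit sign can only be read from its `type_id` (e.g. `traffic.speed_limit.30`). A small helper, ours rather than the API's, can extract it:

```py
def speed_limit_from_type_id(type_id):
    """Return the speed codified in a speed limit sign's type_id (km/h)."""
    return int(type_id.split('.')[-1])

# for speed_sign in actor_list.filter('traffic.speed_limit.*'):
#     print(speed_limit_from_type_id(speed_sign.type_id))
```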
#### Changing the weather
The lighting and weather conditions can be requested and changed with the world
object
```py
weather = carla.WeatherParameters(
    cloudiness=80.0,
    precipitation=30.0,
    sun_altitude_angle=70.0)
world.set_weather(weather)
print(world.get_weather())
```
For convenience, we also provide a list of predefined weather presets that can
be directly applied to the world
```py
world.set_weather(carla.WeatherParameters.WetCloudySunset)
```
The full list of presets can be found in the
[WeatherParameters reference](python_api.md#carla.WeatherParameters).
### World Snapshot
A world snapshot represents the state of every actor in the simulation at a single frame,
a sort of still image of the world with a timestamp. With this feature it is possible to
record the location of every actor and make sure all of them were captured at the same
frame, without the need to use synchronous mode.
```py
# Retrieve a snapshot of the world at this point in time.
world_snapshot = world.get_snapshot()
# Wait for the next tick and retrieve the snapshot of the tick.
world_snapshot = world.wait_for_tick()
# Register a callback to get called every time we receive a new snapshot.
world.on_tick(lambda world_snapshot: do_something(world_snapshot))
```
The world snapshot contains a timestamp and a list of actor snapshots. Actor snapshots do not
allow operating on the actor directly, as they only contain data about the physical state of
the actor, but you can use their id to retrieve the actual actor. And the other way around:
you can look up snapshots by id (average O(1) complexity).
```py
timestamp = world_snapshot.timestamp
timestamp.frame_count
timestamp.elapsed_seconds
timestamp.delta_seconds
timestamp.platform_timestamp
for actor_snapshot in world_snapshot:
    actor_snapshot.get_transform()
    actor_snapshot.get_velocity()
    actor_snapshot.get_angular_velocity()
    actor_snapshot.get_acceleration()

actual_actor = world.get_actor(actor_snapshot.id)
actor_snapshot = world_snapshot.find(actual_actor.id)
```
#### Map and waypoints
One of the key features of CARLA is that our roads are fully annotated. All our
maps come accompanied by [OpenDrive](http://www.opendrive.org/) files that
define the road layout. Furthermore, we provide a higher-level API for querying
and navigating this information.
These objects were a recent addition to our API and are still in heavy
development; we hope to make them much more powerful soon.
Let's start by getting the map of the current world
```py
map = world.get_map()
```
For starters, the map has a [`name`](python_api.md#carla.Map.name) attribute that matches
the name of the currently loaded city, e.g. Town01. And, as we've seen before, we can also ask
the map to provide a list of recommended locations for spawning vehicles,
[`map.get_spawn_points()`](python_api.md#carla.Map.get_spawn_points).
However, the real power of this map API becomes apparent when we introduce
[`waypoints`](python_api.md#carla.Waypoint). We can tell the map to give us a waypoint on
the road closest to our vehicle
```py
waypoint = map.get_waypoint(vehicle.get_location())
```
This waypoint's [`transform`](python_api.md#carla.Waypoint.transform) is located on a drivable lane,
and it's oriented according to the road direction at that point.
Waypoints have their unique identifier [`carla.Waypoint.id`](python_api.md#carla.Waypoint.id)
based on the hash of their [`road_id`](python_api.md#carla.Waypoint.road_id),
[`section_id`](python_api.md#carla.Waypoint.section_id),
[`lane_id`](python_api.md#carla.Waypoint.lane_id) and [`s`](python_api.md#carla.Waypoint.s).
They also provide more information about lanes, such as the
[`lane_type`](python_api.md#carla.Waypoint.lane_type) of the current waypoint
and if a [`lane_change`](python_api.md#carla.Waypoint.lane_change) is possible and in which direction.
```py
# Nearest waypoint on the center of a Driving or Sidewalk lane.
waypoint = map.get_waypoint(vehicle.get_location(), project_to_road=True, lane_type=(carla.LaneType.Driving | carla.LaneType.Sidewalk))
# Get the current lane type (driving or sidewalk).
lane_type = waypoint.lane_type
# Get available lane change.
lane_change = waypoint.lane_change
```
Surrounding lane markings _(right / left)_ can also be accessed through the waypoint API.
Therefore, it is possible to know all the information provided by a
[`carla.LaneMarking`](python_api.md#carla.LaneMarking),
like the lane marking [`type`](python_api.md#carla.LaneMarkingType) and its
[`lane_change`](python_api.md#carla.LaneChange) availability.
```py
# Get right lane marking type
right_lm_type = waypoint.right_lane_marking.type
```
Waypoints also have a function to query the "next" waypoints; this method returns
a list of waypoints at a certain distance that can be reached from this
waypoint following the traffic rules. In other words, if a vehicle is placed at
this waypoint, it gives the list of possible locations that this vehicle can drive
to. Let's see a practical example:
```py
# Retrieve the closest waypoint.
waypoint = map.get_waypoint(vehicle.get_location())
# Disable physics, in this example we're just teleporting the vehicle.
vehicle.set_simulate_physics(False)
while True:
# Find next waypoint 2 meters ahead.
waypoint = random.choice(waypoint.next(2.0))
# Teleport the vehicle.
vehicle.set_transform(waypoint.transform)
```
The map object also provides methods for generating waypoints in bulk all over
the map, at an approximate distance between them
```py
waypoint_list = map.generate_waypoints(2.0)
```
For routing purposes, it is also possible to retrieve a topology graph of the
roads
```py
waypoint_tuple_list = map.get_topology()
```
This method returns a list of pairs (tuples) of waypoints; for each pair, the
first element connects with the second one. Only the minimal set of waypoints needed
to define the topology is generated by this method: one waypoint for each lane
of each road segment in the map.
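For routing, the pairs returned by `get_topology()` can be folded into an adjacency mapping keyed by waypoint id. This is a sketch on top of the API, not a built-in helper:

```py
from collections import defaultdict

def build_topology_graph(waypoint_tuple_list):
    """Map each segment-start waypoint id to the ids of the waypoints it connects to."""
    graph = defaultdict(list)
    for begin, end in waypoint_tuple_list:
        graph[begin.id].append(end.id)
    return dict(graph)

# graph = build_topology_graph(map.get_topology())
```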
Finally, to allow access to the whole road information, the map object can be
converted to OpenDrive format, and saved to disk as such.
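A sketch of that export. It assumes the map exposes a `to_opendrive()` method returning the full OpenDRIVE document as a string; check the API reference if your version differs:

```py
def save_opendrive(carla_map, path):
    """Write the map's OpenDRIVE definition to a file on disk."""
    with open(path, 'w') as f:
        f.write(carla_map.to_opendrive())

# save_opendrive(world.get_map(), 'town01.xodr')
```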
### Recording and Replaying system
CARLA now includes a recording and replaying API that allows recording a simulation to a file and
replaying it later. The file is written on the server side only, and it includes which
**actors are created or destroyed** in the simulation, the **state of the traffic lights**
and the **position** and **orientation** of all vehicles and pedestrians.
To start recording we only need to supply a file name:
```py
client.start_recorder("recording01.log")
```
To stop the recording, we need to call:
```py
client.stop_recorder()
```
At any point we can replay a simulation, specifying the filename:
```py
client.replay_file("recording01.log")
```
The replayer replicates the actor and traffic light information of the recording each frame.
For more details, see the [Recorder and Playback system](recorder_and_playback.md).
#### Pedestrians
![pedestrian types](img/pedestrian_types.png)
We can get a list of all pedestrians from the blueprint library and choose one:
```py
world = client.get_world()
blueprintsWalkers = world.get_blueprint_library().filter("walker.pedestrian.*")
walker_bp = random.choice(blueprintsWalkers)
```
We can **get a list of random points** at which to spawn the pedestrians. Those points always
come from the areas where pedestrians can walk:
```py
# 1. take all the random locations to spawn
spawn_points = []
for i in range(50):
    spawn_point = carla.Transform()
    spawn_point.location = world.get_random_location_from_navigation()
    if spawn_point.location is not None:
        spawn_points.append(spawn_point)
```
Now we can **spawn the pedestrians** at those positions using a batch of commands:
```py
# 2. build the batch of commands to spawn the pedestrians
batch = []
for spawn_point in spawn_points:
    walker_bp = random.choice(blueprintsWalkers)
    batch.append(carla.command.SpawnActor(walker_bp, spawn_point))
# apply the batch
results = client.apply_batch_sync(batch, True)
for i in range(len(results)):
    if results[i].error:
        logging.error(results[i].error)
    else:
        walkers_list.append({"id": results[i].actor_id})
```
We save the id of each walker from the batch results in a dictionary, because we will
also assign a controller to each of them.
We need to **create the controller** that will manage the pedestrian automatically:
```py
# 3. we spawn the walker controller
batch = []
walker_controller_bp = world.get_blueprint_library().find('controller.ai.walker')
for i in range(len(walkers_list)):
    batch.append(carla.command.SpawnActor(walker_controller_bp, carla.Transform(), walkers_list[i]["id"]))
# apply the batch
results = client.apply_batch_sync(batch, True)
for i in range(len(results)):
    if results[i].error:
        logging.error(results[i].error)
    else:
        walkers_list[i]["con"] = results[i].actor_id
```
We create the controller as a child of the walker, so the location we pass is (0,0,0).
At this point we have a list of pedestrians, each with its own controller, but we still need to
get the actual actor from each id. Because the controller is a child of the pedestrian,
we need to **put all the ids in the same list** so the parent can find its child in that list.
```py
# 4. we put together the walker and controller ids to get the objects from their ids
for i in range(len(walkers_list)):
    all_id.append(walkers_list[i]["con"])
    all_id.append(walkers_list[i]["id"])
all_actors = world.get_actors(all_id)
```
The list `all_actors` now has all the actor objects we created.
At this point it is a good idea to **wait for a tick** on the client, so the server has
time to send all the new data about the actors we just created (we need the transform of
each one updated). So we can do a call like:
```py
# wait for a tick to ensure client receives the last transform of the walkers we have just created
world.wait_for_tick()
```
After that, our client has the data about the actors updated.
**Using the controller** we can set the locations we want each pedestrian to walk to:
```py
# 5. initialize each controller and set target to walk to (list is [controller, actor, controller, actor ...])
for i in range(0, len(all_actors), 2):
    # start walker
    all_actors[i].start()
    # set walk to random point
    all_actors[i].go_to_location(world.get_random_location_from_navigation())
    # random max speed
    all_actors[i].set_max_speed(1 + random.random())    # max speed between 1 and 2 (default is 1.4 m/s)
```
There we have set for each pedestrian (through its controller) a random target point and a random speed.
When they reach the target point, they automatically walk to another random point.
If the target point is not reachable, they walk to the closest point of the area where they are.
![pedestrian sample](img/pedestrians_shoot.png)
To **destroy the pedestrians**, we need to stop their navigation
and then destroy the objects (actor and controller):
```py
# stop pedestrians (list is [controller, actor, controller, actor ...])
for i in range(0, len(all_id), 2):
    all_actors[i].stop()
# destroy pedestrian (actor and controller)
client.apply_batch([carla.command.DestroyActor(x) for x in all_id])
```

View File

@ -1,24 +1,24 @@
# Code recipes
This section contains a list of recipes that complement the [tutorial](../python_api_tutorial/)
and are used to illustrate the use of Python API methods.
This section contains a list of recipes that complement the [first steps](core_concepts.md) section and are used to illustrate the use of Python API methods.
Each recipe has a list of [python API classes](../python_api/),
Each recipe has a list of [python API classes](python_api.md),
which is divided into those in which the recipe is centered, and those that need to be used.
There are more recipes to come!
---
## Actor Spectator Recipe
This recipe spawns an actor and the spectator camera at the actor's location.
Focused on:<br>
[`carla.World`](../python_api/#carla.World)<br>
[`carla.Actor`](../python_api/#carla.Actor)
[`carla.World`](python_api.md#carla.World)<br>
[`carla.Actor`](python_api.md#carla.Actor)
Used:<br>
[`carla.WorldSnapshot`](../python_api/#carla.WorldSnapshot)<br>
[`carla.ActorSnapshot`](../python_api/#carla.ActorSnapshot)
[`carla.WorldSnapshot`](python_api.md#carla.WorldSnapshot)<br>
[`carla.ActorSnapshot`](python_api.md#carla.ActorSnapshot)
```py
# ...
@ -40,16 +40,17 @@ spectator.set_transform(actor_snapshot.get_transform())
# ...
```
---
## Attach Sensors Recipe
This recipe attaches different camera / sensors to a vehicle with different attachments.
Focused on:<br>
[`carla.Sensor`](../python_api/#carla.Sensor)<br>
[`carla.AttachmentType`](../python_api/#carla.AttachmentType)<br>
[`carla.Sensor`](python_api.md#carla.Sensor)<br>
[`carla.AttachmentType`](python_api.md#carla.AttachmentType)<br>
Used:<br>
[`carla.World`](../python_api/#carla.World)
[`carla.World`](python_api.md#carla.World)
```py
# ...
@ -61,17 +62,18 @@ lane_invasion_sensor = world.spawn_actor(sensor_lane_invasion_bp, transform, att
# ...
```
---
## Actor Attribute Recipe
This recipe changes attributes of different type of blueprint actors.
Focused on:<br>
[`carla.ActorAttribute`](../python_api/#carla.ActorAttribute)<br>
[`carla.ActorBlueprint`](../python_api/#carla.ActorBlueprint)<br>
[`carla.ActorAttribute`](python_api.md#carla.ActorAttribute)<br>
[`carla.ActorBlueprint`](python_api.md#carla.ActorBlueprint)<br>
Used:<br>
[`carla.World`](../python_api/#carla.World)<br>
[`carla.BlueprintLibrary`](../python_api/#carla.BlueprintLibrary)<br>
[`carla.World`](python_api.md#carla.World)<br>
[`carla.BlueprintLibrary`](python_api.md#carla.BlueprintLibrary)<br>
```py
# ...
@ -92,14 +94,15 @@ camera_bp.set_attribute('image_size_y', 600)
# ...
```
---
## Converted Image Recipe
This recipe applies a color conversion to the image taken by a camera sensor,
so it is converted to a semantic segmentation image.
Focused on:<br>
[`carla.ColorConverter`](../python_api/#carla.ColorConverter)<br>
[`carla.Sensor`](../python_api/#carla.Sensor)
[`carla.ColorConverter`](python_api.md#carla.ColorConverter)<br>
[`carla.Sensor`](python_api.md#carla.Sensor)
```py
# ...
@ -110,20 +113,21 @@ camera.listen(lambda image: image.save_to_disk('output/%06d.png' % image.frame,
# ...
```
---
## Lanes Recipe
This recipe shows the current traffic rules affecting the vehicle. Shows the current lane type and
if a lane change can be done in the actual lane or the surrounding ones.
Focused on:<br>
[`carla.LaneMarking`](../python_api/#carla.LaneMarking)<br>
[`carla.LaneMarkingType`](../python_api/#carla.LaneMarkingType)<br>
[`carla.LaneChange`](../python_api/#carla.LaneChange)<br>
[`carla.LaneType`](../python_api/#carla.LaneType)<br>
[`carla.LaneMarking`](python_api.md#carla.LaneMarking)<br>
[`carla.LaneMarkingType`](python_api.md#carla.LaneMarkingType)<br>
[`carla.LaneChange`](python_api.md#carla.LaneChange)<br>
[`carla.LaneType`](python_api.md#carla.LaneType)<br>
Used:<br>
[`carla.Waypoint`](../python_api/#carla.Waypoint)<br>
[`carla.World`](../python_api/#carla.World)
[`carla.Waypoint`](python_api.md#carla.Waypoint)<br>
[`carla.World`](python_api.md#carla.World)
```py
# ...
@ -141,19 +145,20 @@ print("R lane marking change: " + str(waypoint.right_lane_marking.lane_change))
![lane_marking_recipe](img/lane_marking_recipe.png)
---
## Debug Bounding Box Recipe
This recipe shows how to draw traffic light actor bounding boxes from a world snapshot.
Focused on:<br>
[`carla.DebugHelper`](../python_api/#carla.DebugHelper)<br>
[`carla.BoundingBox`](../python_api/#carla.BoundingBox)
[`carla.DebugHelper`](python_api.md#carla.DebugHelper)<br>
[`carla.BoundingBox`](python_api.md#carla.BoundingBox)
Used:<br>
[`carla.ActorSnapshot`](../python_api/#carla.ActorSnapshot)<br>
[`carla.Actor`](../python_api/#carla.Actor)<br>
[`carla.Vector3D`](../python_api/#carla.Vector3D)<br>
[`carla.Color`](../python_api/#carla.Color)
[`carla.ActorSnapshot`](python_api.md#carla.ActorSnapshot)<br>
[`carla.Actor`](python_api.md#carla.Actor)<br>
[`carla.Vector3D`](python_api.md#carla.Vector3D)<br>
[`carla.Color`](python_api.md#carla.Color)
```py
# ....
@ -169,6 +174,7 @@ for actor_snapshot in world_snapshot:
![debug_bb_recipe](img/debug_bb_recipe.png)
---
## Debug Vehicle Trail Recipe
This recipe is a modification of
@ -176,16 +182,16 @@ This recipe is a modification of
It draws the path of an actor through the world, printing information at each waypoint.
Focused on:<br>
[`carla.DebugHelper`](../python_api/#carla.DebugHelper)<br>
[`carla.Waypoint`](../python_api/#carla.Waypoint)<br>
[`carla.Actor`](../python_api/#carla.Actor)
[`carla.DebugHelper`](python_api.md#carla.DebugHelper)<br>
[`carla.Waypoint`](python_api.md#carla.Waypoint)<br>
[`carla.Actor`](python_api.md#carla.Actor)
Used:<br>
[`carla.ActorSnapshot`](../python_api/#carla.ActorSnapshot)<br>
[`carla.Vector3D`](../python_api/#carla.Vector3D)<br>
[`carla.LaneType`](../python_api/#carla.LaneType)<br>
[`carla.Color`](../python_api/#carla.Color)<br>
[`carla.Map`](../python_api/#carla.Map)
[`carla.ActorSnapshot`](python_api.md#carla.ActorSnapshot)<br>
[`carla.Vector3D`](python_api.md#carla.Vector3D)<br>
[`carla.LaneType`](python_api.md#carla.LaneType)<br>
[`carla.Color`](python_api.md#carla.Color)<br>
[`carla.Map`](python_api.md#carla.Map)
```py
# ...
@ -215,15 +221,16 @@ path it was following and the speed at each waypoint.
![debug_trail_recipe](img/debug_trail_recipe.png)
---
## Parse client creation arguments
This recipe is used in every script provided in `PythonAPI/Examples` to parse the client creation arguments when running the script.
Focused on:<br>
[`carla.Client`](../python_api/#carla.Client)<br>
[`carla.Client`](python_api.md#carla.Client)<br>
Used:<br>
[`carla.Client`](../python_api/#carla.Client)
[`carla.Client`](python_api.md#carla.Client)
```py
argparser = argparse.ArgumentParser(
@ -253,17 +260,18 @@ Used:<br>
client = carla.Client(args.host, args.port)
```
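A minimal, self-contained sketch of this host/port parsing (the default values here are assumptions based on the example scripts):

```py
import argparse

# Sketch of the client-creation arguments used by the example scripts.
argparser = argparse.ArgumentParser(description='CARLA client example')
argparser.add_argument(
    '--host', metavar='H', default='127.0.0.1',
    help='IP of the host server (default: 127.0.0.1)')
argparser.add_argument(
    '-p', '--port', metavar='P', default=2000, type=int,
    help='TCP port to listen to (default: 2000)')

# Parse an explicit argument list so the sketch runs anywhere.
args = argparser.parse_args(['--host', 'localhost', '--port', '3000'])
print(args.host, args.port)  # -> localhost 3000
```

The parsed `args.host` and `args.port` are then handed to `carla.Client` as shown in the recipe.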
---
## Traffic lights Recipe
This recipe changes from red to green the traffic light that affects the vehicle.
This is done by detecting if the vehicle actor is at a traffic light.
Focused on:<br>
[`carla.TrafficLight`](../python_api/#carla.TrafficLight)<br>
[`carla.TrafficLightState`](../python_api/#carla.TrafficLightState)
[`carla.TrafficLight`](python_api.md#carla.TrafficLight)<br>
[`carla.TrafficLightState`](python_api.md#carla.TrafficLightState)
Used:<br>
[`carla.Vehicle`](../python_api/#carla.Vehicle)
[`carla.Vehicle`](python_api.md#carla.Vehicle)
```py
# ...
@ -277,7 +285,7 @@ if vehicle_actor.is_at_traffic_light():
![tl_recipe](img/tl_recipe.gif)
---
## Walker batch recipe
```py

View File

@ -1,4 +1,3 @@
# C++ Reference
We use Doxygen to generate the documentation of our C++ code:

View File

@ -13,6 +13,7 @@ In summary, the file format has a small header with general info
![global file format](img/RecorderFileFormat3.png)
---
## 1. Strings in binary
Strings are encoded first with their length, followed by their characters without null
@ -21,6 +22,7 @@ as hex values: 06 00 54 6f 77 6e 30 36
![binary dynamic string](img/RecorderString.png)
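The hex sample above (`06 00 54 6f 77 6e 30 36` for `Town06`) suggests the length is stored as a little-endian unsigned short; a quick sketch of that encoding (not the actual recorder code):

```py
import struct

def encode_recorder_string(text):
    # Length first (little-endian uint16), then the characters, no null terminator.
    data = text.encode('utf-8')
    return struct.pack('<H', len(data)) + data

print(encode_recorder_string('Town06').hex())  # -> 0600546f776e3036
```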
---
## 2. Info header
The info header has general information about the recorded file. Basically, it contains the version
@ -34,6 +36,7 @@ A sample info header is:
![info header sample](img/RecorderHeader.png)
---
## 3. Packets
Each packet starts with a little header of two fields (5 bytes):
@ -173,6 +176,7 @@ that is used in the animation.
![state](img/RecorderWalker.png)
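Assuming the two header fields are a 1-byte packet id followed by a 4-byte little-endian payload size (as the figures suggest), a packet could be assembled like this (a sketch, not the recorder implementation):

```py
import struct

def pack_packet(packet_id, payload):
    # 5-byte header: packet id (1 byte) + payload size (4 bytes, little-endian).
    return struct.pack('<BI', packet_id, len(payload)) + payload

packet = pack_packet(0, b'example')
print(len(packet))  # -> 12 (5-byte header + 7-byte payload)
```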
---
## 4. Frame Layout
A frame consists of several packets, where all of them are optional, except the ones that
@ -190,6 +194,7 @@ or set the state of traffic lights.
The **animation** packets are also optional, but by default they are recorded. That way the walkers
are animated and also the vehicle wheels follow the direction of the vehicles.
---
## 5. File Layout
The layout of the file starts with the **info header** and then follows a collection of packets in

View File

@ -12,7 +12,7 @@
* [__Semantic segmentation camera__](#semantic-segmentation-camera)
---------------
---
## Collision detector
* __Blueprint:__ sensor.other.collision
@ -34,7 +34,7 @@ Collision detectors do not have any configurable attribute.
| `other_actor` | [carla.Actor](python_api.md#carla.Actor) | Actor against whom the parent collided. |
| `normal_impulse` | [carla.Vector3D](python_api.md#carla.Vector3D) | Normal impulse result of the collision. |
---------------
---
## Depth camera
* __Blueprint:__ sensor.camera.depth
@ -92,7 +92,7 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert
| `fov` | float | Horizontal field of view in degrees. |
| `raw_data` | bytes | Array of BGRA 32-bit pixels. |
---------------
---
## GNSS sensor
* __Blueprint:__ sensor.other.gnss
@ -126,7 +126,7 @@ Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gns
| `longitude` | double | Longitude of the actor. |
| `altitude` | double | Altitude of the actor. |
---------------
---
## IMU sensor
* __Blueprint:__ sensor.other.imu
@ -163,7 +163,7 @@ Provides measures that accelerometer, gyroscope and compass would retrieve for t
| `gyroscope` | [carla.Vector3D](python_api.md#carla.Vector3D) | Measures angular velocity in `rad/sec`. |
| `compass` | float | Orientation in radians. North is `(0.0, -1.0, 0.0)` in UE. |
---------------
---
## Lane invasion detector
* __Blueprint:__ sensor.other.lane_invasion
@ -192,7 +192,7 @@ This sensor does not have any configurable attribute.
| `crossed_lane_markings` | list([carla.LaneMarking](python_api.md#carla.LaneMarking)) | List of lane markings that have been crossed. |
---------------
---
## Lidar raycast sensor
* __Blueprint:__ sensor.lidar.ray_cast
@ -212,8 +212,7 @@ for location in lidar_measurement:
```
!!! Tip
Running the simulator at [fixed time-step](configuring_the_simulation.md#fixed-time-step) it is possible to tune the rotation for each measurement. Adjust the
step and the rotation frequency to get, for instance, a 360 view each measurement.
When running the simulator at a [fixed time-step](adv_synchrony_timestep.md), it is possible to tune the rotation for each measurement. Adjust the step and the rotation frequency to get, for instance, a 360 view each measurement.
![LidarPointCloud](img/lidar_point_cloud.gif)
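The tip above can be sanity-checked with a quick calculation (a sketch; the rotation frequency is in rotations per second):

```py
def degrees_per_tick(rotation_frequency, fixed_delta_seconds):
    # Degrees swept by the lidar during one simulation step.
    return 360.0 * rotation_frequency * fixed_delta_seconds

# A 10 Hz lidar with a 0.1 s fixed step completes a full 360 view per measurement.
print(degrees_per_tick(10.0, 0.1))  # -> 360.0
```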
@ -243,7 +242,7 @@ for location in lidar_measurement:
| `get_point_count(channel)` | int | Number of points per channel captured this frame. |
| `raw_data` | bytes | Array of 32-bits floats (XYZ of each point). |
---------------
---
## Obstacle detector
* __Blueprint:__ sensor.other.obstacle
@ -274,7 +273,7 @@ To ensure that collisions with any kind of object are detected, the server creat
| `other_actor` | [carla.Actor](python_api.md#carla.Actor) | Actor detected as an obstacle. |
| `distance` | float | Distance from `actor` to `other_actor`. |
---------------
---
## Radar sensor
* __Blueprint:__ sensor.other.radar
@ -317,7 +316,7 @@ The provided script `manual_control.py` uses this sensor to show the points bein
| `depth` | float | Distance in meters. |
| `velocity` | float | Velocity towards the sensor. |
---------------
---
## RGB camera
* __Blueprint:__ sensor.camera.rgb
@ -425,7 +424,7 @@ Since these effects are provided by UE, please make sure to check their document
| `fov` | float | Horizontal field of view in degrees. |
| `raw_data` | bytes | Array of BGRA 32-bit pixels. |
---------------
---
## Semantic segmentation camera
* __Blueprint:__ sensor.camera.semantic_segmentation

View File

@ -1,54 +0,0 @@
CARLA Simulator
===============
Thanks for downloading CARLA!
<http://carla.org/>
How to run CARLA
----------------
Launch a terminal in this folder and execute the simulator by running
```sh
./CarlaUE4.sh
```
this will launch a window with a view over the city. This is the "spectator"
view, you can fly around the city using the mouse and WASD keys, but you cannot
interact with the world in this view. The simulator is now running as a server,
waiting for a client app to connect and interact with the world.
Let's start by adding some life to the city, open a new terminal window and
execute
```sh
./spawn_npc.py -n 80
```
This adds 80 vehicles to the world driving in "autopilot" mode. Back to the
simulator window we should see these vehicles driving around the city. They will
keep driving randomly until we stop the script. Let's leave them there for now.
Now, it's nice and sunny in CARLA, but that's not a very interesting driving
condition. One of the cool features of CARLA is that you can control the weather
and lighting conditions of the world. We'll launch now a script that dynamically
controls the weather and time of the day, open yet another terminal window and
execute
```sh
./dynamic_weather.py
```
The city is now ready for us to drive, we can finally run
```sh
./manual_control.py
```
This should open a new window with a 3rd person view of a car, you can drive
this car with the WASD/arrow keys. Press 'h' to see all the options available.
For more details and running options please refer to our online documentation
<http://carla.readthedocs.io>

View File

@ -1,42 +0,0 @@
<h1>AD Responsibility Sensitive Safety model (RSS) integration</h1>
> _This feature is a work in progress, only a Linux build variant is available._
This feature integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) into the CARLA Client library.
**As the _ad-rss-lib_ library is licensed under LGPL-2.1-only, building the variant which includes this feature, and therefore the library, might have implications for the outgoing license of the resulting binary!**
It provides basic implementations of both an **RssSensor**, the situation analysis and response generation by the **ad-rss-lib**, and a basic **RssRestrictor** class which applies the restrictions to given vehicle commands.
The **RssSensor** results can be visualized within CARLA.
[![RSS safety sensor in CARLA](img/rss_carla_integration.png)](https://www.youtube.com/watch?v=UxKPXPT2T8Q)
Please see [C++ Library for Responsibility Sensitive Safety documentation](https://intel.github.io/ad-rss-lib/) and especially the [Background documentation](https://intel.github.io/ad-rss-lib/documentation/Main.html) for further details.
<h2>Compilation</h2>
RSS integration is a Linux-only build variant.
Please see [Build System](dev/build_system.md) for general information.
*LibCarla* with RSS has to be explicitly compiled by
```sh
make LibCarla.client.rss
```
The *PythonAPI* with RSS is built by
```sh
make PythonAPI.rss
```
<h2>Current state</h2>
<h3>RssSensor</h3>
The RssSensor is currently only considering vehicles within the same road segment, but on all lanes within that segment. Intersections are not yet supported!
<h3>RssRestrictor</h3>
The current implementation of the RssRestrictor checks and potentially modifies a given *VehicleControl* generated by e.g. an Automated Driving stack or user input via a *manual_control* client (see the *PythonAPI/examples/manual_control_rss.py*).
Due to the structure of *VehicleControl* (just throttle, brake and steering values for the car under control), the Restrictor modifies and sets these values to best reach the desired accelerations or decelerations given by a restriction. Due to car physics and the simple control options these might not be met.

View File

@ -1,6 +1,6 @@
# CARLA
![Welcome to CARLA](../img/welcome.png)
![Welcome to CARLA](img/welcome.png)
!!! important
This documentation refers to the latest development versions of CARLA, 0.9.0 or
@ -10,14 +10,14 @@ CARLA is an open-source autonomous driving simulator. It was built from scratch
In order to smooth the process of developing, training and validating driving systems, CARLA evolved to become an ecosystem of projects, built around the main platform by the community. In this context, it is important to understand some things about how CARLA works, so as to fully comprehend its capabilities.
---------------
---
## The simulator
The CARLA simulator consists of a scalable client-server architecture.
The server is responsible for everything related to the simulation itself: sensor rendering, computation of physics, updates on the world-state and its actors and much more. As it aims for realistic results, the best fit would be running the server with a dedicated GPU, especially when dealing with machine learning.
The client side consists of a sum of client modules controlling the logic of actors on scene and setting world conditions. This is achieved by leveraging the CARLA API (in Python or C++), a layer that mediates between server and client that is constantly evolving to provide new functionalities.
![CARLA Modules](../img/carla_modules.png)
![CARLA Modules](img/carla_modules.png)
That summarizes the basic structure of the simulator. Understanding CARLA though is much more than that, as many different features and elements coexist within it. Some of these are listed hereunder, as to gain perspective on the capabilities of what CARLA can achieve.
@ -28,7 +28,7 @@ That summarizes the basic structure of the simulator. Understanding CARLA though
* __Open assets:__ CARLA facilitates different maps for urban settings with control over weather conditions and a blueprint library with a wide set of actors to be used. However, these elements can be customized and new can be generated following simple guidelines.
* __Scenario runner:__ In order to ease the learning process for vehicles, CARLA provides a series of routes describing different situations to iterate on. These also set the basis for the [CARLA challenge](https://carlachallenge.org/), open for everybody to test their solutions and make it to the leaderboard.
---------------
---
## The project
CARLA grows fast and steady, widening the range of solutions provided and opening the way for the different approaches to autonomous driving. It does so while never forgetting its open-source nature. The project is transparent, acting as a white box where anybody is granted access to the tools and the development community. In that democratization is where CARLA finds its value.
@ -40,11 +40,11 @@ Welcome to CARLA.
<div class="build-buttons">
<p>
<a href="../../how_to_build_on_linux" target="_blank" class="btn btn-neutral" title="Go to the latest CARLA release">
<a href="../build_linux" target="_blank" class="btn btn-neutral" title="Go to the latest CARLA release">
<b>Linux</b> build</a>
</p>
<p>
<a href="../../how_to_build_on_windows" target="_blank" class="btn btn-neutral" title="Go to the latest CARLA release">
<a href="../build_windows" target="_blank" class="btn btn-neutral" title="Go to the latest CARLA release">
<b>Windows</b> build</a>
</p>
</div>

View File

@ -7,7 +7,7 @@
* [Updating CARLA](#updating-carla)
* [Summary](#summary)
---------------
---
## Requirements
The quickstart installation uses a pre-packaged version of CARLA. This comprises the content in a bundle that can run automatically with no build installation needed. The API can be accessed fully but, in exchange, advanced customization and development options are unavailable.
@ -22,7 +22,7 @@ If you have [pip](https://pip.pypa.io/en/stable/installing/) in your system, you
```sh
pip install --user pygame numpy
```
---------------
---
## Downloading CARLA
<div class="build-buttons">
@ -46,7 +46,7 @@ If you downloaded any additional assets in Linux, move them to the _Import_ fold
./ImportAssets.sh
```
---------------
---
## Running CARLA
Open a terminal in the folder where CARLA was extracted. The following command will execute the package file and start the simulation:
@ -90,21 +90,21 @@ To check all the available configurations, run the following command:
> ./config.py --help
```
---------------
---
## Updating CARLA
The packaged version requires no updates. The content is bundled and thus, tied to a specific version of CARLA. Every time there is a release, the repository will be updated. To run this latest or any other version, delete the previous one and repeat the installation steps with the desired one.
---------------
---
## Summary
That concludes the quickstart installation process. In case any unexpected error or issue occurs, the [CARLA forum](https://forum.carla.org/) is open to everybody. There is an _Installation issues_ category to post this kind of problems and doubts.
So far, CARLA should be operative in the desired system. Terminals will be used to contact the server via script and retrieve data. Thus will access all of the capabilities that CARLA provides. Next step should be visiting the __First steps__ section to learn more about this. However, all the information about the Python API regarding classes and its methods can be accessed in the [Python API reference](../python_api.md).
So far, CARLA should be operative in the desired system. Terminals will be used to contact the server via script and retrieve data, thus accessing all of the capabilities that CARLA provides. The next step should be visiting the __First steps__ section to learn more about this. However, all the information about the Python API regarding classes and its methods can be accessed in the [Python API reference](python_api.md).
<div class="build-buttons">
<p>
<a href="../../core_concepts" target="_blank" class="btn btn-neutral" title="Go to first steps">
<a href="../core_concepts" target="_blank" class="btn btn-neutral" title="Go to first steps">
Go to: First steps</a>
</p>
</div>

View File

@ -8,8 +8,8 @@ Epic Games provides a set of realistic _Automotive Materials_ free to use. In
this document we explain how to download and link these materials to our
vehicles for a more realistic car paint.
Download from Marketplace
-------------------------
---
## Download from Marketplace
Epic Games' [Automotive Materials][automatlink] package can be downloaded for
free from the Unreal Engine Marketplace.
@ -26,8 +26,8 @@ free from the Unreal Engine Marketplace.
[automatlink]: https://www.unrealengine.com/marketplace/automotive-material-pack
Manually link the materials
---------------------------
---
## Manually link the materials
Right after opening the project, you should link the automotive materials you
just downloaded.

View File

@ -1,7 +1,7 @@
# How to add assets
Adding a vehicle
----------------
---
## Adding a vehicle
Follow [Art Guide][artlink] for creating the Skeletal Mesh and Physics Asset. And
[Vehicles User Guide][userguide] for the rest.
@ -53,8 +53,8 @@ Follow [Art Guide][artlink] for creating the Skeletal Mesh and Physics Asset. An
8. Test it, go to CarlaGameMode blueprint and change "Default Pawn Class" to the newly
created car blueprint.
Adding a 2 wheeled vehicle
--------------------------
---
## Adding a 2 wheeled vehicle
Adding 2 wheeled vehicles is similar to adding a 4 wheeled one but due to the complexity of the
animation you'll need to set up additional bones to guide the driver's animation:
@ -121,8 +121,8 @@ Bone Setup:
9. Test it, go to CarlaGameMode blueprint and change "Default Pawn Class" to the newly
created bike blueprint.
Map generation
--------------
---
## Map generation
For the road generation, the following meshes are expected to be found

View File

@ -2,7 +2,7 @@
![Town03](img/create_map_01.jpg)
-----
---
## 1 Create a new map
Files needed:
@ -10,13 +10,13 @@ Files needed:
* Binaries `.fbx` - All meshes you need to build the map, i.e., roads, lane markings, sidewalks, etc.
* OpenDRIVE `.xodr` - Road network information that cars need to circulate on the map.
It is possible to modify an existing CARLA map, check out the [map customization](../dev/map_customization)
It is possible to modify an existing CARLA map, check out the [map customization](tuto_A_map_customization.md)
tutorial.
The following steps will introduce the RoadRunner software for map creation. If the map is
created by other software, go to this [section](#3-importing-into-unreal).
------
---
## 2 Create a new map with RoadRunner
RoadRunner is a powerful software from Vector Zero to create 3D scenes. Using RoadRunner is easy,
@ -77,7 +77,7 @@ _check VectorZeros's [documentation][exportlink]._
[exportlink]: https://tracetransit.atlassian.net/wiki/spaces/VS/pages/752779356/Exporting+to+CARLA
-------
---
## 3 Importing into Unreal
This section is divided into two. The first part shows how to import a map from RoadRunner
@ -88,7 +88,7 @@ and the second part shows how to import a map from other software that generates
i.e. `mapname.fbx` `mapname.xodr`.
We have also created a new way to import assets into Unreal,
check this [`guide`](./asset_packages_for_dist.md)!
check this [`guide`](tuto_A_standalone_packages.md)!
#### 3.1 Importing from RoadRunner
@ -251,7 +251,7 @@ It will read the level's name, search the Opendrive file with the same name and
And that's it! Now the road network information is loaded into the map.
-------
---
## 4. Setting up traffic behavior
Once everything is loaded into the level, it is time to create traffic behavior.
@ -273,7 +273,7 @@ To regulate the traffic, traffic lights and signs must be placed all over the ma
2. Adjust the _[`trigger box`][triggerlink]_ of each traffic light / sign
until it covers the road it affects.
[triggerlink]: ../python_api/#carla.TrafficSign.trigger_volume
[triggerlink]: python_api.md#carla.TrafficSign.trigger_volume
![ue_trafficlight](img/ue_trafficlight.png)
@ -291,7 +291,7 @@ might need some tweaking and testing to fit perfectly into the city.
> _Example: Traffic Signs, Traffic lights and Turn based stop._
----------
---
## 5 Adding pedestrian navigation areas
To make a navigable mesh for pedestrians, we use the _Recast & Detour_ library.<br>
@ -340,7 +340,7 @@ Then build RecastDemo. Follow their [instructions][buildrecastlink] on how to bu
Now pedestrians will be able to spawn randomly and walk on the selected meshes!
----------
---
## Tips and Tricks
* Traffic light group controls which traffic light is active (green state) at each moment.

View File

@ -2,8 +2,8 @@
> _This document is a work in progress and might be incomplete._
Creating a new map
------------------
---
## Creating a new map
!!! Bug
Creating a map from scratch with the Carla tools causes a crash with
@ -13,7 +13,7 @@ Creating a new map
#### Requirements
- Checkout and build Carla from source on [Linux](../how_to_build_on_linux.md) or [Windows](../how_to_build_on_windows.md)
- Checkout and build Carla from source on [Linux](build_linux.md) or [Windows](build_windows.md).
#### Creating
@ -25,7 +25,7 @@ Creating a new map
- You can change the seed until you have a map you are satisfied with.
- After that you can place new PlayerStarts at the places you want the cars to be spawned.
- The AI already works, but the cars won't act randomly. Vehicles will follow the instructions given by the RoadMapGenerator. They will follow the road easily on straight roads but not so much when entering intersections:
![road_instructions_example.png](../img/road_instructions_example.png)
![road_instructions_example.png](img/road_instructions_example.png)
> (This is a debug view of the instructions the road gives to the Vehicle. They will always follow the green arrows, the white points are shared points between one or more routes, by default they order the vehicle to continue straight; Black points are off the road, the vehicle gets no instructions and drives to the left, trying to get back to the road)
- To get a random behavior, you have to place IntersectionEntrances, this will let you redefine the direction the vehicle will take overwriting the directions given by the road map (until they finish their given order).
@ -37,8 +37,8 @@ Creating a new map
Every street at a crossing should have its own turn at green without the other streets having green.
- Then you can populate the world with landscape and buildings.
MultipleFloorBuilding
---------------------
---
## MultipleFloorBuilding
The purpose of this blueprint is to make repeating and varying tall buildings a
bit easier. Provided a Base, a MiddleFloor and a roof; this blueprint repeats
@ -60,8 +60,9 @@ This blueprint is controlled by this 6 specific Parameters:
All of these parameters can be modified once this blueprint is placed in the
world.
SplinemeshRepeater
------------------
---
## SplinemeshRepeater
!!! Bug
See [#35 SplineMeshRepeater loses its collider mesh](https://github.com/carla-simulator/carla/issues/35)
@ -91,7 +92,6 @@ that all the meshes have their pivot placed wherever the repetition starts in
the lower point possible with the rest of the mesh pointing positive (Preferably
by the X axis)
#### Specific Walls (Dynamic material)
In the project folder "Content/Static/Walls" are included some specific assets
@ -119,8 +119,8 @@ The rest of the parameters are the mask the textures and the color corrections
that won't be modified in this instance but in the blueprint that will be
launched into the world.
Weather
-------
---
## Weather
This is the actor in charge of modifying all the lighting, environmental actors
and anything that affects the impression of the climate. It runs automatically

View File

@ -6,8 +6,8 @@ The main objective for importing and exporting assets is to reduce the size of
the distribution build. This is possible since these assets will be imported as
independent packages that can be plugged in anytime inside Carla and also exported.
How to import assets inside Unreal Engine
-----------------------------------------
---
## How to import assets inside Unreal Engine
The first step is to create an empty folder inside the Carla `Import` folder and rename it with any
folder name desired. For simplifying this newly created folder structure, we recommend having
@ -155,7 +155,7 @@ _required files and place them following the structure listed above._
_If the process doesn't work due to different names or other issues, you can always move the assets_
_manually, check this [`tutorial`][importtutorial]_ (_Section 3.2.1 - 6_).
[importtutorial]: ../how_to_make_a_new_map/#32-importing-from-the-files
[importtutorial]: tuto_A_map_creation.md#32-importing-from-the-files
Now we have everything ready for importing assets. To do so, you just need to run the command:
@ -175,10 +175,10 @@ _a new one with the same name._
The imported map won't have collisions, so they should be generated manually. This
[tutorial][collisionlink] (_Section 3.2.1 - 5_) shows how to do it.
[collisionlink]: ../how_to_make_a_new_map/#32-importing-from-the-files
[collisionlink]: tuto_A_map_creation.md#32-importing-from-the-files
How to export assets
--------------------
---
## How to export assets
Once all the packages are imported inside Unreal, users can also generate a **cooked package**
for each of them. This last step is important in order to have all packages ready to add for

View File

@ -1,6 +1,6 @@
# How to model vehicles
------------
---
## 4-Wheeled Vehicles
#### Modelling

View File

@ -5,14 +5,16 @@ the necessary steps to implement a sensor in Unreal Engine 4 (UE4) and expose
its data via CARLA's Python API. We'll follow all the steps by creating a new
sensor as an example.
---
## Prerequisites
In order to implement a new sensor, you'll need to compile CARLA source code,
for detailed instructions on how to achieve this see
[Building from source](../building_from_source.md).
[Building from source](build_linux.md).
This tutorial also assumes the reader is fluent in C++ programming.
---
## Introduction
Sensors in CARLA are a special type of actor that produce a stream of data. Some
@ -31,7 +33,7 @@ In this tutorial, we'll be focusing on server-side sensors.
In order to have a sensor running inside UE4 sending data all the way to a
Python client, we need to cover the whole communication pipeline.
![Communication pipeline](../img/pipeline.png)
![Communication pipeline](img/pipeline.png)
Thus we'll need the following classes covering the different steps of the
pipeline
@ -53,6 +55,7 @@ pipeline
sort of "compile-time plugin system" based on template meta-programming.
Most likely, the code won't compile until all the pieces are present.
---
## Creating a new sensor
[**Full source code here.**](https://gist.github.com/nsubiron/011fd1b9767cd441b1d8467dc11e00f9)
@ -62,12 +65,13 @@ that we'll create a trigger box that detects objects within, and we'll be
reporting status to the client every time a vehicle is inside our trigger box.
Let's call it _Safe Distance Sensor_.
![Trigger box](../img/safe_distance_sensor.jpg)
![Trigger box](img/safe_distance_sensor.jpg)
_For the sake of simplicity we're not going to take into account all the edge
cases, nor will it be implemented in the most efficient way. This is just an
illustrative example._
---
### 1. The sensor actor
This is the most complicated class we're going to create. Here we're running
@ -291,6 +295,7 @@ that, the data is going to travel through several layers. First of them will be
the serializer that we have to create next. We'll fully understand this part
once we have completed the `Serialize` function in the next section.
---
### 2. The sensor data serializer
This class is actually rather simple, it's only required to have two static
@ -360,6 +365,7 @@ SharedPtr<SensorData> SafeDistanceSerializer::Deserialize(RawData &&data) {
except for the fact that we haven't defined yet what's a `SafeDistanceEvent`.
---
### 3. The sensor data object
We need to create a data object for the users of this sensor, representing the
@ -425,6 +431,7 @@ What we're doing here is exposing some C++ methods in Python. Just with this,
the Python API will be able to recognise our new event and it'll behave similar
to an array in Python, except that cannot be modified.
---
### 4. Register your sensor
Now that the pipeline is complete, we're ready to register our new sensor. We do
@ -447,6 +454,7 @@ be a bit cryptic.
make rebuild
```
---
### 5. Usage example
Finally, we have the sensor included and we have finished recompiling, our
@ -486,6 +494,7 @@ That's it, we have a new sensor working!
- - -
---
## Appendix: Reusing buffers
In order to optimize memory usage, we can use the fact that each sensor sends
@ -523,6 +532,7 @@ buffer.reset(512u); // (size 512 bytes, capacity 1024 bytes)
buffer.reset(2048u); // (size 2048 bytes, capacity 2048 bytes) -> allocates
```
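The size/capacity behaviour above can be modelled with a small Python sketch (a toy model, not the actual C++ `Buffer` class):

```py
class Buffer(object):
    """Toy model of the reusable buffer: capacity only grows."""

    def __init__(self):
        self.size = 0
        self.capacity = 0

    def reset(self, size):
        # Allocate only when the request exceeds the current capacity.
        if size > self.capacity:
            self.capacity = size  # allocates
        self.size = size

buf = Buffer()
buf.reset(1024)  # size 1024, capacity 1024 -> allocates
buf.reset(512)   # size 512,  capacity 1024 (no allocation)
buf.reset(2048)  # size 2048, capacity 2048 -> allocates
```

Because the capacity never shrinks, repeated measurements of similar size reuse the same allocation.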
---
## Appendix: Sending data asynchronously
Some sensors may require to send data asynchronously, either for performance or
@ -546,6 +556,7 @@ void MySensor::Tick(float DeltaSeconds)
}
```
---
## Appendix: Client-side sensors
Some sensors do not require the simulator to do their measurements, those

View File

@ -1,5 +1,7 @@
# How to generate the pedestrian navigation info
### Introduction
---
## Introduction
Pedestrians need information about the map in a specific format in order to walk. The file that describes the map for navigation is a binary file with the `.BIN` extension, saved in the **Nav** folder of the map. Each map needs a `.BIN` file with the same name as the map, so it can be loaded automatically with the map.
@ -11,7 +13,8 @@ If we need to generate this `.BIN` file for a custom map, we need to follow this
* Rebuild the `.BIN` file with RecastBuilder
* Copy the `.BIN` file in a `Nav/` folder with the map
### Export meshes
---
## Export meshes
We have several types of meshes for navigation. The meshes need to be identified as one of those types, using specific nomenclature.
@ -38,7 +41,8 @@ Once we have all meshes with the proper nomenclature and tagged the ones that we
The `.OBJ` file will be saved with the name of the map with extension OBJ in the **CarlaUE4/Saved** folder of UE4.
### Rebuild the navigation binary
---
## Rebuild the navigation binary
The Recast & Detour library comes with an executable file that is used to generate the final `.BIN` file.
The executable uses parameters that work on CARLA by default; you only need to pass the `.OBJ` exported above as a parameter, and the `.BIN` will be created.

View File

@ -3,7 +3,7 @@
> _This document is meant for developers that want to publish a new release._
1. **Make sure content is up-to-date.**<br>
See [How to upgrade content](how_to_upgrade_content.md).
See [Upgrade the content](tuto_D_contribute_assets.md).
2. **Increase CARLA version where necessary.**<br>
Increase version in the following files: _DefaultGame.ini_, _Carla.uplugin_,

View File

@ -9,6 +9,7 @@ These properties are controlled through a
which also provides the control of each wheel's physics through a
[carla.WheelPhysicsControl](/python_api/#carla.WheelPhysicsControl) object.
---
## Example
```py

View File

@ -10,7 +10,8 @@ all classes and methods available can be found at
The user should read the first steps tutorial before reading this document.
[Core concepts](core_concepts.md).
### Walker skeleton structure
---
## Walker skeleton structure
All walkers have the same skeleton hierarchy and bone names. Below is an image of the skeleton
hierarchy.
@ -84,7 +85,8 @@ crl_root
└── crl_toeEnd__R
```
### How to manually control a walker's bones
---
## How to manually control a walker's bones
Following is a detailed step-by-step example of how to change the bone transforms of a walker
from the CARLA Python API

View File

@ -5,7 +5,7 @@
- class_name: Actor
# - DESCRIPTION ------------------------
doc: >
CARLA defines actors as anything that plays a role in the simulation or can be moved around. That includes: pedestrians, vehicles, sensors and traffic signs (considering traffic lights as part of these). Actors are spawned in the simulation by carla.World and they need a carla.ActorBlueprint to be created. These blueprints belong to a library provided by CARLA; find more about them [here](../bp_library/).
CARLA defines actors as anything that plays a role in the simulation or can be moved around. That includes: pedestrians, vehicles, sensors and traffic signs (considering traffic lights as part of these). Actors are spawned in the simulation by carla.World and they need a carla.ActorBlueprint to be created. These blueprints belong to a library provided by CARLA; find more about them [here](bp_library.md).
# - PROPERTIES -------------------------
instance_variables:
- var_name: attributes
@ -23,7 +23,7 @@
- var_name: semantic_tags
type: list(int)
doc: >
A list of semantic tags provided by the blueprint listing components for this actor. E.g. a traffic light could be tagged with "pole" and "traffic light". These tags are used by the semantic segmentation sensor. Find more about this and other sensors [here](../cameras_and_sensors/#sensor.camera.semantic_segmentation).
A list of semantic tags provided by the blueprint listing components for this actor. E.g. a traffic light could be tagged with "pole" and "traffic light". These tags are used by the semantic segmentation sensor. Find more about this and other sensors [here](ref_sensors.md#semantic-segmentation-camera).
- var_name: type_id
type: str
doc: >
@ -321,7 +321,7 @@
- class_name: TrafficLightState
# - DESCRIPTION ------------------------
doc: >
All possible states for traffic lights. These can either change at a specific time step or be changed manually. Take a look at this [recipe](../python_cookbook/#traffic-lights-recipe) to see an example.
All possible states for traffic lights. These can either change at a specific time step or be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to see an example.
# - PROPERTIES -------------------------
instance_variables:
- var_name: Green
@ -337,7 +337,7 @@
doc: >
A traffic light actor, considered a specific type of traffic sign. As traffic lights will mostly appear at junctions, they belong to a group which contains the different traffic lights in it. Inside the group, traffic lights are differentiated by their pole index.
Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually. Take a look at this [recipe](../python_cookbook/#traffic-lights-recipe) to learn how to do so.
Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to learn how to do so.
# - PROPERTIES -------------------------
instance_variables:
- var_name: state
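The manual override described above can be sketched as follows (`CYCLE` and `force_green` are illustrative helpers; applying the override requires a running simulator, so the carla import is deferred):

```python
CYCLE = ["Green", "Yellow", "Red"]

def next_state(state):
    # Pure helper mirroring the cyclic pattern described above.
    return CYCLE[(CYCLE.index(state) + 1) % len(CYCLE)]

def force_green(traffic_light):
    # Requires a running CARLA server and a carla.TrafficLight actor.
    import carla
    traffic_light.set_state(carla.TrafficLightState.Green)  # override the cycle
    traffic_light.freeze(True)  # keep the light from cycling back to red
```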

View File

@ -235,7 +235,7 @@
doc: >
A class that contains the blueprints provided for actor spawning. Its main application is to return carla.ActorBlueprint objects needed to spawn actors. Each blueprint has an identifier and attributes that may or may not be modifiable. The library is automatically created by the server and can be accessed through carla.World.
[Here](../bp_library/) is a reference containing every available blueprint and its specifics.
[Here](bp_library.md) is a reference containing every available blueprint and its specifics.
# - METHODS ----------------------------
methods:
- def_name: __getitem__

View File

@ -9,7 +9,7 @@
doc: >
The Client connects CARLA to the server which runs the simulation. Both server and client contain a CARLA library (libcarla) with some differences that allow communication between them. Many clients can be created and each of these will connect to the RPC server inside the simulation to send commands. The simulation runs server-side. Once the connection is established, the client will only receive data retrieved from the simulation. Walkers are the exception. The client is in charge of managing pedestrians so, if you are running a simulation with multiple clients, some issues may arise. For example, if you spawn walkers through different clients, collisions may happen, as each client is only aware of the ones it is in charge of.
The client also has a recording feature that saves all the information of a simulation while running it. This allows the server to replay it at will to obtain information and experiment with it. [Here](recorder_and_playback.md) is some information about how to use this recorder.
The client also has a recording feature that saves all the information of a simulation while running it. This allows the server to replay it at will to obtain information and experiment with it. [Here](adv_recorder.md) is some information about how to use this recorder.
# - PROPERTIES -------------------------
instance_variables:
# - METHODS ----------------------------
@ -223,7 +223,7 @@
doc: >
When true, will show all the details per frame (traffic light states, positions of all actors, orientation and animation data...), but by default it will only show a summary.
doc: >
The information saved by the recorder will be parsed and shown in your terminal as text (frames, times, events, state, positions...). The information shown can be specified by using the `show_all` parameter. [Here](recorder_binary_file_format.md) is some more information about how to read the recorder file.
The information saved by the recorder will be parsed and shown in your terminal as text (frames, times, events, state, positions...). The information shown can be specified by using the `show_all` parameter. [Here](ref_recorder_binary_file_format.md) is some more information about how to read the recorder file.
# --------------------------------------
- def_name: start_recorder
params:
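The recorder query described above can be sketched as follows (`recording01.log` and the connection parameters are placeholders, and a running simulator is required, hence the deferred import):

```python
def recorder_summary(host="localhost", port=2000, log="recording01.log"):
    # Requires a running CARLA server; returns the parsed recording as text.
    import carla
    client = carla.Client(host, port)
    client.set_timeout(10.0)
    # show_all=False keeps only the frames where an event was registered;
    # show_all=True dumps the full per-frame state instead.
    return client.show_recorder_file_info(log, False)
```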

View File

@ -139,7 +139,7 @@
- class_name: WalkerBoneControl
# - DESCRIPTION ------------------------
doc: >
This class grants bone specific manipulation for walker. The skeletons of walkers have been unified for clarity and the transform applied to each bone are always relative to its parent. Take a look [here](walker_bone_control.md) to learn more on how to create a walker and define its movement.
This class grants bone specific manipulation for walker. The skeletons of walkers have been unified for clarity and the transform applied to each bone are always relative to its parent. Take a look [here](tuto_G_control_walker_skeletons.md) to learn more on how to create a walker and define its movement.
# - PROPERTIES -------------------------
instance_variables:
- var_name: bone_transforms

View File

@ -362,7 +362,7 @@
- class_name: BoundingBox
# - DESCRIPTION ------------------------
doc: >
Helper class defining a box location and its dimensions that will later be used by carla.DebugHelper or a carla.Client to draw shapes and detect collisions. Bounding boxes normally act for object colliders. Check out this [recipe](../python_cookbook/#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
Helper class defining a box location and its dimensions that will later be used by carla.DebugHelper or a carla.Client to draw shapes and detect collisions. Bounding boxes normally act for object colliders. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
# - PROPERTIES -------------------------
instance_variables:
- var_name: location
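The bounding-box recipe referenced above can be sketched as follows (`draw_actor_box` is an illustrative helper; it assumes a running simulator and an actor that exposes a collider, e.g. a vehicle):

```python
def draw_actor_box(world, actor, seconds=10.0):
    # Requires a running CARLA server; draws the actor's collider in world
    # space for a few seconds using the debug helper.
    import carla
    transform = actor.get_transform()
    box = carla.BoundingBox(transform.location, actor.bounding_box.extent)
    world.debug.draw_box(box, transform.rotation,
                         thickness=0.1,
                         color=carla.Color(255, 0, 0),
                         life_time=seconds)
```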

View File

@ -6,7 +6,7 @@
- class_name: LaneType
# - DESCRIPTION ------------------------
doc: >
Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standard defines the road information. For instance, in this [recipe](../python_cookbook/#lanes-recipe) the user creates a carla.Waypoint for the current location of a vehicle and uses it to get the current and adjacent lane types.
Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standard defines the road information. For instance, in this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a carla.Waypoint for the current location of a vehicle and uses it to get the current and adjacent lane types.
# - PROPERTIES -------------------------
instance_variables:
- var_name: NONE
@ -58,7 +58,7 @@
- class_name: LaneChange
# - DESCRIPTION ------------------------
doc: >
Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every carla.Waypoint according to the OpenDRIVE file. In this [recipe](../python_cookbook/#lanes-recipe) the user creates a waypoint for a current vehicle position and learns which turns are permitted.
Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every carla.Waypoint according to the OpenDRIVE file. In this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a waypoint for a current vehicle position and learns which turns are permitted.
# - PROPERTIES -------------------------
instance_variables:
- var_name: NONE
@ -99,7 +99,7 @@
- class_name: LaneMarkingType
# - DESCRIPTION ------------------------
doc: >
Class that defines the lane marking types accepted by OpenDRIVE 1.4. Take a look at this [recipe](../python_cookbook/#lanes-recipe) where the user creates a carla.Waypoint for a vehicle location and retrieves from it the information about adjacent lane markings.
Class that defines the lane marking types accepted by OpenDRIVE 1.4. Take a look at this [recipe](ref_code_recipes.md#lanes-recipe) where the user creates a carla.Waypoint for a vehicle location and retrieves from it the information about adjacent lane markings.
__Note on double types:__ Lane markings are defined under the OpenDRIVE standard that determines whether a line will be considered "BrokenSolid" or "SolidBroken". For each road there is a center lane marking, defined from left to right regarding the lane's directions. The rest of the lane markings are defined in order from the center lane outwards to the edge of the road.
# - PROPERTIES -------------------------
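The lanes recipe referenced by these classes can be sketched as follows (`lane_info` is an illustrative helper that assumes a running simulator and a spawned vehicle):

```python
def lane_info(world, vehicle):
    # Requires a running CARLA server; projects the vehicle's location onto
    # the road and reads the lane data stored in the OpenDRIVE map.
    waypoint = world.get_map().get_waypoint(vehicle.get_location())
    return {
        "lane_type": str(waypoint.lane_type),        # e.g. 'Driving'
        "lane_change": str(waypoint.lane_change),    # permitted lane changes
        "left_marking": str(waypoint.left_lane_marking.type),
        "right_marking": str(waypoint.right_lane_marking.type),
    }
```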

View File

@ -33,7 +33,7 @@
- class_name: ColorConverter
# - DESCRIPTION ------------------------
doc: >
Class that defines conversion patterns that can be applied to a carla.Image in order to show information provided by carla.Sensor. Depth conversions cause a loss of accuracy, as sensors detect depth as <b>float</b> that is then converted to a grayscale value between 0 and 255. Take a look at this [recipe](../python_cookbook/#converted-image-recipe) to see an example of how to create and save image data for <b>sensor.camera.semantic_segmentation</b>.
Class that defines conversion patterns that can be applied to a carla.Image in order to show information provided by carla.Sensor. Depth conversions cause a loss of accuracy, as sensors detect depth as <b>float</b> that is then converted to a grayscale value between 0 and 255. Take a look at this [recipe](ref_code_recipes.md#converted-image-recipe) to see an example of how to create and save image data for <b>sensor.camera.semantic_segmentation</b>.
# - PROPERTIES -------------------------
instance_variables:
- var_name: CityScapesPalette

View File

@ -104,7 +104,7 @@
- class_name: WorldSettings
# - DESCRIPTION ------------------------
doc: >
The simulation has some advanced configuration options that are contained in this class and can be managed using carla.World and its methods. These allow the user to choose between client-server synchrony/asynchrony, activation of "no rendering mode" and whether the simulation should run with a fixed or variable time-step. Check [this](../configuring_the_simulation/) out if you want to learn about it.
The simulation has some advanced configuration options that are contained in this class and can be managed using carla.World and its methods. These allow the user to choose between client-server synchrony/asynchrony, activation of "no rendering mode" and whether the simulation should run with a fixed or variable time-step. Check [this](adv_synchrony_timestep.md) out if you want to learn about it.
# - PROPERTIES -------------------------
instance_variables:
- var_name: synchronous_mode
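Managing these settings through carla.World can be sketched as follows (the `0.05` fixed time-step is an example value; a running simulator is assumed):

```python
def enable_sync_mode(world, delta=0.05):
    # Requires a running CARLA server. Switches the simulation to synchronous
    # mode with a fixed time-step; the client must then tick the server.
    settings = world.get_settings()
    settings.synchronous_mode = True      # server waits for a client tick
    settings.fixed_delta_seconds = delta  # fixed simulation time-step (seconds)
    settings.no_rendering_mode = False    # keep rendering enabled
    world.apply_settings(settings)
    return settings
```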
@ -170,7 +170,7 @@
- class_name: AttachmentType
# - DESCRIPTION ------------------------
doc: >
Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is especially useful for cameras and sensors. [Here](../python_cookbook/#attach-sensors-recipe) is a brief recipe in which we can see how sensors can be attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is especially useful for cameras and sensors. [Here](ref_code_recipes.md#attach-sensors-recipe) is a brief recipe in which we can see how sensors can be attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
# - PROPERTIES -------------------------
instance_variables:
@ -179,7 +179,7 @@
With this fixed attachment the object follows its parent position strictly.
- var_name: SpringArm
doc: >
An attachment that expands or retracts depending on the camera situation. SpringArms are an Unreal Engine component so [check this out](../python_cookbook/#attach-sensors-recipe) to learn some more about them.
An attachment that expands or retracts depending on the camera situation. SpringArms are an Unreal Engine component so [check this out](ref_code_recipes.md#attach-sensors-recipe) to learn some more about them.
# --------------------------------------
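Spawning a camera with the SpringArm attachment can be sketched as follows (`attach_chase_camera` is an illustrative helper and the transform values are example offsets; a running simulator is assumed):

```python
def attach_chase_camera(world, blueprint_library, vehicle):
    # Requires a running CARLA server; spawns an RGB camera that follows the
    # vehicle with the elastic SpringArm attachment described above.
    import carla
    camera_bp = blueprint_library.find("sensor.camera.rgb")
    transform = carla.Transform(carla.Location(x=-5.5, z=2.8),
                                carla.Rotation(pitch=-15))
    return world.spawn_actor(camera_bp, transform,
                             attach_to=vehicle,
                             attachment_type=carla.AttachmentType.SpringArm)
```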
- class_name: World
@ -361,7 +361,7 @@
- class_name: DebugHelper
# - DESCRIPTION ------------------------
doc: >
Helper class part of carla.World that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Check out this [recipe](../python_cookbook/#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
Helper class part of carla.World that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
# - METHODS ----------------------------
methods:
- def_name: draw_point

View File

@ -8,15 +8,15 @@ extra_css: [extra.css]
nav:
- Home: 'index.md'
- Getting started:
- 'Introduction': 'getting_started/introduction.md'
- 'Quick start': 'getting_started/quickstart.md'
- 'Introduction': 'start_introduction.md'
- 'Quickstart installation': 'start_quickstart.md'
- Building CARLA:
- 'Linux build': 'how_to_build_on_linux.md'
- 'Windows build': 'how_to_build_on_windows.md'
- 'Update CARLA': 'update_carla.md'
- 'Build system': 'dev/build_system.md'
- 'Running in a Docker': 'carla_docker.md'
- 'F.A.Q.': 'faq.md'
- 'Linux build': 'build_linux.md'
- 'Windows build': 'build_windows.md'
- 'Update CARLA': 'build_update.md'
- 'Build system': 'build_system.md'
- 'Running in a Docker': 'build_docker.md'
- 'F.A.Q.': 'build_faq.md'
- First steps:
- 'Core concepts': 'core_concepts.md'
- '1st. World and client': 'core_world.md'
@ -24,36 +24,37 @@ nav:
- '3rd. Maps and navigation': 'core_map.md'
- '4th. Sensors and data': 'core_sensors.md'
- Advanced steps:
- 'Recorder': 'recorder_and_playback.md'
- 'Rendering options': 'rendering_options.md'
- 'Synchrony and time-step': 'simulation_time_and_synchrony.md'
- 'Recorder': 'adv_recorder.md'
- 'Rendering options': 'adv_rendering_options.md'
- 'Synchrony and time-step': 'adv_synchrony_timestep.md'
- References:
- 'Python API reference': 'python_api.md'
- 'Code recipes': 'python_cookbook.md'
- 'Code recipes': 'ref_code_recipes.md'
- 'Blueprint Library': 'bp_library.md'
- 'C++ reference' : 'cpp_reference.md'
- 'Recorder binary file format': 'recorder_binary_file_format.md'
- 'C++ reference' : 'ref_cpp.md'
- 'Recorder binary file format': 'ref_recorder_binary_file_format.md'
- "Sensors reference": 'ref_sensors.md'
- How to... (general):
- 'Add a new sensor': 'dev/how_to_add_a_new_sensor.md'
- 'Add friction triggers': "how_to_add_friction_triggers.md"
- 'Control vehicle physics': "how_to_control_vehicle_physics.md"
- 'Control walker skeletons': "walker_bone_control.md"
- 'Creating standalone asset packages for distribution': 'asset_packages_for_dist.md'
- 'Generate pedestrian navigation': 'how_to_generate_pedestrians_navigation.md'
- "Link Epic's Automotive Materials": 'epic_automotive_materials.md'
- 'Map customization': 'dev/map_customization.md'
- How to... (content):
- 'Add assets': 'how_to_add_assets.md'
- 'Create and import a new map': 'how_to_make_a_new_map.md'
- 'Model vehicles': 'how_to_model_vehicles.md'
- Tutorials (general):
- 'Add friction triggers': "tuto_G_add_friction_triggers.md"
- 'Control vehicle physics': "tuto_G_control_vehicle_physics.md"
- 'Control walker skeletons': "tuto_G_control_walker_skeletons.md"
- Tutorials (assets):
- 'Import new assets': 'tuto_A_import_assets.md'
- 'Map creation': 'tuto_A_map_creation.md'
- 'Map customization': 'tuto_A_map_customization.md'
- 'Standalone asset packages': 'tuto_A_standalone_packages.md'
- "Use Epic's Automotive materials": 'tuto_A_epic_automotive_materials.md'
- 'Vehicle modelling': 'tuto_A_vehicle_modelling.md'
- Tutorials (developers):
- 'Contribute with assets': 'tuto_D_contribute_assets.md'
- 'Create a sensor': 'tuto_D_create_sensor.md'
- 'Make a release': 'tuto_D_make_release.md'
- 'Generate pedestrian navigation': 'tuto_D_generate_pedestrian_navigation.md'
- Contributing:
- 'Contribution guidelines': 'CONTRIBUTING.md'
- 'Coding standard': 'coding_standard.md'
- 'Documentation standard': 'doc_standard.md'
- 'Make a release': 'dev/how_to_make_a_release.md'
- 'Upgrade the content': 'dev/how_to_upgrade_content.md'
- 'Code of conduct': 'CODE_OF_CONDUCT.md'
- 'Contribution guidelines': 'cont_contribution_guidelines.md'
- 'Code of conduct': 'cont_code_of_conduct.md'
- 'Coding standard': 'cont_coding_standard.md'
- 'Documentation standard': 'cont_doc_standard.md'
markdown_extensions:
- admonition