Merge branch 'dev' into marcel/ubuntu_20

This commit is contained in:
Marcel Pi 2022-04-19 17:54:30 +02:00
commit bc714f3614
130 changed files with 3803 additions and 255 deletions

View File

@ -1,5 +1,10 @@
## Latest
* Added 4 new attributes to all vehicles:
- `base_type` can be used as a vehicle classification. The possible values are *car*, *truck*, *van*, *motorcycle* and *bicycle*.
- `special_type` provides more information about the vehicle. It is currently restricted to *electric*, *emergency* and *taxi*, and not all vehicles have this attribute filled.
- `has_dynamic_doors` can either be *true* or *false* depending on whether the vehicle has doors that can be opened using the API.
- `has_lights` works in the same way as *has_dynamic_doors*, but differentiates between vehicles with lights and those without.
* Added native Ackermann controller:
- `apply_ackermann_control`: to apply an Ackermann control command to a vehicle
- `get_ackermann_controller_settings`: to get the last Ackermann controller settings applied
@ -8,6 +13,7 @@
* Added `NormalsSensor`, a new sensor with normals information
* Added support for N-wheeled vehicles
* Added support for new batch commands ConsoleCommand, ApplyLocation (to actor), SetTrafficLightState
* Added new API function: `set_day_night_cycle` at the LightManager, to (de)activate the automatic switching of the lights when the simulation changes from day to night, and vice versa.
* Switch to boost::variant2 for rpc::Command as that allows more than 20 RPC commands
* Added post process effects for rainy and dusty weathers.

View File

@ -0,0 +1,78 @@
# 3rd Party Integrations
CARLA has been developed to integrate with several 3rd party applications in order to maximise its utility and extensibility. The following integrations are available:
- [__ROS bridge__](https://carla.readthedocs.io/projects/ros-bridge/en/latest/)
- [__SUMO__](adv_sumo.md)
- [__Scenic__](tuto_G_scenic.md)
- [__CarSim__](tuto_G_carsim_integration.md)
- [__Chrono__](tuto_G_chrono.md)
- [__OpenDRIVE__](adv_opendrive.md)
- [__PTV Vissim__](adv_ptv.md)
- [__RSS__](adv_rss.md)
- [__AWS and RLlib__](tuto_G_rllib_integration.md)
---
## ROS bridge
Full documentation of the ROS bridge is found [__here__](https://carla.readthedocs.io/projects/ros-bridge/en/latest/).
The ROS bridge enables two-way communication between ROS and CARLA. The information from the CARLA server is translated to ROS topics. In the same way, the messages sent between nodes in ROS get translated to commands to be applied in CARLA.
The ROS bridge is compatible with both ROS 1 and ROS 2.
The ROS bridge boasts the following features:
- Provides sensor data for LIDAR, Semantic LIDAR, Cameras (depth, segmentation, RGB, DVS), GNSS, Radar and IMU.
- Provides object data such as transforms, traffic light status, visualisation markers, collision and lane invasion.
- Control of AD agents through steering, throttle and brake.
- Control of aspects of the CARLA simulation like synchronous mode, playing and pausing the simulation and setting simulation parameters.
---
## SUMO
CARLA has developed a co-simulation feature with [__SUMO__](https://www.eclipse.org/sumo/). This allows tasks to be distributed between the two simulators at will, exploiting the capabilities of each in favour of the user.
Please refer to the full documentation [__here__](adv_sumo.md).
---
## PTV Vissim
[__PTV Vissim__](https://www.ptvgroup.com/en/solutions/products/ptv-vissim/) is a proprietary software package providing a comprehensive traffic simulation solution with a powerful GUI. To use PTV Vissim with CARLA, refer to [__this guide__](adv_ptv.md).
---
## Scenic
Scenic is a set of libraries and a language for scenario specification and scene generation. CARLA and Scenic can work seamlessly together; read [__this guide__](tuto_G_scenic.md) to understand how to use Scenic with CARLA.
If you need to learn more about Scenic, then read their ["Getting Started with Scenic"](https://scenic-lang.readthedocs.io/en/latest/quickstart.html) guide and have a look at their tutorials for creating [static](https://scenic-lang.readthedocs.io/en/latest/tutorials/tutorial.html) and [dynamic](https://scenic-lang.readthedocs.io/en/latest/tutorials/dynamics.html) scenarios.
---
## CarSim
CARLA's integration with CarSim allows vehicle controls in CARLA to be forwarded to CarSim. CarSim will do all required physics calculations of the vehicle and return the new state to CARLA.
Learn how to use CARLA alongside CarSim [here](tuto_G_carsim_integration.md).
## OpenDRIVE
[__OpenDRIVE__](https://www.asam.net/standards/detail/opendrive/) is an open format specification used to describe the logic of a road network. It is intended to standardise the description of road networks in digital format and allow different applications to exchange data on road networks. Please refer to the full documentation [__here__](adv_opendrive.md).
## RSS - Responsibility Sensitive Safety
CARLA integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in the client library. This feature allows users to investigate behaviours of RSS without having to implement anything. CARLA will take care of providing the input, and applying the output to the AD systems on the fly. Refer to the full documentation [__here__](adv_rss.md).
## AWS and RLlib integration
The RLlib integration brings support for the Ray/RLlib library to CARLA, allowing easy use of the CARLA environment for training and inference purposes. Ray is an open source framework that provides a simple, universal API for building distributed applications. Ray is packaged with RLlib, a scalable reinforcement learning library, and Tune, a scalable hyperparameter tuning library. Read more about operating CARLA on AWS and RLlib [__here__](tuto_G_rllib_integration.md).
## Chrono physics
[__Chrono__](https://projectchrono.org/) is a multi-physics simulation engine providing highly realistic vehicle dynamics using templates. CARLA's Chrono integration allows CARLA users to add Chrono templates to simulate vehicle dynamics. Please refer to the full documentation [__here__](tuto_G_chrono.md).
---

View File

@ -616,6 +616,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.audi.etron</font>**
@ -624,6 +629,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.audi.tt</font>**
@ -632,6 +641,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.bh.crossbike</font>**
@ -641,6 +654,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.bmw.grandtourer</font>**
@ -649,6 +666,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.carlamotors.carlacola</font>**
@ -657,6 +678,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.carlamotors.firetruck</font>**
@ -665,6 +690,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.chevrolet.impala</font>**
@ -673,6 +702,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.citroen.c3</font>**
@ -681,6 +714,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.diamondback.century</font>**
@ -690,6 +727,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.dodge.charger_2020</font>**
@ -698,6 +739,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.dodge.charger_police</font>**
@ -706,6 +751,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.dodge.charger_police_2020</font>**
@ -714,6 +763,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.ford.ambulance</font>**
@ -722,6 +775,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.ford.crown</font>**
@ -730,6 +787,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.ford.mustang</font>**
@ -738,6 +799,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.gazelle.omafiets</font>**
@ -747,6 +812,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.harley-davidson.low_rider</font>**
@ -756,6 +825,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.jeep.wrangler_rubicon</font>**
@ -764,6 +837,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.kawasaki.ninja</font>**
@ -773,6 +850,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.lincoln.mkz_2017</font>**
@ -781,6 +862,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.lincoln.mkz_2020</font>**
@ -790,6 +875,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.mercedes.coupe</font>**
@ -798,6 +887,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.mercedes.coupe_2020</font>**
@ -806,6 +899,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.mercedes.sprinter</font>**
@ -814,6 +911,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.micro.microlino</font>**
@ -822,6 +923,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.mini.cooper_s</font>**
@ -830,6 +935,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.mini.cooper_s_2021</font>**
@ -838,6 +947,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.nissan.micra</font>**
@ -846,6 +959,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.nissan.patrol</font>**
@ -854,6 +971,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.nissan.patrol_2021</font>**
@ -862,6 +983,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.seat.leon</font>**
@ -870,6 +995,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.tesla.cybertruck</font>**
@ -877,6 +1006,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.tesla.model3</font>**
@ -885,6 +1018,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.toyota.prius</font>**
@ -893,6 +1030,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.vespa.zx125</font>**
@ -902,6 +1043,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.volkswagen.t2</font>**
@ -910,6 +1055,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.volkswagen.t2_2021</font>**
@ -918,6 +1067,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>
- **<font color="#498efc">vehicle.yamaha.yzf</font>**
@ -927,6 +1080,10 @@ Check out the [introduction to blueprints](core_actors.md).
- `generation` (_Int_)
- `number_of_wheels` (_Int_)
- `object_type` (_String_)
- `base_type` (_String_)
- `special_type` (_String_)
- `has_dynamic_doors` (_Bool_)
- `has_lights` (_Bool_)
- `role_name` (_String_) <sub>_- Modifiable_</sub>
- `sticky_control` (_Bool_) <sub>_- Modifiable_</sub>

Docs/build_carla.md Normal file
View File

@ -0,0 +1,15 @@
# Building CARLA from source
Users can build CARLA from the source code for development purposes. This is recommended if you want to add extra features or capabilities to CARLA or if you want to use the Unreal Editor to create assets or manipulate maps.
Build instructions are available for Linux and Windows. You can also build CARLA in a Docker container for deployment in AWS, Azure or Google cloud services. Visit the [__CARLA GitHub__](https://github.com/carla-simulator/carla) and clone the repository.
* [__Linux build__](build_linux.md)
* [__Windows build__](build_windows.md)
* [__Docker__](build_docker.md)
* [__Docker with Unreal__](build_docker_unreal.md)
* [__Updating CARLA__](build_update.md)
* [__Build system__](build_system.md)
* [__FAQ__](build_faq.md)

View File

@ -27,7 +27,6 @@ If you come across errors or difficulties then have a look at the **[F.A.Q.](bui
* __An adequate GPU.__ CARLA aims for realistic simulations, so the server needs at least a 6 GB GPU although 8 GB is recommended. A dedicated GPU is highly recommended for machine learning.
* __Two TCP ports and good internet connection.__ 2000 and 2001 by default. Make sure that these ports are not blocked by firewalls or any other applications.
!!! Warning
__If you are upgrading from CARLA 0.9.12 to 0.9.13__: you must first upgrade the CARLA fork of the UE4 engine to the latest version. See the [__Unreal Engine__](#unreal-engine) section for details on upgrading UE4.

View File

@ -33,7 +33,6 @@ In this section you will find details of system requirements, minor and major so
* __An adequate GPU.__ CARLA aims for realistic simulations, so the server needs at least a 6 GB GPU although 8 GB is recommended. A dedicated GPU is highly recommended for machine learning.
* __Two TCP ports and good internet connection.__ 2000 and 2001 by default. Make sure that these ports are not blocked by firewalls or any other applications.
!!! Warning
__If you are upgrading from CARLA 0.9.12 to 0.9.13__: you must first upgrade the CARLA fork of the UE4 engine to the latest version. See the [__Unreal Engine__](#unreal-engine) section for details on upgrading UE4.

View File

@ -1,6 +1,6 @@
# 2nd. Actors and blueprints
# Actors and blueprints
Actors not only include vehicles and walkers, but also sensors, traffic signs, traffic lights, and the spectator. It is crucial to have full understanding on how to operate on them.
Actors in CARLA are the elements that perform actions within the simulation, and they can affect other actors. Actors in CARLA include vehicles and walkers, but also sensors, traffic signs, traffic lights, and the spectator. It is crucial to have a full understanding of how to operate on them.
This section will cover spawning, destruction, types, and how to manage them. However, the possibilities are almost endless. Experiment, take a look at the __tutorials__ in this documentation and share doubts and ideas in the [CARLA forum](https://github.com/carla-simulator/carla/discussions/).

View File

@ -1,4 +1,4 @@
# 3rd. Maps and navigation
# Maps and navigation
After discussing the world and its actors, it is time to put everything into place and understand the map and how the actors navigate it.
@ -15,6 +15,15 @@ After discussing about the world and its actors, it is time to put everything in
- [__CARLA maps__](#carla-maps)
- [Non-layered maps](#non-layered-maps)
- [Layered maps](#layered-maps)
- [__Custom maps__](#custom-maps)
- [Overview](tuto_M_custom_map_overview.md)
- [Road painting](tuto_M_custom_road_painter.md)
- [Custom buildings](tuto_M_custom_buildings.md)
- [Generate map](tuto_M_generate_map.md)
- [Add map package](tuto_M_add_map_package.md)
- [Add map source](tuto_M_add_map_source.md)
- [Alternative methods](tuto_M_add_map_alternative.md)
---
## The map
@ -266,20 +275,15 @@ See an example of all layers being loaded and unloaded in sequence:
---
That is a wrap on maps and navigation in CARLA. The next step takes a closer look at sensor types and the data they retrieve.
Keep reading to learn more or visit the forum to post any doubts or suggestions that have come to mind during this reading.
<div text-align: center>
<div class="build-buttons">
<p>
<a href="https://github.com/carla-simulator/carla/discussions/" target="_blank" class="btn btn-neutral" title="CARLA forum">
CARLA forum</a>
</p>
</div>
<div class="build-buttons">
<p>
<a href="../core_sensors" target="_blank" class="btn btn-neutral" title="4th. Sensors and data">
4th. Sensors and data</a>
</p>
</div>
</div>
## Custom maps
CARLA is designed to be extensible and highly customisable for specialist applications. Therefore, in addition to the many maps and assets already available in CARLA out of the box, it is possible to create and import new maps, road networks and assets to populate bespoke environments in a CARLA simulation. The following documents detail the steps needed to build and integrate custom maps:
* [__Overview__](tuto_M_custom_map_overview.md)
* [__Road painting__](tuto_M_custom_road_painter.md)
* [__Custom buildings__](tuto_M_custom_buildings.md)
* [__Generate map__](tuto_M_generate_map.md)
* [__Add map package__](tuto_M_add_map_package.md)
* [__Add map source__](tuto_M_add_map_source.md)
* [__Alternative methods__](tuto_M_add_map_alternative.md)

View File

@ -1,4 +1,4 @@
# 4th. Sensors and data
# Sensors and data
Sensors are actors that retrieve data from their surroundings. They are crucial for creating a learning environment for driving agents.
@ -12,7 +12,8 @@ This page summarizes everything necessary to start handling sensors. It introduc
* [__Types of sensors__](#types-of-sensors)
* [Cameras](#cameras)
* [Detectors](#detectors)
* [Other](#other)
* [Other](#other)
* [__Sensors reference__](ref_sensors.md)
---
## Sensors step-by-step
@ -109,11 +110,12 @@ Take a shot of the world from their point of view. For cameras that return [carl
|Sensor |Output | Overview |
| ----------------- | ---------- | ------------------ |
| Depth | [carla.Image](<../python_api#carlaimage>) |Renders the depth of the elements in the field of view in a gray-scale map. |
| RGB | [carla.Image](<../python_api#carlaimage>) | Provides clear vision of the surroundings. Looks like a normal photo of the scene. |
| Optical Flow | [carla.Image](<../python_api#carlaimage>) | Renders the motion of every pixel from the camera. |
| Semantic segmentation | [carla.Image](<../python_api#carlaimage>) | Renders elements in the field of view with a specific color according to their tags. |
| DVS | [carla.DVSEventArray](<../python_api#carladvseventarray>) | Measures changes of brightness intensity asynchronously as an event stream. |
| [Depth](ref_sensors.md#depth-camera) | [carla.Image](<../python_api#carlaimage>) |Renders the depth of the elements in the field of view in a gray-scale map. |
| [RGB](ref_sensors.md#rgb-camera) | [carla.Image](<../python_api#carlaimage>) | Provides clear vision of the surroundings. Looks like a normal photo of the scene. |
| [Optical Flow](ref_sensors.md#optical-flow-camera) | [carla.Image](<../python_api#carlaimage>) | Renders the motion of every pixel from the camera. |
| [Semantic segmentation](ref_sensors.md#semantic-segmentation-camera) | [carla.Image](<../python_api#carlaimage>) | Renders elements in the field of view with a specific color according to their tags. |
| [Instance segmentation](ref_sensors.md#instance-segmentation-camera) | [carla.Image](<../python_api#carlaimage>) | Renders elements in the field of view with a specific color according to their tags and a unique object ID. |
| [DVS](ref_sensors.md#dvs-camera) | [carla.DVSEventArray](<../python_api#carladvseventarray>) | Measures changes of brightness intensity asynchronously as an event stream. |
<br>
@ -128,9 +130,9 @@ Retrieve data when the object they are attached to registers a specific event.
| Sensor | Output | Overview |
| ------------- | ------------- | ------------- |
| Collision | [carla.CollisionEvent](<../python_api#carlacollisionevent>) | Retrieves collisions between its parent and other actors. |
| Lane invasion | [carla.LaneInvasionEvent](<../python_api#carlalaneinvasionevent>) | Registers when its parent crosses a lane marking. |
| Obstacle | [carla.ObstacleDetectionEvent](<../python_api#carlaobstacledetectionevent>) | Detects possible obstacles ahead of its parent. |
| [Collision](ref_sensors.md#collision-detector) | [carla.CollisionEvent](<../python_api#carlacollisionevent>) | Retrieves collisions between its parent and other actors. |
| [Lane invasion](ref_sensors.md#lane-invasion-detector) | [carla.LaneInvasionEvent](<../python_api#carlalaneinvasionevent>) | Registers when its parent crosses a lane marking. |
| [Obstacle](ref_sensors.md#obstacle-detector) | [carla.ObstacleDetectionEvent](<../python_api#carlaobstacledetectionevent>) | Detects possible obstacles ahead of its parent. |
<br>
@ -144,12 +146,12 @@ Different functionalities such as navigation, measurement of physical properties
| Sensor | Output | Overview |
| ------------- | ------------- | ------------- |
| GNSS | [carla.GNSSMeasurement](<../python_api#carlagnssmeasurement>) | Retrieves the geolocation of the sensor. |
| IMU | [carla.IMUMeasurement](<../python_api#carlaimumeasurement>) | Comprises an accelerometer, a gyroscope, and a compass. |
| LIDAR | [carla.LidarMeasurement](<../python_api#carlalidarmeasurement>) | A rotating LIDAR. Generates a 4D point cloud with coordinates and intensity per point to model the surroundings. |
| Radar | [carla.RadarMeasurement](<../python_api#carlaradarmeasurement>) | 2D point map modelling elements in sight and their movement regarding the sensor. |
| RSS | [carla.RssResponse](<../python_api#carlarssresponse>) | Modifies the controller applied to a vehicle according to safety checks. This sensor works in a different manner than the rest, and there is specific [RSS documentation](<../adv_rss>) for it. |
| Semantic LIDAR | [carla.SemanticLidarMeasurement](<../python_api#carlasemanticlidarmeasurement>) | A rotating LIDAR. Generates a 3D point cloud with extra information regarding instance and semantic segmentation. |
| [GNSS](ref_sensors.md#gnss-sensor) | [carla.GNSSMeasurement](<../python_api#carlagnssmeasurement>) | Retrieves the geolocation of the sensor. |
| [IMU](ref_sensors.md#imu-sensor) | [carla.IMUMeasurement](<../python_api#carlaimumeasurement>) | Comprises an accelerometer, a gyroscope, and a compass. |
| [LIDAR](ref_sensors.md#lidar-sensor) | [carla.LidarMeasurement](<../python_api#carlalidarmeasurement>) | A rotating LIDAR. Generates a 4D point cloud with coordinates and intensity per point to model the surroundings. |
| [Radar](ref_sensors.md#radar-sensor) | [carla.RadarMeasurement](<../python_api#carlaradarmeasurement>) | 2D point map modelling elements in sight and their movement regarding the sensor. |
| [RSS](ref_sensors.md#rss-sensor) | [carla.RssResponse](<../python_api#carlarssresponse>) | Modifies the controller applied to a vehicle according to safety checks. This sensor works in a different manner than the rest, and there is specific [RSS documentation](<../adv_rss>) for it. |
| [Semantic LIDAR](ref_sensors.md#semantic-lidar-sensor) | [carla.SemanticLidarMeasurement](<../python_api#carlasemanticlidarmeasurement>) | A rotating LIDAR. Generates a 3D point cloud with extra information regarding instance and semantic segmentation. |
<br>

View File

@ -0,0 +1,26 @@
# Custom assets
CARLA has a wealth of assets available out of the box, including full towns and cities with road networks, buildings and infrastructure, vehicles and pedestrians to populate your simulations. However, for many applications you may want to add your own assets, and CARLA is fully capable of loading new assets created entirely by the user for maximum extensibility.
The following documentation details numerous techniques for creating your own assets and adding them to CARLA.
- [__Adding props__](tuto_A_add_props.md)
- [__Adding vehicles__](tuto_A_add_vehicle.md)
- [__Packaging assets__](tuto_A_create_standalone.md)
- [__Material customisation__](tuto_A_material_customization.md)
## Adding props
Props are the assets populating the scene, other than the roads and vehicles. That includes streetlights, buildings, trees, and much more. The simulator can ingest new props anytime in a simple process. This is really useful for creating customized environments in a map. [__This document__](tuto_A_add_props.md) demonstrates how to create and include custom props.
## Adding vehicles
Vehicles are the bread and butter of CARLA. They serve to simulate other road users and act as a virtual emulation of the vehicle that an autonomous agent is built to control. CARLA has a large, growing library of vehicles out of the box, but for specialised applications, CARLA is capable of loading custom designed vehicles. [__This document__](tuto_A_add_vehicle.md) details how to create and import custom vehicles.
## Packaging assets
It is a common practice in CARLA to manage assets with standalone packages. Keeping them separate reduces the size of the build. These asset packages can easily be imported into a CARLA package at any time. They are also really useful for distributing assets in an organized way. [__This document__](tuto_A_create_standalone.md) demonstrates how to package assets for use in CARLA.
## Custom materials
The CARLA team prepares every asset to run under certain default settings. However, users that work in a build from source can modify these to best suit their needs. [__This document__](tuto_A_material_customization.md) demonstrates how to achieve this.

View File

@ -0,0 +1,39 @@
# Development
CARLA is open source and designed to be highly extensible. This allows users to create custom functionality or content to suit specialized applications or specific needs. The following tutorials detail how to achieve specific development aims with the CARLA codebase:
- [__Make release__](tuto_D_make_release.md)
- [__Upgrading content__](tuto_D_contribute_assets.md)
- [__Create semantic tags__](tuto_D_create_semantic_tags.md)
- [__Create new sensor__](tuto_D_create_sensor.md)
- [__Performance benchmarking__](adv_benchmarking.md)
- [__Recorder file format__](ref_recorder_binary_file_format.md)
- [__Collision boundaries__](tuto_D_generate_colliders.md)
## Make a release
If you want to develop your own fork of CARLA and publish releases of your code, follow [__this guide__](tuto_D_make_release.md).
## Upgrading content
Our content resides on a separate Git LFS repository. As part of our build system, we generate and upload a package containing the latest version of this content tagged with the current date and commit. Regularly, we upgrade the CARLA repository with a link to the latest version of the content package. Please follow [__these instructions__](tuto_D_contribute_assets.md) to upgrade content.
## Create semantic tags
CARLA has a set of semantic tags already defined that are suitable for most use cases. However, if you need additional classes, you can add them as detailed in [__this guide__](tuto_D_create_semantic_tags.md).
## Creating a new sensor
You can modify CARLA's C++ code to create new sensors for your custom use cases. Please find the details [__here__](tuto_D_create_sensor.md).
## Benchmarking performance
CARLA has a benchmarking script to help with benchmarking performance on your system. Find the full details [__here__](adv_benchmarking.md).
## Recorder binary file format
Details on the binary file format for the recorder can be found [__here__](ref_recorder_binary_file_format.md).
## Generating collision boundaries
Details on generating more accurate collision boundaries for vehicles can be found in [__this guide__](tuto_D_generate_colliders.md).

Docs/ext_docs.md Normal file
View File

@ -0,0 +1,43 @@
# Extended documentation
Below, you will find in-depth documentation on the many extensive features of CARLA.
## Advanced concepts
[__Recorder__](adv_recorder.md) — Register the events in a simulation and play it again.
[__Rendering options__](adv_rendering_options.md) — From quality settings to no-render or off-screen modes.
[__Synchrony and time-step__](adv_synchrony_timestep.md) — Client-server communication and simulation time.
[__Benchmarking Performance__](adv_benchmarking.md) — Perform benchmarking using our prepared script.
[__CARLA Agents__](adv_agents.md) — Agent scripts allow a single vehicle to roam the map or drive to a set destination.
## Traffic Simulation
[__Traffic Simulation Overview__](ts_traffic_simulation_overview.md) — An overview of the different options available to populate your scenes with traffic.
[__Traffic Manager__](adv_traffic_manager.md) — Simulate urban traffic by setting vehicles to autopilot mode.
## References
[__Recorder binary file format__](ref_recorder_binary_file_format.md) — Detailed explanation of the recorder file format.
[__Sensors reference__](ref_sensors.md) — Everything about sensors and the data they retrieve.
## Custom Maps
[__Overview of custom maps in CARLA__](tuto_M_custom_map_overview.md) — An overview of the process and options involved in adding a custom, standard-sized map.
[__Create a map in RoadRunner__](tuto_M_generate_map.md) — How to generate a custom, standard-sized map in RoadRunner.
[__Import map in CARLA package__](tuto_M_add_map_package.md) — How to import a map in a CARLA package.
[__Import map in CARLA source build__](tuto_M_add_map_source.md) — How to import a map in CARLA built from source.
[__Alternative ways to import maps__](tuto_M_add_map_alternative.md) — Alternative methods to import maps.
[__Manually prepare map package__](tuto_M_manual_map_package.md) — How to prepare a map for manual import.
[__Customizing maps: Layered maps__](tuto_M_custom_layers.md) — How to create sub-layers in your custom map.
[__Customizing maps: Traffic lights and signs__](tuto_M_custom_add_tl.md) — How to add traffic lights and signs to your custom map.
[__Customizing maps: Road painter__](tuto_M_custom_road_painter.md) — How to use the road painter tool to change the appearance of the road.
[__Customizing maps: Procedural buildings__](tuto_M_custom_buildings.md) — Populate your custom map with buildings.
[__Customizing maps: Weather and landscape__](tuto_M_custom_weather_landscape.md) — Create the weather profile for your custom map and populate the landscape.
[__Generate pedestrian navigation__](tuto_M_generate_pedestrian_navigation.md) — Obtain the information needed for walkers to move around.
## Large Maps
[__Large maps overview__](large_map_overview.md) — An explanation of how large maps work in CARLA.
[__Create a Large Map in RoadRunner__](large_map_roadrunner.md) — How to create a large map in RoadRunner.
[__Import/Package a Large Map__](large_map_import.md) — How to import a large map.

Docs/foundations.md Normal file
View File

@ -0,0 +1,226 @@
# Foundations
This page introduces the fundamental concepts required to understand how the CARLA server and client operate and communicate through the API. CARLA operates using a server-client architecture, whereby the CARLA server runs the simulation and instructions are sent to it by the client(s). The client code communicates with the server using the [__API__](python_api.md). To use the Python API you must install the module through pip:
```sh
pip install carla-simulator # Python 2
pip3 install carla-simulator # Python 3
```
Also make sure to import the CARLA package in your Python scripts:
```py
import carla
```
- [__World and client__](#world-and-client)
- [Client](#client)
- [World](#world)
- [__Synchronous and asynchronous mode__](#synchronous-and-asynchronous-mode)
- [Setting synchronous mode](#setting-synchronous-mode)
- [Using synchronous mode](#using-synchronous-mode)
- [__Recorder__](#recorder)
- [Recording](#recording)
- [Simulation playback](#simulation-playback)
- [Recorder file format](#recorder-file-format)
---
## World and client
### Client
__The client__ is the module the user runs to ask for information or changes in the simulation. A client runs with an IP and a specific port. It communicates with the server via terminal. There can be many clients running at the same time. Advanced multiclient managing requires thorough understanding of CARLA and [synchrony](adv_synchrony_timestep.md).
Set up the client using the CARLA client object:
```py
client = carla.Client('localhost', 2000)
```
This sets up the client to communicate with a CARLA server running on `localhost`, the local machine. Alternatively, the IP address of a network machine can be used if running the client on a separate machine. The second argument is the port number. By default, the CARLA server runs on port 2000; you can alter this in the settings when you launch CARLA if necessary.
The client object can be used for a number of functions including loading new maps, recording the simulation and initialising the traffic manager:
```py
client.load_world('Town07')
client.start_recorder('recording.log')
```
### World
__The world__ is an object representing the simulation. It acts as an abstract layer containing the main methods to spawn actors, change the weather, get the current state of the world, etc. There is only one world per simulation. It will be destroyed and replaced by a new one when the map is changed.
The world object is retrieved using the client object:
```py
world = client.get_world()
```
The world object can be used to access objects within the simulation, such as weather, vehicles, traffic lights, buildings and the map using its many methods:
```py
level = world.get_map()
weather = world.get_weather()
blueprint_library = world.get_blueprint_library()
```
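The weather can be changed as well as read. A minimal sketch, assuming the `world` object retrieved above and one of the preset weather profiles:

```py
# Store the current weather so it can be restored later.
original_weather = world.get_weather()

# Apply a preset weather profile.
world.set_weather(carla.WeatherParameters.WetCloudyNoon)

# ... run the simulation under the new conditions ...

# Restore the original conditions.
world.set_weather(original_weather)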
## Synchronous and asynchronous mode
CARLA has a client-server architecture. The server runs the simulation. The client retrieves information and requests changes in the simulation. This section deals with communication between client and server.
By default, CARLA runs in __asynchronous mode__.
Essentially, in __asynchronous mode__ the CARLA server runs as fast as it can. Client requests are handled on the fly. In __synchronous mode__ the client, running your Python code, takes the reins and tells the server when to update.
__Asynchronous mode__ is an appropriate mode to run CARLA if you are experimenting or setting up a simulation, so you can fly around the map with the spectator as you place your actors. When you want to start producing training data or deploying an agent within the simulation, it is advised that you use the __synchronous mode__ since this will give you more control and predictability.
Read more about [__synchronous and asynchronous modes__](adv_synchrony_timestep.md).
!!! Note
In a multiclient architecture, only one client should tick. The server reacts to every tick received as if it came from the same client. Multiple client ticks will create inconsistencies between the server and the clients.
### Setting synchronous mode
Changing between synchronous and asynchronous mode is just a matter of a boolean state.
```py
settings = world.get_settings()
settings.synchronous_mode = True # Enables synchronous mode
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)
```
!!! Warning
If synchronous mode is enabled, and there is a Traffic Manager running, this must be set to sync mode too. Read [this](adv_traffic_manager.md#synchronous-mode) to learn how to do it.
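As a minimal sketch of this, assuming a Traffic Manager running on its default port 8000:

```py
# Put the Traffic Manager in synchronous mode alongside the world settings above.
traffic_manager = client.get_trafficmanager(8000)
traffic_manager.set_synchronous_mode(True)
```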
To disable synchronous mode just set the variable to `False` or use the script `PythonAPI/util/config.py`.
```sh
cd PythonAPI/util && python3 config.py --no-sync # Disables synchronous mode
```
Synchronous mode cannot be enabled using the script, only disabled. Enabling the synchronous mode makes the server wait for a client tick. Using this script, the user cannot send ticks when desired.
### Using synchronous mode
Synchronous mode becomes especially relevant with slow client applications, and when synchrony between different elements, such as sensors, is needed. If the client is too slow and the server does not wait, there will be an overflow of information that the client cannot manage, and data will be lost or mixed. Similarly, with many sensors and asynchrony, it would be impossible to know whether all the sensors are using data from the same moment in the simulation.
The following fragment of code extends the previous one. The client creates a camera sensor, stores the image data of the current step in a queue, and ticks the server after retrieving it from the queue. A more complex example regarding several sensors can be found [here][syncmodelink].
```py
import queue  # thread-safe queue to hand sensor data back to the main loop

settings = world.get_settings()
settings.synchronous_mode = True
world.apply_settings(settings)

# Spawn a camera and push every received image into the queue.
camera = world.spawn_actor(blueprint, transform)
image_queue = queue.Queue()
camera.listen(image_queue.put)

while True:
    world.tick()
    image = image_queue.get()
```
[syncmodelink]: https://github.com/carla-simulator/carla/blob/master/PythonAPI/examples/synchronous_mode.py
!!! Important
Data coming from GPU-based sensors, mostly cameras, is usually generated with a delay of a couple of frames. Synchrony is essential here.
The world has asynchrony methods to make the client wait for a server tick, or do something when it is received.
```py
# Wait for the next tick and retrieve the snapshot of the tick.
world_snapshot = world.wait_for_tick()
# Register a callback to get called every time we receive a new snapshot.
world.on_tick(lambda world_snapshot: do_something(world_snapshot))
```
## Recorder
The recorder enables all data required to reproduce a previous simulation to be saved into a file. The data includes details like the position and speed of vehicles, the state of traffic lights, the position and speed of pedestrians and the position of the sun and weather conditions. The data gets recorded into a binary file that can be loaded at a later time by the CARLA server to exactly reproduce the simulation.
Actors are updated on every frame according to the data contained in the recorded file. Actors in the current simulation that appear in the recording will be either moved or re-spawned to emulate it. Those that do not appear in the recording will continue their way as if nothing happened.
!!! Important
By the end of the playback, vehicles will be set to autopilot, but __pedestrians will stop__.
The recorder file includes information regarding many different elements.
* __Actors__ — creation and destruction, bounding and trigger boxes.
* __Traffic lights__ — state changes and time settings.
* __Vehicles__ — position and orientation, linear and angular velocity, light state, and physics control.
* __Pedestrians__ — position and orientation, and linear and angular velocity.
* __Lights__ — Light states from buildings, streets, and vehicles.
### Recording
To start recording, only a file name is needed. Using `\`, `/` or `:` characters in the file name will define it as an absolute path. If no path is detailed, the file will be saved in `CarlaUE4/Saved`.
```py
client.start_recorder("/home/carla/recording01.log")
```
By default, the recorder is set to store only the necessary information to play the simulation back. In order to save all the information previously mentioned, the argument `additional_data` has to be configured when starting the recording.
```py
client.start_recorder("/home/carla/recording01.log", True)
```
!!! Note
Additional data includes: linear and angular velocity of vehicles and pedestrians, traffic light time settings, execution time, actors' trigger and bounding boxes, and physics controls for vehicles.
To stop the recording, the call is also straightforward.
```py
client.stop_recorder()
```
!!! Note
As an estimate, a 1-hour recording with 50 traffic lights and 100 vehicles takes around 200MB.
### Simulation playback
A playback can be started at any point during a simulation. Besides the path to the log file, this method needs some parameters.
```py
client.replay_file("recording01.log", start, duration, camera)
```
| Parameter | Description | Notes |
| ------------- | ------------- | ------------- |
| `start` | Recording time in seconds to start the simulation at. | If positive, time will be considered from the beginning of the recording. <br> If negative, it will be considered from the end. |
| `duration` | Seconds to playback. 0 is all the recording. | By the end of the playback, vehicles will be set to autopilot and pedestrians will stop. |
| `camera` | ID of the actor that the camera will focus on. | Set it to `0` to let the spectator move freely. |
<br>
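For example, a minimal sketch that replays the recording saved above from 10 seconds in to the end, leaving the spectator free to move:

```py
# Start 10 s into the recording, play to the end (duration 0),
# and pass camera ID 0 so the spectator moves freely.
client.replay_file("recording01.log", 10.0, 0.0, 0)
```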
### Recorder file format
The recorder saves all of the data in a custom binary file format specified in [__this document__](ref_recorder_binary_file_format.md).
---
## Rendering
CARLA offers a number of options regarding rendering quality and efficiency. At the most basic level, CARLA offers two quality options to enable operation on both high- and low-spec hardware with the best results:
### Epic mode
`./CarlaUE4.sh -quality-level=Epic`
![Epic mode screenshot](img/rendering_quality_epic.jpg)
*Epic mode screenshot*
### Low mode
`./CarlaUE4.sh -quality-level=Low`
![Low mode screenshot](img/rendering_quality_low.jpg)
*Low mode screenshot*
CARLA also offers options to suspend rendering or render offscreen, to enable simulations to be recorded or run more efficiently.
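As a minimal sketch, rendering can be suspended through the world settings used earlier, assuming a connected `client` and `world` as above:

```py
settings = world.get_settings()
settings.no_rendering_mode = True  # suspend rendering to save resources
world.apply_settings(settings)
```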
More details on rendering options can be found [__here__](adv_rendering_options.md).

BIN — New image files added, including Docs/img/base_nw.png, Docs/img/nwheel_config.png and Docs/img/nwheels.png (binary files not shown).

View File

@ -0,0 +1,67 @@
crl_root, crl_hips__C
crl_hips__C, crl_spine__C
crl_hips__C, crl_thigh__R
crl_hips__C, crl_thigh__L
crl_spine__C, crl_spine01__C
crl_spine01__C, crl_shoulder__L
crl_spine01__C, crl_neck__C
crl_spine01__C, crl_shoulder__R
crl_shoulder__L, crl_arm__L
crl_arm__L, crl_foreArm__L
crl_foreArm__L, crl_hand__L
crl_hand__L, crl_handThumb__L
crl_hand__L, crl_handIndex__L
crl_hand__L, crl_handMiddle__L
crl_hand__L, crl_handRing__L
crl_hand__L, crl_handPinky__L
crl_handThumb__L, crl_handThumb01__L
crl_handThumb01__L, crl_handThumb02__L
crl_handThumb02__L, crl_handThumbEnd__L
crl_handIndex__L, crl_handIndex01__L
crl_handIndex01__L, crl_handIndex02__L
crl_handIndex02__L, crl_handIndexEnd__L
crl_handMiddle__L, crl_handMiddle01__L
crl_handMiddle01__L, crl_handMiddle02__L
crl_handMiddle02__L, crl_handMiddleEnd__L
crl_handRing__L, crl_handRing01__L
crl_handRing01__L, crl_handRing02__L
crl_handRing02__L, crl_handRingEnd__L
crl_handPinky__L, crl_handPinky01__L
crl_handPinky01__L, crl_handPinky02__L
crl_handPinky02__L, crl_handPinkyEnd__L
crl_neck__C, crl_Head__C
crl_Head__C, crl_eye__L
crl_Head__C, crl_eye__R
crl_Head__C, crl_eye__L_A
crl_Head__C, crl_eye__R_A
crl_shoulder__R, crl_arm__R
crl_arm__R, crl_foreArm__R
crl_foreArm__R, crl_hand__R
crl_hand__R, crl_handThumb__R
crl_hand__R, crl_handIndex__R
crl_hand__R, crl_handMiddle__R
crl_hand__R, crl_handRing__R
crl_hand__R, crl_handPinky__R
crl_handThumb__R, crl_handThumb01__R
crl_handThumb01__R, crl_handThumb02__R
crl_handThumb02__R, crl_handThumbEnd__R
crl_handIndex__R, crl_handIndex01__R
crl_handIndex01__R, crl_handIndex02__R
crl_handIndex02__R, crl_handIndexEnd__R
crl_handMiddle__R, crl_handMiddle01__R
crl_handMiddle01__R, crl_handMiddle02__R
crl_handMiddle02__R, crl_handMiddleEnd__R
crl_handRing__R, crl_handRing01__R
crl_handRing01__R, crl_handRing02__R
crl_handRing02__R, crl_handRingEnd__R
crl_handPinky__R, crl_handPinky01__R
crl_handPinky01__R, crl_handPinky02__R
crl_handPinky02__R, crl_handPinkyEnd__R
crl_thigh__R, crl_leg__R
crl_leg__R, crl_foot__R
crl_foot__R, crl_toe__R
crl_toe__R, crl_toeEnd__R
crl_thigh__L, crl_leg__L
crl_leg__L, crl_foot__L
crl_foot__L, crl_toe__L
crl_toe__L, crl_toeEnd__L


View File

@ -4,8 +4,8 @@ Welcome to the CARLA documentation.
This home page contains an index with a brief description of the different sections in the documentation. Feel free to read in whatever order preferred. In any case, here are a few suggestions for newcomers.
* __Install CARLA.__ Either follow the [Quick start installation](start_quickstart.md) to get a CARLA release or [make the build](build_linux.md) for a desired platform.
* __Start using CARLA.__ The section titled [First steps](core_concepts.md) is an introduction to the most important concepts.
* __Install CARLA.__ Either follow the [Quick start installation](start_quickstart.md) to get a CARLA release or [make the build](build_carla.md) for a desired platform.
* __Start using CARLA.__ The section titled [Foundations](foundations.md) is an introduction to the most important concepts and the [first steps tutorial](tuto_first_steps.md) shows you how to get started.
* __Check the API.__ There is a handy [Python API reference](python_api.md) to look up the classes and methods available.
The CARLA forum is available to post any doubts or suggestions that may arise while reading.
@ -24,106 +24,48 @@ CARLA forum</a>
## Getting started
[__Introduction__](start_introduction.md) — What to expect from CARLA.
[__Quick start package installation__](start_quickstart.md) — Get the CARLA releases.
[__First steps__](tuto_G_getting_started.md) — Taking the first steps in CARLA.
[__Building CARLA__](build_carla.md) — How to build CARLA from source.
## CARLA components
[__Foundations__](core_concepts.md) — Overview of the fundamental building blocks of CARLA.
[__Actors__](core_actors.md) — Learn about actors and how to handle them.
[__Maps__](core_map.md) — Discover the different maps and how vehicles move around them.
[__Sensors and data__](core_sensors.md) — Retrieve simulation data using sensors.
[__Traffic__](ts_traffic_simulation_overview.md) — An overview of the different options available to populate your scenes with traffic.
[__3rd party integrations__](3rd_party_integrations.md) — Integrations with 3rd party applications and libraries.
[__Development__](development_tutorials.md) — Information on how to develop custom features for CARLA.
[__Custom assets__](custom_assets_tutorials.md) — Information on how to develop custom assets.
## Building CARLA
## Resources
[__Blueprint library__](bp_library.md) — Blueprints provided to spawn actors.
[__Python API__](python_api.md) — Classes and methods in the Python API.
[__C++ reference__](ref_cpp.md) — Classes and methods in CARLA C++.
[__Linux build__](build_linux.md) — Make the build on Linux.
[__Windows build__](build_windows.md) — Make the build on Windows.
[__Update CARLA__](build_update.md) — Get up to date with the latest content.
[__Build system__](build_system.md) — Learn about the build and how it is made.
[__CARLA in Docker__](build_docker.md) — Run CARLA using a container solution.
[__F.A.Q.__](build_faq.md) — Some of the most frequent installation issues.
## CARLA ecosystem
## First steps
[__Core concepts__](core_concepts.md) — Overview of the basic concepts in CARLA.
[__1st. World and client__](core_world.md) — Manage and access the simulation.
[__2nd. Actors and blueprints__](core_actors.md) — Learn about actors and how to handle them.
[__3rd. Maps and navigation__](core_map.md) — Discover the different maps and how vehicles move around them.
[__4th. Sensors and data__](core_sensors.md) — Retrieve simulation data using sensors.
## Advanced concepts
[__OpenDRIVE standalone mode__](adv_opendrive.md) — Use any OpenDRIVE file as a CARLA map.
[__PTV-Vissim co-simulation__](adv_ptv.md) — Run a synchronous simulation between CARLA and PTV-Vissim.
[__Recorder__](adv_recorder.md) — Register the events in a simulation and play it again.
[__Rendering options__](adv_rendering_options.md) — From quality settings to no-render or off-screen modes.
[__RSS__](adv_rss.md) — An implementation of RSS in the CARLA client library.
[__Synchrony and time-step__](adv_synchrony_timestep.md) — Client-server communication and simulation time.
[__Benchmarking Performance__](adv_benchmarking.md) — Perform benchmarking using our prepared script.
[__CARLA Agents__](adv_agents.md) — Agents scripts allow single vehicles to roam the map or drive to a set destination.
## Traffic Simulation
[__Traffic Simulation Overview__](ts_traffic_simulation_overview.md) — An overview of the different options available to populate your scenes with traffic.
[__Traffic Manager__](adv_traffic_manager.md) — Simulate urban traffic by setting vehicles to autopilot mode.
[__SUMO co-simulation__](adv_sumo.md) — Run a synchronous simulation between CARLA and SUMO.
[__Scenic__](tuto_G_scenic.md) — Follow an example of defining different scenarios using the Scenic library.
## References
[__Python API reference__](python_api.md) — Classes and methods in the Python API.
[__Blueprint library__](bp_library.md) — Blueprints provided to spawn actors.
[__C++ reference__](ref_cpp.md) — Classes and methods in CARLA C++.
[__Recorder binary file format__](ref_recorder_binary_file_format.md) — Detailed explanation of the recorder file format.
[__Sensors reference__](ref_sensors.md) — Everything about sensors and the data they retrieve.
## Plugins
[__carlaviz — web visualizer__](plugins_carlaviz.md) — Plugin that listens to the simulation and shows the scene and some simulation data in a web browser.
## ROS bridge
[__ROS bridge documentation__](ros_documentation.md) — Brief overview of the ROS bridge and a link to the full documentation.
[__MathWorks__](large_map_roadrunner.md) — Overview of creating a map in RoadRunner.
[__SUMO__](adv_sumo.md) — Details of the co-simulation feature with SUMO.
[__Scenic__](tuto_G_scenic.md) — How to use Scenic with CARLA to generate scenarios.
[__Chrono__](tuto_G_chrono.md) — Details of the Chrono physics simulation integration with CARLA.
[__OpenDRIVE__](adv_opendrive.md) — Details of the OpenDRIVE support in CARLA.
[__PTV-Vissim__](adv_ptv.md) — Details of the co-simulation feature with PTV-Vissim.
[__RSS__](adv_rss.md) — Details of the Responsibility Sensitive Safety library integration with CARLA.
[__AWS__](tuto_G_rllib_integration.md) — Details of using RLlib to run CARLA as a distributed application on Amazon Web Services.
[__ANSYS__](ecosys_ansys.md) — Brief overview of how the Ansys Real Time Radar Model was integrated into CARLA.
[__carlaviz — web visualizer__](plugins_carlaviz.md) — Plugin that listens to the simulation and shows the scene and some simulation data in a web browser.
## Custom Maps
## Contributing to CARLA
[__Guidelines__](cont_contribution_guidelines.md) — Guidelines on contributing to the development of the CARLA simulator and its ecosystem.
[__Coding standards__](cont_coding_standard.md) — Details on the best coding practices when contributing to CARLA development.
[__Documentation standard__](cont_doc_standard.md) — Details on the documentation standards for CARLA docs.
[__Overview of custom maps in CARLA__](tuto_M_custom_map_overview.md) — An overview of the process and options involved in adding a custom, standard sized map.
[__Create a map in RoadRunner__](tuto_M_generate_map.md) — How to generate a custom, standard sized map in RoadRunner.
[__Import map in CARLA package__](tuto_M_add_map_package.md) — How to import a map in a CARLA package.
[__Import map in CARLA source build__](tuto_M_add_map_source.md) — How to import a map in CARLA built from source.
[__Alternative ways to import maps__](tuto_M_add_map_alternative.md) — Alternative methods to import maps.
[__Manually prepare map package__](tuto_M_manual_map_package.md) — How to prepare a map for manual import.
[__Customizing maps: Layered maps__](tuto_M_custom_layers.md) — How to create sub-layers in your custom map.
[__Customizing maps: Traffic lights and signs__](tuto_M_custom_add_tl.md) — How to add traffic lights and signs to your custom map.
[__Customizing maps: Road painter__](tuto_M_custom_road_painter.md) — How to use the road painter tool to change the appearance of the road.
[__Customizing Maps: Procedural Buildings__](tuto_M_custom_buildings.md) — Populate your custom map with buildings.
[__Customizing maps: Weather and landscape__](tuto_M_custom_weather_landscape.md) — Create the weather profile for your custom map and populate the landscape.
[__Generate pedestrian navigation__](tuto_M_generate_pedestrian_navigation.md) — Obtain the information needed for walkers to move around.
## Tutorials
## Large Maps
There are numerous tutorials covering CARLA features with code and guidelines for varied use cases. Please check the [tutorials page](tutorials.md) for help with your work.
[__Large maps overview__](large_map_overview.md) — An explanation of how large maps work in CARLA.
[__Create a Large Map in RoadRunner__](large_map_roadrunner.md) — How to create a large map in RoadRunner.
[__Import/Package a Large Map__](large_map_import.md) — How to import a large map.
## Tutorials — General
[__Add friction triggers__](tuto_G_add_friction_triggers.md) — Define dynamic box triggers for wheels.
[__Control vehicle physics__](tuto_G_control_vehicle_physics.md) — Set runtime changes on a vehicle physics.
[__Control walker skeletons__](tuto_G_control_walker_skeletons.md) — Animate walkers using skeletons.
[__Generate maps with OpenStreetMap__](tuto_G_openstreetmap.md) — Use OpenStreetMap to generate maps for use in simulations.
[__Retrieve simulation data__](tuto_G_retrieve_data.md) — A step by step guide to properly gather data using the recorder.
[__CarSim Integration__](tuto_G_carsim_integration.md) — Tutorial on how to run a simulation using the CarSim vehicle dynamics engine.
[__RLlib Integration__](tuto_G_rllib_integration.md) — Find out how to run your own experiment using the RLlib library.
[__Chrono Integration__](tuto_G_chrono.md) — Use the Chrono integration to simulate physics.
[__Build Unreal Engine and CARLA in Docker__](build_docker_unreal.md) — How to build Unreal Engine and CARLA in Docker.
## Extended documentation
## Tutorials — Assets
[__Add a new vehicle__](tuto_A_add_vehicle.md) — Prepare a vehicle to be used in CARLA.
[__Add new props__](tuto_A_add_props.md) — Import additional props into CARLA.
[__Create standalone packages__](tuto_A_create_standalone.md) — Generate and handle standalone packages for assets.
[__Material customization__](tuto_A_material_customization.md) — Edit vehicle and building materials.
## Tutorials — Developers
[__How to upgrade content__](tuto_D_contribute_assets.md) — Add new content to CARLA.
[__Create a sensor__](tuto_D_create_sensor.md) — Develop a new sensor to be used in CARLA.
[__Create semantic tags__](tuto_D_create_semantic_tags.md) — Define customized tags for semantic segmentation.
[__Customize vehicle suspension__](tuto_D_customize_vehicle_suspension.md) — Modify the suspension system of a vehicle.
[__Generate detailed colliders__](tuto_D_generate_colliders.md) — Create detailed colliders for vehicles.
[__Make a release__](tuto_D_make_release.md) — How to make a release of CARLA.
## CARLA Ecosystem
[__Ansys Real Time Radar Model__](ecosys_ansys.md) — Details about the Ansys RTR webinar.
## Contributing
[__Contribution guidelines__](cont_contribution_guidelines.md) — The different ways to contribute to CARLA.
[__Code of conduct__](cont_code_of_conduct.md) — Standard rights and duties for contributors.
[__Coding standard__](cont_coding_standard.md) — Guidelines to write proper code.
[__Documentation standard__](cont_doc_standard.md) — Guidelines to write proper documentation.
The pages above cover most of the core concepts and features of CARLA. There is additional documentation in the [extended documentation](ext_docs.md) section covering advanced features in more depth.

Docs/maps_tutorials.md Normal file
View File

@ -0,0 +1,11 @@
# Custom maps
In CARLA, a map includes the 3D model of a town and a definition of its road network. The road network is defined through the [__OpenDRIVE__](https://www.asam.net/standards/detail/opendrive/) standard. CARLA provides a diverse array of maps out of the box, ready to use for a multitude of applications. Users can also create their own maps and load them into CARLA. The following set of tutorials details the necessary steps for creating and loading custom maps into CARLA:
* [__Overview__](tuto_M_custom_map_overview.md)
* [__Road painting__](tuto_M_custom_road_painter.md)
* [__Custom buildings__](tuto_M_custom_buildings.md)
* [__Generate map__](tuto_M_generate_map.md)
* [__Add map package__](tuto_M_add_map_package.md)
* [__Add map source__](tuto_M_add_map_source.md)
* [__Alternative methods__](tuto_M_add_map_alternative.md)

View File

@ -1374,6 +1374,10 @@ Changes the color of each element in `lights` to the corresponding in `colors`.
- **Parameters:**
- `lights` (_list([carla.Light](#carla.Light))_) - List of lights to be changed.
- `colors` (_list([carla.Color](#carla.Color))_) - List of colors to be applied.
- <a name="carla.LightManager.set_day_night_cycle"></a>**<font color="#7fb800">set_day_night_cycle</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**active**</font>)
All scene lights have a day-night cycle, automatically turning on and off with the altitude of the sun. This interferes in cases where full control of the scene lights is required, so setting this to __False__ deactivates it. It can be reactivated by setting it to __True__.
- **Parameters:**
- `active` (_bool_) - (De)activation of the day-night cycle.
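A minimal usage sketch (assuming a running simulator and the default connection settings):
```py
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

# Deactivate the automatic day-night cycle for the scene lights
light_manager = world.get_lightmanager()
light_manager.set_day_night_cycle(False)
```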
- <a name="carla.LightManager.set_intensities"></a>**<font color="#7fb800">set_intensities</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**lights**</font>, <font color="#00a6ed">**intensities**</font>)
Changes the intensity of each element in `lights` to the corresponding in `intensities`.
- **Parameters:**
@ -4454,4 +4458,4 @@ for (let i = 0; i < buttons.length; i++) {
buttons[i].addEventListener("click",function(){ButtonAction(buttons[i].id);},true);
}
window.onresize = WindowResize;
</script>

View File

@ -12,6 +12,7 @@
- [__RSS sensor__](#rss-sensor)
- [__Semantic LIDAR sensor__](#semantic-lidar-sensor)
- [__Semantic segmentation camera__](#semantic-segmentation-camera)
- [__Instance segmentation camera__](#instance-segmentation-camera)
- [__DVS camera__](#dvs-camera)
- [__Optical Flow camera__](#optical-flow-camera)
@ -742,7 +743,19 @@ The following tags are currently available:
<br>
!!! Note
Read [this](tuto_D_create_semantic_tags.md) tutorial to create new semantic tags.
## Instance segmentation camera
* __Blueprint:__ sensor.camera.instance_segmentation
* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
This camera classifies every object in the field of view both by semantic class and by instance ID.
When the simulation starts, every element in the scene is created with a tag, and the same happens when an actor is spawned. Objects are classified by their relative file path in the project. For example, meshes stored in `Unreal/CarlaUE4/Content/Static/Pedestrians` are tagged as `Pedestrian`.
![ImageInstanceSegmentation](img/instance_segmentation.png)
The server provides an image with the tag information __encoded in the red channel__: a pixel with a red value of `x` belongs to an object with tag `x`. The green and blue values of the pixel define the object's unique ID. For example, a pixel with an 8 bit RGB value of [10, 20, 55] is a vehicle (semantic tag 10) with a unique instance ID `20-55`.
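A minimal sketch of spawning this camera and saving its output (the transform values are illustrative, and `vehicle` is assumed to be an actor spawned earlier):

```py
import carla

client = carla.Client('localhost', 2000)
world = client.get_world()

camera_bp = world.get_blueprint_library().find('sensor.camera.instance_segmentation')
camera_transform = carla.Transform(carla.Location(x=1.5, z=2.4))
# `vehicle` is assumed to be an already-spawned actor
camera = world.spawn_actor(camera_bp, camera_transform, attach_to=vehicle)
camera.listen(lambda image: image.save_to_disk('out/%06d.png' % image.frame))
```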
#### Basic camera attributes

View File

@ -1 +1,3 @@
mkdocs == 1.1
mkdocs == 1.2.3
jinja2==3.0.3

View File

@ -2,13 +2,32 @@
Traffic simulation is integral to the accurate and efficient training and testing of autonomous driving stacks. CARLA provides a number of different options to simulate traffic and specific traffic scenarios. This section is an overview of the options available to help decide which is the best fit for your use case.
- [__Scenario Runner and OpenScenario__](#scenario-runner-and-openscenario)
- [__Traffic Manager__](#traffic-manager)
- [__Scenario Runner and OpenScenario__](#scenario-runner-and-openscenario)
- [__Scenic__](#scenic)
- [__SUMO__](#sumo)
---
## Traffic Manager
[__Traffic Manager__](adv_traffic_manager.md) is a module within CARLA that controls certain vehicles in a simulation from the client side. Vehicles are registered to Traffic Manager via the [`carla.Vehicle.set_autopilot`](https://carla.readthedocs.io/en/latest/python_api/#carla.Vehicle.set_autopilot) method or [`command.SetAutopilot`](https://carla.readthedocs.io/en/latest/python_api/#commandsetautopilot) class. Control of each vehicle is managed through a cycle of [distinct stages](adv_traffic_manager.md#stages) which each run on a different thread.
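As a minimal registration sketch (assuming a connected `client` and a spawned `vehicle`):

```py
# Hand a spawned vehicle over to the Traffic Manager
traffic_manager = client.get_trafficmanager()  # the default TM port is 8000
vehicle.set_autopilot(True, traffic_manager.get_port())
```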
__Useful for:__
- Populating a simulation with realistic urban traffic conditions.
- [Customizing traffic behaviours](tuto_G_traffic_manager.md) to set specific learning circumstances.
- Developing phase-related functionalities and data structures while improving computational efficiency.
<div class="build-buttons">
<p>
<a href="https://carla.readthedocs.io/en/docs-preview/adv_traffic_manager/" target="_blank" class="btn btn-neutral" title="Go to Traffic Manager">
Go to Traffic Manager</a>
</p>
</div>
---
## Scenario Runner and OpenScenario
Scenario Runner provides [predefined traffic scenarios](https://carla-scenariorunner.readthedocs.io/en/latest/list_of_scenarios/) out of the box and also allows users to [define their own](https://carla-scenariorunner.readthedocs.io/en/latest/creating_new_scenario/) scenarios using either Python or the [OpenSCENARIO 1.0 standard](https://releases.asam.net/OpenSCENARIO/1.0.0/ASAM_OpenSCENARIO_BS-1-2_User-Guide_V1-0-0.html#_foreword).
@ -31,25 +50,6 @@ Go to Scenario Runner</a>
---
## Traffic Manager
Traffic Manager is a module within CARLA that controls certain vehicles in a simulation from the client side. Vehicles are registered to Traffic Manager via the [`carla.Vehicle.set_autopilot`](https://carla.readthedocs.io/en/latest/python_api/#carla.Vehicle.set_autopilot) method or [`command.SetAutopilot`](https://carla.readthedocs.io/en/latest/python_api/#commandsetautopilot) class. Control of each vehicle is managed through a cycle of [distinct stages](adv_traffic_manager.md#stages) which each run on a different thread.
__Useful for:__
- Populating a simulation with realistic urban traffic conditions.
- [Customizing traffic behaviours](adv_traffic_manager.md#general-considerations) to set specific learning circumstances.
- Developing phase-related functionalities and data structures while improving computational efficiency.
<div class="build-buttons">
<p>
<a href="https://carla.readthedocs.io/en/latest/adv_traffic_manager/" target="_blank" class="btn btn-neutral" title="Go to Traffic Manager">
Go to Traffic Manager</a>
</p>
</div>
---
## Scenic
[Scenic](https://scenic-lang.readthedocs.io) is a domain-specific probabilistic programming language for modeling the environments of cyber-physical systems like robots and autonomous cars. Scenic provides a [specialized domain](https://scenic-lang.readthedocs.io/en/latest/modules/scenic.simulators.carla.html) to facilitate execution of Scenic scripts on the CARLA simulator.

View File

@ -236,6 +236,45 @@ python3 manual_control.py --filter <model_name> # The make or model defined in s
!!! Note
Even if you used upper case characters in your make and model, they need to be converted to lower case when passed to the filter.
---
## Add an N wheeled vehicle
Adding an N wheeled vehicle follows the same import pipeline as that for 4 wheeled vehicles above, with a few steps that differ.
__5.__ __Configure the Animation Blueprint for an N wheeled vehicle__
Search for `BaseVehiclePawnNW` and press **_Select_**.
![n_wheel_base](../img/base_nw.png)
__6.__ __Prepare the vehicle and wheel blueprints__
Go to the folder of any native CARLA vehicles in Carla/Blueprints/Vehicles. From the Content Browser, copy the four wheel blueprints into the blueprint folder for your own vehicle. Rename the files to replace the old vehicle name with your own vehicle name.
Copy the four wheel blueprints and duplicate them for any additional wheels. In the case of a 6 wheeled vehicle, you will need 6 different wheels: FLW, FRW, MLW, MRW, RLW, RRW.
![n_wheel_bps](../img/nwheels.png)
__7.__ __Configure the wheel blueprints__
Follow section __7__ as above for the 4 wheeled vehicle. The key difference in the case of an N wheeled vehicle is deciding which wheels are affected by the handbrake and steering parameters. In some vehicles (for example, a long wheelbase truck) the front 2 pairs of wheels will steer, and one set may steer more than the other. The rearmost pairs may be affected by the handbrake; the specifics will depend upon the vehicle you are modelling.
__8.__ __Configure vehicle blueprint__
In the Details panel, search for `wheel`. You will find settings for each of the wheels. For each one, click on Wheel Class and search for the BP_<vehicle_name>_<wheel_name> file that corresponds to the correct wheel position.
Note that in the case of N wheeled vehicles, you need to set ALL the wheels. This is an example with a 6 wheeled vehicle:
![n_wheel_config](../img/nwheel_config.png)
Finally, an additional consideration is setting the differential. In the case of a 4 wheeled vehicle, there are different differential presets (Limited Slip, Open 4W etc.), but with N wheeled vehicles you need to choose which wheels to apply torque to. In this case, we have chosen for only the middle and rear wheels to receive torque, while the front wheels do not; you can specify other configurations. The numbering matches the image above this text (e.g. 0 is the Front Left Wheel, as specified above).
![n_wheel_mech](../img/nwheel_mech_setup.png)
All other parameters, such as engine, transmission, and steering curve, are the same as for 4 wheeled vehicles.
---
## Add a 2 wheeled vehicle

View File

@ -0,0 +1,548 @@
# Bounding boxes
A significant factor in the problem of enabling autonomous vehicles to understand their environments lies in estimating the position and orientation of objects surrounding the vehicle. For this purpose, it is necessary to infer the position of the object's bounding box.
Objects within the CARLA simulation all have a bounding box and the CARLA Python API provides functions to access the bounding box of each object. This tutorial shows how to access bounding boxes and then project them into the camera plane.
## Set up the simulator
Let's lay down the standard CARLA boilerplate code, set up the client and world objects, spawn a vehicle and attach a camera to it:
```py
import carla
import math
import random
import time
import queue
import numpy as np
import cv2
client = carla.Client('localhost', 2000)
world = client.get_world()
bp_lib = world.get_blueprint_library()
# Get the map spawn points
spawn_points = world.get_map().get_spawn_points()

# Spawn a vehicle
vehicle_bp = bp_lib.find('vehicle.lincoln.mkz_2020')
vehicle = world.try_spawn_actor(vehicle_bp, random.choice(spawn_points))

# Spawn a camera attached to the vehicle
camera_bp = bp_lib.find('sensor.camera.rgb')
camera_init_trans = carla.Transform(carla.Location(z=2))
camera = world.spawn_actor(camera_bp, camera_init_trans, attach_to=vehicle)
vehicle.set_autopilot(True)

# Set up the simulator in synchronous mode
settings = world.get_settings()
settings.synchronous_mode = True # Enables synchronous mode
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)

# Create a queue to store and retrieve the sensor data
image_queue = queue.Queue()
camera.listen(image_queue.put)
```
## Geometric transformations
We want to take 3D points from the simulation and project them into the 2D plane of the camera. Firstly, we need to construct the camera projection matrix:
```py
def build_projection_matrix(w, h, fov):
focal = w / (2.0 * np.tan(fov * np.pi / 360.0))
K = np.identity(3)
K[0, 0] = K[1, 1] = focal
K[0, 2] = w / 2.0
K[1, 2] = h / 2.0
return K
```
We want to use the camera projection matrix to project 3D points to 2D. The first step is to transform the 3D points from world coordinates into camera coordinates, using the inverse camera transform that can be retrieved with `camera.get_transform().get_inverse_matrix()`. Following this, we use the camera projection matrix to project the 3D points in camera coordinates onto the 2D camera plane:
```py
def get_image_point(loc, K, w2c):
# Calculate 2D projection of 3D coordinate
# Format the input coordinate (loc is a carla.Location object)
point = np.array([loc.x, loc.y, loc.z, 1])
# transform to camera coordinates
point_camera = np.dot(w2c, point)
# Now we must change from UE4's coordinate system to a "standard" one
# (x, y, z) -> (y, -z, x)
# and we also remove the fourth component
point_camera = [point_camera[1], -point_camera[2], point_camera[0]]
# now project 3D->2D using the camera matrix
point_img = np.dot(K, point_camera)
# normalize
point_img[0] /= point_img[2]
point_img[1] /= point_img[2]
return point_img[0:2]
```
Now that we have the functions to project 3D -> 2D, we retrieve the camera specifications:
```py
# Get the world to camera matrix
world_2_camera = np.array(camera.get_transform().get_inverse_matrix())
# Get the attributes from the camera
image_w = camera_bp.get_attribute("image_size_x").as_int()
image_h = camera_bp.get_attribute("image_size_y").as_int()
fov = camera_bp.get_attribute("fov").as_float()
# Calculate the camera projection matrix to project from 3D -> 2D
K = build_projection_matrix(image_w, image_h, fov)
```
## Bounding boxes
CARLA objects all have an associated bounding box. CARLA [actors](python_api.md#carla.Actor) have a `bounding_box` attribute of type [carla.BoundingBox](python_api.md#carla.BoundingBox). The vertices for a bounding box can be retrieved through one of the getter functions `get_world_vertices()` or `get_local_vertices()`.
It is important to note that to get the 3D coordinates of the bounding box in world coordinates, you need to include the transform of the actor as an argument to the `get_world_vertices()` method like so:
```py
actor.get_world_vertices(actor.get_transform())
```
For objects in the map like buildings, traffic lights and road signs, the bounding box can be retrieved through the [carla.World](python_api.md#carla.World) method `get_level_bbs()`. A [carla.CityObjectLabel](python_api.md#carla.CityObjectLabel) can be used as an argument to filter the bounding box list to relevant objects:
```py
# Retrieve all bounding boxes for traffic lights within the level
bounding_box_set = world.get_level_bbs(carla.CityObjectLabel.TrafficLight)
# Filter the list to extract bounding boxes within a 50m radius
nearby_bboxes = []
for bbox in bounding_box_set:
if bbox.location.distance(actor.get_transform().location) < 50:
nearby_bboxes.append(bbox)
```
This list can be further filtered using actor location to identify objects that are nearby and therefore likely to be within the field of view of a camera attached to an actor.
In order to draw a bounding box onto the camera image, we will need to join the vertices in the appropriate order to create edges. To achieve this we need the following list of edge pairs:
```py
edges = [[0,1], [1,3], [3,2], [2,0], [0,4], [4,5], [5,1], [5,7], [7,6], [6,4], [6,2], [7,3]]
```
## Rendering the bounding boxes
Now that we have our geometric projections and our simulation set up, we can progress to creating the game loop and rendering the bounding boxes into a scene.
```py
# Set up the set of bounding boxes from the level
# We filter for traffic lights and traffic signs
bounding_box_set = world.get_level_bbs(carla.CityObjectLabel.TrafficLight)
bounding_box_set.extend(world.get_level_bbs(carla.CityObjectLabel.TrafficSigns))
# Remember the edge pairs
edges = [[0,1], [1,3], [3,2], [2,0], [0,4], [4,5], [5,1], [5,7], [7,6], [6,4], [6,2], [7,3]]
```
To see the bounding boxes, we will use an OpenCV window to display the camera output.
```py
# Retrieve the first image
world.tick()
image = image_queue.get()
# Reshape the raw data into a 4-channel (BGRA) array
img = np.reshape(np.copy(image.raw_data), (image.height, image.width, 4))
# Display the image in an OpenCV display window
cv2.namedWindow('ImageWindowName', cv2.WINDOW_AUTOSIZE)
cv2.imshow('ImageWindowName',img)
cv2.waitKey(1)
```
Now we will start the game loop:
```py
while True:
# Retrieve and reshape the image
world.tick()
image = image_queue.get()
img = np.reshape(np.copy(image.raw_data), (image.height, image.width, 4))
# Get the camera matrix
world_2_camera = np.array(camera.get_transform().get_inverse_matrix())
for bb in bounding_box_set:
# Filter for distance from ego vehicle
if bb.location.distance(vehicle.get_transform().location) < 50:
# Calculate the dot product between the forward vector
# of the vehicle and the vector between the vehicle
# and the bounding box. We threshold this dot product
# to limit to drawing bounding boxes IN FRONT OF THE CAMERA
forward_vec = vehicle.get_transform().get_forward_vector()
ray = bb.location - vehicle.get_transform().location
if forward_vec.dot(ray) > 1:
# Cycle through the vertices
verts = [v for v in bb.get_world_vertices(carla.Transform())]
for edge in edges:
# Join the vertices into edges
p1 = get_image_point(verts[edge[0]], K, world_2_camera)
p2 = get_image_point(verts[edge[1]], K, world_2_camera)
# Draw the edges into the camera output
cv2.line(img, (int(p1[0]),int(p1[1])), (int(p2[0]),int(p2[1])), (0,0,255, 255), 1)
# Now draw the image into the OpenCV display window
cv2.imshow('ImageWindowName',img)
# Break the loop if the user presses the Q key
if cv2.waitKey(1) == ord('q'):
break
# Close the OpenCV display window when the game loop stops
cv2.destroyAllWindows()
```
Now we are rendering 3D bounding boxes into the images so we can observe them in the camera sensor output.
![3D_bbox_traffic_lights](img/tuto_G_bounding_box/3d_bbox_traffic_lights.gif)
## Vehicle bounding boxes
We may also want to render the bounding boxes for actors, particularly for vehicles.
Firstly, let's add some other vehicles to our simulation:
```py
for i in range(50):
vehicle_bp = random.choice(bp_lib.filter('vehicle'))
npc = world.try_spawn_actor(vehicle_bp, random.choice(spawn_points))
if npc:
npc.set_autopilot(True)
```
Retrieve the first image and set up the OpenCV display window as before:
```py
# Retrieve the first image
world.tick()
image = image_queue.get()
# Reshape the raw data into a 4-channel (BGRA) array
img = np.reshape(np.copy(image.raw_data), (image.height, image.width, 4))
# Display the image in an OpenCV display window
cv2.namedWindow('ImageWindowName', cv2.WINDOW_AUTOSIZE)
cv2.imshow('ImageWindowName',img)
cv2.waitKey(1)
```
Now we use a modified game loop to draw the vehicle bounding boxes:
```py
while True:
# Retrieve and reshape the image
world.tick()
image = image_queue.get()
img = np.reshape(np.copy(image.raw_data), (image.height, image.width, 4))
# Get the camera matrix
world_2_camera = np.array(camera.get_transform().get_inverse_matrix())
for npc in world.get_actors().filter('*vehicle*'):
# Filter out the ego vehicle
if npc.id != vehicle.id:
bb = npc.bounding_box
dist = npc.get_transform().location.distance(vehicle.get_transform().location)
# Filter for the vehicles within 50m
if dist < 50:
# Calculate the dot product between the forward vector
# of the vehicle and the vector between the vehicle
# and the other vehicle. We threshold this dot product
# to limit to drawing bounding boxes IN FRONT OF THE CAMERA
forward_vec = vehicle.get_transform().get_forward_vector()
ray = npc.get_transform().location - vehicle.get_transform().location
if forward_vec.dot(ray) > 1:
p1 = get_image_point(bb.location, K, world_2_camera)
verts = [v for v in bb.get_world_vertices(npc.get_transform())]
for edge in edges:
p1 = get_image_point(verts[edge[0]], K, world_2_camera)
p2 = get_image_point(verts[edge[1]], K, world_2_camera)
cv2.line(img, (int(p1[0]),int(p1[1])), (int(p2[0]),int(p2[1])), (255,0,0, 255), 1)
cv2.imshow('ImageWindowName',img)
if cv2.waitKey(1) == ord('q'):
break
cv2.destroyAllWindows()
```
![3D_bbox_vehicles](img/tuto_G_bounding_box/3d_bbox_vehicle.gif)
### 2D bounding boxes
It is common for neural networks to be trained to detect 2D bounding boxes rather than the 3D bounding boxes demonstrated above. The previous script can be easily extended to generate 2D bounding boxes. We simply need to use the extremities of the 3D bounding boxes. We find, for each bounding box we render, the leftmost, rightmost, highest and lowest projected vertex in image coordinates.
```py
while True:
# Retrieve and reshape the image
world.tick()
image = image_queue.get()
img = np.reshape(np.copy(image.raw_data), (image.height, image.width, 4))
# Get the camera matrix
world_2_camera = np.array(camera.get_transform().get_inverse_matrix())
for npc in world.get_actors().filter('*vehicle*'):
# Filter out the ego vehicle
if npc.id != vehicle.id:
bb = npc.bounding_box
dist = npc.get_transform().location.distance(vehicle.get_transform().location)
# Filter for the vehicles within 50m
if dist < 50:
# Calculate the dot product between the forward vector
# of the vehicle and the vector between the vehicle
# and the other vehicle. We threshold this dot product
# to limit to drawing bounding boxes IN FRONT OF THE CAMERA
forward_vec = vehicle.get_transform().get_forward_vector()
ray = npc.get_transform().location - vehicle.get_transform().location
if forward_vec.dot(ray) > 1:
p1 = get_image_point(bb.location, K, world_2_camera)
verts = [v for v in bb.get_world_vertices(npc.get_transform())]
x_max = -10000
x_min = 10000
y_max = -10000
y_min = 10000
for vert in verts:
p = get_image_point(vert, K, world_2_camera)
# Find the rightmost vertex
if p[0] > x_max:
x_max = p[0]
# Find the leftmost vertex
if p[0] < x_min:
x_min = p[0]
# Find the lowest vertex (largest y value; image y increases downwards)
if p[1] > y_max:
y_max = p[1]
# Find the highest vertex (smallest y value)
if p[1] < y_min:
y_min = p[1]
cv2.line(img, (int(x_min),int(y_min)), (int(x_max),int(y_min)), (0,0,255, 255), 1)
cv2.line(img, (int(x_min),int(y_max)), (int(x_max),int(y_max)), (0,0,255, 255), 1)
cv2.line(img, (int(x_min),int(y_min)), (int(x_min),int(y_max)), (0,0,255, 255), 1)
cv2.line(img, (int(x_max),int(y_min)), (int(x_max),int(y_max)), (0,0,255, 255), 1)
cv2.imshow('ImageWindowName',img)
if cv2.waitKey(1) == ord('q'):
break
cv2.destroyAllWindows()
```
![2D_bbox_vehicles](img/tuto_G_bounding_box/2d_bbox.gif)
## Exporting bounding boxes
Rendering bounding boxes is useful for us to ensure the bounding boxes are correct for debugging purposes. However, if we want to use them practically in training a neural network, we will want to export them. There are a number of different formats used by the common data repositories for autonomous driving and object detection, such as [__KITTI__](http://www.cvlibs.net/datasets/kitti/), [__PASCAL VOC__](http://host.robots.ox.ac.uk/pascal/VOC/) or [__Microsoft COCO__](https://cocodataset.org/#home).
### Pascal VOC format
These datasets commonly use JSON or XML formats to store annotations. There is a convenient Python library, `pascal_voc_writer`, for the PASCAL VOC format.
```py
from pascal_voc_writer import Writer
...
...
...
while True:
# Retrieve the image
world.tick()
image = image_queue.get()
# Get the camera matrix
world_2_camera = np.array(camera.get_transform().get_inverse_matrix())
frame_path = 'output/%06d' % image.frame
# Save the image
image.save_to_disk(frame_path + '.png')
# Initialize the exporter
writer = Writer(frame_path + '.png', image_w, image_h)
for npc in world.get_actors().filter('*vehicle*'):
if npc.id != vehicle.id:
bb = npc.bounding_box
dist = npc.get_transform().location.distance(vehicle.get_transform().location)
if dist < 50:
forward_vec = vehicle.get_transform().get_forward_vector()
ray = npc.get_transform().location - vehicle.get_transform().location
if forward_vec.dot(ray) > 1:
p1 = get_image_point(bb.location, K, world_2_camera)
verts = [v for v in bb.get_world_vertices(npc.get_transform())]
x_max = -10000
x_min = 10000
y_max = -10000
y_min = 10000
for vert in verts:
p = get_image_point(vert, K, world_2_camera)
if p[0] > x_max:
x_max = p[0]
if p[0] < x_min:
x_min = p[0]
if p[1] > y_max:
y_max = p[1]
if p[1] < y_min:
y_min = p[1]
# Add the object to the frame (ensure it is inside the image)
if x_min > 0 and x_max < image_w and y_min > 0 and y_max < image_h:
writer.addObject('vehicle', x_min, y_min, x_max, y_max)
# Save the bounding boxes in the scene
writer.save(frame_path + '.xml')
```
For every rendered frame of your simulation, you will now export an accompanying XML file containing the details of the bounding boxes in the frame.
![xml_bbox_files](img/tuto_G_bounding_box/xml_bbox_files.png)
In the PASCAL VOC format, the XML files contain information referring to the accompanying image file, the image dimensions and can include details such as vehicle type if needed.
```xml
<!-- Example PASCAL VOC format file-->
<annotation>
<folder>output</folder>
<filename>023235.png</filename>
<path>/home/matt/Documents/temp/output/023235.png</path>
<source>
<database>Unknown</database>
</source>
<size>
<width>800</width>
<height>600</height>
<depth>3</depth>
</size>
<segmented>0</segmented>
<object>
<name>vehicle</name>
<pose>Unspecified</pose>
<truncated>0</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>503</xmin>
<ymin>310</ymin>
<xmax>511</xmax>
<ymax>321</ymax>
</bndbox>
</object>
<object>
<name>vehicle</name>
<pose>Unspecified</pose>
<truncated>0</truncated>
<difficult>0</difficult>
<bndbox>
<xmin>490</xmin>
<ymin>310</ymin>
<xmax>498</xmax>
<ymax>321</ymax>
</bndbox>
</object>
</annotation>
```
### Microsoft COCO format
Another popular export format is [__Microsoft COCO__](https://cocodataset.org/#home). The COCO format uses JSON files to save references to images and annotations. The format includes the images and annotations in the fields of a single JSON file, along with information on the dataset and licenses. In contrast to some other formats, references to all collected images and all associated annotations go in the same file.
You should create a JSON dictionary similar to the following example:
```py
simulation_dataset = {
"info": {},
"licenses": [
{
"url": "http://creativecommons.org/licenses/by-nc-sa/2.0/",
"id": 1,
"name": "Attribution-NonCommercial-ShareAlike License"
}],
"images": [...,
{
"license": 1,
"file_name": "023235.png",
"height": 600,
"width": 800,
"date_captured": "2022-04-14 17:02:52",
"id": 23235
},
...
],
"categories": [...
{"supercategory": "vehicle", "id": 10, "name": "vehicle" },
...],
"annotations": [
...,
{
"segmentation": [],
"area": 9262.89,
"iscrowd": 0,
"image_id": 23235,
"bbox": [503.3, 310.4, 118.3, 78.3]
},
...
]
}
```
The info and licenses sections should be filled accordingly or left empty. The images from your simulation should be stored in an array in the `images` field of the dictionary. The bounding boxes should be stored in the `annotations` field of the dictionary with the matching `image_id`. The bounding box is stored as `[x_min, y_min, width, height]`.
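As a minimal sketch, one annotation entry could be built from the box extremities computed in the earlier loop (`x_min`, `y_min`, `x_max`, `y_max` and `image` are assumed to be in scope):

```py
# Append a COCO-style annotation for the current vehicle and frame
annotation = {
    "segmentation": [],
    "area": (x_max - x_min) * (y_max - y_min),
    "iscrowd": 0,
    "image_id": image.frame,
    "bbox": [x_min, y_min, x_max - x_min, y_max - y_min]
}
simulation_dataset["annotations"].append(annotation)
```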
The Python JSON library can then be used to save the dictionary as a JSON file:
```py
import json
with open('simulation_data.json', 'w') as json_file:
json.dump(simulation_dataset, json_file)
```
More details about the COCO data format can be found [__here__](https://www.immersivelimit.com/tutorials/create-coco-annotations-from-scratch/#create-custom-coco-dataset).
*It should be noted that in this tutorial we have not accounted for overlapping bounding boxes. Additional work would be required in order to identify foreground bounding boxes in the case where they overlap.*

View File

@ -0,0 +1,238 @@
# Getting started with CARLA
The CARLA simulator is a comprehensive solution for producing synthetic training data for applications in autonomous driving (AD) and also other robotics applications. CARLA simulates a highly realistic environment emulating real world towns, cities and highways and the vehicles and other objects that occupy these driving spaces.
The CARLA simulator is further useful as an evaluation and testing environment. You can deploy the AD agents you have trained within the simulation to test and evaluate their performance and safety, all within the security of a simulated environment, with no risk to hardware or other road users.
In this tutorial, we will cover some of the basic steps of getting started with CARLA, from using the spectator to navigate the environment, to populating your simulation with vehicles and pedestrians, to adding sensors and cameras to gather simulated data to feed into neural networks for training or testing.
## Starting CARLA and connecting the client
CARLA can be launched from the command line, using the executable in Windows or the shell script in Linux. Follow the installation instructions for [__Linux__](start_quickstart.md) and [__Windows__](start_quickstart.md), then [__launch CARLA__](start_quickstart.md#running-carla) from the command line.
To manipulate CARLA through the Python API, we need to connect the Python client to the server through an open port:
```py
import carla
import random
# Connect to the client and retrieve the world object
client = carla.Client('localhost', 2000)
world = client.get_world()
```
The [__client__](python_api#carlaclient) object serves to maintain the client's connection to the server and has a number of functions for applying commands and loading or exporting data. We can load an alternative map or reload the current one (resetting to initial state) using the client object:
```py
# Print available maps
client.get_available_maps()
# Load new map
client.load_world('Town07')
# Reload current map and reset state
client.reload_world()
```
The port can be any available port and is set to 2000 by default. You can also choose a host different from *localhost* by using a computer's IP address. This way, the CARLA server can be run on a networked machine, while the Python client can be run from a personal computer. This is particularly useful for separating the GPU used for running the CARLA simulator from the one used for neural network training, both of which can be highly demanding on graphics hardware.
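A sketch of connecting to a remote server (the IP address here is a placeholder):

```py
# Connect to a CARLA server running on another machine on the network
client = carla.Client('192.168.0.42', 2000)
client.set_timeout(10.0)  # give a remote or busy server time to respond
world = client.get_world()
```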
!!! Note
The following presumes that CARLA is running in the default [__asynchronous__](adv_synchrony_timestep.md) mode. If you have engaged synchronous mode, some of the code in the following sections might not work as expected.
## The world object
In the CARLA API, the [__world__](python_api#carlaworld) object provides access to all elements of the simulation, including the map, objects within the map, such as buildings, traffic lights, vehicles and pedestrians.
We can use the world object to query and access objects within the simulation:
```py
# Get names of all objects
world.get_names_of_all_objects()
# Filter the list of names for buildings
filter(lambda x: 'Building' in x, world.get_names_of_all_objects())
# Get a list of all actors, such as vehicles and pedestrians
world.get_actors()
# Filter the list to find the vehicles
world.get_actors().filter('*vehicle*')
```
The world object is used to add things to the simulation, such as vehicles and pedestrians through the spawn methods. Vehicles and pedestrians have a special place within the CARLA simulation since they exhibit behaviors, i.e. they can move around and affect other objects, so we call them actors. This differentiates them from static, inanimate objects like buildings that are just features in the map. Other objects such as traffic lights are also actors since they exhibit behaviors that affect other objects.
To spawn objects, we need a [__blueprint__](python_api#carlaactorblueprint) for the object. Blueprints are recipes containing all the parts necessary for an actor, such as the mesh, textures and materials that govern its appearance within the simulation, and all the logic that governs its behavior and physics - how it interacts with other objects in the simulation. Let's find a blueprint for a vehicle and spawn it.
```py
# Get the blueprint library and filter for the vehicle blueprints
vehicle_bps = world.get_blueprint_library().filter('*vehicle*')
# Randomly choose a vehicle blueprint to spawn
vehicle_bp = random.choice(vehicle_bps)
# We need a place to spawn the vehicle that will work so we will
# use the predefined spawn points for the map and randomly select one
spawn_point = random.choice(world.get_map().get_spawn_points())
# Now let's spawn the vehicle
world.spawn_actor(vehicle_bp, spawn_point)
```
For various reasons, this spawn attempt might fail, so to avoid our code crashing, we can use a fault tolerant spawn method. This returns `None` if the spawn fails. If the spawn succeeds, it returns a reference to the vehicle itself, which can be used to control it in various ways, including applying control inputs to move and steer it, handing over control to the Traffic Manager or destroying it.
```py
vehicle = world.try_spawn_actor(vehicle_bp, spawn_point)
```
The spawn may fail if there is already a vehicle or other actor at or close to the chosen spawn point, or if the spawn point is in an inappropriate location such as within a building or other static item of the map that's not a road or pavement.
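A simple way to handle this, sketched below, is to keep trying random spawn points until one succeeds:

```py
# Retry random spawn points until the spawn succeeds
vehicle = None
while vehicle is None:
    spawn_point = random.choice(world.get_map().get_spawn_points())
    vehicle = world.try_spawn_actor(vehicle_bp, spawn_point)
```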
## The spectator
The spectator is a view into the simulation. By default, the spectator opens in a new window when you run the CARLA server on a computer with a screen attached, unless you specify the `-RenderOffScreen` command line option.
The spectator is helpful to visualize your simulation. Using the spectator, you can familiarize yourself with the map you've loaded, and see the result of any changes you are making, such as adding vehicles, changing the weather, turning on/off various layers of the map and for debugging purposes.
You can fly the spectator around the world using the mouse to control the pitch and yaw of the spectator view and the QWE-ASD keys to move the spectator:
- Q - move upwards (towards the top edge of the window)
- E - move downwards (towards the lower edge of the window)
- W - move forwards
- S - move backwards
- A - move left
- D - move right
Left click and drag the mouse in the spectator window up and down to control pitch and left and right to control yaw.
![flying_spectator](../img/tuto_G_getting_started/flying_spectator.gif)
The spectator and its properties can be accessed and manipulated through the Python API:
```py
# Retrieve the spectator object
spectator = world.get_spectator()
# Get the location and rotation of the spectator through its transform
transform = spectator.get_transform()
location = transform.location
rotation = transform.rotation
# Set the spectator with an empty transform
spectator.set_transform(carla.Transform())
# This will set the spectator at the origin of the map, with 0 degrees
# pitch, yaw and roll - a good way to orient yourself in the map
```
## Finding a custom spawn point using the spectator
The spectator is particularly useful to verify your actors are spawning correctly and also to determine locations for spawning.
We have two options to define spawn points. We can define our own custom spawn points, or we can use predefined spawn points that are provided with each map.
If we want to define a custom spawn point, we need to know the coordinates of the spawn point. Here we can use the spectator to help us since we can access its location.
First, use the controls defined above to fly the spectator to a point of interest.
Now, let's spawn a vehicle where the spectator is:
```py
vehicle = world.try_spawn_actor(vehicle_bp, spectator.get_transform())
```
![spawn_vehicle](../img/tuto_G_getting_started/spawn_vehicle.gif)
You'll now see a vehicle spawned at the point where the spectator is. It will take on both the location and the rotation of the spectator, so be sure to orient the spectator in the direction you want the vehicle to face. If you navigate close to the ground, the spectator will end up inside the vehicle, and if it is too close to the ground, the spawn may fail. If you spawn the vehicle with the spectator high in the air, the vehicle will drop to the ground.
We can also record this point for later use, manually recording it or printing to a file:
```py
print(spectator.get_transform())
>>> Transform(Location(x=25.761623, y=13.169240, z=0.539901), Rotation(pitch=0.862031, yaw=-2.056274, roll=0.000069))
```
## Using and visualizing map spawn points
Manually defining spawn points is useful for custom scenarios, however, if we need to create a whole city full of traffic, it could be very time consuming. For this reason, each map provides a set of predefined spawn points distributed evenly throughout the map to make creating large volumes of NPC traffic efficient.
```py
# Get the map's spawn points
spawn_points = world.get_map().get_spawn_points()
# Get the blueprint library and filter for the vehicle blueprints
vehicle_bps = world.get_blueprint_library().filter('*vehicle*')
# Spawn 50 vehicles randomly distributed throughout the map
for i in range(0,50):
world.try_spawn_actor(random.choice(vehicle_bps), random.choice(spawn_points))
```
This is useful; however, we don't really know where the vehicles are going to end up. Luckily, CARLA's debug tools give us some ways of visualizing locations in the map. For example, if we wanted to be more specific about which spawn points to use, say to create congestion in one particular part of town, we could specify a set of spawn points for instantiating new vehicles in the simulation.
To do this, we can visualize the spawn points in the map.
```py
# Get the map spawn points
spawn_points = world.get_map().get_spawn_points()
for i, spawn_point in enumerate(spawn_points):
# Draw in the spectator window the spawn point index
world.debug.draw_string(spawn_point.location, str(i), life_time=100)
# We can also draw an arrow to see the orientation of the spawn point
# (i.e. which way the vehicle will be facing when spawned)
world.debug.draw_arrow(spawn_point.location, spawn_point.location + spawn_point.get_forward_vector(), life_time=100)
```
![spawn_points](../img/tuto_G_getting_started/spawn_points.png)
Now we can note down the spawn point indices we are interested in and fill this street with vehicles:
```py
for ind in [89, 95, 99, 102, 103, 104, 110, 111, 115, 126, 135, 138, 139, 140, 141]:
world.try_spawn_actor(random.choice(vehicle_bps), spawn_points[ind])
```
Or spawn randomly throughout the map:
```py
for ind in range(0, 100):
world.try_spawn_actor(random.choice(vehicle_bps), random.choice(spawn_points))
```
![vehicle_street](../img/tuto_G_getting_started/vehicle_street.png)
## Actors and blueprints
[__Actors__](python_api#carlaactor) are the objects within the CARLA simulation that have an effect on, or *act* upon, other objects in the simulation. CARLA actors include vehicles, pedestrians, traffic lights, road signs, obstacles, cameras and sensors. Each actor requires a [__blueprint__](python_api#carlaactorblueprint). The blueprint defines all the necessary elements needed for an actor, including assets such as meshes, textures and materials, and also any logic required to govern the behavior of the actor. To spawn an actor, we need to define it with a blueprint.
CARLA provides a comprehensive library of blueprints including numerous types and models of vehicles, numerous pedestrian models and traffic lights, boxes, trash cans, shopping carts and traffic signals.
We can use CARLA's [__blueprint library__](python_api#carlablueprintlibrary) to find and choose an appropriate blueprint for our needs:
```py
# Print all available blueprints
for actor in world.get_blueprint_library():
print(actor)
```
The blueprint library can be filtered to narrow down our search:
```py
# Print all available vehicle blueprints
for actor in world.get_blueprint_library().filter('vehicle'):
print(actor)
vehicle_blueprint = world.get_blueprint_library().find('vehicle.audi.tt')
```
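Blueprints also expose attributes that can be inspected and, in many cases, modified before spawning. A minimal sketch using the `color` attribute available on most vehicle blueprints:

```py
# Pick a random recommended color for the chosen vehicle blueprint
if vehicle_blueprint.has_attribute('color'):
    color = random.choice(vehicle_blueprint.get_attribute('color').recommended_values)
    vehicle_blueprint.set_attribute('color', color)
```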

Some files were not shown because too many files have changed in this diff.