Sergi e/rss docs (#2686)

* RSS first add

* RSS docs first draft.

* Second draft.

* Codacy fixes

* Readme update
This commit is contained in:
sergi.e 2020-04-03 13:57:11 +02:00 committed by GitHub
parent bdd0aaaac9
commit 29e8e14cc1
9 changed files with 773 additions and 118 deletions

Docs/adv_rss.md (new file)
# RSS Sensor
CARLA integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in the client library. This feature allows users to investigate behaviours of RSS without having to implement anything. CARLA will take care of providing the input, and applying the output to the AD systems on the fly.
* [__Overview__](#overview)
* [__Compilation__](#compilation)
* [Dependencies](#dependencies)
* [Build](#build)
* [__Current state__](#current-state)
* [RssSensor](#rsssensor)
* [RssRestrictor](#rssrestrictor)
!!! Important
This feature is a work in progress. Right now, it is only available for the Linux build.
---
## Overview
The RSS library implements a mathematical model for safety assurance. It receives sensor information, and provides restrictions to the controllers of a vehicle. To sum up, the RSS module uses the sensor data to define __situations__. A situation describes the state of the ego vehicle with an element of the environment. For each situation, safety checks are made, and a proper response is calculated. The overall response is the result of all of them combined. For specific information on the library, read the [documentation](https://intel.github.io/ad-rss-lib/), especially the [Background section](https://intel.github.io/ad-rss-lib/ad_rss/Overview/).
This is implemented in CARLA using two elements.
* __RssSensor__ is in charge of the situation analysis, and response generation using the *ad-rss-lib*.
* __RssRestrictor__ applies the response by restricting the commands of the vehicle.
The following image sketches the integration of __RSS__ into the CARLA architecture.
![Integrate RSS into CARLA](img/rss_carla_integration_architecture.png)
__1. The server.__
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__-__ Sends a camera image to the client. <small>(Only if the client needs visualization).</small>
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__-__ Provides the RssSensor with world data.
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__-__ Sends a physics model of the vehicle to the RssRestrictor. <small>(Only if the default values are overwritten).</small>
__2. The client.__
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__-__ Provides the *RssSensor* with some [parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) to be considered.
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__-__ Sends the *RssRestrictor* an initial [carla.VehicleControl](python_api.md#carla.VehicleControl).
__3. The RssSensor.__
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__-__ Uses the *ad-rss-lib* to extract situations, do safety checks, and generate a response.
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__-__ Sends the *RssRestrictor* the proper response and the acceleration restrictions to be applied.
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__-__ Asks the server to do some debug drawings to visualize the results of the calculations.
__4. The RssRestrictor__
&nbsp;&nbsp;&nbsp;&nbsp;&nbsp;__-__ If the client asks for it, applies the response to the [carla.VehicleControl](python_api.md#carla.VehicleControl), and returns the resulting one.
!!! Important
Debug drawings can delay the RSS response, so they should be disabled during automated RSS evaluations. Use [carla.RssVisualizationMode](python_api.md#carla.RssVisualizationMode) to change the visualization settings.
[![RSS sensor in CARLA](img/rss_carla_integration.png)](https://www.youtube.com/watch?v=UxKPXPT2T8Q)
<div style="text-align: right"><i>Visualization of the RssSensor results.</i></div>
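The client-side wiring sketched above can be written as a short helper. This is a minimal sketch, not part of the CARLA API: the helper names are made up, and a running CARLA server with the RSS-enabled PythonAPI build is assumed. Only `sensor.other.rss`, `spawn_actor` and `listen` come from CARLA itself.

```python
# Sketch of the client-side RSS wiring. Helper names are illustrative;
# only the blueprint id and the spawn/listen calls belong to the CARLA API.

def make_rss_callback(store):
    """Build a listener that keeps the latest valid carla.RssResponse."""
    def on_rss_response(response):
        # Responses flagged invalid (failed calculations) are discarded.
        if response.response_valid:
            store["latest"] = response
    return on_rss_response

def attach_rss_sensor(world, vehicle, store):
    """Spawn an RSS sensor attached to the ego vehicle and start listening."""
    import carla  # deferred import so the pure helpers above need no server
    blueprint = world.get_blueprint_library().find('sensor.other.rss')
    sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=vehicle)
    sensor.listen(make_rss_callback(store))
    return sensor
```

A control loop would then read `store["latest"]` on every tick and hand it to the *RssRestrictor* before applying the control.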
---
## Compilation
The RSS integration has to be built aside from the rest of CARLA. The __ad-rss-lib__ comes with an LGPL-2.1 open-source license that creates a conflict, so it has to be linked statically into *libCarla*.
As a reminder, the feature is only available for the Linux build so far.
### Dependencies
There are additional prerequisites required for building RSS and its dependencies. Take a look at the [official documentation](https://intel.github.io/ad-rss-lib/BUILDING) to know more about this.
Dependencies provided by Ubuntu (>= 16.04).
```sh
sudo apt-get install libgtest-dev libpython-dev libpugixml-dev libproj-dev libtbb-dev
```
The dependencies are built using [colcon](https://colcon.readthedocs.io/en/released/user/installation.html), so it has to be installed.
```sh
pip3 install --user -U colcon-common-extensions
```
There are some additional dependencies for the Python bindings.
```sh
sudo apt-get install castxml
pip install --user pygccxml
pip install --user https://bitbucket.org/ompl/pyplusplus/get/1.8.1.zip
```
### Build
Once this is done, the full set of dependencies and RSS components can be built.
* Compile LibCarla to work with RSS.
```sh
make LibCarla.client.rss
```
* Compile the PythonAPI to include the RSS feature.
```sh
make PythonAPI.rss
```
* As an alternative, a package can be built directly.
```sh
make package.rss
```
---
## Current state
### RssSensor
[__carla.RssSensor__](python_api.md#carla.RssSensor) supports [ad-rss-lib v3.0.0 feature set](https://intel.github.io/ad-rss-lib/RELEASE_NOTES_AND_DISCLAIMERS) completely, including intersections and [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) support.
So far, the server provides the sensor with ground truth data of the surroundings, including the state of other vehicles and traffic lights. Future improvements will add pedestrians, more information from the OpenDRIVE map, and other elements to the equation.
### RssRestrictor
When the client calls for it, the [__carla.RssRestrictor__](python_api.md#carla.RssRestrictor) will modify the vehicle control to best reach the accelerations or decelerations required by a given response.
Due to the structure of [carla.VehicleControl](python_api.md#carla.VehicleControl) objects, the restrictions applied have certain limitations. These controls include `throttle`, `brake` and `steering` values. However, due to car physics and the simple control options, these might not be met. The restriction intervenes in the lateral direction simply by counter-steering towards the parallel lane direction. The brake will be activated if a deceleration is requested by RSS. This depends on the vehicle mass and the brake torques provided by the [carla.Vehicle](python_api.md#carla.Vehicle).
!!! Note
In an automated vehicle controller, it might be possible to adapt the planned trajectory to the restrictions. A fast control loop (>1 kHz) can be used to ensure these are followed.
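Applying a response can be sketched as a small wrapper around `restrict_vehicle_control`. The wrapper itself is an assumption for illustration; only the restrictor method and the response fields come from the CARLA API.

```python
# Minimal sketch of applying an RSS response to a control command.
# The wrapper is illustrative; it only forwards the relevant pieces of a
# carla.RssResponse to carla.RssRestrictor.restrict_vehicle_control.

def apply_rss_restriction(restrictor, control, response, vehicle_physics):
    """Return a restricted control if the response is valid, else the original."""
    if not response.response_valid:
        return control  # nothing to restrict; keep the original command
    return restrictor.restrict_vehicle_control(
        control,
        response.acceleration_restriction,
        response.ego_dynamics_on_route,
        vehicle_physics)
```

In a real client, `restrictor` would be a `carla.RssRestrictor()` and `vehicle_physics` the result of `vehicle.get_physics_control()`.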
---
That sets the basics regarding the RSS sensor in CARLA. Find out more about the specific attributes and parameters in the [sensor reference](ref_sensors.md#rss-sensor).
Open CARLA and mess around for a while. If there are any doubts, feel free to post them in the forum.
<div class="build-buttons">
<p>
<a href="https://forum.carla.org/" target="_blank" class="btn btn-neutral" title="Go to the CARLA forum">
CARLA forum</a>
</p>
</div>

Binary file not shown (241 KiB).

— Register the events in a simulation and play it again.
[__Rendering options__](adv_rendering_options.md)
— From quality settings to no-render or off-screen modes.
[__RSS sensor__](adv_rss.md)
— An implementation of RSS in the CARLA client library.
[__Synchrony and time-step__](adv_synchrony_timestep.md)
— Client-server communication and simulation time.
[__Traffic Manager__](adv_traffic_manager.md)

Parses the axis' orientations to string.
---
## carla.RssEgoDynamicsOnRoute<a name="carla.RssEgoDynamicsOnRoute"></a>
Part of the data contained inside a [carla.RssResponse](#carla.RssResponse) describing the state of the vehicle. The parameters include its current dynamics, and its heading with regard to the target route.
<h3>Instance Variables</h3>
- <a name="carla.RssEgoDynamicsOnRoute.ego_speed"></a>**<font color="#f8805a">ego_speed</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Speed.html">libad_physics_python.Speed</a>_)
The ego vehicle's speed.
- <a name="carla.RssEgoDynamicsOnRoute.min_stopping_distance"></a>**<font color="#f8805a">min_stopping_distance</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Distance.html">libad_physics_python.Distance</a>_)
The current minimum stopping distance.
- <a name="carla.RssEgoDynamicsOnRoute.ego_center"></a>**<font color="#f8805a">ego_center</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_map_access/apidoc/html/structad_1_1map_1_1point_1_1ENUPoint.html">libad_map_access_python.ENUPoint</a>_)
The considered ENU position of the ego vehicle.
- <a name="carla.RssEgoDynamicsOnRoute.ego_heading"></a>**<font color="#f8805a">ego_heading</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_map_access/apidoc/html/classad_1_1map_1_1point_1_1ENUHeading.html">libad_map_access_python.ENUHeading</a>_)
The considered heading of the ego vehicle.
- <a name="carla.RssEgoDynamicsOnRoute.ego_center_within_route"></a>**<font color="#f8805a">ego_center_within_route</font>** (_bool_)
States if the ego vehicle's center is within the route.
- <a name="carla.RssEgoDynamicsOnRoute.crossing_border"></a>**<font color="#f8805a">crossing_border</font>** (_bool_)
States if the vehicle is already crossing one of the lane borders.
- <a name="carla.RssEgoDynamicsOnRoute.route_heading"></a>**<font color="#f8805a">route_heading</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_map_access/apidoc/html/classad_1_1map_1_1point_1_1ENUHeading.html">libad_map_access_python.ENUHeading</a>_)
The considered heading of the route.
- <a name="carla.RssEgoDynamicsOnRoute.route_nominal_center"></a>**<font color="#f8805a">route_nominal_center</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_map_access/apidoc/html/structad_1_1map_1_1point_1_1ENUPoint.html">libad_map_access_python.ENUPoint</a>_)
The considered nominal center of the current route.
- <a name="carla.RssEgoDynamicsOnRoute.heading_diff"></a>**<font color="#f8805a">heading_diff</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_map_access/apidoc/html/classad_1_1map_1_1point_1_1ENUHeading.html">libad_map_access_python.ENUHeading</a>_)
The considered heading diff towards the route.
- <a name="carla.RssEgoDynamicsOnRoute.route_speed_lat"></a>**<font color="#f8805a">route_speed_lat</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Speed.html">libad_physics_python.Speed</a>_)
The ego vehicle's speed component _lat_ regarding the route.
- <a name="carla.RssEgoDynamicsOnRoute.route_speed_lon"></a>**<font color="#f8805a">route_speed_lon</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Speed.html">libad_physics_python.Speed</a>_)
The ego vehicle's speed component _lon_ regarding the route.
- <a name="carla.RssEgoDynamicsOnRoute.route_accel_lat"></a>**<font color="#f8805a">route_accel_lat</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Acceleration.html">libad_physics_python.Acceleration</a>_)
The ego vehicle's acceleration component _lat_ regarding the route.
- <a name="carla.RssEgoDynamicsOnRoute.route_accel_lon"></a>**<font color="#f8805a">route_accel_lon</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Acceleration.html">libad_physics_python.Acceleration</a>_)
The ego vehicle's acceleration component _lon_ regarding the route.
- <a name="carla.RssEgoDynamicsOnRoute.avg_route_accel_lat"></a>**<font color="#f8805a">avg_route_accel_lat</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Acceleration.html">libad_physics_python.Acceleration</a>_)
The ego vehicle's acceleration component _lat_ regarding the route, smoothed by an average filter.
- <a name="carla.RssEgoDynamicsOnRoute.avg_route_accel_lon"></a>**<font color="#f8805a">avg_route_accel_lon</font>** (_<a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Acceleration.html">libad_physics_python.Acceleration</a>_)
The ego acceleration component _lon_ regarding the route, smoothed by an average filter.
<h3>Methods</h3>
<h3>Dunder methods</h3>
- <a name="carla.RssEgoDynamicsOnRoute.__str__"></a>**<font color="#7fb800">\__str__</font>**(<font color="#00a6ed">**self**</font>)
---
## carla.RssResponse<a name="carla.RssResponse"></a>
<div style="padding-left:30px;margin-top:-20px"><small><b>Inherited from _[carla.SensorData](#carla.SensorData)_</b></small></div></p><p>Class that contains the output of a [carla.RssSensor](#carla.RssSensor). This is the result of the RSS calculations performed for the parent vehicle of the sensor.
A [carla.RssRestrictor](#carla.RssRestrictor) will use the data to modify the [carla.VehicleControl](#carla.VehicleControl) of the vehicle.
<h3>Instance Variables</h3>
- <a name="carla.RssResponse.response_valid"></a>**<font color="#f8805a">response_valid</font>** (_bool_)
States if the response is valid. It is __False__ if calculations failed or an exception occurred.
- <a name="carla.RssResponse.proper_response"></a>**<font color="#f8805a">proper_response</font>** (_<a href="https://intel.github.io/ad-rss-lib/doxygen/ad_rss/structad_1_1rss_1_1state_1_1ProperResponse.html">libad_rss_python.ProperResponse</a>_)
The proper response that RSS calculated for the vehicle.
- <a name="carla.RssResponse.acceleration_restriction"></a>**<font color="#f8805a">acceleration_restriction</font>** (_<a href="https://intel.github.io/ad-rss-lib/doxygen/ad_rss/structad_1_1rss_1_1world_1_1AccelerationRestriction.html">libad_rss_python.AccelerationRestriction</a>_)
Acceleration restrictions to be applied, according to the RSS calculation.
- <a name="carla.RssResponse.rss_state_snapshot"></a>**<font color="#f8805a">rss_state_snapshot</font>** (_<a href="https://intel.github.io/ad-rss-lib/doxygen/ad_rss/structad_1_1rss_1_1state_1_1RssStateSnapshot.html">libad_rss_python.RssStateSnapshot</a>_)
Detailed RSS states at the current moment in time.
- <a name="carla.RssResponse.ego_dynamics_on_route"></a>**<font color="#f8805a">ego_dynamics_on_route</font>** (_[carla.RssEgoDynamicsOnRoute](#carla.RssEgoDynamicsOnRoute)_)
Current ego vehicle dynamics regarding the route.
<h3>Methods</h3>
<h3>Dunder methods</h3>
- <a name="carla.RssResponse.__str__"></a>**<font color="#7fb800">\__str__</font>**(<font color="#00a6ed">**self**</font>)
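A minimal sketch of consuming these fields in a listener callback. The helper name is an assumption for illustration, and the `isSafe` flag of the proper response is read defensively via `getattr` in case the binding exposes it differently.

```python
# Illustrative consumer of a carla.RssResponse-like object. The function
# name is made up; the fields response_valid and proper_response are the
# ones documented for carla.RssResponse.

def summarize_response(response):
    """Build a short status line from a carla.RssResponse-like object."""
    if not response.response_valid:
        return "RSS: calculation failed"
    # `isSafe` is a field of the ad-rss ProperResponse struct; hedged here
    # behind getattr in case the Python binding names it differently.
    safe = getattr(response.proper_response, "isSafe", None)
    return "RSS: safe" if safe else "RSS: restrictions active"
```

Such a summary could be printed from the sensor's `listen` callback while debugging.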
---
## carla.RssRestrictor<a name="carla.RssRestrictor"></a>
These objects apply restrictions to a [carla.VehicleControl](#carla.VehicleControl). It is part of the CARLA implementation of the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib). This class works hand in hand with a [rss sensor](ref_sensors.md#rss-sensor), which provides the data of the restrictions to be applied.
<h3>Methods</h3>
- <a name="carla.RssRestrictor.restrict_vehicle_control"></a>**<font color="#7fb800">restrict_vehicle_control</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**vehicle_control**</font>, <font color="#00a6ed">**restriction**</font>, <font color="#00a6ed">**ego_dynamics_on_route**</font>, <font color="#00a6ed">**vehicle_physics**</font>)
Applies the safety restrictions given by a [carla.RssSensor](#carla.RssSensor) to a [carla.VehicleControl](#carla.VehicleControl).
- **Parameters:**
- `vehicle_control` (_[carla.VehicleControl](#carla.VehicleControl)_) The input vehicle control to be restricted.
- `restriction` (_<a href="https://intel.github.io/ad-rss-lib/doxygen/ad_rss/structad_1_1rss_1_1world_1_1AccelerationRestriction.html">libad_rss_python.AccelerationRestriction</a>_) Part of the response generated by the sensor. Contains restrictions to be applied to the acceleration of the vehicle.
- `ego_dynamics_on_route` (_[carla.RssEgoDynamicsOnRoute](#carla.RssEgoDynamicsOnRoute)_) Part of the response generated by the sensor. Contains dynamics and heading of the vehicle regarding its route.
- `vehicle_physics` (_[carla.VehiclePhysicsControl](#carla.VehiclePhysicsControl)_) The current physics of the vehicle. Used to apply the restrictions properly.
- **Return:** _[carla.VehicleControl](#carla.VehicleControl)_
---
## carla.RssRoadBoundariesMode<a name="carla.RssRoadBoundariesMode"></a>
Enum declaration used in [carla.RssSensor](#carla.RssSensor) to enable or disable the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. In summary, this feature considers the road boundaries as virtual objects. The minimum safety distance check is applied to these virtual walls, in order to make sure the vehicle does not drive off the road.
<h3>Instance Variables</h3>
- <a name="carla.RssRoadBoundariesMode.On"></a>**<font color="#f8805a">On</font>**
Enables the _stay on road_ feature.
- <a name="carla.RssRoadBoundariesMode.Off"></a>**<font color="#f8805a">Off</font>**
Disables the _stay on road_ feature.
---
## carla.RssSensor<a name="carla.RssSensor"></a>
<div style="padding-left:30px;margin-top:-20px"><small><b>Inherited from _[carla.Sensor](#carla.Sensor)_</b></small></div></p><p>This sensor works a bit differently than the rest. Take a look at the [specific documentation](adv_rss.md), and the [rss sensor reference](ref_sensors.md#rss-sensor) to gain a full understanding of it.
The RSS sensor uses world information, and an [RSS library](https://github.com/intel/ad-rss-lib) to make safety checks on a vehicle. The output retrieved by the sensor is a [carla.RssResponse](#carla.RssResponse). This will be used by a [carla.RssRestrictor](#carla.RssRestrictor) to modify a [carla.VehicleControl](#carla.VehicleControl) before applying it to a vehicle.
<h3>Instance Variables</h3>
- <a name="carla.RssSensor.ego_vehicle_dynamics"></a>**<font color="#f8805a">ego_vehicle_dynamics</font>** (_libad_rss_python.RssDynamics_)
States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the ego vehicle.
- <a name="carla.RssSensor.other_vehicle_dynamics"></a>**<font color="#f8805a">other_vehicle_dynamics</font>** (_libad_rss_python.RssDynamics_)
States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the rest of the vehicles.
- <a name="carla.RssSensor.road_boundaries_mode"></a>**<font color="#f8805a">road_boundaries_mode</font>** (_[carla.RssRoadBoundariesMode](#carla.RssRoadBoundariesMode)_)
Switches the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. It is __On__ by default.
- <a name="carla.RssSensor.visualization_mode"></a>**<font color="#f8805a">visualization_mode</font>** (_[carla.RssVisualizationMode](#carla.RssVisualizationMode)_)
Sets the visualization of the RSS on the server side. It is __All__ by default. These drawings may delay the RSS response, so it is best to set this to __Off__ when evaluating RSS performance.
- <a name="carla.RssSensor.routing_targets"></a>**<font color="#f8805a">routing_targets</font>** (_vector<[carla.Transform](#carla.Transform)>_)
The current list of targets considered to route the vehicle. If no routing targets are defined, a route is generated at random.
<h3>Methods</h3>
- <a name="carla.RssSensor.append_routing_target"></a>**<font color="#7fb800">append_routing_target</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**routing_target**</font>)
Appends a new target position to the current route of the vehicle.
- **Parameters:**
- `routing_target` (_[carla.Transform](#carla.Transform)_) New target point for the route. Choose these after the intersections to force the route to take the desired turn.
- <a name="carla.RssSensor.reset_routing_targets"></a>**<font color="#7fb800">reset_routing_targets</font>**(<font color="#00a6ed">**self**</font>)
Erases the targets that have been appended to the route.
- <a name="carla.RssSensor.drop_route"></a>**<font color="#7fb800">drop_route</font>**(<font color="#00a6ed">**self**</font>)
Discards the current route. If there are targets remaining in **<font color="#f8805a">routing_targets</font>**, creates a new route using those. Otherwise, a new route is created at random.
<h3>Dunder methods</h3>
- <a name="carla.RssSensor.__str__"></a>**<font color="#7fb800">\__str__</font>**(<font color="#00a6ed">**self**</font>)
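The routing methods above can be combined to force a specific route. This sketch is an assumption for illustration: the helper name is made up, the waypoints would be real `carla.Transform` objects in practice, and a spawned `carla.RssSensor` is assumed.

```python
# Illustrative use of the RssSensor routing methods. Only the three
# method names belong to the CARLA API; the helper is made up.

def route_through(sensor, waypoints):
    """Drop the current route and rebuild it from the given transforms."""
    sensor.reset_routing_targets()      # forget previously appended targets
    for transform in waypoints:
        # Choose points past intersections to force the desired turns.
        sensor.append_routing_target(transform)
    sensor.drop_route()  # discard the old route; the new targets take over
```

Calling `drop_route()` after appending makes the sensor rebuild its route from `routing_targets` instead of picking one at random.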
---
## carla.RssVisualizationMode<a name="carla.RssVisualizationMode"></a>
Enum declaration used to state the visualization of the RSS calculations on the server side. Depending on these, the [carla.RssSensor](#carla.RssSensor) will use a [carla.DebugHelper](#carla.DebugHelper) to draw different elements. These drawings take some time and might delay the RSS responses. It is best to disable them when evaluating RSS performance.
<h3>Instance Variables</h3>
- <a name="carla.RssVisualizationMode.Off"></a>**<font color="#f8805a">Off</font>**
- <a name="carla.RssVisualizationMode.RouteOnly"></a>**<font color="#f8805a">RouteOnly</font>**
- <a name="carla.RssVisualizationMode.VehicleStateOnly"></a>**<font color="#f8805a">VehicleStateOnly</font>**
- <a name="carla.RssVisualizationMode.VehicleStateAndRoute"></a>**<font color="#f8805a">VehicleStateAndRoute</font>**
- <a name="carla.RssVisualizationMode.All"></a>**<font color="#f8805a">All</font>**
---
## carla.Sensor<a name="carla.Sensor"></a>
<div style="padding-left:30px;margin-top:-20px"><small><b>Inherited from _[carla.Actor](#carla.Actor)_</b></small></div></p><p>Sensors compound a specific family of actors quite diverse and unique. They are normally spawned as attachment/sons of a vehicle (take a look at [carla.World](#carla.World) to learn about actor spawning). Sensors are thoroughly designed to retrieve different types of data that they are listening to. The data they receive is shaped as different subclasses inherited from [carla.SensorData](#carla.SensorData) (depending on the sensor).
Most sensors can be divided in two groups: those receiving data on every tick (cameras, point clouds and some specific sensors) and those that only receive it under certain circumstances (trigger detectors). CARLA provides a specific set of sensors and their blueprints can be found in [carla.BlueprintLibrary](#carla.BlueprintLibrary). All the information on their preferences and settlement can be found [here](ref_sensors.md), but the list of those available in CARLA so far goes as follows.
<b>Receive data on every tick.</b>
- [Depth camera](ref_sensors.md#depth-camera).
- [Gnss sensor](ref_sensors.md#gnss-sensor).
- [IMU sensor](ref_sensors.md#imu-sensor).
- [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
- [Radar](ref_sensors.md#radar-sensor).
- [RGB camera](ref_sensors.md#rgb-camera).
- [RSS sensor](ref_sensors.md#rss-sensor).
- [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera).
<b>Only receive data when triggered.</b>
- [Collision detector](ref_sensors.md#collision-detector).
- [Lane invasion detector](ref_sensors.md#lane-invasion-detector).
- [Obstacle detector](ref_sensors.md#obstacle-detector).
<h3>Instance Variables</h3>
Commands the sensor to stop listening for data.
---
## carla.SensorData<a name="carla.SensorData"></a>
Base class for all the objects containing data generated by a [carla.Sensor](#carla.Sensor). These objects should be the argument of the function said sensor is listening to, in order to work with them. Each of these sensors needs a specific type of sensor data. Hereunder is a list of the sensors and their corresponding data.
- Cameras (RGB, depth and semantic segmentation): [carla.Image](#carla.Image).
- Collision detector: [carla.CollisionEvent](#carla.CollisionEvent).
- Gnss detector: [carla.GnssMeasurement](#carla.GnssMeasurement).
- IMU detector: [carla.IMUMeasurement](#carla.IMUMeasurement).
- Lane invasion detector: [carla.LaneInvasionEvent](#carla.LaneInvasionEvent).
- Lidar raycast: [carla.LidarMeasurement](#carla.LidarMeasurement).
- Obstacle detector: [carla.ObstacleDetectionEvent](#carla.ObstacleDetectionEvent).
- Radar detector: [carla.RadarMeasurement](#carla.RadarMeasurement).
- RSS sensor: [carla.RssResponse](#carla.RssResponse).
<h3>Instance Variables</h3>
- <a name="carla.SensorData.frame"></a>**<font color="#f8805a">frame</font>** (_int_)

# Sensors reference
* [__Collision detector__](#collision-detector)
* [__Depth camera__](#depth-camera)
* [__GNSS sensor__](#gnss-sensor)
* [__IMU sensor__](#imu-sensor)
* [__Lane invasion detector__](#lane-invasion-detector)
* [__Lidar raycast sensor__](#lidar-raycast-sensor)
* [__Obstacle detector__](#obstacle-detector)
* [__Radar sensor__](#radar-sensor)
* [__RGB camera__](#rgb-camera)
* [__RSS sensor__](#rss-sensor)
* [__Semantic segmentation camera__](#semantic-segmentation-camera)
---
* __Blueprint:__ sensor.other.collision
* __Output:__ [carla.CollisionEvent](python_api.md#carla.CollisionEvent) per collision.
This sensor registers an event each time its parent actor collides with something in the world. Several collisions may be detected during a single simulation step.
To ensure that collisions with any kind of object are detected, the server creates "fake" actors for elements such as buildings or bushes so the semantic tag can be retrieved to identify it.
Collision detectors do not have any configurable attribute.
@ -63,9 +64,9 @@ Collision detectors do not have any configurable attribute.
## Depth camera ## Depth camera
* __Blueprint:__ sensor.camera.depth * __Blueprint:__ sensor.camera.depth
* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise). * __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
The camera provides a raw data of the scene codifying the distance of each pixel to the camera (also known as **depth buffer** or **z-buffer**) to create a depth map of the elements. The camera provides a raw data of the scene codifying the distance of each pixel to the camera (also known as **depth buffer** or **z-buffer**) to create a depth map of the elements.
The image codifies depth value per pixel using 3 channels of the RGB color space, from less to more significant bytes: _R -> G -> B_. The actual distance in meters can be The image codifies depth value per pixel using 3 channels of the RGB color space, from less to more significant bytes: _R -> G -> B_. The actual distance in meters can be
decoded with: decoded with:
@ -75,8 +76,8 @@ normalized = (R + G * 256 + B * 256 * 256) / (256 * 256 * 256 - 1)
in_meters = 1000 * normalized in_meters = 1000 * normalized
``` ```
The output [carla.Image](python_api.md#carla.Image) should then be saved to disk using a [carla.ColorConverter](python_api.md#carla.ColorConverter) that will turn the distance stored in the RGB channels into a __[0,1]__ float containing the distance, and then translate this to grayscale.
There are two options in [carla.ColorConverter](python_api.md#carla.ColorConverter) to get a depth view: __Depth__ and __Logarithmic depth__. The precision is millimetric in both, but the logarithmic approach provides better results for closer objects.
![ImageDepth](img/capture_depth.png)
## GNSS sensor
* __Blueprint:__ sensor.other.gnss
* __Output:__ [carla.GnssMeasurement](python_api.md#carla.GnssMeasurement) per step (unless `sensor_tick` says otherwise).
Reports the current [GNSS position](https://www.gsa.europa.eu/european-gnss/what-gnss) of its parent object. This is calculated by adding the metric position to an initial geo reference location defined within the OpenDRIVE map definition.
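The idea of adding a metric offset to a geo reference can be sketched with a simplified equirectangular approximation. This is for illustration only: the exact projection the simulator applies may differ, the axis conventions are assumptions, and `metric_to_geolocation` is a hypothetical helper name:

```python
import math

# Simplified sketch: derive a GNSS fix from a metric offset and the map's geo
# reference. Equirectangular approximation for illustration only; the projection
# CARLA actually uses may differ. `metric_to_geolocation` is a hypothetical name.
def metric_to_geolocation(lat_ref, lon_ref, x, y):
    earth_radius = 6378137.0  # WGS-84 equatorial radius in meters (assumption)
    # Assumed axis convention: +x east, +y south.
    lat = lat_ref - (y / earth_radius) * (180.0 / math.pi)
    lon = lon_ref + (x / (earth_radius * math.cos(math.radians(lat_ref)))) * (180.0 / math.pi)
    return lat, lon
```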
## Lane invasion detector
* __Blueprint:__ sensor.other.lane_invasion
* __Output:__ [carla.LaneInvasionEvent](python_api.md#carla.LaneInvasionEvent) per crossing.
Registers an event each time its parent crosses a lane marking.
The sensor uses road data provided by the OpenDRIVE description of the map to determine whether the parent vehicle is invading another lane by considering the space between wheels.
However, there are some things to be taken into consideration:
* Discrepancies between the OpenDRIVE file and the map will create irregularities, such as crossing lanes that are not visible in the map.
* The output retrieves a list of crossed lane markings: the computation is done in OpenDRIVE, considering the whole space between the four wheels. Thus, there may be more than one lane being crossed at the same time.
This sensor does not have any configurable attribute.
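A callback for this sensor only needs to read the list of crossed markings from the event. A minimal sketch, where `describe_lane_invasion` is a helper name of ours rather than CARLA API:

```python
# Sketch: summarize a carla.LaneInvasionEvent by the types of markings crossed.
# `describe_lane_invasion` is our helper, not part of the CARLA API.
def describe_lane_invasion(event):
    # event.crossed_lane_markings lists the lane markings crossed; several
    # markings may be crossed in the same event, so duplicates are collapsed.
    crossed = set(str(marking.type) for marking in event.crossed_lane_markings)
    return 'Crossed line %s' % ' and '.join(sorted(crossed))
```

It would typically be registered with `sensor.listen(lambda event: print(describe_lane_invasion(event)))`.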
!!! Important
    This sensor works fully on the client-side.
#### Output attributes
## Lidar raycast sensor
* __Blueprint:__ sensor.lidar.ray_cast
* __Output:__ [carla.LidarMeasurement](python_api.md#carla.LidarMeasurement) per step (unless `sensor_tick` says otherwise).
This sensor simulates a rotating Lidar implemented using ray-casting.
The points are computed by adding a laser for each channel distributed in the vertical FOV. The rotation is simulated by computing the horizontal angle that the Lidar rotated in a frame. The point cloud is calculated by doing a ray-cast for each laser in every step:
`points_per_channel_each_step = points_per_second / (FPS * channels)`
A Lidar measurement contains a packet with all the points generated during a `1/FPS` interval. During this interval the physics are not updated, so all the points in a measurement reflect the same "static picture" of the scene.
This output contains a cloud of simulation points and thus can be iterated to retrieve a list of their [`carla.Location`](python_api.md#carla.Location):
```py
for location in lidar_measurement:
    print(location)
```
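The points-per-step formula above can be checked numerically. The values below are illustrative assumptions, not CARLA defaults:

```python
# Numeric check of: points_per_channel_each_step = points_per_second / (FPS * channels)
# The example values below are illustrative assumptions, not CARLA defaults.
def points_per_channel_each_step(points_per_second, fps, channels):
    return points_per_second // (fps * channels)

# e.g. 56000 points/s at 10 FPS with 32 channels gives 175 points per channel per step.
```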
## Obstacle detector
* __Blueprint:__ sensor.other.obstacle
* __Output:__ [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleDetectionEvent) per obstacle (unless `sensor_tick` says otherwise).
Registers an event every time the parent actor has an obstacle ahead.
In order to anticipate obstacles, the sensor creates a capsular shape ahead of the parent vehicle and uses it to check for collisions.
To ensure that collisions with any kind of object are detected, the server creates "fake" actors for elements such as buildings or bushes so the semantic tag can be retrieved to identify it.
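Each event carries the detected actor and the distance to it, so a callback can report even the "fake" static actors mentioned above through their semantic `type_id`. A sketch, with `format_obstacle` being a helper name of ours:

```python
# Sketch: report an obstacle from a carla.ObstacleDetectionEvent, which exposes
# the detected actor and the distance to it. `format_obstacle` is our helper.
def format_obstacle(event):
    # event.other_actor may be a "fake" static actor (e.g. buildings, bushes);
    # its type_id still carries the semantic tag.
    return '%s at %.1f m' % (event.other_actor.type_id, event.distance)
```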
## Radar sensor
* __Blueprint:__ sensor.other.radar
* __Output:__ [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) per step (unless `sensor_tick` says otherwise).
The sensor creates a conic view that is translated to a 2D point map of the elements in sight and their speed relative to the sensor. This can be used to shape elements and evaluate their movement and direction. Due to the use of polar coordinates, the points will concentrate around the center of the view.
Points measured are contained in [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) as an array of [carla.RadarDetection](python_api.md#carla.RadarDetection), which specifies their polar coordinates, distance and velocity.
The raw data provided by the radar sensor can be easily converted to a format manageable by __numpy__:
```py
# To get a numpy [[vel, altitude, azimuth, depth],...[,,,]]:
points = np.frombuffer(radar_data.raw_data, dtype=np.dtype('f4'))
points = np.reshape(points, (len(radar_data), 4))
```
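The polar detections above can then be turned into cartesian points in the sensor frame. This sketch assumes the angles are in radians and takes x as the forward axis; `radar_to_cartesian` is a helper name of ours:

```python
import numpy as np

# Sketch: convert the [vel, altitude, azimuth, depth] rows above into cartesian
# coordinates in the sensor frame. Assumes angles in radians and x forward.
def radar_to_cartesian(points):
    alt = points[:, 1]
    azi = points[:, 2]
    depth = points[:, 3]
    x = depth * np.cos(alt) * np.cos(azi)
    y = depth * np.cos(alt) * np.sin(azi)
    z = depth * np.sin(alt)
    return np.stack([x, y, z], axis=1)
```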
The provided script `manual_control.py` uses this sensor to show the points being detected, painting them white when static, red when moving towards the sensor and blue when moving away:
![ImageRadar](img/sensor_radar.png)
## RGB camera
* __Blueprint:__ sensor.camera.rgb
* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
The "RGB" camera acts as a regular camera capturing images from the scene.
[carla.ColorConverter](python_api.md#carla.ColorConverter)
If `enable_postprocess_effects` is enabled, a set of post-process effects is applied to the image for the sake of realism:
* __Vignette:__ darkens the border of the screen.
* __Grain jitter:__ adds some noise to the render.
* __Bloom:__ intense lights burn the area around them.
* __Auto exposure:__ modifies the image gamma to simulate the eye adaptation to darker or brighter areas.
* __Lens flares:__ simulates the reflection of bright objects on the lens.
* __Depth of field:__ blurs objects near or very far away from the camera.
The `sensor_tick` attribute tells how fast we want the sensor to capture the data.
<td>Array of BGRA 32-bit pixels.</td>
</tbody>
</table>
<br>
---
## RSS sensor
* __Blueprint:__ sensor.other.rss
* __Output:__ [carla.RssResponse](python_api.md#carla.RssResponse) per step (unless `sensor_tick` says otherwise).
!!! Important
    It is highly recommended to read the specific [RSS documentation](adv_rss.md) before reading this.
This sensor integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in CARLA. It is disabled by default, and has to be explicitly built in order to be used.
The RSS sensor calculates the RSS state of a vehicle and retrieves the current RSS Response as sensor data. The [carla.RssRestrictor](python_api.md#carla.RssRestrictor) will use this data to adapt a [carla.VehicleControl](python_api.md#carla.VehicleControl) before applying it to a vehicle.
These vehicle controls can be generated by an *Automated Driving* stack or by user input. For instance, hereunder is a fragment of code from `PythonAPI/examples/manual_control_rss.py`, where the user input is modified using RSS when necessary.
__1.__ Checks if the __RssSensor__ generates a valid response containing restrictions.
__2.__ Gathers the current dynamics of the vehicle and the vehicle physics.
__3.__ Applies restrictions to the vehicle control using the response from the RssSensor, and the current dynamics and physics of the vehicle.
```py
rss_restriction = self._world.rss_sensor.acceleration_restriction if self._world.rss_sensor and self._world.rss_sensor.response_valid else None
if rss_restriction:
    rss_ego_dynamics_on_route = self._world.rss_sensor.ego_dynamics_on_route
    vehicle_physics = world.player.get_physics_control()
    ...
    vehicle_control = self._restrictor.restrict_vehicle_control(
        vehicle_control, rss_restriction, rss_ego_dynamics_on_route, vehicle_physics)
```
#### The carla.RssSensor class
The blueprint for this sensor has no modifiable attributes. However, the [carla.RssSensor](python_api.md#carla.RssSensor) object that it instantiates has attributes and methods that are detailed in the Python API reference. Here is a summary of them.
<table class ="defTable">
<thead>
<th><a href="../python_api#carlarsssensor">carla.RssSensor variables</a></th>
<th>Type</th>
<th>Description</th>
</thead>
<tbody>
<td>
<code>ego_vehicle_dynamics</code> </td>
<td><a href="https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/">libad_rss_python.RssDynamics</a></td>
<td>RSS parameters to be applied for the ego vehicle </td>
<tr>
<td>
<code>other_vehicle_dynamics</code> </td>
<td><a href="https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/">libad_rss_python.RssDynamics</a></td>
<td>RSS parameters to be applied for the other vehicles</td>
<tr>
<td><code>road_boundaries_mode</code></td>
<td><a href="../python_api#carlarssroadboundariesmode">carla.RssRoadBoundariesMode</a></td>
<td>Enables/Disables the <a href="https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries">stay on road</a> feature. Default is <b>On</b>.</td>
<tr>
<td><code>visualization_mode</code></td>
<td><a href="../python_api#carlarssvisualizationmode">carla.RssVisualizationMode</a></td>
<td>States the visualization of the RSS calculations. Default is <b>All</b>.</td>
</tbody>
</table>
<br>
```py
# Fragment of manual_control_rss.py
# The carla.RssSensor is updated when listening for a new carla.RssResponse
def _on_rss_response(weak_self, response):
    ...
    self.timestamp = response.timestamp
    self.response_valid = response.response_valid
    self.proper_response = response.proper_response
    self.acceleration_restriction = response.acceleration_restriction
    self.ego_dynamics_on_route = response.ego_dynamics_on_route
```
!!! Warning
    This sensor works fully on the client side. There is no blueprint in the server. Changes to the attributes will take effect __after__ *listen()* has been called.
The methods available in this class are related to the routing of the vehicle. RSS calculations are always based on a route of the ego vehicle through the road network.
The sensor allows users to control the considered route by providing some key points, which could be the [carla.Transform](python_api.md#carla.Transform) in a [carla.Waypoint](python_api.md#carla.Waypoint). These points are best selected after the intersections, to force the route to take the desired turn.
<table class ="defTable">
<thead>
<th><a href="../python_api#carlarsssensor">carla.RssSensor methods</a></th>
<th>Description</th>
</thead>
<tbody>
<td><code>routing_targets</code></td>
<td>Get the current list of routing targets used for route.</td>
<tr>
<td><code>append_routing_target</code></td>
<td>Append an additional position to the current routing targets.</td>
<tr>
<td><code>reset_routing_targets</code></td>
<td>Deletes the appended routing targets.</td>
<tr>
<td><code>drop_route</code></td>
<td>Discards the current route and creates a new one.</td>
</tbody>
</table>
<br>
```py
# Update the current route
self.sensor.reset_routing_targets()
if routing_targets:
for target in routing_targets:
self.sensor.append_routing_target(target)
```
!!! Note
    If no routing targets are defined, a random route is created.
#### Output attributes
<table class ="defTable">
<thead>
<th><a href="../python_api#carlarssresponse">carla.RssResponse attributes</a></th>
<th>Type</th>
<th>Description</th>
</thead>
<tbody>
<td><code>response_valid</code></td>
<td>bool</td>
<td>Validity of the response data.</td>
<tr>
<td><code>proper_response</code> </td>
<td><a href="https://intel.github.io/ad-rss-lib/doxygen/ad_rss/structad_1_1rss_1_1state_1_1ProperResponse.html">libad_rss_python.ProperResponse</a></td>
<td>Proper response that the RSS calculated for the vehicle.</td>
<tr>
<td><code>acceleration_restriction</code></td>
<td><a href="https://intel.github.io/ad-rss-lib/doxygen/ad_rss/structad_1_1rss_1_1world_1_1AccelerationRestriction.html">libad_rss_python.AccelerationRestriction</a></td>
<td>Acceleration restrictions of the RSS calculation.</td>
<tr>
<td><code>rss_state_snapshot</code></td>
<td><a href="https://intel.github.io/ad-rss-lib/doxygen/ad_rss/structad_1_1rss_1_1state_1_1RssStateSnapshot.html">libad_rss_python.RssStateSnapshot</a></td>
<td>RSS states at the current point in time.</td>
<tr>
<td><code>ego_dynamics_on_route</code></td>
<td><a href="../python_api#carlarssegodynamicsonroute">carla.RssEgoDynamicsOnRoute</a></td>
<td>Current ego vehicle dynamics regarding the route.</td>
</tbody>
</table>
---
## Semantic segmentation camera
* __Blueprint:__ sensor.camera.semantic_segmentation
* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
This camera classifies every object in sight by displaying it in a different color according to its tags (e.g., pedestrians in a different color than vehicles).
When the simulation starts, every element in the scene is created with a tag. The same happens when an actor is spawned. The objects are classified by their relative file path in the project. For example, meshes stored in `Unreal/CarlaUE4/Content/Static/Pedestrians` are tagged as `Pedestrian`.
The server provides an image with the tag information __encoded in the red channel__: A pixel with a red value of `x` belongs to an object with tag `x`.
This raw [carla.Image](python_api.md#carla.Image) can be stored and converted with the help of __CityScapesPalette__ in [carla.ColorConverter](python_api.md#carla.ColorConverter) to apply the tag information and show a picture with the semantic segmentation.
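Since the tags live in the red channel, extracting a per-pixel tag map from the raw buffer is a one-liner with numpy. A sketch, where `tags_from_raw` is a helper name of ours:

```python
import numpy as np

# Sketch: extract the per-pixel semantic tag from a raw BGRA buffer, where the
# tag is encoded in the red channel. `tags_from_raw` is our helper, not CARLA API.
def tags_from_raw(raw_data, height, width):
    array = np.frombuffer(raw_data, dtype=np.uint8).reshape((height, width, 4))
    return array[:, :, 2]  # BGRA layout: index 2 is the red channel
```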
The following tags are currently available:
<table class ="defTable">
</table>
<br>
parent: carla.Actor
# - DESCRIPTION ------------------------
doc: >
Sensors compound a specific family of actors, quite diverse and unique. They are normally spawned as attachments/children of a vehicle (take a look at carla.World to learn about actor spawning). Sensors are thoroughly designed to retrieve different types of data that they are listening to. The data they receive is shaped as different subclasses inherited from carla.SensorData (depending on the sensor).
Most sensors can be divided into two groups: those receiving data on every tick (cameras, point clouds and some specific sensors) and those that only receive data under certain circumstances (trigger detectors). CARLA provides a specific set of sensors and their blueprints can be found in carla.BlueprintLibrary. All the information on their preferences and settings can be found [here](ref_sensors.md), but the list of those available in CARLA so far goes as follows.
<b>Receive data on every tick.</b>
- [Depth camera](ref_sensors.md#depth-camera).
- [Gnss sensor](ref_sensors.md#gnss-sensor).
- [IMU sensor](ref_sensors.md#imu-sensor).
- [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
- [Radar](ref_sensors.md#radar-sensor).
- [RGB camera](ref_sensors.md#rgb-camera).
- [RSS sensor](ref_sensors.md#rss-sensor).
- [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera).
<b>Only receive data when triggered.</b>
- [Collision detector](ref_sensors.md#collision-detector).
- [Lane invasion detector](ref_sensors.md#lane-invasion-detector).
- [Obstacle detector](ref_sensors.md#obstacle-detector).
# - PROPERTIES -------------------------
instance_variables:
- var_name: is_listening
type: boolean
doc: >
When <b>True</b> the sensor will be waiting for data.
# - METHODS ----------------------------
methods:
- def_name: listen
doc: >
The called function with one argument containing the sensor data.
doc: >
The function the sensor will be calling every time a new measurement is received. This function needs an argument containing an object of type carla.SensorData to work with.
# --------------------------------------
- def_name: stop
doc: >
# --------------------------------------
- def_name: __str__
# --------------------------------------
- class_name: RssSensor
parent: carla.Sensor
# - DESCRIPTION ------------------------
doc: >
This sensor works a bit differently than the rest. Take a look at the [specific documentation](adv_rss.md), and the [RSS sensor reference](ref_sensors.md#rss-sensor) to gain full understanding of it.
The RSS sensor uses world information, and an [RSS library](https://github.com/intel/ad-rss-lib) to make safety checks on a vehicle. The output retrieved by the sensor is a carla.RssResponse. This will be used by a carla.RssRestrictor to modify a carla.VehicleControl before applying it to a vehicle.
# - PROPERTIES -------------------------
instance_variables:
- var_name: ego_vehicle_dynamics
type: libad_rss_python.RssDynamics
doc: >
States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the ego vehicle.
- var_name: other_vehicle_dynamics
type: libad_rss_python.RssDynamics
doc: >
States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the rest of vehicles.
- var_name: road_boundaries_mode
type: carla.RssRoadBoundariesMode
doc: >
Switches the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature on or off. It is __On__ by default.
- var_name: visualization_mode
type: carla.RssVisualizationMode
doc: >
Sets the visualization of the RSS calculations on the server side. It is __All__ by default. These drawings may delay the RSS responses, so it is best to set this to __Off__ when evaluating RSS performance.
- var_name: routing_targets
type: vector<carla.Transform>
doc: >
The current list of targets considered to route the vehicle. If no routing targets are defined, a route is generated at random.
# - METHODS ----------------------------
methods:
- def_name: append_routing_target
params:
- param_name: routing_target
type: carla.Transform
doc: >
New target point for the route. Choose these after the intersections to force the route to take the desired turn.
doc: >
Appends a new target position to the current route of the vehicle.
- def_name: reset_routing_targets
doc: >
Erases the targets that have been appended to the route.
- def_name: drop_route
doc: >
Discards the current route. If there are targets remaining in **<font color="#f8805a">routing_targets</font>**, creates a new route using those. Otherwise, a new route is created at random.
# --------------------------------------
- def_name: __str__
# --------------------------------------
- class_name: RssRestrictor
parent:
# - DESCRIPTION ------------------------
doc: >
These objects apply restrictions to a carla.VehicleControl. It is part of the CARLA implementation of the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib). This class works hand in hand with a [rss sensor](ref_sensors.md#rss-sensor), which provides the data of the restrictions to be applied.
# - PROPERTIES -------------------------
instance_variables:
# - METHODS ----------------------------
methods:
- def_name: restrict_vehicle_control
params:
- param_name: vehicle_control
type: carla.VehicleControl
doc: >
The input vehicle control to be restricted.
- param_name: restriction
type: <a href="https://intel.github.io/ad-rss-lib/doxygen/ad_rss/structad_1_1rss_1_1world_1_1AccelerationRestriction.html">libad_rss_python.AccelerationRestriction</a>
doc: >
Part of the response generated by the sensor. Contains restrictions to be applied to the acceleration of the vehicle.
- param_name: ego_dynamics_on_route
type: carla.RssEgoDynamicsOnRoute
doc: >
Part of the response generated by the sensor. Contains dynamics and heading of the vehicle regarding its route.
- param_name: vehicle_physics
type: carla.VehiclePhysicsControl
doc: >
The current physics of the vehicle. Used to apply the restrictions properly.
return:
carla.VehicleControl
doc: >
Applies the safety restrictions given by a carla.RssSensor to a carla.VehicleControl.
# --------------------------------------
- class_name: RssRoadBoundariesMode
# - DESCRIPTION ------------------------
doc: >
Enum declaration used in carla.RssSensor to enable or disable the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. In summary, this feature considers the road boundaries as virtual objects. The minimum safety distance check is applied to these virtual walls, in order to make sure the vehicle does not drive off the road.
# - PROPERTIES -------------------------
instance_variables:
- var_name: 'On'
doc: >
Enables the _stay on road_ feature.
# --------------------------------------
- var_name: 'Off'
doc: >
Disables the _stay on road_ feature.
# --------------------------------------
- class_name: RssVisualizationMode
# - DESCRIPTION ------------------------
doc: >
Enum declaration used to state the visualization of the RSS calculations on the server side. Depending on these, the carla.RssSensor will use a carla.DebugHelper to draw different elements. These drawings take some time and might delay the RSS responses. It is best to disable them when evaluating RSS performance.
# - PROPERTIES -------------------------
instance_variables:
- var_name: 'Off'
# --------------------------------------
- var_name: RouteOnly
# --------------------------------------
- var_name: VehicleStateOnly
# --------------------------------------
- var_name: VehicleStateAndRoute
# --------------------------------------
- var_name: All
# --------------------------------------
...
- class_name: SensorData
# - DESCRIPTION ------------------------
doc: >
Base class for all the objects containing data generated by a carla.Sensor. These objects should be the argument of the function said sensor is listening to, in order to work with them. Each of these sensors needs a specific type of sensor data. Hereunder is a list of the sensors and their corresponding data.
- Cameras (RGB, depth and semantic segmentation): carla.Image.
- Collision detector: carla.CollisionEvent.
- Gnss detector: carla.GnssMeasurement.
- IMU detector: carla.IMUMeasurement.
- Lane invasion detector: carla.LaneInvasionEvent.
- Lidar raycast: carla.LidarMeasurement.
- Obstacle detector: carla.ObstacleDetectionEvent.
- Radar detector: carla.RadarMeasurement.
- RSS sensor: carla.RssResponse.
# - PROPERTIES -------------------------
instance_variables:
- var_name: frame
- class_name: ColorConverter
# - DESCRIPTION ------------------------
doc: >
Class that defines conversion patterns that can be applied to a carla.Image in order to show information provided by carla.Sensor. Depth conversions cause a loss of accuracy, as sensors detect depth as <b>float</b> that is then converted to a grayscale value between 0 and 255. Take a look a this [recipe](ref_code_recipes.md#converted-image-recipe) to see an example of how to create and save image data for <b>sensor.camera.semantic_segmentation</b>. Class that defines conversion patterns that can be applied to a carla.Image in order to show information provided by carla.Sensor. Depth conversions cause a loss of accuracy, as sensors detect depth as <b>float</b> that is then converted to a grayscale value between 0 and 255. Take a look a this [recipe](ref_code_recipes.md#converted-image-recipe) to see an example of how to create and save image data for <b>sensor.camera.semantic_segmentation</b>.
# - PROPERTIES ------------------------- # - PROPERTIES -------------------------
instance_variables: instance_variables:
- var_name: CityScapesPalette - var_name: CityScapesPalette
doc: > doc: >
Converts the image to a segmentated map using tags provided by the blueprint library. Used by <b>sensor.camera.semantic_segmentation</b>. Converts the image to a segmentated map using tags provided by the blueprint library. Used by <b>sensor.camera.semantic_segmentation</b>.
- var_name: Depth - var_name: Depth
doc: > doc: >
Converts the image to a linear depth map. Used by <b>sensor.camera.depth</b>. Converts the image to a linear depth map. Used by <b>sensor.camera.depth</b>.
- var_name: LogarithmicDepth - var_name: LogarithmicDepth
doc: > doc: >
Converts the image to a depth map using a logarithmic scale, leading to better precision for small distances at the expense of losing it when further away. Converts the image to a depth map using a logarithmic scale, leading to better precision for small distances at the expense of losing it when further away.
- var_name: Raw - var_name: Raw
doc: > doc: >
No changes applied to the image. No changes applied to the image.
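The linear and logarithmic depth conversions described above can be sketched in plain Python. This is a minimal illustration, not CARLA's actual implementation; the `5.70378` scaling constant is an assumption borrowed from CARLA's legacy image-converter utilities, and the function names are hypothetical.

```python
import math

def depth_to_grayscale(normalized_depth: float) -> int:
    # Linear conversion: clamp to [0, 1] and scale to 8-bit grayscale.
    return round(max(0.0, min(1.0, normalized_depth)) * 255)

def logarithmic_depth_to_grayscale(normalized_depth: float) -> int:
    # Logarithmic conversion: spends more of the 0-255 range on small
    # distances, at the cost of resolution far away.
    d = max(1e-6, min(1.0, normalized_depth))
    value = 1.0 + math.log(d) / 5.70378  # maps (0, 1] into (-inf, 1]
    return round(max(0.0, min(1.0, value)) * 255)
```

Note how two nearby depths that map to the same linear grayscale value can still be told apart after the logarithmic conversion.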
- class_name: Image
parent: carla.SensorData
@ -63,7 +64,7 @@
- var_name: height
type: int
doc: >
Image height in pixels.
- var_name: width
type: int
doc: >
@ -84,12 +85,12 @@
- param_name: path
type: str
doc: >
Path that will contain the image.
- param_name: color_converter
type: carla.ColorConverter
default: Raw
doc: >
Default <b>Raw</b> will make no changes.
doc: >
Saves the image to disk using a converter pattern stated as `color_converter`. The default conversion pattern is <b>Raw</b> that will make no changes to the image.
# --------------------------------------
@ -122,7 +123,7 @@
- var_name: channels
type: int
doc: >
Number of lasers shot.
- var_name: horizontal_angle
type: float
doc: >
@ -130,7 +131,7 @@
- var_name: raw_data
type: bytes
doc: >
List of 3D points received as data.
# - METHODS ----------------------------
methods:
- def_name: save_to_disk
@ -138,7 +139,7 @@
- param_name: path
type: str
doc: >
Saves the point cloud to disk as a <b>.ply</b> file describing data from 3D scanners. The files generated are ready to be used within [MeshLab](http://www.meshlab.net/), an open-source system for processing said files. Just take into account that axes may differ from Unreal Engine and so, need to be reallocated.
# --------------------------------------
- def_name: get_point_count
params:
@ -170,7 +171,7 @@
parent: carla.SensorData
# - DESCRIPTION ------------------------
doc: >
Class that defines a collision data for <b>sensor.other.collision</b>. The sensor creates one of these for every collision detected, which may be many for one simulation step. Learn more about this [here](ref_sensors.md#collision-detector).
# - PROPERTIES -------------------------
instance_variables:
- var_name: actor
@ -180,7 +181,7 @@
- var_name: other_actor
type: carla.Actor
doc: >
The second actor involved in the collision.
- var_name: normal_impulse
type: carla.Vector3D
doc: >
@ -190,13 +191,13 @@
parent: carla.SensorData
# - DESCRIPTION ------------------------
doc: >
Class that defines the obstacle data for <b>sensor.other.obstacle</b>. Learn more about this [here](ref_sensors.md#obstacle-detector).
# - PROPERTIES -------------------------
instance_variables:
- var_name: actor
type: carla.Actor
doc: >
The actor the sensor is attached to.
- var_name: other_actor
type: carla.Actor
doc: >
@ -204,7 +205,7 @@
- var_name: distance
type: float
doc: >
Distance between `actor` and `other_actor`.
# - METHODS ----------------------------
methods:
- def_name: __str__
@ -214,7 +215,7 @@
parent: carla.SensorData
# - DESCRIPTION ------------------------
doc: >
Class that defines lane invasions for <b>sensor.other.lane_invasion</b>. It works only client-side and is dependent on OpenDRIVE to provide reliable information. The sensor creates one of these every time there is a lane invasion, which may be more than once per simulation step. Learn more about this [here](ref_sensors.md#lane-invasion-detector).
# - PROPERTIES -------------------------
instance_variables:
- var_name: actor
@ -258,7 +259,7 @@
parent: carla.SensorData
# - DESCRIPTION ------------------------
doc: >
Class that defines the data registered by a <b>sensor.other.imu</b>, regarding the sensor's transformation according to the current carla.World. It essentially acts as an accelerometer, a gyroscope and a compass.
# - PROPERTIES -------------------------
instance_variables:
- var_name: accelerometer
@ -288,12 +289,12 @@
- var_name: raw_data
type: bytes
doc: >
The complete information of the carla.RadarDetection the radar has registered.
# - METHODS ----------------------------
methods:
- def_name: get_detection_count
doc: >
Retrieves the number of entries generated, same as **<font color="#7fb800">\__str__()</font>**.
# --------------------------------------
- def_name: __getitem__
params:
@ -317,7 +318,7 @@
- class_name: RadarDetection
# - DESCRIPTION ------------------------
doc: >
Data contained inside a carla.RadarMeasurement. Each of these represents one of the points in the cloud that a <b>sensor.other.radar</b> registers and contains the distance, angle and velocity in relation to the radar.
# - PROPERTIES -------------------------
instance_variables:
- var_name: altitude
@ -343,4 +344,129 @@
methods:
- def_name: __str__
# --------------------------------------
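A RadarDetection's altitude, azimuth and depth form a point in spherical coordinates; client scripts commonly convert each detection to a local Cartesian point before plotting or filtering. A minimal sketch (the function name and the exact axis convention are assumptions for illustration, not taken from the CARLA source):

```python
import math

def radar_detection_to_point(altitude: float, azimuth: float, depth: float):
    # Spherical (angles in radians, depth in metres) to local x/y/z,
    # with x pointing forward along the radar axis.
    x = depth * math.cos(altitude) * math.cos(azimuth)
    y = depth * math.cos(altitude) * math.sin(azimuth)
    z = depth * math.sin(altitude)
    return (x, y, z)
```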
- class_name: RssResponse
parent: carla.SensorData
# - DESCRIPTION ------------------------
doc: >
Class that contains the output of a carla.RssSensor. This is the result of the RSS calculations performed for the parent vehicle of the sensor.
A carla.RssRestrictor will use the data to modify the carla.VehicleControl of the vehicle.
# - PROPERTIES -------------------------
instance_variables:
- var_name: response_valid
type: bool
doc: >
States if the response is valid. It is __False__ if calculations failed or an exception occurred.
# --------------------------------------
- var_name: proper_response
type: <a href="https://intel.github.io/ad-rss-lib/doxygen/ad_rss/structad_1_1rss_1_1state_1_1ProperResponse.html">libad_rss_python.ProperResponse</a>
doc: >
The proper response that the RSS calculated for the vehicle.
# --------------------------------------
- var_name: acceleration_restriction
type: <a href="https://intel.github.io/ad-rss-lib/doxygen/ad_rss/structad_1_1rss_1_1world_1_1AccelerationRestriction.html">libad_rss_python.AccelerationRestriction</a>
doc: >
Acceleration restrictions to be applied, according to the RSS calculation.
# --------------------------------------
- var_name: rss_state_snapshot
type: <a href="https://intel.github.io/ad-rss-lib/doxygen/ad_rss/structad_1_1rss_1_1state_1_1RssStateSnapshot.html">libad_rss_python.RssStateSnapshot</a>
doc: >
Detailed RSS states at the current moment in time.
# --------------------------------------
- var_name: ego_dynamics_on_route
type: carla.RssEgoDynamicsOnRoute
doc: >
Current ego vehicle dynamics regarding the route.
# - METHODS ----------------------------
methods:
- def_name: __str__
# --------------------------------------
- class_name: RssEgoDynamicsOnRoute
# - DESCRIPTION ------------------------
doc: >
Part of the data contained inside a carla.RssResponse describing the state of the vehicle. The parameters include its current dynamics, and how it is heading regarding the target route.
# - PROPERTIES -------------------------
instance_variables:
- var_name: ego_speed
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Speed.html">libad_physics_python.Speed</a>
doc: >
The ego vehicle's speed.
# --------------------------------------
- var_name: min_stopping_distance
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Distance.html">libad_physics_python.Distance</a>
doc: >
The current minimum stopping distance.
# --------------------------------------
- var_name: ego_center
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_map_access/apidoc/html/structad_1_1map_1_1point_1_1ENUPoint.html">libad_map_access_python.ENUPoint</a>
doc: >
The considered ENU position of the ego vehicle.
# --------------------------------------
- var_name: ego_heading
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_map_access/apidoc/html/classad_1_1map_1_1point_1_1ENUHeading.html">libad_map_access_python.ENUHeading</a>
doc: >
The considered heading of the ego vehicle.
# --------------------------------------
- var_name: ego_center_within_route
type: bool
doc: >
States if the ego vehicle's center is within the route.
# --------------------------------------
- var_name: crossing_border
type: bool
doc: >
States if the vehicle is already crossing one of the lane borders.
# --------------------------------------
- var_name: route_heading
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_map_access/apidoc/html/classad_1_1map_1_1point_1_1ENUHeading.html">libad_map_access_python.ENUHeading</a>
doc: >
The considered heading of the route.
# --------------------------------------
- var_name: route_nominal_center
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_map_access/apidoc/html/structad_1_1map_1_1point_1_1ENUPoint.html">libad_map_access_python.ENUPoint</a>
doc: >
The considered nominal center of the current route.
# --------------------------------------
- var_name: heading_diff
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_map_access/apidoc/html/classad_1_1map_1_1point_1_1ENUHeading.html">libad_map_access_python.ENUHeading</a>
doc: >
The considered heading difference towards the route.
# --------------------------------------
- var_name: route_speed_lat
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Speed.html">libad_physics_python.Speed</a>
doc: >
The ego vehicle's speed component _lat_ regarding the route.
# --------------------------------------
- var_name: route_speed_lon
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Speed.html">libad_physics_python.Speed</a>
doc: >
The ego vehicle's speed component _lon_ regarding the route.
# --------------------------------------
- var_name: route_accel_lat
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Acceleration.html">libad_physics_python.Acceleration</a>
doc: >
The ego vehicle's acceleration component _lat_ regarding the route.
# --------------------------------------
- var_name: route_accel_lon
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Acceleration.html">libad_physics_python.Acceleration</a>
doc: >
The ego vehicle's acceleration component _lon_ regarding the route.
# --------------------------------------
- var_name: avg_route_accel_lat
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Acceleration.html">libad_physics_python.Acceleration</a>
doc: >
The ego vehicle's acceleration component _lat_ regarding the route, smoothed by an average filter.
# --------------------------------------
- var_name: avg_route_accel_lon
type: <a href="https://ad-map-access.readthedocs.io/en/latest/ad_physics/apidoc/html/classad_1_1physics_1_1Acceleration.html">libad_physics_python.Acceleration</a>
doc: >
The ego vehicle's acceleration component _lon_ regarding the route, smoothed by an average filter.
# - METHODS ----------------------------
methods:
- def_name: __str__
# --------------------------------------
...
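To illustrate how a carla.RssRestrictor consumes a carla.RssResponse, here is a deliberately simplified, self-contained sketch. The `VehicleControl` stand-in and the two boolean flags are hypothetical simplifications of the real carla.VehicleControl and the AccelerationRestriction/ProperResponse structures; the actual restrictor works on continuous acceleration ranges, not booleans.

```python
from dataclasses import dataclass

@dataclass
class VehicleControl:
    # Stand-in for carla.VehicleControl (hypothetical, for illustration).
    throttle: float = 0.0
    brake: float = 0.0
    steer: float = 0.0

def restrict_control(control: VehicleControl,
                     longitudinal_accel_allowed: bool,
                     brake_required: bool) -> VehicleControl:
    # Cut the throttle when accelerating is not allowed, and force
    # full braking when the proper response demands it.
    restricted = VehicleControl(control.throttle, control.brake, control.steer)
    if not longitudinal_accel_allowed:
        restricted.throttle = 0.0
    if brake_required:
        restricted.brake = max(restricted.brake, 1.0)
    return restricted
```

The steering command passes through untouched in this sketch; the real restrictor may also limit lateral movement according to the lateral response.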


@ -97,6 +97,6 @@ CARLA specific code is distributed under MIT License.
CARLA specific assets are distributed under CC-BY License.
The ad-rss-lib library compiled and linked by the [RSS Integration build variant](Docs/adv_rss.md) introduces LGPL-2.1-only License.
Note that UE4 itself follows its own license terms.


@ -26,6 +26,7 @@ nav:
- Advanced steps:
- 'Recorder': 'adv_recorder.md'
- 'Rendering options': 'adv_rendering_options.md'
- 'RSS sensor': 'adv_rss.md'
- 'Synchrony and time-step': 'adv_synchrony_timestep.md'
- 'Traffic Manager': 'adv_traffic_manager.md'
- References: