diff --git a/Docs/adv_rss.md b/Docs/adv_rss.md
new file mode 100644
index 000000000..d14527c2c
--- /dev/null
+++ b/Docs/adv_rss.md
@@ -0,0 +1,128 @@
+# RSS Sensor
+
+CARLA integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in the client library. This feature allows users to investigate behaviours of RSS without having to implement anything. CARLA will take care of providing the input, and applying the output to the AD systems on the fly.
+
+* [__Overview__](#overview)
+* [__Compilation__](#compilation)
+ * [Dependencies](#dependencies)
+ * [Build](#build)
+* [__Current state__](#current-state)
+ * [RssSensor](#rsssensor)
+ * [RssRestrictor](#rssrestrictor)
+
+!!! Important
+ This feature is a work in progress. Right now, it is only available for the Linux build.
+
+---
+## Overview
+
+The RSS library implements a mathematical model for safety assurance. It receives sensor information, and provides restrictions to the controllers of a vehicle. To sum up, the RSS module uses the sensor data to define __situations__. A situation describes the state of the ego vehicle with an element of the environment. For each situation, safety checks are made, and a proper response is calculated. The overall response is the result of all of them combined. For specific information on the library, read the [documentation](https://intel.github.io/ad-rss-lib/), especially the [Background section](https://intel.github.io/ad-rss-lib/ad_rss/Overview/).
+
+This is implemented in CARLA using two elements.
+
+* __RssSensor__ is in charge of the situation analysis, and response generation using the *ad-rss-lib*.
+* __RssRestrictor__ applies the response by restricting the commands of the vehicle.
+
+The following image sketches the integration of __RSS__ into the CARLA architecture.
+
+![Integrate RSS into CARLA](img/rss_carla_integration_architecture.png)
+
+__1. The server.__
+ __-__ Sends a camera image to the client. (Only if the client needs visualization).
+ __-__ Provides the RssSensor with world data.
+ __-__ Sends a physics model of the vehicle to the RssRestrictor. (Only if the default values are overwritten).
+__2. The client.__
+ __-__ Provides the *RssSensor* with some [parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) to be considered.
+ __-__ Sends to the *RssRestrictor* an initial [carla.VehicleControl](python_api.md#carla.VehicleControl).
+__3. The RssSensor.__
+ __-__ Uses the *ad-rss-lib* to extract situations, do safety checks, and generate a response.
+ __-__ Sends the *RssRestrictor* a response containing the proper response and acceleration restrictions to be applied.
+ __-__ Asks the server to do some debug drawings to visualize the results of the calculations.
+__4. The RssRestrictor__
+ __-__ If the client asks for it, applies the response to the [carla.VehicleControl](python_api.md#carla.VehicleControl), and returns the resulting one.
+
+!!! Important
+ Debug drawings can delay the RSS response, so they should be disabled during automated RSS evaluations. Use [carla.RssVisualizationMode](python_api.md#carla.RssVisualizationMode) to change the visualization settings.
+
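+The following fragment sketches this workflow on the client side. It is a minimal, hypothetical example rather than code taken from the repository. It assumes a simulator is running, the RSS build of the client library is installed, and `vehicle` is an already spawned [carla.Vehicle](python_api.md#carla.Vehicle).
+
+```py
+import carla
+
+client = carla.Client('localhost', 2000)
+world = client.get_world()
+
+# Spawn the RssSensor attached to the ego vehicle.
+# `vehicle` is assumed to be an already spawned carla.Vehicle.
+blueprint = world.get_blueprint_library().find('sensor.other.rss')
+sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=vehicle)
+
+# Debug drawings delay the response, so disable them for automated runs.
+sensor.visualization_mode = carla.RssVisualizationMode.Off
+
+# Keep the responses around for the RssRestrictor to consume.
+# A real client would only keep the latest one.
+responses = []
+sensor.listen(lambda response: responses.append(response))
+```
+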
+[![RSS sensor in CARLA](img/rss_carla_integration.png)](https://www.youtube.com/watch?v=UxKPXPT2T8Q)
+
+Visualization of the RssSensor results.
+
+---
+## Compilation
+
+The RSS integration has to be built apart from the rest of CARLA. The __ad-rss-lib__ comes with an LGPL-2.1 open-source license that creates a licensing conflict, so it has to be linked statically into *libCarla*.
+
+As a reminder, the feature is only available for the Linux build so far.
+
+### Dependencies
+
+There are additional prerequisites required for building RSS and its dependencies. Take a look at the [official documentation](https://intel.github.io/ad-rss-lib/BUILDING) to know more about this.
+
+Dependencies provided by Ubuntu (>= 16.04).
+```sh
+sudo apt-get install libgtest-dev libpython-dev libpugixml-dev libproj-dev libtbb-dev
+```
+
+The dependencies are built using [colcon](https://colcon.readthedocs.io/en/released/user/installation.html), so it has to be installed.
+```sh
+pip3 install --user -U colcon-common-extensions
+```
+
+There are some additional dependencies for the Python bindings.
+```sh
+sudo apt-get install castxml
+pip install --user pygccxml
+pip install --user https://bitbucket.org/ompl/pyplusplus/get/1.8.1.zip
+```
+
+### Build
+
+Once this is done, the full set of dependencies and RSS components can be built.
+
+* Compile LibCarla to work with RSS.
+
+```sh
+make LibCarla.client.rss
+```
+
+* Compile the PythonAPI to include the RSS feature.
+
+```sh
+make PythonAPI.rss
+```
+
+* As an alternative, a package can be built directly.
+```sh
+make package.rss
+```
+
+---
+## Current state
+
+### RssSensor
+
+[__carla.RssSensor__](python_api.md#carla.RssSensor) fully supports the [ad-rss-lib v3.0.0 feature set](https://intel.github.io/ad-rss-lib/RELEASE_NOTES_AND_DISCLAIMERS), including intersections and the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature.
+
+So far, the server provides the sensor with ground truth data of the surroundings, including the state of other vehicles and traffic lights. Future improvements of this feature will add pedestrians to the equation, and more information from the OpenDRIVE map, among others.
+
+### RssRestrictor
+
+When the client calls for it, the [__carla.RssRestrictor__](python_api.md#carla.RssRestrictor) will modify the vehicle control to best reach the accelerations or decelerations required by a given response.
+
+Due to the structure of [carla.VehicleControl](python_api.md#carla.VehicleControl) objects, the restrictions applied have certain limitations. These controls include `throttle`, `brake` and `steering` values. However, due to car physics and the simple control options, the restrictions might not be met exactly. The restriction intervenes in the lateral direction simply by counter steering towards the parallel lane direction. The brake will be activated if a deceleration is requested by RSS. The deceleration achieved depends on the vehicle mass and the brake torques provided by the [carla.Vehicle](python_api.md#carla.Vehicle).
+
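+As an illustration, hereunder is a minimal sketch of a single restriction step. It is a hypothetical fragment, not code from the repository. It assumes `vehicle` is the ego [carla.Vehicle](python_api.md#carla.Vehicle), and `response` holds the latest [carla.RssResponse](python_api.md#carla.RssResponse) retrieved by the sensor.
+
+```py
+# Restrict a control command with the latest RSS response.
+# `vehicle` and `response` are assumed to exist already.
+restrictor = carla.RssRestrictor()
+control = vehicle.get_control()
+
+if response.response_valid:
+    control = restrictor.restrict_vehicle_control(
+        control,
+        response.acceleration_restriction,
+        response.ego_dynamics_on_route,
+        vehicle.get_physics_control())
+
+vehicle.apply_control(control)
+```
+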
+!!! Note
+    In an automated vehicle controller, it might be possible to adapt the planned trajectory to the restrictions. A fast control loop (>1 kHz) can be used to ensure these are followed.
+
+---
+
+That sets the basics regarding the RSS sensor in CARLA. Find out more about the specific attributes and parameters in the [sensor reference](ref_sensors.md#rss-sensor).
+
+Open CARLA and mess around for a while. If there are any doubts, feel free to post them in the forum.
+
+
diff --git a/Docs/img/rss_carla_integration_architecture.png b/Docs/img/rss_carla_integration_architecture.png
new file mode 100644
index 000000000..256d105b8
Binary files /dev/null and b/Docs/img/rss_carla_integration_architecture.png differ
diff --git a/Docs/index.md b/Docs/index.md
index 7dcc308af..d545e4ea8 100644
--- a/Docs/index.md
+++ b/Docs/index.md
@@ -63,6 +63,8 @@ CARLA forum
— Register the events in a simulation and play it again.
[__Rendering options__](adv_rendering_options.md)
— From quality settings to no-render or off-screen modes.
+ [__RSS sensor__](adv_rss.md)
+ — An implementation of RSS in the CARLA client library.
[__Synchrony and time-step__](adv_synchrony_timestep.md)
— Client-server communication and simulation time.
[__Traffic Manager__](adv_traffic_manager.md)
diff --git a/Docs/python_api.md b/Docs/python_api.md
index e7e0642b9..41dfc1fa9 100644
--- a/Docs/python_api.md
+++ b/Docs/python_api.md
@@ -1150,21 +1150,158 @@ Parses the axis' orientations to string.
---
+## carla.RssEgoDynamicsOnRoute
+Part of the data contained inside a [carla.RssResponse](#carla.RssResponse) describing the state of the vehicle. The parameters include its current dynamics, and how it is heading regarding the target route.
+
+Instance Variables
+- **ego_speed** (_libad_physics_python.Speed_)
+The ego vehicle's speed.
+- **min_stopping_distance** (_libad_physics_python.Distance_)
+The current minimum stopping distance.
+- **ego_center** (_libad_map_access_python.ENUPoint_)
+The considered ENU position of the ego vehicle.
+- **ego_heading** (_libad_map_access_python.ENUHeading_)
+The considered heading of the ego vehicle.
+- **ego_center_within_route** (_bool_)
+States if the ego vehicle's center is within the route.
+- **crossing_border** (_bool_)
+States if the vehicle is already crossing one of the lane borders.
+- **route_heading** (_libad_map_access_python.ENUHeading_)
+The considered heading of the route.
+- **route_nominal_center** (_libad_map_access_python.ENUPoint_)
+The considered nominal center of the current route.
+- **heading_diff** (_libad_map_access_python.ENUHeading_)
+The considered heading diff towards the route.
+- **route_speed_lat** (_libad_physics_python.Speed_)
+The ego vehicle's speed component _lat_ regarding the route.
+- **route_speed_lon** (_libad_physics_python.Speed_)
+The ego vehicle's speed component _lon_ regarding the route.
+- **route_accel_lat** (_libad_physics_python.Acceleration_)
+The ego vehicle's acceleration component _lat_ regarding the route.
+- **route_accel_lon** (_libad_physics_python.Acceleration_)
+The ego vehicle's acceleration component _lon_ regarding the route.
+- **avg_route_accel_lat** (_libad_physics_python.Acceleration_)
+The ego vehicle's acceleration component _lat_ regarding the route, smoothed with an average filter.
+- **avg_route_accel_lon** (_libad_physics_python.Acceleration_)
+The ego vehicle's acceleration component _lon_ regarding the route, smoothed with an average filter.
+
+Methods
+
+Dunder methods
+- **\__str__**(**self**)
+
+---
+
+## carla.RssResponse
+Inherited from _[carla.SensorData](#carla.SensorData)_
Class that contains the output of a [carla.RssSensor](#carla.RssSensor). This is the result of the RSS calculations performed for the parent vehicle of the sensor.
+
+A [carla.RssRestrictor](#carla.RssRestrictor) will use the data to modify the [carla.VehicleControl](#carla.VehicleControl) of the vehicle.
+
+
Instance Variables
+- **response_valid** (_bool_)
+States if the response is valid. It is __False__ if calculations failed or an exception occurred.
+- **proper_response** (_libad_rss_python.ProperResponse_)
+The proper response that the RSS calculated for the vehicle.
+- **acceleration_restriction** (_libad_rss_python.AccelerationRestriction_)
+Acceleration restrictions to be applied, according to the RSS calculation.
+- **rss_state_snapshot** (_libad_rss_python.RssStateSnapshot_)
+Detailed RSS states at the current moment in time.
+- **ego_dynamics_on_route** (_[carla.RssEgoDynamicsOnRoute](#carla.RssEgoDynamicsOnRoute)_)
+Current ego vehicle dynamics regarding the route.
+
+Methods
+
+Dunder methods
+- **\__str__**(**self**)
+
+---
+
+## carla.RssRestrictor
+These objects apply restrictions to a [carla.VehicleControl](#carla.VehicleControl). They are part of the CARLA implementation of the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib). This class works hand in hand with a [rss sensor](ref_sensors.md#rss-sensor), which provides the data of the restrictions to be applied.
+
+Methods
+- **restrict_vehicle_control**(**self**, **vehicle_control**, **restriction**, **ego_dynamics_on_route**, **vehicle_physics**)
+Applies the safety restrictions given by a [carla.RssSensor](#carla.RssSensor) to a [carla.VehicleControl](#carla.VehicleControl).
+ - **Parameters:**
+ - `vehicle_control` (_[carla.VehicleControl](#carla.VehicleControl)_) – The input vehicle control to be restricted.
+ - `restriction` (_libad_rss_python.AccelerationRestriction_) – Part of the response generated by the sensor. Contains restrictions to be applied to the acceleration of the vehicle.
+ - `ego_dynamics_on_route` (_[carla.RssEgoDynamicsOnRoute](#carla.RssEgoDynamicsOnRoute)_) – Part of the response generated by the sensor. Contains dynamics and heading of the vehicle regarding its route.
+    - `vehicle_physics` (_[carla.VehiclePhysicsControl](#carla.VehiclePhysicsControl)_) – The current physics of the vehicle. Used to apply the restrictions properly.
+ - **Return:** _[carla.VehicleControl](#carla.VehicleControl)_
+
+---
+
+## carla.RssRoadBoundariesMode
+Enum declaration used in [carla.RssSensor](#carla.RssSensor) to enable or disable the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. In summary, this feature considers the road boundaries as virtual objects. The minimum safety distance check is applied to these virtual walls, in order to make sure the vehicle does not drive off the road.
+
+Instance Variables
+- **On**
+Enables the _stay on road_ feature.
+- **Off**
+Disables the _stay on road_ feature.
+
+---
+
+## carla.RssSensor
+Inherited from _[carla.Sensor](#carla.Sensor)_
This sensor works a bit differently than the rest. Take a look at the [specific documentation](adv_rss.md), and the [rss sensor reference](ref_sensors.md#rss-sensor) to gain a full understanding of it.
+
+The RSS sensor uses world information, and an [RSS library](https://github.com/intel/ad-rss-lib) to make safety checks on a vehicle. The output retrieved by the sensor is a [carla.RssResponse](#carla.RssResponse). This will be used by a [carla.RssRestrictor](#carla.RssRestrictor) to modify a [carla.VehicleControl](#carla.VehicleControl) before applying it to a vehicle.
+
+
Instance Variables
+- **ego_vehicle_dynamics** (_libad_rss_python.RssDynamics_)
+States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the ego vehicle.
+- **other_vehicle_dynamics** (_libad_rss_python.RssDynamics_)
+States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the rest of vehicles.
+- **road_boundaries_mode** (_[carla.RssRoadBoundariesMode](#carla.RssRoadBoundariesMode)_)
+Switches the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. It is set to __On__ by default.
+- **visualization_mode** (_[carla.RssVisualizationMode](#carla.RssVisualizationMode)_)
+Sets the visualization of the RSS on the server side. It is set to __All__ by default. These drawings may delay the RSS response, so it is best to set this to __Off__ when evaluating RSS performance.
+- **routing_targets** (_vector<[carla.Transform](#carla.Transform)>_)
+The current list of targets considered to route the vehicle. If no routing targets are defined, a route is generated at random.
+
+Methods
+- **append_routing_target**(**self**, **routing_target**)
+Appends a new target position to the current route of the vehicle.
+ - **Parameters:**
+ - `routing_target` (_[carla.Transform](#carla.Transform)_) – New target point for the route. Choose these after the intersections to force the route to take the desired turn.
+- **reset_routing_targets**(**self**)
+Erases the targets that have been appended to the route.
+- **drop_route**(**self**)
+Discards the current route. If there are targets remaining in **routing_targets**, creates a new route using those. Otherwise, a new route is created at random.
+
+Dunder methods
+- **\__str__**(**self**)
+
+---
+
+## carla.RssVisualizationMode
+Enum declaration used to state the visualization of the RSS calculations on the server side. Depending on its value, the [carla.RssSensor](#carla.RssSensor) will use a [carla.DebugHelper](#carla.DebugHelper) to draw different elements. These drawings take some time and might delay the RSS responses. It is best to disable them when evaluating RSS performance.
+
+Instance Variables
+- **Off**
+- **RouteOnly**
+- **VehicleStateOnly**
+- **VehicleStateAndRoute**
+- **All**
+
+---
+
## carla.Sensor
-Inherited from _[carla.Actor](#carla.Actor)_
Sensors compound a specific family of actors quite diverse and unique. They are normally spawned as attachment/sons of a vehicle (take a look at [carla.World](#carla.World) to learn about actor spawning). Sensors are thoroughly designed to retrieve different types of data that they are listening to. The data they receive is shaped as different subclasses inherited from [carla.SensorData](#carla.SensorData) (depending on the sensor).
-
- Most sensors can be divided in two groups: those receiving data on every tick (cameras, point clouds and some specific sensors) and those who only receive under certain circumstances (trigger detectors). CARLA provides a specific set of sensors and their blueprint can be found in [carla.BlueprintLibrary](#carla.BlueprintLibrary). All the information on their preferences and settlement can be found [here](ref_sensors.md), but the list of those available in CARLA so far goes as follow:
- Receive data on every tick:
- - [Gnss sensor](ref_sensors.md#gnss-sensor).
- - [IMU sensor](ref_sensors.md#imu-sensor).
- - [Radar](ref_sensors.md#radar-sensor).
- - [Depth camera](ref_sensors.md#depth-camera).
- - [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
- - [RGB camera](ref_sensors.md#rgb-camera).
- - [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera).
- Only receive data when triggered:
- - [Collision detector](ref_sensors.md#collision-detector).
- - [Lane invasion detector](ref_sensors.md#lane-invasion-detector).
+
Inherited from _[carla.Actor](#carla.Actor)_
Sensors compound a specific family of actors quite diverse and unique. They are normally spawned as attachment/sons of a vehicle (take a look at [carla.World](#carla.World) to learn about actor spawning). Sensors are thoroughly designed to retrieve different types of data that they are listening to. The data they receive is shaped as different subclasses inherited from [carla.SensorData](#carla.SensorData) (depending on the sensor).
+
+ Most sensors can be divided in two groups: those receiving data on every tick (cameras, point clouds and some specific sensors) and those who only receive under certain circumstances (trigger detectors). CARLA provides a specific set of sensors and their blueprint can be found in [carla.BlueprintLibrary](#carla.BlueprintLibrary). All the information on their preferences and settlement can be found [here](ref_sensors.md), but the list of those available in CARLA so far goes as follows.
+ Receive data on every tick.
+ - [Depth camera](ref_sensors.md#depth-camera).
+ - [Gnss sensor](ref_sensors.md#gnss-sensor).
+ - [IMU sensor](ref_sensors.md#imu-sensor).
+ - [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
+ - [Radar](ref_sensors.md#radar-sensor).
+ - [RGB camera](ref_sensors.md#rgb-camera).
+ - [RSS sensor](ref_sensors.md#rss-sensor).
+ - [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera).
+ Only receive data when triggered.
+ - [Collision detector](ref_sensors.md#collision-detector).
+ - [Lane invasion detector](ref_sensors.md#lane-invasion-detector).
- [Obstacle detector](ref_sensors.md#obstacle-detector).
Instance Variables
@@ -1185,15 +1322,16 @@ Commands the sensor to stop listening for data.
---
## carla.SensorData
-Base class for all the objects containing data generated by a [carla.Sensor](#carla.Sensor). This objects should be the argument of the function said sensor is listening to, in order to work with them. Each of these sensors needs for a specific type of sensor data. The relation between available sensors and their corresponding data goes like:
- - Cameras (RGB, depth and semantic segmentation): [carla.Image](#carla.Image).
- - Collision detector: [carla.CollisionEvent](#carla.CollisionEvent).
- - Gnss detector: [carla.GnssMeasurement](#carla.GnssMeasurement).
- - IMU detector: [carla.IMUMeasurement](#carla.IMUMeasurement).
- - Lane invasion detector: [carla.LaneInvasionEvent](#carla.LaneInvasionEvent).
- - Lidar raycast: [carla.LidarMeasurement](#carla.LidarMeasurement).
- - Obstacle detector: [carla.ObstacleDetectionEvent](#carla.ObstacleDetectionEvent).
+Base class for all the objects containing data generated by a [carla.Sensor](#carla.Sensor). These objects should be the argument of the function said sensor is listening to, in order to work with them. Each of these sensors needs a specific type of sensor data. Hereunder is a list of the sensors and their corresponding data.
+ - Cameras (RGB, depth and semantic segmentation): [carla.Image](#carla.Image).
+ - Collision detector: [carla.CollisionEvent](#carla.CollisionEvent).
+ - Gnss detector: [carla.GnssMeasurement](#carla.GnssMeasurement).
+ - IMU detector: [carla.IMUMeasurement](#carla.IMUMeasurement).
+ - Lane invasion detector: [carla.LaneInvasionEvent](#carla.LaneInvasionEvent).
+ - Lidar raycast: [carla.LidarMeasurement](#carla.LidarMeasurement).
+ - Obstacle detector: [carla.ObstacleDetectionEvent](#carla.ObstacleDetectionEvent).
- Radar detector: [carla.RadarMeasurement](#carla.RadarMeasurement).
+ - RSS sensor: [carla.RssResponse](#carla.RssResponse).
Instance Variables
- **frame** (_int_)
diff --git a/Docs/ref_sensors.md b/Docs/ref_sensors.md
index bffd52f87..4f9309155 100644
--- a/Docs/ref_sensors.md
+++ b/Docs/ref_sensors.md
@@ -1,15 +1,16 @@
# Sensors reference
- * [__Collision detector__](#collision-detector)
- * [__Depth camera__](#depth-camera)
- * [__GNSS sensor__](#gnss-sensor)
- * [__IMU sensor__](#imu-sensor)
- * [__Lane invasion detector__](#lane-invasion-detector)
- * [__Lidar raycast sensor__](#lidar-raycast-sensor)
- * [__Obstacle detector__](#obstacle-detector)
- * [__Radar sensor__](#radar-sensor)
- * [__RGB camera__](#rgb-camera)
- * [__Semantic segmentation camera__](#semantic-segmentation-camera)
+ * [__Collision detector__](#collision-detector)
+ * [__Depth camera__](#depth-camera)
+ * [__GNSS sensor__](#gnss-sensor)
+ * [__IMU sensor__](#imu-sensor)
+ * [__Lane invasion detector__](#lane-invasion-detector)
+ * [__Lidar raycast sensor__](#lidar-raycast-sensor)
+ * [__Obstacle detector__](#obstacle-detector)
+ * [__Radar sensor__](#radar-sensor)
+ * [__RGB camera__](#rgb-camera)
+ * [__RSS sensor__](#rss-sensor)
+ * [__Semantic segmentation camera__](#semantic-segmentation-camera)
---
@@ -18,7 +19,7 @@
* __Blueprint:__ sensor.other.collision
* __Output:__ [carla.CollisionEvent](python_api.md#carla.CollisionEvent) per collision.
-This sensor registers an event each time its parent actor collisions against something in the world. Several collisions may be detected during a single simulation step.
+This sensor registers an event each time its parent actor collides against something in the world. Several collisions may be detected during a single simulation step.
To ensure that collisions with any kind of object are detected, the server creates "fake" actors for elements such as buildings or bushes so the semantic tag can be retrieved to identify it.
Collision detectors do not have any configurable attribute.
@@ -63,9 +64,9 @@ Collision detectors do not have any configurable attribute.
## Depth camera
* __Blueprint:__ sensor.camera.depth
-* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
+* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
-The camera provides a raw data of the scene codifying the distance of each pixel to the camera (also known as **depth buffer** or **z-buffer**) to create a depth map of the elements.
+The camera provides raw data of the scene codifying the distance of each pixel to the camera (also known as **depth buffer** or **z-buffer**) to create a depth map of the elements.
The image codifies depth value per pixel using 3 channels of the RGB color space, from less to more significant bytes: _R -> G -> B_. The actual distance in meters can be
decoded with:
@@ -75,8 +76,8 @@ normalized = (R + G * 256 + B * 256 * 256) / (256 * 256 * 256 - 1)
in_meters = 1000 * normalized
```
-The output [carla.Image](python_api.md#carla.Image) should then be saved to disk using a [carla.colorConverter](python_api.md#carla.ColorConverter) that will turn the distance stored in RGB channels into a __[0,1]__ float containing the distance and then translate this to grayscale.
-There are two options in [carla.colorConverter](python_api.md#carla.ColorConverter) to get a depth view: __Depth__ and __Logaritmic depth__. The precision is milimetric in both, but the logarithmic approach provides better results for closer objects.
+The output [carla.Image](python_api.md#carla.Image) should then be saved to disk using a [carla.colorConverter](python_api.md#carla.ColorConverter) that will turn the distance stored in RGB channels into a __[0,1]__ float containing the distance and then translate this to grayscale.
+There are two options in [carla.colorConverter](python_api.md#carla.ColorConverter) to get a depth view: __Depth__ and __Logarithmic depth__. The precision is millimetric in both, but the logarithmic approach provides better results for closer objects.
![ImageDepth](img/capture_depth.png)
@@ -203,7 +204,7 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert
## GNSS sensor
* __Blueprint:__ sensor.other.gnss
-* __Output:__ [carla.GNSSMeasurement](python_api.md#carla.GnssMeasurement) per step (unless `sensor_tick` says otherwise).
+* __Output:__ [carla.GNSSMeasurement](python_api.md#carla.GnssMeasurement) per step (unless `sensor_tick` says otherwise).
Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gnss) of its parent object. This is calculated by adding the metric position to an initial geo reference location defined within the OpenDRIVE map definition.
@@ -416,17 +417,17 @@ Provides measures that accelerometer, gyroscope and compass would retrieve for t
* __Blueprint:__ sensor.other.lane_invasion
* __Output:__ [carla.LaneInvasionEvent](python_api.md#carla.LaneInvasionEvent) per crossing.
-Registers an event each time its parent crosses a lane marking.
-The sensor uses road data provided by the OpenDRIVE description of the map to determine whether the parent vehicle is invading another lane by considering the space between wheels.
-However there are some things to be taken into consideration:
+Registers an event each time its parent crosses a lane marking.
+The sensor uses road data provided by the OpenDRIVE description of the map to determine whether the parent vehicle is invading another lane by considering the space between wheels.
+However there are some things to be taken into consideration:
-* Discrepancies between the OpenDRIVE file and the map will create irregularities such as crossing lanes that are not visible in the map.
-* The output retrieves a list of crossed lane markings: the computation is done in OpenDRIVE and considering the whole space between the four wheels as a whole. Thus, there may be more than one lane being crossed at the same time.
+* Discrepancies between the OpenDRIVE file and the map will create irregularities such as crossing lanes that are not visible in the map.
+* The output retrieves a list of crossed lane markings: the computation is done in OpenDRIVE and considering the whole space between the four wheels as a whole. Thus, there may be more than one lane being crossed at the same time.
This sensor does not have any configurable attribute.
!!! Important
- This sensor works fully on the client-side.
+ This sensor works fully on the client-side.
#### Output attributes
@@ -466,11 +467,11 @@ This sensor does not have any configurable attribute.
* __Blueprint:__ sensor.lidar.ray_cast
* __Output:__ [carla.LidarMeasurement](python_api.md#carla.LidarMeasurement) per step (unless `sensor_tick` says otherwise).
-This sensor simulates a rotating Lidar implemented using ray-casting.
-The points are computed by adding a laser for each channel distributed in the vertical FOV. The rotation is simulated computing the horizontal angle that the Lidar rotated in a frame. The point cloud is calculated by doing a ray-cast for each laser in every step:
+This sensor simulates a rotating Lidar implemented using ray-casting.
+The points are computed by adding a laser for each channel distributed in the vertical FOV. The rotation is simulated computing the horizontal angle that the Lidar rotated in a frame. The point cloud is calculated by doing a ray-cast for each laser in every step:
`points_per_channel_each_step = points_per_second / (FPS * channels)`
-A Lidar measurement contains a packet with all the points generated during a `1/FPS` interval. During this interval the physics are not updated so all the points in a measurement reflect the same "static picture" of the scene.
+A Lidar measurement contains a packet with all the points generated during a `1/FPS` interval. During this interval the physics are not updated so all the points in a measurement reflect the same "static picture" of the scene.
This output contains a cloud of simulation points and thus, can be iterated to retrieve a list of their [`carla.Location`](python_api.md#carla.Location):
@@ -577,10 +578,10 @@ for location in lidar_measurement:
## Obstacle detector
* __Blueprint:__ sensor.other.obstacle
-* __Output:__ [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleDetectionEvent) per obstacle (unless `sensor_tick` says otherwise).
+* __Output:__ [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleDetectionEvent) per obstacle (unless `sensor_tick` says otherwise).
-Registers an event every time the parent actor has an obstacle ahead.
-In order to anticipate obstacles, the sensor creates a capsular shape ahead of the parent vehicle and uses it to check for collisions.
+Registers an event every time the parent actor has an obstacle ahead.
+In order to anticipate obstacles, the sensor creates a capsular shape ahead of the parent vehicle and uses it to check for collisions.
To ensure that collisions with any kind of object are detected, the server creates "fake" actors for elements such as buildings or bushes so the semantic tag can be retrieved to identify it.
@@ -660,19 +661,19 @@ To ensure that collisions with any kind of object are detected, the server creat
## Radar sensor
* __Blueprint:__ sensor.other.radar
-* __Output:__ [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) per step (unless `sensor_tick` says otherwise).
+* __Output:__ [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) per step (unless `sensor_tick` says otherwise).
-The sensor creates a conic view that is translated to a 2D point map of the elements in sight and their speed regarding the sensor. This can be used to shape elements and evaluate their movement and direction. Due to the use of polar coordinates, the points will concentrate around the center of the view.
+The sensor creates a conic view that is translated to a 2D point map of the elements in sight and their speed regarding the sensor. This can be used to shape elements and evaluate their movement and direction. Due to the use of polar coordinates, the points will concentrate around the center of the view.
-Points measured are contained in [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) as an array of [carla.RadarDetection](python_api.md#carla.RadarDetection), which specifies their polar coordinates, distance and velocity.
+Points measured are contained in [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) as an array of [carla.RadarDetection](python_api.md#carla.RadarDetection), which specifies their polar coordinates, distance and velocity.
This raw data provided by the radar sensor can be easily converted to a format manageable by __numpy__:
```py
# To get a numpy [[vel, altitude, azimuth, depth],...[,,,]]:
points = np.frombuffer(radar_data.raw_data, dtype=np.dtype('f4'))
points = np.reshape(points, (len(radar_data), 4))
-```
+```
-The provided script `manual_control.py` uses this sensor to show the points being detected and paint them white when static, red when moving towards the object and blue when moving away:
+The provided script `manual_control.py` uses this sensor to show the points being detected and paint them white when static, red when moving towards the object and blue when moving away:
![ImageRadar](img/sensor_radar.png)
@@ -762,17 +763,17 @@ The provided script `manual_control.py` uses this sensor to show the points bein
* __Blueprint:__ sensor.camera.rgb
* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
-The "RGB" camera acts as a regular camera capturing images from the scene.
+The "RGB" camera acts as a regular camera capturing images from the scene.
[carla.colorConverter](python_api.md#carla.ColorConverter)
-If `enable_postprocess_effects` is enabled, a set of post-process effects is applied to the image for the sake of realism:
+If `enable_postprocess_effects` is enabled, a set of post-process effects is applied to the image for the sake of realism:
-* __Vignette:__ darkens the border of the screen.
-* __Grain jitter:__ adds some noise to the render.
-* __Bloom:__ intense lights burn the area around them.
-* __Auto exposure:__ modifies the image gamma to simulate the eye adaptation to darker or brighter areas.
-* __Lens flares:__ simulates the reflection of bright objects on the lens.
-* __Depth of field:__ blurs objects near or very far away of the camera.
+* __Vignette:__ darkens the border of the screen.
+* __Grain jitter:__ adds some noise to the render.
+* __Bloom:__ intense lights burn the area around them.
+* __Auto exposure:__ modifies the image gamma to simulate the eye adaptation to darker or brighter areas.
+* __Lens flares:__ simulates the reflection of bright objects on the lens.
+* __Depth of field:__ blurs objects near or very far away of the camera.
The `sensor_tick` tells how fast we want the sensor to capture the data.
@@ -1066,18 +1067,161 @@ Since these effects are provided by UE, please make sure to check their document
Array of BGRA 32-bit pixels. |
+
+
+---
+## RSS sensor
+
+* __Blueprint:__ sensor.other.rss
+* __Output:__ [carla.RssResponse](python_api.md#carla.RssResponse) per step (unless `sensor_tick` says otherwise).
+
+!!! Important
+ It is highly recommended to read the specific [rss documentation](adv_rss.md) before reading this.
+
+This sensor integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in CARLA. It is disabled by default in CARLA, and it has to be explicitly built in order to be used.
+
+The RSS sensor calculates the RSS state of a vehicle and retrieves the current RSS Response as sensor data. The [carla.RssRestrictor](python_api.md#carla.RssRestrictor) will use this data to adapt a [carla.VehicleControl](python_api.md#carla.VehicleControl) before applying it to a vehicle.
+
+These vehicle controls can be generated by an *Automated Driving* stack or by user input. For instance, hereunder is a fragment of code from `PythonAPI/examples/manual_control_rss.py`, where user input is modified by RSS when necessary.
+
+__1.__ Checks if the __RssSensor__ generates a valid response containing restrictions.
+__2.__ Gathers the current dynamics of the vehicle and the vehicle physics.
+__3.__ Applies restrictions to the vehicle control using the response from the RssSensor, and the current dynamics and physics of the vehicle.
+
+```py
+rss_restriction = self._world.rss_sensor.acceleration_restriction if self._world.rss_sensor and self._world.rss_sensor.response_valid else None
+if rss_restriction:
+ rss_ego_dynamics_on_route = self._world.rss_sensor.ego_dynamics_on_route
+ vehicle_physics = world.player.get_physics_control()
+...
+ vehicle_control = self._restrictor.restrict_vehicle_control(
+ vehicle_control, rss_restriction, rss_ego_dynamics_on_route, vehicle_physics)
+```
+
+
+#### The carla.RssSensor class
+
+The blueprint for this sensor has no modifiable attributes. However, the [carla.RssSensor](python_api.md#carla.RssSensor) object that it instantiates has attributes and methods that are detailed in the Python API reference. Here is a summary of them.
+
+| carla.RssSensor attributes | Description |
+| -------------------------- | ----------- |
+| `ego_vehicle_dynamics` | [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) considered for the ego vehicle. |
+| `other_vehicle_dynamics` | [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) considered for the rest of vehicles. |
+| `road_boundaries_mode` | Switches the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. __On__ by default. |
+| `visualization_mode` | Sets the visualization of the RSS on the server side. __All__ by default. |
+| `routing_targets` | The current list of targets considered to route the vehicle. |
+
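+For instance, a minimal, hypothetical snippet that adjusts some of these attributes, assuming `sensor` is an already spawned *sensor.other.rss* actor.
+
+```py
+# Hypothetical: `sensor` was spawned from the 'sensor.other.rss' blueprint.
+sensor.road_boundaries_mode = carla.RssRoadBoundariesMode.On
+sensor.visualization_mode = carla.RssVisualizationMode.RouteOnly
+```
+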
+```py
+# Fragment of manual_control_rss.py
+# The carla.RssSensor is updated when listening for a new carla.RssResponse
+def _on_rss_response(weak_self, response):
+...
+ self.timestamp = response.timestamp
+ self.response_valid = response.response_valid
+ self.proper_response = response.proper_response
+ self.acceleration_restriction = response.acceleration_restriction
+ self.ego_dynamics_on_route = response.ego_dynamics_on_route
+```
+
+!!! Warning
+    This sensor works fully on the client side. There is no blueprint in the server. Changes to the attributes will take effect __after__ *listen()* has been called.
+
+The methods available in this class are related to the routing of the vehicle. RSS calculations are always based on a route of the ego vehicle through the road network.
+
+The sensor allows the user to control the considered route by providing some key points, which could be the [carla.Transform](python_api.md#carla.Transform) of a [carla.Waypoint](python_api.md#carla.Waypoint). These points are best selected after the intersections to force the route to take the desired turn.
+
+
+| carla.RssSensor methods | Description |
+| ----------------------- | ----------- |
+| `routing_targets` | Get the current list of routing targets used for the route. |
+| `append_routing_target` | Append an additional position to the current routing targets. |
+| `reset_routing_targets` | Deletes the appended routing targets. |
+| `drop_route` | Discards the current route and creates a new one. |
+
+```py
+# Update the current route
+self.sensor.reset_routing_targets()
+if routing_targets:
+ for target in routing_targets:
+ self.sensor.append_routing_target(target)
+```
+
+!!! Note
+ If no routing targets are defined, a random route is created.
+
+#### Output attributes
+
+| Sensor data attribute | Type | Description |
+| --------------------- | ---- | ----------- |
+| `frame` | int | Frame number when the measurement took place. |
+| `timestamp` | double | Simulation time of the measurement in seconds since the beginning of the episode. |
+| `transform` | carla.Transform | Location and rotation in world coordinates of the sensor at the time of the measurement. |
+| `response_valid` | bool | Validity of the response. __False__ if calculations failed or an exception occurred. |
+| `proper_response` | libad_rss_python.ProperResponse | The proper response that the RSS calculated for the vehicle. |
+| `acceleration_restriction` | libad_rss_python.AccelerationRestriction | Acceleration restrictions to be applied, according to the RSS calculation. |
+| `rss_state_snapshot` | libad_rss_python.RssStateSnapshot | Detailed RSS states at the current moment in time. |
+| `ego_dynamics_on_route` | carla.RssEgoDynamicsOnRoute | Current ego vehicle dynamics regarding the route. |
+
---
## Semantic segmentation camera
-* __Blueprint:__ sensor.camera.semantic_segmentation
-* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
+* __Blueprint:__ sensor.camera.semantic_segmentation
+* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
-This camera classifies every object in sight by displaying it in a different color according to its tags (e.g., pedestrians in a different color than vehicles).
+This camera classifies every object in sight by displaying it in a different color according to its tags (e.g., pedestrians in a different color than vehicles).
When the simulation starts, every element in scene is created with a tag. So it happens when an actor is spawned. The objects are classified by their relative file path in the project. For example, meshes stored in `Unreal/CarlaUE4/Content/Static/Pedestrians` are tagged as `Pedestrian`.
-
-The server provides an image with the tag information __encoded in the red channel__: A pixel with a red value of `x` belongs to an object with tag `x`.
-This raw [carla.Image](python_api.md#carla.Image) can be stored and converted it with the help of __CityScapesPalette__ in [carla.ColorConverter](python_api.md#carla.ColorConverter) to apply the tags information and show picture with the semantic segmentation.
+
+The server provides an image with the tag information __encoded in the red channel__: A pixel with a red value of `x` belongs to an object with tag `x`.
+This raw [carla.Image](python_api.md#carla.Image) can be stored and converted with the help of __CityScapesPalette__ in [carla.ColorConverter](python_api.md#carla.ColorConverter) to apply the tag information and show a picture with the semantic segmentation.
The following tags are currently available:
@@ -1266,4 +1410,3 @@ The following tags are currently available:
-
diff --git a/PythonAPI/docs/sensor.yml b/PythonAPI/docs/sensor.yml
index 8bf6af47b..d7f0027f2 100644
--- a/PythonAPI/docs/sensor.yml
+++ b/PythonAPI/docs/sensor.yml
@@ -6,28 +6,29 @@
parent: carla.Actor
# - DESCRIPTION ------------------------
doc: >
- Sensors compound a specific family of actors quite diverse and unique. They are normally spawned as attachment/sons of a vehicle (take a look at carla.World to learn about actor spawning). Sensors are thoroughly designed to retrieve different types of data that they are listening to. The data they receive is shaped as different subclasses inherited from carla.SensorData (depending on the sensor).
-
- Most sensors can be divided in two groups: those receiving data on every tick (cameras, point clouds and some specific sensors) and those who only receive under certain circumstances (trigger detectors). CARLA provides a specific set of sensors and their blueprint can be found in carla.BlueprintLibrary. All the information on their preferences and settlement can be found [here](ref_sensors.md), but the list of those available in CARLA so far goes as follow:
- Receive data on every tick:
- - [Gnss sensor](ref_sensors.md#gnss-sensor).
- - [IMU sensor](ref_sensors.md#imu-sensor).
- - [Radar](ref_sensors.md#radar-sensor).
- - [Depth camera](ref_sensors.md#depth-camera).
- - [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
- - [RGB camera](ref_sensors.md#rgb-camera).
- - [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera).
- Only receive data when triggered:
- - [Collision detector](ref_sensors.md#collision-detector).
- - [Lane invasion detector](ref_sensors.md#lane-invasion-detector).
- - [Obstacle detector](ref_sensors.md#obstacle-detector).
+ Sensors compound a specific family of actors quite diverse and unique. They are normally spawned as attachment/sons of a vehicle (take a look at carla.World to learn about actor spawning). Sensors are thoroughly designed to retrieve different types of data that they are listening to. The data they receive is shaped as different subclasses inherited from carla.SensorData (depending on the sensor).
+
+    Most sensors can be divided in two groups: those receiving data on every tick (cameras, point clouds and some specific sensors) and those who only receive under certain circumstances (trigger detectors). CARLA provides a specific set of sensors and their blueprint can be found in carla.BlueprintLibrary. All the information on their preferences and settlement can be found [here](ref_sensors.md), but the list of those available in CARLA so far goes as follows.
+ Receive data on every tick.
+ - [Depth camera](ref_sensors.md#depth-camera).
+ - [Gnss sensor](ref_sensors.md#gnss-sensor).
+ - [IMU sensor](ref_sensors.md#imu-sensor).
+ - [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
+ - [Radar](ref_sensors.md#radar-sensor).
+ - [RGB camera](ref_sensors.md#rgb-camera).
+ - [RSS sensor](ref_sensors.md#rss-sensor).
+ - [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera).
+ Only receive data when triggered.
+ - [Collision detector](ref_sensors.md#collision-detector).
+ - [Lane invasion detector](ref_sensors.md#lane-invasion-detector).
+ - [Obstacle detector](ref_sensors.md#obstacle-detector).
# - PROPERTIES -------------------------
instance_variables:
- var_name: is_listening
type: boolean
doc: >
- When True the sensor will be waiting for data.
+ When True the sensor will be waiting for data.
# - METHODS ----------------------------
methods:
- def_name: listen
@@ -37,7 +38,7 @@
doc: >
The called function with one argument containing the sensor data.
doc: >
- The function the sensor will be calling to every time a new measurement is received. This function needs for an argument containing an object type carla.SensorData to work with.
+ The function the sensor will be calling to every time a new measurement is received. This function needs for an argument containing an object type carla.SensorData to work with.
# --------------------------------------
- def_name: stop
doc: >
@@ -45,4 +46,120 @@
# --------------------------------------
- def_name: __str__
# --------------------------------------
+
+ - class_name: RssSensor
+ parent: carla.Sensor
+ # - DESCRIPTION ------------------------
+ doc: >
+    This sensor works a bit differently than the rest. Take a look at the [specific documentation](adv_rss.md), and the [rss sensor reference](ref_sensors.md#rss-sensor) to gain a full understanding of it.
+
+
+    The RSS sensor uses world information, and an [RSS library](https://github.com/intel/ad-rss-lib) to make safety checks on a vehicle. The output retrieved by the sensor is a carla.RssResponse. This will be used by a carla.RssRestrictor to modify a carla.VehicleControl before applying it to a vehicle.
+ # - PROPERTIES -------------------------
+ instance_variables:
+ - var_name: ego_vehicle_dynamics
+ type: libad_rss_python.RssDynamics
+ doc: >
+ States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the ego vehicle.
+ - var_name: other_vehicle_dynamics
+ type: libad_rss_python.RssDynamics
+ doc: >
+ States the [RSS parameters](https://intel.github.io/ad-rss-lib/ad_rss/Appendix-ParameterDiscussion/) that the sensor will consider for the rest of vehicles.
+ - var_name: road_boundaries_mode
+ type: carla.RssRoadBoundariesMode
+ doc: >
+      Switches the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. It is set to __On__ by default.
+ - var_name: visualization_mode
+ type: carla.RssVisualizationMode
+ doc: >
+      Sets the visualization of the RSS on the server side. It is set to __All__ by default. These drawings may delay the RSS response, so it is best to set this to __Off__ when evaluating RSS performance.
+ - var_name: routing_targets
+ type: vector
+ doc: >
+ The current list of targets considered to route the vehicle. If no routing targets are defined, a route is generated at random.
+ # - METHODS ----------------------------
+ methods:
+ - def_name: append_routing_target
+ params:
+ - param_name: routing_target
+ type: carla.Transform
+ doc: >
+ New target point for the route. Choose these after the intersections to force the route to take the desired turn.
+ doc: >
+ Appends a new target position to the current route of the vehicle.
+ - def_name: reset_routing_targets
+ doc: >
+ Erases the targets that have been appended to the route.
+ - def_name: drop_route
+ doc: >
+ Discards the current route. If there are targets remaining in **routing_targets**, creates a new route using those. Otherwise, a new route is created at random.
+ # --------------------------------------
+ - def_name: __str__
+ # --------------------------------------
+
+ - class_name: RssRestrictor
+ parent:
+ # - DESCRIPTION ------------------------
+ doc: >
+    These objects apply restrictions to a carla.VehicleControl. They are part of the CARLA implementation of the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib). This class works hand in hand with a [rss sensor](ref_sensors.md#rss-sensor), which provides the data of the restrictions to be applied.
+ # - PROPERTIES -------------------------
+ instance_variables:
+
+ # - METHODS ----------------------------
+ methods:
+ - def_name: restrict_vehicle_control
+ params:
+ - param_name: vehicle_control
+ type: carla.VehicleControl
+ doc: >
+ The input vehicle control to be restricted.
+ - param_name: restriction
+ type: libad_rss_python.AccelerationRestriction
+ doc: >
+ Part of the response generated by the sensor. Contains restrictions to be applied to the acceleration of the vehicle.
+ - param_name: ego_dynamics_on_route
+ type: carla.RssEgoDynamicsOnRoute
+ doc: >
+ Part of the response generated by the sensor. Contains dynamics and heading of the vehicle regarding its route.
+ - param_name: vehicle_physics
+          type: carla.VehiclePhysicsControl
+ doc: >
+ The current physics of the vehicle. Used to apply the restrictions properly.
+ return:
+ carla.VehicleControl
+ doc: >
+ Applies the safety restrictions given by a carla.RssSensor to a carla.VehicleControl.
+ # --------------------------------------
+
+ - class_name: RssRoadBoundariesMode
+ # - DESCRIPTION ------------------------
+ doc: >
+ Enum declaration used in carla.RssSensor to enable or disable the [stay on road](https://intel.github.io/ad-rss-lib/ad_rss_map_integration/HandleRoadBoundaries/) feature. In summary, this feature considers the road boundaries as virtual objects. The minimum safety distance check is applied to these virtual walls, in order to make sure the vehicle does not drive off the road.
+ # - PROPERTIES -------------------------
+ instance_variables:
+ - var_name: 'On'
+ doc: >
+ Enables the _stay on road_ feature.
+ # --------------------------------------
+ - var_name: 'Off'
+ doc: >
+ Disables the _stay on road_ feature.
+ # --------------------------------------
+
+ - class_name: RssVisualizationMode
+ # - DESCRIPTION ------------------------
+ doc: >
+    Enum declaration used to state the visualization of the RSS calculations on the server side. Depending on its value, the carla.RssSensor will use a carla.DebugHelper to draw different elements. These drawings take some time and might delay the RSS responses. It is best to disable them when evaluating RSS performance.
+ # - PROPERTIES -------------------------
+ instance_variables:
+ - var_name: 'Off'
+ # --------------------------------------
+ - var_name: RouteOnly
+ # --------------------------------------
+ - var_name: VehicleStateOnly
+ # --------------------------------------
+ - var_name: VehicleStateAndRoute
+ # --------------------------------------
+ - var_name: All
+ # --------------------------------------
...
diff --git a/PythonAPI/docs/sensor_data.yml b/PythonAPI/docs/sensor_data.yml
index 342a49a2a..71ae80f4f 100644
--- a/PythonAPI/docs/sensor_data.yml
+++ b/PythonAPI/docs/sensor_data.yml
@@ -5,15 +5,16 @@
- class_name: SensorData
# - DESCRIPTION ------------------------
doc: >
- Base class for all the objects containing data generated by a carla.Sensor. This objects should be the argument of the function said sensor is listening to, in order to work with them. Each of these sensors needs for a specific type of sensor data. The relation between available sensors and their corresponding data goes like:
- - Cameras (RGB, depth and semantic segmentation): carla.Image.
- - Collision detector: carla.CollisionEvent.
- - Gnss detector: carla.GnssMeasurement.
- - IMU detector: carla.IMUMeasurement.
- - Lane invasion detector: carla.LaneInvasionEvent.
- - Lidar raycast: carla.LidarMeasurement.
- - Obstacle detector: carla.ObstacleDetectionEvent.
- - Radar detector: carla.RadarMeasurement.
+    Base class for all the objects containing data generated by a carla.Sensor. These objects should be the argument of the function said sensor is listening to, in order to work with them. Each of these sensors needs a specific type of sensor data. Hereunder is a list of the sensors and their corresponding data.
+ - Cameras (RGB, depth and semantic segmentation): carla.Image.
+ - Collision detector: carla.CollisionEvent.
+ - Gnss detector: carla.GnssMeasurement.
+ - IMU detector: carla.IMUMeasurement.
+ - Lane invasion detector: carla.LaneInvasionEvent.
+ - Lidar raycast: carla.LidarMeasurement.
+ - Obstacle detector: carla.ObstacleDetectionEvent.
+ - Radar detector: carla.RadarMeasurement.
+ - RSS sensor: carla.RssResponse.
# - PROPERTIES -------------------------
instance_variables:
- var_name: frame
@@ -33,21 +34,21 @@
- class_name: ColorConverter
# - DESCRIPTION ------------------------
doc: >
- Class that defines conversion patterns that can be applied to a carla.Image in order to show information provided by carla.Sensor. Depth conversions cause a loss of accuracy, as sensors detect depth as float that is then converted to a grayscale value between 0 and 255. Take a look a this [recipe](ref_code_recipes.md#converted-image-recipe) to see an example of how to create and save image data for sensor.camera.semantic_segmentation.
+    Class that defines conversion patterns that can be applied to a carla.Image in order to show information provided by carla.Sensor. Depth conversions cause a loss of accuracy, as sensors detect depth as float that is then converted to a grayscale value between 0 and 255. Take a look at this [recipe](ref_code_recipes.md#converted-image-recipe) to see an example of how to create and save image data for sensor.camera.semantic_segmentation.
# - PROPERTIES -------------------------
instance_variables:
- var_name: CityScapesPalette
doc: >
- Converts the image to a segmentated map using tags provided by the blueprint library. Used by sensor.camera.semantic_segmentation.
+ Converts the image to a segmentated map using tags provided by the blueprint library. Used by sensor.camera.semantic_segmentation.
- var_name: Depth
doc: >
- Converts the image to a linear depth map. Used by sensor.camera.depth.
+ Converts the image to a linear depth map. Used by sensor.camera.depth.
- var_name: LogarithmicDepth
doc: >
- Converts the image to a depth map using a logarithmic scale, leading to better precision for small distances at the expense of losing it when further away.
+ Converts the image to a depth map using a logarithmic scale, leading to better precision for small distances at the expense of losing it when further away.
- var_name: Raw
doc: >
- No changes applied to the image.
+ No changes applied to the image.
- class_name: Image
parent: carla.SensorData
@@ -63,7 +64,7 @@
- var_name: height
type: int
doc: >
- Image height in pixels.
+ Image height in pixels.
- var_name: width
type: int
doc: >
@@ -84,12 +85,12 @@
- param_name: path
type: str
doc: >
- Path that will contain the image.
+ Path that will contain the image.
- param_name: color_converter
type: carla.ColorConverter
default: Raw
doc: >
- Default Raw will make no changes.
+ Default Raw will make no changes.
doc: >
Saves the image to disk using a converter pattern stated as `color_converter`. The default conversion pattern is Raw that will make no changes to the image.
# --------------------------------------
@@ -122,7 +123,7 @@
- var_name: channels
type: int
doc: >
- Number of lasers shot.
+ Number of lasers shot.
- var_name: horizontal_angle
type: float
doc: >
@@ -130,7 +131,7 @@
- var_name: raw_data
type: bytes
doc: >
- List of 3D points received as data.
+ List of 3D points received as data.
# - METHODS ----------------------------
methods:
- def_name: save_to_disk
@@ -138,7 +139,7 @@
- param_name: path
type: str
doc: >
- Saves the point cloud to disk as a .ply file describing data from 3D scanners. The files generated are ready to be used within [MeshLab](http://www.meshlab.net/), an open source system for processing said files. Just take into account that axis may differ from Unreal Engine and so, need to be reallocated.
+ Saves the point cloud to disk as a .ply file describing data from 3D scanners. The files generated are ready to be used within [MeshLab](http://www.meshlab.net/), an open source system for processing said files. Just take into account that axis may differ from Unreal Engine and so, need to be reallocated.
# --------------------------------------
- def_name: get_point_count
params:
@@ -170,7 +171,7 @@
parent: carla.SensorData
# - DESCRIPTION ------------------------
doc: >
- Class that defines a collision data for sensor.other.collision. The sensor creates one of this for every collision detected which may be many for one simulation step. Learn more about this [here](ref_sensors.md#collision-detector).
+ Class that defines a collision data for sensor.other.collision. The sensor creates one of this for every collision detected which may be many for one simulation step. Learn more about this [here](ref_sensors.md#collision-detector).
# - PROPERTIES -------------------------
instance_variables:
- var_name: actor
@@ -180,7 +181,7 @@
- var_name: other_actor
type: carla.Actor
doc: >
- The second actor involved in the collision.
+ The second actor involved in the collision.
- var_name: normal_impulse
type: carla.Vector3D
doc: >
@@ -190,13 +191,13 @@
parent: carla.SensorData
# - DESCRIPTION ------------------------
doc: >
- Class that defines the obstacle data for sensor.other.obstacle. Learn more about this [here](ref_sensors.md#obstacle-detector).
+ Class that defines the obstacle data for sensor.other.obstacle. Learn more about this [here](ref_sensors.md#obstacle-detector).
# - PROPERTIES -------------------------
instance_variables:
- var_name: actor
type: carla.Actor
doc: >
- The actor the sensor is attached to.
+ The actor the sensor is attached to.
- var_name: other_actor
type: carla.Actor
doc: >
@@ -204,7 +205,7 @@
- var_name: distance
type: float
doc: >
- Distance between `actor` and `other`.
+ Distance between `actor` and `other`.
# - METHODS ----------------------------
methods:
- def_name: __str__
@@ -214,7 +215,7 @@
parent: carla.SensorData
# - DESCRIPTION ------------------------
doc: >
- Class that defines lanes invasion for sensor.other.lane_invasion. It works only client-side and is dependant on OpenDRIVE to provide reliable information. The sensor creates one of this every time there is a lane invasion, which may be more than once per simulation step. Learn more about this [here](ref_sensors.md#lane-invasion-detector).
+ Class that defines lane invasions for sensor.other.lane_invasion. It works only client-side and is dependent on OpenDRIVE to provide reliable information. The sensor creates one of these every time there is a lane invasion, which may be more than once per simulation step. Learn more about this [here](ref_sensors.md#lane-invasion-detector).
# - PROPERTIES -------------------------
instance_variables:
- var_name: actor
@@ -258,7 +259,7 @@
parent: carla.SensorData
# - DESCRIPTION ------------------------
doc: >
- Class that defines the data registered by a sensor.other.imu, regarding the sensor's transformation according to the current carla.World. It essentially acts as accelerometer, gyroscope and compass.
+ Class that defines the data registered by a sensor.other.imu, regarding the sensor's transformation according to the current carla.World. It essentially acts as an accelerometer, gyroscope and compass.
# - PROPERTIES -------------------------
instance_variables:
- var_name: accelerometer
@@ -288,12 +289,12 @@
- var_name: raw_data
type: bytes
doc: >
- The complete information of the carla.RadarDetection the radar has registered.
+ The complete information of all the carla.RadarDetection points the radar has registered.
# - METHODS ----------------------------
methods:
- def_name: get_detection_count
doc: >
- Retrieves the number of entries generated, same as **\__str__()**.
+ Retrieves the number of entries generated, same as **\__str__()**.
# --------------------------------------
- def_name: __getitem__
params:
@@ -317,7 +318,7 @@
- class_name: RadarDetection
# - DESCRIPTION ------------------------
doc: >
- Data contained inside a carla.RadarMeasurement. Each of these represents one of the points in the cloud that a sensor.other.radar registers and contains the distance, angle and velocity in relation to the radar.
+ Data contained inside a carla.RadarMeasurement. Each of these represents one of the points in the cloud that a sensor.other.radar registers and contains the distance, angle and velocity in relation to the radar.
# - PROPERTIES -------------------------
instance_variables:
- var_name: altitude
@@ -343,4 +344,129 @@
methods:
- def_name: __str__
# --------------------------------------
+
+ - class_name: RssResponse
+ parent: carla.SensorData
+ # - DESCRIPTION ------------------------
+ doc: >
+ Class that contains the output of a carla.RssSensor. This is the result of the RSS calculations performed for the parent vehicle of the sensor.
+
+
+ A carla.RssRestrictor will use the data to modify the carla.VehicleControl of the vehicle.
+ # - PROPERTIES -------------------------
+ instance_variables:
+ - var_name: response_valid
+ type: bool
+ doc: >
+ States if the response is valid. It is __False__ if calculations failed or an exception occurred.
+ # --------------------------------------
+ - var_name: proper_response
+ type: libad_rss_python.ProperResponse
+ doc: >
+ The proper response that the RSS calculated for the vehicle.
+ # --------------------------------------
+ - var_name: acceleration_restriction
+ type: libad_rss_python.AccelerationRestriction
+ doc: >
+ Acceleration restrictions to be applied, according to the RSS calculation.
+ # --------------------------------------
+ - var_name: rss_state_snapshot
+ type: libad_rss_python.RssStateSnapshot
+ doc: >
+ Detailed RSS states at the current moment in time.
+ # --------------------------------------
+ - var_name: ego_dynamics_on_route
+ type: carla.RssEgoDynamicsOnRoute
+ doc: >
+ Current ego vehicle dynamics regarding the route.
+ # - METHODS ----------------------------
+ methods:
+ - def_name: __str__
+ # --------------------------------------
+
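+
+ The following is a minimal, hypothetical sketch of how a client would consume a carla.RssResponse. It assumes a server on `localhost:2000` and an `ego_vehicle` actor spawned elsewhere; the blueprint id `sensor.other.rss` and the attribute names are the ones documented above, but the exact setup should be checked against the RssSensor reference.
+
+ ```py
+ import carla
+
+ # A sketch, assuming a running server and an existing `ego_vehicle` actor.
+ client = carla.Client('localhost', 2000)
+ world = client.get_world()
+
+ # The RSS sensor is attached to the ego vehicle like any other sensor.
+ bp = world.get_blueprint_library().find('sensor.other.rss')
+ rss_sensor = world.spawn_actor(bp, carla.Transform(), attach_to=ego_vehicle)
+
+ def on_rss_response(response):
+     # `response` is a carla.RssResponse, as documented above.
+     if not response.response_valid:
+         return  # calculations failed or an exception occurred
+     print(response.proper_response)
+     print(response.acceleration_restriction)
+
+ rss_sensor.listen(on_rss_response)
+ ```
+
+ The `proper_response` and `acceleration_restriction` values are the data a carla.RssRestrictor takes to modify a carla.VehicleControl; the exact restrictor call is documented with that class.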
+ - class_name: RssEgoDynamicsOnRoute
+ # - DESCRIPTION ------------------------
+ doc: >
+ Part of the data contained inside a carla.RssResponse describing the state of the vehicle. The parameters include its current dynamics, and how it is heading with regard to the target route.
+ # - PROPERTIES -------------------------
+ instance_variables:
+ - var_name: ego_speed
+ type: libad_physics_python.Speed
+ doc: >
+ The ego vehicle's speed.
+ # --------------------------------------
+ - var_name: min_stopping_distance
+ type: libad_physics_python.Distance
+ doc: >
+ The current minimum stopping distance.
+ # --------------------------------------
+ - var_name: ego_center
+ type: libad_map_access_python.ENUPoint
+ doc: >
+ The considered ENU position of the ego vehicle.
+ # --------------------------------------
+ - var_name: ego_heading
+ type: libad_map_access_python.ENUHeading
+ doc: >
+ The considered heading of the ego vehicle.
+ # --------------------------------------
+ - var_name: ego_center_within_route
+ type: bool
+ doc: >
+ States if the ego vehicle's center is within the route.
+ # --------------------------------------
+ - var_name: crossing_border
+ type: bool
+ doc: >
+ States if the vehicle is already crossing one of the lane borders.
+ # --------------------------------------
+ - var_name: route_heading
+ type: libad_map_access_python.ENUHeading
+ doc: >
+ The considered heading of the route.
+ # --------------------------------------
+ - var_name: route_nominal_center
+ type: libad_map_access_python.ENUPoint
+ doc: >
+ The considered nominal center of the current route.
+ # --------------------------------------
+ - var_name: heading_diff
+ type: libad_map_access_python.ENUHeading
+ doc: >
+ The considered heading difference with regard to the route.
+ # --------------------------------------
+ - var_name: route_speed_lat
+ type: libad_physics_python.Speed
+ doc: >
+ The ego vehicle's speed component _lat_ regarding the route.
+ # --------------------------------------
+ - var_name: route_speed_lon
+ type: libad_physics_python.Speed
+ doc: >
+ The ego vehicle's speed component _lon_ regarding the route.
+ # --------------------------------------
+ - var_name: route_accel_lat
+ type: libad_physics_python.Acceleration
+ doc: >
+ The ego vehicle's acceleration component _lat_ regarding the route.
+ # --------------------------------------
+ - var_name: route_accel_lon
+ type: libad_physics_python.Acceleration
+ doc: >
+ The ego vehicle's acceleration component _lon_ regarding the route.
+ # --------------------------------------
+ - var_name: avg_route_accel_lat
+ type: libad_physics_python.Acceleration
+ doc: >
+ The ego vehicle's acceleration component _lat_ regarding the route, smoothed by an average filter.
+ # --------------------------------------
+ - var_name: avg_route_accel_lon
+ type: libad_physics_python.Acceleration
+ doc: >
+ The ego vehicle's acceleration component _lon_ regarding the route, smoothed by an average filter.
+ # - METHODS ----------------------------
+ methods:
+ - def_name: __str__
+ # --------------------------------------
+
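+
+ Since these are plain attributes, a client can read them directly from the response. A short sketch, reusing the hypothetical callback pattern from the previous example:
+
+ ```py
+ def log_ego_dynamics(response):
+     # `response.ego_dynamics_on_route` is a carla.RssEgoDynamicsOnRoute.
+     dyn = response.ego_dynamics_on_route
+     print('ego speed: {}'.format(dyn.ego_speed))
+     print('route speed lon/lat: {} / {}'.format(dyn.route_speed_lon, dyn.route_speed_lat))
+     print('heading diff to route: {}'.format(dyn.heading_diff))
+     if not dyn.ego_center_within_route:
+         print('ego center has left the route')
+     if dyn.crossing_border:
+         print('ego vehicle is crossing a lane border')
+ ```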
...
diff --git a/README.md b/README.md
index c409c509f..9eec0ab81 100644
--- a/README.md
+++ b/README.md
@@ -97,6 +97,6 @@ CARLA specific code is distributed under MIT License.
CARLA specific assets are distributed under CC-BY License.
-The ad-rss-lib library compiled and linked by the [RSS Integration build variant](Docs/rss_lib_integration.md) introduces LGPL-2.1-only License.
+The ad-rss-lib library compiled and linked by the [RSS Integration build variant](Docs/adv_rss.md) introduces LGPL-2.1-only License.
Note that UE4 itself follows its own license terms.
diff --git a/mkdocs.yml b/mkdocs.yml
index b71166154..b37277f55 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -26,6 +26,7 @@ nav:
- Advanced steps:
- 'Recorder': 'adv_recorder.md'
- 'Rendering options': 'adv_rendering_options.md'
+ - 'RSS sensor': 'adv_rss.md'
- 'Synchrony and time-step': 'adv_synchrony_timestep.md'
- 'Traffic Manager': 'adv_traffic_manager.md'
- References: