From 187b72090280b69ccf4b209c6ac2982a5c8f0f6c Mon Sep 17 00:00:00 2001
From: Daniel Santos-Olivan
Date: Thu, 30 Jul 2020 15:38:18 +0200
Subject: [PATCH] Updated documentation for RawLidar
---
Docs/bp_library.md | 10 +++
Docs/python_api.md | 54 ++++++++++++
Docs/ref_sensors.md | 156 ++++++++++++++++++++++++++++-----
PythonAPI/docs/sensor.yml | 1 +
PythonAPI/docs/sensor_data.yml | 85 ++++++++++++++++++
5 files changed, 285 insertions(+), 21 deletions(-)
diff --git a/Docs/bp_library.md b/Docs/bp_library.md
index 86f6f6196..1f7405ff3 100755
--- a/Docs/bp_library.md
+++ b/Docs/bp_library.md
@@ -155,6 +155,16 @@ Check out the [introduction to blueprints](core_actors.md).
- `rotation_frequency` (_Float_)_ – Modifiable_
- `sensor_tick` (_Float_)_ – Modifiable_
- `upper_fov` (_Float_)_ – Modifiable_
+- **sensor.lidar.ray_cast_raw**
+ - **Attributes:**
+ - `channels` (_Int_)_ – Modifiable_
+ - `lower_fov` (_Float_)_ – Modifiable_
+ - `points_per_second` (_Int_)_ – Modifiable_
+ - `range` (_Float_)_ – Modifiable_
+ - `role_name` (_String_)_ – Modifiable_
+ - `rotation_frequency` (_Float_)_ – Modifiable_
+ - `sensor_tick` (_Float_)_ – Modifiable_
+ - `upper_fov` (_Float_)_ – Modifiable_
- **sensor.other.collision**
- **Attributes:**
- `role_name` (_String_)_ – Modifiable_
diff --git a/Docs/python_api.md b/Docs/python_api.md
index 07e548e87..52acfc4bb 100644
--- a/Docs/python_api.md
+++ b/Docs/python_api.md
@@ -1055,6 +1055,58 @@ Retrieves the number of points sorted by channel that are generated by this meas
---
+## carla.LidarRawDetection
+Data contained inside a [carla.LidarRawMeasurement](#carla.LidarRawMeasurement). Each of these represents one of the points in the cloud with its location, the cosine of the incident angle, and the index and semantic tag of the object that was hit.
+
+Instance Variables
+- **point** (_[carla.Location](#carla.Location)_)
+Point in xyz coordinates.
+- **cos_inc_angle** (_float_)
+Cosine of the incident angle between the ray and the normal of the hit object.
+- **object_idx** (_uint_)
+Carla index of the hit actor.
+- **object_tag** (_uint_)
+Semantic tag of the hit component.
+
+Methods
+
+Dunder methods
+- **\__str__**(**self**)
+
+---
+
+## carla.LidarRawMeasurement
+Inherited from _[carla.SensorData](#carla.SensorData)_
+Class that defines the raw lidar data retrieved by a sensor.lidar.ray_cast_raw. This essentially simulates a rotating lidar using ray-casting. Learn more about this [here](ref_sensors.md#rawlidar-raycast-sensor).
+
+
+Instance Variables
+- **channels** (_int_)
+Number of lasers shot.
+- **horizontal_angle** (_float_)
+Horizontal angle the Lidar is rotated at the time of the measurement (in radians).
+- **raw_data** (_bytes_)
+Received list of raw detection points. Each point consists of 3D-xyz data plus the cosine of the incident angle, the index of the hit actor and its semantic tag.
+
+Methods
+- **save_to_disk**(**self**, **path**)
+Saves the point cloud to disk as a .ply file describing data from 3D scanners. The files generated are ready to be used within [MeshLab](http://www.meshlab.net/), an open-source system for processing said files. Just take into account that the axes may differ from those of Unreal Engine and so may need to be reallocated.
+ - **Parameters:**
+ - `path` (_str_)
+
+Getters
+- **get_point_count**(**self**, **channel**)
+Retrieves the number of points sorted by channel that are generated by this measure. Sorting by channel makes it possible to identify the original channel of every point.
+ - **Parameters:**
+ - `channel` (_int_)
+
+Dunder methods
+- **\__getitem__**(**self**, **pos**=int)
+- **\__iter__**(**self**)
+- **\__len__**(**self**)
+- **\__setitem__**(**self**, **pos**=int, **detection**=[carla.LidarRawDetection](#carla.LidarRawDetection))
+- **\__str__**(**self**)
+
+---
+
## carla.Light
This class exposes the lights that exist in the scene, except for vehicle lights. The properties of a light can be queried and changed at will.
Lights are automatically turned on when the simulator enters night mode (sun altitude is below zero).
@@ -1694,6 +1746,7 @@ Sets the log level.
- [Gnss sensor](ref_sensors.md#gnss-sensor).
- [IMU sensor](ref_sensors.md#imu-sensor).
- [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
+ - [RawLidar raycast](ref_sensors.md#rawlidar-raycast-sensor).
- [Radar](ref_sensors.md#radar-sensor).
- [RGB camera](ref_sensors.md#rgb-camera).
- [RSS sensor](ref_sensors.md#rss-sensor).
@@ -1728,6 +1781,7 @@ Base class for all the objects containing data generated by a [carla.Sensor](#ca
- IMU detector: [carla.IMUMeasurement](#carla.IMUMeasurement).
- Lane invasion detector: [carla.LaneInvasionEvent](#carla.LaneInvasionEvent).
- Lidar raycast: [carla.LidarMeasurement](#carla.LidarMeasurement).
+ - RawLidar raycast: [carla.LidarRawMeasurement](#carla.LidarRawMeasurement).
- Obstacle detector: [carla.ObstacleDetectionEvent](#carla.ObstacleDetectionEvent).
- Radar detector: [carla.RadarMeasurement](#carla.RadarMeasurement).
- RSS sensor: [carla.RssResponse](#carla.RssResponse).
diff --git a/Docs/ref_sensors.md b/Docs/ref_sensors.md
index f69b44086..b48fa4778 100644
--- a/Docs/ref_sensors.md
+++ b/Docs/ref_sensors.md
@@ -6,6 +6,7 @@
* [__IMU sensor__](#imu-sensor)
* [__Lane invasion detector__](#lane-invasion-detector)
* [__Lidar raycast sensor__](#lidar-raycast-sensor)
+ * [__RawLidar raycast sensor__](#rawlidar-raycast-sensor)
* [__Obstacle detector__](#obstacle-detector)
* [__Radar sensor__](#radar-sensor)
* [__RGB camera__](#rgb-camera)
@@ -485,16 +486,16 @@ where a is the attenuation coefficient and d is the distance to the sensor.
In order to increase the realism, we add the possibility of dropping cloud points. This is done in two different ways. In a general way, we can randomly drop points with a probability given by dropoff_general_rate. In this case, the drop-off of points is done before tracing the ray cast, so adjusting this parameter can improve performance. If that parameter is set to zero it will be ignored. The second way to regulate the drop-off of points is at a rate proportional to the intensity. This drop-off rate goes from zero at dropoff_intensity_limit down to dropoff_zero_intensity at zero intensity.
-This output contains a cloud of simulation points and thus, can be iterated to retrieve a list of their [`carla.Location`](python_api.md#carla.Location):
+This output contains a cloud of simulation points, each with its intensity, and thus can be iterated to retrieve a list of their [`carla.LidarDetection`](python_api.md#carla.LidarDetection):
```py
-for location in lidar_measurement:
- print(location)
+for detection in lidar_measurement:
+ print(detection)
```
-The rotation of the LIDAR can be tuned to cover a specific angle on every simulation step (using a [fixed time-step](adv_synchrony_timestep.md)). For example, to rotate once per step (full circle output, as in the picture below), the rotation frequency and the simulated FPS should be equal.
__1.__ Set the sensor's frequency `sensors_bp['lidar'][0].set_attribute('rotation_frequency','10')`.
__2.__ Run the simulation using `python config.py --fps=10`.
+The rotation of the LIDAR can be tuned to cover a specific angle on every simulation step (using a [fixed time-step](adv_synchrony_timestep.md)). For example, to rotate once per step (full circle output, as in the picture below), the rotation frequency and the simulated FPS should be equal.
__1.__ Set the sensor's frequency `sensors_bp['lidar'][0].set_attribute('rotation_frequency','10')`.
__2.__ Run the simulation using `python config.py --fps=10`.
-![LidarPointCloud](img/lidar_point_cloud.gif)
+![LidarPointCloud](img/lidar_point_cloud.png)
#### Lidar attributes
@@ -610,6 +611,119 @@ The rotation of the LIDAR can be tuned to cover a specific angle on every simula
+
+---
+## RawLidar raycast sensor
+
+* __Blueprint:__ sensor.lidar.ray_cast_raw
+* __Output:__ [carla.LidarRawMeasurement](python_api.md#carla.LidarRawMeasurement) per step (unless `sensor_tick` says otherwise).
+
+This sensor simulates a rotating Lidar implemented using ray-casting that exposes all the information about the ray-cast hit. Its behaviour is similar to that of the [Lidar raycast sensor](ref_sensors.md#lidar-raycast-sensor), but this sensor has none of the intensity, dropoff and noise features, and its output is more complete.
+The points are computed by adding a laser for each channel distributed in the vertical FOV. The rotation is simulated by computing the horizontal angle that the Lidar rotates in a frame. The point cloud is calculated by doing a ray-cast for each laser in every step:
+`points_per_channel_each_step = points_per_second / (FPS * channels)`
+
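To make the formula above concrete, here is a small illustrative helper (not part of the CARLA API) that evaluates it with the blueprint defaults:

```python
def points_per_channel_each_step(points_per_second, fps, channels):
    # Ray casts performed by a single laser during one simulation step.
    return points_per_second // (fps * channels)

# With the blueprint defaults (56000 points/s, 32 channels) at 10 FPS:
print(points_per_channel_each_step(56000, 10, 32))  # 175
```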
+A Lidar measurement contains a packet with all the points generated during a `1/FPS` interval. During this interval the physics are not updated so all the points in a measurement reflect the same "static picture" of the scene.
+
+This output contains a cloud of raw lidar detections and can therefore be iterated to retrieve a list of [`carla.LidarRawDetection`](python_api.md#carla.LidarRawDetection):
+
+```py
+for detection in lidar_raw_measurement:
+ print(detection)
+```
+
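The raw buffer can also be unpacked in bulk outside of CARLA. Below is a minimal sketch that assumes NumPy is available and follows the per-detection packing described in the output attributes (four 32-bit floats followed by two 32-bit unsigned integers); the helper name and dtype are illustrative, not part of the CARLA API:

```python
import numpy as np

# Per-detection layout assumed from the output attributes: x, y, z and
# cos_inc_angle as 32-bit floats, then the hit actor's index and its
# semantic tag as 32-bit unsigned integers.
raw_dtype = np.dtype([
    ('x', np.float32), ('y', np.float32), ('z', np.float32),
    ('cos_inc_angle', np.float32),
    ('object_idx', np.uint32), ('object_tag', np.uint32),
])

def decode_raw_lidar(raw_data):
    """Decode a LidarRawMeasurement.raw_data buffer into a structured array."""
    return np.frombuffer(raw_data, dtype=raw_dtype)

# Round-trip demo with a synthetic one-detection buffer:
sample = np.array([(1.0, 2.0, 3.0, 0.5, 7, 4)], dtype=raw_dtype)
print(decode_raw_lidar(sample.tobytes())['object_tag'])  # [4]
```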
+The rotation of the LIDAR can be tuned to cover a specific angle on every simulation step (using a [fixed time-step](adv_synchrony_timestep.md)). For example, to rotate once per step (full circle output, as in the picture below), the rotation frequency and the simulated FPS should be equal.
+__1.__ Set the sensor's frequency `sensors_bp['lidar'][0].set_attribute('rotation_frequency','10')`.
+__2.__ Run the simulation using `python config.py --fps=10`.
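The relationship between rotation frequency and FPS can be sketched with a hypothetical helper (not part of the CARLA API) that computes the horizontal angle swept per step:

```python
def degrees_per_step(rotation_frequency, fps):
    # Horizontal angle (in degrees) the Lidar sweeps in one simulation step.
    return 360.0 * rotation_frequency / fps

print(degrees_per_step(10, 10))  # 360.0 -> one full circle per step
print(degrees_per_step(10, 20))  # 180.0 -> half a circle per step
```

When the two values are equal, each measurement covers a full circle, as in the picture below.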
+
+![LidarPointCloud](img/rawlidar_point_cloud.png)
+
+#### RawLidar attributes
+
+| Blueprint attribute | Type | Default | Description |
+| --- | --- | --- | --- |
+| `channels` | int | 32 | Number of lasers. |
+| `range` | float | 10.0 | Maximum distance to measure/raycast in meters (centimeters for CARLA 0.9.6 or previous). |
+| `points_per_second` | int | 56000 | Points generated by all the lasers per second. |
+| `rotation_frequency` | float | 10.0 | Lidar rotation frequency. |
+| `upper_fov` | float | 10.0 | Angle in degrees of the highest laser. |
+| `lower_fov` | float | -30.0 | Angle in degrees of the lowest laser. |
+| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). |
+
+#### Output attributes
+
+| Sensor data attribute | Type | Description |
+| --- | --- | --- |
+| `frame` | int | Frame number when the measurement took place. |
+| `timestamp` | double | Simulation time of the measurement in seconds since the beginning of the episode. |
+| `transform` | carla.Transform | Location and rotation in world coordinates of the sensor at the time of the measurement. |
+| `horizontal_angle` | float | Angle (radians) in the XY plane of the lidar in the current frame. |
+| `channels` | int | Number of channels (lasers) of the lidar. |
+| `get_point_count(channel)` | int | Number of points per channel captured in the current frame. |
+| `raw_data` | bytes | Array that can be decoded into raw detections; each detection consists of four 32-bit floats (the XYZ coordinates of the point and the cosine of the incident angle) and two unsigned 32-bit integers (the index of the hit actor and its semantic tag). |
+
---
## Obstacle detector
@@ -1118,21 +1232,21 @@ Since these effects are provided by UE, please make sure to check their document
---
## RSS sensor
-* __Blueprint:__ sensor.other.rss
-* __Output:__ [carla.RssResponse](python_api.md#carla.RssResponse) per step (unless `sensor_tick` says otherwise).
+* __Blueprint:__ sensor.other.rss
+* __Output:__ [carla.RssResponse](python_api.md#carla.RssResponse) per step (unless `sensor_tick` says otherwise).
!!! Important
- It is highly recommended to read the specific [rss documentation](adv_rss.md) before reading this.
+ It is highly recommended to read the specific [rss documentation](adv_rss.md) before reading this.
-This sensor integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in CARLA. It is disabled by default in CARLA, and it has to be explicitly built in order to be used.
+This sensor integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in CARLA. It is disabled by default in CARLA, and it has to be explicitly built in order to be used.
-The RSS sensor calculates the RSS state of a vehicle and retrieves the current RSS Response as sensor data. The [carla.RssRestrictor](python_api.md#carla.RssRestrictor) will use this data to adapt a [carla.VehicleControl](python_api.md#carla.VehicleControl) before applying it to a vehicle.
+The RSS sensor calculates the RSS state of a vehicle and retrieves the current RSS Response as sensor data. The [carla.RssRestrictor](python_api.md#carla.RssRestrictor) will use this data to adapt a [carla.VehicleControl](python_api.md#carla.VehicleControl) before applying it to a vehicle.
-These controllers can be generated by an *Automated Driving* stack or user input. For instance, hereunder there is a fragment of code from `PythonAPI/examples/manual_control_rss.py`, where the user input is modified using RSS when necessary.
+These controllers can be generated by an *Automated Driving* stack or user input. For instance, hereunder there is a fragment of code from `PythonAPI/examples/manual_control_rss.py`, where the user input is modified using RSS when necessary.
-__1.__ Checks if the __RssSensor__ generates a valid response containing restrictions.
-__2.__ Gathers the current dynamics of the vehicle and the vehicle physics.
-__3.__ Applies restrictions to the vehicle control using the response from the RssSensor, and the current dynamics and physicis of the vehicle.
+__1.__ Checks if the __RssSensor__ generates a valid response containing restrictions.
+__2.__ Gathers the current dynamics of the vehicle and the vehicle physics.
+__3.__ Applies restrictions to the vehicle control using the response from the RssSensor, and the current dynamics and physics of the vehicle.
```py
rss_restriction = self._world.rss_sensor.acceleration_restriction if self._world.rss_sensor and self._world.rss_sensor.response_valid else None
@@ -1147,7 +1261,7 @@ if rss_restriction:
#### The carla.RssSensor class
-The blueprint for this sensor has no modifiable attributes. However, the [carla.RssSensor](python_api.md#carla.RssSensor) object that it instantiates has attributes and methods that are detailed in the Python API reference. Here is a summary of them.
+The blueprint for this sensor has no modifiable attributes. However, the [carla.RssSensor](python_api.md#carla.RssSensor) object that it instantiates has attributes and methods that are detailed in the Python API reference. Here is a summary of them.
@@ -1191,7 +1305,7 @@ def _on_rss_response(weak_self, response):
!!! Warning
This sensor works fully on the client side. There is no blueprint in the server. Changes on the attributes will have effect __after__ the *listen()* has been called.
-The methods available in this class are related to the routing of the vehicle. RSS calculations are always based on a route of the ego vehicle through the road network.
+The methods available in this class are related to the routing of the vehicle. RSS calculations are always based on a route of the ego vehicle through the road network.
The sensor allows to control the considered route by providing some key points, which could be the [carla.Transform](python_api.md#carla.Transform) in a [carla.Waypoint](python_api.md#carla.Waypoint). These points are best selected after the intersections to force the route to take the desired turn.
@@ -1298,11 +1412,11 @@ def _on_actor_constellation_request(self, actor_constellation_data):
---
## Semantic segmentation camera
-* __Blueprint:__ sensor.camera.semantic_segmentation
-* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
+* __Blueprint:__ sensor.camera.semantic_segmentation
+* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
This camera classifies every object in sight by displaying it in a different color according to its tags (e.g., pedestrians in a different color than vehicles).
-When the simulation starts, every element in scene is created with a tag. So it happens when an actor is spawned. The objects are classified by their relative file path in the project. For example, meshes stored in `Unreal/CarlaUE4/Content/Static/Pedestrians` are tagged as `Pedestrian`.
+When the simulation starts, every element in the scene is created with a tag, and the same happens when an actor is spawned. The objects are classified by their relative file path in the project. For example, meshes stored in `Unreal/CarlaUE4/Content/Static/Pedestrians` are tagged as `Pedestrian`.
The server provides an image with the tag information __encoded in the red channel__: A pixel with a red value of `x` belongs to an object with tag `x`.
This raw [carla.Image](python_api.md#carla.Image) can be stored and converted it with the help of __CityScapesPalette__ in [carla.ColorConverter](python_api.md#carla.ColorConverter) to apply the tags information and show picture with the semantic segmentation.
@@ -1498,8 +1612,8 @@ The following tags are currently available:
---
## DVS camera
-* __Blueprint:__ sensor.camera.dvs
-* __Output:__ [carla.DVSEventArray](python_api.md#carla.DVSEventArray) per step (unless `sensor_tick` says otherwise).
+* __Blueprint:__ sensor.camera.dvs
+* __Output:__ [carla.DVSEventArray](python_api.md#carla.DVSEventArray) per step (unless `sensor_tick` says otherwise).
A Dynamic Vision Sensor (DVS) or Event camera is a sensor that works radically differently from a conventional camera. Instead of capturing
diff --git a/PythonAPI/docs/sensor.yml b/PythonAPI/docs/sensor.yml
index 370c67ca7..21b93fba2 100644
--- a/PythonAPI/docs/sensor.yml
+++ b/PythonAPI/docs/sensor.yml
@@ -14,6 +14,7 @@
- [Gnss sensor](ref_sensors.md#gnss-sensor).
- [IMU sensor](ref_sensors.md#imu-sensor).
- [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
+ - [RawLidar raycast](ref_sensors.md#rawlidar-raycast-sensor).
- [Radar](ref_sensors.md#radar-sensor).
- [RGB camera](ref_sensors.md#rgb-camera).
- [RSS sensor](ref_sensors.md#rss-sensor).
diff --git a/PythonAPI/docs/sensor_data.yml b/PythonAPI/docs/sensor_data.yml
index f38d1547b..7aa07e153 100644
--- a/PythonAPI/docs/sensor_data.yml
+++ b/PythonAPI/docs/sensor_data.yml
@@ -12,6 +12,7 @@
- IMU detector: carla.IMUMeasurement.
- Lane invasion detector: carla.LaneInvasionEvent.
- Lidar raycast: carla.LidarMeasurement.
+ - RawLidar raycast: carla.LidarRawMeasurement.
- Obstacle detector: carla.ObstacleDetectionEvent.
- Radar detector: carla.RadarMeasurement.
- RSS sensor: carla.RssResponse.
@@ -187,6 +188,90 @@
- def_name: __str__
# --------------------------------------
+ - class_name: LidarRawMeasurement
+ parent: carla.SensorData
+ # - DESCRIPTION ------------------------
+ doc: >
+ Class that defines the raw lidar data retrieved by a sensor.lidar.ray_cast_raw. This essentially simulates a rotating lidar using ray-casting. Learn more about this [here](ref_sensors.md#rawlidar-raycast-sensor).
+ # - PROPERTIES -------------------------
+ instance_variables:
+ - var_name: channels
+ type: int
+ doc: >
+ Number of lasers shot.
+ - var_name: horizontal_angle
+ type: float
+ doc: >
+ Horizontal angle the Lidar is rotated at the time of the measurement (in radians).
+ - var_name: raw_data
+ type: bytes
+ doc: >
+ Received list of raw detection points. Each point consists of 3D-xyz data plus the cosine of the incident angle, the index of the hit actor and its semantic tag.
+ # - METHODS ----------------------------
+ methods:
+ - def_name: save_to_disk
+ params:
+ - param_name: path
+ type: str
+ doc: >
+ Saves the point cloud to disk as a .ply file describing data from 3D scanners. The files generated are ready to be used within [MeshLab](http://www.meshlab.net/), an open-source system for processing said files. Just take into account that the axes may differ from those of Unreal Engine and so may need to be reallocated.
+ # --------------------------------------
+ - def_name: get_point_count
+ params:
+ - param_name: channel
+ type: int
+ doc: >
+ Retrieves the number of points sorted by channel that are generated by this measure. Sorting by channel makes it possible to identify the original channel of every point.
+ # --------------------------------------
+ - def_name: __getitem__
+ params:
+ - param_name: pos
+ type: int
+ # --------------------------------------
+ - def_name: __iter__
+ # --------------------------------------
+ - def_name: __len__
+ # --------------------------------------
+ - def_name: __setitem__
+ params:
+ - param_name: pos
+ type: int
+ - param_name: detection
+ type: carla.LidarRawDetection
+ # --------------------------------------
+ - def_name: __str__
+ # --------------------------------------
+
+ - class_name: LidarRawDetection
+ # - DESCRIPTION ------------------------
+ doc: >
+ Data contained inside a carla.LidarRawMeasurement. Each of these represents one of the points in the cloud with its location, the cosine of the incident angle, and the index and semantic tag of the object that was hit.
+ # - PROPERTIES -------------------------
+ instance_variables:
+ - var_name: point
+ type: carla.Location
+ doc: >
+ Point in xyz coordinates.
+ # --------------------------------------
+ - var_name: cos_inc_angle
+ type: float
+ doc: >
+ Cosine of the incident angle between the ray and the normal of the hit object.
+ # --------------------------------------
+ - var_name: object_idx
+ type: uint
+ doc: >
+ Carla index of the hit actor.
+ # --------------------------------------
+ - var_name: object_tag
+ type: uint
+ doc: >
+ Semantic tag of the hit component.
+ # - METHODS ----------------------------
+ methods:
+ - def_name: __str__
+ # --------------------------------------
+
- class_name: CollisionEvent
parent: carla.SensorData
# - DESCRIPTION ------------------------