Updated documentation for RawLidar
This commit is contained in:
parent c2cc075d23
commit 187b720902
@@ -155,6 +155,16 @@ Check out the [introduction to blueprints](core_actors.md).
        - `rotation_frequency` (_Float_)<sub>_ – Modifiable_</sub>
        - `sensor_tick` (_Float_)<sub>_ – Modifiable_</sub>
        - `upper_fov` (_Float_)<sub>_ – Modifiable_</sub>
- **<font color="#498efc">sensor.lidar.ray_cast_raw</font>**
    - **Attributes:**
        - `channels` (_Int_)<sub>_ – Modifiable_</sub>
        - `lower_fov` (_Float_)<sub>_ – Modifiable_</sub>
        - `points_per_second` (_Int_)<sub>_ – Modifiable_</sub>
        - `range` (_Float_)<sub>_ – Modifiable_</sub>
        - `role_name` (_String_)<sub>_ – Modifiable_</sub>
        - `rotation_frequency` (_Float_)<sub>_ – Modifiable_</sub>
        - `sensor_tick` (_Float_)<sub>_ – Modifiable_</sub>
        - `upper_fov` (_Float_)<sub>_ – Modifiable_</sub>
- **<font color="#498efc">sensor.other.collision</font>**
    - **Attributes:**
        - `role_name` (_String_)<sub>_ – Modifiable_</sub>
@@ -1055,6 +1055,58 @@ Retrieves the number of points sorted by channel that are generated by this measure

---

## carla.LidarRawDetection<a name="carla.LidarRawDetection"></a>
Data contained inside a [carla.LidarRawMeasurement](#carla.LidarRawMeasurement). Each of these represents one of the points in the cloud with its location and attributes regarding the hit: the cosine of the incident angle, the index of the hit actor, and its semantic tag.

<h3>Instance Variables</h3>
- <a name="carla.LidarRawDetection.point"></a>**<font color="#f8805a">point</font>** (_[carla.Location](#carla.Location)_)
Point in xyz coordinates.
- <a name="carla.LidarRawDetection.cos_inc_angle"></a>**<font color="#f8805a">cos_inc_angle</font>** (_float_)
Cosine of the incident angle between the ray and the normal of the hit object.
- <a name="carla.LidarRawDetection.object_idx"></a>**<font color="#f8805a">object_idx</font>** (_uint_)
CARLA index of the hit actor.
- <a name="carla.LidarRawDetection.object_tag"></a>**<font color="#f8805a">object_tag</font>** (_uint_)
Semantic tag of the hit component.

<h3>Methods</h3>

<h5 style="margin-top: -20px">Dunder methods</h5>
<div style="padding-left:30px;margin-top:-25px"></div>- <a name="carla.LidarRawDetection.__str__"></a>**<font color="#7fb800">\__str__</font>**(<font color="#00a6ed">**self**</font>)

---

## carla.LidarRawMeasurement<a name="carla.LidarRawMeasurement"></a>
<div style="padding-left:30px;margin-top:-20px"><small><b>Inherited from _[carla.SensorData](#carla.SensorData)_</b></small></div></p><p>Class that defines the raw lidar data retrieved by a <b>sensor.lidar.ray_cast_raw</b>. This essentially simulates a rotating lidar using ray-casting. Learn more about this [here](ref_sensors.md#rawlidar-raycast-sensor).

<h3>Instance Variables</h3>
- <a name="carla.LidarRawMeasurement.channels"></a>**<font color="#f8805a">channels</font>** (_int_)
Number of lasers shot.
- <a name="carla.LidarRawMeasurement.horizontal_angle"></a>**<font color="#f8805a">horizontal_angle</font>** (_float_)
Horizontal angle the Lidar is rotated at the time of the measurement (in radians).
- <a name="carla.LidarRawMeasurement.raw_data"></a>**<font color="#f8805a">raw_data</font>** (_bytes_)
Received list of raw detection points. Each point consists of a 3D point (xyz) plus the cosine of the incident angle, the index of the hit actor, and its semantic tag.

<h3>Methods</h3>
- <a name="carla.LidarRawMeasurement.save_to_disk"></a>**<font color="#7fb800">save_to_disk</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**path**</font>)
Saves the point cloud to disk as a <b>.ply</b> file describing data from 3D scanners. The files generated are ready to be used within [MeshLab](http://www.meshlab.net/), an open-source system for processing said files. Just take into account that the axes may differ from those in Unreal Engine, so they may need to be rearranged.
    - **Parameters:**
        - `path` (_str_)

<h5 style="margin-top: -20px">Getters</h5>
<div style="padding-left:30px;margin-top:-25px"></div>- <a name="carla.LidarRawMeasurement.get_point_count"></a>**<font color="#7fb800">get_point_count</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**channel**</font>)
Retrieves the number of points sorted by channel that are generated by this measure. Sorting by channel allows identifying the original channel for every point.
    - **Parameters:**
        - `channel` (_int_)

<h5 style="margin-top: -20px">Dunder methods</h5>
<div style="padding-left:30px;margin-top:-25px"></div>- <a name="carla.LidarRawMeasurement.__getitem__"></a>**<font color="#7fb800">\__getitem__</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**pos**=int</font>)
- <a name="carla.LidarRawMeasurement.__iter__"></a>**<font color="#7fb800">\__iter__</font>**(<font color="#00a6ed">**self**</font>)
- <a name="carla.LidarRawMeasurement.__len__"></a>**<font color="#7fb800">\__len__</font>**(<font color="#00a6ed">**self**</font>)
- <a name="carla.LidarRawMeasurement.__setitem__"></a>**<font color="#7fb800">\__setitem__</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**pos**=int</font>, <font color="#00a6ed">**detection**=[carla.LidarRawDetection](#carla.LidarRawDetection)</font>)
- <a name="carla.LidarRawMeasurement.__str__"></a>**<font color="#7fb800">\__str__</font>**(<font color="#00a6ed">**self**</font>)

---

## carla.Light<a name="carla.Light"></a>
This class exposes the lights that exist in the scene, except for vehicle lights. The properties of a light can be queried and changed at will.
Lights are automatically turned on when the simulator enters night mode (sun altitude is below zero).
@@ -1694,6 +1746,7 @@ Sets the log level.
- [Gnss sensor](ref_sensors.md#gnss-sensor).
- [IMU sensor](ref_sensors.md#imu-sensor).
- [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
- [RawLidar raycast](ref_sensors.md#rawlidar-raycast-sensor).
- [Radar](ref_sensors.md#radar-sensor).
- [RGB camera](ref_sensors.md#rgb-camera).
- [RSS sensor](ref_sensors.md#rss-sensor).
@@ -1728,6 +1781,7 @@ Base class for all the objects containing data generated by a [carla.Sensor](#ca
- IMU detector: [carla.IMUMeasurement](#carla.IMUMeasurement).
- Lane invasion detector: [carla.LaneInvasionEvent](#carla.LaneInvasionEvent).
- Lidar raycast: [carla.LidarMeasurement](#carla.LidarMeasurement).
- RawLidar raycast: [carla.LidarRawMeasurement](#carla.LidarRawMeasurement).
- Obstacle detector: [carla.ObstacleDetectionEvent](#carla.ObstacleDetectionEvent).
- Radar detector: [carla.RadarMeasurement](#carla.RadarMeasurement).
- RSS sensor: [carla.RssResponse](#carla.RssResponse).
@@ -6,6 +6,7 @@
* [__IMU sensor__](#imu-sensor)
* [__Lane invasion detector__](#lane-invasion-detector)
* [__Lidar raycast sensor__](#lidar-raycast-sensor)
* [__RawLidar raycast sensor__](#rawlidar-raycast-sensor)
* [__Obstacle detector__](#obstacle-detector)
* [__Radar sensor__](#radar-sensor)
* [__RGB camera__](#rgb-camera)
@@ -485,16 +486,16 @@ where a is the attenuation coefficient and d is the distance to the sensor.

In order to increase the realism, we add the possibility of dropping cloud points. This is done in two different ways. In a general way, points can be randomly dropped with a probability given by <b>dropoff_general_rate</b>. In this case, the drop off is performed before tracing the ray cast, so adjusting this parameter can improve performance; if it is set to zero, it is ignored. The second way regulates the drop off at a rate proportional to the intensity: the drop off rate is zero for intensities above <b>dropoff_intensity_limit</b> and grows linearly up to <b>dropoff_zero_intensity</b> for points with zero intensity. A sketch of setting these attributes follows.
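A minimal sketch of configuring these drop-off attributes on the lidar blueprint, assuming an already-connected `world` object; the numeric values are illustrative, not recommended defaults:

```py
# Hedged sketch: configure point drop-off on the lidar blueprint.
# `world` is assumed to be an existing carla.World; values are illustrative.
blueprint_library = world.get_blueprint_library()
lidar_bp = blueprint_library.find('sensor.lidar.ray_cast')
lidar_bp.set_attribute('dropoff_general_rate', '0.1')     # drop 10% of points at random
lidar_bp.set_attribute('dropoff_intensity_limit', '0.8')  # no intensity-based drop above 0.8
lidar_bp.set_attribute('dropoff_zero_intensity', '0.4')   # 40% drop chance at zero intensity
```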

This output contains a cloud of simulation points with their intensity and thus, can be iterated to retrieve a list of their [`carla.LidarDetection`](python_api.md#carla.LidarDetection):

```py
for detection in lidar_measurement:
    print(detection)
```

The rotation of the LIDAR can be tuned to cover a specific angle on every simulation step (using a [fixed time-step](adv_synchrony_timestep.md)). For example, to rotate once per step (full circle output, as in the picture below), the rotation frequency and the simulated FPS should be equal. <br> __1.__ Set the sensor's frequency `sensors_bp['lidar'][0].set_attribute('rotation_frequency','10')`. <br> __2.__ Run the simulation using `python config.py --fps=10`.

![LidarPointCloud](img/lidar_point_cloud.png)

#### Lidar attributes

@@ -610,6 +611,119 @@ The rotation of the LIDAR can be tuned to cover a specific angle on every simula
</tbody>
</table>

---
## RawLidar raycast sensor

* __Blueprint:__ sensor.lidar.ray_cast_raw
* __Output:__ [carla.LidarRawMeasurement](python_api.md#carla.LidarRawMeasurement) per step (unless `sensor_tick` says otherwise).

This sensor simulates a rotating Lidar implemented using ray-casting that exposes all the information about the hit. Its behaviour is quite similar to the [Lidar raycast sensor](#lidar-raycast-sensor), but this sensor has none of the intensity, dropoff or noise features, and its output is more complete.
The points are computed by adding a laser for each channel distributed in the vertical FOV. The rotation is simulated by computing the horizontal angle that the Lidar rotates in a frame. The point cloud is calculated by doing a ray-cast for each laser in every step:
`points_per_channel_each_step = points_per_second / (FPS * channels)`

A Lidar measurement contains a packet with all the points generated during a `1/FPS` interval. During this interval the physics are not updated, so all the points in a measurement reflect the same "static picture" of the scene.
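For instance, with the blueprint defaults below (56000 points per second, 32 channels) and an assumed simulation rate of 20 FPS, each laser casts 87.5 points per step on average; a quick check:

```py
# Worked example of the formula above; the 20 FPS value is illustrative.
points_per_second = 56000  # blueprint default
channels = 32              # blueprint default
fps = 20                   # assumed fixed time-step of 0.05 s

points_per_channel_each_step = points_per_second / (fps * channels)
print(points_per_channel_each_step)  # -> 87.5
```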

This output contains a cloud of raw lidar detections and therefore can be iterated to retrieve a list of their [`carla.LidarRawDetection`](python_api.md#carla.LidarRawDetection):

```py
for detection in lidar_raw_measurement:
    print(detection)
```
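Each raw detection carries the fields listed in [carla.LidarRawDetection](python_api.md#carla.LidarRawDetection). A minimal sketch of reading them (the `0.9` threshold is illustrative):

```py
# Hedged sketch: inspect the fields of each raw detection.
for detection in lidar_raw_measurement:
    location = detection.point        # carla.Location, xyz coordinates
    cosine = detection.cos_inc_angle  # cosine of the incident angle
    actor_idx = detection.object_idx  # CARLA index of the hit actor
    tag = detection.object_tag        # semantic tag of the hit component
    if cosine > 0.9:                  # illustrative filter: near-perpendicular hits
        print(location, actor_idx, tag)
```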

The rotation of the LIDAR can be tuned to cover a specific angle on every simulation step (using a [fixed time-step](adv_synchrony_timestep.md)). For example, to rotate once per step (full circle output, as in the picture below), the rotation frequency and the simulated FPS should be equal. <br> __1.__ Set the sensor's frequency `sensors_bp['lidar'][0].set_attribute('rotation_frequency','10')`. <br> __2.__ Run the simulation using `python config.py --fps=10`.

![LidarPointCloud](img/rawlidar_point_cloud.png)

#### Lidar attributes

<table class ="defTable">
<thead>
<th>Blueprint attribute</th>
<th>Type</th>
<th>Default</th>
<th>Description</th>
</thead>
<tbody>
<td>
<code>channels</code> </td>
<td>int</td>
<td>32</td>
<td>Number of lasers.</td>
<tr>
<td><code>range</code></td>
<td>float</td>
<td>10.0</td>
<td>Maximum distance to measure/raycast in meters (centimeters for CARLA 0.9.6 or previous).</td>
<tr>
<td><code>points_per_second</code></td>
<td>int</td>
<td>56000</td>
<td>Points generated by all lasers per second.</td>
<tr>
<td><code>rotation_frequency</code></td>
<td>float</td>
<td>10.0</td>
<td>Lidar rotation frequency.</td>
<tr>
<td><code>upper_fov</code></td>
<td>float</td>
<td>10.0</td>
<td>Angle in degrees of the highest laser.</td>
<tr>
<td><code>lower_fov</code></td>
<td>float</td>
<td>-30.0</td>
<td>Angle in degrees of the lowest laser.</td>
<tr>
<td><code>sensor_tick</code></td>
<td>float</td>
<td>0.0</td>
<td>Simulation seconds between sensor captures (ticks).</td>
</tbody>
</table>
<br>
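A minimal sketch of spawning this sensor and listening to it, assuming an existing `world` and a `vehicle` actor to attach to; the mounting offset is illustrative:

```py
import carla

# Hedged sketch: spawn a RawLidar and print each measurement.
# `world` and `vehicle` are assumed to exist already.
lidar_bp = world.get_blueprint_library().find('sensor.lidar.ray_cast_raw')
lidar_bp.set_attribute('channels', '32')
lidar_bp.set_attribute('points_per_second', '56000')
lidar_bp.set_attribute('rotation_frequency', '10')

transform = carla.Transform(carla.Location(x=0.0, z=2.4))  # roof-mounted, illustrative
lidar = world.spawn_actor(lidar_bp, transform, attach_to=vehicle)
lidar.listen(lambda measurement: print(measurement))
```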

#### Output attributes

<table class ="defTable">
<thead>
<th>Sensor data attribute</th>
<th>Type</th>
<th>Description</th>
</thead>
<tbody>
<td>
<code>frame</code> </td>
<td>int</td>
<td>Frame number when the measurement took place.</td>
<tr>
<td><code>timestamp</code></td>
<td>double</td>
<td>Simulation time of the measurement in seconds since the beginning of the episode.</td>
<tr>
<td><code>transform</code></td>
<td><a href="../python_api#carlatransform">carla.Transform</a></td>
<td>Location and rotation in world coordinates of the sensor at the time of the measurement.</td>
<tr>
<td><code>horizontal_angle</code></td>
<td>float</td>
<td>Angle (radians) in the XY plane of the lidar this frame.</td>
<tr>
<td><code>channels</code></td>
<td>int</td>
<td>Number of channels (lasers) of the lidar.</td>
<tr>
<td><code>get_point_count(channel)</code></td>
<td>int</td>
<td>Number of points per channel captured this frame.</td>
<tr>
<td><code>raw_data</code></td>
<td>bytes</td>
<td>Array that can be transformed into raw detections. Each detection consists of four 32-bit floats (the XYZ coordinates of the point and the cosine of the incident angle) and two unsigned ints (the index of the hit actor and its semantic tag).</td>
</tbody>
</table>
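Since every detection is a fixed-size record, `raw_data` can be decoded with a structured NumPy dtype. A sketch following the layout described above, assuming the unsigned ints are 32-bit:

```py
import numpy as np

# Hedged sketch: decode raw_data into a structured array.
# Layout per detection: 4 x float32 (x, y, z, cos_inc_angle)
# followed by 2 x uint32 (object_idx, object_tag); the 32-bit
# width of the unsigned ints is an assumption.
dtype = np.dtype([
    ('x', np.float32), ('y', np.float32), ('z', np.float32),
    ('cos_inc_angle', np.float32),
    ('object_idx', np.uint32), ('object_tag', np.uint32),
])
points = np.frombuffer(lidar_raw_measurement.raw_data, dtype=dtype)
print(points['object_tag'])  # semantic tag of every hit
```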

---
## Obstacle detector

@@ -1118,21 +1232,21 @@ Since these effects are provided by UE, please make sure to check their document
---
## RSS sensor

* __Blueprint:__ sensor.other.rss
* __Output:__ [carla.RssResponse](python_api.md#carla.RssResponse) per step (unless `sensor_tick` says otherwise).

!!! Important
    It is highly recommended to read the specific [rss documentation](adv_rss.md) before reading this.

This sensor integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in CARLA. It is disabled by default, and it has to be explicitly built in order to be used.

The RSS sensor calculates the RSS state of a vehicle and retrieves the current RSS Response as sensor data. The [carla.RssRestrictor](python_api.md#carla.RssRestrictor) will use this data to adapt a [carla.VehicleControl](python_api.md#carla.VehicleControl) before applying it to a vehicle.

These controllers can be generated by an *Automated Driving* stack or user input. For instance, hereunder there is a fragment of code from `PythonAPI/examples/manual_control_rss.py`, where the user input is modified using RSS when necessary.

__1.__ Checks if the __RssSensor__ generates a valid response containing restrictions.
__2.__ Gathers the current dynamics of the vehicle and the vehicle physics.
__3.__ Applies restrictions to the vehicle control using the response from the RssSensor, and the current dynamics and physics of the vehicle.

```py
rss_restriction = self._world.rss_sensor.acceleration_restriction if self._world.rss_sensor and self._world.rss_sensor.response_valid else None
@@ -1147,7 +1261,7 @@ if rss_restriction:

#### The carla.RssSensor class

The blueprint for this sensor has no modifiable attributes. However, the [carla.RssSensor](python_api.md#carla.RssSensor) object that it instantiates has attributes and methods that are detailed in the Python API reference. Here is a summary of them.

<table class ="defTable">
<thead>
@@ -1191,7 +1305,7 @@ def _on_rss_response(weak_self, response):
!!! Warning
    This sensor works fully on the client side. There is no blueprint in the server. Changes on the attributes will take effect __after__ *listen()* has been called.

The methods available in this class are related to the routing of the vehicle. RSS calculations are always based on a route of the ego vehicle through the road network.

The sensor allows controlling the considered route by providing some key points, which could be the [carla.Transform](python_api.md#carla.Transform) in a [carla.Waypoint](python_api.md#carla.Waypoint). These points are best selected after the intersections to force the route to take the desired turn.
@@ -1298,11 +1412,11 @@ def _on_actor_constellation_request(self, actor_constellation_data):
---
## Semantic segmentation camera

* __Blueprint:__ sensor.camera.semantic_segmentation
* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).

This camera classifies every object in sight by displaying it in a different color according to its tags (e.g., pedestrians in a different color than vehicles).
When the simulation starts, every element in the scene is created with a tag, and the same happens when an actor is spawned. The objects are classified by their relative file path in the project. For example, meshes stored in `Unreal/CarlaUE4/Content/Static/Pedestrians` are tagged as `Pedestrian`.

The server provides an image with the tag information __encoded in the red channel__: a pixel with a red value of `x` belongs to an object with tag `x`.
This raw [carla.Image](python_api.md#carla.Image) can be stored and converted with the help of __CityScapesPalette__ in [carla.ColorConverter](python_api.md#carla.ColorConverter) to apply the tag information and display a picture with the semantic segmentation.
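A minimal sketch of that conversion inside a sensor callback; the output path is illustrative:

```py
import carla

# Hedged sketch: save the image with the CityScapes color palette applied.
# The callback receives a carla.Image; the path pattern is illustrative.
def save_segmentation(image):
    image.save_to_disk('output/%06d.png' % image.frame,
                       carla.ColorConverter.CityScapesPalette)

camera.listen(save_segmentation)  # `camera` is an already-spawned sensor
```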
@@ -1498,8 +1612,8 @@ The following tags are currently available:
---
## DVS camera

* __Blueprint:__ sensor.camera.dvs
* __Output:__ [carla.DVSEventArray](python_api.md#carla.DVSEventArray) per step (unless `sensor_tick` says otherwise).

A Dynamic Vision Sensor (DVS) or Event camera is a sensor that works radically differently from a conventional camera. Instead of capturing
@@ -14,6 +14,7 @@
- [Gnss sensor](ref_sensors.md#gnss-sensor).
- [IMU sensor](ref_sensors.md#imu-sensor).
- [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
- [RawLidar raycast](ref_sensors.md#rawlidar-raycast-sensor).
- [Radar](ref_sensors.md#radar-sensor).
- [RGB camera](ref_sensors.md#rgb-camera).
- [RSS sensor](ref_sensors.md#rss-sensor).
@@ -12,6 +12,7 @@
- IMU detector: carla.IMUMeasurement.
- Lane invasion detector: carla.LaneInvasionEvent.
- Lidar raycast: carla.LidarMeasurement.
- RawLidar raycast: carla.LidarRawMeasurement.
- Obstacle detector: carla.ObstacleDetectionEvent.
- Radar detector: carla.RadarMeasurement.
- RSS sensor: carla.RssResponse.
@@ -187,6 +188,90 @@
  - def_name: __str__
  # --------------------------------------

- class_name: LidarRawMeasurement
  parent: carla.SensorData
  # - DESCRIPTION ------------------------
  doc: >
    Class that defines the raw lidar data retrieved by a <b>sensor.lidar.ray_cast_raw</b>. This essentially simulates a rotating lidar using ray-casting. Learn more about this [here](ref_sensors.md#rawlidar-raycast-sensor).
  # - PROPERTIES -------------------------
  instance_variables:
  - var_name: channels
    type: int
    doc: >
      Number of lasers shot.
  - var_name: horizontal_angle
    type: float
    doc: >
      Horizontal angle the Lidar is rotated at the time of the measurement (in radians).
  - var_name: raw_data
    type: bytes
    doc: >
      Received list of raw detection points. Each point consists of a 3D point (xyz) plus the cosine of the incident angle, the index of the hit actor, and its semantic tag.
  # - METHODS ----------------------------
  methods:
  - def_name: save_to_disk
    params:
    - param_name: path
      type: str
    doc: >
      Saves the point cloud to disk as a <b>.ply</b> file describing data from 3D scanners. The files generated are ready to be used within [MeshLab](http://www.meshlab.net/), an open-source system for processing said files. Just take into account that the axes may differ from those in Unreal Engine, so they may need to be rearranged.
  # --------------------------------------
  - def_name: get_point_count
    params:
    - param_name: channel
      type: int
    doc: >
      Retrieves the number of points sorted by channel that are generated by this measure. Sorting by channel allows identifying the original channel for every point.
  # --------------------------------------
  - def_name: __getitem__
    params:
    - param_name: pos
      type: int
  # --------------------------------------
  - def_name: __iter__
  # --------------------------------------
  - def_name: __len__
  # --------------------------------------
  - def_name: __setitem__
    params:
    - param_name: pos
      type: int
    - param_name: detection
      type: carla.LidarRawDetection
  # --------------------------------------
  - def_name: __str__
  # --------------------------------------

- class_name: LidarRawDetection
  # - DESCRIPTION ------------------------
  doc: >
    Data contained inside a carla.LidarRawMeasurement. Each of these represents one of the points in the cloud with its location and attributes regarding the hit: the cosine of the incident angle, the index of the hit actor, and its semantic tag.
  # - PROPERTIES -------------------------
  instance_variables:
  - var_name: point
    type: carla.Location
    doc: >
      Point in xyz coordinates.
  # --------------------------------------
  - var_name: cos_inc_angle
    type: float
    doc: >
      Cosine of the incident angle between the ray and the normal of the hit object.
  # --------------------------------------
  - var_name: object_idx
    type: uint
    doc: >
      CARLA index of the hit actor.
  # --------------------------------------
  - var_name: object_tag
    type: uint
    doc: >
      Semantic tag of the hit component.
  # - METHODS ----------------------------
  methods:
  - def_name: __str__
  # --------------------------------------

- class_name: CollisionEvent
  parent: carla.SensorData
  # - DESCRIPTION ------------------------