Refactored documentation

parent a131e16ddb
commit 2454972098
@@ -155,7 +155,7 @@ Check out the [introduction to blueprints](core_actors.md).
   - `rotation_frequency` (_Float_)<sub>_ – Modifiable_</sub>
   - `sensor_tick` (_Float_)<sub>_ – Modifiable_</sub>
   - `upper_fov` (_Float_)<sub>_ – Modifiable_</sub>
-- **<font color="#498efc">sensor.lidar.ray_cast_raw</font>**
+- **<font color="#498efc">sensor.lidar.ray_cast_semantic</font>**
   - **Attributes:**
     - `channels` (_Int_)<sub>_ – Modifiable_</sub>
     - `lower_fov` (_Float_)<sub>_ – Modifiable_</sub>

Binary image changed. Before: 137 KiB. After: 137 KiB.
@@ -1055,58 +1055,6 @@ Retrieves the number of points sorted by channel that are generated by this measure. Sorting by channel allows to identify the original channel for every point.
 
 ---
 
-## carla.LidarRawDetection<a name="carla.LidarRawDetection"></a>
-Data contained inside a [carla.LidarRawMeasurement](#carla.LidarRawMeasurement). Each of these represents one of the points in the cloud with its location and its asociated intensity.
-
-<h3>Instance Variables</h3>
-- <a name="carla.LidarRawDetection.point"></a>**<font color="#f8805a">point</font>** (_[carla.Location](#carla.Location)_)
-Point in xyz coordinates.
-- <a name="carla.LidarRawDetection.cos_inc_angle"></a>**<font color="#f8805a">cos_inc_angle</font>** (_float_)
-Cosine of the incident angle between the ray and the normal of the hit object.
-- <a name="carla.LidarRawDetection.object_idx"></a>**<font color="#f8805a">object_idx</font>** (_uint_)
-Carla index of the hitted actor.
-- <a name="carla.LidarRawDetection.object_tag"></a>**<font color="#f8805a">object_tag</font>** (_uint_)
-Semantic tag of the hitted component.
-
-<h3>Methods</h3>
-
-<h5 style="margin-top: -20px">Dunder methods</h5>
-<div style="padding-left:30px;margin-top:-25px"></div>- <a name="carla.LidarRawDetection.__str__"></a>**<font color="#7fb800">\__str__</font>**(<font color="#00a6ed">**self**</font>)
-
----
-
-## carla.LidarRawMeasurement<a name="carla.LidarRawMeasurement"></a>
-<div style="padding-left:30px;margin-top:-20px"><small><b>Inherited from _[carla.SensorData](#carla.SensorData)_</b></small></div></p><p>Class that defines the raw lidar data retrieved by a <b>sensor.lidar.ray_cast_raw</b>. This essentially simulates a rotating lidar using ray-casting. Learn more about this [here](ref_sensors.md#rawlidar-raycast-sensor).
-
-<h3>Instance Variables</h3>
-- <a name="carla.LidarRawMeasurement.channels"></a>**<font color="#f8805a">channels</font>** (_int_)
-Number of lasers shot.
-- <a name="carla.LidarRawMeasurement.horizontal_angle"></a>**<font color="#f8805a">horizontal_angle</font>** (_float_)
-Horizontal angle the Lidar is rotated at the time of the measurement (in radians).
-- <a name="carla.LidarRawMeasurement.raw_data"></a>**<font color="#f8805a">raw_data</font>** (_bytes_)
-Received list of raw detection points. Each point consists in a 3D-xyz data plus cosine of the incident angle, the idx of the hit actor and its semantic tag.
-
-<h3>Methods</h3>
-- <a name="carla.LidarRawMeasurement.save_to_disk"></a>**<font color="#7fb800">save_to_disk</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**path**</font>)
-Saves the point cloud to disk as a <b>.ply</b> file describing data from 3D scanners. The files generated are ready to be used within [MeshLab](http://www.meshlab.net/), an open source system for processing said files. Just take into account that axis may differ from Unreal Engine and so, need to be reallocated.
-- **Parameters:**
-    - `path` (_str_)
-
-<h5 style="margin-top: -20px">Getters</h5>
-<div style="padding-left:30px;margin-top:-25px"></div>- <a name="carla.LidarRawMeasurement.get_point_count"></a>**<font color="#7fb800">get_point_count</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**channel**</font>)
-Retrieves the number of points sorted by channel that are generated by this measure. Sorting by channel allows to identify the original channel for every point.
-- **Parameters:**
-    - `channel` (_int_)
-
-<h5 style="margin-top: -20px">Dunder methods</h5>
-<div style="padding-left:30px;margin-top:-25px"></div>- <a name="carla.LidarRawMeasurement.__getitem__"></a>**<font color="#7fb800">\__getitem__</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**pos**=int</font>)
-- <a name="carla.LidarRawMeasurement.__iter__"></a>**<font color="#7fb800">\__iter__</font>**(<font color="#00a6ed">**self**</font>)
-- <a name="carla.LidarRawMeasurement.__len__"></a>**<font color="#7fb800">\__len__</font>**(<font color="#00a6ed">**self**</font>)
-- <a name="carla.LidarRawMeasurement.__setitem__"></a>**<font color="#7fb800">\__setitem__</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**pos**=int</font>, <font color="#00a6ed">**detection**=[carla.LidarRawDetection](#carla.LidarRawDetection)</font>)
-- <a name="carla.LidarRawMeasurement.__str__"></a>**<font color="#7fb800">\__str__</font>**(<font color="#00a6ed">**self**</font>)
-
----
-
 ## carla.Light<a name="carla.Light"></a>
 This class exposes the lights that exist in the scene, except for vehicle lights. The properties of a light can be queried and changed at will.
 Lights are automatically turned on when the simulator enters night mode (sun altitude is below zero).
@@ -1737,6 +1685,58 @@ Sets the log level.
 
 ---
 
+## carla.SemanticLidarDetection<a name="carla.SemanticLidarDetection"></a>
+Data contained inside a [carla.SemanticLidarMeasurement](#carla.SemanticLidarMeasurement). Each of these represents one of the points in the cloud with its location and its associated intensity.
+
+<h3>Instance Variables</h3>
+- <a name="carla.SemanticLidarDetection.point"></a>**<font color="#f8805a">point</font>** (_[carla.Location](#carla.Location)_)
+Point in xyz coordinates.
+- <a name="carla.SemanticLidarDetection.cos_inc_angle"></a>**<font color="#f8805a">cos_inc_angle</font>** (_float_)
+Cosine of the incident angle between the ray and the normal of the hit object.
+- <a name="carla.SemanticLidarDetection.object_idx"></a>**<font color="#f8805a">object_idx</font>** (_uint_)
+CARLA index of the hit actor.
+- <a name="carla.SemanticLidarDetection.object_tag"></a>**<font color="#f8805a">object_tag</font>** (_uint_)
+Semantic tag of the hit component.
+
+<h3>Methods</h3>
+
+<h5 style="margin-top: -20px">Dunder methods</h5>
+<div style="padding-left:30px;margin-top:-25px"></div>- <a name="carla.SemanticLidarDetection.__str__"></a>**<font color="#7fb800">\__str__</font>**(<font color="#00a6ed">**self**</font>)
+
+---
+
+## carla.SemanticLidarMeasurement<a name="carla.SemanticLidarMeasurement"></a>
+<div style="padding-left:30px;margin-top:-20px"><small><b>Inherited from _[carla.SensorData](#carla.SensorData)_</b></small></div></p><p>Class that defines the semantic lidar data retrieved by a <b>sensor.lidar.ray_cast_semantic</b>. This essentially simulates a rotating lidar using ray-casting. Learn more about this [here](ref_sensors.md#semanticlidar-raycast-sensor).
+
+<h3>Instance Variables</h3>
+- <a name="carla.SemanticLidarMeasurement.channels"></a>**<font color="#f8805a">channels</font>** (_int_)
+Number of lasers shot.
+- <a name="carla.SemanticLidarMeasurement.horizontal_angle"></a>**<font color="#f8805a">horizontal_angle</font>** (_float_)
+Horizontal angle the Lidar is rotated at the time of the measurement (in radians).
+- <a name="carla.SemanticLidarMeasurement.raw_data"></a>**<font color="#f8805a">raw_data</font>** (_bytes_)
+Received list of raw detection points. Each point consists of 3D xyz data plus the cosine of the incident angle, the index of the hit actor and its semantic tag.
+
+<h3>Methods</h3>
+- <a name="carla.SemanticLidarMeasurement.save_to_disk"></a>**<font color="#7fb800">save_to_disk</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**path**</font>)
+Saves the point cloud to disk as a <b>.ply</b> file describing data from 3D scanners. The files generated are ready to be used within [MeshLab](http://www.meshlab.net/), an open source system for processing said files. Just take into account that the axes may differ from Unreal Engine's and so may need to be reallocated.
+- **Parameters:**
+    - `path` (_str_)
+
+<h5 style="margin-top: -20px">Getters</h5>
+<div style="padding-left:30px;margin-top:-25px"></div>- <a name="carla.SemanticLidarMeasurement.get_point_count"></a>**<font color="#7fb800">get_point_count</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**channel**</font>)
+Retrieves the number of points sorted by channel that are generated by this measure. Sorting by channel allows identifying the original channel for every point.
+- **Parameters:**
+    - `channel` (_int_)
+
+<h5 style="margin-top: -20px">Dunder methods</h5>
+<div style="padding-left:30px;margin-top:-25px"></div>- <a name="carla.SemanticLidarMeasurement.__getitem__"></a>**<font color="#7fb800">\__getitem__</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**pos**=int</font>)
+- <a name="carla.SemanticLidarMeasurement.__iter__"></a>**<font color="#7fb800">\__iter__</font>**(<font color="#00a6ed">**self**</font>)
+- <a name="carla.SemanticLidarMeasurement.__len__"></a>**<font color="#7fb800">\__len__</font>**(<font color="#00a6ed">**self**</font>)
+- <a name="carla.SemanticLidarMeasurement.__setitem__"></a>**<font color="#7fb800">\__setitem__</font>**(<font color="#00a6ed">**self**</font>, <font color="#00a6ed">**pos**=int</font>, <font color="#00a6ed">**detection**=[carla.SemanticLidarDetection](#carla.SemanticLidarDetection)</font>)
+- <a name="carla.SemanticLidarMeasurement.__str__"></a>**<font color="#7fb800">\__str__</font>**(<font color="#00a6ed">**self**</font>)
+
+---
+
 ## carla.Sensor<a name="carla.Sensor"></a>
 <div style="padding-left:30px;margin-top:-20px"><small><b>Inherited from _[carla.Actor](#carla.Actor)_</b></small></div></p><p>Sensors compound a specific family of actors quite diverse and unique. They are normally spawned as attachment/sons of a vehicle (take a look at [carla.World](#carla.World) to learn about actor spawning). Sensors are thoroughly designed to retrieve different types of data that they are listening to. The data they receive is shaped as different subclasses inherited from [carla.SensorData](#carla.SensorData) (depending on the sensor).
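The `raw_data` layout documented above (four 32-bit floats and two 32-bit unsigned ints per detection) can be decoded with a NumPy structured dtype. This is a minimal sketch under assumptions: the field names and the `parse_semantic_lidar_buffer` helper are ours, not part of the CARLA API, and it runs on a synthetic buffer instead of a live sensor.

```python
import numpy as np

# Assumed per-detection layout, matching the docs above: x, y, z and the
# cosine of the incident angle as float32, then actor index and semantic
# tag as uint32. Field names are illustrative, not CARLA's.
detection_dtype = np.dtype([
    ('x', np.float32), ('y', np.float32), ('z', np.float32),
    ('cos_inc_angle', np.float32),
    ('object_idx', np.uint32), ('object_tag', np.uint32),
])

def parse_semantic_lidar_buffer(raw_data):
    """Turn a raw_data-style byte buffer into a structured point array."""
    return np.frombuffer(raw_data, dtype=detection_dtype)

# Self-contained demo: pack two fake detections and parse them back.
sample = np.array([(1.0, 2.0, 3.0, 0.5, 42, 10),
                   (4.0, 5.0, 6.0, 0.9, 7, 4)], dtype=detection_dtype)
points = parse_semantic_lidar_buffer(sample.tobytes())
print(points['object_tag'])  # semantic tag of each hit
```

With a real measurement the same call would be `parse_semantic_lidar_buffer(measurement.raw_data)`, assuming the byte layout matches the table description.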
@@ -1746,7 +1746,7 @@ Sets the log level.
 - [Gnss sensor](ref_sensors.md#gnss-sensor).
 - [IMU sensor](ref_sensors.md#imu-sensor).
 - [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
-- [RawLidar raycast](ref_sensors.md#rawlidar-raycast-sensor).
+- [SemanticLidar raycast](ref_sensors.md#semanticlidar-raycast-sensor).
 - [Radar](ref_sensors.md#radar-sensor).
 - [RGB camera](ref_sensors.md#rgb-camera).
 - [RSS sensor](ref_sensors.md#rss-sensor).
@@ -1781,7 +1781,7 @@ Base class for all the objects containing data generated by a [carla.Sensor](#carla.Sensor).
 - IMU detector: [carla.IMUMeasurement](#carla.IMUMeasurement).
 - Lane invasion detector: [carla.LaneInvasionEvent](#carla.LaneInvasionEvent).
 - Lidar raycast: [carla.LidarMeasurement](#carla.LidarMeasurement).
-- RawLidar raycast: [carla.LidarRawMeasurement](#carla.LidarRawMeasurement).
+- SemanticLidar raycast: [carla.SemanticLidarMeasurement](#carla.SemanticLidarMeasurement).
 - Obstacle detector: [carla.ObstacleDetectionEvent](#carla.ObstacleDetectionEvent).
 - Radar detector: [carla.RadarMeasurement](#carla.RadarMeasurement).
 - RSS sensor: [carla.RssResponse](#carla.RssResponse).
@@ -6,7 +6,7 @@
 * [__IMU sensor__](#imu-sensor)
 * [__Lane invasion detector__](#lane-invasion-detector)
 * [__Lidar raycast sensor__](#lidar-raycast-sensor)
-* [__RawLidar raycast sensor__](#rawlidar-raycast-sensor)
+* [__SemanticLidar raycast sensor__](#semanticlidar-raycast-sensor)
 * [__Obstacle detector__](#obstacle-detector)
 * [__Radar sensor__](#radar-sensor)
 * [__RGB camera__](#rgb-camera)
@@ -486,16 +486,16 @@ where a is the attenuation coefficient and d is the distance to the sensor.
 
 In order to increase the realism, we add the possibility of dropping cloud points. This is done in two different ways. In a general way, we can randomly drop points with a probability given by <b>dropoff_general_rate</b>. In this case, the dropping of points is done before tracing the ray cast, so adjusting this parameter can improve performance. If that parameter is set to zero, it will be ignored. The second way to regulate the dropping of points is at a rate proportional to the intensity. This drop-off rate goes from zero at <b>dropoff_intensity_limit</b> to <b>dropoff_zero_intensity</b> at zero intensity.
 
-This output contains a cloud of simulation points with its intensity and thus, can be iterated to retrieve a list of their [`carla.LidarDetection`](python_api.md#carla.LidarDetection):
+This output contains a cloud of simulation points and thus can be iterated to retrieve a list of their [`carla.Location`](python_api.md#carla.Location):
 
 ```py
-for detection in lidar_measurement:
-    print(detection)
+for location in lidar_measurement:
+    print(location)
 ```
 
 The rotation of the LIDAR can be tuned to cover a specific angle on every simulation step (using a [fixed time-step](adv_synchrony_timestep.md)). For example, to rotate once per step (full circle output, as in the picture below), the rotation frequency and the simulated FPS should be equal. <br> __1.__ Set the sensor's frequency `sensors_bp['lidar'][0].set_attribute('rotation_frequency','10')`. <br> __2.__ Run the simulation using `python config.py --fps=10`.
 
-![LidarPointCloud](img/lidar_point_cloud.png)
+![LidarPointCloud](img/lidar_point_cloud.gif)
 
 #### Lidar attributes
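The relation spelled out above (rotation frequency equal to the simulated FPS for one full turn per step) is simple arithmetic; a small sketch with a helper name of our own choosing:

```python
# Angle swept by the lidar during one simulation step: the sensor completes
# rotation_frequency_hz full turns per simulated second, and each step lasts
# 1 / simulated_fps simulated seconds. Helper name is ours, for illustration.
def degrees_per_step(rotation_frequency_hz, simulated_fps):
    return 360.0 * rotation_frequency_hz / simulated_fps

# rotation_frequency == fps gives a full 360-degree sweep per step.
print(degrees_per_step(10, 10))  # 360.0
print(degrees_per_step(10, 20))  # 180.0, i.e. half a circle per step
```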
@@ -613,10 +613,10 @@ The rotation of the LIDAR can be tuned to cover a specific angle on every simulation step.
 
 ---
-## RawLidar raycast sensor
+## SemanticLidar raycast sensor
 
-* __Blueprint:__ sensor.lidar.ray_cast_raw
-* __Output:__ [carla.LidarRawMeasurement](python_api.md#carla.LidarRawMeasurement) per step (unless `sensor_tick` says otherwise).
+* __Blueprint:__ sensor.lidar.ray_cast_semantic
+* __Output:__ [carla.SemanticLidarMeasurement](python_api.md#carla.SemanticLidarMeasurement) per step (unless `sensor_tick` says otherwise).
 
 This sensor simulates a rotating Lidar implemented using ray-casting that exposes all the information about the raycast hit. Its behaviour is quite similar to the [Lidar raycast sensor](#lidar-raycast-sensor) but this sensor does not have any of the intensity, dropoff or noise features and its output is more complete.
 The points are computed by adding a laser for each channel distributed in the vertical FOV. The rotation is simulated computing the horizontal angle that the Lidar rotated in a frame. The point cloud is calculated by doing a ray-cast for each laser in every step:
@@ -624,18 +624,18 @@ The points are computed by adding a laser for each channel distributed in the vertical FOV.
 
 A Lidar measurement contains a packet with all the points generated during a `1/FPS` interval. During this interval the physics are not updated so all the points in a measurement reflect the same "static picture" of the scene.
 
-This output contains a cloud of lidar raw detections and therefore, it can be iterated to retrieve a list of their [`carla.LidarRawDetection`](python_api.md#carla.LidarRawDetection):
+This output contains a cloud of lidar semantic detections and therefore, it can be iterated to retrieve a list of their [`carla.SemanticLidarDetection`](python_api.md#carla.SemanticLidarDetection):
 
 ```py
-for detection in lidar_raw_measurement:
+for detection in semantic_lidar_measurement:
     print(detection)
 ```
 
 The rotation of the LIDAR can be tuned to cover a specific angle on every simulation step (using a [fixed time-step](adv_synchrony_timestep.md)). For example, to rotate once per step (full circle output, as in the picture below), the rotation frequency and the simulated FPS should be equal. <br> __1.__ Set the sensor's frequency `sensors_bp['lidar'][0].set_attribute('rotation_frequency','10')`. <br> __2.__ Run the simulation using `python config.py --fps=10`.
 
-![LidarPointCloud](img/rawlidar_point_cloud.png)
+![LidarPointCloud](img/semantic_lidar_point_cloud.png)
 
-#### Lidar attributes
+#### SemanticLidar attributes
 
 <table class ="defTable">
 <thead>
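The iteration pattern in the updated snippet can be exercised without a simulator. A sketch using stand-in objects: the `Detection` namedtuple and the sample cloud are fabricated for illustration, but the attribute names match those documented for `carla.SemanticLidarDetection`.

```python
from collections import Counter, namedtuple

# Hypothetical stand-in for carla.SemanticLidarDetection so the snippet runs
# without a CARLA server; real detections expose the same attribute names.
Detection = namedtuple('Detection', ['point', 'cos_inc_angle',
                                     'object_idx', 'object_tag'])

def tag_histogram(measurement):
    """Count how many points hit each semantic tag in one measurement."""
    return Counter(det.object_tag for det in measurement)

# Fabricated three-point cloud: two hits on the actor tagged 4, one on tag 10.
cloud = [Detection((0.0, 0.0, 1.0), 0.9, 12, 4),
         Detection((1.0, 0.0, 1.0), 0.8, 12, 4),
         Detection((0.0, 2.0, 1.0), 0.7, 30, 10)]
print(tag_histogram(cloud))  # Counter({4: 2, 10: 1})
```

With a live sensor the same function would be called from the `listen` callback on the received measurement.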
@@ -720,10 +720,11 @@ The rotation of the LIDAR can be tuned to cover a specific angle on every simulation step.
 <tr>
 <td><code>raw_data</code></td>
 <td>bytes</td>
-<td>Array that can be transform in raw detections, each of them have four 32-bits floats (XYZ of each point and consine of the incident angle) and two unsigned int (idx of the hitted actor and its semantic tag).</td>
+<td>Array that can be transformed into semantic detections; each of them has four 32-bit floats (XYZ of each point and cosine of the incident angle) and two unsigned ints (index of the hit actor and its semantic tag).</td>
 </tbody>
 </table>
 
 ---
 ## Obstacle detector
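The record size implied by the table above can be sanity-checked with the standard library: four 32-bit floats plus two 32-bit unsigned ints make 24 bytes per detection. The little-endian, unpadded format string is our assumption; the table does not state a byte order.

```python
import struct

# Assumed per-detection format: '<' (little-endian, no padding) is our
# assumption; '4f' covers x, y, z and the incidence-angle cosine, '2I'
# covers the actor index and the semantic tag.
DETECTION_FORMAT = '<4f2I'
print(struct.calcsize(DETECTION_FORMAT))  # 24

# Round-trip one synthetic detection through the assumed layout.
raw = struct.pack(DETECTION_FORMAT, 1.0, 2.0, 3.0, 0.5, 42, 10)
x, y, z, cos_angle, obj_idx, obj_tag = struct.unpack(DETECTION_FORMAT, raw)
print(obj_idx, obj_tag)  # 42 10
```

Dividing `len(raw_data)` by 24 would then give the number of detections in a measurement, under the same layout assumption.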
@@ -14,7 +14,7 @@
 - [Gnss sensor](ref_sensors.md#gnss-sensor).
 - [IMU sensor](ref_sensors.md#imu-sensor).
 - [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
-- [RawLidar raycast](ref_sensors.md#rawlidar-raycast-sensor).
+- [SemanticLidar raycast](ref_sensors.md#semanticlidar-raycast-sensor).
 - [Radar](ref_sensors.md#radar-sensor).
 - [RGB camera](ref_sensors.md#rgb-camera).
 - [RSS sensor](ref_sensors.md#rss-sensor).
@@ -12,7 +12,7 @@
 - IMU detector: carla.IMUMeasurement.
 - Lane invasion detector: carla.LaneInvasionEvent.
 - Lidar raycast: carla.LidarMeasurement.
-- RawLidar raycast: carla.LidarRawMeasurement.
+- SemanticLidar raycast: carla.SemanticLidarMeasurement.
 - Obstacle detector: carla.ObstacleDetectionEvent.
 - Radar detector: carla.RadarMeasurement.
 - RSS sensor: carla.RssResponse.
@@ -188,11 +188,11 @@
     - def_name: __str__
     # --------------------------------------
 
-- class_name: LidarRawMeasurement
+- class_name: SemanticLidarMeasurement
   parent: carla.SensorData
   # - DESCRIPTION ------------------------
   doc: >
-    Class that defines the raw lidar data retrieved by a <b>sensor.lidar.ray_cast_raw</b>. This essentially simulates a rotating lidar using ray-casting. Learn more about this [here](ref_sensors.md#rawlidar-raycast-sensor).
+    Class that defines the semantic lidar data retrieved by a <b>sensor.lidar.ray_cast_semantic</b>. This essentially simulates a rotating lidar using ray-casting. Learn more about this [here](ref_sensors.md#semanticlidar-raycast-sensor).
   # - PROPERTIES -------------------------
   instance_variables:
   - var_name: channels
@@ -237,15 +237,15 @@
       - param_name: pos
         type: int
       - param_name: detection
-        type: carla.LidarRawDetection
+        type: carla.SemanticLidarDetection
     # --------------------------------------
     - def_name: __str__
     # --------------------------------------
 
-- class_name: LidarRawDetection
+- class_name: SemanticLidarDetection
   # - DESCRIPTION ------------------------
   doc: >
-    Data contained inside a carla.LidarRawMeasurement. Each of these represents one of the points in the cloud with its location and its asociated intensity.
+    Data contained inside a carla.SemanticLidarMeasurement. Each of these represents one of the points in the cloud with its location and its associated intensity.
   # - PROPERTIES -------------------------
   instance_variables:
   - var_name: point
@@ -22,8 +22,8 @@ to 1.5M per second. In this mode we do not render anything but processing
 of the data is done.
 For example for profiling one lidar:
   python raycast_sensor_testing.py -ln 1 --profiling
-For example for profiling one raw lidar:
-  python raycast_sensor_testing.py -rln 1 --profiling
+For example for profiling one semantic lidar:
+  python raycast_sensor_testing.py -sln 1 --profiling
 And for profiling one radar:
   python raycast_sensor_testing.py -rn 1 --profiling
@@ -156,8 +156,8 @@ class SensorManager:
             lidar.listen(self.save_lidar_image)
 
             return lidar
-        elif sensor_type == 'RawLiDAR':
-            lidar_bp = self.world.get_blueprint_library().find('sensor.lidar.ray_cast_raw')
+        elif sensor_type == 'SemanticLiDAR':
+            lidar_bp = self.world.get_blueprint_library().find('sensor.lidar.ray_cast_semantic')
             lidar_bp.set_attribute('range', '100')
 
             for key in sensor_options:
@@ -165,7 +165,7 @@ class SensorManager:
 
             lidar = self.world.spawn_actor(lidar_bp, transform, attach_to=attached)
 
-            lidar.listen(self.save_rawlidar_image)
+            lidar.listen(self.save_semanticlidar_image)
 
             return lidar
         elif sensor_type == "Radar":
@@ -225,7 +225,7 @@ class SensorManager:
         self.time_processing += (t_end-t_start)
         self.tics_processing += 1
 
-    def save_rawlidar_image(self, image):
+    def save_semanticlidar_image(self, image):
         t_start = self.timer.time()
 
         disp_size = self.display_man.get_display_size()
@@ -334,17 +334,17 @@ def one_run(args, client):
         SensorManager(world, display_manager, 'LiDAR', carla.Transform(carla.Location(x=0, z=2.4)), vehicle, {'channels' : '64', 'range' : '200', 'points_per_second': lidar_points_per_second, 'rotation_frequency': '20'}, [1, 1])
 
 
-    # If any, we instanciate the required rawlidars
-    rawlidar_points_per_second = args.rawlidar_points
+    # If any, we instantiate the required semantic lidars
+    semanticlidar_points_per_second = args.semanticlidar_points
 
-    if args.rawlidar_number >= 3:
-        SensorManager(world, display_manager, 'RawLiDAR', carla.Transform(carla.Location(x=0, z=2.4)), vehicle, {'channels' : '64', 'range' : '50', 'points_per_second': rawlidar_points_per_second, 'rotation_frequency': '20'}, [1, 0])
+    if args.semanticlidar_number >= 3:
+        SensorManager(world, display_manager, 'SemanticLiDAR', carla.Transform(carla.Location(x=0, z=2.4)), vehicle, {'channels' : '64', 'range' : '50', 'points_per_second': semanticlidar_points_per_second, 'rotation_frequency': '20'}, [1, 0])
 
-    if args.rawlidar_number >= 2:
-        SensorManager(world, display_manager, 'RawLiDAR', carla.Transform(carla.Location(x=0, z=2.4)), vehicle, {'channels' : '64', 'range' : '100', 'points_per_second': rawlidar_points_per_second, 'rotation_frequency': '20'}, [0, 1])
+    if args.semanticlidar_number >= 2:
+        SensorManager(world, display_manager, 'SemanticLiDAR', carla.Transform(carla.Location(x=0, z=2.4)), vehicle, {'channels' : '64', 'range' : '100', 'points_per_second': semanticlidar_points_per_second, 'rotation_frequency': '20'}, [0, 1])
 
-    if args.rawlidar_number >= 1:
-        SensorManager(world, display_manager, 'RawLiDAR', carla.Transform(carla.Location(x=0, z=2.4)), vehicle, {'channels' : '64', 'range' : '200', 'points_per_second': rawlidar_points_per_second, 'rotation_frequency': '20'}, [1, 1])
+    if args.semanticlidar_number >= 1:
+        SensorManager(world, display_manager, 'SemanticLiDAR', carla.Transform(carla.Location(x=0, z=2.4)), vehicle, {'channels' : '64', 'range' : '200', 'points_per_second': semanticlidar_points_per_second, 'rotation_frequency': '20'}, [1, 1])
 
 
     # If any, we instanciate the required radars
@@ -415,7 +415,7 @@ def one_run(args, client):
             time_procc = 0
             for sensor in display_manager.sensor_list:
                 time_procc += sensor.time_processing
-            prof_str = "%-10s %-9s %-9s %-15s %-7.2f %-20.3f" % (args.lidar_number, args.rawlidar_number, args.radar_number, lidar_points_per_second, float(frame) / time_frames, time_procc/time_frames)
+            prof_str = "%-10s %-9s %-9s %-15s %-7.2f %-20.3f" % (args.lidar_number, args.semanticlidar_number, args.radar_number, lidar_points_per_second, float(frame) / time_frames, time_procc/time_frames)
             break
 
     if call_exit:
@@ -484,17 +484,17 @@ def main():
         choices=range(0, 4),
         help='Number of lidars to render (from zero to three)')
     argparser.add_argument(
-        '-rlp', '--rawlidar_points',
-        metavar='RLP',
+        '-slp', '--semanticlidar_points',
+        metavar='SLP',
         default='100000',
-        help='lidar points per second (default: "100000")')
+        help='semantic lidar points per second (default: "100000")')
     argparser.add_argument(
-        '-rln', '--rawlidar_number',
-        metavar='RLN',
+        '-sln', '--semanticlidar_number',
+        metavar='SLN',
         default=0,
         type=int,
         choices=range(0, 4),
-        help='Number of raw lidars to render (from zero to three)')
+        help='Number of semantic lidars to render (from zero to three)')
     argparser.add_argument(
         '-rp', '--radar_points',
         metavar='RP',
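The renamed options above can be exercised without the simulator. A stand-alone sketch of just the two renamed arguments, mirroring the flags, metavars, defaults and help strings shown in the diff (the real script defines many more options):

```python
import argparse

# Minimal reconstruction of the renamed arguments, for illustration only.
parser = argparse.ArgumentParser(description='raycast sensor testing (sketch)')
parser.add_argument(
    '-slp', '--semanticlidar_points',
    metavar='SLP',
    default='100000',
    help='semantic lidar points per second (default: "100000")')
parser.add_argument(
    '-sln', '--semanticlidar_number',
    metavar='SLN',
    default=0,
    type=int,
    choices=range(0, 4),
    help='Number of semantic lidars to render (from zero to three)')

# Parse a hypothetical command line instead of sys.argv.
args = parser.parse_args(['-sln', '2', '-slp', '300000'])
print(args.semanticlidar_number, args.semanticlidar_points)  # 2 300000
```

Note that argparse derives the attribute names from the long options, which is why renaming `--rawlidar_number` to `--semanticlidar_number` also renames `args.rawlidar_number` everywhere downstream.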
@@ -538,7 +538,7 @@ def main():
 
     if args.profiling:
         print("-------------------------------------------------------")
-        print("# Running profiling with %s lidars, %s raw lidars and %s radars." % (args.lidar_number, args.rawlidar_number, args.radar_number))
+        print("# Running profiling with %s lidars, %s semantic lidars and %s radars." % (args.lidar_number, args.semanticlidar_number, args.radar_number))
         args.render_cam = False
         args.render_window = False
         runs_output = []
@@ -548,7 +548,7 @@ def main():
                         '1100000', '1200000', '1300000', '1400000', '1500000']
         for points in points_range:
             args.lidar_points = points
-            args.rawlidar_points = points
+            args.semanticlidar_points = points
             args.radar_points = points
             run_str = one_run(args, client)
             runs_output.append(run_str)
@@ -562,7 +562,7 @@ def main():
         print("#Hardware information not available, please install the " \
               "multiprocessing module")
 
-        print("#NumLidars NumRawLids NumRadars PointsPerSecond FPS PercentageProcessing")
+        print("#NumLidars NumSemLids NumRadars PointsPerSecond FPS PercentageProcessing")
         for o in runs_output:
             print(o)