Update cameras and sensors documentation
This commit is contained in:
parent
559ebe6c23
commit
e464913014
<h1>Cameras and sensors</h1>

!!! important
    This document still refers to the 0.8.X API (stable version); this API is
    currently located under _"Deprecated/PythonClient"_. The procedures described
    here may not apply to the latest versions, 0.9.0 or later. The latest versions
    introduced significant changes in the API; we are still working on
    documenting everything, sorry for the inconvenience.

!!! important
    Since version 0.8.0 the positions of the sensors are specified in meters
    instead of centimeters, always relative to the vehicle.

Cameras and sensors can be added to the player vehicle by defining them in the
settings sent by the client on every new episode. This can be done either by
filling a `CarlaSettings` Python class ([client_example.py][clientexamplelink])
or by loading an INI settings file ([CARLA Settings example][settingslink]).

This document describes the details of the different cameras/sensors currently
available as well as the resulting images produced by them.

Although we plan to extend the sensor suite of CARLA in the near future, at the
moment only the sensors listed below are available.

Sensors are one type of actor characterized by having a listen function: you
subscribe to a sensor by providing a callback function, and this callback is
called each time a new measurement is received from the sensor.

You typically add a sensor to a vehicle with the following Python code; here we
are adding an HD camera.

!!! note
    The images are sent by the server as a BGRA array of bytes. The provided
    Python client retrieves the images in this format; it is up to the users to
    parse the images and convert them to the desired format. There are some
    examples in the Deprecated/PythonClient folder showing how to parse the
    images.

```py
# Find the blueprint of the sensor.
blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
# Modify the attributes of the blueprint to set image resolution and field of view.
blueprint.set_attribute('image_size_x', '1920')
blueprint.set_attribute('image_size_y', '1080')
blueprint.set_attribute('fov', '100')
# Provide the position of the sensor relative to the vehicle.
transform = carla.Transform(carla.Location(x=0.8, z=1.7))
# Tell the world to spawn the sensor, don't forget to attach it to your vehicle actor.
sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle)
# Subscribe to the sensor stream by providing a callback function, this function is
# called each time a new image is generated by the sensor.
sensor.listen(lambda image: do_something(image))
```
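
In the callback, the raw BGRA bytes usually need to be converted into a more
convenient format. A minimal sketch of such a conversion, assuming `numpy` is
available and `image` is the object received by the callback (with the
`width`, `height` and `raw_data` attributes described in the tables below):

```py
import numpy as np

def to_rgb_array(image):
    # raw_data is a flat buffer of 32-bit BGRA pixels.
    array = np.frombuffer(image.raw_data, dtype=np.uint8)
    array = array.reshape((image.height, image.width, 4))  # rows, cols, BGRA
    return array[:, :, :3][:, :, ::-1]                     # drop alpha, BGR -> RGB
```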

There is a fourth post-processing effect available for cameras, _None_, which
provides a view of the scene with no effect, not even scene lighting; we will
skip this one in the following descriptions.

Note that each sensor has a different set of attributes and produces a different
type of data. However, the data produced by a sensor always comes tagged with a
**frame number** and a **transform**. The frame number is used to identify the
frame at which the measurement took place; the transform gives you the
transformation in world coordinates of the sensor at that same frame.

We provide a tool to convert raw depth and semantic segmentation images in bulk
to a more human-readable palette of colors. It can be found at
["Util/ImageConverter"][imgconvlink]. Alternatively, they can also be converted
using the functions in the `carla.image_converter` Python module.

Note that all the sensor data comes with a _frame number_ stamp; this _frame
number_ matches the one received in the measurements. This is especially useful
for running the simulator in asynchronous mode and synchronizing sensor data on
the client side (see the sketch after the sensor list below).

This is the list of sensors currently available in CARLA:

* [sensor.camera.rgb](#sensorcamerargb)
* [sensor.camera.depth](#sensorcameradepth)
* [sensor.camera.semantic_segmentation](#sensorcamerasemantic_segmentation)
* [sensor.lidar.ray_cast](#sensorlidarray_cast)
* [sensor.other.collision](#sensorothercollision)
* [sensor.other.lane_detector](#sensorotherlane_detector)
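
A minimal sketch (not part of the original client code) of how the _frame
number_ stamp can be used to match data from several sensors on the client
side; the sensor variables and names here are assumptions for illustration:

```py
from collections import defaultdict

frames = defaultdict(dict)  # frame_number -> {sensor name: data}

def make_callback(name):
    def callback(data):
        # Every measurement is stamped with the frame number it belongs to.
        frames[data.frame_number][name] = data
    return callback

# camera.listen(make_callback('rgb'))
# lidar.listen(make_callback('lidar'))
```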

[clientexamplelink]: https://github.com/carla-simulator/carla/blob/master/Deprecated/PythonClient/client_example.py
[settingslink]: https://github.com/carla-simulator/carla/blob/master/Docs/Example.CarlaSettings.ini
[imgconvlink]: https://github.com/carla-simulator/carla/tree/master/Util/ImageConverter

sensor.camera.rgb
-----------------

![ImageRGB](img/capture_scenefinal.png)

The "RGB" camera acts as a regular camera capturing images from the scene. In
the deprecated 0.8.X API this is the "scene final" camera: it provides a view of
the scene after applying some post-processing effects to create a more realistic
feel. These effects are actually stored in the Level, in an actor called
[PostProcessVolume][postprolink], and not in the camera.

| Blueprint attribute          | Type  | Default | Description |
| ---------------------------- | ----- | ------- | ----------- |
| `image_size_x`               | int   | 800     | Image width in pixels |
| `image_size_y`               | int   | 600     | Image height in pixels |
| `fov`                        | float | 90.0    | Field of view in degrees |
| `enable_postprocess_effects` | bool  | True    | Whether the post-process effects in the scene affect the image |

If `enable_postprocess_effects` is enabled, a set of post-process effects is
applied to the image to create a more realistic feel:

* **Vignette** Darkens the border of the screen.
* **Grain jitter** Adds a bit of noise to the render.
* **Bloom** Intense lights burn the area around them.
* **Auto exposure** Modifies the image gamma to simulate the eye adaptation to darker or brighter areas.
* **Lens flares** Simulates the reflection of bright objects on the lens.
* **Depth of field** Blurs objects near or very far away from the camera.

[postprolink]: https://docs.unrealengine.com/latest/INT/Engine/Rendering/PostProcessEffects/

This sensor produces `carla.Image` objects.

| Sensor data attribute | Type | Description |
| --------------------- | ---- | ----------- |
| `frame_number` | int | Frame count when the measurement took place |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `width` | int | Image width in pixels |
| `height` | int | Image height in pixels |
| `fov` | float | Field of view in degrees |
| `raw_data` | bytes | Array of BGRA 32-bit pixels |

In the deprecated 0.8.X API this camera is configured as follows.

<h6>Python</h6>

```py
camera = carla.sensor.Camera('MyCamera', PostProcessing='SceneFinal')
camera.set(FOV=90.0)
camera.set_image_size(800, 600)
camera.set_position(x=0.30, y=0, z=1.30)
camera.set_rotation(pitch=0, yaw=0, roll=0)

carla_settings.add_sensor(camera)
```

<h6>CarlaSettings.ini</h6>

```ini
[CARLA/Sensor/MyCamera]
SensorType=CAMERA
PostProcessing=SceneFinal
ImageSizeX=800
ImageSizeY=600
FOV=90
PositionX=0.30
PositionY=0
PositionZ=1.30
RotationPitch=0
RotationRoll=0
RotationYaw=0
```

sensor.camera.depth
-------------------

![ImageDepth](img/capture_depth.png)

The "Depth" camera provides a view over the scene codifying the distance of
each pixel to the camera (also known as **depth buffer** or **z-buffer**).

| Blueprint attribute | Type  | Default | Description |
| ------------------- | ----- | ------- | ----------- |
| `image_size_x`      | int   | 800     | Image width in pixels |
| `image_size_y`      | int   | 600     | Image height in pixels |
| `fov`               | float | 90.0    | Field of view in degrees |

This sensor produces `carla.Image` objects.

| Sensor data attribute | Type | Description |
| --------------------- | ---- | ----------- |
| `frame_number` | int | Frame count when the measurement took place |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `width` | int | Image width in pixels |
| `height` | int | Image height in pixels |
| `fov` | float | Field of view in degrees |
| `raw_data` | bytes | Array of BGRA 32-bit pixels |

The image codifies the depth with 24-bit precision in the 3 channels of the RGB
color space, from less to more significant bytes: R -> G -> B.

| R        | G        | B        | int24    |            |
|----------|----------|----------|----------|------------|
| 00000000 | 00000000 | 00000000 | 0        | min (near) |
| 11111111 | 11111111 | 11111111 | 16777215 | max (far)  |

Our max render distance (far) is 1km.

1. To decode the depth, first reconstruct the int24 value:

        R + G*256 + B*256*256

2. Then normalize it to the range [0, 1]:

        Ans / ( 256*256*256 - 1 )

3. Finally, multiply by the units we want to get; we have set the far plane at
   1000 metres:

        Ans * far

That is, the actual distance in meters can be decoded with

```
normalized = (R + G * 256 + B * 256 * 256) / (256 * 256 * 256 - 1)
in_meters = 1000 * normalized
```
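
A short sketch (an assumed helper, not part of the client library) applying this
formula to a (height, width, 4) BGRA array decoded from `raw_data`:

```py
import numpy as np

def depth_in_meters(bgra):
    # bgra: (height, width, 4) uint8 array with channels in B, G, R, A order.
    b = bgra[:, :, 0].astype(np.float64)
    g = bgra[:, :, 1].astype(np.float64)
    r = bgra[:, :, 2].astype(np.float64)
    normalized = (r + g * 256.0 + b * 256.0 * 256.0) / (256.0 ** 3 - 1.0)
    return 1000.0 * normalized
```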

The generated "depth map" images are usually converted to a logarithmic
grayscale for display. A point cloud can also be extracted from depth images as
seen in "Deprecated/PythonClient/point_cloud_example.py".

<h6>Python</h6>

```py
camera = carla.sensor.Camera('MyCamera', PostProcessing='Depth')
camera.set(FOV=90.0)
camera.set_image_size(800, 600)
camera.set_position(x=0.30, y=0, z=1.30)
camera.set_rotation(pitch=0, yaw=0, roll=0)

carla_settings.add_sensor(camera)
```

<h6>CarlaSettings.ini</h6>

```ini
[CARLA/Sensor/MyCamera]
SensorType=CAMERA
PostProcessing=Depth
ImageSizeX=800
ImageSizeY=600
FOV=90
PositionX=0.30
PositionY=0
PositionZ=1.30
RotationPitch=0
RotationRoll=0
RotationYaw=0
```

sensor.camera.semantic_segmentation
-----------------------------------

![ImageSemanticSegmentation](img/capture_semseg.png)

The "semantic segmentation" camera classifies every object in the view by
displaying it in a different color according to the object class. E.g.,
pedestrians appear in a different color than vehicles.

| Blueprint attribute | Type  | Default | Description |
| ------------------- | ----- | ------- | ----------- |
| `image_size_x`      | int   | 800     | Image width in pixels |
| `image_size_y`      | int   | 600     | Image height in pixels |
| `fov`               | float | 90.0    | Field of view in degrees |

This sensor produces `carla.Image` objects.

| Sensor data attribute | Type | Description |
| --------------------- | ---- | ----------- |
| `frame_number` | int | Frame count when the measurement took place |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `width` | int | Image width in pixels |
| `height` | int | Image height in pixels |
| `fov` | float | Field of view in degrees |
| `raw_data` | bytes | Array of BGRA 32-bit pixels |

The server provides an image with the tag information **encoded in the red
channel**. A pixel with a red value of x displays an object with tag x. The
following tags are currently available, along with the color each tag is
converted to for display:

| Value | Tag          | Converted color |
| -----:|:------------ | --------------- |
|     0 | Unlabeled    | (  0,   0,   0) |
|     1 | Building     | ( 70,  70,  70) |
|     2 | Fence        | (190, 153, 153) |
|     3 | Other        | (250, 170, 160) |
|     4 | Pedestrian   | (220,  20,  60) |
|     5 | Pole         | (153, 153, 153) |
|     6 | Road line    | (157, 234,  50) |
|     7 | Road         | (128,  64, 128) |
|     8 | Sidewalk     | (244,  35, 232) |
|     9 | Vegetation   | (107, 142,  35) |
|    10 | Car          | (  0,   0, 142) |
|    11 | Wall         | (102, 102, 156) |
|    12 | Traffic sign | (220, 220,   0) |
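
A sketch (an assumed helper, similar in spirit to the provided image converter)
that maps the red-channel tags of a decoded BGRA array to the colors listed
above:

```py
import numpy as np

CITYSCAPES_PALETTE = {
    0: (0, 0, 0),         # Unlabeled
    1: (70, 70, 70),      # Building
    2: (190, 153, 153),   # Fence
    3: (250, 170, 160),   # Other
    4: (220, 20, 60),     # Pedestrian
    5: (153, 153, 153),   # Pole
    6: (157, 234, 50),    # Road line
    7: (128, 64, 128),    # Road
    8: (244, 35, 232),    # Sidewalk
    9: (107, 142, 35),    # Vegetation
    10: (0, 0, 142),      # Car
    11: (102, 102, 156),  # Wall
    12: (220, 220, 0),    # Traffic sign
}

def labels_to_colors(bgra):
    tags = bgra[:, :, 2]  # the tag is encoded in the red channel
    result = np.zeros(tags.shape + (3,), dtype=np.uint8)
    for tag, color in CITYSCAPES_PALETTE.items():
        result[tags == tag] = color
    return result
```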

This is implemented by tagging every object in the scene beforehand (either at
begin play or on spawn). The objects are classified by their relative file path;
e.g., a mesh stored inside the _"Unreal/CarlaUE4/Content/Static/Pedestrians"_
folder is tagged as pedestrian. Each tag and its corresponding filepath check
can be found inside the `GetLabelByFolderName()` function in "Tagger.cpp".

<h6>Python</h6>

```py
camera = carla.sensor.Camera('MyCamera', PostProcessing='SemanticSegmentation')
camera.set(FOV=90.0)
camera.set_image_size(800, 600)
camera.set_position(x=0.30, y=0, z=1.30)
camera.set_rotation(pitch=0, yaw=0, roll=0)

carla_settings.add_sensor(camera)
```

<h6>CarlaSettings.ini</h6>

```ini
[CARLA/Sensor/MyCamera]
SensorType=CAMERA
PostProcessing=SemanticSegmentation
ImageSizeX=800
ImageSizeY=600
FOV=90
PositionX=0.30
PositionY=0
PositionZ=1.30
RotationPitch=0
RotationRoll=0
RotationYaw=0
```

sensor.lidar.ray_cast
---------------------

![LidarPointCloud](img/lidar_point_cloud.gif)

This sensor simulates a rotating Lidar implemented using ray-casting. The points
are computed by adding a laser for each channel distributed in the vertical FOV;
the rotation is then simulated by computing the horizontal angle the Lidar
rotated this frame and doing a ray-cast for each point that each laser was
supposed to generate this frame: `PointsPerSecond / (FPS * Channels)`.

Each frame the server sends a packet with all the points generated during a
`1/FPS` interval. During this interval the physics is not updated, so all the
points in a packet reflect the same "static picture" of the scene.
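
As a worked example of the formula above, assuming the default attributes from
the table below and a simulator running at 10 FPS:

```py
points_per_second, fps, channels = 56000, 10, 32
points_per_laser_per_frame = points_per_second / (fps * channels)  # 175.0
```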

| Blueprint attribute  | Type  | Default | Description |
| -------------------- | ----- | ------- | ----------- |
| `channels`           | int   | 32      | Number of lasers |
| `range`              | float | 1000    | Maximum measurement distance in meters |
| `points_per_second`  | int   | 56000   | Points generated by all lasers per second |
| `rotation_frequency` | float | 10.0    | Lidar rotation frequency |
| `upper_fov`          | float | 10.0    | Angle in degrees of the uppermost laser |
| `lower_fov`          | float | -30.0   | Angle in degrees of the lowermost laser |

This sensor produces `carla.LidarMeasurement` objects.

| Sensor data attribute      | Type            | Description |
| -------------------------- | --------------- | ----------- |
| `frame_number`             | int             | Frame count when the measurement took place |
| `transform`                | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `horizontal_angle`         | float           | Angle in XY plane of the lidar this frame (in degrees) |
| `channels`                 | int             | Number of channels (lasers) of the lidar |
| `get_point_count(channel)` | int             | Number of points per channel captured this frame |
| `raw_data`                 | bytes           | Array of 32-bit floats (XYZ of each point) |
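
A minimal sketch (an assumed helper, not part of the client library) that turns
the raw buffer into an N x 3 numpy array of XYZ points:

```py
import numpy as np

def lidar_to_array(measurement):
    points = np.frombuffer(measurement.raw_data, dtype=np.float32)
    return points.reshape((-1, 3))  # one row per point: x, y, z
```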

The object also acts as a Python list of `carla.Location`:

```py
for location in lidar_measurement:
    print(location)
```

<h6>Python</h6>

```py
lidar = carla.sensor.Lidar('MyLidar')
lidar.set(
    Channels=32,
    Range=50,
    PointsPerSecond=100000,
    RotationFrequency=10,
    UpperFovLimit=10,
    LowerFovLimit=-30)
lidar.set_position(x=0, y=0, z=1.40)
lidar.set_rotation(pitch=0, yaw=0, roll=0)

carla_settings.add_sensor(lidar)
```

<h6>CarlaSettings.ini</h6>

```ini
[CARLA/Sensor/MyLidar]
SensorType=LIDAR_RAY_CAST
Channels=32
Range=50
PointsPerSecond=100000
RotationFrequency=10
UpperFOVLimit=10
LowerFOVLimit=-30
PositionX=0
PositionY=0
PositionZ=1.40
RotationPitch=0
RotationYaw=0
RotationRoll=0
```

!!! tip
    Running the simulator at a
    [fixed time-step](configuring_the_simulation.md#fixed-time-step) makes it
    possible to tune the horizontal angle of each measurement. By adjusting the
    frame rate and the rotation frequency it is possible, for instance, to get a
    360-degree view in each measurement.
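
A worked example of this tip, assuming `rotation_frequency` is expressed in
turns per second and the simulator runs with a fixed time-step of 0.1 seconds
(10 FPS), so the lidar completes exactly one full turn per frame:

```py
fps = 1.0 / 0.1          # fixed time-step of 0.1 s
rotation_frequency = 10  # turns per second
degrees_per_measurement = 360.0 * rotation_frequency / fps  # 360.0
```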

sensor.other.collision
----------------------

This sensor, when attached to an actor, registers an event each time the actor
collides against something in the world. This sensor does not have any
configurable attributes.

This sensor produces a `carla.CollisionEvent` object for each collision
registered:

| Sensor data attribute | Type            | Description |
| --------------------- | --------------- | ----------- |
| `frame_number`        | int             | Frame count when the measurement took place |
| `transform`           | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `actor`               | carla.Actor     | Actor that measured the collision ("self" actor) |
| `other_actor`         | carla.Actor     | Actor against whom we collide |
| `normal_impulse`      | carla.Vector3D  | Normal impulse resulting from the collision |

Note that several collision events might be registered during a single
simulation update.
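
A minimal usage sketch (variable names such as `world` and `my_vehicle` are
assumptions carried over from the earlier camera example):

```py
collision_bp = world.get_blueprint_library().find('sensor.other.collision')
collision_sensor = world.spawn_actor(collision_bp, carla.Transform(),
                                     attach_to=my_vehicle)

def on_collision(event):
    # event is a carla.CollisionEvent
    print('frame %d: collided with %s' % (event.frame_number, event.other_actor))

collision_sensor.listen(on_collision)
```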

sensor.other.lane_detector
--------------------------

> _This sensor is a work in progress, currently very limited._

This sensor, when attached to an actor, registers an event each time the actor
crosses a lane marking. This sensor is somewhat special as it works fully on the
client side. The lane detector uses the road data of the active map to determine
whether a vehicle is invading another lane. This information is based on the
OpenDrive file provided by the map, therefore it is subject to the fidelity of
the OpenDrive description. In some places there might be discrepancies between
the lanes visible by the cameras and the lanes registered by this sensor.

This sensor does not have any configurable attributes.

This sensor produces a `carla.LaneInvasionEvent` object for each lane marking
crossed by the actor:

| Sensor data attribute   | Type                   | Description |
| ----------------------- | ---------------------- | ----------- |
| `frame_number`          | int                    | Frame count when the measurement took place |
| `transform`             | carla.Transform        | Transform in world coordinates of the sensor at the time of the measurement |
| `actor`                 | carla.Actor            | Actor that invaded another lane ("self" actor) |
| `crossed_lane_markings` | carla.LaneMarking list | List of lane markings that have been crossed |
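
A minimal usage sketch, mirroring the collision example above (again, `world`
and `my_vehicle` are assumed to exist):

```py
lane_bp = world.get_blueprint_library().find('sensor.other.lane_detector')
lane_sensor = world.spawn_actor(lane_bp, carla.Transform(), attach_to=my_vehicle)

def on_invasion(event):
    # event is a carla.LaneInvasionEvent
    print('frame %d: crossed %d lane marking(s)' % (
        event.frame_number, len(event.crossed_lane_markings)))

lane_sensor.listen(on_invasion)
```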

```diff
@@ -16,6 +16,7 @@ namespace detail {
 #if __cplusplus >= 201703L // C++17
 inline
 #endif
+// Please update documentation if you change this.
 uint8_t CITYSCAPES_PALETTE_MAP[][3u] = {
     { 0u, 0u, 0u}, // unlabeled = 0u,
     { 70u, 70u, 70u}, // building = 1u,
```

```diff
@@ -306,7 +306,7 @@ void UActorBlueprintFunctionLibrary::MakeLidarDefinition(
   FActorVariation Range;
   Range.Id = TEXT("range");
   Range.Type = EActorAttributeType::Float;
-  Range.RecommendedValues = { TEXT("5000.0") };
+  Range.RecommendedValues = { TEXT("1000.0") };
   // Points per second.
   FActorVariation PointsPerSecond;
   PointsPerSecond.Id = TEXT("points_per_second");
```