|
|
|
|
|
|
|
|
|
|
|
|
|
<h1>Sensor references</h1>
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* [__Collision detector__](#collision-detector)
|
|
|
|
|
* [__Depth camera__](#depth-camera)
|
|
|
|
|
* [__GNSS sensor__](#gnss-sensor)
|
|
|
|
|
* [__IMU sensor__](#imu-sensor)
|
|
|
|
|
* [__Lane invasion detector__](#lane-invasion-detector)
|
|
|
|
|
* [__Lidar raycast sensor__](#lidar-raycast-sensor)
|
|
|
|
|
* [__Obstacle detector__](#obstacle-detector)
|
|
|
|
|
* [__Radar sensor__](#radar-sensor)
|
|
|
|
|
* [__RGB camera__](#rgb-camera)
|
|
|
|
|
* [__Semantic segmentation camera__](#semantic-segmentation-camera)
|
|
|
|
|
|
|
|
|
|
Sensors are a special type of actor able to measure and stream data. All
sensors have a [`listen`](python_api.md#carla.Sensor.listen) method that
registers a callback to be invoked each time the sensor produces a new
measurement. Sensors are typically attached to vehicles and produce data
either on every simulation update, or when a certain event is registered.
Most sensor data objects, like images and lidar measurements, have a function
for saving the measurements to disk.
|
|
|
|
|
|
|
|
|
|
The following Python excerpt shows how you would typically attach a sensor to a
vehicle. In this case, we add a dashboard HD camera.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```py
|
|
|
|
|
# Find the blueprint of the sensor.
|
|
|
|
|
blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
|
|
|
|
|
# Modify the attributes of the blueprint to set image resolution and field of view.
|
|
|
|
|
blueprint.set_attribute('image_size_x', '1920')
|
|
|
|
|
blueprint.set_attribute('image_size_y', '1080')
|
|
|
|
|
blueprint.set_attribute('fov', '110')
|
|
|
|
|
# Set the time in seconds between sensor captures
|
|
|
|
|
blueprint.set_attribute('sensor_tick', '1.0')
|
|
|
|
|
# Provide the position of the sensor relative to the vehicle.
|
|
|
|
|
transform = carla.Transform(carla.Location(x=0.8, z=1.7))
|
|
|
|
|
# Tell the world to spawn the sensor, don't forget to attach it to your vehicle actor.
|
|
|
|
|
sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle)
|
|
|
|
|
# Subscribe to the sensor stream by providing a callback function, this function is
|
|
|
|
|
# called each time a new image is generated by the sensor.
|
|
|
|
|
sensor.listen(lambda data: do_something(data))
|
|
|
|
|
```

---------------
##Collision detector
|
|
|
|
|
* __Blueprint:__ sensor.other.collision
|
|
|
|
|
* __Output:__ [carla.CollisionEvent](python_api.md#carla.CollisionEvent)
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
This sensor, when attached to an actor, registers an event each time the
actor collides with something in the world. This sensor does not have any
configurable attributes.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
!!! note
    This sensor creates "fake" actors when it collides with something that is
    not an actor, so that the semantic tags of the object that was hit can be
    retrieved.
|
|
|
|
|
|
|
|
|
|
<h4>Output attributes</h4>
|
|
|
|
|
|
|
|
|
|
This sensor produces a
|
|
|
|
|
[`carla.CollisionEvent`](python_api.md#carla.CollisionEvent)
|
|
|
|
|
object for each collision registered.
|
|
|
|
|
|
|
|
|
|
| Sensor data attribute | Type | Description |
|
|
|
|
|
| ---------------------- | ----------- | ----------- |
|
|
|
|
|
| `frame` | int | Frame number when the measurement took place |
|
|
|
|
|
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
|
|
|
|
|
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
|
|
|
|
| `actor` | carla.Actor | Actor that measured the collision ("self" actor) |
|
|
|
|
|
| `other_actor` | carla.Actor | Actor against whom we collide |
|
|
|
|
|
| `normal_impulse` | carla.Vector3D | Normal impulse result of the collision |
|
|
|
|
|
|
|
|
|
|
Note that several collision events might be registered during a single
|
|
|
|
|
simulation update.
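As a minimal sketch of how this event stream can be consumed (assuming a
`world` and a `vehicle` actor already exist; the callback name is ours), the
intensity of a collision can be derived from `normal_impulse`:

```py
import math

import carla

# The collision sensor has no configurable attributes, so the blueprint
# can be used as-is.
blueprint = world.get_blueprint_library().find('sensor.other.collision')
collision_sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=vehicle)

def on_collision(event):
    # event is a carla.CollisionEvent.
    impulse = event.normal_impulse
    intensity = math.sqrt(impulse.x ** 2 + impulse.y ** 2 + impulse.z ** 2)
    print('Collision with %s, intensity %.2f' % (event.other_actor.type_id, intensity))

collision_sensor.listen(on_collision)
```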
|
|
|
|
|
|
|
|
|
|
---------------
|
|
|
|
|
##Depth camera
|
|
|
|
|
|
|
|
|
|
* __Blueprint:__ sensor.camera.depth
|
|
|
|
|
* __Output:__ [carla.Image](python_api.md#carla.Image)
|
|
|
|
|
|
|
|
|
|
[carla.ColorConverter](python_api.md#carla.ColorConverter)
|
|
|
|
|
|
|
|
|
|
![ImageDepth](img/capture_depth.png)
|
|
|
|
|
|
|
|
|
|
The "Depth" camera provides a view over the scene codifying the distance of each
|
|
|
|
|
pixel to the camera (also known as **depth buffer** or **z-buffer**).
|
|
|
|
|
|
|
|
|
|
<h4>Basic camera attributes</h4>
|
|
|
|
|
|
|
|
|
|
| Blueprint attribute | Type | Default | Description |
|
|
|
|
|
| ------------------- | ---- | ------- | ----------- |
|
|
|
|
|
| `image_size_x` | int | 800 | Image width in pixels |
|
|
|
|
|
| `image_size_y` | int | 600 | Image height in pixels |
|
|
|
|
|
| `fov` | float | 90.0 | Horizontal field of view in degrees |
|
|
|
|
|
| `sensor_tick` | float | 0.0 | Seconds between sensor captures (ticks) |
|
|
|
|
|
|
|
|
|
|
<h4>Camera lens distortion attributes</h4>
|
|
|
|
|
|
|
|
|
|
| Blueprint attribute | Type | Default | Description |
|
|
|
|
|
|---------------------|------|---------|-------------|
|
|
|
|
|
| `lens_circle_falloff` | float | 5.0 | Range: [0.0, 10.0] |
|
|
|
|
|
| `lens_circle_multiplier` | float | 0.0 | Range: [0.0, 10.0] |
|
|
|
|
|
| `lens_k` | float | -1.0 | Range: [-inf, inf] |
|
|
|
|
|
| `lens_kcube` | float | 0.0 | Range: [-inf, inf] |
|
|
|
|
|
| `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] |
|
|
|
|
|
| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
|
|
|
|
|
|
|
|
|
|
<h4>Output attributes</h4>
|
|
|
|
|
|
|
|
|
|
This sensor produces [`carla.Image`](python_api.md#carla.Image)
|
|
|
|
|
objects.
|
|
|
|
|
|
|
|
|
|
| Sensor data attribute | Type | Description |
|
|
|
|
|
| --------------------- | ---- | ----------- |
|
|
|
|
|
| `frame` | int | Frame number when the measurement took place |
|
|
|
|
|
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
|
|
|
|
|
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
|
|
|
|
| `width` | int | Image width in pixels |
|
|
|
|
|
| `height` | int | Image height in pixels |
|
|
|
|
|
| `fov` | float | Horizontal field of view in degrees |
|
|
|
|
|
| `raw_data` | bytes | Array of BGRA 32-bit pixels |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The image codifies the depth in the 3 channels of the RGB color space, from
least to most significant byte: R -> G -> B. The actual distance in meters can
be decoded with
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
```
|
|
|
|
|
normalized = (R + G * 256 + B * 256 * 256) / (256 * 256 * 256 - 1)
|
|
|
|
|
in_meters = 1000 * normalized
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
Camera sensors use [`carla.ColorConverter`](python_api.md#carla.ColorConverter)
to convert the pixels of the original image.
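As an illustrative sketch of the decoding above (this helper is not part of
the CARLA API; it assumes `numpy` is installed and that `image` is the
`carla.Image` received by the callback):

```py
import numpy as np

def depth_to_meters(image):
    # Reshape the raw BGRA buffer into a (height, width, 4) array.
    bgra = np.frombuffer(image.raw_data, dtype=np.uint8)
    bgra = bgra.reshape((image.height, image.width, 4)).astype(np.float64)
    # BGRA layout: channel 0 is B, 1 is G, 2 is R.
    b, g, r = bgra[:, :, 0], bgra[:, :, 1], bgra[:, :, 2]
    normalized = (r + g * 256.0 + b * 256.0 * 256.0) / (256.0 ** 3 - 1.0)
    return 1000.0 * normalized  # per-pixel depth in meters
```

For visualization, the image can instead be saved through a converter, e.g.
`image.save_to_disk('depth.png', carla.ColorConverter.LogarithmicDepth)`
(converter names as listed in the Python API reference).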
|
|
|
|
|
---------------
|
|
|
|
|
##GNSS sensor
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* __Blueprint:__ sensor.other.gnss
|
|
|
|
|
* __Output:__ [carla.GnssMeasurement](python_api.md#carla.GnssMeasurement)
|
|
|
|
|
|
|
|
|
|
This sensor, when attached to an actor, reports its current GNSS position.
The GNSS position is internally calculated by adding the metric position to
an initial geo-reference location defined within the OpenDRIVE map definition.
|
|
|
|
|
|
|
|
|
|
<h4>Output attributes</h4>
|
|
|
|
|
|
|
|
|
|
This sensor produces
|
|
|
|
|
[`carla.GnssMeasurement`](python_api.md#carla.GnssMeasurement)
|
|
|
|
|
objects.
|
|
|
|
|
|
|
|
|
|
| Sensor data attribute | Type | Description |
|
|
|
|
|
| ---------------------- | ----------- | ----------- |
|
|
|
|
|
| `frame` | int | Frame number when the measurement took place |
|
|
|
|
|
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
|
|
|
|
|
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
|
|
|
|
| `latitude` | double | Latitude position of the actor |
|
|
|
|
|
| `longitude` | double | Longitude position of the actor |
|
|
|
|
|
| `altitude` | double | Altitude of the actor |
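A minimal usage sketch (assuming `world` and `vehicle` already exist; the
callback name is ours):

```py
import carla

blueprint = world.get_blueprint_library().find('sensor.other.gnss')
gnss_sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=vehicle)

def on_gnss(measurement):
    # measurement is a carla.GnssMeasurement.
    print('lat=%.6f lon=%.6f alt=%.2f' % (
        measurement.latitude, measurement.longitude, measurement.altitude))

gnss_sensor.listen(on_gnss)
```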
|
|
|
|
|
|
|
|
|
|
---------------
|
|
|
|
|
##IMU sensor
|
|
|
|
|
|
|
|
|
|
* __Blueprint:__ sensor.other.imu
|
|
|
|
|
* __Output:__ [carla.IMUMeasurement](python_api.md#carla.IMUMeasurement)
|
|
|
|
|
|
|
|
|
|
This sensor, when attached to an actor, gives access to the actor's accelerometer, gyroscope, and compass.
|
|
|
|
|
|
|
|
|
|
<h4>Output attributes</h4>
|
|
|
|
|
|
|
|
|
|
This sensor produces
|
|
|
|
|
[`carla.IMUMeasurement`](python_api.md#carla.IMUMeasurement)
|
|
|
|
|
objects.
|
|
|
|
|
|
|
|
|
|
| Sensor data attribute | Type | Description |
|
|
|
|
|
| --------------------- | --------------- | ----------- |
|
|
|
|
|
| `frame` | int | Frame number when the measurement took place |
|
|
|
|
|
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
|
|
|
|
|
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
|
|
|
|
| `accelerometer` | carla.Vector3D | Measures linear acceleration in `m/s^2` |
|
|
|
|
|
| `gyroscope` | carla.Vector3D | Measures angular velocity in `rad/sec` |
|
|
|
|
|
| `compass` | float | Orientation with respect to the North (`(0.0, -1.0, 0.0)` in Unreal) in radians |
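A minimal usage sketch (same assumptions as above; converting the compass
reading from radians to degrees is our own addition):

```py
import math

import carla

blueprint = world.get_blueprint_library().find('sensor.other.imu')
imu_sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=vehicle)

def on_imu(measurement):
    # measurement is a carla.IMUMeasurement.
    accel = measurement.accelerometer  # carla.Vector3D in m/s^2
    heading = math.degrees(measurement.compass)  # radians from North, as degrees
    print('accel=(%.2f, %.2f, %.2f) heading=%.1f deg'
          % (accel.x, accel.y, accel.z, heading))

imu_sensor.listen(on_imu)
```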
|
|
|
|
|
|
|
|
|
|
---------------
|
|
|
|
|
##Lane invasion detector
|
|
|
|
|
|
|
|
|
|
* __Blueprint:__ sensor.other.lane_invasion
|
|
|
|
|
* __Output:__ [carla.LaneInvasionEvent](python_api.md#carla.LaneInvasionEvent)
|
|
|
|
|
|
|
|
|
|
> _This sensor is a work in progress, currently very limited._
|
|
|
|
|
|
|
|
|
|
This sensor, when attached to an actor, registers an event each time the
actor crosses a lane marking. This sensor is somewhat special as it works
fully on the client-side. The lane invasion detector uses the road data of
the active map to determine whether a vehicle is invading another lane. This
information is based on the OpenDrive file provided by the map, and is
therefore subject to the fidelity of the OpenDrive description. In some
places there might be discrepancies between the lanes visible to the cameras
and the lanes registered by this sensor.
|
|
|
|
|
|
|
|
|
|
This sensor does not have any configurable attribute.
|
|
|
|
|
|
|
|
|
|
<h4>Output attributes</h4>
|
|
|
|
|
|
|
|
|
|
This sensor produces a
|
|
|
|
|
[`carla.LaneInvasionEvent`](python_api.md#carla.LaneInvasionEvent)
|
|
|
|
|
object for each lane marking crossed by the actor.
|
|
|
|
|
|
|
|
|
|
| Sensor data attribute | Type | Description |
|
|
|
|
|
| ----------------------- | ----------- | ----------- |
|
|
|
|
|
| `frame` | int | Frame number when the measurement took place |
|
|
|
|
|
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
|
|
|
|
|
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
|
|
|
|
| `actor` | carla.Actor | Actor that invaded another lane ("self" actor) |
|
|
|
|
|
| `crossed_lane_markings` | carla.LaneMarking list | List of lane markings that have been crossed |
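A minimal usage sketch (assuming `world` and `vehicle` exist, and that each
`carla.LaneMarking` exposes a `type` field as in the Python API reference):

```py
import carla

blueprint = world.get_blueprint_library().find('sensor.other.lane_invasion')
lane_sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=vehicle)

def on_invasion(event):
    # event is a carla.LaneInvasionEvent.
    types = set(str(marking.type) for marking in event.crossed_lane_markings)
    print('Crossed lane markings: %s' % ', '.join(types))

lane_sensor.listen(on_invasion)
```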
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
---------------
|
|
|
|
|
##Lidar raycast sensor
|
|
|
|
|
|
|
|
|
|
* __Blueprint:__ sensor.lidar.ray_cast
|
|
|
|
|
* __Output:__ [carla.LidarMeasurement](python_api.md#carla.LidarMeasurement)
|
|
|
|
|
|
|
|
|
|
![LidarPointCloud](img/lidar_point_cloud.gif)
|
|
|
|
|
|
|
|
|
|
This sensor simulates a rotating Lidar implemented using ray-casting. The
points are computed by adding a laser for each channel distributed in the
vertical FOV. The rotation is then simulated by computing the horizontal
angle the Lidar rotated during this frame and doing a ray-cast for each point
that each laser was supposed to generate this frame:
`points_per_second / (FPS * channels)`.
|
|
|
|
|
|
|
|
|
|
<h4>Lidar attributes</h4>
|
|
|
|
|
|
|
|
|
|
| Blueprint attribute | Type | Default | Description |
|
|
|
|
|
| -------------------- | ---- | ------- | ----------- |
|
|
|
|
|
| `channels` | int | 32 | Number of lasers |
|
|
|
|
|
| `range` | float | 10.0 | Maximum measurement distance in meters _(in 0.9.6 and earlier this was in centimeters)_ |
|
|
|
|
|
| `points_per_second` | int | 56000 | Points generated by all lasers per second |
|
|
|
|
|
| `rotation_frequency` | float | 10.0 | Lidar rotation frequency in Hz |
|
|
|
|
|
| `upper_fov` | float | 10.0 | Angle in degrees of the upper most laser |
|
|
|
|
|
| `lower_fov` | float | -30.0 | Angle in degrees of the lower most laser |
|
|
|
|
|
| `sensor_tick` | float | 0.0 | Seconds between sensor captures (ticks) |
|
|
|
|
|
|
|
|
|
|
<h4>Output attributes</h4>
|
|
|
|
|
|
|
|
|
|
This sensor produces
|
|
|
|
|
[`carla.LidarMeasurement`](python_api.md#carla.LidarMeasurement)
|
|
|
|
|
objects.
|
|
|
|
|
|
|
|
|
|
| Sensor data attribute | Type | Description |
|
|
|
|
|
| -------------------------- | ---------- | ----------- |
|
|
|
|
|
| `frame` | int | Frame number when the measurement took place |
|
|
|
|
|
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
|
|
|
|
|
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
|
|
|
|
| `horizontal_angle` | float | Angle in XY plane of the lidar this frame (in radians) |
|
|
|
|
|
| `channels` | int | Number of channels (lasers) of the lidar |
|
|
|
|
|
| `get_point_count(channel)` | int | Number of points per channel captured this frame |
|
|
|
|
|
| `raw_data` | bytes | Array of 32-bit floats (XYZ of each point) |
|
|
|
|
|
|
|
|
|
|
The object also acts as a Python list of [`carla.Location`](python_api.md#carla.Location):
|
|
|
|
|
|
|
|
|
|
```py
|
|
|
|
|
for location in lidar_measurement:
|
|
|
|
|
print(location)
|
|
|
|
|
```
|
|
|
|
|
|
|
|
|
|
A Lidar measurement contains a packet with all the points generated during a
|
|
|
|
|
`1/FPS` interval. During this interval the physics is not updated so all the
|
|
|
|
|
points in a measurement reflect the same "static picture" of the scene.
|
|
|
|
|
|
|
|
|
|
!!! tip
    Running the simulator at a
    [fixed time-step](configuring_the_simulation.md#fixed-time-step) it is
    possible to tune the horizontal angle of each measurement. By adjusting
    the frame rate and the rotation frequency it is possible, for instance,
    to get a 360° view in each measurement.
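As a sketch of both ideas (parsing `raw_data` with `numpy`, and matching the
rotation frequency to a 10 FPS fixed time-step so each measurement covers a
full turn; names and attribute values are ours):

```py
import numpy as np

import carla

lidar_bp = world.get_blueprint_library().find('sensor.lidar.ray_cast')
# One full rotation per frame when running at a fixed time-step of 0.1 s (10 FPS).
lidar_bp.set_attribute('rotation_frequency', '10')
lidar_bp.set_attribute('range', '50')
lidar_sensor = world.spawn_actor(
    lidar_bp, carla.Transform(carla.Location(z=2.4)), attach_to=vehicle)

def on_lidar(measurement):
    # Each point is three 32-bit floats (x, y, z) in this version.
    points = np.frombuffer(measurement.raw_data, dtype=np.float32).reshape((-1, 3))
    print('%d points this frame' % len(points))

lidar_sensor.listen(on_lidar)
```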
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
---------------
|
|
|
|
|
##Obstacle detector
|
|
|
|
|
|
|
|
|
|
* __Blueprint:__ sensor.other.obstacle
|
|
|
|
|
* __Output:__ [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleDetectionEvent)
|
|
|
|
|
|
|
|
|
|
This sensor, when attached to an actor, reports if there are obstacles ahead.
|
|
|
|
|
|
|
|
|
|
!!! note
    This sensor creates "fake" actors when it detects an obstacle that is not
    an actor, so that the semantic tags of the detected object can be
    retrieved.
|
|
|
|
|
|
|
|
|
|
| Blueprint attribute | Type | Default | Description |
|
|
|
|
|
| -------------------- | ---- | ------- | ----------- |
|
|
|
|
|
| `distance` | float | 5 | Distance to throw the trace to |
|
|
|
|
|
| `hit_radius` | float | 0.5 | Radius of the trace |
|
|
|
|
|
| `only_dynamics` | bool | false | If true, the trace will only look for dynamic objects |
|
|
|
|
|
| `debug_linetrace` | bool | false | If true, the trace will be visible |
|
|
|
|
|
| `sensor_tick` | float | 0.0 | Seconds between sensor captures (ticks) |
|
|
|
|
|
|
|
|
|
|
<h4>Output attributes</h4>
|
|
|
|
|
|
|
|
|
|
This sensor produces
|
|
|
|
|
[`carla.ObstacleDetectionEvent`](python_api.md#carla.ObstacleDetectionEvent)
|
|
|
|
|
objects.
|
|
|
|
|
|
|
|
|
|
| Sensor data attribute | Type | Description |
|
|
|
|
|
| ---------------------- | ----------- | ----------- |
|
|
|
|
|
| `frame` | int | Frame number when the measurement took place |
|
|
|
|
|
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
|
|
|
|
|
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
|
|
|
|
| `actor` | carla.Actor | Actor that detected the obstacle ("self" actor) |
|
|
|
|
|
| `other_actor` | carla.Actor | Actor detected as obstacle |
|
|
|
|
|
| `distance` | float | Distance from actor to other_actor |
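A minimal usage sketch (assuming `world` and `vehicle` exist; the attribute
values are ours):

```py
import carla

blueprint = world.get_blueprint_library().find('sensor.other.obstacle')
blueprint.set_attribute('distance', '10')         # lengthen the trace to 10 m
blueprint.set_attribute('only_dynamics', 'true')  # ignore static scenery
obstacle_sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=vehicle)

def on_obstacle(event):
    # event is a carla.ObstacleDetectionEvent.
    print('Obstacle %s at %.2f m' % (event.other_actor.type_id, event.distance))

obstacle_sensor.listen(on_obstacle)
```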
|
|
|
|
|
|
|
|
|
|
---------------
|
|
|
|
|
##Radar sensor
|
|
|
|
|
|
|
|
|
|
* __Blueprint:__ sensor.other.radar
|
|
|
|
|
* __Output:__ [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement)
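This section is still a stub. As a hedged sketch, a radar can be spawned and
listened to like any other sensor (assuming, per the Python API reference,
that a `carla.RadarMeasurement` iterates over detections carrying `depth`,
`azimuth`, `altitude`, and `velocity`):

```py
import math

import carla

blueprint = world.get_blueprint_library().find('sensor.other.radar')
radar_sensor = world.spawn_actor(
    blueprint, carla.Transform(carla.Location(z=1.0)), attach_to=vehicle)

def on_radar(measurement):
    # measurement is a carla.RadarMeasurement, iterable over detections.
    for detection in measurement:
        print('depth=%.2f m azimuth=%.1f deg velocity=%.2f m/s' % (
            detection.depth, math.degrees(detection.azimuth), detection.velocity))

radar_sensor.listen(on_radar)
```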
|
|
|
|
|
|
|
|
|
|
---------------
|
|
|
|
|
##RGB camera
|
|
|
|
|
|
|
|
|
|
* __Blueprint:__ sensor.camera.rgb
|
|
|
|
|
* __Output:__ [carla.Image](python_api.md#carla.Image)
|
|
|
|
|
|
|
|
|
|
[carla.ColorConverter](python_api.md#carla.ColorConverter)
|
|
|
|
|
|
|
|
|
|
![ImageRGB](img/capture_scenefinal.png)
|
|
|
|
|
|
|
|
|
The "RGB" camera acts as a regular camera capturing images from the scene.
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
---------------
|
|
|
|
|
##Semantic segmentation camera
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
* __Blueprint:__ sensor.camera.semantic_segmentation
|
|
|
|
|
* __Output:__ [carla.Image](python_api.md#carla.Image)
|
|
|
|
|
|
|
|
|
|
The "Depth" camera provides a view over the scene codifying the distance of each
|
|
|
|
|
pixel to the camera (also known as **depth buffer** or **z-buffer**).
|
|
|
|
|
|
|
|
|
|
<h4>Basic camera attributes</h4>
|
|
|
|
|
|
|
|
|
|
| Blueprint attribute | Type | Default | Description |
|
|
|
|
|
| ------------------- | ---- | ------- | ----------- |
|
|
|
|
|
| `image_size_x` | int | 800 | Image width in pixels |
|
|
|
|
|
| `image_size_y` | int | 600 | Image height in pixels |
|
|
|
|
|
| `fov` | float | 90.0 | Horizontal field of view in degrees |
|
|
|
|
|
| `sensor_tick` | float | 0.0 | Seconds between sensor captures (ticks) |
|
|
|
|
|
|
|
|
|
|
<h4>Camera lens distortion attributes</h4>
|
|
|
|
|
|
|
|
|
|
| Blueprint attribute | Type | Default | Description |
|
|
|
|
|
|---------------------|------|---------|-------------|
|
|
|
|
|
| `lens_circle_falloff` | float | 5.0 | Range: [0.0, 10.0] |
|
|
|
|
|
| `lens_circle_multiplier` | float | 0.0 | Range: [0.0, 10.0] |
|
|
|
|
|
| `lens_k` | float | -1.0 | Range: [-inf, inf] |
|
|
|
|
|
| `lens_kcube` | float | 0.0 | Range: [-inf, inf] |
|
|
|
|
|
| `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] |
|
|
|
|
|
| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
|
|
|
|
|
|
|
|
|
|
<h4>Output attributes</h4>
|
|
|
|
|
|
|
|
|
|
This sensor produces [`carla.Image`](python_api.md#carla.Image)
|
|
|
|
|
objects.
|
|
|
|
|
|
|
|
|
|
| Sensor data attribute | Type | Description |
|
|
|
|
|
| --------------------- | ---- | ----------- |
|
|
|
|
|
| `frame` | int | Frame number when the measurement took place |
|
|
|
|
|
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
|
|
|
|
|
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
|
|
|
|
| `width` | int | Image width in pixels |
|
|
|
|
|
| `height` | int | Image height in pixels |
|
|
|
|
|
| `fov` | float | Horizontal field of view in degrees |
|
|
|
|
|
| `raw_data` | bytes | Array of BGRA 32-bit pixels |
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
|
The server provides an image with the tag information encoded in the red
channel: a pixel with a red value of x belongs to an object with tag x.
Objects are tagged according to their relative file path in the project;
e.g., every mesh stored in the
_"Unreal/CarlaUE4/Content/Static/Pedestrians"_ folder is tagged as
pedestrian. At the moment, adding new tags requires modifying the C++ code:
add a new label to the `ECityObjectLabel` enum in "Tagger.h", and its
corresponding filepath check inside the `GetLabelByFolderName()` function in
"Tagger.cpp".
|
|
|
|
|
|
|
|
|
|