<h1>Cameras and sensors</h1>

![Client window](img/client_window.png)

Sensors are a special type of actor able to measure and stream data. Every
sensor has a `listen` method that registers the callback function to be called
each time the sensor produces a new measurement. Sensors are typically
attached to vehicles and produce data either on every simulation update or
when a certain event occurs.

The following Python excerpt shows how you would typically attach a sensor to
a vehicle; in this case, we add a dashboard HD camera.

```py
# Find the blueprint of the sensor.
blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
# Modify the attributes of the blueprint to set image resolution and field of view.
blueprint.set_attribute('image_size_x', '1920')
blueprint.set_attribute('image_size_y', '1080')
blueprint.set_attribute('fov', '110')
# Provide the position of the sensor relative to the vehicle.
transform = carla.Transform(carla.Location(x=0.8, z=1.7))
# Tell the world to spawn the sensor; don't forget to attach it to your vehicle actor.
sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle)
# Subscribe to the sensor stream by providing a callback function; this function
# is called each time a new image is generated by the sensor.
sensor.listen(lambda data: do_something(data))
```

Note that each sensor has a different set of attributes and produces a
different type of data. However, the data produced by a sensor is always
tagged with a **frame number** and a **transform**. The frame number
identifies the frame at which the measurement took place; the transform gives
you the transformation in world coordinates of the sensor at that same frame.

Most sensor data objects, like images and lidar measurements, have a function
for saving the measurements to disk.
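
For example, a minimal callback (a sketch; the output path and file naming are
illustrative) could save every image to disk as it arrives:

```py
# Save each camera image, using its frame number as the file name.
sensor.listen(lambda image: image.save_to_disk('output/%06d.png' % image.frame_number))
```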

This is the list of sensors currently available:

* [sensor.camera.rgb](#sensorcamerargb)
* [sensor.camera.depth](#sensorcameradepth)
* [sensor.camera.semantic_segmentation](#sensorcamerasemantic_segmentation)
* [sensor.lidar.ray_cast](#sensorlidarray_cast)
* [sensor.other.collision](#sensorothercollision)
* [sensor.other.lane_detector](#sensorotherlane_detector)

sensor.camera.rgb
-----------------

![ImageRGB](img/capture_scenefinal.png)

The "RGB" camera acts as a regular camera capturing images from the scene.

| Blueprint attribute | Type | Default | Description |
| ------------------- | ---- | ------- | ----------- |
| `image_size_x` | int | 800 | Image width in pixels |
| `image_size_y` | int | 600 | Image height in pixels |
| `fov` | float | 90.0 | Field of view in degrees |
| `enable_postprocess_effects` | bool | True | Whether the post-process effects in the scene affect the image |

If `enable_postprocess_effects` is enabled, a set of post-process effects is
applied to the image to create a more realistic feel:

* **Vignette** Darkens the border of the screen.
* **Grain jitter** Adds a bit of noise to the render.
* **Bloom** Intense lights burn the area around them.
* **Auto exposure** Modifies the image gamma to simulate the eye adaptation to
  darker or brighter areas.
* **Lens flares** Simulates the reflection of bright objects on the lens.
* **Depth of field** Blurs objects near to or very far from the camera.

This sensor produces [`carla.Image`](python_api.md#carlaimagecarlasensordata)
objects.

| Sensor data attribute | Type | Description |
| --------------------- | ---- | ----------- |
| `frame_number` | int | Frame count when the measurement took place |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `width` | int | Image width in pixels |
| `height` | int | Image height in pixels |
| `fov` | float | Field of view in degrees |
| `raw_data` | bytes | Array of BGRA 32-bit pixels |
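
Since `raw_data` is a flat buffer of BGRA pixels, it can be reshaped into an
image array. A minimal sketch, assuming NumPy is available (`to_bgra_array` is
an illustrative helper, not part of the CARLA API):

```py
import numpy as np

def to_bgra_array(image):
    # Interpret the raw buffer as 8-bit channels and reshape it into
    # a (height, width, 4) BGRA array.
    array = np.frombuffer(image.raw_data, dtype=np.uint8)
    return np.reshape(array, (image.height, image.width, 4))
```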

sensor.camera.depth
-------------------

![ImageDepth](img/capture_depth.png)

The "Depth" camera provides a view over the scene encoding the distance of
each pixel to the camera (also known as **depth buffer** or **z-buffer**).

| Blueprint attribute | Type | Default | Description |
| ------------------- | ---- | ------- | ----------- |
| `image_size_x` | int | 800 | Image width in pixels |
| `image_size_y` | int | 600 | Image height in pixels |
| `fov` | float | 90.0 | Field of view in degrees |

This sensor produces [`carla.Image`](python_api.md#carlaimagecarlasensordata)
objects.

| Sensor data attribute | Type | Description |
| --------------------- | ---- | ----------- |
| `frame_number` | int | Frame count when the measurement took place |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `width` | int | Image width in pixels |
| `height` | int | Image height in pixels |
| `fov` | float | Field of view in degrees |
| `raw_data` | bytes | Array of BGRA 32-bit pixels |

The image encodes the depth in 3 channels of the RGB color space, from least
to most significant byte: R -> G -> B. The actual distance in meters can be
decoded with:

```
normalized = (R + G * 256 + B * 256 * 256) / (256 * 256 * 256 - 1)
in_meters = 1000 * normalized
```
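
Applied to a whole image, the decoding might look like the following sketch
(again assuming NumPy, and reusing the illustrative `to_bgra_array` helper
from above; remember that the pixels arrive in BGRA order):

```py
import numpy as np

def depth_in_meters(image):
    # Decode the per-pixel distance in meters following the formula above.
    bgra = to_bgra_array(image).astype(np.float64)
    b, g, r = bgra[:, :, 0], bgra[:, :, 1], bgra[:, :, 2]
    normalized = (r + g * 256 + b * 256 * 256) / (256 ** 3 - 1)
    return 1000 * normalized
```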

sensor.camera.semantic_segmentation
-----------------------------------

![ImageSemanticSegmentation](img/capture_semseg.png)

The "Semantic Segmentation" camera classifies every object in the view by
displaying it in a different color according to the object class. E.g.,
pedestrians appear in a different color than vehicles.

| Blueprint attribute | Type | Default | Description |
| ------------------- | ---- | ------- | ----------- |
| `image_size_x` | int | 800 | Image width in pixels |
| `image_size_y` | int | 600 | Image height in pixels |
| `fov` | float | 90.0 | Field of view in degrees |

This sensor produces [`carla.Image`](python_api.md#carlaimagecarlasensordata)
objects.

| Sensor data attribute | Type | Description |
| --------------------- | ---- | ----------- |
| `frame_number` | int | Frame count when the measurement took place |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `width` | int | Image width in pixels |
| `height` | int | Image height in pixels |
| `fov` | float | Field of view in degrees |
| `raw_data` | bytes | Array of BGRA 32-bit pixels |

The server provides an image with the tag information **encoded in the red
channel**. A pixel with a red value of x displays an object with tag x. The
following tags are currently available:

| Value | Tag | Converted color |
| -----:|:------------ | --------------- |
| 0 | Unlabeled | ( 0, 0, 0) |
| 1 | Building | ( 70, 70, 70) |
| 2 | Fence | (190, 153, 153) |
| 3 | Other | (250, 170, 160) |
| 4 | Pedestrian | (220, 20, 60) |
| 5 | Pole | (153, 153, 153) |
| 6 | Road line | (157, 234, 50) |
| 7 | Road | (128, 64, 128) |
| 8 | Sidewalk | (244, 35, 232) |
| 9 | Vegetation | (107, 142, 35) |
| 10 | Car | ( 0, 0, 142) |
| 11 | Wall | (102, 102, 156) |
| 12 | Traffic sign | (220, 220, 0) |
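
As a sketch, extracting the per-pixel tags then amounts to reading the red
channel (assuming NumPy and the illustrative `to_bgra_array` helper from
above; red is channel index 2 in BGRA order):

```py
def labels_array(image):
    # Return a (height, width) array holding the tag of each pixel.
    return to_bgra_array(image)[:, :, 2]
```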

This is implemented by tagging every object in the scene beforehand (either at
begin play or on spawn). The objects are classified by their relative file
path in the project. E.g., every mesh stored in the
_"Unreal/CarlaUE4/Content/Static/Pedestrians"_ folder is tagged as pedestrian.

!!! note
    **Adding new tags**:
    At the moment adding new tags is not very flexible and requires modifying
    the C++ code. Add a new label to the `ECityObjectLabel` enum in "Tagger.h",
    and its corresponding file path check inside the `GetLabelByFolderName()`
    function in "Tagger.cpp".

sensor.lidar.ray_cast
---------------------

![LidarPointCloud](img/lidar_point_cloud.gif)

This sensor simulates a rotating Lidar implemented using ray-casting. The
points are computed by adding a laser for each channel distributed in the
vertical FOV. The rotation is then simulated by computing the horizontal angle
that the Lidar rotated during this frame, and a ray-cast is done for each
point that each laser was supposed to generate this frame:
`points_per_second / (FPS * channels)`. With the default values below, for
instance, a simulation running at 10 FPS casts 56000 / (10 * 32) = 175 points
per laser per frame.

| Blueprint attribute | Type | Default | Description |
| -------------------- | ---- | ------- | ----------- |
| `channels` | int | 32 | Number of lasers |
| `range` | float | 1000 | Maximum measurement distance in meters |
| `points_per_second` | int | 56000 | Points generated by all lasers per second |
| `rotation_frequency` | float | 10.0 | Lidar rotation frequency in rotations per second |
| `upper_fov` | float | 10.0 | Angle in degrees of the uppermost laser |
| `lower_fov` | float | -30.0 | Angle in degrees of the lowermost laser |

This sensor produces
[`carla.LidarMeasurement`](python_api.md#carlalidarmeasurementcarlasensordata)
objects.

| Sensor data attribute | Type | Description |
| -------------------------- | ---------- | ----------- |
| `frame_number` | int | Frame count when the measurement took place |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `horizontal_angle` | float | Angle in XY plane of the lidar this frame (in degrees) |
| `channels` | int | Number of channels (lasers) of the lidar |
| `get_point_count(channel)` | int | Number of points per channel captured this frame |
| `raw_data` | bytes | Array of 32-bit floats (XYZ of each point) |
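
A minimal sketch, assuming NumPy is available, to reshape the raw buffer into
an N x 3 array of points:

```py
import numpy as np

def lidar_points(lidar_measurement):
    # Each point is three consecutive 32-bit floats: X, Y, Z.
    points = np.frombuffer(lidar_measurement.raw_data, dtype=np.float32)
    return np.reshape(points, (-1, 3))
```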

The object also acts as a Python list of `carla.Location`:

```py
for location in lidar_measurement:
    print(location)
```

A Lidar measurement contains a packet with all the points generated during a
`1/FPS` interval. During this interval the physics is not updated, so all the
points in a measurement reflect the same "static picture" of the scene.

!!! tip
    Running the simulator at a
    [fixed time-step](configuring_the_simulation.md#fixed-time-step) makes it
    possible to tune the horizontal angle of each measurement. By adjusting the
    frame rate and the rotation frequency it is possible, for instance, to get
    a 360 view in each measurement; e.g., a `rotation_frequency` of 10 with a
    fixed time-step of 0.1 seconds yields exactly one full rotation per frame.

sensor.other.collision
----------------------

This sensor, when attached to an actor, registers an event each time the actor
collides with something in the world. This sensor does not have any
configurable attribute.
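
Attaching one could look like the following sketch (reusing the hypothetical
`my_vehicle` actor from the excerpt above; the `event` argument is the
`carla.CollisionEvent` described below):

```py
def on_collision(event):
    # Report the other actor involved each time a collision is registered.
    print('Collision with %s' % event.other_actor)

# Spawn a collision sensor attached to the vehicle; the transform is
# relative to the parent actor.
blueprint = world.get_blueprint_library().find('sensor.other.collision')
collision_sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=my_vehicle)
collision_sensor.listen(on_collision)
```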

This sensor produces a
[`carla.CollisionEvent`](python_api.md#carlacollisioneventcarlasensordata)
object for each collision registered.
2018-12-16 00:35:04 +08:00
|
|
|
|
|
|
|
| Sensor data attribute | Type | Description |
|
|
|
|
| ---------------------- | ----------- | ----------- |
|
|
|
|
| `frame_number` | int | Frame count when the measurement took place |
|
|
|
|
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
|
|
|
| `actor` | carla.Actor | Actor that measured the collision ("self" actor) |
|
|
|
|
| `other_actor` | carla.Actor | Actor against whom we collide |
|
|
|
|
| `normal_impulse` | carla.Vector3D | Normal impulse result of the collision |

Note that several collision events might be registered during a single
simulation update.

sensor.other.lane_detector
--------------------------

> _This sensor is a work in progress, currently very limited._

This sensor, when attached to an actor, registers an event each time the actor
crosses a lane marking. This sensor is somewhat special as it works fully on
the client side. The lane detector uses the road data of the active map to
determine whether a vehicle is invading another lane. This information is
based on the OpenDrive file provided by the map, and is therefore subject to
the fidelity of the OpenDrive description. In some places there might be
discrepancies between the lanes visible to the cameras and the lanes
registered by this sensor.

This sensor does not have any configurable attribute.

This sensor produces a
[`carla.LaneInvasionEvent`](python_api.md#carlalaneinvasioneventcarlasensordata)
object for each lane marking crossed by the actor.
2018-12-16 00:35:04 +08:00
|
|
|
|
|
|
|
| Sensor data attribute | Type | Description |
|
|
|
|
| ----------------------- | ----------- | ----------- |
|
|
|
|
| `frame_number` | int | Frame count when the measurement took place |
|
|
|
|
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
|
|
|
|
| `actor` | carla.Actor | Actor that invaded another lane ("self" actor) |
|
|
|
|
| `crossed_lane_markings` | carla.LaneMarking list | List of lane markings that have been crossed |
|