First iteration on sensors

This commit is contained in:
sergi-e 2020-02-18 17:58:02 +01:00 committed by Marc Garcia Puig
parent 9cf27b1e90
commit 5770ae29eb
8 changed files with 492 additions and 294 deletions

View File

@ -145,7 +145,7 @@ Actors are not destroyed when the Python script finishes, they remain and the wo
##Types of actors
<h4>Sensors</h4>
Sensors are actors that produce a stream of data. They are so important and vast that they will be properly written about on their own section: [4th. Sensors and data](cameras_and_sensors.md).
Sensors are actors that produce a stream of data. They are so important and vast that they will be properly written about on their own section: [4th. Sensors and data](core_sensors.md).
So far, let's just take a look at a common sensor spawning routine:
```py

View File

@ -63,4 +63,24 @@ Some more complex elements and features in CARLA are listed here to make newcome
- **Recorder:** CARLA feature that allows for reenacting previous simulations using snapshots of the world.
- **Rendering options:** Some advanced configuration options in CARLA that allow for different graphics quality, off-screen rendering and a no-rendering mode.
- **Simulation time and synchrony:** Everything regarding simulation time and how the server runs the simulation depending on its clients.
- **Traffic manager:** This module is in charge of every vehicle set to autopilot mode. It conducts the traffic in the city for the simulation to look like a real urban environment.
- **Traffic manager:** This module is in charge of every vehicle set to autopilot mode. It conducts the traffic in the city for the simulation to look like a real urban environment.
---------------
That sums up the basics necessary to understand CARLA.
However, these broad strokes are just a big picture of the system. The next step should be learning more about the world of the simulation and the clients connecting to it. Keep reading to learn more, or visit the forum to post any doubts or suggestions that have come to mind during this read:
<div style="text-align: center">
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="forum.carla.org" target="_blank" class="btn btn-neutral" title="CARLA forum">
CARLA forum</a>
</p>
</div>
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="../core_world" target="_blank" class="btn btn-neutral" title="1st. World and client">
1st. World and client</a>
</p>
</div>
</div>

View File

@ -157,7 +157,7 @@ CARLA forum</a>
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="../cameras_and_sensors" target="_blank" class="btn btn-neutral" title="4th. Sensors and data">
<a href="../core_sensors" target="_blank" class="btn btn-neutral" title="4th. Sensors and data">
4th. Sensors and data</a>
</p>
</div>

180
Docs/core_sensors.md Normal file
View File

@ -0,0 +1,180 @@
<h1>4th. Sensors and data</h1>
The last step in this introduction to CARLA is learning about sensors, which allow retrieving data from the surroundings and are therefore crucial to using CARLA as a learning environment for driving agents.
The first part of this page summarizes everything necessary to start handling sensors, including some basic information about the different types available and a step-by-step walkthrough of their life cycle. Use this to experiment, and then come back when necessary to read more about each specific type of sensor.
* [__Sensors step-by-step__](#sensors-step-by-step):
* Setting
* Spawning
* Listening
* Destroying
* [__Types of sensors__](#types-of-sensors)
* Cameras
* Detectors
* Other
---------------
##Sensors step-by-step
The class [carla.Sensor](python_api.md#carla.Sensor) defines a special type of actor able to measure and stream data.
* __What is this data?__ It varies a lot depending on the type of sensor, but the data is always defined as an inherited class of the general [carla.SensorData](python_api.md#carla.SensorData).
* __When do they retrieve the data?__ Either on every simulation step or when a certain event is registered, depending on the type of sensor.
* __How do they retrieve the data?__ Every sensor has a `listen()` method that receives and manages the data.
Although they are very different from each other, the way the user manages every sensor is quite similar.
<h4>Setting</h4>
As with every other actor, the first step is to find the proper blueprint in the library and set specific attributes to get the desired results. This is essential when handling sensors, as their capabilities depend on the way these attributes are set. The attributes are listed in the blueprint library, and a detailed explanation of each type of sensor is provided later on this page.
The following example sets up a dashboard HD camera that will later be attached to a vehicle.
```py
# Find the blueprint of the sensor.
blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
# Modify the attributes of the blueprint to set image resolution and field of view.
blueprint.set_attribute('image_size_x', '1920')
blueprint.set_attribute('image_size_y', '1080')
blueprint.set_attribute('fov', '110')
# Set the time in seconds between sensor captures
blueprint.set_attribute('sensor_tick', '1.0')
```
<h4>Spawning</h4>
Sensors are also spawned like any other actor, only this time the two optional parameters, `attachment_to` and `attachment_type`, are crucial. Sensors should be attached to another actor, usually a vehicle, to follow it around and gather information about its surroundings.
There are two types of attachment:
* __Rigid__: the sensor's location updates strictly with regard to its parent. Cameras may show "little hops", as their movement is not eased.
* __SpringArm__: movement is smoothed with little accelerations and decelerations.
```py
transform = carla.Transform(carla.Location(x=0.8, z=1.7))
sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle)
```
!!! Important
When spawning an actor with attachment, remember that its location should be relative to its parent, not global.
<h4>Listening</h4>
Every sensor has a [`listen()`](python_api.md#carla.Sensor.listen) method that registers the callback to be run each time the sensor retrieves data.
This method has one argument, `callback`: a lambda expression or function defining what the sensor should do when data is retrieved.
The callback, in turn, must have at least one argument, which will be the retrieved data:
```py
# do_something() will be called each time a new image is generated by the camera.
sensor.listen(lambda data: do_something(data))
...
# This supposed collision sensor would print every time a collision is detected.
def callback(event):
    # A carla.CollisionEvent provides the actor that was hit as 'other_actor'.
    print('Collision with: %s' % event.other_actor.type_id)

sensor02.listen(callback)
```
!!! note
The __is_listening__ attribute of a sensor makes it possible to enable/disable data listening at will.
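For instance, the data stream can be paused without destroying the sensor. A minimal sketch, assuming `sensor` is an actor that is already listening:
```py
# Stop invoking the callback; the sensor actor itself stays alive in the simulation.
if sensor.is_listening:
    sensor.stop()
```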
Most sensor data objects have a function for saving the measurements to disk, so they can later be used in other environments (a minimal saving sketch is shown after the table below).
Sensor data differs a lot between sensor types, but it is always tagged with:
| Sensor data attribute | Type | Description |
| --------------------- | ------ | ----------- |
| `frame` | int | Frame number when the measurement took place. |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode. |
| `transform` | carla.Transform | World reference of the sensor at the time of the measurement. |
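The saving function mentioned above can be called straight from the callback. A minimal sketch, assuming `camera` is an already spawned `sensor.camera.rgb` actor and `output/` is an arbitrary folder:
```py
# Save every image retrieved by the camera to disk, named after its frame number.
camera.listen(lambda image: image.save_to_disk('output/%06d.png' % image.frame))
```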
---------------
##Types of sensors
<h4>Cameras</h4>
These sensors take a shot of the world from their point of view. The helper class [carla.ColorConverter](python_api.md#carla.ColorConverter) can then be used to alter this image and retrieve different types of information.
__Retrieve data:__ every simulation step.
| Sensor | Output | Overview |
| ---------- | ---------- | ---------- |
| Depth | [carla.Image](python_api.md#carla.Image) | Combines the photo with the distance of the elements on scene to provide a gray-scale depth map. |
| RGB | [carla.Image](python_api.md#carla.Image) | Provides clear vision of the surroundings. Looks like a normal photo of the scene. |
| Semantic segmentation | [carla.Image](python_api.md#carla.Image) | Uses the tags of the different actors in the photo to group the elements by color. |
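Camera images can also be converted while saving them, using the [carla.ColorConverter](python_api.md#carla.ColorConverter) helper. A minimal sketch, assuming `sem_camera` is an already spawned `sensor.camera.semantic_segmentation` actor:
```py
# Save semantic segmentation frames translated to the CityScapes color palette.
sem_camera.listen(lambda image: image.save_to_disk(
    'output/%06d.png' % image.frame, carla.ColorConverter.CityScapesPalette))
```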
<h4>Detectors</h4>
Sensors that retrieve data when a parent object they are attached to registers a specific event in the simulation.
__Retrieve data:__ when triggered.
| Sensor | Output | Overview |
| ---------- | ---------- | ---------- |
| Collision | [carla.CollisionEvent](python_api.md#carla.CollisionEvent) | Retrieves collisions between its parent and other actors. |
| Lane invasion | [carla.LaneInvasionEvent](python_api.md#carla.LaneInvasionEvent) | Registers when its parent crosses a lane marking. |
| Obstacle | [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleDetectionEvent) | Detects possible obstacles ahead of its parent. |
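Spawning a detector follows the same routine as any other sensor. A minimal sketch for the collision detector, assuming `my_vehicle` is an already spawned vehicle actor:
```py
# Attach a collision detector to the vehicle and report what it collides with.
collision_bp = world.get_blueprint_library().find('sensor.other.collision')
collision_sensor = world.spawn_actor(collision_bp, carla.Transform(), attach_to=my_vehicle)
collision_sensor.listen(lambda event: print('Collision with %s' % event.other_actor.type_id))
```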
<h4>Other</h4>
This group gathers sensors with different functionalities: navigation, measuring the physical properties of an object, and providing 2D and 3D models of the scene.
__Retrieve data:__ every simulation step.
| Sensor | Output | Overview |
| ---------- | ---------- | ---------- |
| GNSS | [carla.GNSSMeasurement](python_api.md#carla.GNSSMeasurement) | Retrieves the geolocation of the sensor. |
| IMU | [carla.IMUMeasurement](python_api.md#carla.IMUMeasurement) | Comprises an accelerometer, a gyroscope and a compass. |
| Lidar raycast | [carla.LidarMeasurement](python_api.md#carla.LidarMeasurement) | A rotating lidar retrieving a cloud of points to generate a 3D model of the surroundings. |
| Radar | [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) | 2D point map modelling elements in sight and their movement relative to the sensor. |
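As a quick example for this group, a GNSS sensor could be attached to a vehicle to print its geolocation once per second. A minimal sketch, assuming `my_vehicle` is an already spawned vehicle actor:
```py
# Attach a GNSS sensor that ticks every second and print latitude and longitude.
gnss_bp = world.get_blueprint_library().find('sensor.other.gnss')
gnss_bp.set_attribute('sensor_tick', '1.0')
gnss_sensor = world.spawn_actor(gnss_bp, carla.Transform(), attach_to=my_vehicle)
gnss_sensor.listen(lambda data: print('lat: %f, lon: %f' % (data.latitude, data.longitude)))
```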
---------------
That is a wrap on sensors and how they retrieve simulation data.
There is still a lot to learn about CARLA, but this has been the last of the first steps. Now it is time to really discover the possibilities of the simulator.
Here is some brief guidance on the different paths that are open right now:
* __For those who want to gain some practice__:
> Python Cookbook
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="python_cookbook.md" target="_blank" class="btn btn-neutral" title="Python cookbook">
Python cookbook</a>
</p>
</div>
* __For those who want to continue learning__:
> Advanced step
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="configuring_the_simulation.md" target="_blank" class="btn btn-neutral" title="Configuring the simulation">
Configuring the simulation</a>
</p>
</div>
* __For those who want to experiment freely__:
> References
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="python_api.md" target="_blank" class="btn btn-neutral" title="Go to the Python API">
Python API reference</a>
</p>
</div>
* __For those who have something to say__:
> Forum
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="https://forum.carla.org/" target="_blank" class="btn btn-neutral" title="Go to the CARLA forum">
CARLA forum</a>
</p>
</div>

View File

@ -10,7 +10,7 @@
* [Python API tutorial](core_concepts.md)
* [Rendering options](rendering_options.md)
* [Simulation time and synchrony](simulation_time_and_synchrony.md)
* [Cameras and sensors](cameras_and_sensors.md)
* [Cameras and sensors](core_sensors.md)
* [F.A.Q.](faq.md)
<h3>Building from source</h3>

View File

@ -335,7 +335,7 @@ In this example we have attached a camera to a vehicle, and told the camera to
save to disk each of the images that are going to be generated.
The full list of sensors and their measurement is explained in
[Cameras and sensors](cameras_and_sensors.md).
[Cameras and sensors](core_sensors.md).
#### Other actors

View File

@ -1,61 +1,297 @@
<h1>Cameras and sensors</h1>
<h1>Sensor references</h1>
![Client window](img/client_window.png)
* [__Collision detector__](#collision-detector)
* [__Depth camera__](#depth-camera)
* [__GNSS sensor__](#gnss-sensor)
* [__IMU sensor__](#imu-sensor)
* [__Lane invasion detector__](#lane-invasion-sensor)
* [__Lidar raycast sensor__](#lidar-sensor)
* [__Obstacle detector__](#obstacle-detector)
* [__Radar sensor__](#radar-sensor)
* [__RGB camera__](#rgb-camera)
* [__Semantic segmentation camera__](#semantic-segmentation-camera)
Sensors are a special type of actor able to measure and stream data. All the
sensors have a [`listen`](python_api.md#carla.Sensor.listen) method that registers the
callback function that will be called each time the sensor produces a new measurement.
Sensors are typically attached to vehicles and produce data either each simulation update,
or when a certain event is registered.
The following Python excerpt shows how you would typically attach a sensor to a
vehicle, in this case we are adding a dashboard HD camera to a vehicle.
---------------
##Collision detector
```py
# Find the blueprint of the sensor.
blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
# Modify the attributes of the blueprint to set image resolution and field of view.
blueprint.set_attribute('image_size_x', '1920')
blueprint.set_attribute('image_size_y', '1080')
blueprint.set_attribute('fov', '110')
# Set the time in seconds between sensor captures
blueprint.set_attribute('sensor_tick', '1.0')
# Provide the position of the sensor relative to the vehicle.
transform = carla.Transform(carla.Location(x=0.8, z=1.7))
# Tell the world to spawn the sensor, don't forget to attach it to your vehicle actor.
sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle)
# Subscribe to the sensor stream by providing a callback function, this function is
# called each time a new image is generated by the sensor.
sensor.listen(lambda data: do_something(data))
```
* __Blueprint:__ sensor.other.collision
* __Output:__ [carla.CollisionEvent](python_api.md#carla.CollisionEvent)
Note that each sensor has a different set of attributes and produces a different type of data. However, the data produced by a sensor always comes tagged with:
When attached to an actor, this sensor registers an event each time its parent actor collides against something in the world. This sensor does not have any configurable attribute.
| Sensor data attribute | Type | Description |
| --------------------- | ------ | ----------- |
| `frame` | int | Frame number when the measurement took place |
!!! note
This sensor creates "fake" actors when it collides with something that is not an actor, so that the semantic tags of the hit object can be retrieved.
<h4>Output attributes</h4>
This sensor produces a
[`carla.CollisionEvent`](python_api.md#carla.CollisionEvent)
object for each collision registered
| Sensor data attribute | Type | Description |
| ---------------------- | ----------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `actor` | carla.Actor | Actor that measured the collision ("self" actor) |
| `other_actor` | carla.Actor | Actor against whom we collide |
| `normal_impulse` | carla.Vector3D | Normal impulse result of the collision |
Note that several collision events might be registered during a single
simulation update.
---------------
##Depth camera
* __Blueprint:__ sensor.camera.depth
* __Output:__ [carla.Image](python_api.md#carla.Image)
[carla.ColorConverter](python_api.md#carla.ColorConverter)
![ImageDepth](img/capture_depth.png)
The "Depth" camera provides a view over the scene codifying the distance of each
pixel to the camera (also known as **depth buffer** or **z-buffer**).
<h4>Basic camera attributes</h4>
| Blueprint attribute | Type | Default | Description |
| ------------------- | ---- | ------- | ----------- |
| `image_size_x` | int | 800 | Image width in pixels |
| `image_size_y` | int | 600 | Image height in pixels |
| `fov` | float | 90.0 | Horizontal field of view in degrees |
| `sensor_tick` | float | 0.0 | Seconds between sensor captures (ticks) |
<h4>Camera lens distortion attributes</h4>
| Blueprint attribute | Type | Default | Description |
|---------------------|------|---------|-------------|
| `lens_circle_falloff` | float | 5.0 | Range: [0.0, 10.0] |
| `lens_circle_multiplier` | float | 0.0 | Range: [0.0, 10.0] |
| `lens_k` | float | -1.0 | Range: [-inf, inf] |
| `lens_kcube` | float | 0.0 | Range: [-inf, inf] |
| `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] |
| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
<h4>Output attributes</h4>
This sensor produces [`carla.Image`](python_api.md#carla.Image)
objects.
| Sensor data attribute | Type | Description |
| --------------------- | ---- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `width` | int | Image width in pixels |
| `height` | int | Image height in pixels |
| `fov` | float | Horizontal field of view in degrees |
| `raw_data` | bytes | Array of BGRA 32-bit pixels |
Most sensor data objects, like images and lidar measurements, have a function
for saving the measurements to disk.
This is the list of sensors currently available
The image codifies the depth in 3 channels of the RGB color space, from less to
more significant bytes: R -> G -> B. The actual distance in meters can be
decoded with
* [sensor.camera.rgb](#sensorcamerargb)
* [sensor.camera.depth](#sensorcameradepth)
* [sensor.camera.semantic_segmentation](#sensorcamerasemantic_segmentation)
* [sensor.lidar.ray_cast](#sensorlidarray_cast)
* [sensor.other.collision](#sensorothercollision)
* [sensor.other.lane_invasion](#sensorotherlane_invasion)
* [sensor.other.obstacle](#sensorotherobstacle)
```
normalized = (R + G * 256 + B * 256 * 256) / (256 * 256 * 256 - 1)
in_meters = 1000 * normalized
```
Camera sensors use [`carla.ColorConverter`](python_api.md#carla.ColorConverter) to convert the pixels of the original image.
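As an illustration of the decoding above, the raw BGRA buffer could be turned into a depth map in meters with numpy. A minimal sketch, assuming the callback receives a [carla.Image](python_api.md#carla.Image) produced by this sensor:
```py
import numpy as np

def depth_in_meters(image):
    # image.raw_data is a flat array of BGRA 32-bit pixels.
    bgra = np.frombuffer(image.raw_data, dtype=np.uint8)
    bgra = bgra.reshape((image.height, image.width, 4)).astype(np.float32)
    # Channels are stored as B, G, R, A; apply R + G * 256 + B * 256 * 256.
    normalized = (bgra[..., 2] + bgra[..., 1] * 256.0 + bgra[..., 0] * 65536.0) / (256.0 ** 3 - 1)
    return 1000.0 * normalized
```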
---------------
##GNSS sensor
sensor.camera.rgb
-----------------
* __Blueprint:__ sensor.other.gnss
* __Output:__ [carla.GNSSMeasurement](python_api.md#carla.GNSSMeasurement)
When attached to an actor, this sensor reports its current GNSS position. The GNSS position is internally calculated by adding the metric position to an initial geo-reference location defined within the OpenDRIVE map definition.
<h4>Output attributes</h4>
This sensor produces
[`carla.GnssMeasurement`](python_api.md#carla.GnssMeasurement)
objects.
| Sensor data attribute | Type | Description |
| ---------------------- | ----------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `latitude` | double | Latitude position of the actor |
| `longitude` | double | Longitude position of the actor |
| `altitude` | double | Altitude of the actor |
---------------
##IMU sensor
* __Blueprint:__ sensor.other.imu
* __Output:__ [carla.IMUMeasurement](python_api.md#carla.IMUMeasurement)
When attached to an actor, this sensor grants access to the actor's accelerometer, gyroscope and compass (a minimal listening sketch is shown after the output table below).
<h4>Output attributes</h4>
This sensor produces
[`carla.IMUMeasurement`](python_api.md#carla.IMUMeasurement)
objects.
| Sensor data attribute | Type | Description |
| --------------------- | --------------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world |
| `accelerometer` | carla.Vector3D | Measures linear acceleration in `m/s^2` |
| `gyroscope` | carla.Vector3D | Measures angular velocity in `rad/sec` |
| `compass` | float | Orientation with respect to the North (`(0.0, -1.0, 0.0)` in Unreal) in radians |
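A minimal listening sketch for this sensor, assuming `my_vehicle` is an already spawned vehicle actor:
```py
# Attach an IMU and print the accelerometer vector and compass orientation.
imu_bp = world.get_blueprint_library().find('sensor.other.imu')
imu_sensor = world.spawn_actor(imu_bp, carla.Transform(), attach_to=my_vehicle)
imu_sensor.listen(lambda data: print(
    'accelerometer: %s, compass: %.2f rad' % (data.accelerometer, data.compass)))
```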
---------------
##Lane invasion detector
* __Blueprint:__ sensor.other.lane_invasion
* __Output:__ [carla.LaneInvasionEvent](python_api.md#carla.LaneInvasionEvent)
> _This sensor is a work in progress, currently very limited._
When attached to an actor, this sensor registers an event each time the actor crosses a lane marking. This sensor is somewhat special, as it works fully on the client side. The lane invasion detector uses the road data of the active map to determine whether a vehicle is invading another lane. This information is based on the OpenDRIVE file provided by the map, and therefore it is subject to the fidelity of the OpenDRIVE description. In some places there might be discrepancies between the lanes visible to the cameras and the lanes registered by this sensor.
This sensor does not have any configurable attribute.
<h4>Output attributes</h4>
This sensor produces a
[`carla.LaneInvasionEvent`](python_api.md#carla.LaneInvasionEvent)
object for each lane marking crossed by the actor
| Sensor data attribute | Type | Description |
| ----------------------- | ----------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `actor` | carla.Actor | Actor that invaded another lane ("self" actor) |
| `crossed_lane_markings` | carla.LaneMarking list | List of lane markings that have been crossed |
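A minimal usage sketch, assuming `my_vehicle` is an already spawned vehicle actor:
```py
# Report which lane marking types were crossed each time the event fires.
lane_bp = world.get_blueprint_library().find('sensor.other.lane_invasion')
lane_sensor = world.spawn_actor(lane_bp, carla.Transform(), attach_to=my_vehicle)

def on_invasion(event):
    # crossed_lane_markings is a list of carla.LaneMarking objects.
    types = set(str(marking.type) for marking in event.crossed_lane_markings)
    print('Crossed lane markings: %s' % ', '.join(types))

lane_sensor.listen(on_invasion)
```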
---------------
##Lidar raycast sensor
* __Blueprint:__ sensor.lidar.ray_cast
* __Output:__ [carla.LidarMeasurement](python_api.md#carla.LidarMeasurement)
![LidarPointCloud](img/lidar_point_cloud.gif)
This sensor simulates a rotating Lidar implemented using ray-casting. The points are computed by adding a laser for each channel distributed in the vertical FOV. The rotation is then simulated by computing the horizontal angle the Lidar rotated during this frame, and doing a ray-cast for each point that each laser was supposed to generate this frame: `points_per_second / (FPS * channels)`.
<h4>Lidar attributes</h4>
| Blueprint attribute | Type | Default | Description |
| -------------------- | ---- | ------- | ----------- |
| `channels` | int | 32 | Number of lasers |
| `range` | float | 10.0 | Maximum measurement distance in meters _(<=0.9.6: is in centimeters)_ |
| `points_per_second` | int | 56000 | Points generated by all lasers per second |
| `rotation_frequency` | float | 10.0 | Lidar rotation frequency |
| `upper_fov` | float | 10.0 | Angle in degrees of the upper most laser |
| `lower_fov` | float | -30.0 | Angle in degrees of the lower most laser |
| `sensor_tick` | float | 0.0 | Seconds between sensor captures (ticks) |
<h4>Output attributes</h4>
This sensor produces
[`carla.LidarMeasurement`](python_api.md#carla.LidarMeasurement)
objects.
| Sensor data attribute | Type | Description |
| -------------------------- | ---------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `horizontal_angle` | float | Angle in XY plane of the lidar this frame (in radians) |
| `channels` | int | Number of channels (lasers) of the lidar |
| `get_point_count(channel)` | int | Number of points per channel captured this frame |
| `raw_data` | bytes | Array of 32-bit floats (XYZ of each point) |
The object also acts as a Python list of [`carla.Location`](python_api.md#carla.Location)
```py
for location in lidar_measurement:
    print(location)
```
A Lidar measurement contains a packet with all the points generated during a
`1/FPS` interval. During this interval the physics is not updated so all the
points in a measurement reflect the same "static picture" of the scene.
!!! tip
Running the simulator at
[fixed time-step](configuring_the_simulation.md#fixed-time-step) it is
possible to tune the horizontal angle of each measurement. By adjusting the
frame rate and the rotation frequency it is possible, for instance, to get a
360 view each measurement.
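Beyond iterating the measurement as a list of locations, the `raw_data` buffer can be unpacked with numpy. A minimal sketch, assuming the XYZ float layout described in the table above and an already spawned `lidar_sensor`:
```py
import numpy as np

def lidar_to_array(measurement):
    # raw_data is a flat array of 32-bit floats: x, y, z for each point.
    points = np.frombuffer(measurement.raw_data, dtype=np.float32)
    return points.reshape((-1, 3))

lidar_sensor.listen(lambda data: print('points this frame: %d' % len(lidar_to_array(data))))
```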
---------------
##Obstacle detector
* __Blueprint:__ sensor.other.obstacle
* __Output:__ [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleDetectionEvent)
When attached to an actor, this sensor reports whether there are obstacles ahead.
!!! note
This sensor creates "fake" actors when it detects obstacles that are not actors, so that the semantic tags of the hit object can be retrieved.
| Blueprint attribute | Type | Default | Description |
| -------------------- | ---- | ------- | ----------- |
| `distance` | float | 5 | Distance to throw the trace to |
| `hit_radius` | float | 0.5 | Radius of the trace |
| `only_dynamics` | bool | false | If true, the trace will only look for dynamic objects |
| `debug_linetrace` | bool | false | If true, the trace will be visible |
| `sensor_tick` | float | 0.0 | Seconds between sensor captures (ticks) |
<h4>Output attributes</h4>
This sensor produces
[`carla.ObstacleDetectionEvent`](python_api.md#carla.ObstacleDetectionEvent)
objects.
| Sensor data attribute | Type | Description |
| ---------------------- | ----------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world |
| `actor` | carla.Actor | Actor that detected the obstacle ("self" actor) |
| `other_actor` | carla.Actor | Actor detected as obstacle |
| `distance` | float | Distance from actor to other_actor |
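A minimal configuration sketch, assuming `my_vehicle` is an already spawned vehicle actor:
```py
# Look for dynamic obstacles up to 10 meters ahead of the parent vehicle.
obstacle_bp = world.get_blueprint_library().find('sensor.other.obstacle')
obstacle_bp.set_attribute('distance', '10')
obstacle_bp.set_attribute('only_dynamics', 'true')
obstacle_sensor = world.spawn_actor(obstacle_bp, carla.Transform(), attach_to=my_vehicle)
obstacle_sensor.listen(lambda event: print(
    'Obstacle %s at %.1f m' % (event.other_actor.type_id, event.distance)))
```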
---------------
##Radar sensor
* __Blueprint:__ sensor.other.radar
* __Output:__ [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement)
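A minimal usage sketch, assuming a [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) can be iterated over detections exposing `depth` and `velocity`, and that `my_vehicle` is an already spawned vehicle actor:
```py
# Attach a radar and print the range and relative velocity of each detection.
radar_bp = world.get_blueprint_library().find('sensor.other.radar')
radar_sensor = world.spawn_actor(radar_bp, carla.Transform(), attach_to=my_vehicle)

def on_radar(measurement):
    for detection in measurement:
        print('depth: %.1f m, velocity: %.1f m/s' % (detection.depth, detection.velocity))

radar_sensor.listen(on_radar)
```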
---------------
##RGB camera
* __Blueprint:__ sensor.camera.rgb
* __Output:__ [carla.Image](python_api.md#carla.Image)
[carla.ColorConverter](python_api.md#carla.ColorConverter)
![ImageRGB](img/capture_scenefinal.png)
@ -158,61 +394,13 @@ objects.
| `fov` | float | Horizontal field of view in degrees |
| `raw_data` | bytes | Array of BGRA 32-bit pixels |
sensor.camera.depth
-------------------
---------------
##Semantic segmentation camera
![ImageDepth](img/capture_depth.png)
* __Blueprint:__ sensor.camera.semantic_segmentation
* __Output:__ [carla.Image](python_api.md#carla.Image)
The "Depth" camera provides a view over the scene codifying the distance of each
pixel to the camera (also known as **depth buffer** or **z-buffer**).
<h4>Basic camera attributes</h4>
| Blueprint attribute | Type | Default | Description |
| ------------------- | ---- | ------- | ----------- |
| `image_size_x` | int | 800 | Image width in pixels |
| `image_size_y` | int | 600 | Image height in pixels |
| `fov` | float | 90.0 | Horizontal field of view in degrees |
| `sensor_tick` | float | 0.0 | Seconds between sensor captures (ticks) |
<h4>Camera lens distortion attributes</h4>
| Blueprint attribute | Type | Default | Description |
|---------------------|------|---------|-------------|
| `lens_circle_falloff` | float | 5.0 | Range: [0.0, 10.0] |
| `lens_circle_multiplier` | float | 0.0 | Range: [0.0, 10.0] |
| `lens_k` | float | -1.0 | Range: [-inf, inf] |
| `lens_kcube` | float | 0.0 | Range: [-inf, inf] |
| `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] |
| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] |
<h4>Output attributes</h4>
This sensor produces [`carla.Image`](python_api.md#carla.Image)
objects.
| Sensor data attribute | Type | Description |
| --------------------- | ---- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `width` | int | Image width in pixels |
| `height` | int | Image height in pixels |
| `fov` | float | Horizontal field of view in degrees |
| `raw_data` | bytes | Array of BGRA 32-bit pixels |
The image codifies the depth in 3 channels of the RGB color space, from less to
more significant bytes: R -> G -> B. The actual distance in meters can be
decoded with
```
normalized = (R + G * 256 + B * 256 * 256) / (256 * 256 * 256 - 1)
in_meters = 1000 * normalized
```
sensor.camera.semantic_segmentation
-----------------------------------
[carla.ColorConverter](python_api.md#carla.ColorConverter)
![ImageSemanticSegmentation](img/capture_semseg.png)
@ -287,193 +475,3 @@ _"Unreal/CarlaUE4/Content/Static/Pedestrians"_ folder it's tagged as pedestrian.
the C++ code. Add a new label to the `ECityObjectLabel` enum in "Tagger.h",
and its corresponding filepath check inside `GetLabelByFolderName()`
function in "Tagger.cpp".
sensor.lidar.ray_cast
---------------------
![LidarPointCloud](img/lidar_point_cloud.gif)
This sensor simulates a rotating Lidar implemented using ray-casting. The points are computed by adding a laser for each channel distributed in the vertical FOV. The rotation is then simulated by computing the horizontal angle the Lidar rotated during this frame, and doing a ray-cast for each point that each laser was supposed to generate this frame: `points_per_second / (FPS * channels)`.
<h4>Lidar attributes</h4>
| Blueprint attribute | Type | Default | Description |
| -------------------- | ---- | ------- | ----------- |
| `channels` | int | 32 | Number of lasers |
| `range` | float | 10.0 | Maximum measurement distance in meters _(<=0.9.6: is in centimeters)_ |
| `points_per_second` | int | 56000 | Points generated by all lasers per second |
| `rotation_frequency` | float | 10.0 | Lidar rotation frequency |
| `upper_fov` | float | 10.0 | Angle in degrees of the upper most laser |
| `lower_fov` | float | -30.0 | Angle in degrees of the lower most laser |
| `sensor_tick` | float | 0.0 | Seconds between sensor captures (ticks) |
<h4>Output attributes</h4>
This sensor produces
[`carla.LidarMeasurement`](python_api.md#carla.LidarMeasurement)
objects.
| Sensor data attribute | Type | Description |
| -------------------------- | ---------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `horizontal_angle` | float | Angle in XY plane of the lidar this frame (in radians) |
| `channels` | int | Number of channels (lasers) of the lidar |
| `get_point_count(channel)` | int | Number of points per channel captured this frame |
| `raw_data` | bytes | Array of 32-bits floats (XYZ of each point) |
The object also acts as a Python list of [`carla.Location`](python_api.md#carla.Location)
```py
for location in lidar_measurement:
    print(location)
```
A Lidar measurement contains a packet with all the points generated during a
`1/FPS` interval. During this interval the physics is not updated so all the
points in a measurement reflect the same "static picture" of the scene.
!!! tip
Running the simulator at
[fixed time-step](simulation_time_and_synchrony.md) it is
possible to tune the horizontal angle of each measurement. By adjusting the
frame rate and the rotation frequency it is possible, for instance, to get a
360 view each measurement.
sensor.other.collision
----------------------
When attached to an actor, this sensor registers an event each time the
actor collides against something in the world. This sensor does not have any
configurable attribute.
!!! note
This sensor creates "fake" actors when it collides with something that is not an actor,
this is so we can retrieve the semantic tags of the object we hit.
<h4>Output attributes</h4>
This sensor produces a
[`carla.CollisionEvent`](python_api.md#carla.CollisionEvent)
object for each collision registered
| Sensor data attribute | Type | Description |
| ---------------------- | ----------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `actor` | carla.Actor | Actor that measured the collision ("self" actor) |
| `other_actor` | carla.Actor | Actor against whom we collide |
| `normal_impulse` | carla.Vector3D | Normal impulse result of the collision |
Note that several collision events might be registered during a single
simulation update.
sensor.other.lane_invasion
--------------------------
> _This sensor is a work in progress, currently very limited._
When attached to an actor, this sensor registers an event each time the
actor crosses a lane marking. This sensor is somewhat special, as it works fully
on the client-side. The lane invasion uses the road data of the active map to
determine whether a vehicle is invading another lane. This information is based
on the OpenDrive file provided by the map, therefore it is subject to the
fidelity of the OpenDrive description. In some places there might be
discrepancies between the lanes visible by the cameras and the lanes registered
by this sensor.
This sensor does not have any configurable attribute.
<h4>Output attributes</h4>
This sensor produces a
[`carla.LaneInvasionEvent`](python_api.md#carla.LaneInvasionEvent)
object for each lane marking crossed by the actor
| Sensor data attribute | Type | Description |
| ----------------------- | ----------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `actor` | carla.Actor | Actor that invaded another lane ("self" actor) |
| `crossed_lane_markings` | carla.LaneMarking list | List of lane markings that have been crossed |
sensor.other.gnss
-----------------
When attached to an actor, this sensor reports its current GNSS position. The GNSS position is internally calculated by adding the metric position to an initial geo-reference location defined within the OpenDRIVE map definition.
<h4>Output attributes</h4>
This sensor produces
[`carla.GnssMeasurement`](python_api.md#carla.GnssMeasurement)
objects.
| Sensor data attribute | Type | Description |
| ---------------------- | ----------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world coordinates of the sensor at the time of the measurement |
| `latitude` | double | Latitude position of the actor |
| `longitude` | double | Longitude position of the actor |
| `altitude` | double | Altitude of the actor |
sensor.other.obstacle
---------------------
When attached to an actor, this sensor reports whether there are obstacles ahead.
!!! note
This sensor creates "fake" actors when it detects obstacles with something that is not an actor,
this is so we can retrieve the semantic tags of the object we hit.
| Blueprint attribute | Type | Default | Description |
| -------------------- | ---- | ------- | ----------- |
| `distance` | float | 5 | Distance to throw the trace to |
| `hit_radius` | float | 0.5 | Radius of the trace |
| `only_dynamics` | bool | false | If true, the trace will only look for dynamic objects |
| `debug_linetrace` | bool | false | If true, the trace will be visible |
| `sensor_tick` | float | 0.0 | Seconds between sensor captures (ticks) |
<h4>Output attributes</h4>
This sensor produces
[`carla.ObstacleDetectionEvent`](python_api.md#carla.ObstacleDetectionEvent)
objects.
| Sensor data attribute | Type | Description |
| ---------------------- | ----------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world |
| `actor` | carla.Actor | Actor that detected the obstacle ("self" actor) |
| `other_actor` | carla.Actor | Actor detected as obstacle |
| `distance` | float | Distance from actor to other_actor |
sensor.other.imu
----------------
When attached to an actor, this sensor grants access to the actor's accelerometer, gyroscope and compass.
<h4>Output attributes</h4>
This sensor produces
[`carla.IMUMeasurement`](python_api.md#carla.IMUMeasurement)
objects.
| Sensor data attribute | Type | Description |
| --------------------- | --------------- | ----------- |
| `frame` | int | Frame number when the measurement took place |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode |
| `transform` | carla.Transform | Transform in world |
| `accelerometer` | carla.Vector3D | Measures linear acceleration in `m/s^2` |
| `gyroscope` | carla.Vector3D | Measures angular velocity in `rad/sec` |
| `compass` | float | Orientation with respect to the North (`(0.0, -1.0, 0.0)` in Unreal) in radians |

View File

@ -22,7 +22,7 @@ nav:
- '1st. World and client': 'core_world.md'
- '2nd. Actors and blueprints': 'core_actors.md'
- '3rd. Maps and navigation': 'core_map.md'
- '4th. Sensors and data': 'cameras_and_sensors.md'
- '4th. Sensors and data': 'core_sensors.md'
- Advanced steps:
- 'Recorder': 'recorder_and_playback.md'
- 'Rendering options': 'rendering_options.md'