<h1>4th. Sensors and data</h1>

The last step in this introduction to CARLA is learning about sensors. Sensors allow retrieving data from the surroundings and so are crucial to use CARLA as a learning environment for driving agents.

The first part of this page summarizes everything necessary to start handling sensors, including some basic information about the different types available and a step-by-step guide through their life cycle. Use it to experiment, and come back whenever necessary to read more about each specific type of sensor.

* [__Sensors step-by-step__](#sensors-step-by-step)
    * Setting
    * Spawning
    * Listening
    * Destroying
* [__Types of sensors__](#types-of-sensors)
    * Cameras
    * Detectors
    * Other

---------------
## Sensors step-by-step

The class [carla.Sensor](python_api.md#carla.Sensor) defines a special type of actor able to measure and stream data.

* __What is this data?__ It varies a lot depending on the type of sensor, but the data is always defined as a class inherited from the general [carla.SensorData](python_api.md#carla.SensorData).
* __When do they retrieve the data?__ Either on every simulation step or when a certain event is registered, depending on the type of sensor.
* __How do they retrieve the data?__ Every sensor has a `listen()` method that receives and manages the data.

Although sensors differ a lot from one another, the way the user manages them is quite similar.

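As a quick preview of the cycle described in the following subsections, the whole life of a sensor fits in a few calls. This is only a sketch; `my_vehicle` and `do_something()` are placeholders for an existing vehicle and a user-defined callback.

```py
# Sketch of the full sensor life cycle: set, spawn, listen, destroy.
blueprint = world.get_blueprint_library().find('sensor.camera.rgb')             # setting
sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=my_vehicle)  # spawning
sensor.listen(lambda data: do_something(data))                                  # listening
...
sensor.destroy()                                                                # destroying
```
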
<h4>Setting</h4>

As with every other actor, the first step is to find the proper blueprint in the library and set specific attributes to get the desired results. This is essential when handling sensors, as their capabilities depend on the way these attributes are set. The available attributes are listed in the blueprint library, and a detailed explanation of each type of sensor is provided later on this page.

The following example sets up a dashboard HD camera that will later be attached to a vehicle.

```py
# Find the blueprint of the sensor.
blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
# Modify the attributes of the blueprint to set image resolution and field of view.
blueprint.set_attribute('image_size_x', '1920')
blueprint.set_attribute('image_size_y', '1080')
blueprint.set_attribute('fov', '110')
# Set the time in seconds between sensor captures.
blueprint.set_attribute('sensor_tick', '1.0')
```

|
||
|
|
||
|
<h4>Spawning</h4>
|
||
|
|
||
|
Sensors are also spawned like any other actor, only this time the two optional parameters, `attachment_to` and `attachment_type` are crucial. They should be attached to another actor, usually a vehicle, to follow it around and gather the information regarding its surroundings.
|
||
|
There are two types of attachment:
|
||
|
|
||
|
* __Rigid__: the sensor's location will update strictly regarding its parent. Cameras may show "little hops" as the moves are not eased.
|
||
|
* __SpringArm__: movement will be smoothed with little accelerations and decelerations.
|
||
|
|
||
|
```py
|
||
|
transform = carla.Transform(carla.Location(x=0.8, z=1.7))
|
||
|
sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle)
|
||
|
```
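The attachment type can also be set explicitly when spawning. A minimal sketch that attaches the same camera with the eased `SpringArm` movement, using the `carla.AttachmentType` enumeration:

```py
# Attach the camera with a SpringArm so that its movement is smoothed.
sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle,
                           attachment_type=carla.AttachmentType.SpringArm)
```
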
!!! Important
    When spawning an actor with attachment, remember that its location should be relative to its parent, not global.

<h4>Listening</h4>

Every sensor has a [`listen()`](python_api.md#carla.Sensor.listen) method that is called every time the sensor retrieves data.

This method has one argument, `callback`, which is a lambda expression or function defining what the sensor should do when data is retrieved.

The callback, in turn, must have at least one argument, which will be the retrieved data:

```py
# do_something() will be called each time a new image is generated by the camera.
sensor.listen(lambda data: do_something(data))

...

# This collision sensor would print a message every time a collision is detected.
def callback(event):
    # event is a carla.CollisionEvent; other_actor is the actor the parent collided with.
    print('Collision with: %s' % event.other_actor.type_id)

sensor02.listen(callback)
```

!!! Note
    The __is_listening__ attribute of a sensor allows enabling and disabling data listening at will.

Most sensor data objects have a function to save the measurements to disk, so they can later be used in other environments.

Sensor data differs a lot between sensor types, but it is always tagged with:

| Sensor data attribute | Type | Description |
| --------------------- | ------ | ----------- |
| `frame` | int | Frame number when the measurement took place. |
| `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode. |
| `transform` | carla.Transform | World reference of the sensor at the time of the measurement. |

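For instance, camera images provide a `save_to_disk()` method. The following sketch stores every frame received by the camera set up above and prints the common attributes; the output path is just an example.

```py
# Print the common tags of each measurement and store the image to disk.
def save_image(image):
    print('Frame %d at %.2f s, sensor at %s'
          % (image.frame, image.timestamp, image.transform.location))
    image.save_to_disk('output/%06d.png' % image.frame)

sensor.listen(save_image)
```
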
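<h4>Destroying</h4>

There is one last step in the list above: destroying. Sensors, like any other actor, remain in the simulation until they are explicitly destroyed, so they should be stopped and destroyed once they are no longer needed. A minimal sketch, assuming `sensor` is the camera spawned earlier:

```py
sensor.stop()     # stop streaming data; the callback passed to listen() is no longer invoked
sensor.destroy()  # remove the sensor from the simulation
```
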
---------------
## Types of sensors

<h4>Cameras</h4>

These sensors take a shot of the world from their point of view and then use a helper class to alter this image and provide different types of information.

__Retrieve data:__ every simulation step.

| Sensor | Output | Overview |
| ---------- | ---------- | ---------- |
| Depth | [carla.Image](python_api.md#carla.Image) | Combines the photo with the distance of the elements on scene to provide a gray-scale depth map. |
| RGB | [carla.Image](python_api.md#carla.Image) | Provides clear vision of the surroundings. Looks like a normal photo of the scene. |
| Semantic segmentation | [carla.Image](python_api.md#carla.Image) | Uses the tags of the different actors in the photo to group the elements by color. |

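Depth and semantic segmentation images are usually post-processed with `carla.ColorConverter` before being displayed or saved. A minimal sketch, assuming a depth camera named `depth_camera` has already been spawned and attached:

```py
# Turn the raw depth data into a gray-scale depth map when saving it to disk.
depth_camera.listen(lambda image: image.save_to_disk(
    'output/depth_%06d.png' % image.frame, carla.ColorConverter.LogarithmicDepth))
```
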
<h4>Detectors</h4>

Sensors that retrieve data when the object they are attached to registers a specific event in the simulation.

__Retrieve data:__ when triggered.

| Sensor | Output | Overview |
| ---------- | ---------- | ---------- |
| Collision | [carla.CollisionEvent](python_api.md#carla.CollisionEvent) | Retrieves collisions between its parent and other actors. |
| Lane invasion | [carla.LaneInvasionEvent](python_api.md#carla.LaneInvasionEvent) | Registers when its parent crosses a lane marking. |
| Obstacle | [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleDetectionEvent) | Detects possible obstacles ahead of its parent. |

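As an illustration, the sketch below attaches a lane invasion detector to the vehicle from the spawning example and reports every marking it crosses. Detectors follow their parent, so an empty transform is commonly used; the variable names are placeholders.

```py
# Attach a lane invasion detector and report the lane markings crossed by the vehicle.
lane_bp = world.get_blueprint_library().find('sensor.other.lane_invasion')
lane_sensor = world.spawn_actor(lane_bp, carla.Transform(), attach_to=my_vehicle)

def on_invasion(event):
    # event.crossed_lane_markings lists the carla.LaneMarking objects that were crossed.
    types = set(str(marking.type) for marking in event.crossed_lane_markings)
    print('Crossed lane markings: %s' % ', '.join(types))

lane_sensor.listen(on_invasion)
```
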
<h4>Other</h4>

This group gathers sensors with different functionalities: navigation, measuring the physical properties of an object, and providing 2D and 3D models of the scene.

__Retrieve data:__ every simulation step.

| Sensor | Output | Overview |
| ---------- | ---------- | ---------- |
| GNSS | [carla.GnssMeasurement](python_api.md#carla.GnssMeasurement) | Retrieves the geolocation of the sensor. |
| IMU | [carla.IMUMeasurement](python_api.md#carla.IMUMeasurement) | Comprises an accelerometer, a gyroscope and a compass. |
| Lidar raycast | [carla.LidarMeasurement](python_api.md#carla.LidarMeasurement) | A rotating lidar that retrieves a cloud of points to generate a 3D model of the surroundings. |
| Radar | [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) | 2D point map that models elements in sight and their movement relative to the sensor. |

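For example, a GNSS sensor can be attached in the same way to stream the geolocation of the vehicle. A minimal sketch, reusing the `my_vehicle` placeholder from the previous examples:

```py
# Attach a GNSS sensor and print the geolocation it reports on every tick.
gnss_bp = world.get_blueprint_library().find('sensor.other.gnss')
gnss_sensor = world.spawn_actor(gnss_bp, carla.Transform(), attach_to=my_vehicle)
gnss_sensor.listen(lambda data: print('lat=%f lon=%f alt=%f'
                                      % (data.latitude, data.longitude, data.altitude)))
```
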
---------------
That is a wrap on sensors and how they retrieve simulation data.

There is still a lot to learn about CARLA, but this has been the last of the first steps. Now it is time to truly discover the possibilities of CARLA.

Here is some brief guidance on the different paths that are now open:

* __For those who want to gain some practice__:

> Python Cookbook

<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="python_cookbook.md" target="_blank" class="btn btn-neutral" title="Python cookbook">
Python cookbook</a>
</p>
</div>

* __For those who want to continue learning__:

> Advanced steps

<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="configuring_the_simulation.md" target="_blank" class="btn btn-neutral" title="Configuring the simulation">
Configuring the simulation</a>
</p>
</div>

* __For those who want to experiment freely__:

> References

<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="python_api.md" target="_blank" class="btn btn-neutral" title="Go to the Python API">
Python API reference</a>
</p>
</div>

* __For those who have something to say__:

> Forum

<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="https://forum.carla.org/" target="_blank" class="btn btn-neutral" title="Go to the CARLA forum">
CARLA forum</a>
</p>
</div>