This page summarizes everything necessary to start handling sensors. It introduces the available types and walks step by step through their life cycle. The specifics for every sensor can be found in the [sensors reference](ref_sensors.md).
* __What is this data?__ It varies a lot depending on the type of sensor, but all data types inherit from the general [carla.SensorData](python_api.md#carla.SensorData).
* __When do they retrieve the data?__ Either on every simulation step or when a certain event is registered, depending on the type of sensor.
* __How do they retrieve the data?__ Every sensor has a `listen()` method to receive and manage the data.
As with every other actor, the first step is to find the blueprint and set its specific attributes. This is essential when handling sensors, since their attributes determine the results obtained. These are detailed in the [sensors reference](ref_sensors.md).
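For instance, setting up an RGB camera blueprint might look like the following minimal sketch. It assumes `world` is an already retrieved [carla.World](python_api.md#carla.World); the attribute names are those listed for this camera in the sensors reference.

```py
# Find the blueprint of the sensor in the library.
blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
# Modify the attributes of the blueprint to set image resolution and field of view.
blueprint.set_attribute('image_size_x', '1920')
blueprint.set_attribute('image_size_y', '1080')
blueprint.set_attribute('fov', '110')
# Set the time in seconds between sensor captures.
blueprint.set_attribute('sensor_tick', '1.0')
```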
When spawning, the arguments `attachment_to` and `attachment_type` are crucial. Sensors should be attached to a parent actor, usually a vehicle, to follow it around and gather information. The attachment type determines how the sensor's position is updated relative to said vehicle (see the spawning sketch after this list).
* __Rigid attachment.__ The sensor's movement is strictly locked to its parent's location. This is the proper attachment to retrieve data from the simulation.
* __SpringArm attachment.__ Movement is eased with small accelerations and decelerations. This attachment is only recommended for recording videos of the simulation, as the movement is smooth and "hops" are avoided when updating the camera's position.
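A minimal spawning sketch, assuming `blueprint` comes from the previous snippet and `my_vehicle` is an already spawned vehicle actor (the `attachment_type` keyword shown here is the rigid default):

```py
import carla

# Place the sensor relative to its parent: on the hood, roughly at windshield height.
transform = carla.Transform(carla.Location(x=0.8, z=1.7))
sensor = world.spawn_actor(
    blueprint,
    transform,
    attach_to=my_vehicle,
    attachment_type=carla.AttachmentType.Rigid)
```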
The `callback` argument of the `listen()` method is a [lambda function](https://www.w3schools.com/python/python_lambda.asp). It describes what the sensor should do when data is retrieved, and takes the retrieved data as its argument.
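As a sketch, assuming `sensor` is the camera spawned above, every image it produces could be saved to disk:

```py
# Save every image received to disk, named after its frame number.
sensor.listen(lambda data: data.save_to_disk('output/%06d.png' % data.frame))
```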
Sensor data differs a lot between sensor types. Take a look at the [sensors reference](ref_sensors.md) to get a detailed explanation. However, all of them are always tagged with some basic information: the frame number when the measurement took place, the simulation timestamp, and the transform of the sensor at that moment.
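Those common tags live on [carla.SensorData](python_api.md#carla.SensorData) itself, so a callback can read them regardless of the sensor type. A minimal sketch:

```py
def print_measurement_info(data):
    # Attributes common to every sensor measurement:
    # - frame: frame count when the measurement took place.
    # - timestamp: simulation seconds elapsed since the episode began.
    # - transform: world transform of the sensor at measurement time.
    print('Frame %d at %.3fs, sensor located at %s'
          % (data.frame, data.timestamp, data.transform.location))

sensor.listen(print_measurement_info)
```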
Cameras take a shot of the world from their point of view. For cameras that return [carla.Image](<../python_api#carlaimage>), you can use the helper class [carla.ColorConverter](python_api.md#carla.ColorConverter) to modify the image to represent different information (a conversion sketch follows the table below).
| Sensor | Output | Overview |
| --- | --- | --- |
| Depth | [carla.Image](<../python_api#carlaimage>) | Renders the depth of the elements in the field of view in a gray-scale map. |
| RGB | [carla.Image](<../python_api#carlaimage>) | Provides clear vision of the surroundings. Looks like a normal photo of the scene. |
| Semantic segmentation | [carla.Image](<../python_api#carlaimage>) | Renders elements in the field of view with a specific color according to their tags. |
| DVS | [carla.DVSEventArray](<../python_api#carladvseventarray>) | Measures changes of brightness intensity asynchronously as an event stream. |
| GNSS | [carla.GNSSMeasurement](<../python_api#carlagnssmeasurement>) | Retrieves the geolocation of the sensor. |
| IMU | [carla.IMUMeasurement](<../python_api#carlaimumeasurement>) | Comprises an accelerometer, a gyroscope, and a compass. |
| LIDAR | [carla.LidarMeasurement](<../python_api#carlalidarmeasurement>) | A rotating LIDAR. Generates a 4D point cloud with coordinates and intensity per point to model the surroundings. |
| Radar | [carla.RadarMeasurement](<../python_api#carlaradarmeasurement>) | 2D point map modelling elements in sight and their movement relative to the sensor. |
| RSS | [carla.RssResponse](<../python_api#carlarssresponse>) | Modifies the controller applied to a vehicle according to safety checks. This sensor works differently from the rest, and there is specific [RSS documentation](<../adv_rss>) for it. |
| Semantic LIDAR | [carla.SemanticLidarMeasurement](<../python_api#carlasemanticlidarmeasurement>) | A rotating LIDAR. Generates a 3D point cloud with extra information regarding instance and semantic segmentation. |
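As a sketch of the conversion mentioned above, a depth camera's callback could apply the logarithmic depth palette before saving, assuming `depth_camera` is an already spawned depth sensor:

```py
import carla

# Convert the raw depth data to a logarithmic gray-scale before saving,
# which makes depth differences between nearby elements easier to see.
depth_camera.listen(lambda image: image.save_to_disk(
    'output/depth_%06d.png' % image.frame,
    carla.ColorConverter.LogarithmicDepth))
```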