From 0d45230348bf0043920c3bb96b6b7262203db469 Mon Sep 17 00:00:00 2001
From: nsubiron
Date: Thu, 16 Nov 2017 17:26:50 +0100
Subject: [PATCH] Document cameras and sensors

---
 Docs/cameras_and_sensors.md | 77 +++++++++++++++++++++++++++++++++++++
 Docs/index.md               |  3 +-
 Docs/measurements.md        |  2 +-
 mkdocs.yml                  |  1 +
 4 files changed, 81 insertions(+), 2 deletions(-)
 create mode 100644 Docs/cameras_and_sensors.md

diff --git a/Docs/cameras_and_sensors.md b/Docs/cameras_and_sensors.md
new file mode 100644
index 000000000..5072a5f41
--- /dev/null
+++ b/Docs/cameras_and_sensors.md
@@ -0,0 +1,77 @@
+Cameras and sensors
+===================
+
+Cameras and sensors can be added to the player vehicle by defining them in the
+settings file sent by the client on every new episode. Check out the examples
+at [CARLA Settings example][settingslink]; a minimal camera definition is also
+sketched at the end of this document.
+
+This document describes the details of the different cameras/sensors currently
+available as well as the resulting images produced by them.
+
+Although we plan to extend the sensor suite of CARLA in the near future, at the
+moment there are three different sensors available. These three sensors are
+implemented as different post-processing effects applied to scene capture
+cameras.
+
+  * [Scene final](#scene-final)
+  * [Depth map](#depth-map)
+  * [Semantic segmentation](#semantic-segmentation)
+
+!!! note
+    The images are sent by the server as a BGRA array of bytes. The provided
+    Python client retrieves the images in this format; it is up to the user to
+    parse the images and convert them to the desired format. There are some
+    examples in the PythonClient folder showing how to parse the images, and a
+    minimal sketch at the end of this document.
+
+There is a fourth post-processing effect available, _None_, which provides a
+view of the scene with no effects applied, not even lens effects such as
+flares or depth of field; we will skip it in the following descriptions.
+
+We provide a tool to convert raw depth and semantic segmentation images to a
+more human-readable palette of colors. It can be found at
+["Util/ImageConverter"][imgconvlink].
+
+[settingslink]: https://github.com/carla-simulator/carla/blob/master/Docs/Example.CarlaSettings.ini
+[imgconvlink]: https://github.com/carla-simulator/carla/tree/master/Util/ImageConverter
+
+Scene final
+-----------
+
+The "scene final" camera provides a view of the scene after applying some
+post-processing effects to create a more realistic feel.
+
+Depth map
+---------
+
+Semantic segmentation
+---------------------
+
+The "semantic segmentation" camera classifies every object in the view by
+displaying it in a different color according to the object class. E.g.,
+pedestrians appear in a different color than vehicles.
+
+The server provides an image with the tag information encoded in the red
+channel: a pixel with a red value of x belongs to an object with tag x. The
+following tags are currently available:
+
+Value | Tag
+-----:|:-------------
+    0 | None
+    1 | Buildings
+    2 | Fences
+    3 | Other
+    4 | Pedestrians
+    5 | Poles
+    6 | RoadLines
+    7 | Roads
+    8 | Sidewalks
+    9 | Vegetation
+   10 | Vehicles
+   11 | Walls
+   12 | TrafficSigns
+
+This is implemented by tagging every object in the scene beforehand, either at
+begin play or on spawn. The objects are classified by their relative file
+system path in the project. E.g., every mesh stored in the "pedestrians"
+folder is tagged as pedestrian.
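+
+As promised above, here is a minimal sketch of how a camera is defined in the
+settings file. The keys are adapted from the linked
+[CARLA Settings example][settingslink], which should be treated as the
+authoritative reference since key names and defaults may change between
+versions:
+
+```ini
+; Declare the scene capture cameras attached to the player vehicle.
+[CARLA/SceneCapture]
+Cameras=MyCamera
+
+; Configure each declared camera; PostProcessing selects the sensor type
+; (None, SceneFinal, Depth, or SemanticSegmentation).
+[CARLA/SceneCapture/MyCamera]
+PostProcessing=SceneFinal
+ImageSizeX=800
+ImageSizeY=600
+CameraFOV=90
+```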
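+
+Once an episode is running, every image arrives as the BGRA byte array
+mentioned in the note above. The following sketch shows one way to parse it
+with numpy; the helper name `to_rgb_array` is chosen here for illustration and
+is not part of the client library:
+
+```python
+import numpy
+
+def to_rgb_array(raw_data, width, height):
+    """Convert a BGRA byte buffer sent by the server to an RGB array."""
+    array = numpy.frombuffer(raw_data, dtype=numpy.uint8)
+    array = numpy.reshape(array, (height, width, 4))  # BGRA, 4 bytes/pixel
+    array = array[:, :, :3]   # drop the alpha channel, BGR remains
+    return array[:, :, ::-1]  # reverse the channel order, BGR -> RGB
+```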
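+
+Building on that, a client can map the tags of a semantic segmentation image
+to display colors, similar in spirit to what Util/ImageConverter does. The
+palette below is made up for this example and is not the palette used by the
+converter:
+
+```python
+import numpy
+
+# Illustrative palette only: one RGB color per tag value from the table above.
+CLASS_COLORS = {
+    0: (0, 0, 0),         # None
+    1: (70, 70, 70),      # Buildings
+    2: (190, 153, 153),   # Fences
+    3: (72, 0, 90),       # Other
+    4: (220, 20, 60),     # Pedestrians
+    5: (153, 153, 153),   # Poles
+    6: (157, 234, 50),    # RoadLines
+    7: (128, 64, 128),    # Roads
+    8: (244, 35, 232),    # Sidewalks
+    9: (107, 142, 35),    # Vegetation
+    10: (0, 0, 255),      # Vehicles
+    11: (102, 102, 156),  # Walls
+    12: (220, 220, 0),    # TrafficSigns
+}
+
+def tags_to_colors(bgra_image):
+    """bgra_image: numpy array of shape (height, width, 4) in BGRA order."""
+    tags = bgra_image[:, :, 2]  # the red channel carries the object tag
+    colors = numpy.zeros(tags.shape + (3,), dtype=numpy.uint8)
+    for tag, color in CLASS_COLORS.items():
+        colors[tags == tag] = color
+    return colors
+```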
diff --git a/Docs/index.md b/Docs/index.md
index d22351d35..bad7ee9a4 100644
--- a/Docs/index.md
+++ b/Docs/index.md
@@ -4,8 +4,9 @@ CARLA Documentation
 #### Using CARLA
 
   * [How to run CARLA server and client](how_to_run.md)
-  * [CARLA Settings](carla_settings.md)
+  * [CARLA settings](carla_settings.md)
   * [Measurements](measurements.md)
+  * [Cameras and sensors](cameras_and_sensors.md)
   * [Troubleshooting](troubleshooting.md)
 
 #### Building from source
diff --git a/Docs/measurements.md b/Docs/measurements.md
index 9e54f34a8..21ad9e0e6 100644
--- a/Docs/measurements.md
+++ b/Docs/measurements.md
@@ -39,7 +39,7 @@ ai_control | Control | Vehicle's AI control that would apply t
 
 The transform contains two Vector3D objects, location and orientation.
 Currently, the orientation is represented as the Cartesian coordinates X, Y, Z.
-_We will probably change this in the future to Roll, Pitch, and Yaw_
+_We will probably change this in the future to Roll, Pitch, and Yaw._
 
 ###### Collision
 
diff --git a/mkdocs.yml b/mkdocs.yml
index 5356dc016..a29d129a4 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -8,6 +8,7 @@ pages:
   - 'How to run CARLA server and client': 'how_to_run.md'
   - 'CARLA Settings': 'carla_settings.md'
   - 'Measurements': 'measurements.md'
+  - 'Cameras and sensors': 'cameras_and_sensors.md'
   - 'Troubleshooting': 'troubleshooting.md'
 - Building from source:
   - 'How to build on Linux': 'how_to_build_on_linux.md'