Close #144 add sample output images to docs

nsubiron 2018-02-02 14:20:18 +01:00
parent 0e694af2c9
commit 6ce8766816
4 changed files with 12 additions and 6 deletions


@@ -9,7 +9,7 @@ This document describes the details of the different cameras/sensors currently
available as well as the resulting images produced by them.
Although we plan to extend the sensor suite of CARLA in the near future, at the
-moment there are three different sensors available. These three sensors are
+moment there are only three different sensors available. These three sensors are
implemented as different post-processing effects applied to scene capture
cameras.
@@ -38,6 +38,8 @@ more human readable palette of colors. It can be found at
Scene final
-----------
+![SceneFinal](img/capture_scenefinal.png)<br>
The "scene final" camera provides a view of the scene after applying some
post-processing effects to create a more realistic feel. These are actually
stored on the Level, in an actor called [PostProcessVolume][postprolink] and not
@@ -55,6 +57,8 @@ in the Camera. We use the following post process effects:
Depth map
---------
+![Depth](img/capture_depth.png)
The "depth map" camera provides an image with 24 bit floating point precision
codified in the 3 channels of the RGB color space. The order from less to more
significant bytes is R -> G -> B.
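
The depth hunk above gives the byte order but not the decoding formula. Below is a minimal Python sketch of one way to recover metric depth from such an image, assuming the packed 24-bit value is a depth normalized to [0, 1] (the 256^3 - 1 denominator is an assumption, not stated here) and scaled by the 1 km far plane quoted in the next hunk; numpy and Pillow are illustrative choices.

```python
import numpy as np
from PIL import Image

FAR_PLANE_METERS = 1000.0  # "Our max render distance (far) is 1km."

def decode_depth(path):
    """Recover per-pixel depth in meters from a depth capture saved as PNG."""
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.uint32)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # R is the least significant byte, B the most significant (R -> G -> B).
    packed = r + g * 256 + b * 256 * 256
    normalized = packed / float(256 ** 3 - 1)  # assumed to span [0, 1]
    return normalized * FAR_PLANE_METERS

if __name__ == "__main__":
    depth = decode_depth("capture_depth.png")
    print(depth.min(), depth.max())
```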
@@ -81,12 +85,14 @@ Our max render distance (far) is 1km.
Semantic segmentation
---------------------
+![SemanticSegmentation](img/capture_semseg.png)
The "semantic segmentation" camera classifies every object in the view by
displaying it in a different color according to the object class. E.g.,
pedestrians appear in a different color than vehicles.
-The server provides an image with the tag information encoded in the red
-channel. A pixel with a red value of x displays an object with tag x. The
+The server provides an image with the tag information **encoded in the red
+channel**. A pixel with a red value of x displays an object with tag x. The
following tags are currently available
Value | Tag
@@ -106,6 +112,6 @@ Value | Tag
12 | TrafficSigns
This is implemented by tagging every object in the scene beforehand (either at
-begin play or on spawn). The objects are classified by their relative file
-system path in the project. E.g., every mesh stored in the "pedestrians" folder
-it's tagged as pedestrian.
+begin play or on spawn). The objects are classified by their relative file path
+in the project. E.g., every mesh stored in the "Content/Static/Pedestrians"
+folder is tagged as pedestrian.
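
The red-channel encoding described above maps directly to array indexing. A minimal Python sketch, using tag 12 (TrafficSigns, the only value visible in this diff) as an example mask and the sample image added by this commit as input:

```python
import numpy as np
from PIL import Image

def decode_tags(path):
    """Return the per-pixel tag id stored in the red channel."""
    rgb = np.asarray(Image.open(path).convert("RGB"))
    return rgb[..., 0]  # a pixel with red value x carries tag x

if __name__ == "__main__":
    tags = decode_tags("capture_semseg.png")
    print(np.unique(tags))      # tag ids present in the frame
    traffic_signs = tags == 12  # boolean mask for the TrafficSigns tag
    print(traffic_signs.sum(), "pixels tagged as TrafficSigns")
```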

BIN
Docs/img/capture_depth.png Normal file
Size: 93 KiB

BIN
Docs/img/capture_scenefinal.png Normal file
Size: 128 KiB

BIN
Docs/img/capture_semseg.png Normal file
Size: 7.3 KiB