Fix some documentation issues
This commit is contained in:
parent
5ae7596d47
commit
2c23fc1e0e
@@ -13,9 +13,9 @@ moment there are three different sensors available. These three sensors are
 implemented as different post-processing effects applied to scene capture
 cameras.
 
-  * Scene final
-  * Depth map
-  * Semantic segmentation
+  * [Scene final](#scene-final)
+  * [Depth map](#depth-map)
+  * [Semantic segmentation](#semantic-segmentation)
 
 !!! note
     The images are sent by the server as a BGRA array of bytes. The provided
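The note above says the server sends each image as a flat BGRA array of bytes. A minimal sketch of parsing such a buffer with NumPy follows; the function name and the NumPy approach are illustrative assumptions, not part of the CARLA client API (see the examples in the PythonClient folder for the official parsing code):

```python
import numpy as np

def bgra_to_rgb(raw, width, height):
    """Hypothetical helper: turn a flat BGRA byte buffer into an RGB array."""
    # 4 bytes per pixel: blue, green, red, alpha.
    array = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 4))
    # Drop the alpha channel and reverse the B, G, R channels into R, G, B.
    return array[:, :, 2::-1]
```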
@@ -24,8 +24,8 @@ cameras.
 examples in the PythonClient folder showing how to parse the images.
 
 There is a fourth post-processing effect available, _None_, which provides a
-view with of the scene with no effect, not even lens effects like flares or DOF;
-we will skip this one in the following descriptions.
+view of the scene with no effect, not even lens effects like flares or
+depth of field; we will skip this one in the following descriptions.
 
 We provide a tool to convert raw depth and semantic segmentation images to a
 more human readable palette of colors. It can be found at
@@ -39,28 +39,25 @@ Scene final
 -----------
 
 The "scene final" camera provides a view of the scene after applying some
-post-processing effects to create a more realistic feel. Theese are actually stored on the Level, in an actor called [PostProcessVolume](https://docs.unrealengine.com/latest/INT/Engine/Rendering/PostProcessEffects/) and not in the Camera. We use the following post process effects:
+post-processing effects to create a more realistic feel. These are actually
+stored on the Level, in an actor called [PostProcessVolume][postprolink] and not
+in the Camera. We use the following post process effects:
 
-  - Vignette
-    Darkens the border of the screen.
-  - grain jitter
-    Adds a bit of noise to the render.
-  - Bloom
-    Intense lights burn the area arround them.
-  - AutoExposure
-    Modifies the image gamma to simulate the eye adaptation to darker or brighter areas.
-  - Lens Flares
-    Siumlates the reflection of bright objects on the lens.
-  - Depth of Field
-    Blurs objects near or very far away of the camera.
+  * **Vignette** Darkens the border of the screen.
+  * **Grain jitter** Adds a bit of noise to the render.
+  * **Bloom** Intense lights burn the area around them.
+  * **Auto exposure** Modifies the image gamma to simulate the eye adaptation to darker or brighter areas.
+  * **Lens flares** Simulates the reflection of bright objects on the lens.
+  * **Depth of field** Blurs objects near or very far away from the camera.
+
+[postprolink]: https://docs.unrealengine.com/latest/INT/Engine/Rendering/PostProcessEffects/
 
 Depth map
 ---------
 
-The "depth map" camera provides an image with 24 bit floating precision point codified in the 3 channels of the RGB color space.
-The order from less to more significant bytes is R -> G -> B.
+The "depth map" camera provides an image with 24 bit floating point precision
+codified in the 3 channels of the RGB color space. The order from less to more
+significant bytes is R -> G -> B.
 
 | R | G | B | int24 | |
 |----------|----------|----------|----------|------------|
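The int24 packing described above (R as the least significant byte, then G, then B) can be decoded back into depth. A minimal sketch, assuming the depth image arrives as a flat BGRA byte buffer and that the normalized value maps linearly to a 1000 m far plane; the function name, the `far` default, and the normalization constant are illustrative assumptions, not the official client API:

```python
import numpy as np

def depth_to_meters(raw, width, height, far=1000.0):
    """Hypothetical helper: decode a 24-bit depth map packed into RGB."""
    array = np.frombuffer(raw, dtype=np.uint8).reshape((height, width, 4))
    # BGRA byte order: blue first, then green, then red.
    b = array[:, :, 0].astype(np.uint32)
    g = array[:, :, 1].astype(np.uint32)
    r = array[:, :, 2].astype(np.uint32)
    # R -> G -> B from least to most significant byte, forming an int24.
    int24 = r + g * 256 + b * 256 * 256
    normalized = int24 / float(256 ** 3 - 1)  # in [0, 1]
    return normalized * far  # assumed far-plane scale
```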
@@ -47,7 +47,7 @@ Note that UE4 itself and the UE4 free automotive materials follow their own
 license terms.
 
 CARLA uses free automotive materials from Epic Games. For compiling CARLA, these
-materials must be dowloanded from the UE4 marketplace and manually linked in
+materials must be downloaded from the UE4 marketplace and manually linked in
 CARLA following the instructions provided in the documentation.
 
 CARLA uses pedestrians created with Adobe Fuse, which is a free tool for that