Fixed index

This commit is contained in:
sergi-e 2020-09-06 13:43:19 +02:00 committed by bernat
parent 7afdebedca
commit f4fc32f911
25 changed files with 282 additions and 192 deletions

View File

@ -3,14 +3,14 @@
There are a few details to take into account when configuring a simulation. This page covers the most important ones.
* [__Graphics quality__](#graphics-quality)
* Vulkan vs OpenGL
* Quality levels
* [Vulkan vs OpenGL](#vulkan-vs-opengl)
* [Quality levels](#quality-levels)
* [__No-rendering mode__](#no-rendering-mode)
* [__Off-screen mode__](#off-screen-mode)
* Off-screen Vs no-rendering
* [Off-screen Vs no-rendering](#off-screen-vs-no-rendering)
* [__Running off-screen using a preferred GPU__](#running-off-screen-using-a-preferred-gpu)
* Docker: recommended approach
* Deprecated: emulate the virtual display
* [Docker - recommended approach](#docker-recommended-approach)
* [Deprecated - emulate the virtual display](#deprecated-emulate-the-virtual-display)
!!! Important
@ -117,16 +117,19 @@ DISPLAY= ./CarlaUE4.sh -opengl
---
## Running off-screen using a preferred GPU
### Docker: recommended approach
### Docker - recommended approach
The best way to run a headless CARLA and select the GPU is to [__run CARLA in a Docker__](build_docker.md).
This section contains an alternative tutorial, but this method is deprecated and its performance is much worse. It is kept here only for those for whom Docker is not an option.
### Deprecated - emulate the virtual display
<details>
<summary><h4 style="display:inline">
Deprecated: emulate the virtual display
</h4></summary>
<summary>
Show deprecated tutorial on how to emulate the virtual display
</summary>
!!! Warning
This tutorial is deprecated. To run headless CARLA, please [__run CARLA in a Docker__](build_docker.md).

View File

@ -3,13 +3,13 @@
This section deals with two fundamental concepts in CARLA. Their configuration defines how time goes by in the simulation, and how the server makes the simulation move forward.
* [__Simulation time-step__](#simulation-time-step)
* Variable time-step
* Fixed time-step
* Tips when recording the simulation
* Time-step limitations
* [Variable time-step](#variable-time-step)
* [Fixed time-step](#fixed-time-step)
* [Tips when recording the simulation](#tips-when-recording-the-simulation)
* [Time-step limitations](#time-step-limitations)
* [__Client-server synchrony__](#client-server-synchrony)
* Setting synchronous mode
* Using synchronous mode
* [Setting synchronous mode](#setting-synchronous-mode)
* [Using synchronous mode](#using-synchronous-mode)
* [__Possible configurations__](#possible-configurations)
---
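As a quick, hedged sketch of how these two concepts come together in practice (assuming a server already running on `localhost:2000`; the values below are illustrative, not defaults), enabling a fixed time-step together with synchronous mode looks roughly like this:

```py
import carla

# Connect to an already running server (host and port are assumptions).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Fixed time-step + synchronous mode.
settings = world.get_settings()
settings.fixed_delta_seconds = 0.05  # 20 simulation steps per simulated second
settings.synchronous_mode = True     # the server waits for a client tick
world.apply_settings(settings)

# In synchronous mode the client drives the simulation forward.
for _ in range(100):
    world.tick()
```

With this configuration the server waits for `world.tick()` before computing the next step, so the client controls how simulation time advances.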

View File

@ -1,8 +1,8 @@
# Running CARLA in a Docker
* [__Docker installation__](#docker-installation)
* Docker CE
* NVIDIA-Docker2
* [Docker CE](#docker-ce)
* [NVIDIA-Docker2](#nvidia-docker2)
* [__Running CARLA container__](#running-carla-container)
This tutorial is designed for:

View File

@ -3,10 +3,10 @@
* [__Update commands summary__](#update-commands-summary)
* [__Get the latest binary release__](#get-latest-binary-release)
* [__Update Linux and Windows build__](#update-linux-and-windows-build)
* Clean the build
* Pull from origin
* Download the assets
* Launch the server
* [Clean the build](#clean-the-build)
* [Pull from origin](#pull-from-origin)
* [Download the assets](#download-the-assets)
* [Launch the server](#launch-the-server)
* [__Get development assets__](#get-development-assets)
To post unexpected issues, doubts or suggestions, feel free to log in to the CARLA forum.

View File

@ -2,16 +2,16 @@
* [__Windows build command summary__](#windows-build-command-summary)
* [__Requirements__](#requirements)
* System specifics
* [System specifics](#system-specifics)
* [__Necessary software__](#necessary-software)
* Minor installations: CMake, git, make, Python3 x64
* Visual Studio 2017
* Unreal Engine 4.24
* [Minor installations (CMake, git, make, Python3 x64)](#minor-installations)
* [Visual Studio 2017](#visual-studio-2017)
* [Unreal Engine (4.24)](#unreal-engine)
* [__CARLA build__](#carla-build)
* Clone repository
* Get assets
* Set the environment variable
* make CARLA
* [Clone repository](#clone-repository)
* [Get assets](#get-assets)
* [Set the environment variable](#set-the-environment-variable)
* [make CARLA](#make-carla)
The build process can be quite long and tedious. The **[F.A.Q.](build_faq.md)** section contains the most common issues and solutions that appear during the installation. However, the CARLA forum is open for anybody to post unexpected issues, doubts or suggestions. There is a specific section for installation issues on Linux. Feel free to log in and become part of the community.
@ -89,7 +89,7 @@ Get the 2017 version from [here](https://developerinsider.co/download-visual-stu
!!! Important
Other Visual Studio versions may cause conflicts. Even if these have been uninstalled, some registry entries may persist. To completely clean Visual Studio from the computer, go to `Program Files (x86)\Microsoft Visual Studio\Installer\resources\app\layout` and run `.\InstallCleanup.exe -full`
### Unreal Engine 4.24
### Unreal Engine
Go to [Unreal Engine](https://www.unrealengine.com/download) and download the _Epic Games Launcher_. In `Engine versions/Library`, download __Unreal Engine 4.24.x__. Make sure to run it in order to check that everything was properly installed.

View File

@ -1,5 +1,12 @@
# Contributor Covenant Code of Conduct
* [__Our pledge__](#our-pledge)
* [__Our standards__](#our-standards)
* [__Our responsibilities__](#our-responsibilities)
* [__Scope__](#scope)
* [__Enforcement__](#enforcement)
* [__Attribution__](#attribution)
---
## Our Pledge

View File

@ -1,5 +1,9 @@
# Coding standard
* [__General__](#general)
* [__Python__](#python)
* [__C++__](#c++)
---
## General

View File

@ -7,6 +7,11 @@ Take a look and don't hesitate!
* [__Report bugs__](#report-bugs)
* [__Request features__](#request-features)
* [__Code contributions__](#code-contributions)
* [Learn about Unreal Engine](#learn-about-unreal-engine)
* [Before getting started](#before-getting-started)
* [Coding standard](#coding-standard)
* [Submission](#submission)
* [Checklist](#checklist)
* [__Art contributions__](#art-contributions)
* [__Docs contributions__](#docs-contributions)
@ -24,7 +29,7 @@ __2. Read the docs.__ Make sure that the issue is a bug, not a misunderstanding
[faqlink]: build_faq.md
---
## Feature requests
## Request features
Ideas for new features are also a great way to contribute. Any suggestion that could improve the users' experience can be submitted in the corresponding GitHub section [here][frlink].
@ -44,7 +49,7 @@ A basic introduction to C++ programming with UE4 can be found at Unreal's [C++ P
[ue4tutorials]: https://docs.unrealengine.com/latest/INT/Programming/Tutorials/
[ue4course]: https://www.udemy.com/unrealcourse/
### What should I know before I get started?
### Before getting started
Check out the [CARLA Design](index.md)<!-- @todo --> document to get an idea of the different modules that compose CARLA. Choose the most appropriate one
to hold the new feature. Feel free to contact the team in the [Discord server](https://discord.com/invite/8kqACuC) in case any doubt arises during the process.

View File

@ -1,7 +1,10 @@
# Documentation Standard
This document will serve as a guide and example of some rules that need to be
followed in order to contribute to the documentation.
This document will serve as a guide and example of some rules that need to be followed in order to contribute to the documentation.
* [__Docs structure__](#docs-structure)
* [__Rules__](#rules)
* [__Exceptions__](#exceptions)
---
## Docs structure

View File

@ -7,6 +7,17 @@ which is divided into those in which the recipe is centered, and those that need
There are more recipes to come!
* [__Actor Spectator Recipe__](#actor-spectator-recipe)
* [__Attach Sensors Recipe__](#attach-sensors-recipe)
* [__Actor Attribute Recipe__](#actor-attribute-recipe)
* [__Converted Image Recipe__](#converted-image-recipe)
* [__Lanes Recipe__](#lanes-recipe)
* [__Debug Bounding Box Recipe__](#debug-bounding-box-recipe)
* [__Debug Vehicle Trail Recipe__](#debug-vehicle-trail-recipe)
* [__Parsing Client Arguments Recipe__](#parsing-client-arguments-recipe)
* [__Traffic Light Recipe__](#traffic-light-recipe)
* [__Walker Batch Recipe__](#walker-batch-recipe)
---
## Actor Spectator Recipe
@ -222,7 +233,7 @@ path it was following and the speed at each waypoint.
![debug_trail_recipe](img/recipe_debug_trail.jpg)
---
## Parse client creation arguments
## Parsing Client Arguments Recipe
This recipe is shown in every script provided in `PythonAPI/Examples`, and it is used to parse the client creation arguments when running the script.
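A rough sketch of that parsing with `argparse` could look like the following (the flags and defaults here are assumptions for illustration; the exact arguments vary per script):

```py
import argparse

# Illustrative sketch of the argument parsing; flags and defaults are assumptions.
argparser = argparse.ArgumentParser(description='CARLA client example')
argparser.add_argument(
    '--host', metavar='H', default='127.0.0.1',
    help='IP of the host server (default: 127.0.0.1)')
argparser.add_argument(
    '-p', '--port', metavar='P', default=2000, type=int,
    help='TCP port to listen to (default: 2000)')
args = argparser.parse_args()

# The parsed values are then used to create the client, e.g.:
# client = carla.Client(args.host, args.port)
```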
@ -261,7 +272,7 @@ Used:<br>
```
---
## Traffic lights Recipe
## Traffic Light Recipe
This recipe changes the traffic light that affects the vehicle from red to green.
This is done by detecting whether the vehicle actor is at a traffic light.
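A minimal sketch of that logic, using calls that exist in the Python API (`is_at_traffic_light`, `get_traffic_light`, `set_state`), assuming `vehicle_actor` is an already spawned vehicle:

```py
import carla

# 'vehicle_actor' is assumed to be an already spawned vehicle actor.
if vehicle_actor.is_at_traffic_light():
    traffic_light = vehicle_actor.get_traffic_light()
    if traffic_light.get_state() == carla.TrafficLightState.Red:
        # Change the light so the vehicle can move on.
        traffic_light.set_state(carla.TrafficLightState.Green)
```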
@ -286,7 +297,7 @@ if vehicle_actor.is_at_traffic_light():
![tl_recipe](img/tl_recipe.gif)
---
## Walker batch recipe
## Walker Batch Recipe
```py
# 0. Choose a blueprint for the walkers

View File

@ -2,6 +2,23 @@
The recorder system saves all the info needed to replay the simulation in a binary file,
using little endian byte order for the multibyte values.
* [__1- Strings in binary__](#1-strings-in-binary)
* [__2- Info header__](#2-info-header)
* [__3- Packets__](#3-packets)
* [Packet 0 - Frame Start](#packet-0-frame-start)
* [Packet 1 - Frame End](#packet-1-frame-end)
* [Packet 2 - Event Add](#packet-2-event-add)
* [Packet 3 - Event Del](#packet-3-event-del)
* [Packet 4 - Event Parent](#packet-4-event-parent)
* [Packet 5 - Event Collision](#packet-5-event-collision)
* [Packet 6 - Position](#packet-6-position)
* [Packet 7 - TrafficLight](#packet-7-trafficlight)
* [Packet 8 - Vehicle Animation](#packet-8-vehicle-animation)
* [Packet 9 - Walker Animation](#packet-9-walker-animation)
* [__4- Frame Layout__](#4-frame-layout)
* [__5- File Layout__](#5-file-layout)
The next image, representing the file format, gives a quick view of all the detailed
information. Each part that is visualized in the image will be explained in the following sections:
@ -14,7 +31,7 @@ In summary, the file format has a small header with general info
![global file format](img/RecorderFileFormat3.jpg)
---
## 1. Strings in binary
## 1- Strings in binary
Strings are encoded first with their length, followed by their characters without a terminating
null character. For example, the string 'Town06' will be saved
@ -23,7 +40,7 @@ as hex values: 06 00 54 6f 77 6e 30 36
![binary dynamic string](img/RecorderString.jpg)
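A small Python sketch of this encoding, assuming the length prefix is a little-endian unsigned 16-bit value as the `06 00` example suggests:

```py
import struct

def write_string(value):
    # <uint16 little-endian length> followed by the raw characters, no null terminator.
    data = value.encode('utf-8')
    return struct.pack('<H', len(data)) + data

def read_string(buffer, offset=0):
    # Returns the decoded string and the offset of the next field.
    (length,) = struct.unpack_from('<H', buffer, offset)
    start = offset + 2
    return buffer[start:start + length].decode('utf-8'), start + length

encoded = write_string('Town06')
print(encoded.hex())         # '0600546f776e3036'
print(read_string(encoded))  # ('Town06', 8)
```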
---
## 2. Info header
## 2- Info header
The info header has general information about the recorded file. Basically, it contains the version
and a magic string to identify the file as a recorder file. If the header changes then the version
@ -37,7 +54,7 @@ A sample info header is:
![info header sample](img/RecorderHeader.jpg)
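As a hedged sketch, reading just the two fields mentioned above could look like this (the 16-bit width of the version field and the sample magic value are assumptions for illustration):

```py
import struct

def read_info_header(buffer):
    # Hedged sketch: only the version and the magic string are parsed here.
    # The uint16 width of the version field is an assumption; any remaining
    # header fields (date, map name...) are not covered.
    (version,) = struct.unpack_from('<H', buffer, 0)
    (magic_len,) = struct.unpack_from('<H', buffer, 2)
    magic = buffer[4:4 + magic_len].decode('utf-8')
    return version, magic

# Illustrative sample: version 1 followed by a magic string (value is illustrative).
magic = b'CARLA:recorder'
sample = struct.pack('<H', 1) + struct.pack('<H', len(magic)) + magic
print(read_info_header(sample))  # (1, 'CARLA:recorder')
```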
---
## 3. Packets
## 3- Packets
Each packet starts with a little header of two fields (5 bytes):
@ -64,7 +81,7 @@ The types of packets are:
We suggest using an **id** over 100 for custom user packets, because this list will keep growing in
the future.
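Before going through each packet type, here is a hedged sketch of walking the packet stream, assuming the 5-byte header is a 1-byte packet id followed by a little-endian 4-byte data size:

```py
import struct

def iter_packets(buffer, offset=0):
    # Hedged sketch: assumes the 5-byte header is <uint8 id> + <uint32 size>,
    # both little endian, matching the 'two fields (5 bytes)' description.
    while offset + 5 <= len(buffer):
        packet_id, size = struct.unpack_from('<BI', buffer, offset)
        data = buffer[offset + 5:offset + 5 + size]
        yield packet_id, data
        offset += 5 + size

# Usage: for packet_id, data in iter_packets(recording_bytes): ...
```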
### 3.1 Packet 0: Frame Start
### Packet 0 - Frame Start
This packet marks the start of a new frame, and it will be the first one to start each frame.
All packets need to be placed between a **Frame Start** and a **Frame End**.
@ -73,7 +90,7 @@ All packets need to be placed between a **Frame Start** and a **Frame End**.
So, elapsed + durationThis = elapsed time for next frame
### 3.2 Packet 1: Frame End
### Packet 1 - Frame End
This packet has no data and only marks the end of the current frame. That helps the replayer
to know where each frame ends, just before the new one starts.
@ -81,7 +98,7 @@ Usually, the next frame should be a Frame Start packet to start a new frame.
![frame end](img/RecorderFrameEnd.jpg)
### 3.3 Packet 2: Event Add
### Packet 2 - Event Add
This packet says how many actors need to be created at the current frame.
@ -110,7 +127,7 @@ The number of attributes is variable and should look similar to this:
* color = 79,33,85
* role_name = autopilot
### 3.4 Packet 3: Event Del
### Packet 3 - Event Del
This packet says how many actors need to be destroyed this frame.
@ -128,7 +145,7 @@ the next 16 bytes and will be directly to the start of the next packet.
The next 3 indicates the total number of records that follow, and each record is the id of an actor to remove.
So, at this frame we need to remove the actors 100, 101 and 120.
### 3.5 Packet 4: Event Parent
### Packet 4 - Event Parent
This packet says which actor is the child of another (the parent).
@ -136,7 +153,7 @@ This packet says which actor is the child of another (the parent).
The first id is the child actor, and the second one will be the parent actor.
### 3.6 Packet 5: Event Collision
### Packet 5 - Event Collision
If a collision happens between two actors, it will be registered in this packet. Currently, only
actors with a collision sensor will report collisions, so only hero vehicles have that
@ -148,28 +165,28 @@ The **id** is just a sequence to identify each collision internally.
Several collisions between the same pair of actors can happen in the same frame, because the physics
frame rate is fixed and there are usually several physics substeps in the same rendered frame.
### 3.7 Packet 6: Position
### Packet 6 - Position
This packet records the position and orientation of all actors of type **vehicle** and
**walker** that exist in the scene.
![position](img/RecorderPosition.jpg)
### 3.8 Packet 7: TrafficLight
### Packet 7 - TrafficLight
This packet records the state of all **traffic lights** in the scene. This means that it
stores the state (red, orange or green) and the time it is waiting to change to a new state.
![state](img/RecorderTrafficLight.jpg)
### 3.9 Packet 8: Vehicle animation
### Packet 8 - Vehicle animation
This packet records the animation of the vehicles, bikes and cycles. It stores the
**throttle**, **steering**, **brake**, **handbrake** and **gear** inputs, and then sets them at playback.
![state](img/RecorderVehicle.jpg)
### 3.10 Packet 9: Walker animation
### Packet 9 - Walker animation
This packet records the animation of the walker. It just saves the **speed** of the walker
that is used in the animation.
@ -177,7 +194,7 @@ that is used in the animation.
![state](img/RecorderWalker.jpg)
---
## 4. Frame Layout
## 4- Frame Layout
A frame consists of several packets, all of them optional, except the ones that
mark the **start** and **end** of that frame, which must always be present.
@ -195,7 +212,7 @@ The **animation** packets are also optional, but by default they are recorded. T
That way, the walkers are animated and also the vehicle wheels follow the direction of the vehicles.
---
## 5. File Layout
## 5- File Layout
The layout of the file starts with the **info header**, followed by a collection of packets in
groups. The first in each group is the **Frame Start** packet, and the last in the group is

View File

@ -1,9 +1,16 @@
# How to model vehicles
* [__4-wheeled Vehicles__](#4-wheeled-vehicles)
* [Modelling](#modelling)
* [Naming materials](#naming-materials)
* [Texturing](#texturing)
* [Rigging](#rigging)
* [LODs](#lods)
---
## 4-Wheeled Vehicles
#### Modelling
### Modelling
Vehicles must have approximately a minimum of 10,000 and a maximum of 17,000 tris.
We model the vehicles using the size and scale of actual cars.
@ -36,7 +43,7 @@ The vehicle must be divided in 6 materials:
Put a rectangular plane of 29x12 cm for the license plate,
and assign the license plate texture to it.
#### Nomenclature of Material
### Naming materials
* M(Material)_"CarName"_Bodywork(part of car)
@ -50,7 +57,7 @@ The vehicle must be divided in 6 materials:
* M_"CarName"_LicencePlate
#### Textures
### Texturing
The size of the textures is 2048x2048.
@ -71,7 +78,7 @@ TEXTURES
MATERIAL
* M_Tesla3_BodyWork
#### RIG
### Rigging
The easiest way is to copy the "General4WheeledVehicleSkeleton" present in our project,
either by exporting it and copying it to your model or by creating your skeleton

View File

@ -5,6 +5,19 @@ the necessary steps to implement a sensor in Unreal Engine 4 (UE4) and expose
its data via CARLA's Python API. We'll follow all the steps by creating a new
sensor as an example.
* [__Prerequisites__](#prerequisites)
* [__Introduction__](#introduction)
* [__Creating a new sensor__](#creating-a-new-sensor)
* [1- Sensor actor](#1-sensor-actor)
* [2- Sensor data serializer](#2-sensor-data-serializer)
* [3- Sensor data object](#3-sensor-data-object)
* [4- Register your sensor](#4-register-your-sensor)
* [5- Usage example](#5-usage-example)
* [__Appendix__](#appendix)
* [Reusing buffers](#reusing-buffers)
* [Sending data asynchronously](#sending-data-asynchronously)
* [Client-side sensors](#client-side-sensors)
---
## Prerequisites
@ -71,8 +84,7 @@ _For the sake of simplicity we're not going to take into account all the edge
cases, nor will it be implemented in the most efficient way. This is just an
illustrative example._
---
### 1. The sensor actor
### 1- Sensor actor
This is the most complicated class we're going to create. Here we're running
inside the Unreal Engine framework, so knowledge of the UE4 API will be very helpful but
@ -295,8 +307,7 @@ that, the data is going to travel through several layers. First of them will be
the serializer that we have to create next. We'll fully understand this part
once we have completed the `Serialize` function in the next section.
---
### 2. The sensor data serializer
### 2- Sensor data serializer
This class is actually rather simple; it is only required to have two static
methods, `Serialize` and `Deserialize`. We'll add two files for it, this time to
@ -365,8 +376,8 @@ SharedPtr<SensorData> SafeDistanceSerializer::Deserialize(RawData &&data) {
except for the fact that we haven't yet defined what a `SafeDistanceEvent` is.
---
### 3. The sensor data object
### 3- Sensor data object
We need to create a data object for the users of this sensor, representing the
data of a _safe distance event_. We'll add this file to
@ -431,8 +442,7 @@ What we're doing here is exposing some C++ methods in Python. Just with this,
the Python API will be able to recognise our new event, and it'll behave similarly
to an array in Python, except that it cannot be modified.
---
### 4. Register your sensor
### 4- Register your sensor
Now that the pipeline is complete, we're ready to register our new sensor. We do
so in _LibCarla/source/carla/sensor/SensorRegistry.h_. Follow the instruction in
@ -454,8 +464,7 @@ be a bit cryptic.
make rebuild
```
---
### 5. Usage example
### 5- Usage example
Finally, we have the sensor included and we have finished recompiling; our
sensor should by now be available in Python.
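A rough usage sketch could look like the following (the blueprint id `sensor.other.safe_distance`, the host/port, and the already spawned `vehicle` are assumptions for illustration):

```py
import carla

client = carla.Client('localhost', 2000)  # host and port are assumptions
client.set_timeout(10.0)
world = client.get_world()

# The blueprint id below is an assumption for this tutorial's sensor; it depends
# on how the sensor was registered. 'vehicle' is assumed to be an existing actor.
blueprint = world.get_blueprint_library().find('sensor.other.safe_distance')
sensor = world.spawn_actor(blueprint, carla.Transform(), attach_to=vehicle)

def callback(event):
    # The event behaves like an array of actor ids, as described above.
    for actor_id in event:
        actor = world.get_actor(actor_id)
        print('Vehicle too close: %s' % actor.type_id)

sensor.listen(callback)
```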
@ -493,7 +502,9 @@ Vehicle too close: vehicle.mercedes-benz.coupe
That's it, we have a new sensor working!
---
## Appendix: Reusing buffers
## Appendix
### Reusing buffers
In order to optimize memory usage, we can use the fact that each sensor sends
buffers of similar size; in particular, in the case of cameras, the size of
@ -530,8 +541,7 @@ buffer.reset(512u); // (size 512 bytes, capacity 1024 bytes)
buffer.reset(2048u); // (size 2048 bytes, capacity 2048 bytes) -> allocates
```
---
## Appendix: Sending data asynchronously
### Sending data asynchronously
Some sensors may need to send data asynchronously, either for performance reasons or
because the data is generated in a different thread; for instance, camera sensors send
@ -554,8 +564,7 @@ void MySensor::Tick(float DeltaSeconds)
}
```
---
## Appendix: Client-side sensors
### Client-side sensors
Some sensors do not require the simulator to do their measurements; those
sensors may run completely on the client side, freeing the simulator from extra

View File

@ -1,8 +1,5 @@
# How to generate the pedestrian navigation info
---
## Introduction
In order to walk, the pedestrians need information about the map in a specific format. The file that describes the map for navigation is a binary file with the `.BIN` extension, and these files are saved in the **Nav** folder of the map. Each map needs a `.BIN` file with the same name as the map, so it can be loaded automatically with the map.
This `.BIN` file is generated with the Recast & Detour library and contains all the information that allows pathfinding and crowd management.
@ -18,13 +15,34 @@ If we need to generate this `.BIN` file for a custom map, we need to follow this
We have several types of meshes for navigation. The meshes need to be identified as one of those types, using specific nomenclature.
| Type | Start with | Description |
|-----------|------------|-------------|
| Ground | `Road_Sidewalk` | Pedestrians can walk over these meshes freely (sidewalks...). |
| Grass | `Road_Crosswalk` | Pedestrians can walk over these meshes but as a second option if no ground is found. |
| Road | `Road_Grass` | Pedestrians won't be allowed to walk on it unless we specify some percentage of pedestrians that will be allowed. |
| Crosswalk | `Road_Road`, `Road_Curb`, `Road_Gutter` or `Road_Marking` | Pedestrians can cross the roads only through these meshes. |
| Block | any other name | Pedestrians will avoid these meshes always (are obstacles like traffic lights, trees, houses...). |
<table class="defTable">
<thead>
<tr>
<th>Type</th>
<th>Start with</th>
<th>Description</th>
</tr>
</thead>
<tbody>
<tr>
<td>Ground</td>
<td><code>Road_Sidewalk</code></td>
<td>Pedestrians can walk over these meshes freely (sidewalks...).</td>
</tr>
<tr>
<td>Grass</td>
<td><code>Road_Crosswalk</code></td>
<td>Pedestrians can walk over these meshes but as a second option if no ground is found.</td>
</tr>
<tr>
<td>Road</td>
<td><code>Road_Grass</code></td>
<td>Pedestrians won't be allowed to walk on it unless we specify some percentage of pedestrians that will be allowed.</td>
</tr>
<tr>
<td>Crosswalk</td>
<td><code>Road_Road</code>, <code>Road_Curb</code>, <code>Road_Gutter</code>, <code>Road_Marking</code></td>
<td>Pedestrians can cross the roads only through these meshes.</td>
</tr>
<tr>
<td>Block</td>
<td>Any other name</td>
<td>Pedestrians will always avoid these meshes (they are obstacles like traffic lights, trees, houses...).</td>
</tr>
</tbody>
</table>
<br>

View File

@ -5,6 +5,12 @@ skeletons of walkers from the CARLA Python API. The reference of
all classes and methods available can be found at
[Python API reference](python_api.md).
* [__Walker skeleton structure__](#walker-skeleton-structure)
* [__Manually control walker bones__](#manually-control-walker-bones)
* [Connect to the simulator](#connect-to-the-simulator)
* [Spawn a walker](#spawn-a-walker)
* [Control walker skeletons](#control-walker-skeletons)
!!! note
**This document assumes the user is familiar with the Python API**. <br>
The user should read the first steps tutorial before reading this document.
@ -86,12 +92,12 @@ crl_root
```
---
## How to manually control a walker's bones
## Manually control walker bones
The following is a detailed step-by-step example of how to change the bone transforms of a walker
using the CARLA Python API.
#### Connecting to the simulator
### Connect to the simulator
Import the necessary libraries used in this example
@ -107,7 +113,7 @@ client = carla.Client('127.0.0.1', 2000)
client.set_timeout(2.0)
```
#### Spawning a walker
### Spawn a walker
Spawn a random walker at one of the map's spawn points
@ -119,7 +125,7 @@ spawn_point = random.choice(spawn_points) if spawn_points else carla.Transform()
world.try_spawn_actor(blueprint, spawn_point)
```
#### Controlling a walker's skeleton
### Control walker skeletons
A walker's skeleton can be modified by passing an instance of the WalkerBoneControl class
to the walker's apply_control function. The WalkerBoneControl class contains the transforms