diff --git a/Docs/adv_ptv.md b/Docs/adv_ptv.md index 376e77373..0fcdba9e2 100644 --- a/Docs/adv_ptv.md +++ b/Docs/adv_ptv.md @@ -4,8 +4,8 @@ CARLA has developed a co-simulation feature with PTV-Vissim. This allows to dist * [__Requisites__](#requisites) * [__Run a co-simulation__](#run-the-co-simulation) - * [Create a new network](#create-a-new-network) - + * [Create a new network](#create-a-new-network) + --- ## Requisites diff --git a/Docs/adv_recorder.md b/Docs/adv_recorder.md index 235229cce..79d2c5c63 100644 --- a/Docs/adv_recorder.md +++ b/Docs/adv_recorder.md @@ -2,14 +2,14 @@ This feature allows to record and reenact a previous simulation. All the events happened are registered in the [recorder file](ref_recorder_binary_file_format.md). There are some high-level queries to trace and study those events. -* [__Recording__](#recording) -* [__Simulation playback__](#simulation-playback) - * [Setting a time factor](#setting-a-time-factor) -* [__Recorded file__](#recorded-file) -* [__Queries__](#queries) - * [Collisions](#collisions) - * [Blocked actors](#blocked-actors) -* [__Sample Python scripts__](#sample-python-scripts) +* [__Recording__](#recording) +* [__Simulation playback__](#simulation-playback) + * [Setting a time factor](#setting-a-time-factor) +* [__Recorded file__](#recorded-file) +* [__Queries__](#queries) + * [Collisions](#collisions) + * [Blocked actors](#blocked-actors) +* [__Sample Python scripts__](#sample-python-scripts) --- ## Recording diff --git a/Docs/adv_rendering_options.md b/Docs/adv_rendering_options.md index 08f853208..4272bc167 100644 --- a/Docs/adv_rendering_options.md +++ b/Docs/adv_rendering_options.md @@ -2,15 +2,15 @@ There are few details to take into account at the time of configuring a simulation. This page covers the more important ones. -* [__Graphics quality__](#graphics-quality) - * Vulkan vs OpenGL - * Quality levels -* [__No-rendering mode__](#no-rendering-mode) -* [__Off-screen mode__](#off-screen-mode) - * Off-screen Vs no-rendering -* [__Running off-screen using a preferred GPU__](#running-off-screen-using-a-preferred-gpu) - * Docker: recommended approach - * Deprecated: emulate the virtual display +* [__Graphics quality__](#graphics-quality) + * [Vulkan vs OpenGL](#vulkan-vs-opengl) + * [Quality levels](#quality-levels) +* [__No-rendering mode__](#no-rendering-mode) +* [__Off-screen mode__](#off-screen-mode) + * [Off-screen Vs no-rendering](#off-screen-vs-no-rendering) +* [__Running off-screen using a preferred GPU__](#running-off-screen-using-a-preferred-gpu) + * [Docker - recommended approach](#docker-recommended-approach) + * [Deprecated - emulate the virtual display](#deprecated-emulate-the-virtual-display) !!! Important @@ -117,16 +117,19 @@ DISPLAY= ./CarlaUE4.sh -opengl --- ## Running off-screen using a preferred GPU -### Docker: recommended approach +### Docker - recommended approach The best way to run a headless CARLA and select the GPU is to [__run CARLA in a Docker__](build_docker.md). This section contains an alternative tutorial, but this method is deprecated and performance is much worse. It is here only for those who Docker is not an option. + +### Deprecated - emulate the virtual display +
-<details>
-  <summary>
-    Deprecated: emulate the virtual display
-  </summary>
+ + Show deprecated tutorial on how to emulate the virtual display + !!! Warning This tutorial is deprecated. To run headless CARLA, please [__run CARLA in a Docker__](build_docker.md). diff --git a/Docs/adv_rss.md b/Docs/adv_rss.md index 7ea7a0b25..07f19d41b 100644 --- a/Docs/adv_rss.md +++ b/Docs/adv_rss.md @@ -2,13 +2,13 @@ CARLA integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) in the client library. This feature allows users to investigate behaviours of RSS without having to implement anything. CARLA will take care of providing the input, and applying the output to the AD systems on the fly. -* [__Overview__](#overview) -* [__Compilation__](#compilation) - * [Dependencies](#dependencies) - * [Build](#build) +* [__Overview__](#overview) +* [__Compilation__](#compilation) + * [Dependencies](#dependencies) + * [Build](#build) * [__Current state__](#current-state) - * [RssSensor](#rsssensor) - * [RssRestrictor](#rssrestrictor) + * [RssSensor](#rsssensor) + * [RssRestrictor](#rssrestrictor) !!! Important This feature is a work in progress. Right now, it is only available for the Linux build. diff --git a/Docs/adv_sumo.md b/Docs/adv_sumo.md index 9f7485cf6..6b0c44dfd 100644 --- a/Docs/adv_sumo.md +++ b/Docs/adv_sumo.md @@ -3,12 +3,12 @@ CARLA has developed a co-simulation feature with SUMO. This allows to distribute the tasks at will, and exploit the capabilities of each simulation in favour of the user. -* [__Requisites__](#requisites) -* [__Run a custom co-simulation__](#run-a-custom-co-simulation) - * [Create CARLA vtypes](#create-carla-vtypes) - * [Create the SUMO net](#create-the-sumo-net) - * [Run the synchronization](#run-the-synchronization) -* [__Spawn NPCs controlled by SUMO__](#spawn-npcs-controlled-by-sumo) +* [__Requisites__](#requisites) +* [__Run a custom co-simulation__](#run-a-custom-co-simulation) + * [Create CARLA vtypes](#create-carla-vtypes) + * [Create the SUMO net](#create-the-sumo-net) + * [Run the synchronization](#run-the-synchronization) +* [__Spawn NPCs controlled by SUMO__](#spawn-npcs-controlled-by-sumo) --- ## Requisites diff --git a/Docs/adv_synchrony_timestep.md b/Docs/adv_synchrony_timestep.md index b9df6b0f0..3ddb424db 100644 --- a/Docs/adv_synchrony_timestep.md +++ b/Docs/adv_synchrony_timestep.md @@ -2,15 +2,15 @@ This section deals with two fundamental concepts in CARLA. Their configuration defines how does time go by in the simulation, and how does the server make the simulation move forward. 
-* [__Simulation time-step__](#simulation-time-step) - * Variable time-step - * Fixed time-step - * Tips when recording the simulation - * Time-step limitations -* [__Client-server synchrony__](#client-server-synchrony) - * Setting synchronous mode - * Using synchronous mode -* [__Possible configurations__](#possible-configurations) +* [__Simulation time-step__](#simulation-time-step) + * [Variable time-step](#variable-time-step) + * [Fixed time-step](#fixed-time-step) + * [Tips when recording the simulation](#tips-when-recording-the-simulation) + * [Time-step limitations](#time-step-limitations) +* [__Client-server synchrony__](#client-server-synchrony) + * [Setting synchronous mode](#setting-synchronous-mode) + * [Using synchronous mode](#using-synchronous-mode) +* [__Possible configurations__](#possible-configurations) --- ## Simulation time-step diff --git a/Docs/build_docker.md b/Docs/build_docker.md index 66d114ed0..90552d2cb 100644 --- a/Docs/build_docker.md +++ b/Docs/build_docker.md @@ -1,9 +1,9 @@ # Running CARLA in a Docker -* [__Docker installation__](#docker-installation) - * Docker CE - * NVIDIA-Docker2 -* [__Running CARLA container__](#running-carla-container) +* [__Docker installation__](#docker-installation) + * [Docker CE](#docker-ce) + * [NVIDIA-Docker2](#nvidia-docker2) +* [__Running CARLA container__](#running-carla-container) This tutorial is designed for: diff --git a/Docs/build_linux.md b/Docs/build_linux.md index 7d3bd62a1..a0e666e1d 100644 --- a/Docs/build_linux.md +++ b/Docs/build_linux.md @@ -1,16 +1,16 @@ # Linux build -* [__Linux build command summary__](#linux-build-command-summary) -* [__Requirements__](#requirements) - * [System specifics](#system-specifics) - * [Dependencies](#dependencies) -* [__GitHub__](#github) -* [__Unreal Engine__](#unreal-engine) -* [__CARLA build__](#carla-build) - * [Clone repository](#clone-repository) - * [Get assets](#get-assets) - * [Set the environment variable](#set-the-environment-variable) - * [make CARLA](#make-carla) +* [__Linux build command summary__](#linux-build-command-summary) +* [__Requirements__](#requirements) + * [System specifics](#system-specifics) + * [Dependencies](#dependencies) +* [__GitHub__](#github) +* [__Unreal Engine__](#unreal-engine) +* [__CARLA build__](#carla-build) + * [Clone repository](#clone-repository) + * [Get assets](#get-assets) + * [Set the environment variable](#set-the-environment-variable) + * [make CARLA](#make-carla) The build process can be quite long and tedious. The **[F.A.Q.](build_faq.md)** section contains the most common issues and solutions that appear during the installation. However, the CARLA forum is open for anybody to post unexpected issues, doubts or suggestions. There is a specific section for installation issues on Linux. Feel free to login and become part of the community. 
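
The synchrony and time-step entries listed above for `adv_synchrony_timestep.md` are normally configured together from the client side. The snippet below is only an illustrative sketch, not one of the documented scripts: it assumes a CARLA server already listening on `localhost:2000`, and the 0.05 s fixed delta is an arbitrary example value.

```py
import carla

# Connect to a running CARLA server (assumed to be at localhost:2000).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Enable synchronous mode with a fixed time-step.
# The 0.05 s delta (20 frames per simulated second) is only an example value.
settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)

try:
    for _ in range(100):
        # In synchronous mode the server waits for this tick
        # before computing the next simulation step.
        world.tick()
finally:
    # Restore asynchronous, variable time-step behaviour on exit.
    settings.synchronous_mode = False
    settings.fixed_delta_seconds = None
    world.apply_settings(settings)
```

Restoring the original settings on exit matters: otherwise the server stays in synchronous mode waiting for ticks and blocks other clients.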
diff --git a/Docs/build_update.md b/Docs/build_update.md index 019b604b7..01b367ae7 100644 --- a/Docs/build_update.md +++ b/Docs/build_update.md @@ -1,13 +1,13 @@ # Update CARLA -* [__Update commands summary__](#update-commands-summary) -* [__Get the lastest binary release__](#get-latest-binary-release) -* [__Update Linux and Windows build__](#update-linux-and-windows-build) - * Clean the build - * Pull from origin - * Download the assets - * Launch the server -* [__Get development assets__](#get-development-assets) +* [__Update commands summary__](#update-commands-summary) +* [__Get the lastest binary release__](#get-latest-binary-release) +* [__Update Linux and Windows build__](#update-linux-and-windows-build) + * [Clean the build](#clean-the-build) + * [Pull from origin](#pull-from-origin) + * [Download the assets](#download-the-assets) + * [Launch the server](#launch-the-server) +* [__Get development assets__](#get-development-assets) To post unexpected issues, doubts or suggestions, feel free to login in the CARLA forum. diff --git a/Docs/build_windows.md b/Docs/build_windows.md index cae564269..331c34582 100644 --- a/Docs/build_windows.md +++ b/Docs/build_windows.md @@ -1,17 +1,17 @@ # Windows build -* [__Windows build command summary__](#windows-build-command-summary) -* [__Requirements__](#requirements) - * System specifics -* [__Necessary software__](#necessary-software) - * Minor installations: CMake, git, make, Python3 x64 - * Visual Studio 2017 - * Unreal Engine 4.24 -* [__CARLA build__](#carla-build) - * Clone repository - * Get assets - * Set the environment variable - * make CARLA +* [__Windows build command summary__](#windows-build-command-summary) +* [__Requirements__](#requirements) + * [System specifics](#system-specifics) +* [__Necessary software__](#necessary-software) + * [Minor installations (CMake, git, make, Python3 x64)](#minor-installations) + * [Visual Studio 2017](#visual-studio-2017) + * [Unreal Engine (4.24)](#unreal-engine) +* [__CARLA build__](#carla-build) + * [Clone repository](#clone-repository) + * [Get assets](#get-assets) + * [Set the environment variable](#set-the-environment-variable) + * [make CARLA](#make-carla) The build process can be quite long and tedious. The **[F.A.Q.](build_faq.md)** section contains the most common issues and solutions that appear during the installation. However, the CARLA forum is open for anybody to post unexpected issues, doubts or suggestions. There is a specific section for installation issues on Linux. Feel free to login and become part of the community. @@ -89,7 +89,7 @@ Get the 2017 version from [here](https://developerinsider.co/download-visual-stu !!! Important Other Visual Studio versions may cause conflict. Even if these have been uninstalled, some registers may persist. To completely clean Visual Studio from the computer, go to `Program Files (x86)\Microsoft Visual Studio\Installer\resources\app\layout` and run `.\InstallCleanup.exe -full` -### Unreal Engine 4.24 +### Unreal Engine Go to [Unreal Engine](https://www.unrealengine.com/download) and download the _Epic Games Launcher_. In `Engine versions/Library`, download __Unreal Engine 4.24.x__. Make sure to run it in order to check that everything was properly installed. 
diff --git a/Docs/cont_code_of_conduct.md b/Docs/cont_code_of_conduct.md index 30b759490..ca505f82f 100644 --- a/Docs/cont_code_of_conduct.md +++ b/Docs/cont_code_of_conduct.md @@ -1,5 +1,12 @@ # Contributor Covenant Code of Conduct +* [__Our pledge__](#our-pledge) +* [__Our standards__](#our-standards) +* [__Our responsibilities__](#our-responsibilities) +* [__Scope__](#scope) +* [__Enforcement__](#enforcement) +* [__Attribution__](#attribution) + --- ## Our Pledge diff --git a/Docs/cont_coding_standard.md b/Docs/cont_coding_standard.md index 1c49adaed..5c6e04d1f 100644 --- a/Docs/cont_coding_standard.md +++ b/Docs/cont_coding_standard.md @@ -1,5 +1,9 @@ # Coding standard +* [__General__](#general) +* [__Python__](#python) +* [__C++__](#c++) + --- ## General diff --git a/Docs/cont_contribution_guidelines.md b/Docs/cont_contribution_guidelines.md index d7a192994..b8f6efe9d 100644 --- a/Docs/cont_contribution_guidelines.md +++ b/Docs/cont_contribution_guidelines.md @@ -7,6 +7,11 @@ Take a look and don't hesitate! * [__Report bugs__](#report-bugs) * [__Request features__](#request-features) * [__Code contributions__](#code-contributions) + * [Learn about Unreal Engine](#learn-about-unreal-engine) + * [Before getting started](#before-getting-started) + * [Coding standard](#coding-standard) + * [Submission](#submission) + * [Checklist](#checklist) * [__Art contributions__](#art-contributions) * [__Docs contributions__](#docs-contributions) @@ -24,7 +29,7 @@ __2. Read the docs.__ Make sure that the issue is a bug, not a misunderstanding [faqlink]: build_faq.md --- -## Feature requests +## Request features Ideas for new features are also a great way to contribute. Any suggestion that could improve the users' experience can be submitted in the corresponding GitHub section [here][frlink]. @@ -44,7 +49,7 @@ A basic introduction to C++ programming with UE4 can be found at Unreal's [C++ P [ue4tutorials]: https://docs.unrealengine.com/latest/INT/Programming/Tutorials/ [ue4course]: https://www.udemy.com/unrealcourse/ -### What should I know before I get started? +### Before getting started Check out the [CARLA Design](index.md) document to get an idea on the different modules that compose CARLA. Choose the most appropriate one to hold the new feature. Feel free to contact the team in the [Discord server](https://discord.com/invite/8kqACuC) in case any doubt arises during the process. diff --git a/Docs/cont_doc_standard.md b/Docs/cont_doc_standard.md index 69796623e..dd70797ce 100644 --- a/Docs/cont_doc_standard.md +++ b/Docs/cont_doc_standard.md @@ -1,7 +1,10 @@ # Documentation Standard -This document will serve as a guide and example of some rules that need to be -followed in order to contribute to the documentation. +This document will serve as a guide and example of some rules that need to be followed in order to contribute to the documentation. + +* [__Docs structure__](#docs-structure) +* [__Rules__](#rules) +* [__Exceptions__](#exceptions) --- ## Docs structure diff --git a/Docs/core_concepts.md b/Docs/core_concepts.md index f6c6a4891..22e0c99e1 100644 --- a/Docs/core_concepts.md +++ b/Docs/core_concepts.md @@ -5,10 +5,10 @@ This page introduces the main features and modules in CARLA. Detailed explanatio In order to learn about the different classes and methods in the API, take a look at the [Python API reference](python_api.md). Besides, the [Code recipes](ref_code_recipes.md) reference contains some common code chunks, specially useful during these first steps. 
* [__First steps__](#first-steps) - * [1st- World and client](#1st-world-and-client) - * [2nd- Actors and blueprints](#2nd-actors-and-blueprints) - * [3rd- Maps and navigation](#3rd-maps-and-navigation) - * [4th- Sensors and data](#4th-sensors-and-data) + * [1st- World and client](#1st-world-and-client) + * [2nd- Actors and blueprints](#2nd-actors-and-blueprints) + * [3rd- Maps and navigation](#3rd-maps-and-navigation) + * [4th- Sensors and data](#4th-sensors-and-data) * [__Advanced steps__](#advanced-steps) !!! Important diff --git a/Docs/plugins_carlaviz.md b/Docs/plugins_carlaviz.md index 6f7772c38..4c2479f68 100644 --- a/Docs/plugins_carlaviz.md +++ b/Docs/plugins_carlaviz.md @@ -2,12 +2,12 @@ The carlaviz plugin is used to visualize the simulation in a web browser. A windows with some basic representation of the scene is created. Actors are updated on-the-fly, sensor data can be retrieved, and additional text, lines and polylines can be drawn in the scene. -* [__General information__](#general-information) - * [Support](#support) -* [__Get carlaviz__](#get-carlaviz) - * [Prerequisites](#prerequisites) - * [Download the plugin](#download-the-plugin) -* [__Utilities__](#utilities) +* [__General information__](#general-information) + * [Support](#support) +* [__Get carlaviz__](#get-carlaviz) + * [Prerequisites](#prerequisites) + * [Download the plugin](#download-the-plugin) +* [__Utilities__](#utilities) --- ## General information diff --git a/Docs/ref_code_recipes.md b/Docs/ref_code_recipes.md index b38392438..03959b4ed 100644 --- a/Docs/ref_code_recipes.md +++ b/Docs/ref_code_recipes.md @@ -7,6 +7,17 @@ which is divided into those in which the recipe is centered, and those that need There are more recipes to come! +* [__Actor Spectator Recipe__](#actor-spectator-recipe) +* [__Attach Sensors Recipe__](#attach-sensors-recipe) +* [__Actor Attribute Recipe__](#actor-attribute-recipe) +* [__Converted Image Recipe__](#converted-image-recipe) +* [__Lanes Recipe__](#lanes-recipe) +* [__Debug Bounding Box Recipe__](#debug-bounding-box-recipe) +* [__Debug Vehicle Trail Recipe__](#debug-vehicle-trail-recipe) +* [__Parsing Client Arguments Recipe__](#parsing-client-arguments-recipe) +* [__Traffic Light Recipe__](#traffic-light-recipe) +* [__Walker Batch Recipe__](#walker-batch-recipe) + --- ## Actor Spectator Recipe @@ -222,7 +233,7 @@ path it was following and the speed at each waypoint. ![debug_trail_recipe](img/recipe_debug_trail.jpg) --- -## Parse client creation arguments +## Parsing Client Arguments Recipe This recipe shows in every script provided in `PythonAPI/Examples` and it is used to parse the client creation arguments when running the script. @@ -261,7 +272,7 @@ Used:
``` --- -## Traffic lights Recipe +## Traffic Light Recipe This recipe changes from red to green the traffic light that affects the vehicle. This is done by detecting if the vehicle actor is at a traffic light. @@ -286,7 +297,7 @@ if vehicle_actor.is_at_traffic_light(): ![tl_recipe](img/tl_recipe.gif) --- -## Walker batch recipe +## Walker Batch Recipe ```py # 0. Choose a blueprint fo the walkers diff --git a/Docs/ref_recorder_binary_file_format.md b/Docs/ref_recorder_binary_file_format.md index 6c3cf20a0..d1345999b 100644 --- a/Docs/ref_recorder_binary_file_format.md +++ b/Docs/ref_recorder_binary_file_format.md @@ -2,6 +2,23 @@ The recorder system saves all the info needed to replay the simulation in a binary file, using little endian byte order for the multibyte values. + +* [__1- Strings in binary__](#1-strings-in-binary) +* [__2- Info header__](#2-info-header) +* [__3- Packets__](#3-packets) + * [Packet 0 - Frame Start](#packet-0-frame-start) + * [Packet 1 - Frame End](#packet-1-frame-end) + * [Packet 2 - Event Add](#packet-2-event-add) + * [Packet 3 - Event Del](#packet-3-event-del) + * [Packet 4 - Event Parent](#packet-4-event-parent) + * [Packet 5 - Event Collision](#packet-5-event-collision) + * [Packet 6 - Position](#packet-6-position) + * [Packet 7 - TrafficLight](#packet-7-trafficlight) + * [Packet 8 - Vehicle Animation](#packet-8-vehicle-animation) + * [Packet 9 - Walker Animation](#packet-9-walker-animation) +* [__4- Frame Layout__](#4-frame-layout) +* [__5- File Layout__](#5-file-layout) + In the next image representing the file format, we can get a quick view of all the detailed information. Each part that is visualized in the image will be explained in the following sections: @@ -14,7 +31,7 @@ In summary, the file format has a small header with general info ![global file format](img/RecorderFileFormat3.jpg) --- -## 1. Strings in binary +## 1- Strings in binary Strings are encoded first with the length of it, followed by its characters without null character ending. For example, the string 'Town06' will be saved @@ -23,7 +40,7 @@ as hex values: 06 00 54 6f 77 6e 30 36 ![binary dynamic string](img/RecorderString.jpg) --- -## 2. Info header +## 2- Info header The info header has general information about the recorded file. Basically, it contains the version and a magic string to identify the file as a recorder file. If the header changes then the version @@ -37,7 +54,7 @@ A sample info header is: ![info header sample](img/RecorderHeader.jpg) --- -## 3. Packets +## 3- Packets Each packet starts with a little header of two fields (5 bytes): @@ -64,7 +81,7 @@ The types of packets are: We suggest to use **id** over 100 for user custom packets, because this list will keep growing in the future. -### 3.1 Packet 0: Frame Start +### Packet 0 - Frame Start This packet marks the start of a new frame, and it will be the first one to start each frame. All packets need to be placed between a **Frame Start** and a **Frame End**. @@ -73,7 +90,7 @@ All packets need to be placed between a **Frame Start** and a **Frame End**. So, elapsed + durationThis = elapsed time for next frame -### 3.2 Packet 1: Frame End +### Packet 1 - Frame End This frame has no data and it only marks the end of the current frame. That helps the replayer to know the end of each frame just before the new one starts. @@ -81,7 +98,7 @@ Usually, the next frame should be a Frame Start packet to start a new frame. 
![frame end](img/RecorderFrameEnd.jpg) -### 3.3 Packet 2: Event Add +### Packet 2 - Event Add This packet says how many actors we need to create at current frame. @@ -110,7 +127,7 @@ The number of attributes is variable and should look similar to this: * color = 79,33,85 * role_name = autopilot -### 3.4 Packet 3: Event Del +### Packet 3 - Event Del This packet says how many actors need to be destroyed this frame. @@ -128,7 +145,7 @@ the next 16 bytes and will be directly to the start of the next packet. The next 3 says the total records that follows, and each record is the id of the actor to remove. So, we need to remove at this frame the actors 100, 101 and 120. -### 3.5 Packet 4: Event Parent +### Packet 4 - Event Parent This packet says which actor is the child of another (the parent). @@ -136,7 +153,7 @@ This packet says which actor is the child of another (the parent). The first id is the child actor, and the second one will be the parent actor. -### 3.6 Packet 5: Event Collision +### Packet 5 - Event Collision If a collision happens between two actors, it will be registered in this packet. Currently only actors with a collision sensor will report collisions, so currently only hero vehicles have that @@ -148,28 +165,28 @@ The **id** is just a sequence to identify each collision internally. Several collisions between the same pair of actors can happen in the same frame, because physics frame rate is fixed and usually there are several physics substeps in the same rendered frame. -### 3.7 Packet 6: Position +### Packet 6 - Position This packet records the position and orientation of all actors of type **vehicle** and **walker** that exist in the scene. ![position](img/RecorderPosition.jpg) -### 3.8 Packet 7: TrafficLight +### Packet 7 - TrafficLight This packet records the state of all **traffic lights** in the scene. Which means that it stores the state (red, orange or green) and the time it is waiting to change to a new state. ![state](img/RecorderTrafficLight.jpg) -### 3.9 Packet 8: Vehicle animation +### Packet 8 - Vehicle animation This packet records the animation of the vehicles, bikes and cycles. This packet stores the **throttle**, **sterring**, **brake**, **handbrake** and **gear** inputs, and then set them at playback. ![state](img/RecorderVehicle.jpg) -### 3.10 Packet 9: Walker animation +### Packet 9 - Walker animation This packet records the animation of the walker. It just saves the **speed** of the walker that is used in the animation. @@ -177,7 +194,7 @@ that is used in the animation. ![state](img/RecorderWalker.jpg) --- -## 4. Frame Layout +## 4- Frame Layout A frame consists of several packets, where all of them are optional, except the ones that have the **start** and **end** in that frame, that must be there always. @@ -195,7 +212,7 @@ The **animation** packets are also optional, but by default they are recorded. T are animated and also the vehicle wheels follow the direction of the vehicles. --- -## 5. File Layout +## 5- File Layout The layout of the file starts with the **info header** and then follows a collection of packets in groups. 
The first in each group is the **Frame Start** packet, and the last in the group is diff --git a/Docs/ref_sensors.md b/Docs/ref_sensors.md index 89cfbad66..fd6b71b34 100644 --- a/Docs/ref_sensors.md +++ b/Docs/ref_sensors.md @@ -1,18 +1,18 @@ # Sensors reference -* [__Collision detector__](#collision-detector) -* [__Depth camera__](#depth-camera) -* [__GNSS sensor__](#gnss-sensor) -* [__IMU sensor__](#imu-sensor) -* [__Lane invasion detector__](#lane-invasion-detector) -* [__LIDAR sensor__](#lidar-sensor) +* [__Collision detector__](#collision-detector) +* [__Depth camera__](#depth-camera) +* [__GNSS sensor__](#gnss-sensor) +* [__IMU sensor__](#imu-sensor) +* [__Lane invasion detector__](#lane-invasion-detector) +* [__LIDAR sensor__](#lidar-sensor) * [__Obstacle detector__](#obstacle-detector) -* [__Radar sensor__](#radar-sensor) -* [__RGB camera__](#rgb-camera) -* [__RSS sensor__](#rss-sensor) -* [__Semantic LIDAR sensor__](#semantic-lidar-sensor) -* [__Semantic segmentation camera__](#semantic-segmentation-camera) -* [__DVS camera__](#dvs-camera) +* [__Radar sensor__](#radar-sensor) +* [__RGB camera__](#rgb-camera) +* [__RSS sensor__](#rss-sensor) +* [__Semantic LIDAR sensor__](#semantic-lidar-sensor) +* [__Semantic segmentation camera__](#semantic-segmentation-camera) +* [__DVS camera__](#dvs-camera) --- ## Collision detector diff --git a/Docs/tuto_A_create_standalone.md b/Docs/tuto_A_create_standalone.md index c60de808d..d4b8aca82 100644 --- a/Docs/tuto_A_create_standalone.md +++ b/Docs/tuto_A_create_standalone.md @@ -2,8 +2,8 @@ It is a common practice in CARLA to manage assets with standalone packages. Keeping them aside allows to reduce the size of the build. These asset packages can be easily imported into a CARLA package anytime. They also become really useful to easily distribute assets in an organized way. -* [__Export a package from the UE4 Editor__](#export-a-package-from-the-ue4-editor) -* [__Import assets into a CARLA package__](#import-assets-into-a-carla-package) +* [__Export a package from the UE4 Editor__](#export-a-package-from-the-ue4-editor) +* [__Import assets into a CARLA package__](#import-assets-into-a-carla-package) --- ## Export a package from the UE4 Editor diff --git a/Docs/tuto_A_vehicle_modelling.md b/Docs/tuto_A_vehicle_modelling.md index 6ba949ce8..06d536dca 100644 --- a/Docs/tuto_A_vehicle_modelling.md +++ b/Docs/tuto_A_vehicle_modelling.md @@ -1,9 +1,16 @@ # How to model vehicles +* [__4-wheeled Vehicles__](#4-wheeled-vehicles) + * [Modelling](#modelling) + * [Naming materials](#naming-materials) + * [Texturing](#texturing) + * [Rigging](#rigging) + * [LODs](#lods) + --- ## 4-Wheeled Vehicles -#### Modelling +### Modelling Vehicles must have a minimum of 10.000 and a maximum of 17.000 Tris approximately. We model the vehicles using the size and scale of actual cars. @@ -36,7 +43,7 @@ The vehicle must be divided in 6 materials: Put a rectangular plane with this size 29-12 cm, for the licence Plate. We assign the license plate texture. -#### Nomenclature of Material +### Naming materials * M(Material)_"CarName"_Bodywork(part of car) @@ -50,7 +57,7 @@ The vehicle must be divided in 6 materials: * M_"CarName"_LicencePlate -#### Textures +### Texturing The size of the textures is 2048x2048. 
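
The recorder layout covered above in `ref_recorder_binary_file_format.md` (little-endian multibyte values, length-prefixed strings, and a 5-byte packet header) can be parsed with a few lines of Python. The helpers below are an illustrative sketch only: the function names are made up for this example, and the 1-byte id + 4-byte size split of the packet header is assumed from the format figures.

```py
import struct

def read_string(buf, offset):
    """Read a length-prefixed string: 2-byte little-endian length + raw characters."""
    (length,) = struct.unpack_from('<H', buf, offset)
    value = buf[offset + 2:offset + 2 + length].decode('utf-8')
    return value, offset + 2 + length

def read_packet_header(buf, offset):
    """Read the 5-byte packet header (assumed split: 1-byte packet id + 4-byte data size)."""
    packet_id, size = struct.unpack_from('<BI', buf, offset)
    return packet_id, size, offset + 5

# The 'Town06' example from the text: 06 00 54 6f 77 6e 30 36.
sample = bytes.fromhex('06 00 54 6f 77 6e 30 36')
name, next_offset = read_string(sample, 0)
print(name, next_offset)  # Town06 8
```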
@@ -71,7 +78,7 @@ TEXTURES MATERIAL * M_Tesla3_BodyWork -#### RIG +### Rigging The easiest way is to copy the "General4WheeledVehicleSkeleton" present in our project, either by exporting it and copying it to your model or by creating your skeleton diff --git a/Docs/tuto_D_create_sensor.md b/Docs/tuto_D_create_sensor.md index e28c395f6..de49031d8 100644 --- a/Docs/tuto_D_create_sensor.md +++ b/Docs/tuto_D_create_sensor.md @@ -5,6 +5,19 @@ the necessary steps to implement a sensor in Unreal Engine 4 (UE4) and expose its data via CARLA's Python API. We'll follow all the steps by creating a new sensor as an example. +* [__Prerequisites__](#prerequisites) +* [__Introduction__](#introduction) +* [__Creating a new sensor__](#creating-a-new-sensor) + * [1- Sensor actor](#1-sensor-actor) + * [2- Sensor data serializer](#2-sensor-data-serializer) + * [3- Sensor data object](#3-sensor-data-object) + * [4- Register your sensor](#4-register-your-sensor) + * [5- Usage example](#5-usage-example) +* [__Appendix__](#appendix) + * [Reusing buffers](#reusing-buffers) + * [Sending data asynchronously](#sending-data-asynchronously) + * [Client-side sensors](#client-side-sensors) + --- ## Prerequisites @@ -71,8 +84,7 @@ _For the sake of simplicity we're not going to take into account all the edge cases, nor it will be implemented in the most efficient way. This is just an illustrative example._ ---- -### 1. The sensor actor +### 1- Sensor actor This is the most complicated class we're going to create. Here we're running inside Unreal Engine framework, knowledge of UE4 API will be very helpful but @@ -295,8 +307,7 @@ that, the data is going to travel through several layers. First of them will be the serializer that we have to create next. We'll fully understand this part once we have completed the `Serialize` function in the next section. ---- -### 2. The sensor data serializer +### 2- Sensor data serializer This class is actually rather simple, it's only required to have two static methods, `Serialize` and `Deserialize`. We'll add two files for it, this time to @@ -365,8 +376,8 @@ SharedPtr SafeDistanceSerializer::Deserialize(RawData &&data) { except for the fact that we haven't defined yet what's a `SafeDistanceEvent`. ---- -### 3. The sensor data object + +### 3- Sensor data object We need to create a data object for the users of this sensor, representing the data of a _safe distance event_. We'll add this file to @@ -431,8 +442,7 @@ What we're doing here is exposing some C++ methods in Python. Just with this, the Python API will be able to recognise our new event and it'll behave similar to an array in Python, except that cannot be modified. ---- -### 4. Register your sensor +### 4- Register your sensor Now that the pipeline is complete, we're ready to register our new sensor. We do so in _LibCarla/source/carla/sensor/SensorRegistry.h_. Follow the instruction in @@ -454,8 +464,7 @@ be a bit cryptic. make rebuild ``` ---- -### 5. Usage example +### 5- Usage example Finally, we have the sensor included and we have finished recompiling, our sensor by now should be available in Python. @@ -493,7 +502,9 @@ Vehicle too close: vehicle.mercedes-benz.coupe That's it, we have a new sensor working! 
--- -## Appendix: Reusing buffers +## Appendix + +### Reusing buffers In order to optimize memory usage, we can use the fact that each sensor sends buffers of similar size; in particularly, in the case of cameras, the size of @@ -530,8 +541,7 @@ buffer.reset(512u); // (size 512 bytes, capacity 1024 bytes) buffer.reset(2048u); // (size 2048 bytes, capacity 2048 bytes) -> allocates ``` ---- -## Appendix: Sending data asynchronously +### Sending data asynchronously Some sensors may require to send data asynchronously, either for performance or because the data is generated in a different thread, for instance, camera sensors send @@ -554,8 +564,7 @@ void MySensor::Tick(float DeltaSeconds) } ``` ---- -## Appendix: Client-side sensors +### Client-side sensors Some sensors do not require the simulator to do their measurements, those sensors may run completely in the client-side freeing the simulator from extra diff --git a/Docs/tuto_D_generate_pedestrian_navigation.md b/Docs/tuto_D_generate_pedestrian_navigation.md index 3b3b45919..603f5edca 100644 --- a/Docs/tuto_D_generate_pedestrian_navigation.md +++ b/Docs/tuto_D_generate_pedestrian_navigation.md @@ -1,8 +1,5 @@ # How to generate the pedestrian navigation info ---- -## Introduction - The pedestrians to walk need information about the map in a specific format. That file that describes the map for navigation is a binary file with extension `.BIN`, and they are saved in the **Nav** folder of the map. Each map needs a `.BIN` file with the same name that the map, so automatically can be loaded with the map. This `.BIN` file is generated from the Recast & Detour library and has all the information that allows pathfinding and crow management. @@ -18,13 +15,34 @@ If we need to generate this `.BIN` file for a custom map, we need to follow this We have several types of meshes for navigation. The meshes need to be identified as one of those types, using specific nomenclature. -| Type | Start with | Description | -|-----------|------------|-------------| -| Ground | `Road_Sidewalk` | Pedestrians can walk over these meshes freely (sidewalks...). | -| Grass | `Road_Crosswalk` | Pedestrians can walk over these meshes but as a second option if no ground is found. | -| Road | `Road_Grass` | Pedestrians won't be allowed to walk on it unless we specify some percentage of pedestrians that will be allowed. | -| Crosswalk | `Road_Road`, `Road_Curb`, `Road_Gutter` or `Road_Marking` | Pedestrians can cross the roads only through these meshes. | -| Block | any other name | Pedestrians will avoid these meshes always (are obstacles like traffic lights, trees, houses...). | + + + + + + + + + + + + + + + + + + + + + + + + + + + +
<table>
<tr>
  <th>Type</th>
  <th>Starts with</th>
  <th>Description</th>
</tr>
<tr>
  <td>Ground</td>
  <td><code>Road_Sidewalk</code></td>
  <td>Pedestrians can walk freely over these meshes (sidewalks...).</td>
</tr>
<tr>
  <td>Grass</td>
  <td><code>Road_Grass</code></td>
  <td>Pedestrians can walk over these meshes, but only as a second option if no ground is found.</td>
</tr>
<tr>
  <td>Road</td>
  <td><code>Road_Road</code>, <code>Road_Curb</code>, <code>Road_Gutter</code>, <code>Road_Marking</code></td>
  <td>Pedestrians will not walk on these meshes unless a percentage of pedestrians is explicitly allowed to do so.</td>
</tr>
<tr>
  <td>Crosswalk</td>
  <td><code>Road_Crosswalk</code></td>
  <td>Pedestrians can cross the roads only through these meshes.</td>
</tr>
<tr>
  <td>Block</td>
  <td>Any other name</td>
  <td>Pedestrians will always avoid these meshes (obstacles such as traffic lights, trees, houses...).</td>
</tr>
</table>
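
Once the map ships with its `.BIN` navigation file, walkers spawned from the client consume it automatically. The snippet below is a minimal sketch of that consumer side, assuming a server with the map already loaded; the cross-factor value and the single-walker flow are arbitrary examples, not part of the documented scripts.

```py
import random

import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Example: let 10% of pedestrians cross over 'Road' meshes,
# as mentioned in the table above.
world.set_pedestrians_cross_factor(0.1)

# Pick a random point from the navigation mesh generated out of the .BIN file.
location = world.get_random_location_from_navigation()
if location is not None:
    walker_bp = random.choice(world.get_blueprint_library().filter('walker.pedestrian.*'))
    walker = world.try_spawn_actor(walker_bp, carla.Transform(location))

    if walker is not None:
        # The AI controller also relies on the navigation data to plan its paths.
        controller_bp = world.get_blueprint_library().find('controller.ai.walker')
        controller = world.spawn_actor(controller_bp, carla.Transform(), attach_to=walker)
        controller.start()
        controller.go_to_location(world.get_random_location_from_navigation())
```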

diff --git a/Docs/tuto_G_control_walker_skeletons.md b/Docs/tuto_G_control_walker_skeletons.md index 39bc39a2d..afa17bcc0 100644 --- a/Docs/tuto_G_control_walker_skeletons.md +++ b/Docs/tuto_G_control_walker_skeletons.md @@ -5,6 +5,12 @@ skeletons of walkers from the CARLA Python API. The reference of all classes and methods available can be found at [Python API reference](python_api.md). +* [__Walker skeleton structure__](#walker-skeleton-structure) +* [__Manually control walker bones__](#manually-control-walker-bones) + * [Connect to the simulator](#connect-to-the-simulator) + * [Spawn a walker](#spawn-a-walker) + * [Control walker skeletons](#control-walker-skeletons) + !!! note **This document assumes the user is familiar with the Python API**.
The user should read the first steps tutorial before reading this document. @@ -86,12 +92,12 @@ crl_root ``` --- -## How to manually control a walker's bones +## Manually control walker bones Following is a detailed step-by-step example of how to change the bone transforms of a walker from the CARLA Python API -#### Connecting to the simulator +### Connect to the simulator Import neccessary libraries used in this example @@ -107,7 +113,7 @@ client = carla.Client('127.0.0.1', 2000) client.set_timeout(2.0) ``` -#### Spawning a walker +### Spawn a walker Spawn a random walker at one of the map's spawn points @@ -119,7 +125,7 @@ spawn_point = random.choice(spawn_points) if spawn_points else carla.Transform() world.try_spawn_actor(blueprint, spawn_point) ``` -#### Controlling a walker's skeleton +### Control walker skeletons A walker's skeleton can be modified by passing an instance of the WalkerBoneControl class to the walker's apply_control function. The WalkerBoneControl class contains the transforms diff --git a/Docs/tuto_G_retrieve_data.md b/Docs/tuto_G_retrieve_data.md index 95581fbef..a70262c02 100644 --- a/Docs/tuto_G_retrieve_data.md +++ b/Docs/tuto_G_retrieve_data.md @@ -4,39 +4,39 @@ Learning an efficient way to retrieve simulation data is essential in CARLA. Thi First, the simulation is initialized with custom settings and traffic. An ego vehicle is set to roam around the city, optionally with some basic sensors. The simulation is recorded, so that later it can be queried to find the highlights. After that, the original simulation is played back, and exploited to the limit. New sensors can be added to retrieve consistent data. The weather conditions can be changed. The recorder can even be used to test specific scenarios with different outputs. 
-* [__Overview__](#overview) -* [__Set the simulation__](#set-the-simulation) - * [Map setting](#map-setting) - * [Weather setting](#weather-setting) -* [__Set traffic__](#set-traffic) - * [CARLA traffic and pedestrians](#carla-traffic-and-pedestrians) - * [SUMO co-simulation traffic](#sumo-co-simulation-traffic) -* [__Set the ego vehicle__](#set-the-ego-vehicle) - * [Spawn the ego vehicle](#spawn-the-ego-vehicle) - * [Place the spectator](#place-the-spectator) -* [__Set basic sensors__](#set-basic-sensors) - * [RGB camera](#rgb-camera) - * [Detectors](#detectors) - * [Other sensors](#other-sensors) -* [__Set advanced sensors__](#set-advanced-sensors) - * [Depth camera](#depth-camera) - * [Semantic segmentation camera](#semantic-segmentation-camera) - * [LIDAR raycast sensor](#lidar-raycast-sensor) - * [Radar sensor](#radar-sensor) -* [__No-rendering-mode__](#no-rendering-mode) - * [Simulate at a fast pace](#simulate-at-a-fast-pace) - * [Manual control without rendering](#manual-control-without-rendering) -* [__Record and retrieve data__](#record-and-retrieve-data) - * [Start recording](#start-recording) - * [Capture and record](#capture-and-record) - * [Stop recording](#stop-recording) -* [__Exploit the recording__](#exploit-the-recording) - * [Query the events](#query-the-events) - * [Choose a fragment](#choose-a-fragment) - * [Retrieve more data](#retrieve-more-data) - * [Change the weather](#change-the-weather) - * [Try new outcomes](#try-new-outcomes) -* [__Tutorial scripts__](#tutorial-scripts) +* [__Overview__](#overview) +* [__Set the simulation__](#set-the-simulation) + * [Map setting](#map-setting) + * [Weather setting](#weather-setting) +* [__Set traffic__](#set-traffic) + * [CARLA traffic and pedestrians](#carla-traffic-and-pedestrians) + * [SUMO co-simulation traffic](#sumo-co-simulation-traffic) +* [__Set the ego vehicle__](#set-the-ego-vehicle) + * [Spawn the ego vehicle](#spawn-the-ego-vehicle) + * [Place the spectator](#place-the-spectator) +* [__Set basic sensors__](#set-basic-sensors) + * [RGB camera](#rgb-camera) + * [Detectors](#detectors) + * [Other sensors](#other-sensors) +* [__Set advanced sensors__](#set-advanced-sensors) + * [Depth camera](#depth-camera) + * [Semantic segmentation camera](#semantic-segmentation-camera) + * [LIDAR raycast sensor](#lidar-raycast-sensor) + * [Radar sensor](#radar-sensor) +* [__No-rendering-mode__](#no-rendering-mode) + * [Simulate at a fast pace](#simulate-at-a-fast-pace) + * [Manual control without rendering](#manual-control-without-rendering) +* [__Record and retrieve data__](#record-and-retrieve-data) + * [Start recording](#start-recording) + * [Capture and record](#capture-and-record) + * [Stop recording](#stop-recording) +* [__Exploit the recording__](#exploit-the-recording) + * [Query the events](#query-the-events) + * [Choose a fragment](#choose-a-fragment) + * [Retrieve more data](#retrieve-more-data) + * [Change the weather](#change-the-weather) + * [Try new outcomes](#try-new-outcomes) +* [__Tutorial scripts__](#tutorial-scripts) --- ## Overview