diff --git a/Docs/CONTRIBUTING.md b/Docs/CONTRIBUTING.md index 15b425f1b..386b08309 100644 --- a/Docs/CONTRIBUTING.md +++ b/Docs/CONTRIBUTING.md @@ -1,4 +1,4 @@ -Contributing to CARLA +# Contributing to CARLA ===================== We are more than happy to accept contributions! @@ -10,8 +10,8 @@ How can I contribute? * Improving documentation * Code contributions -Reporting bugs --------------- +--------- +## Reporting bugs Use our [issue section][issueslink] on GitHub. Please check before that the issue is not already reported, and make sure you have read our @@ -21,8 +21,8 @@ issue is not already reported, and make sure you have read our [docslink]: http://carla.readthedocs.io [faqlink]: http://carla.readthedocs.io/en/latest/faq/ -Feature requests ----------------- +--------- +## Feature requests Please check first the list of [feature requests][frlink]. If it is not there and you think is a feature that might be interesting for users, please submit @@ -30,8 +30,8 @@ your request as a new issue. [frlink]: https://github.com/carla-simulator/carla/issues?q=is%3Aissue+is%3Aopen+label%3A%22feature+request%22+sort%3Acomments-desc -Improving documentation ------------------------ +------- +## Improving documentation If you feel something is missing in the documentation, please don't hesitate to open an issue to let us know. Even better, if you think you can improve it @@ -61,8 +61,8 @@ Once you are done with your changes, please submit a pull-request. > mkdocs serve ``` -Code contributions ------------------- +------------ +## Code contributions So you are considering making a code contribution, great! We love to have contributions from the community. diff --git a/Docs/asset_packages_for_dist.md b/Docs/asset_packages_for_dist.md index d17ecce43..3bd10fc05 100644 --- a/Docs/asset_packages_for_dist.md +++ b/Docs/asset_packages_for_dist.md @@ -1,4 +1,4 @@ -

<h1>Creating standalone asset packages for distribution</h1>

+# Creating standalone asset packages for distribution *Please note that we will use the term *assets* for referring to **props** and also **maps**.* diff --git a/Docs/bp_library.md b/Docs/bp_library.md index dcd51ad74..a1a3c7420 100755 --- a/Docs/bp_library.md +++ b/Docs/bp_library.md @@ -1,5 +1,5 @@ -

<h1>Blueprint Library</h1>

+#Blueprint Library The Blueprint Library ([`carla.BlueprintLibrary`](../python_api/#carlablueprintlibrary-class)) is a summary of all [`carla.ActorBlueprint`](../python_api/#carla.ActorBlueprint) and its attributes ([`carla.ActorAttribute`](../python_api/#carla.ActorAttribute)) available to the user in CARLA. Here is an example code for printing all actor blueprints and their attributes: @@ -594,13 +594,6 @@ Check out our [blueprint tutorial](../python_api_tutorial/#blueprints). - `object_type` (_String_) - `role_name` (_String_)_ – Modifiable_ - `sticky_control` (_Bool_)_ – Modifiable_ -- **vehicle.ford.mustang** - - **Attributes:** - - `color` (_RGBColor_)_ – Modifiable_ - - `number_of_wheels` (_Int_) - - `object_type` (_String_) - - `role_name` (_String_)_ – Modifiable_ - - `sticky_control` (_Bool_)_ – Modifiable_ - **vehicle.gazelle.omafiets** - **Attributes:** - `color` (_RGBColor_)_ – Modifiable_ @@ -632,7 +625,7 @@ Check out our [blueprint tutorial](../python_api_tutorial/#blueprints). - `object_type` (_String_) - `role_name` (_String_)_ – Modifiable_ - `sticky_control` (_Bool_)_ – Modifiable_ -- **vehicle.lincoln.mkz2017** +- **vehicle.lincoln.lincoln** - **Attributes:** - `color` (_RGBColor_)_ – Modifiable_ - `number_of_wheels` (_Int_) @@ -653,6 +646,13 @@ Check out our [blueprint tutorial](../python_api_tutorial/#blueprints). - `object_type` (_String_) - `role_name` (_String_)_ – Modifiable_ - `sticky_control` (_Bool_)_ – Modifiable_ +- **vehicle.mustang.mustang** + - **Attributes:** + - `color` (_RGBColor_)_ – Modifiable_ + - `number_of_wheels` (_Int_) + - `object_type` (_String_) + - `role_name` (_String_)_ – Modifiable_ + - `sticky_control` (_Bool_)_ – Modifiable_ - **vehicle.nissan.micra** - **Attributes:** - `color` (_RGBColor_)_ – Modifiable_ diff --git a/Docs/building_from_source.md b/Docs/building_from_source.md index a093fe6df..01ef6f28a 100644 --- a/Docs/building_from_source.md +++ b/Docs/building_from_source.md @@ -1,4 +1,4 @@ -

<h1>Building from source</h1>

+#Building from source * [How to build on Linux](how_to_build_on_linux.md) * [How to build on Windows](how_to_build_on_windows.md) diff --git a/Docs/carla_docker.md b/Docs/carla_docker.md index b9d91d4fe..9e9a579ae 100644 --- a/Docs/carla_docker.md +++ b/Docs/carla_docker.md @@ -1,4 +1,4 @@ -

<h1>Running CARLA in a Docker</h1>

+# Running CARLA in a Docker This tutorial is designed for: @@ -10,25 +10,27 @@ This tutorial was tested in Ubuntu 16.04 and using NVIDIA 396.37 drivers. This method requires a version of NVIDIA drivers >=390. -## Docker Installation +--- +##Docker Installation !!! note Docker requires sudo to run. Follow this guide to add users to the docker sudo group -### Docker CE +####Docker CE For our tests we used the Docker CE version. To install Docker CE we recommend using [this tutorial][tutoriallink] [tutoriallink]: https://docs.docker.com/install/linux/docker-ce/ubuntu/#extra-steps-for-aufs -### NVIDIA-Docker2 +#### NVIDIA-Docker2 To install nvidia-docker-2 we recommend using the "Quick Start" section from the [nvidia-dockers github](https://github.com/NVIDIA/nvidia-docker). -## Getting it Running +--- +##Getting it Running Pull the CARLA image. diff --git a/Docs/coding_standard.md b/Docs/coding_standard.md index b78b02345..fb92552e4 100644 --- a/Docs/coding_standard.md +++ b/Docs/coding_standard.md @@ -1,13 +1,13 @@ -

<h1>Coding standard</h1>

+# Coding standard -General -======= +------- +## General * Use spaces, not tabs. * Avoid adding trailing whitespace as it creates noise in the diffs. -Python ------- +------- +## Python * Comments should not exceed 80 columns, code should not exceed 120 columns. * All code must be compatible with Python 2.7, 3.5, and 3.6. @@ -19,8 +19,8 @@ Python [pylintlink]: https://www.pylint.org/ [pep8link]: https://www.python.org/dev/peps/pep-0008/ -C++ ---- +------- +## C++ * Comments should not exceed 80 columns, code may exceed this limit a bit in rare occasions if it results in clearer code. diff --git a/Docs/core_actors.md b/Docs/core_actors.md index ff1471ab2..c9c7ef9e4 100644 --- a/Docs/core_actors.md +++ b/Docs/core_actors.md @@ -1,4 +1,4 @@ -

<h1>2nd. Actors and blueprints</h1>

+# 2nd. Actors and blueprints The actors in CARLA include almost everything playing a role the simulation. That includes not only vehicles and walkers but also sensors, traffic signs, traffic lights and the spectator, the camera providing the simulation's point of view. They are crucial, and so it is to have fully understanding on how to operate on them. This section will cover the basics: from spawning up to destruction and their different types. However, the possibilities they present are almost endless. This is the first step to then experiment, take a look at the __How to's__ in this documentation and share doubts and ideas in the [CARLA forum](https://forum.carla.org/). @@ -21,7 +21,7 @@ This section will cover the basics: from spawning up to destruction and their di This layouts allow the user to smoothly add new actors into the simulation. They basically are already-made models with a series of attributes listed, some of which are modifiable and others are not: vehicle color, amount of channels in a lidar sensor, _fov_ in a camera, a walker's speed. All of these can be changed at will. All the available blueprints are listed in the [blueprint library](bp_library.md) with their attributes and a tag to identify which can be set by the user. -

<h4>Managing the blueprint library</h4>

+####Managing the blueprint library There is a [carla.BlueprintLibrary](python_api.md#carla.BlueprintLibrary) class containing a list of [carla.ActorBlueprint](python_api.md#carla.ActorBlueprint) elements. It is the world object who can provide access to an instance of it: ```py @@ -61,7 +61,7 @@ for attr in blueprint: !!! Important All along this section, many different functions and methods regarding actors will be covered. The Python API provides for __[commands](python_api.md#command.SpawnActor)__ to apply batches of this common functions (such as spawning or destroying actors) in just one frame. -
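The note above refers to the command batches. A minimal sketch of how they combine with the blueprint library, assuming a simulator on the default `localhost:2000` and a map with at least five spawn points (the names `batch` and `vehicle_ids` are just illustrative):

```py
import random

import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

blueprint_library = world.get_blueprint_library()
spawn_points = world.get_map().get_spawn_points()

# Build a list of SpawnActor commands and apply all of them in a single frame.
batch = []
for transform in random.sample(spawn_points, 5):
    blueprint = random.choice(blueprint_library.filter('vehicle.*'))
    batch.append(carla.command.SpawnActor(blueprint, transform))

responses = client.apply_batch_sync(batch)
vehicle_ids = [response.actor_id for response in responses if not response.error]
```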

<h4>Spawning</h4>

+####Spawning The world object is responsible of spawning actors and keeping track of those who are currently on scene. Besides a blueprint, only a [carla.Transform](python_api.md#carla.Transform), which basically defines a location and rotation for the actor. @@ -105,7 +105,7 @@ for speed_sign in actor_list.filter('traffic.speed_limit.*'): print(speed_sign.get_location()) ``` -

<h4>Handling</h4>

+####Handling Once an actor si spawned, handling is quite straightforward. The [carla.Actor](python_api.md#carla.Actor) class mostly consists of _get()_ and _set()_ methods to manage the actors around the map: @@ -128,7 +128,7 @@ Besides that, actors also have tags provided by their blueprints that are mostly Most of the methods send requests to the simulator asynchronously. The simulator queues these, but has a limited amount of time each update to parse them. Flooding the simulator with lots of "set" methods will accumulate a significant lag. -
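To give a feel for those _get()_ and _set()_ methods, a small sketch assuming a `vehicle` actor spawned as in the snippets above:

```py
# Read the current state of the actor.
location = vehicle.get_location()
velocity = vehicle.get_velocity()
print('Vehicle %d is at %s moving at %.1f m/s along x' % (vehicle.id, location, velocity.x))

# Modify it: teleport the vehicle a few meters up and hand it over to the autopilot.
location.z += 3.0
vehicle.set_location(location)
vehicle.set_autopilot(True)
```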

<h4>Destruction</h4>

+####Destruction To remove an actor from the simulation, an actor can get rid of itself and notify if successful doing so: @@ -143,7 +143,7 @@ Actors are not destroyed when the Python script finishes, they remain and the wo --------------- ##Types of actors -

<h4>Sensors</h4>

+####Sensors Sensors are actors that produce a stream of data. They are so important and vast that they will be properly written about on their own section: [4th. Sensors and data](core_sensors.md). So far, let's just take a look at a common sensor spawning routine: @@ -159,11 +159,11 @@ This example spawns a camera sensor, attaches it to a vehicle, and tellds the ca * Most of the sensors will be attached to a vehicle to gather information on its surroundings. * Sensors __listen__ to data. When receiving it, they call a function described with __[Lambda expressions](https://docs.python.org/3/reference/expressions.html)__, so it is advisable to learn on that beforehand (6.13 in the link provided). -
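The spawning routine mentioned above is not fully visible in this hunk. A minimal sketch of it, assuming `world` and a spawned `vehicle`, and an arbitrary `_out` output folder:

```py
camera_bp = world.get_blueprint_library().find('sensor.camera.rgb')
camera_transform = carla.Transform(carla.Location(x=1.5, z=2.4))
camera = world.spawn_actor(camera_bp, camera_transform, attach_to=vehicle)

# The lambda expression is called every time the camera produces a new image.
camera.listen(lambda image: image.save_to_disk('_out/%06d.png' % image.frame))
```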

<h4>Spectator</h4>

+####Spectator This unique actor is placed by Unreal Engine to provide an in-game point of view. It can be used to move the view of the simulator window. -
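A short sketch of how the spectator is usually handled, for instance to place the view right above a certain vehicle (assuming `world` and a `vehicle` from the previous examples):

```py
spectator = world.get_spectator()
transform = vehicle.get_transform()

# Move the simulator view 50 meters above the vehicle, looking straight down.
spectator.set_transform(carla.Transform(
    transform.location + carla.Location(z=50),
    carla.Rotation(pitch=-90)))
```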

<h4>Traffic signs and traffic lights</h4>

+####Traffic signs and traffic lights So far, CARLA only is aware of some signs: stop, yield and speed limit, described with [carla.TrafficSign](python_api.md#carla.TrafficSign). Traffic lights, which are considered an inherited class named [carla.TrafficLight](python_api.md#carla.TrafficLight). __These cannot be found on the blueprint library__ and thus, cannot be spawned. They set traffic conditions and so, they are mindfully placed by developers. @@ -187,7 +187,7 @@ if traffic_light.get_state() == carla.TrafficLightState.Red: * **Speed limit signs** with the speed codified in their type_id. -
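As a sketch of how a vehicle can interact with the traffic light currently affecting it (assuming a spawned `vehicle`; the green time used is arbitrary):

```py
if vehicle.is_at_traffic_light():
    traffic_light = vehicle.get_traffic_light()
    if traffic_light is not None and traffic_light.get_state() == carla.TrafficLightState.Red:
        # Force the light to green so the vehicle is not stuck during a test.
        traffic_light.set_state(carla.TrafficLightState.Green)
        traffic_light.set_green_time(4.0)
```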

<h4>Vehicles</h4>

+####Vehicles [carla.Vehicle](python_api.md#carla.Vehicle) are a special type of actor that provide some better physics control. This is achieved applying four types of different controls: @@ -217,7 +217,7 @@ print(box.location) # Location relative to the vehicle. print(box.extent) # XYZ half-box extents in meters. ``` -
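A minimal sketch of the most common of those controls, `carla.VehicleControl`, applied to an already spawned `vehicle` (the values are arbitrary):

```py
# Apply a manual driving command: moderate throttle, slight right steering, no brake.
control = carla.VehicleControl(throttle=0.6, steer=0.1, brake=0.0, hand_brake=False, reverse=False)
vehicle.apply_control(control)

# The last command applied can also be read back, e.g. for logging purposes.
print(vehicle.get_control())
```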

<h4>Walkers</h4>

+####Walkers [carla.Walker](python_api.md#carla.Walker) are moving actors and so, work in quite a similar way as vehicles do. Control over them is provided by controllers: diff --git a/Docs/core_concepts.md b/Docs/core_concepts.md index 13c8a1afd..3871b7497 100644 --- a/Docs/core_concepts.md +++ b/Docs/core_concepts.md @@ -1,4 +1,4 @@ -
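A rough sketch of the simplest of the walker controllers mentioned above, `carla.WalkerControl`, assuming a `walker` actor spawned from one of the `walker.pedestrian.*` blueprints:

```py
# Make the pedestrian walk along the given direction vector at roughly 1.4 m/s.
control = carla.WalkerControl()
control.direction = carla.Vector3D(x=1.0, y=0.0, z=0.0)
control.speed = 1.4
walker.apply_control(control)
```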

<h1>Core concepts</h1>

+# Core concepts This section summarizes the main features and modules in CARLA. While this page is just an overview, the rest of the information can be found in their respective pages, including fragments of code and in-depth explanations. In order to learn everything about the different classes and methods in the API, take a look at the [Python API reference](python_api.md). There is also another reference named [Code recipes](python_cookbook.md) containing some of the most common fragments of code regarding different functionalities that could be specially useful during these first steps. @@ -17,13 +17,13 @@ In order to learn everything about the different classes and methods in the API, --------------- ##First steps -

<h4>1st. World and client</h4>

+#### 1st. World and client The client is the module the user runs to ask for information or changes in the simulation. It communicates with the server via terminal. A client runs with an IP and a specific port. There can be many clients running at the same time, although multiclient managing needs a full comprehension on CARLA in order to make things work properly. The world is an object representing the simulation, an abstract layer containing the main methods to manage it: spawn actors, change the weather, get the current state of the world, etc. There is only one world per simulation, but it will be destroyed and substituted for a new one when the map is changed. -

<h4>2nd. Actors and blueprints</h4>

+#### 2nd. Actors and blueprints In CARLA, an actor is anything that plays a role in the simulation. That includes: * Vehicles. @@ -34,13 +34,13 @@ In CARLA, an actor is anything that plays a role in the simulation. That include A blueprint is needed in order to spawn an actor. __Blueprints__ are a set of already-made actor layouts: models with animations and different attributes. Some of these attributes can be set by the user, others don't. There is a library provided by CARLA containing all the available blueprints and the information regarding them. Visit the [Blueprint library](bp_library.md) to learn more about this. -

<h4>3rd. Maps and navigation</h4>

+#### 3rd. Maps and navigation The map is the object representing the model of the world. There are many maps available, seven by the time this is written, and all of them use OpenDRIVE 1.4 standard to describe the roads. Roads, lanes and junctions are managed by the API to be accessed using different classes and methods. These are later used along with the waypoint class to provide vehicles with a navigation path. Traffic signs and traffic lights have bounding boxes placed on the road that make vehicles aware of them and their current state in order to set traffic conditions. -

<h4>4th. Sensors and data</h4>

+#### 4th. Sensors and data Sensors are one of the most important actors in CARLA and their use can be quite complex. A sensor is attached to a parent vehicle and follows it around, gathering information of the surroundings for the sake of learning. Sensors, as any other actor, have blueprints available in the [Blueprint library](bp_library.md) that correspond to the types available. Currently, these are: diff --git a/Docs/core_map.md b/Docs/core_map.md index 760c26fb2..ce7b3ec0e 100644 --- a/Docs/core_map.md +++ b/Docs/core_map.md @@ -1,4 +1,4 @@ -

<h1>3rd. Maps and navigation</h1>

+# 3rd. Maps and navigation After discussing about the world and its actors, it is time to put everything into place and understand the map and how do the actors navigate it. @@ -15,7 +15,7 @@ After discussing about the world and its actors, it is time to put everything in Understanding the map in CARLA is equivalent to understanding the road. All of the maps have an OpenDRIVE file defining the road layout fully annotated. The way the [OpenDRIVE standard 1.4](http://www.opendrive.org/docs/OpenDRIVEFormatSpecRev1.4H.pdf) defines roads, lanes, junctions, etc. is extremely important. It determines the possibilities of the API and the reasoning behind many decisions made. The Python API provides a higher level querying system to navigate these roads. It is constantly evolving to provide a wider set of tools. -

<h4>Changing the map</h4>

+####Changing the map This was briefly mentioned in [1st. World and client](core_world.md), so let's expand a bit on it: __To change the map, the world has to change too__. Everything will be rebooted and created from scratch, besides the Unreal Editor itself. Using `reload_world()` creates a new instance of the world with the same map while `load_world()` is used to change the current one: @@ -29,20 +29,21 @@ print(client.get_available_maps()) ``` So far there are seven different maps available. Each of these has a specific structure or unique features that are useful for different purposes, so a brief sum up on these: -Town | Summary --- | -- -__Town 01__ | As __Town 02__, a basic town layout with all "T junctions". These are the most stable. -__Town 02__ | As __Town 01__, a basic town layout with all "T junctions". These are the most stable. -__Town 03__ | The most complex town with a roundabout, unevenness, a tunnel. Essentially a medley. -__Town 04__ | An infinite loop in a highway. -__Town 05__ | Squared-grid town with cross junctions and a bridge. -__Town 06__ | Long highways with a lane exit and a [Michigan left](https://en.wikipedia.org/wiki/Michigan_left). -__Town 07__ | A rural environment with narrow roads, barely non traffic lights and barns. +|Town | Summary | +| -- | -- | +|__Town 01__ | As __Town 02__, a basic town layout with all "T junctions". These are the most stable.| +|__Town 02__ | As __Town 01__, a basic town layout with all "T junctions". These are the most stable.| +|__Town 03__ | The most complex town with a roundabout, unevenness, a tunnel. Essentially a medley.| +|__Town 04__ | An infinite loop in a highway.| +|__Town 05__ | Squared-grid town with cross junctions and a bridge.| +|__Town 06__ | Long highways with a lane exit and a [Michigan left](https://en.wikipedia.org/wiki/Michigan_left). | +|__Town 07__ | A rural environment with narrow roads, barely non traffic lights and barns.| +
Users can also [customize a map](dev/map_customization.md) or even [create a new map](how_to_make_a_new_map.md) to be used in CARLA. These are more advanced steps and have been developed in their own tutorials. -

<h4>Lanes</h4>

+####Lanes The different types of lane as defined by [OpenDRIVE standard 1.4](http://www.opendrive.org/docs/OpenDRIVEFormatSpecRev1.4H.pdf) are translated to the API in [carla.LaneType](python_api.md#carla.LaneType). The surrounding lane markings for each lane can also be accessed using [carla.LaneMarking](python_api.md#carla.LaneMarkingType). A lane marking is defined by: a [carla.LaneMarkingType](python_api.md#carla.LaneMarkingType) and a [carla.LaneMarkingColor](python_api.md#carla.LaneMarkingColor), a __width__ to state thickness and a variable stating lane changing permissions with [carla.LaneChange](python_api.md#carla.LaneChange). @@ -58,7 +59,7 @@ left_lanemarking_type = waypoint.left_lane_marking.type() lane_change = waypoint.lane_change ``` -

<h4>Junctions</h4>

+####Junctions To ease managing junctions with OpenDRIVE, the [carla.Junction](python_api.md#carla.Junction) class provides for a bounding box to state whereas lanes or vehicles are inside of it. There is also a method to get a pair of waypoints per lane determining the starting and ending point inside the junction boundaries for each lane: @@ -66,7 +67,7 @@ There is also a method to get a pair of waypoints per lane determining the start waypoints_junc = my_junction.get_waypoints() ``` -

<h4>Waypoints</h4>

+####Waypoints [carla.Waypoint](python_api.md#carla.Waypoint) objects are 3D-directed points that are prepared to mediate between the world and the openDRIVE definition of the road. Each waypoint contains a [carla.Transform](python_api.md#carla.Transform) summarizing a point on the map inside a lane and the orientation of the lane. The variables `road_id`,`section_id`,`lane_id` and `s` that translate this transform to the OpenDRIVE road and are used to create an __identifier__ of the waypoint. diff --git a/Docs/core_sensors.md b/Docs/core_sensors.md index 2db6866e6..5538f64c8 100644 --- a/Docs/core_sensors.md +++ b/Docs/core_sensors.md @@ -1,4 +1,4 @@ -
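A short sketch of the usual waypoint workflow, assuming `world` and a spawned `vehicle` (the 2 meter step is arbitrary):

```py
carla_map = world.get_map()

# Get the waypoint on the closest driving lane to the vehicle's current location.
waypoint = carla_map.get_waypoint(vehicle.get_location())
print(waypoint.road_id, waypoint.section_id, waypoint.lane_id, waypoint.s)

# List the waypoints located roughly 2 meters ahead along the lane.
next_waypoints = waypoint.next(2.0)
```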

<h1>4th. Sensors and data</h1>

+# 4th. Sensors and data The last step in this introduction to CARLA are sensors. They allow to retrieve data from the surroundings and so, are crucial to use CARLA as a learning environment for driving agents. This page summarizes everything necessary to start handling sensors including some basic information about the different types available and a step-by-step of their life cycle. The specifics for every sensor can be found in their [reference](ref_sensors.md) @@ -24,7 +24,7 @@ The class [carla.Sensor](python_api.md#carla.Sensor) defines a special type of a Despite their differences, the way the user manages every sensor is quite similar. -

<h4>Setting</h4>

+####Setting As with every other actor, the first step is to find the proper blueprint in the library and set specific attributes to get the desired results. This is essential when handling sensors, as their capabilities depend on the way these are set. Their attributes are detailed in the [sensors' reference](ref_sensors.md). @@ -40,7 +40,7 @@ blueprint.set_attribute('fov', '110') blueprint.set_attribute('sensor_tick', '1.0') ``` -

<h4>Spawning</h4>

+####Spawning Sensors are also spawned like any other actor, only this time the two optional parameters, `attachment_to` and `attachment_type` are crucial. They should be attached to another actor, usually a vehicle, to follow it around and gather the information regarding its surroundings. There are two types of attachment: @@ -55,7 +55,7 @@ sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle) !!! Important When spawning an actor with attachment, remember that its location should be relative to its parent, not global. -
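For the second type of attachment mentioned above, a hedged variation of the previous snippet, assuming the same `blueprint` and `my_vehicle` and a client version that exposes the `attachment_type` parameter:

```py
# SpringArm attachment: the sensor follows the vehicle with a slight, eased lag,
# which is useful for cinematic or spectator-like views.
transform = carla.Transform(carla.Location(x=-5.5, z=2.8), carla.Rotation(pitch=-15))
sensor = world.spawn_actor(
    blueprint,
    transform,
    attach_to=my_vehicle,
    attachment_type=carla.AttachmentType.SpringArm)
```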

<h4>Listening</h4>

+####Listening Every sensor has a [`listen()`](python_api.md#carla.Sensor.listen) method that is called every time the sensor retrieves data. This method has one argument: `callback`, which is a [lambda expression](https://www.w3schools.com/python/python_lambda.asp) of a function, defining what should the sensor do when data is retrieved. @@ -87,13 +87,13 @@ Sensor data differs a lot between sensor types, but it is always tagged with: | --------------------- | ------ | ----------- | | `frame` | int | Frame number when the measurement took place. | | `timestamp` | double | Timestamp of the measurement in simulation seconds since the beginning of the episode. | -| `transform` | carla.Transform | World reference of the sensor at the time of the measurement. | - +| `transform` | carla.Transform | World reference of the sensor at the time of the measurement. | +
--------------- ##Types of sensors -

<h4>Cameras</h4>

+####Cameras These sensors take a shot of the world from their point of view and then use the helper class to alter this image and provide different types of information. __Retrieve data:__ every simulation step. @@ -102,9 +102,10 @@ __Retrieve data:__ every simulation step. | ---------- | ---------- | ---------- | | Depth | [carla.Image](python_api.md#carla.Image) | Renders the depth of the elements in the field of view in a gray-scale depth map. | | RGB | [carla.Image](python_api.md#carla.Image) | Provides clear vision of the surroundings. Looks like a normal photo of the scene. | -| Semantic segmentation | [carla.Image](python_api.md#carla.Image) | Renders elements in the field of view with a specific color according to their tags. | +| Semantic segmentation | [carla.Image](python_api.md#carla.Image) | Renders elements in the field of view with a specific color according to their tags. | -

<h4>Detectors</h4>

+<br>
+####Detectors Sensors that retrieve data when a parent object they are attached to registers a specific event in the simulation. __Retrieve data:__ when triggered. @@ -113,9 +114,10 @@ __Retrieve data:__ when triggered. | ---------- | ---------- | ---------- | | Collision | [carla.CollisionEvent](python_api.md#carla.CollisionEvent) | Retrieves collisions between its parent and other actors. | | Lane invasion | [carla.LaneInvasionEvent](python_api.md#carla.LaneInvasionEvent) | Registers when its parent crosses a lane marking. | -| Obstacle | [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleEvent) | Detects possible obstacles ahead of its parent. | +| Obstacle | [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleEvent) | Detects possible obstacles ahead of its parent. | -

<h4>Other</h4>

+<br>
+####Other This group gathers sensors with different functionalities: navigation, measure physical properties of an object and provide 2D and 3D models of the scene. __Retrieve data:__ every simulation step. @@ -125,8 +127,9 @@ __Retrieve data:__ every simulation step. | GNSS | [carla.GNSSMeasurement](python_api.md#carla.GNSSMeasurement) | Retrieves the geolocation location of the sensor. | | IMU | [carla.IMUMeasurement](python_api.md#carla.IMUMeasurement) | Comprises an accelerometer, a gyroscope and a compass. | | Lidar raycast | [carla.LidarMeasurement](python_api.md#carla.LidarMeasurement) | A rotating lidar retrieving a cloud of points to generate a 3D model the surroundings. | -| Radar | [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) | 2D point map that models elements in sight and their movement regarding the sensor. | +| Radar | [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) | 2D point map that models elements in sight and their movement regarding the sensor. | +
--------------- That is a wrap on sensors and how do these retrieve simulation data and thus, the introduction to CARLA is finished. However there is yet a lot to learn. Some of the different paths to follow now are listed here: diff --git a/Docs/core_world.md b/Docs/core_world.md index 1cb79583b..8b49dd8f0 100644 --- a/Docs/core_world.md +++ b/Docs/core_world.md @@ -1,4 +1,4 @@ -

<h1>1st. World and client</h1>

+# 1st. World and client This is bound to be one of the first topics to learn about when entering CARLA. The client and the world are two of the fundamentals of CARLA, a necessary abstraction to operate the simulation and its actors. This tutorial goes from defining the basics and creation of these elements to describing their possibilities without entering into higher complex matters. If any doubt or issue arises during the reading, the [CARLA forum](forum.carla.org/) is there to solve them. @@ -21,7 +21,7 @@ Besides that, the client is also able to access other CARLA modules, features an The __carla.Client__ class is explained thoroughly in the [PythonAPI reference](python_api.md#carla.Client). -

<h4>Client creation</h4>

+####Client creation Two things are needed: The IP address identifying it and two TCP ports the client will be using to communicate with the server. There is an optional third parameter, an `int` to set the working threads that by default is set to all (`0`). [This code recipe](python_cookbook.md#parse-client-creation-arguments) shows how to parse these as arguments when running the script. @@ -41,7 +41,7 @@ It is possible to have many clients connected, as it is common to have more than !!! Note Client and server have different `libcarla` modules. If the versions differ due to different origin commits, issues may arise. This will not normally the case, but it can be checked using the `get_client_version()` and `get_server_version()` methods. -

<h4>World connection</h4>

+####World connection Being the simulation running, a configured client can connect and retrieve the current world easily: @@ -58,7 +58,7 @@ world = client.load_world('Town01') Every world object has an `id` or episode. Everytime the client calls for `load_world()` or `reload_world()` the previous one is destroyed and the new one is created from from scratch without rebooting Unreal Engine, so this episode will change. -

<h4>Other client utilities</h4>

+####Other client utilities The main purpose of the client object is to get or change the world and many times, it is no longer used after that. However, this object is in charge of two other main tasks: accessing to advanced CARLA features and applying command batches. The list of features that are accessed from the client object are: @@ -87,7 +87,7 @@ This class acts as the major ruler of the simulation and its instance should be In fact, some of the most important methods of this class are the _getters_. They summarize all the information the world has access to. More explicit information regarding the World class can be found in the [Python API reference](python_api.md#carla.World). -
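As an example of the command batches mentioned in that list, a common clean-up pattern at the end of a script, assuming `client` and a list `actor_list` of previously spawned actors:

```py
# Destroy every spawned actor in a single batch request instead of one by one.
client.apply_batch([carla.command.DestroyActor(actor) for actor in actor_list])
```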

<h4>Actors</h4>

+####Actors The world has different methods related with actors that allow it to: @@ -99,7 +99,7 @@ The world has different methods related with actors that allow it to: Explanations on spawning will be conducted in the second step of this guide: [2nd. Actors and blueprints](core_actors.md), as it requires some understanding on the blueprint library, attributes, etc. Keep reading or visit the [Python API reference](python_api.md) to learn more about this matter. -

<h4>Weather</h4>

+####Weather The weather is not a class on its own, but a world setting. However, there is a helper class named [carla.WeatherParameters](python_api.md#carla.WeatherParameters) that allows to define a series of visual characteristics such as sun orientation, cloudiness, lightning, wind and much more. The changes can then be applied using the world as the following example does: ```py @@ -122,7 +122,7 @@ world.set_weather(carla.WeatherParameters.WetCloudySunset) !!! Note Changes in the weather do not affect physics. They are only visuals that can be captured by the camera sensors. -

<h4>Debugging</h4>

+#### Debugging World objects have a public attribute that defines a [carla.DebugHelper](python_api.md#carla.DebugHelper) object. It allows for different shapes to be drawn during the simulation in order to trace the events happening. The following example would access the attribute to draw a red box at an actor's location and rotation. @@ -133,7 +133,7 @@ debug.draw_box(carla.BoundingBox(actor_snapshot.get_transform().location,carla.V This example is extended in this [code recipe](python_cookbook.md#debug-bounding-box-recipe) to draw boxes for every actor in a world snapshot. Take a look at it and at the Python API reference to learn more about this. -

<h4>World snapshots</h4>

+#### World snapshots Contains the state of every actor in the simulation at a single frame, a sort of still image of the world with a time reference. This feature makes sure that all the information contained comes from the same simulation step without the need of using synchronous mode. @@ -157,7 +157,7 @@ for actor_snapshot in world_snapshot: #Get the actor and the snapshot informatio actor_snapshot = world_snapshot.find(actual_actor.id) #Get an actor's snapshot ``` -

<h4>World settings</h4>

+#### World settings The world also has access to some advanced configurations for the simulation that determine rendering conditions, steps in the simulation time and synchrony between clients and server. These are advanced concepts that do better if untouched by newcomers. For the time being let's say that CARLA by default runs in with its best quality, with a variable time-step and asynchronously. The helper class is [carla.WorldSettings](python_api.md#carla.WorldSettings). To dive further in this matters take a look at the __Advanced steps__ section of the documentation and read about [synchrony and time-step](simulation_time_and_synchrony.md) or [rendering_options.md](../rendering_options). diff --git a/Docs/cpp_reference.md b/Docs/cpp_reference.md index 8fb8aa75d..ae9ea4513 100644 --- a/Docs/cpp_reference.md +++ b/Docs/cpp_reference.md @@ -1,5 +1,5 @@ -
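Even though this is an advanced topic, a brief hedged sketch of what those settings look like (the 0.05 seconds fixed time-step is just an example value):

```py
settings = world.get_settings()

# Run at a fixed 20 FPS time-step with the client in charge of ticking the server.
settings.fixed_delta_seconds = 0.05
settings.synchronous_mode = True
world.apply_settings(settings)

world.tick()  # advances exactly one simulation step

# Restore asynchronous mode and a variable time-step before leaving.
settings.synchronous_mode = False
settings.fixed_delta_seconds = None
world.apply_settings(settings)
```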

<h1>C++ Reference</h1>

+# C++ Reference We use Doxygen to generate the documentation of our C++ code: [Libcarla/Source](http://carla.org/Doxygen/html/dir_b9166249188ce33115fd7d5eed1849f2.html)
diff --git a/Docs/dev/build_system.md b/Docs/dev/build_system.md index ed3d00e94..f0300656c 100644 --- a/Docs/dev/build_system.md +++ b/Docs/dev/build_system.md @@ -1,4 +1,4 @@ -

<h1>Build system</h1>

+#Build system > _This document is a work in progress, only the Linux build system is taken into account here._ @@ -49,8 +49,9 @@ Two configurations: | **Requires** | rpclib, gtest, boost | rpclib, boost | **std runtime** | LLVM's `libc++` | Default `libstdc++` | | **Output** | headers and test exes | `libcarla_client.a` | -| **Required by** | Carla plugin | PythonAPI | +| **Required by** | Carla plugin | PythonAPI | +
#### CarlaUE4 and Carla plugin Both compiled at the same step with Unreal Engine build tool. They require the diff --git a/Docs/dev/how_to_add_a_new_sensor.md b/Docs/dev/how_to_add_a_new_sensor.md index ca742f236..20e952e18 100644 --- a/Docs/dev/how_to_add_a_new_sensor.md +++ b/Docs/dev/how_to_add_a_new_sensor.md @@ -1,4 +1,4 @@ -

<h1>How to add a new sensor</h1>

+# How to add a new sensor This tutorial explains the basics for adding a new sensor to CARLA. It provides the necessary steps to implement a sensor in Unreal Engine 4 (UE4) and expose diff --git a/Docs/dev/how_to_make_a_release.md b/Docs/dev/how_to_make_a_release.md index b63bfb32c..6a20a3545 100644 --- a/Docs/dev/how_to_make_a_release.md +++ b/Docs/dev/how_to_make_a_release.md @@ -1,4 +1,4 @@ -

<h1>How to make a release</h1>

+# How to make a release > _This document is meant for developers that want to publish a new release._ diff --git a/Docs/dev/how_to_upgrade_content.md b/Docs/dev/how_to_upgrade_content.md index 45cd603a5..8b9a4fdc2 100644 --- a/Docs/dev/how_to_upgrade_content.md +++ b/Docs/dev/how_to_upgrade_content.md @@ -1,4 +1,4 @@ -

<h1>How to upgrade content</h1>

+# How to upgrade content Our content resides on a separate [Git LFS repository][contentrepolink]. As part of our build system, we generate and upload a package containing the latest diff --git a/Docs/dev/map_customization.md b/Docs/dev/map_customization.md index f1fa1de60..63f0c205b 100644 --- a/Docs/dev/map_customization.md +++ b/Docs/dev/map_customization.md @@ -1,4 +1,4 @@ -

<h1>Map customization</h1>

+# Map customization > _This document is a work in progress and might be incomplete._ @@ -11,11 +11,11 @@ Creating a new map this guide will suggest duplicating an existing level instead of creating one from scratch. -

<h4>Requirements</h4>

+#### Requirements - Checkout and build Carla from source on [Linux](../how_to_build_on_linux.md) or [Windows](../how_to_build_on_windows.md) -

<h4>Creating</h4>

+#### Creating - Duplicate an existing map - Remove everything you don't need from the map @@ -66,7 +66,7 @@ SplinemeshRepeater !!! Bug See [#35 SplineMeshRepeater loses its collider mesh](https://github.com/carla-simulator/carla/issues/35) -

<h4>Standard use:</h4>

+#### Standard use SplineMeshRepeater "Content/Blueprints/SplineMeshRepeater" is a tool included in the Carla Project to help building urban environments; It repeats and aligns a @@ -92,7 +92,7 @@ the lower point possible with the rest of the mesh pointing positive (Preferably by the X axis) -

<h4>Specific Walls (Dynamic material)</h4>

+#### Specific Walls (Dynamic material) In the project folder "Content/Static/Walls" are included some specific assets to be used with this SplineMeshRepeater with a series of special diff --git a/Docs/doc_standard.md b/Docs/doc_standard.md index 452c0bfdd..d81d40546 100644 --- a/Docs/doc_standard.md +++ b/Docs/doc_standard.md @@ -1,4 +1,4 @@ -

<h1>Documentation Standard</h1>

+# Documentation Standard This document will serve as a guide and example of some rules that need to be followed in order to contribute to the documentation. @@ -6,8 +6,9 @@ followed in order to contribute to the documentation. We use a mix of markdown and HTML tags to customize the documentation along with an [`extra.css`](https://github.com/carla-simulator/carla/tree/master/Docs/extra.css) file. -Rules ------ +------- +## Rules + * Leave always an empty line between sections and at the end of the document. * Writting should not exceed `100` columns, except for HTML related content, markdown tables, @@ -20,7 +21,8 @@ Rules * Use `------` underlining a Heading or `#` hierarchy to make headings and show them in the navigation bar. -

<h4>Exceptions:</h4>

+-------- +## Exceptions * Documentation generated via python scripts like PythonAPI reference diff --git a/Docs/epic_automotive_materials.md b/Docs/epic_automotive_materials.md index e6be5e97c..00f2b6330 100644 --- a/Docs/epic_automotive_materials.md +++ b/Docs/epic_automotive_materials.md @@ -1,4 +1,4 @@ -

<h1>How to link Epic's Automotive Materials</h1>

+# How to link Epic's Automotive Materials !!! important Since version 0.8.0 CARLA does not use Epic's _Automotive Materials_ by diff --git a/Docs/faq.md b/Docs/faq.md index e5a3d4053..d6f4ab7a6 100644 --- a/Docs/faq.md +++ b/Docs/faq.md @@ -1,4 +1,4 @@ -

<h1>F.A.Q.</h1>

+# F.A.Q. Some of the most common issues regarding CARLA installation and builds are listed hereunder. More issues can be found in the [GitHub issues][issuelink] for the project. In case a doubt is not listed here, take a look at the forum and feel free to post it. [issuelink]: https://github.com/carla-simulator/carla/issues?utf8=%E2%9C%93&q=label%3Aquestion+ diff --git a/Docs/getting_started/quickstart.md b/Docs/getting_started/quickstart.md index 519fc6f75..4714079ef 100644 --- a/Docs/getting_started/quickstart.md +++ b/Docs/getting_started/quickstart.md @@ -1,4 +1,4 @@ -

<h1>Quickstart</h1>

+#Quick start * [Requirements](#requirements) * [Downloading CARLA](#downloading-carla) @@ -6,6 +6,7 @@ * Command-line options * [Updating CARLA](#updating-carla) * [Summary](#summary) + --------------- ##Requirements @@ -63,7 +64,7 @@ A window will open, containing a view over the city. This is the "spectator" vie !!! note If the firewall or any other application are blocking the TCP ports needed, these can be manually changed by adding to the previous command the argument: `-carla-port=N`, being `N` the desired port. The second will be automatically set to `N+1`. -

<h4>Command-line options</h4>

+####Command-line options There are some configuration options available when launching CARLA: diff --git a/Docs/how_to_add_assets.md b/Docs/how_to_add_assets.md index 110821c67..4ac854e30 100644 --- a/Docs/how_to_add_assets.md +++ b/Docs/how_to_add_assets.md @@ -1,4 +1,4 @@ -

<h1>How to add assets</h1>

+# How to add assets Adding a vehicle ---------------- diff --git a/Docs/how_to_add_friction_triggers.md b/Docs/how_to_add_friction_triggers.md index cf1748267..c7d11a5f9 100644 --- a/Docs/how_to_add_friction_triggers.md +++ b/Docs/how_to_add_friction_triggers.md @@ -1,4 +1,4 @@ -

<h1>How to add friction triggers</h1>

+# How to add friction triggers *Friction Triggers* are box triggers that can be added on runtime and let users define a different friction of the vehicles' wheels when being inside those type of triggers. diff --git a/Docs/how_to_build_on_linux.md b/Docs/how_to_build_on_linux.md index 5d20cfbc9..bce3c39af 100644 --- a/Docs/how_to_build_on_linux.md +++ b/Docs/how_to_build_on_linux.md @@ -1,4 +1,4 @@ -

<h1>Linux build</h1>

+#Linux build * [__Requirements__](#requirements): * System specifics @@ -148,6 +148,8 @@ Run the script to get the assets: ```sh ./Update.sh ``` +!!! Important + To get the assets still in development visit the [Update CARLA](../update_carla#get-development-assets) page and read __Get development assets__.

<h4>Set the environment variable</h4>

@@ -190,8 +192,9 @@ Now everything is ready to go and CARLA has been successfully built. Here is a b | `make PythonAPI` | Builds the CARLA client. | | `make package` | Builds CARLA and creates a packaged version for distribution. | | `make clean` | Deletes all the binaries and temporals generated by the build system. | -| `make rebuild` | make clean and make launch both in one command. | +| `make rebuild` | make clean and make launch both in one command. | +
Keep reading this section to learn more about how to update CARLA, the build itself and some advanced configuration options. Otherwise, visit the __First steps__ section to learn about CARLA:
diff --git a/Docs/how_to_build_on_windows.md b/Docs/how_to_build_on_windows.md index 03d63c307..5ffd056a7 100644 --- a/Docs/how_to_build_on_windows.md +++ b/Docs/how_to_build_on_windows.md @@ -1,4 +1,4 @@ -

<h1>Windows build</h1>

+#Windows build * [__Requirements__](#requirements): * System specifics * [__Necessary software__](#necessary-software): @@ -21,7 +21,7 @@ CARLA forum
--- ##Requirements -

<h4>System specifics</h4>

+####System specifics * __x64 system:__ The simulator should run in any Windows system currently available as long as it is a 64 bits OS. * __30GB disk space:__ Installing all the software needed and CARLA itself will require quite a lot of space, especially Unreal Engine. Make sure to have around 30/50GB of free disk space. @@ -29,7 +29,7 @@ CARLA forum * __Two TCP ports and good internet connection:__ 2000 and 2001 by default. Be sure neither firewall nor any other application are blocking these. --- ##Necessary software -

<h4>Minor installations</h4>

+####Minor installations Some software is needed for the build process the installation of which is quite straightforward. * [CMake](https://cmake.org/download/): Generates standard build files from simple configuration files. @@ -41,7 +41,7 @@ Some software is needed for the build process the installation of which is quite Be sure that these programs are added to your [environment path](https://www.java.com/en/download/help/path.xml), so you can use them from your command prompt. The path values to add lead to the _bin_ directories for each software. -

<h4>Visual Studio 2017</h4>

+####Visual Studio 2017 Get the 2017 version from [here](https://developerinsider.co/download-visual-studio-2017-web-installer-iso-community-professional-enterprise/). **Community** is the free version. Two elements will be needed to set up the environment for the build process. These must be added when using the Visual Studio Installer: @@ -52,7 +52,7 @@ Get the 2017 version from [here](https://developerinsider.co/download-visual-stu Having other Visual Studio versions may cause conflict during the build process, even if these have been uninstalled (Visual Studio is not that good at getting rid of itself and erasing registers). To completely clean Visual Studio from the computer run `.\InstallCleanup.exe -full` found in `Program Files (x86)\Microsoft Visual Studio\Installer\resources\app\layout`. This may need admin permissions. -

<h4>Unreal Engine 4.22</h4>

+####Unreal Engine 4.22 Go to the [Unreal Engine](https://www.unrealengine.com/download) site and download the Epic Games Launcher. In the _Library_ section, inside the _Engine versions_ panel, download any Unreal Engine 4.22.x version. After installing it, make sure to run it in order to be sure that everything was properly installed. @@ -60,12 +60,12 @@ Go to the [Unreal Engine](https://www.unrealengine.com/download) site and downlo This note will only be relevant if issues arise during the build process and manual build is required. Having VS2017 and UE4.22 installed, a **Generate Visual Studio project files** option should appear when doing right-click on **.uproject** files. If this option is not available, something went wrong while installing Unreal Engine and it may need to be reinstalled. Create a simple Unreal Engine project to check up. --- -# CARLA build +##CARLA build !!! Important Lots of things have happened so far. It is highly advisable to do a quick restart of the system. -

<h4>Clone repository</h4>

+####Clone repository
@@ -86,12 +86,12 @@ Now the latest content for the project, known as `master` branch in the reposito !!! Note The `master` branch contains the latest fixes and features. Stable code is inside the `stable` branch, and it can be built by changing the branch. The same goes for previous CARLA releases. Always remember to check the current branch in git with `git branch`. -

<h4>Get assets</h4>

+####Get assets Only the assets package, the visual content, is yet to be downloaded. `\Util\ContentVersions.txt` contains the links to the assets for every CARLA version. These must be extracted in `Unreal\CarlaUE4\Content\Carla`. If the path doesn't exist, create it. Download the **latest** assets to work with the current version of CARLA. When working with branches containing previous releases of CARLA, make sure to download the proper assets. -

<h4>make CARLA</h4>

+####make CARLA Go to the root CARLA folder, the one cloned from the repository. It is time to do the automatic build. The process may take a while, it will download and install the necessary libraries. It might take 20-40 minutes, depending on hardware and internet connection. There are different make commands to build the different modules: @@ -122,8 +122,9 @@ Now everything is ready to go and CARLA has been successfully built. Here is a b | `make PythonAPI` | Builds the CARLA client. | | `make package` | Builds CARLA and creates a packaged version for distribution. | | `make clean` | Deletes all the binaries and temporals generated by the build system. | -| `make rebuild` | make clean and make launch both in one command. | +| `make rebuild` | make clean and make launch both in one command. | +
Keep reading this section to learn more about how to update CARLA, the build itself and some advanced configuration options. Otherwise, visit the __First steps__ section to learn about CARLA:
diff --git a/Docs/how_to_control_vehicle_physics.md b/Docs/how_to_control_vehicle_physics.md index d9ecfb889..24798176e 100644 --- a/Docs/how_to_control_vehicle_physics.md +++ b/Docs/how_to_control_vehicle_physics.md @@ -1,4 +1,4 @@ -

<h1>How to control vehicle physics</h1>

+# How to control vehicle physics Physics properties can be tuned for vehicles and its wheels. These changes are applied **only** on runtime, and values are set back to default ones when diff --git a/Docs/how_to_generate_pedestrians_navigation.md b/Docs/how_to_generate_pedestrians_navigation.md index 56427f88b..22203d7b5 100644 --- a/Docs/how_to_generate_pedestrians_navigation.md +++ b/Docs/how_to_generate_pedestrians_navigation.md @@ -1,4 +1,4 @@ -
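A hedged sketch of what such a runtime change can look like through the Python API, assuming a spawned `vehicle` (the values are arbitrary):

```py
physics_control = vehicle.get_physics_control()

# Tune a couple of engine and chassis parameters.
physics_control.max_rpm = 10000.0
physics_control.mass = 1800.0

# Lower the tire friction of every wheel, e.g. to mimic a slippery surface.
wheels = physics_control.wheels
for wheel in wheels:
    wheel.tire_friction = 2.0
physics_control.wheels = wheels

vehicle.apply_physics_control(physics_control)
```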

<h1>How to generate the pedestrian navigation info</h1>

+# How to generate the pedestrian navigation info ### Introduction The pedestrians to walk need information about the map in a specific format. That file that describes the map for navigation is a binary file with extension `.BIN`, and they are saved in the **Nav** folder of the map. Each map needs a `.BIN` file with the same name that the map, so automatically can be loaded with the map. @@ -21,7 +21,9 @@ We have several types of meshes for navigation. The meshes need to be identified | Grass | `Road_Crosswalk` | Pedestrians can walk over these meshes but as a second option if no ground is found. | | Road | `Road_Grass` | Pedestrians won't be allowed to walk on it unless we specify some percentage of pedestrians that will be allowed. | | Crosswalk | `Road_Road`, `Road_Curb`, `Road_Gutter` or `Road_Marking` | Pedestrians can cross the roads only through these meshes. | -| Block | any other name | Pedestrians will avoid these meshes always (are obstacles like traffic lights, trees, houses...). | +| Block | any other name | Pedestrians will avoid these meshes always (are obstacles like traffic lights, trees, houses...). | + +
For instance, all road meshes need to start with `Road_Road` e.g: `Road_Road_Mesh_1`, `Road_Road_Mesh_2`... diff --git a/Docs/how_to_make_a_new_map.md b/Docs/how_to_make_a_new_map.md index af4a26dca..f990e0832 100644 --- a/Docs/how_to_make_a_new_map.md +++ b/Docs/how_to_make_a_new_map.md @@ -1,7 +1,8 @@ -

<h1>How to create and import a new map</h1>

+# How to create and import a new map ![Town03](img/create_map_01.jpg) +----- ## 1 Create a new map Files needed: @@ -15,6 +16,7 @@ tutorial. The following steps will introduce the RoadRunner software for map creation. If the map is created by other software, go to this [section](#3-importing-into-unreal). +------ ## 2 Create a new map with RoadRunner RoadRunner is a powerful software from Vector Zero to create 3D scenes. Using RoadRunner is easy, @@ -22,7 +24,7 @@ in a few steps you will be able to create an impressive scene. You can download a trial of RoadRunner at VectorZero's web page.
- +

Read VectorZero's RoadRunner [documentation][rr_docs] to install it and get started. @@ -36,7 +38,7 @@ They also have very useful [tutorials][rr_tutorials] on how to use RoadRunner, c !!! important Create the map centered arround (0, 0). -## 2.1 Validate the map +#### 2.1 Validate the map * Check that all connections and geometries seem correct. @@ -51,8 +53,8 @@ button and export. The _OpenDrive Preview Tool_ button lets you test the integrity of the current map. If there is any error with map junctions, click on `Maneuver Tool` and `Rebuild Maneuver Roads` buttons. - -## 2.2 Export the map + +#### 2.2 Export the map After verifying that everything is correct, it is time to export the map to CARLA. @@ -75,6 +77,7 @@ _check VectorZeros's [documentation][exportlink]._ [exportlink]: https://tracetransit.atlassian.net/wiki/spaces/VS/pages/752779356/Exporting+to+CARLA +------- ## 3 Importing into Unreal This section is divided into two. The first part shows how to import a map from RoadRunner @@ -87,9 +90,9 @@ and the second part shows how to import a map from other software that generates We have also created a new way to import assets into Unreal, check this [`guide`](./asset_packages_for_dist.md)! -### 3.1 Importing from RoadRunner +#### 3.1 Importing from RoadRunner -#### 3.1.1 Plugin Installation +##### 3.1.1 Plugin Installation RoadRunner provides a series of plugins that make the importing simpler. @@ -100,7 +103,7 @@ RoadRunner provides a series of plugins that make the importing simpler. 3. Rebuild the plugin. -##### Rebuild on Windows +###### Rebuild on Windows 1. Generate project files. @@ -108,7 +111,7 @@ RoadRunner provides a series of plugins that make the importing simpler. 2. Open the project and build the plugins. -##### Rebuild on Linux +###### Rebuild on Linux ```sh > UE4_ROOT/GenerateProjectFiles.sh -project="carla/Unreal/CarlaUE4/CarlaUE4.uproject" -game -engine @@ -118,7 +121,7 @@ Finally, restart Unreal Engine and make sure the checkbox is on for both plugins ![rr_ue_plugins](img/rr-ue4_plugins.png) -#### 3.1.2 Importing +##### 3.1.2 Importing 1. Import the _mapname.fbx_ file to a new folder under `/Content/Carla/Maps` with the `Import` button. @@ -144,7 +147,7 @@ The new map should now appear next to the others in the Unreal Engine _Content B And that's it! The map is ready! -### 3.2 Importing from the files +#### 3.2 Importing from the files This is the generic way to import maps into Unreal. @@ -154,7 +157,7 @@ and paste it in the new level, otherwise, the map will be in the dark. ![ue_illumination](img/ue_illumination.png) -#### 3.2.1 Binaries (.fbx) +##### 3.2.1 Binaries (.fbx) 1. Import the _mapname.fbx_ file to a new folder under `/Content/Carla/Maps` with the `Import` button. Make sure the following options are unchecked: @@ -238,7 +241,7 @@ Content ![ue__semantic_segmentation](img/ue_ssgt.png) -#### 3.2.2 OpenDRIVE (.xodr) +##### 3.2.2 OpenDRIVE (.xodr) 1. Copy the `.xodr` file inside the `Content/Carla/Maps/OpenDrive` folder. 2. Open the Unreal level and drag the _Open Drive Actor_ inside the level. @@ -248,6 +251,7 @@ It will read the level's name, search the Opendrive file with the same name and And that's it! Now the road network information is loaded into the map. +------- ## 4. Setting up traffic behavior Once everything is loaded into the level, it is time to create traffic behavior. @@ -261,7 +265,7 @@ Once everything is loaded into the level, it is time to create traffic behavior. 
This will generate a bunch of _RoutePlanner_ and _VehicleSpawnPoint_ actors that make it possible for vehicles to spawn and go in autopilot mode. -## 4.1 Traffic lights and signs +#### 4.1 Traffic lights and signs To regulate the traffic, traffic lights and signs must be placed all over the map. @@ -287,6 +291,7 @@ might need some tweaking and testing to fit perfectly into the city. > _Example: Traffic Signs, Traffic lights and Turn based stop._ +---------- ## 5 Adding pedestrian navigation areas To make a navigable mesh for pedestrians, we use the _Recast & Detour_ library.
@@ -335,6 +340,7 @@ Then build RecastDemo. Follow their [instructions][buildrecastlink] on how to bu Now pedestrians will be able to spawn randomly and walk on the selected meshes! +---------- ## Tips and Tricks * Traffic light group controls wich traffic light is active (green state) at each moment. diff --git a/Docs/how_to_model_vehicles.md b/Docs/how_to_model_vehicles.md index 75fa29870..845da8d61 100644 --- a/Docs/how_to_model_vehicles.md +++ b/Docs/how_to_model_vehicles.md @@ -1,8 +1,9 @@ -

<h1>How to model vehicles</h1>

+# How to model vehicles -# 4-Wheeled Vehicles +------------ +## 4-Wheeled Vehicles -## Modelling +#### Modelling Vehicles must have a minimum of 10.000 and a maximum of 17.000 Tris approximately. We model the vehicles using the size and scale of actual cars. @@ -35,7 +36,7 @@ The vehicle must be divided in 6 materials: Put a rectangular plane with this size 29-12 cm, for the licence Plate. We assign the license plate texture. -## Nomenclature of Material +#### Nomenclature of Material * M(Material)_"CarName"_Bodywork(part of car) @@ -49,7 +50,7 @@ The vehicle must be divided in 6 materials: * M_"CarName"_LicencePlate -## Textures +#### Textures The size of the textures is 2048x2048. @@ -59,7 +60,7 @@ The size of the textures is 2048x2048. * T_"CarName"_PartOfMaterial_orm (OcclusionRoughnessMetallic) -* **EXEMPLE**: +* **EXAMPLE**: Type of car Tesla Model 3 TEXTURES @@ -70,7 +71,7 @@ TEXTURES MATERIAL * M_Tesla3_BodyWork -## RIG +#### RIG The easiest way is to copy the "General4WheeledVehicleSkeleton" present in our project, either by exporting it and copying it to your model or by creating your skeleton @@ -91,7 +92,7 @@ Vhehicle_Base: The origin point of the mesh, place it in the point (0,0,0) of th * Wheel_Rear_Left: Set the joint's position in the middle of the Wheel. -## LODs +#### LODs All vehicle LODs must be made in Maya or other 3D software. Because Unreal does not generate LODs automatically, you can adjust the number of Tris to make a diff --git a/Docs/python_api.md b/Docs/python_api.md index 792162a12..076872652 100644 --- a/Docs/python_api.md +++ b/Docs/python_api.md @@ -1,3 +1,4 @@ +#Python API reference ## carla.Actor CARLA defines actors as anything that plays a role in the simulation or can be moved around. That includes: pedestrians, vehicles, sensors and traffic signs (considering traffic lights as part of these). Actors are spawned in the simulation by [carla.World](#carla.World) and they need for a [carla.ActorBlueprint](#carla.ActorBlueprint) to be created. These blueprints belong into a library provided by CARLA, find more about them [here](../bp_library/). diff --git a/Docs/python_cookbook.md b/Docs/python_cookbook.md index cf563d410..7468ca99a 100644 --- a/Docs/python_cookbook.md +++ b/Docs/python_cookbook.md @@ -1,5 +1,4 @@ - -

<h1>Code recipes</h1>

+# Code recipes This section contains a list of recipes that complement the [tutorial](../python_api_tutorial/) and are used to illustrate the use of Python API methods. diff --git a/Docs/recorder_and_playback.md b/Docs/recorder_and_playback.md index f3ec90c4b..29d8baf1e 100644 --- a/Docs/recorder_and_playback.md +++ b/Docs/recorder_and_playback.md @@ -1,4 +1,4 @@ -

<h1>Recorder</h1>

+# Recorder This is one of the advanced CARLA features. It allows to record and reenact a simulation while providing with a complete log of the events happened and a few queries to ease the trace and study of those. To learn about the generated file and its specifics take a look at this [reference](recorder_binary_file_format.md). @@ -59,12 +59,14 @@ client.replay_file("recording01.log", start, duration, camera) | ---------- | ----------------------------------------------------- | ----- | | `start` | Recording time in seconds to start the simulation at. | If positive, time will be considered from the beginning of the recording.
If negative, it will be considered from the end. | | `duration` | Seconds to playback. 0 is all the recording. | By the end of the playback, vehicles will be set to autopilot and pedestrians will stop. | -| `camera` | ID of the actor that the camera will focus on. | By default the spectator will move freely. | +| `camera` | ID of the actor that the camera will focus on. | By default the spectator will move freely. | + +
!!! Note These parameters allows to recall an event and then let the simulation run free, as vehicles will be set to autopilot when the recording stops. -

<h4>Setting a time factor</h4>

+####Setting a time factor The time factor will determine the playback speed. @@ -75,7 +77,9 @@ client.set_replayer_time_factor(2.0) ``` | Parameters | Default | Fast motion | slow motion | | ------------- | ------- | ----------- | ----------- | -| `time_factor` | __1.0__ | __>1.0__ | __<1.0__ | +| `time_factor` | __1.0__ | __>1.0__ | __<1.0__ | + +
!!! Important With a time factor above 2.0, position interpolation is disabled and positions are simply updated. Pedestrians' animations are not affected by the time factor. @@ -135,7 +139,7 @@ Duration: 60.3753 seconds --------------- ##Queries -

<h4>Collisions</h4>

+####Collisions In order to record collisions, vehicles must have a [collision detector](../ref_sensors#collision-detector) attached. The collisions registered by the recorder can be queried using arguments to filter the type of the actors involved in the collisions. For example, `h` identifies actors whose `role_name = hero`, usually assigned to vehicles managed by the user. Currently, the actor types that can be used in the query are: @@ -184,7 +188,7 @@ In this case, the playback showed this: ![collision](img/collision1.gif) -
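Putting the query flags above to use, here is a hedged sketch; it assumes a log named `recording01.log` already exists and only uses the `h` (hero) and `v` (vehicle) categories described in this section.

```py
import carla

client = carla.Client('localhost', 2000)  # assumed host and port
client.set_timeout(10.0)

# All collisions in which both actors are vehicles, the `vv` case above.
print(client.show_recorder_collisions("recording01.log", "v", "v"))

# Collisions between the hero vehicle and any other vehicle.
print(client.show_recorder_collisions("recording01.log", "h", "v"))
```

The query returns a plain-text report, so printing it is usually enough to locate the time of the event before replaying it.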

<h4>Blocked actors</h4>

+####Blocked actors This query is used to detect vehicles that were stuck during the recording. An actor is considered blocked if it does not move a minimum distance in a certain time. This definition is made by the user in the query: @@ -195,7 +199,9 @@ client.show_recorder_actors_blocked("recording01.log", min_time, min_distance) | Parameters | Description | Default | | -------------- | --------------------------------------------------------- | ----- | | `min_time` | Minimum seconds to move `min_distance`. | 30 secs. | -| `min_distance` | Minimum centimeters to move to not be considered blocked. | 10 cm. | +| `min_distance` | Minimum centimeters to move to not be considered blocked. | 10 cm. | + +
!!! Note Take into account that vehicles are stopped at traffic lights sometimes for longer than expected. @@ -241,7 +247,9 @@ Some of the provided scripts in `PythonAPI/examples` facilitate the use of the r | -------------- | ------------ | | `-f` | Filename. | | `-n` (optional)| Vehicles to spawn. Default is 10. | -| `-t` (optional)| Duration of the recording. | +| `-t` (optional)| Duration of the recording. | + +
* __start_replaying.py__: starts the playback of a recording. Starting time, duration and actor to follow can be set. @@ -250,7 +258,9 @@ Some of the provided scripts in `PythonAPI/examples` facilitate the use of the r | `-f` | Filename. | | `-s` (optional)| Starting time. Default is 0. | | `-d` (optional)| Duration. Default is all. | -| `-c` (optional)| ID of the actor to follow. | +| `-c` (optional)| ID of the actor to follow. | + +
* __show_recorder_file_info.py__: shows all the information in the recording file. Two modes of detail: by default it only shows frames where some event is recorded. The second shows all information for all frames. @@ -258,7 +268,9 @@ Two modes of detail: by default it only shows frames where some event is recorde | Parameters | Description | | -------------- | ------------ | | `-f` | Filename. | -| `-s` (optional)| Flag to show all details. | +| `-s` (optional)| Flag to show all details. | + +
* __show_recorder_collisions.py__: shows recorded collisions between two actors of type __A__ and __B__ defined using a series of flags: `-t = vv` would show all collisions between vehicles. @@ -274,8 +286,9 @@ Two modes of detail: by default it only shows frames where some event is recorde | --------------- | ------------ | | `-f` | Filename. | | `-t` (optional) | Time to move `-d` before being considered blocked. | -| `-d` (optional) | Distance to move to not be considered blocked. | +| `-d` (optional) | Distance to move to not be considered blocked. | +
--------------- Now it is time to experiment for a while. Use the recorder to playback a simulation, trace back events, make changes to see new outcomes. Feel free to say your word in the CARLA forum about this matter: diff --git a/Docs/ref_sensors.md b/Docs/ref_sensors.md index f80eb58fb..94841f901 100644 --- a/Docs/ref_sensors.md +++ b/Docs/ref_sensors.md @@ -1,4 +1,4 @@ -

<h1>Sensors' documentation</h1>

+# Sensors' documentation * [__Collision detector__](#collision-detector) * [__Depth camera__](#depth-camera) @@ -23,7 +23,7 @@ To ensure that collisions with any kind of object are detected, the server creat Collision detectors do not have any configurable attribute. -
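Before the output attributes below, a minimal sketch of how this sensor is normally used; the vehicle blueprint, spawn point and callback are illustrative choices, not part of the sensor's specification.

```py
import random

import carla

client = carla.Client('localhost', 2000)  # assumed host and port
client.set_timeout(10.0)
world = client.get_world()
blueprint_library = world.get_blueprint_library()

# Spawn any vehicle to act as the parent of the sensor.
vehicle = world.spawn_actor(
    random.choice(blueprint_library.filter('vehicle.*')),
    random.choice(world.get_map().get_spawn_points()))

# No attributes to configure, so the blueprint is used as found.
collision_bp = blueprint_library.find('sensor.other.collision')
collision_sensor = world.spawn_actor(
    collision_bp, carla.Transform(), attach_to=vehicle)

# Every carla.CollisionEvent carries the parent actor, the other actor and the impulse.
collision_sensor.listen(
    lambda event: print('Collision with %s' % event.other_actor.type_id))
```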

<h4>Output attributes:</h4>

+#### Output attributes | Sensor data attribute | Type | Description | | ---------------------- | ----------- | ----------- | @@ -56,16 +56,18 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert ![ImageDepth](img/capture_depth.png) -
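As a hedged example of the conversion mentioned above, the snippet below saves every depth frame with the logarithmic converter applied; the mounting transform and output path are arbitrary choices.

```py
import carla

client = carla.Client('localhost', 2000)  # assumed host and port
client.set_timeout(10.0)
world = client.get_world()

depth_bp = world.get_blueprint_library().find('sensor.camera.depth')
# A free-standing camera; passing attach_to=<some actor> works the same way.
depth_camera = world.spawn_actor(
    depth_bp, carla.Transform(carla.Location(x=0.0, z=3.0)))

# Convert on save, as described for carla.ColorConverter.
depth_camera.listen(lambda image: image.save_to_disk(
    '_out/depth_%06d.png' % image.frame, carla.ColorConverter.LogarithmicDepth))
```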

<h4>Basic camera attributes</h4>

+#### Basic camera attributes | Blueprint attribute | Type | Default | Description | | ------------------- | ---- | ------- | ----------- | | `image_size_x` | int | 800 | Image width in pixels. | | `image_size_y` | int | 600 | Image height in pixels. | | `fov` | float | 90.0 | Horizontal field of view in degrees. | -| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | +| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | -

<h4>Camera lens distortion attributes</h4>
+<br>
+ +#### Camera lens distortion attributes | Blueprint attribute | Type | Default | Description | |--------------------------|-------|---------|-------------| @@ -74,9 +76,11 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert | `lens_k` | float | -1.0 | Range: [-inf, inf] | | `lens_kcube` | float | 0.0 | Range: [-inf, inf] | | `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] | -| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] | +| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] | -

<h4>Output attributes</h4>
+<br>
+ +#### Output attributes | Sensor data attribute | Type | Description | | --------------------- | ------------------------------------------------ | ----------- | @@ -96,7 +100,7 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gnss) of its parent object. This is calculated by adding the metric position to an initial geo reference location defined within the OpenDRIVE map definition. -

<h4>GNSS attributes</h4>

+#### GNSS attributes | Blueprint attribute | Type | Default | Description | | -------------------- | ---- | ------- | ----------- | @@ -107,9 +111,11 @@ Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gns | `noise_lon_bias` | float | 0.0 | Mean parameter in the noise model for longitude. | | `noise_lon_stddev` | float | 0.0 | Standard deviation parameter in the noise model for longitude. | | `noise_seed` | int | 0 | Initializer for a pseudorandom number generator. | -| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | +| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | -
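To illustrate how the attributes above are applied, here is a sketch written as a helper; it assumes an already connected `carla.World` and a previously spawned parent actor, and every numeric value is an arbitrary example rather than a recommended setting.

```py
import carla

def add_gnss(world, vehicle):
    """Attach a GNSS sensor to `vehicle` and print its readings once per second."""
    gnss_bp = world.get_blueprint_library().find('sensor.other.gnss')
    # Attribute values are always passed as strings.
    gnss_bp.set_attribute('noise_lat_stddev', '0.000005')
    gnss_bp.set_attribute('noise_lon_stddev', '0.000005')
    gnss_bp.set_attribute('sensor_tick', '1.0')

    gnss = world.spawn_actor(gnss_bp, carla.Transform(), attach_to=vehicle)
    gnss.listen(lambda data: print('lat=%.6f lon=%.6f alt=%.1f' % (
        data.latitude, data.longitude, data.altitude)))
    return gnss
```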

<h4>Output attributes</h4>
+<br>
+ +#### Output attributes | Sensor data attribute | Type | Description | | ---------------------- | ------------------------------------------------ | ----------- | @@ -128,7 +134,7 @@ Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gns Provides measures that accelerometer, gyroscope and compass would retrieve for the parent object. The data is collected from the object's current state. -

<h4>IMU attributes</h4>

+#### IMU attributes | Blueprint attribute | Type | Default | Description | | --------------------- | ---- | ------- | ----------- | @@ -142,10 +148,11 @@ Provides measures that accelerometer, gyroscope and compass would retrieve for t | `noise_gyro_stddev_y` | float | 0.0 | Standard deviation parameter in the noise model for the gyroscope (Y axis). | | `noise_gyro_stddev_z` | float | 0.0 | Standard deviation parameter in the noise model for the gyroscope (Z axis). | | `noise_seed` | int | 0 | Initializer for a pseudorandom number generator. | -| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | +| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | +
-

<h4>Output attributes</h4>

+#### Output attributes | Sensor data attribute | Type | Description | | --------------------- | ------------------------------------------------ | ----------- | @@ -174,7 +181,7 @@ This sensor does not have any configurable attribute. !!! Important This sensor works fully on the client-side. -

<h4>Output attributes</h4>

+#### Output attributes | Sensor data attribute | Type | Description | | ----------------------- | ---------------------------------------------------------- | ----------- | @@ -210,7 +217,7 @@ for location in lidar_measurement: ![LidarPointCloud](img/lidar_point_cloud.gif) -

<h4>Lidar attributes</h4>

+#### Lidar attributes | Blueprint attribute | Type | Default | Description | | -------------------- | ---- | ------- | ----------- | @@ -220,9 +227,11 @@ for location in lidar_measurement: | `rotation_frequency` | float | 10.0 | Lidar rotation frequency. | | `upper_fov` | float | 10.0 | Angle in degrees of the highest laser. | | `lower_fov` | float | -30.0 | Angle in degrees of the lowest laser. | -| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | +| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | -
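A sketch of configuring these attributes before spawning the sensor, written as a helper that expects an existing `carla.World` and parent actor; the chosen values and the `.ply` output path are illustrative only.

```py
import carla

def add_lidar(world, vehicle):
    """Attach a ray-cast lidar to `vehicle` and dump each sweep as a point cloud."""
    lidar_bp = world.get_blueprint_library().find('sensor.lidar.ray_cast')
    # Values are strings; these numbers are examples, not recommended settings.
    lidar_bp.set_attribute('channels', '64')
    lidar_bp.set_attribute('range', '80.0')
    lidar_bp.set_attribute('points_per_second', '100000')
    lidar_bp.set_attribute('rotation_frequency', '20.0')

    lidar = world.spawn_actor(
        lidar_bp, carla.Transform(carla.Location(z=2.5)), attach_to=vehicle)
    lidar.listen(lambda data: data.save_to_disk('_out/lidar_%06d.ply' % data.frame))
    return lidar
```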

<h4>Output attributes</h4>
+<br>
+ +#### Output attributes | Sensor data attribute | Type | Description | | -------------------------- | ------------------------------------------------ | ----------- | @@ -235,7 +244,7 @@ for location in lidar_measurement: | `raw_data` | bytes | Array of 32-bits floats (XYZ of each point). | --------------- -##Obstacle detector +## Obstacle detector * __Blueprint:__ sensor.other.obstacle * __Output:__ [carla.ObstacleDetectionEvent](python_api.md#carla.ObstacleDetectionEvent) per obstacle (unless `sensor_tick` says otherwise). @@ -250,9 +259,11 @@ To ensure that collisions with any kind of object are detected, the server creat | `hit_radius` | float | 0.5 | Radius of the trace. | | `only_dynamics` | bool | false | If true, the trace will only consider dynamic objects. | | `debug_linetrace` | bool | false | If true, the trace will be visible. | -| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | +| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | -
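The same pattern applies to this sensor; the following hedged helper assumes an existing `carla.World` and a parent vehicle, with placeholder attribute values.

```py
import carla

def add_obstacle_detector(world, vehicle):
    """Attach an obstacle detector to `vehicle` and report what it finds ahead."""
    bp = world.get_blueprint_library().find('sensor.other.obstacle')
    bp.set_attribute('distance', '10.0')       # illustrative trace length in meters
    bp.set_attribute('hit_radius', '0.5')
    bp.set_attribute('only_dynamics', 'true')  # ignore static scenery

    sensor = world.spawn_actor(bp, carla.Transform(), attach_to=vehicle)
    sensor.listen(lambda event: print(
        '%s detected %.1f m ahead' % (event.other_actor.type_id, event.distance)))
    return sensor
```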

<h4>Output attributes</h4>
+<br>
+ +#### Output attributes | Sensor data attribute | Type | Description | | ---------------------- | ------------------------------------------------ | ----------- | @@ -264,7 +275,7 @@ To ensure that collisions with any kind of object are detected, the server creat | `distance` | float | Distance from `actor` to `other_actor`. | --------------- -##Radar sensor +## Radar sensor * __Blueprint:__ sensor.other.radar * __Output:__ [carla.RadarMeasurement](python_api.md#carla.RadarMeasurement) per step (unless `sensor_tick` says otherwise). @@ -289,9 +300,11 @@ The provided script `manual_control.py` uses this sensor to show the points bein | `points_per_second` | int | 1500 | Points generated by all lasers per second. | | `range` | float | 100 | Maximum distance to measure/raycast in meters. | | `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | -| `vertical_fov` | float | 30 | Vertical field of view in degrees. | +| `vertical_fov` | float | 30 | Vertical field of view in degrees. | -
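As with the other sensors, the attributes above are set on the blueprint before spawning. A hedged helper, again assuming an existing `carla.World` and parent vehicle and using arbitrary values:

```py
import math

import carla

def add_radar(world, vehicle):
    """Attach a radar to `vehicle` and print every detection of each measurement."""
    bp = world.get_blueprint_library().find('sensor.other.radar')
    bp.set_attribute('vertical_fov', '20.0')
    bp.set_attribute('range', '50.0')
    bp.set_attribute('points_per_second', '3000')

    radar = world.spawn_actor(
        bp, carla.Transform(carla.Location(x=2.0, z=1.0)), attach_to=vehicle)

    def on_measurement(measurement):
        # Each detection is given in polar coordinates plus velocity towards the sensor.
        for detection in measurement:
            print('depth=%5.1f m  azimuth=%6.1f deg  velocity=%5.1f m/s' % (
                detection.depth, math.degrees(detection.azimuth), detection.velocity))

    radar.listen(on_measurement)
    return radar
```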

<h4>Output attributes</h4>
+<br>
+ +#### Output attributes | Sensor data attribute | Type | Description | | ---------------------- | ---------------------------------------------------------------- | ----------- | @@ -305,7 +318,7 @@ The provided script `manual_control.py` uses this sensor to show the points bein | `velocity` | float | Velocity towards the sensor. | --------------- -##RGB camera +## RGB camera * __Blueprint:__ sensor.camera.rgb * __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).. @@ -328,7 +341,7 @@ A value of 1.5 means that we want the sensor to capture data each second and a h ![ImageRGB](img/capture_scenefinal.png) -
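Following the `sensor_tick` discussion above, this hedged helper attaches an RGB camera that captures one frame every 1.5 simulated seconds; resolution, mounting point and output path are arbitrary choices.

```py
import carla

def add_rgb_camera(world, vehicle):
    """Attach an RGB camera to `vehicle` that saves a frame every 1.5 simulated seconds."""
    bp = world.get_blueprint_library().find('sensor.camera.rgb')
    bp.set_attribute('image_size_x', '1280')  # illustrative resolution
    bp.set_attribute('image_size_y', '720')
    bp.set_attribute('sensor_tick', '1.5')    # the value discussed above

    camera = world.spawn_actor(
        bp, carla.Transform(carla.Location(x=1.5, z=2.4)), attach_to=vehicle)
    camera.listen(lambda image: image.save_to_disk('_out/rgb_%06d.png' % image.frame))
    return camera
```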

<h4>Basic camera attributes</h4>

+#### Basic camera attributes | Blueprint attribute | Type | Default | Description | |---------------------|-------|---------|-------------| @@ -339,9 +352,11 @@ A value of 1.5 means that we want the sensor to capture data each second and a h | `iso` | float | 1200.0 | The camera sensor sensitivity. | | `gamma` | float | 2.2 | Target gamma value of the camera. | | `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | -| `shutter_speed` | float | 60.0 | The camera shutter speed in seconds (1.0 / s). | +| `shutter_speed` | float | 60.0 | The camera shutter speed in seconds (1.0 / s). | -

<h4>Camera lens distortion attributes</h4>
+<br>
+ +#### Camera lens distortion attributes | Blueprint attribute | Type | Default | Description | |--------------------------|-------|---------|-------------| @@ -350,9 +365,11 @@ A value of 1.5 means that we want the sensor to capture data each second and a h | `lens_k` | float | -1.0 | Range: [-inf, inf] | | `lens_kcube` | float | 0.0 | Range: [-inf, inf] | | `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] | -| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] | +| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] | -

<h4>Advanced camera attributes</h4>
+<br>
+ +#### Advanced camera attributes Since these effects are provided by UE, please make sure to check their documentation: @@ -390,11 +407,13 @@ Since these effects are provided by UE, please make sure to check their document | `tint` | float | 0.0 | White balance temperature tint. Adjusts cyan and magenta color ranges. This should be used along with the white balance Temp property to get accurate colors. Under some light temperatures, the colors may appear to be more yellow or blue. This can be used to balance the resulting color to look more natural. | | `chromatic_aberration_intensity` | float | 0.0 | Scaling factor to control color shifting, more noticeable on the screen borders. | | `chromatic_aberration_offset` | float | 0.0 | Normalized distance to the center of the image where the effect takes place. | -| `enable_postprocess_effects` | bool | True | Post-process effects activation. | +| `enable_postprocess_effects` | bool | True | Post-process effects activation. | + +
[AutomaticExposure.gamesetting]: https://docs.unrealengine.com/en-US/Engine/Rendering/PostProcessEffects/AutomaticExposure/index.html#gamesetting -

<h4>Output attributes</h4>

+#### Output attributes | Sensor data attribute | Type | Description | | --------------------- | ------------------------------------------------ | ----------- | @@ -433,7 +452,9 @@ The following tags are currently available: | 9 | Vegetation | (107, 142, 35) | | 10 | Car | ( 0, 0, 142) | | 11 | Wall | (102, 102, 156) | -| 12 | Traffic sign | (220, 220, 0) | +| 12 | Traffic sign | (220, 220, 0) | + +
!!! Note **Adding new tags**: @@ -441,16 +462,18 @@ The following tags are currently available: ![ImageSemanticSegmentation](img/capture_semseg.png) -
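A short, hedged sketch of retrieving tagged images with the palette applied; it assumes an existing `carla.World` and parent vehicle, and the output path is arbitrary.

```py
import carla

def add_semantic_camera(world, vehicle):
    """Attach a semantic segmentation camera and save images with the CityScapes palette."""
    bp = world.get_blueprint_library().find('sensor.camera.semantic_segmentation')
    camera = world.spawn_actor(
        bp, carla.Transform(carla.Location(x=1.5, z=2.4)), attach_to=vehicle)

    # The raw image encodes the tag in the red channel; converting with the
    # CityScapesPalette produces the colors listed in the table above.
    camera.listen(lambda image: image.save_to_disk(
        '_out/semseg_%06d.png' % image.frame, carla.ColorConverter.CityScapesPalette))
    return camera
```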

<h4>Basic camera attributes</h4>

+#### Basic camera attributes | Blueprint attribute | Type | Default | Description | | ------------------- | ---- | ------- | ----------- | | `fov` | float | 90.0 | Horizontal field of view in degrees. | | `image_size_x` | int | 800 | Image width in pixels. | | `image_size_y` | int | 600 | Image height in pixels. | -| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | +| `sensor_tick` | float | 0.0 | Simulation seconds between sensor captures (ticks). | -

<h4>Camera lens distortion attributes</h4>
+<br>
+ +#### Camera lens distortion attributes | Blueprint attribute | Type | Default | Description | |------------------------- |------ |---------|-------------| @@ -459,9 +482,11 @@ The following tags are currently available: | `lens_k` | float | -1.0 | Range: [-inf, inf] | | `lens_kcube` | float | 0.0 | Range: [-inf, inf] | | `lens_x_size` | float | 0.08 | Range: [0.0, 1.0] | -| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] | +| `lens_y_size` | float | 0.08 | Range: [0.0, 1.0] | -

<h4>Output attributes</h4>
+<br>
+ +#### Output attributes | Sensor data attribute | Type | Description | | --------------------- | ------------------------------------------------ | ----------- | @@ -471,4 +496,6 @@ The following tags are currently available: | `raw_data` | bytes | Array of BGRA 32-bit pixels. | | `timestamp` | double | Simulation time of the measurement in seconds since the beginning of the episode. | | `transform` | [carla.Transform](python_api.md#carla.Transform) | Location and rotation in world coordinates of the sensor at the time of the measurement. | -| `width` | int | Image width in pixels. | +| `width` | int | Image width in pixels. | + +
diff --git a/Docs/rendering_options.md b/Docs/rendering_options.md index e432744c1..ef925c396 100644 --- a/Docs/rendering_options.md +++ b/Docs/rendering_options.md @@ -1,4 +1,4 @@ -

<h1>Rendering options</h1>

+# Rendering options Before you start running your own experiments there are few details to take into account at the time of configuring your simulation. In this document we cover @@ -21,7 +21,7 @@ the most important ones. --------------- ##Graphics quality -

<h4>Vulkan vs OpenGL</h4>

+####Vulkan vs OpenGL Vulkan is the default graphics API used by Unreal Engine and CARLA (if installed). It consumes more memory, but performs faster and makes for a better frame rate. However, it is quite experimental, especially in Linux, and it may lead to some issues. For said reasons, there is the option to change to OpenGL simply by using a flag when running CARLA. The same flag works for both Linux and Windows: @@ -32,7 +32,7 @@ cd carla && ./CarlaUE4.sh -opengl When working with the build version of CARLA it is Unreal Engine the one that needs to be set to use OpenGL. [Here][UEdoc] is a documentation regarding different command line options for Unreal Engine. [UEdoc]: https://docs.unrealengine.com/en-US/Programming/Basics/CommandLineArguments/index.html -

<h4>Quality levels</h4>

+####Quality levels CARLA also allows for two different graphic quality levels named as __Epic__, the default, and __Low__, which disables all post-processing, shadows and the drawing distance is set to 50m instead of infinite and makes the simulation run significantly faster. Low mode is not only used when precision is nonessential or there are technical limitations, but also to train agents under conditions with simpler data or regarding only close elements. @@ -82,11 +82,11 @@ Unreal Engine needs for a screen in order to run, but there is a workaround for The simulator launches but there is no available window. However, it can be connected in the usual manner and scripts run the same way. For the sake of understanding let's sake that this mode tricks Unreal Engine into running in a fake screen. -

<h4>Off-screen vs no-rendering</h4>

+####Off-screen vs no-rendering These may look similar, but they are quite different, and it is important to understand the distinction between them to prevent misunderstandings. In off-screen mode, Unreal Engine works and renders as usual; the only difference is that there is no available display. In no-rendering mode, it is Unreal Engine itself that skips rendering, so graphics are not computed at all. For these reasons, GPU sensors return data when off-screen, and no-rendering mode can be enabled at will. -
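No-rendering mode can be toggled from the API at any time. A minimal sketch, assuming a server on `localhost:2000` (off-screen mode, in contrast, is decided when the simulator is launched):

```py
import carla

client = carla.Client('localhost', 2000)  # assumed host and port
client.set_timeout(10.0)
world = client.get_world()

settings = world.get_settings()
settings.no_rendering_mode = True   # stop computing graphics on the server
world.apply_settings(settings)

# ... run whatever does not need rendered sensor data ...

settings.no_rendering_mode = False  # rendering can be switched back on at will
world.apply_settings(settings)
```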

<h4>Setting off-screen mode</h4>

+####Setting off-screen mode Right now this is __only possible in Linux while using OpenGL__ instead of Vulkan. Unreal Engine crushes when Vulkan is running off-screen, and this issue is yet to be fixed by Epic. @@ -101,7 +101,7 @@ Note that this method, in multi-GPU environments, does not allow to choose the G --------------- ##Running off-screen using a preferred GPU -

<h4>Docker: recommended approach</h4>

+####Docker: recommended approach The best way to run a headless CARLA and select the GPU is to [__run CARLA in a Docker__](../carla_docker). This section contains an alternative tutorial, but this method is deprecated and performance is much worse. However, it is here just in case, for those who Docker is not an option. @@ -114,7 +114,7 @@ This section contains an alternative tutorial, but this method is deprecated and !!! Warning This tutorial is deprecated. To run headless CARLA, please [__run CARLA in a Docker__](../carla_docker). -
Requirements
+* __Requirements:__ This tutorial only works in Linux and makes it possible for a remote server using several graphical cards to use CARLA on all GPUs. This is also translatable to a desktop user trying to use CARLA with a GPU that is not plugged to any screen. To achieve that, the steps can be summarized as: @@ -139,15 +139,15 @@ sudo apt install x11-xserver-utils libxrandr-dev ``` !!! Warning Make sure that VNC version is compatible with Unreal. The one above worked properly during the making of this tutorial. + - -
Configure the X
+* __Configure the X__ Generate a X compatible with the Nvdia installed and able to run without display: - sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024 + sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024 -
Emulate the virtual display
+* __Emulate the virtual display__ Run a Xorg. Here number 7 is used, but it could be labeled with any free number: @@ -162,17 +162,17 @@ If everything is working fine the following command will run glxinfo on Xserver DISPLAY=:8 vglrun -d :7.0 glxinfo !!! Important - To run on other GPU, change the `7.X` pattern in the previous command. To set it to GPU 1: `DISPLAY=:8 vglrun -d :7.1 glxinfo` + To run on other GPU, change the `7.X` pattern in the previous command. To set it to GPU 1: `DISPLAY=:8 vglrun -d :7.1 glxinfo` -
Extra
+* __Extra__ To disable the need of sudo when creating the `nohup Xorg` go to `/etc/X11/Xwrapper.config` and change `allowed_users=console` to `allowed_users=anybody`. It may be needed to stop all Xorg servers before running `nohup Xorg`. The command for that could change depending on your system. Generally for Ubuntu 16.04 use: - sudo service lightdm stop + sudo service lightdm stop -
Running CARLA
+* __Running CARLA__ To run CARLA on a certain `` in a certain `$CARLA_PATH` use the following command: diff --git a/Docs/simulation_time_and_synchrony.md b/Docs/simulation_time_and_synchrony.md index 69a84e559..d4ddcb2e9 100644 --- a/Docs/simulation_time_and_synchrony.md +++ b/Docs/simulation_time_and_synchrony.md @@ -1,4 +1,4 @@ -

<h1>Synchrony and time-step</h1>

+# Synchrony and time-step This section deals with two concepts that are fundamental to fully comprehend CARLA and gain control over it to achieve the desired results. There are different configurations that define how does time go by in the simulation and how does the server running said simulation work. The following sections will dive deep into these concepts: @@ -22,7 +22,7 @@ The time-step can be fixed or variable depending on user preferences, and CARLA !!! Note After reading this section it would be a great idea to go for the following one, __Client-server synchrony__, especially the part about synchrony and time-step. Both are related concepts and affect each other when using CARLA. -

<h4>Variable time-step</h4>

+####Variable time-step This is the default mode in CARLA. When the time-step is variable, the simulation time that goes by between steps will be the time that the server takes to compute these. In order to set the simulation to a variable time-step the code could look like this: @@ -36,7 +36,7 @@ The provided script `PythonAPI/util/config.py` automatically sets time-step wit cd PythonAPI/util && ./config.py --delta-seconds 0 ``` -

<h4>Fixed time-step</h4>

+####Fixed time-step Going for a fixed time-step makes the server run a simulation where the elapsed time remains constant between steps. If it is set to 0.5 seconds, there will be two frames per simulated second. Using the same time increment on each step is the best way to gather data from the simulation, as physics and sensor data will correspond to an easy to comprehend moment of the simulation. Also, if the server is fast enough, it makes possible to simulate longer time periods in less real time. @@ -52,7 +52,7 @@ Thus, the simulator will take twenty steps (1/0.05) to recreate one second of th cd PythonAPI/util && ./config.py --delta-seconds 0.05 ``` -

<h4>Tips when recording the simulation</h4>

+####Tips when recording the simulation CARLA has a [recorder feature](recorder_and_playback.md) that allows a simulation to be recorded and then reenacted. However, when looking for precision, some things need to be taken into account. If the simulation ran with a fixed time-step, reenacting it will be easy, as the server can be set to the same time-step used in the original simulation. However, if the simulation used a variable time-step, things are a bit more complicated. @@ -61,7 +61,7 @@ Secondly, the server can be forced to reproduce the exact same time-steps passin Finally there is also the float-point arithmetic error that working with a variable time-step introduces. As the simulation is running with a time-step equal to the real one, being real time a continuous and simulation one a float variable, the time-steps show decimal limitations. The time that is cropped for each step is an error that accumulates and prevents the simulation from a precise repetition of what has happened. -

<h4>Time-step limitations</h4>

+####Time-step limitations Physics must be computed within very low time steps to be precise. The more time goes by, the more variables and chaos come to place and so, the more defective the simulation will be. CARLA uses up to 6 substeps to compute physics in every step, each with a maximum delta time of 0.016667s. @@ -96,7 +96,7 @@ cd PythonAPI/util && ./config.py --no-sync ``` Must be mentioned that synchronous mode cannot be enabled using the script, only disabled. Enabling the synchronous mode makes the server wait for a client tick, and using this script the user cannot send ticks when desired. -

<h4>Using synchronous mode</h4>

+####Using synchronous mode The synchronous mode becomes specially relevant when running with slow clients applications and when synchrony between different elements, such as sensors, is needed. If the client is too slow and the server does not wait for it, the amount of information received will be impossible to manage and it can easily be mixed. On a similar tune, if there are ten sensors waiting to retrieve data and the server is sending all these information without waiting for all of them to have the previous one, it would be impossible to know if all the sensors are using data from the same moment in the simulation. As a little extension to the previous code, in the following fragment, the client creates a camera sensor that puts the image data received in the current step in a queue and sends ticks to the server only after retrieving it from the queue. A more complex example regarding several sensors can be found [here][syncmodelink]. @@ -139,7 +139,9 @@ The configuration of both concepts explained in this page, simulation time-step | | __Fixed time-step__ | __Variable time-step__ | | --- | --- | --- | | __Synchronous mode__ | Client is in total control over the simulation and its information. | Risk of non reliable simulations. | -| __Asynchronous mode__ | Good time references for information. Server runs as fast as possible. | Non easily repeatable simulations. | +| __Asynchronous mode__ | Good time references for information. Server runs as fast as possible. | Non easily repeatable simulations. | + +
* __Synchronous mode + variable time-step:__ This is almost certainly an undesirable state. Physics cannot run properly when the time-step is bigger than 0.1s, and if the server has to wait for the client to compute the steps, this is likely to happen. Simulation time and physics will then be out of sync, and the simulation becomes unreliable. diff --git a/Docs/update_carla.md b/Docs/update_carla.md index 8cdcbf7f0..42a26fe2f 100644 --- a/Docs/update_carla.md +++ b/Docs/update_carla.md @@ -1,4 +1,4 @@ -
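To make the recommended combination concrete, here is a hedged sketch of synchronous mode with a fixed time-step, draining a camera queue before each tick; the sensor choice, the 0.05 s step and the loop length are illustrative, and the server is assumed to be on `localhost:2000`.

```py
import queue

import carla

client = carla.Client('localhost', 2000)  # assumed host and port
client.set_timeout(10.0)
world = client.get_world()

# A free-standing camera used only to show the tick/queue pattern.
camera_bp = world.get_blueprint_library().find('sensor.camera.rgb')
camera = world.spawn_actor(camera_bp, carla.Transform(carla.Location(z=3.0)))

settings = world.get_settings()
settings.synchronous_mode = True
settings.fixed_delta_seconds = 0.05  # 20 steps per simulated second
world.apply_settings(settings)

image_queue = queue.Queue()
camera.listen(image_queue.put)

for _ in range(100):
    world.tick()               # the client decides when the server steps
    image = image_queue.get()  # block until this step's frame has arrived
    print('received frame %d' % image.frame)

settings.synchronous_mode = False  # hand control back before exiting
world.apply_settings(settings)
```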

<h1>Update CARLA</h1>

+#Update CARLA * [__Get latest binary release__](#get-latest-binary-release) * [__Update Linux and Windows build__](#update-linux-and-windows-build) @@ -18,7 +18,6 @@ CARLA forum

- --------------- ##Get latest binary release @@ -46,20 +45,20 @@ Binary releases are prepackaged and thus, tied to a specific version of CARLA. I The process of updating is quite similar and straightforward for both platforms: -

<h4>Clean the build</h4>

+####Clean the build Go to the CARLA main directory and delete binaries and temporals generated by previous build: ```sh git checkout master make clean ``` -

<h4>Pull from origin</h4>

+####Pull from origin Get the current version from `master` in the CARLA repository: ```sh git pull origin master ``` -

<h4>Download the assets</h4>

+####Download the assets __Linux:__ ```sh @@ -74,7 +73,7 @@ __Windows:__ !!! Note In order to work with the current content used by developers in the CARLA team, follow the get development assets section right below this one. -

<h4>Launch the server</h4>

+####Launch the server Run the editor with the spectator view to be sure that everything worked properly: ```sh diff --git a/Docs/walker_bone_control.md b/Docs/walker_bone_control.md index 369a872a4..caa8430c5 100644 --- a/Docs/walker_bone_control.md +++ b/Docs/walker_bone_control.md @@ -1,4 +1,4 @@ -

<h1>Walker Bone Control</h1>

+# Walker Bone Control In this tutorial we describe how to manually control and animate the skeletons of walkers from the CARLA Python API. The reference of diff --git a/PythonAPI/docs/bp_doc_gen.py b/PythonAPI/docs/bp_doc_gen.py index 3d5ab4517..fac1c2e5f 100644 --- a/PythonAPI/docs/bp_doc_gen.py +++ b/PythonAPI/docs/bp_doc_gen.py @@ -101,7 +101,7 @@ class MarkdownFile: def not_title(self, buf): self._data = join([ - self._data, '\n', self.list_depth(), '

<h1>', buf, '</h1>
', '\n']) + self._data, '\n', self.list_depth(), '#', buf, '\n']) def title(self, strongness, buf): self._data = join([ diff --git a/PythonAPI/docs/doc_gen.py b/PythonAPI/docs/doc_gen.py index b99a67c24..be9a0a77c 100755 --- a/PythonAPI/docs/doc_gen.py +++ b/PythonAPI/docs/doc_gen.py @@ -33,7 +33,7 @@ class MarkdownFile: self._data = "" self._list_depth = 0 self.endl = ' \n' - + def data(self): return self._data @@ -70,6 +70,10 @@ class MarkdownFile: def textn(self, buf): self._data = join([self._data, self.list_depth(), buf, self.endl]) + def first_title(self): + self._data = join([ + self._data, '#Python API reference']) + def title(self, strongness, buf): self._data = join([ self._data, '\n', self.list_depth(), '#' * strongness, ' ', buf, '\n']) @@ -437,6 +441,7 @@ class Documentation: def gen_body(self): """Generates the documentation body""" md = MarkdownFile() + md.first_title() for module_name in sorted(self.master_dict): module = self.master_dict[module_name] module_key = module_name diff --git a/mkdocs.yml b/mkdocs.yml index 1a59ba32d..7c2b3f19b 100644 --- a/mkdocs.yml +++ b/mkdocs.yml @@ -43,7 +43,6 @@ nav: - 'Generate pedestrian navigation': 'how_to_generate_pedestrians_navigation.md' - "Link Epic's Automotive Materials": 'epic_automotive_materials.md' - 'Map customization': 'dev/map_customization.md' - - 'Running without display and selecting GPUs': 'carla_headless.md' - How to... (content): - 'Add assets': 'how_to_add_assets.md' - 'Create and import a new map': 'how_to_make_a_new_map.md'