diff --git a/Docs/recorder_and_playback.md b/Docs/adv_recorder.md
similarity index 96%
rename from Docs/recorder_and_playback.md
rename to Docs/adv_recorder.md
index 29d8baf1e..8563175a0 100644
--- a/Docs/recorder_and_playback.md
+++ b/Docs/adv_recorder.md
@@ -1,7 +1,7 @@
# Recorder
This is one of the advanced CARLA features. It allows recording and re-enacting a simulation, while providing a complete log of the events that happened and a few queries to ease tracing and studying them.
-To learn about the generated file and its specifics take a look at this [reference](recorder_binary_file_format.md).
+To learn about the generated file and its specifics take a look at this [reference](ref_recorder_binary_file_format.md).
* [__Recording__](#recording)
* [__Simulation playback__](#simulation-playback):
@@ -12,8 +12,8 @@ To learn about the generated file and its specifics take a look at this [referen
* Blocked actors
* [__Sample Python scripts__](#sample-python-scripts)
----------------
-##Recording
+---
+## Recording
All the data is written in a binary file on the server side only. However, the recorder is managed using the [carla.Client](python_api.md#carla.Client).
To re-enact the simulation, actors will be updated on every frame according to the data contained in the recorded file. Actors that appear in the recording will be either moved or re-spawned to emulate it. Those that do not appear in the recording will continue their way as if nothing happened.
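The following minimal sketch illustrates the workflow, assuming a simulator running on the default host and port; the file name is illustrative:

```py
import carla

# Connect to the simulator and set a time-out for networking operations.
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)

# Start writing the binary log on the server side.
client.start_recorder("recording01.log")
# ... let the simulation run for a while ...
client.stop_recorder()
```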
@@ -42,8 +42,8 @@ client.stop_recorder()
!!! Note
As an estimate: 1h recording with 50 traffic lights and 100 vehicles takes around 200MB in size.
----------------
-##Simulation playback
+---
+## Simulation playback
A playback can be started at any point during a simulation by only specifying the file name.
@@ -66,7 +66,7 @@ client.replay_file("recording01.log", start, duration, camera)
!!! Note
These parameters allow recalling an event and then letting the simulation run free, as vehicles will be set to autopilot when the playback stops.
-####Setting a time factor
+#### Setting a time factor
The time factor will determine the playback speed.
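As an illustrative sketch, assuming an existing `client` with a replay in progress, the factor is set with a single call; values under 1.0 slow the playback down and greater values speed it up:

```py
# Play the current replay at double speed; 20.0 would produce the 20x flow below.
client.set_replayer_time_factor(2.0)
```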
@@ -88,10 +88,10 @@ For instance, with a time factor of __20x__ traffic flow is easily appreciated:
![flow](img/RecorderFlow2.gif)
----------------
-##Recorded file
+---
+## Recorded file
-The details of a recording can be retrieved using a simple API call. By default, it only retrieves those frames where an event was registered, but setting the parameter `show_all` would return all the information for every frame. The specifics on how the data is stored are detailed in the [recorder's reference](recorder_binary_file_format.md).
+The details of a recording can be retrieved using a simple API call. By default, it only retrieves those frames where an event was registered, but setting the parameter `show_all` would return all the information for every frame. The specifics on how the data is stored are detailed in the [recorder's reference](ref_recorder_binary_file_format.md).
The following example would only retrieve remarkable events:
```py
@@ -136,12 +136,12 @@ Frame 2351 at 60.3057 seconds
Frames: 2354
Duration: 60.3753 seconds
```
----------------
-##Queries
+---
+## Queries
-####Collisions
+#### Collisions
-In order to record collisions, vehicles must have a [collision detector](../ref_sensors#collision-detector) attached. The collisions registered by the recorder can be queried using arguments to filter the type of the actors involved in the collisions. For example, `h` identifies actors whose `role_name = hero`, usually assigned to vehicles managed by the user.
+In order to record collisions, vehicles must have a [collision detector](ref_sensors.md#collision-detector) attached. The collisions registered by the recorder can be queried using arguments to filter the type of the actors involved in the collisions. For example, `h` identifies actors whose `role_name = hero`, usually assigned to vehicles managed by the user.
Currently, the actor types that can be used in the query are:
* __h__ = Hero
@@ -188,7 +188,7 @@ In this case, the playback showed this:
![collision](img/collision1.gif)
-####Blocked actors
+#### Blocked actors
This query is used to detect vehicles that were stuck during the recording. An actor is considered blocked if it does not move a minimum distance in a certain time. This definition is made by the user during the query:
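A minimal sketch of such a query, assuming an existing `client` and the file recorded above; per the recorder API, the time threshold is given in seconds and the distance in centimeters:

```py
# Report actors that moved less than 1 meter (100 cm) over 60 seconds.
print(client.show_recorder_actors_blocked("recording01.log", 60, 100))
```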
@@ -236,8 +236,8 @@ client.replay_file("col3.log", 34, 0, 173)
![accident](img/accident.gif)
----------------
-##Sample python scripts
+---
+## Sample python scripts
Some of the provided scripts in `PythonAPI/examples` facilitate the use of the recorder:
@@ -290,7 +290,7 @@ Two modes of detail: by default it only shows frames where some event is recorde
----------------
+---
Now it is time to experiment for a while. Use the recorder to play back a simulation, trace back events, and make changes to see new outcomes. Feel free to share your thoughts in the CARLA forum about this matter:
Introduction — Capabilities and intentions behind the project.
Quickstart installation — Get the CARLA releases.
Linux build — Make the build on Linux.
Windows build — Make the build on Windows.
Update CARLA — Get up to date with the latest content.
Build system — Learn about the build and how it is made.
Running in a Docker — Run CARLA using a container solution.
F.A.Q. — Some of the most frequent issues for newcomers.
Core concepts
@@ -78,36 +79,36 @@ CARLA forum
— Discover the different maps and how to move around.
-(broken) 4th. Sensors and data
+4th. Sensors and data
— Retrieve simulation data using sensors.
Recorder
— Store all the events in a simulation and play it again.
Rendering options
— Different settings, from quality to no-render or off-screen runs.
Synchrony and time-step
— Client-server communication and simulation time.
-(broken) Traffic manager
+(soon) Traffic manager
— Feature to handle autopilot vehicles and emulate traffic.
Python API reference
— Classes and methods in the Python API.
Code recipes
— Code fragments commonly used.
@@ -115,95 +116,95 @@ CARLA forum
Blueprint library
Add friction triggers
— Define dynamic box triggers for wheels.
Control vehicle physics
— Set runtime changes on a vehicle's physics.
Control walker skeletons
— Skeleton and animation for walkers explained.
-Contribute with new assets
-— Add new content to CARLA.
Import new assets
— Use personal assets in CARLA.
-Map customization
-— Edit an existing map.
Map creation
— Guidelines to create a new map.
+Map customization
+— Edit an existing map.
Standalone asset packages
— Import assets into Unreal Engine and prepare them for package distribution.
-Use Automotive materials
+Use Epic's Automotive materials
— Apply Epic's set of Automotive materials to vehicles for a more realistic painting.
Vehicle modelling
— Guidelines to create a new vehicle for CARLA.
+Contribute with new assets
+— Add new content to CARLA.
Create a sensor
— The basics on how to add a new sensor to CARLA.
Make a release
— For developers who want to publish a release.
-Pedestrian navigation physics
+Generate pedestrian navigation
— Generate the information needed for walkers to navigate a map.
-General guidelines
+Contribution guidelines
— The different ways to contribute to CARLA.
+Code of conduct
+— Some standards for CARLA, rights and duties for contributors.
Coding standard
— Guidelines to write proper code.
Documentation standard
— Guidelines to write proper documentation.
-Code of conduct
-— Some standards for CARLA, rights and duties for contributors.
\ No newline at end of file
diff --git a/Docs/measurements.md b/Docs/measurements.md
deleted file mode 100644
index 30a171db1..000000000
--- a/Docs/measurements.md
+++ /dev/null
@@ -1,203 +0,0 @@
- A traffic light actor, considered a specific type of traffic sign. As traffic lights will mostly appear at junctions, they belong to a group which contains the different traffic lights in it. Inside the group, traffic lights are differentiated by their pole index.
- Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually. Take a look at this [recipe](../python_cookbook/#traffic-lights-recipe) to learn how to do so.
+ Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to learn how to do so.
@@ -46,8 +46,8 @@ If you downloaded any additional assets in Linux, move them to the _Import_ fold
./ImportAssets.sh
```
----------------
-##Running CARLA
+---
+## Running CARLA
Open a terminal in the folder where CARLA was extracted. The following command will execute the package file and start the simulation:
@@ -64,7 +64,7 @@ A window will open, containing a view over the city. This is the "spectator" vie
!!! note
If the firewall or any other application is blocking the TCP ports needed, these can be manually changed by adding to the previous command the argument `-carla-port=N`, where `N` is the desired port. The second port will be automatically set to `N+1`.
-####Command-line options
+#### Command-line options
There are some configuration options available when launching CARLA:
@@ -90,21 +90,21 @@ To check all the available configurations, run the following command:
> ./config.py --help
```
----------------
-##Updating CARLA
+---
+## Updating CARLA
The packaged version requires no updates. The content is bundled and thus tied to a specific version of CARLA. Every time there is a release, the repository will be updated. To run this latest or any other version, delete the previous one and repeat the installation steps with the desired version.
----------------
-##Summary
+---
+## Summary
That concludes the quickstart installation process. In case any unexpected error or issue occurs, the [CARLA forum](https://forum.carla.org/) is open to everybody. There is an _Installation issues_ category to post these kinds of problems and doubts.
-So far, CARLA should be operative in the desired system. Terminals will be used to contact the server via script and retrieve data. Thus will access all of the capabilities that CARLA provides. Next step should be visiting the __First steps__ section to learn more about this. However, all the information about the Python API regarding classes and its methods can be accessed in the [Python API reference](../python_api.md).
+So far, CARLA should be operative in the desired system. Terminals will be used to contact the server via script and retrieve data, thus accessing all of the capabilities that CARLA provides. The next step should be visiting the __First steps__ section to learn more about this. However, all the information about the Python API regarding classes and its methods can be accessed in the [Python API reference](python_api.md).
-References
+## References
-Tutorials — General
+## Tutorials — General
-Tutorials — Assets
+## Tutorials — Assets
-Tutorials — Developers
+## Tutorials — Developers
-Contributing
+## Contributing
-Measurements
-
-!!! important
- Since version 0.8.0 the measurements received by the client are in SI
- units. All locations have been converted to `meters` and speeds to
- `meters/second`.
-
-Every frame the server sends a package with the measurements and images gathered
-to the client. This document describes the details of these measurements.
-
-Time-stamps
------------
-
-Every frame is described by three different counters/time-stamps
-
-Key | Type | Units | Description
--------------------------- | --------- | ------------ | ------------
-frame | uint64 | | Frame number (it is **not** restarted on each episode).
-platform_timestamp | uint32 | milliseconds | Time-stamp of the current frame, as given by the OS.
-game_timestamp | uint32 | milliseconds | In-game time-stamp, elapsed since the beginning of the current episode.
-
-In real-time mode, the elapsed time between two time steps should be similar
-both platform and game time-stamps. When run in fixed-time step, the game
-time-stamp increments in constant time steps (delta=1/FPS) while the platform
-time-stamp keeps the actual time elapsed.
-
-Player measurements
--------------------
-
-Key | Type | Units | Description
--------------------------- | ----------- | ------ | ------------
-transform | Transform | | World transform of the player (contains a locations and a rotation) respect the vehicle's mesh pivot.
-bounding_box | BoundingBox | | Bounding box of the player.
-acceleration | Vector3D | m/s^2 | Current acceleration of the player.
-forward_speed | float | m/s | Forward speed of the player.
-collision_vehicles | float | kg*m/s | Collision intensity with other vehicles.
-collision_pedestrians | float | kg*m/s | Collision intensity with pedestrians.
-collision_other | float | kg*m/s | General collision intensity (everything else but pedestrians and vehicles).
-intersection_otherlane | float | | Percentage of the vehicle invading other lanes.
-intersection_offroad | float | | Percentage of the vehicle off-road.
-autopilot_control | Control | | Vehicle's autopilot control that would apply this frame.
-
-Transform
-
-The transform contains the location and rotation of the player.
-
-Key | Type | Units | Description
--------------------------- | ---------- | ------- | ------------
-location | Vector3D | m | World location.
-orientation *[deprecated]* | Vector3D | | Orientation in Cartesian coordinates.
-rotation | Rotation3D | degrees | Pitch, roll, and yaw.
-
-BoundingBox
-
-Contains the transform and extent of a bounding box.
-
-Key | Type | Units | Description
--------------------------- | ---------- | ------- | ------------
-transform | Transform | | Transform of the bounding box relative to the vehicle.
-extent | Vector3D | m | Radii dimensions of the bounding box (half-box).
-
-Collision
-
-Collision variables keep an accumulation of all the collisions occurred during
-this episode. Every collision contributes proportionally to the intensity of the
-collision (norm of the normal impulse between the two colliding objects).
-
-Three different counts are kept (pedestrians, vehicles, and other). Colliding
-objects are classified based on their tag (same as for semantic segmentation).
-
-!!! Bug
- See [#13 Collisions are not annotated when vehicle's speed is low](https://github.com/carla-simulator/carla/issues/13)
-
-Collisions are not annotated if the vehicle is not moving (<1km/h) to avoid
-annotating undesired collision due to mistakes in the AI of non-player agents.
-
-Lane/off-road intersection
-
-The lane intersection measures the percentage of the vehicle invading the
-opposite lane. The off-road intersection measures the percentage of the vehicle
-outside the road.
-
-These values are computed intersecting the bounding box of the vehicle (as a 2D
-rectangle) against the map image of the city. These images are generated in the
-editor and serialized for runtime use. You can find them too in the release
-package under the folder "RoadMaps".
-
-Autopilot control
-
-The `autopilot_control` measurement contains the control values that the in-game
-autopilot system would apply as if it were controlling the vehicle.
-
-This is the same structure used to send the vehicle control to the server.
-
-Key | Type | Description
--------------------------- | --------- | ------------
-steer | float | Steering angle between [-1.0, 1.0] (*)
-throttle | float | Throttle input between [ 0.0, 1.0]
-brake | float | Brake input between [ 0.0, 1.0]
-hand_brake | bool | Whether the hand-brake is engaged
-reverse | bool | Whether the vehicle is in reverse gear
-
-To activate the autopilot from the client, send this `autopilot_control` back
-to the server. Note that you can modify it before sending it back.
-
-```py
-measurements, sensor_data = carla_client.read_data()
-control = measurements.player_measurements.autopilot_control
-# modify here control if wanted.
-carla_client.send_control(control)
-```
-
-(*) The actual steering angle depends on the vehicle used. The default Mustang
-has a maximum steering angle of 70 degrees (this can be checked in the vehicle's
-front wheel blueprint).
-
-![Mustan Steering Angle](img/steering_angle_mustang.png)
-
-Non-player agents info
-----------------------
-
-!!! important
- Since version 0.8.0 the player vehicle is not sent in the list of non-player
- agents.
-
-To receive info of every non-player agent in the scene every frame you need to
-activate this option in the settings file sent by the client at the beginning of
-the episode.
-
-```ini
-[CARLA/Server]
-SendNonPlayerAgentsInfo=true
-```
-
-If enabled, the server attaches a list of agents to the measurements package
-every frame. Each of these agents has an unique id that identifies it, and
-belongs to one of the following classes
-
- * Vehicle
- * Pedestrian
- * Traffic ligth
- * Speed limit sign
-
-Each of them can be accessed in Python by checking if the agent object has the
-field enabled
-
-```python
-measurements, sensor_data = client.read_data()
-
-for agent in measurements.non_player_agents:
- agent.id # unique id of the agent
- if agent.HasField('vehicle'):
- agent.vehicle.forward_speed
- agent.vehicle.transform
- agent.vehicle.bounding_box
-```
-
-Vehicle
-
-Key | Type | Description
-------------------------------- | --------- | ------------
-id | uint32 | Agent ID
-vehicle.forward_speed | float | Forward speed of the vehicle in m/s, is the linear speed projected to the forward vector of the chassis of the vehicle
-vehicle.transform | Transform | Agent-to-world transform
-vehicle.bounding_box.transform | Transform | Transform of the bounding box relative to the vehicle
-vehicle.bounding_box.extent | Vector3D | Radii dimensions of the bounding box in meters
-
-Pedestrian
-
-Key | Type | Description
---------------------------------- | --------- | ------------
-id | uint32 | Agent ID
-pedestrian.forward_speed | float | Forward speed of the pedestrian in m/s
-pedestrian.transform | Transform | Agent-to-world transform
-pedestrian.bounding_box.transform | Transform | Transform of the bounding box relative to the pedestrian
-pedestrian.bounding_box.extent | Vector3D | Radii dimensions of the bounding box in meters (*)
-
-(*) At this point every pedestrian is assumed to have the same
-bounding-box size.
-
-Traffic light
-
-Key | Type | Description
----------------------------- | --------- | ------------
-id | uint32 | Agent ID
-traffic_light.transform | Transform | Agent-to-world transform
-traffic_light.state | enum | Traffic light state; `GREEN`, `YELLOW`, or `RED`
-
-Speed limit sign
-
-Key | Type | Description
----------------------------- | --------- | ------------
-id | uint32 | Agent ID
-speed_limit_sign.transform | Transform | Agent-to-world transform
-speed_limit_sign.speed_limit | float | Speed limit in m/s
-
-Transform and bounding box
-
-The transform defines the location and orientation of the agent. The transform
-of the bounding box is given relative to the vehicle. The box extent gives the
-radii dimensions of the bounding box of the agent.
-
-![Vehicle Bounding Box](img/vehicle_bounding_box.png)
diff --git a/Docs/python_api.md b/Docs/python_api.md
index 076872652..a222cce21 100644
--- a/Docs/python_api.md
+++ b/Docs/python_api.md
@@ -1,6 +1,6 @@
# Python API reference
## carla.Actor
-CARLA defines actors as anything that plays a role in the simulation or can be moved around. That includes: pedestrians, vehicles, sensors and traffic signs (considering traffic lights as part of these). Actors are spawned in the simulation by [carla.World](#carla.World) and they need for a [carla.ActorBlueprint](#carla.ActorBlueprint) to be created. These blueprints belong into a library provided by CARLA, find more about them [here](../bp_library/).
+CARLA defines actors as anything that plays a role in the simulation or can be moved around. That includes: pedestrians, vehicles, sensors and traffic signs (considering traffic lights as part of these). Actors are spawned in the simulation by [carla.World](#carla.World) and they need a [carla.ActorBlueprint](#carla.ActorBlueprint) to be created. These blueprints belong to a library provided by CARLA; find more about them [here](bp_library.md).
Instance Variables
- **attributes** (_dict_)
@@ -10,7 +10,7 @@ Identifier for this actor. Unique during a given episode.
- **parent** (_[carla.Actor](#carla.Actor)_)
Actors may be attached to a parent actor that they will follow around. This is that parent actor.
- **semantic_tags** (_list(int)_)
-A list of semantic tags provided by the blueprint listing components for this actor. E.g. a traffic light could be tagged with "pole" and "traffic light". These tags are used by the semantic segmentation sensor. Find more about this and other sensors [here](../cameras_and_sensors/#sensor.camera.semantic_segmentation).
+A list of semantic tags provided by the blueprint listing components for this actor. E.g. a traffic light could be tagged with "pole" and "traffic light". These tags are used by the semantic segmentation sensor. Find more about this and other sensors [here](ref_sensors.md#semantic-segmentation-camera).
- **type_id** (_str_)
The identifier of the blueprint this actor was based on, e.g. "vehicle.ford.mustang".
@@ -226,20 +226,20 @@ Returns the velocity vector registered for an actor in that tick.
---
## carla.AttachmentType
-Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is specially useful for cameras and sensors. [Here](../python_cookbook/#attach-sensors-recipe) is a brief recipe in which we can see how sensors can be attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
+Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is especially useful for cameras and sensors. [Here](ref_code_recipes.md#attach-sensors-recipe) is a brief recipe in which we can see how sensors can be attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
Instance Variables
- **Rigid**
With this fixed attachment, the object follows its parent position strictly.
- **SpringArm**
-An attachment that expands or retracts depending on camera situation. SpringArms are an Unreal Engine component so [check this out](../python_cookbook/#attach-sensors-recipe) to learn some more about them.
+An attachment that expands or retracts depending on the camera situation. SpringArms are an Unreal Engine component, so [check this out](ref_code_recipes.md#attach-sensors-recipe) to learn some more about them.
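For illustration, a sketch of spawning a camera with each attachment type; the `world`, `camera_bp` and `vehicle` objects are assumed to exist, and the transforms are arbitrary:

```py
# Rigid: the camera follows its parent's position strictly.
cam_rigid = world.spawn_actor(
    camera_bp, carla.Transform(carla.Location(z=2.0)),
    attach_to=vehicle, attachment_type=carla.AttachmentType.Rigid)

# SpringArm: the camera eases its movement, like a chase camera.
cam_spring = world.spawn_actor(
    camera_bp, carla.Transform(carla.Location(x=-6.0, z=3.0)),
    attach_to=vehicle, attachment_type=carla.AttachmentType.SpringArm)
```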
---
## carla.BlueprintLibrary
A class that contains the blueprints provided for actor spawning. Its main application is to return [carla.ActorBlueprint](#carla.ActorBlueprint) objects needed to spawn actors. Each blueprint has an identifier and attributes that may or may not be modifiable. The library is automatically created by the server and can be accessed through [carla.World](#carla.World).
- [Here](../bp_library/) is a reference containing every available blueprint and its specifics.
+ [Here](bp_library.md) is a reference containing every available blueprint and its specifics.
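A minimal sketch of fetching blueprints from the library (assumes an existing `world`):

```py
import random

blueprint_library = world.get_blueprint_library()
# Find one blueprint by its identifier, or pick one at random from a filter.
camera_bp = blueprint_library.find('sensor.camera.rgb')
vehicle_bp = random.choice(blueprint_library.filter('vehicle.*'))
```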
Methods
- **\__getitem__**(**self**, **pos**)
@@ -269,7 +269,7 @@ Returns the blueprint corresponding to that identifier.
---
## carla.BoundingBox
-Helper class defining a box location and its dimensions that will later be used by [carla.DebugHelper](#carla.DebugHelper) or a [carla.Client](#carla.Client) to draw shapes and detect collisions. Bounding boxes normally act for object colliders. Check out this [recipe](../python_cookbook/#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
+Helper class defining a box location and its dimensions that will later be used by [carla.DebugHelper](#carla.DebugHelper) or a [carla.Client](#carla.Client) to draw shapes and detect collisions. Bounding boxes normally act as object colliders. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
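As a sketch, the bounding box of a vehicle can be read directly from the actor (assumes an existing `vehicle`):

```py
box = vehicle.bounding_box
print(box.location)  # Center of the box, relative to the vehicle.
print(box.extent)    # Half-box dimensions in meters.
```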
Instance Variables
- **location** (_[carla.Location](#carla.Location)_)
@@ -316,7 +316,7 @@ Parses the location and extent of the bounding box to string.
## carla.Client
The Client connects CARLA to the server which runs the simulation. Both server and client contain a CARLA library (libcarla) with some differences that allow communication between them. Many clients can be created and each of these will connect to the RPC server inside the simulation to send commands. The simulation runs server-side. Once the connection is established, the client will only receive data retrieved from the simulation. Walkers are the exception. The client is in charge of managing pedestrians so, if you are running a simulation with multiple clients, some issues may arise. For example, if you spawn walkers through different clients, collisions may happen, as each client is only aware of the ones it is in charge of.
- The client also has a recording feature that saves all the information of a simulation while running it. This allows the server to replay it at will to obtain information and experiment with it. [Here](recorder_and_playback.md) is some information about how to use this recorder.
+ The client also has a recording feature that saves all the information of a simulation while running it. This allows the server to replay it at will to obtain information and experiment with it. [Here](adv_recorder.md) is some information about how to use this recorder.
Methods
- **\__init__**(**self**, **host**=127.0.0.1, **port**=2000, **worker_threads**=0)
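A minimal connection sketch using those defaults; setting a time-out right away is recommended so that networking operations cannot block forever:

```py
import carla

client = carla.Client('localhost', 2000)
client.set_timeout(10.0)  # seconds
world = client.get_world()
```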
@@ -410,7 +410,7 @@ If you want to see only collisions between a vehicles and a walkers, use for `ca
- `category1` (_single char_) – Character variable specifying a first type of actor involved in the collision.
- `category2` (_single char_) – Character variable specifying the second type of actor involved in the collision.
- **show_recorder_file_info**(**self**, **filename**, **show_all**=False)
-The information saved by the recorder will be parsed and shown in your terminal as text (frames, times, events, state, positions...). The information shown can be specified by using the `show_all` parameter. [Here](recorder_binary_file_format.md) is some more information about how to read the recorder file.
+The information saved by the recorder will be parsed and shown in your terminal as text (frames, times, events, state, positions...). The information shown can be specified by using the `show_all` parameter. [Here](ref_recorder_binary_file_format.md) is some more information about how to read the recorder file.
- **Parameters:**
- `filename` (_str_) – Name or absolute path of the file recorded, depending on your previous choice.
- `show_all` (_bool_) – When true, will show all the details per frame (traffic light states, positions of all actors, orientation and animation data...), but by default it will only show a summary.
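For example, a sketch that prints the summary of a recording (assuming `recording01.log` was previously saved):

```py
# Summary only; pass show_all=True for the full per-frame dump.
print(client.show_recorder_file_info("recording01.log", show_all=False))
```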
@@ -468,7 +468,7 @@ Initializes a color, black by default.
---
## carla.ColorConverter
-Class that defines conversion patterns that can be applied to a [carla.Image](#carla.Image) in order to show information provided by [carla.Sensor](#carla.Sensor). Depth conversions cause a loss of accuracy, as sensors detect depth as float that is then converted to a grayscale value between 0 and 255. Take a look a this [recipe](../python_cookbook/#converted-image-recipe) to see an example of how to create and save image data for sensor.camera.semantic_segmentation.
+Class that defines conversion patterns that can be applied to a [carla.Image](#carla.Image) in order to show information provided by [carla.Sensor](#carla.Sensor). Depth conversions cause a loss of accuracy, as sensors detect depth as a float that is then converted to a grayscale value between 0 and 255. Take a look at this [recipe](ref_code_recipes.md#converted-image-recipe) to see an example of how to create and save image data for sensor.camera.semantic_segmentation.
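As a sketch, a converter can be applied when saving the images received by a semantic segmentation camera (the `camera` actor and output path are illustrative):

```py
# Save every received image with the CityScapes palette applied.
camera.listen(lambda image: image.save_to_disk(
    'output/%06d.png' % image.frame, carla.ColorConverter.CityScapesPalette))
```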
Instance Variables
- **CityScapesPalette**
@@ -483,7 +483,7 @@ No changes applied to the image.
---
## carla.DebugHelper
-Helper class part of [carla.World](#carla.World) that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Check out this [recipe](../python_cookbook/#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
+Helper class part of [carla.World](#carla.World) that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
Methods
- **draw_point**(**self**, **location**, **size**=0.1f, **color**=(255,0,0), **life_time**=-1.0f)
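For instance, a minimal sketch that draws a temporary debug point (assumes an existing `world`); a positive `life_time` makes the shape disappear after that many seconds:

```py
debug = world.debug
debug.draw_point(carla.Location(x=0.0, y=0.0, z=2.0),
                 size=0.1, color=carla.Color(255, 0, 0), life_time=10.0)
```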
@@ -817,7 +817,7 @@ Type 381.
---
## carla.LaneChange
-Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every [carla.Waypoint](#carla.Waypoint) according to the OpenDRIVE file. In this [recipe](../python_cookbook/#lanes-recipe) the user creates a waypoint for a current vehicle position and learns which turns are permitted.
+Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every [carla.Waypoint](#carla.Waypoint) according to the OpenDRIVE file. In this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a waypoint for the current vehicle position and learns which turns are permitted.
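A sketch of reading the permitted lane changes at a vehicle's position (assumes existing `world` and `vehicle` objects):

```py
waypoint = world.get_map().get_waypoint(vehicle.get_location())
print(waypoint.lane_change)  # e.g. Right, Left, Both or NONE.
```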
Instance Variables
- **NONE**
@@ -876,7 +876,7 @@ White by default.
---
## carla.LaneMarkingType
-Class that defines the lane marking types accepted by OpenDRIVE 1.4. Take a look at this [recipe](../python_cookbook/#lanes-recipe) where the user creates a [carla.Waypoint](#carla.Waypoint) for a vehicle location and retrieves from it the information about adjacent lane markings.
+Class that defines the lane marking types accepted by OpenDRIVE 1.4. Take a look at this [recipe](ref_code_recipes.md#lanes-recipe) where the user creates a [carla.Waypoint](#carla.Waypoint) for a vehicle location and retrieves from it the information about adjacent lane markings.
__Note on double types:__ Lane markings are defined under the OpenDRIVE standard, which determines whether a line will be considered "BrokenSolid" or "SolidBroken". For each road there is a center lane marking, defined from left to right regarding the lane's directions. The rest of the lane markings are defined in order from the center lane to the closest outside of the road.
Instance Variables
@@ -895,7 +895,7 @@ __Note on double types:__ Lane markings are defined under the OpenDRIVE standard
---
## carla.LaneType
-Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standards define the road information. For instance in this [recipe](../python_cookbook/#lanes-recipe) the user creates a [carla.Waypoint](#carla.Waypoint) for the current location of a vehicle and uses it to get the current and adjacent lane types.
+Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standard defines the road information. For instance, in this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a [carla.Waypoint](#carla.Waypoint) for the current location of a vehicle and uses it to get the current and adjacent lane types.
Instance Variables
- **NONE**
@@ -1231,7 +1231,7 @@ Time register of the frame at which this measurement was taken given by the OS i
## carla.TrafficLight
Instance Variables
- **state** (_[carla.TrafficLightState](#carla.TrafficLightState)_)
@@ -1288,7 +1288,7 @@ Sets a given time (in seconds) for the yellow light to be active.
---
## carla.TrafficLightState
-All possible states for traffic lights. These can either change at a specific time step or be changed manually. Take a look at this [recipe](../python_cookbook/#traffic-lights-recipe) to see an example.
+All possible states for traffic lights. These can either change at a specific time step or be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to see an example.
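For illustration, a sketch that reads and overrides the state of a light (assumes an existing `traffic_light` actor):

```py
if traffic_light.get_state() == carla.TrafficLightState.Red:
    traffic_light.set_state(carla.TrafficLightState.Green)
```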
Instance Variables
- **Green**
@@ -1658,7 +1658,7 @@ Sets a speed for the walker in meters per second.
---
## carla.WalkerBoneControl
-This class grants bone specific manipulation for walker. The skeletons of walkers have been unified for clarity and the transform applied to each bone are always relative to its parent. Take a look [here](walker_bone_control.md) to learn more on how to create a walker and define its movement.
+This class grants bone-specific manipulation for walkers. The skeletons of walkers have been unified for clarity, and the transforms applied to each bone are always relative to its parent. Take a look [here](tuto_G_control_walker_skeletons.md) to learn more on how to create a walker and define its movement.
Instance Variables
- **bone_transforms** (_list([name,transform])_)
@@ -1956,7 +1956,7 @@ The client tells the server to block calling thread until a **
-The simulation has some advanced configuration options that are contained in this class and can be managed using [carla.World](#carla.World) and its methods. These allow the user to choose between client-server synchrony/asynchrony, activation of "no rendering mode" and either if the simulation should run with a fixed or variable time-step. Check [this](../configuring_the_simulation/) out if you want to learn about it.
+The simulation has some advanced configuration options that are contained in this class and can be managed using [carla.World](#carla.World) and its methods. These allow the user to choose between client-server synchrony or asynchrony, activate a "no rendering mode", and decide whether the simulation should run with a fixed or variable time-step (see the sketch below). Check [this](adv_synchrony_timestep.md) out if you want to learn more about it.
Instance Variables
- **synchronous_mode** (_bool_)
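A minimal sketch of applying these settings (assumes an existing `world`; `fixed_delta_seconds` is shown under the assumption that the build in use supports it):

```py
settings = world.get_settings()
settings.synchronous_mode = True      # Client-server synchrony.
settings.fixed_delta_seconds = 0.05   # Fixed time-step: 20 frames per second.
world.apply_settings(settings)
```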
diff --git a/Docs/python_api_tutorial.md b/Docs/python_api_tutorial.md
deleted file mode 100644
index 6aeb9dd30..000000000
--- a/Docs/python_api_tutorial.md
+++ /dev/null
@@ -1,688 +0,0 @@
-Python API tutorial
-
-In this tutorial we introduce the basic concepts of the CARLA Python API, as
-well as an overview of its most important functionalities. The reference of all
-classes and methods available can be found at
-[Python API reference](python_api.md).
-
-!!! note
- **This document applies only to the latest development version**.
- The API has been significantly changed in the latest versions starting at
- 0.9.0. We commonly refer to the new API as **0.9.X API** as opposed to
- the previous **0.8.X API**.
-
-First of all, we need to introduce a few core concepts:
-
- - **Actor:** Actor is anything that plays a role in the simulation and can be
- moved around, examples of actors are vehicles, pedestrians, and sensors.
- - **Blueprint:** Before spawning an actor you need to specify its attributes,
- and that's what blueprints are for. We provide a blueprint library with
- the definitions of all the actors available.
- - **World:** The world represents the currently loaded map and contains the
- functions for converting a blueprint into a living actor, among other. It
- also provides access to the road map and functions to change the weather
- conditions.
-
-#### Connecting and retrieving the world
-
-To connect to a simulator we need to create a "Client" object, to do so we need
-to provide the IP address and port of a running instance of the simulator
-
-```py
-client = carla.Client('localhost', 2000)
-```
-
-The first recommended thing to do right after creating a client instance is
-setting its time-out. This time-out sets a time limit to all networking
-operations, if the time-out is not set networking operations may block forever
-
-```py
-client.set_timeout(10.0) # seconds
-```
-
-Once we have the client configured we can directly retrieve the world
-
-```py
-world = client.get_world()
-```
-
-Typically we won't need the client object anymore, all the objects created by
-the world will connect to the IP and port provided if they need to. These
-operations are usually done in the background and are transparent to the user.
-
-Changing the map
-----------------
-
-The map can be changed from the Python API with
-
-```py
-world = client.load_world('Town01')
-```
-
-this creates an empty world with default settings. The list of currently
-available maps can be retrieved with
-
-```py
-print(client.get_available_maps())
-```
-
-To reload the world using the current active map, use
-
-```py
-world = client.reload_world()
-```
-
-#### Blueprints
-
-A blueprint contains the information necessary to create a new actor. For
-instance, if the blueprint defines a car, we can change its color here, if it
-defines a lidar, we can decide here how many channels the lidar will have. A
-blueprints also has an ID that uniquely identifies it and all the actor
-instances created with it. Examples of IDs are "vehicle.nissan.patrol" or
-"sensor.camera.depth".
-
-The list of all available blueprints is kept in the [**blueprint library**](/bp_library)
-
-```py
-blueprint_library = world.get_blueprint_library()
-```
-
-The library allows us to find specific blueprints by ID, filter them with
-wildcards, or just choosing one at random
-
-```py
-# Find specific blueprint.
-collision_sensor_bp = blueprint_library.find('sensor.other.collision')
-# Chose a vehicle blueprint at random.
-vehicle_bp = random.choice(blueprint_library.filter('vehicle.bmw.*'))
-```
-
-Some of the attributes of the blueprints can be modified while some other are
-just read-only. For instance, we cannot modify the number of wheels of a vehicle
-but we can change its color
-
-```py
-vehicles = blueprint_library.filter('vehicle.*')
-bikes = [x for x in vehicles if int(x.get_attribute('number_of_wheels')) == 2]
-for bike in bikes:
- bike.set_attribute('color', '255,0,0')
-```
-
-Modifiable attributes also come with a list of recommended values
-
-```py
-for attr in blueprint:
- if attr.is_modifiable:
- blueprint.set_attribute(attr.id, random.choice(attr.recommended_values))
-```
-
-The blueprint system has been designed to ease contributors adding their custom
-actors directly in Unreal Editor, we'll add a tutorial on this soon, stay tuned!
-
-#### Spawning actors
-
-Once we have the blueprint set up, spawning an actor is pretty straightforward
-
-```py
-transform = Transform(Location(x=230, y=195, z=40), Rotation(yaw=180))
-actor = world.spawn_actor(blueprint, transform)
-```
-
-The spawn actor function comes in two flavours, [`spawn_actor`](python_api.md#carla.World.spawn_actor) and
-[`try_spawn_actor`](python_api.md#carla.World.try_spawn_actor).
-The former will raise an exception if the actor could not be spawned,
-the later will return `None` instead. The most typical cause of
-failure is collision at spawn point, meaning the actor does not fit at the spot
-we chose; probably another vehicle is in that spot or we tried to spawn into a
-static object.
-
-To ease the task of finding a spawn location, each map provides a list of
-recommended transforms
-
-```py
-spawn_points = world.get_map().get_spawn_points()
-```
-
-We'll add more on the map object later in this tutorial.
-
-Finally, the spawn functions have an optional argument that controls whether the
-actor is going to be attached to another actor. This is specially useful for
-sensors. In the next example, the camera remains rigidly attached to our vehicle
-during the rest of the simulation
-
-```py
-camera = world.spawn_actor(camera_bp, relative_transform, attach_to=my_vehicle)
-```
-
-Note that in this case, the transform provided is treated relative to the parent
-actor.
-
-#### Handling actors
-
-Once we have an actor alive in the world, we can move this actor around and
-check its dynamic properties
-
-```py
-location = actor.get_location()
-location.z += 10.0
-actor.set_location(location)
-print(actor.get_acceleration())
-print(actor.get_velocity())
-```
-
-We can even freeze an actor by disabling its physics simulation
-
-```py
-actor.set_simulate_physics(False)
-```
-
-And once we get tired of an actor we can remove it from the simulation with
-
-```py
-actor.destroy()
-```
-
-Note that actors are not cleaned up automatically when the Python script
-finishes, if we want to get rid of them we need to explicitly destroy them.
-
-!!! important
- **Known issue:** To improve performance, most of the methods send requests
- to the simulator asynchronously. The simulator queues each of these
- requests, but only has a limited amount of time each update to parse them.
- If we flood the simulator by calling "set" methods too often, e.g.
- set_transform, the requests will accumulate a significant lag.
-
-#### Vehicles
-
-Vehicles are a special type of actor that provide a few extra methods. Apart
-from the handling methods common to all actors, vehicles can also be controlled
-by providing throttle, break, and steer values
-
-```py
-vehicle.apply_control(carla.VehicleControl(throttle=1.0, steer=-1.0))
-```
-
-These are all the parameters of the [`VehicleControl`](python_api.md#carla.VehicleControl)
-object and their default values
-
-```py
-carla.VehicleControl(
- throttle = 0.0
- steer = 0.0
- brake = 0.0
- hand_brake = False
- reverse = False
- manual_gear_shift = False
- gear = 0)
-```
-
-Also, physics control properties can be tuned for vehicles and its wheels
-
-```py
-vehicle.apply_physics_control(carla.VehiclePhysicsControl(max_rpm = 5000.0, center_of_mass = carla.Vector3D(0.0, 0.0, 0.0), torque_curve=[[0,400],[5000,400]]))
-```
-
-These properties are controlled through a
-[`VehiclePhysicsControl`](python_api.md#carla.VehiclePhysicsControl) object,
-which also contains a property to control each wheel's physics through a
-[`WheelPhysicsControl`](python_api.md#carla.WheelPhysicsControl) object.
-
-```py
-carla.VehiclePhysicsControl(
- torque_curve,
- max_rpm,
- moi,
- damping_rate_full_throttle,
- damping_rate_zero_throttle_clutch_engaged,
- damping_rate_zero_throttle_clutch_disengaged,
- use_gear_autobox,
- gear_switch_time,
- clutch_strength,
- mass,
- drag_coefficient,
- center_of_mass,
- steering_curve,
- wheels)
-```
-
-Where:
-
-- *torque_curve*: Curve that indicates the torque measured in Nm for a specific revolutions
-per minute of the vehicle's engine
-- *max_rpm*: The maximum revolutions per minute of the vehicle's engine
-- *moi*: The moment of inertia of the vehicle's engine
-- *damping_rate_full_throttle*: Damping rate when the throttle is maximum.
-- *damping_rate_zero_throttle_clutch_engaged*: Damping rate when the thottle is zero
-with clutch engaged
-- *damping_rate_zero_throttle_clutch_disengaged*: Damping rate when the thottle is zero
-with clutch disengaged
-
-- *use_gear_autobox*: If true, the vehicle will have automatic transmission
-- *gear_switch_time*: Switching time between gears
-- *clutch_strength*: The clutch strength of the vehicle. Measured in Kgm^2/s
-
-- *final_ratio*: The fixed ratio from transmission to wheels.
-- *forward_gears*: List of [`GearPhysicsControl`](python_api.md#carla.GearPhysicsControl) objects.
-
-- *mass*: The mass of the vehicle measured in Kg
-- *drag_coefficient*: Drag coefficient of the vehicle's chassis
-- *center_of_mass*: The center of mass of the vehicle
-- *steering_curve*: Curve that indicates the maximum steering for a specific forward speed
-- *wheels*: List of [`WheelPhysicsControl`](python_api.md#carla.WheelPhysicsControl) objects.
-
-```py
-carla.WheelPhysicsControl(
- tire_friction,
- damping_rate,
- max_steer_angle,
- radius,
- max_brake_torque,
- max_handbrake_torque,
- position)
-```
-Where:
-- *tire_friction*: Scalar value that indicates the friction of the wheel.
-- *damping_rate*: The damping rate of the wheel.
-- *max_steer_angle*: The maximum angle in degrees that the wheel can steer.
-- *radius*: The radius of the wheel in centimeters.
-- *max_brake_torque*: The maximum brake torque in Nm.
-- *max_handbrake_torque*: The maximum handbrake torque in Nm.
-- *position*: The position of the wheel.
-
-```py
-carla.GearPhysicsControl(
- ratio,
- down_ratio,
- up_ratio)
-```
-Where:
-- *ratio*: The transmission ratio of this gear.
-- *down_ratio*: The level of RPM (in relation to MaxRPM) where the gear autobox initiates shifting down.
-- *up_ratio*: The level of RPM (in relation to MaxRPM) where the gear autobox initiates shifting up.
-
-Our vehicles also come with a handy autopilot
-
-```py
-vehicle.set_autopilot(True)
-```
-
-As has been a common misconception, we need to clarify that this autopilot
-control is purely hard-coded into the simulator and it's not based at all in
-machine learning techniques.
-
-Finally, vehicles also have a bounding box that encapsulates them
-
-```py
-box = vehicle.bounding_box
-print(box.location) # Location relative to the vehicle.
-print(box.extent) # XYZ half-box extents in meters.
-```
-
-#### Sensors
-
-Sensors are actors that produce a stream of data. Sensors are such a key
-component of CARLA that they deserve their own documentation page, so here we'll
-limit ourselves to show a small example of how sensors work
-
-```py
-camera_bp = blueprint_library.find('sensor.camera.rgb')
-camera = world.spawn_actor(camera_bp, relative_transform, attach_to=my_vehicle)
-camera.listen(lambda image: image.save_to_disk('output/%06d.png' % image.frame))
-```
-
-In this example we have attached a camera to a vehicle, and told the camera to
-save to disk each of the images that are going to be generated.
-
-The full list of sensors and their measurement is explained in
-[Cameras and sensors](core_sensors.md).
-
-#### Other actors
-
-Apart from vehicles and sensors, there are a few other actors in the world. The
-full list can be requested to the world with
-
-```py
-actor_list = world.get_actors()
-```
-
-The actor list object returned has functions for finding, filtering, and
-iterating actors
-
-```py
-# Find an actor by id.
-actor = actor_list.find(id)
-# Print the location of all the speed limit signs in the world.
-for speed_sign in actor_list.filter('traffic.speed_limit.*'):
- print(speed_sign.get_location())
-```
-
-Among the actors you can find in this list are
-
- * **Traffic lights** with a [`state`](python_api.md#carla.TrafficLight.state) property
- to check the light's current state.
- * **Speed limit signs** with the speed codified in their type_id.
- * The **Spectator** actor that can be used to move the view of the simulator window.
-
-#### Changing the weather
-
-The lighting and weather conditions can be requested and changed with the world
-object
-
-```py
-weather = carla.WeatherParameters(
- cloudiness=80.0,
- precipitation=30.0,
- sun_altitude_angle=70.0)
-
-world.set_weather(weather)
-
-print(world.get_weather())
-```
-
-For convenience, we also provided a list of predefined weather presets that can
-be directly applied to the world
-
-```py
-world.set_weather(carla.WeatherParameters.WetCloudySunset)
-```
-
-The full list of presets can be found in the
-[WeatherParameters reference](python_api.md#carla.WeatherParameters).
-
-### World Snapshot
-
-A world snapshot represents the state of every actor in the simulation at a single frame,
-a sort of still image of the world with a timestamp. With this feature it is possible to
-record the location of every actor and make sure all of them were captured at the same
-frame without the need of using synchronous mode.
-
-```py
-# Retrieve a snapshot of the world at this point in time.
-world_snapshot = world.get_snapshot()
-
-# Wait for the next tick and retrieve the snapshot of the tick.
-world_snapshot = world.wait_for_tick()
-
-# Register a callback to get called every time we receive a new snapshot.
-world.on_tick(lambda world_snapshot: do_something(world_snapshot))
-```
-
-The world snapshot contains a timestamp and a list of actor snapshots. Actor snapshots do not
-allow to operate on the actor directly as they only contain data about the physical state of
-the actor, but you can use their id to retrieve the actual actor. And the other way around,
-you can look up snapshots by id (average O(1) complexity).
-
-```py
-timestamp = world_snapshot.timestamp
-timestamp.frame_count
-timestamp.elapsed_seconds
-timestamp.delta_seconds
-timestamp.platform_timestamp
-
-
-for actor_snapshot in world_snapshot:
- actor_snapshot.get_transform()
- actor_snapshot.get_velocity()
- actor_snapshot.get_angular_velocity()
- actor_snapshot.get_acceleration()
-
- actual_actor = world.get_actor(actor_snapshot.id)
-
-
-actor_snapshot = world_snapshot.find(actual_actor.id)
-```
-
-#### Map and waypoints
-
-One of the key features of CARLA is that our roads are fully annotated. All our
-maps come accompanied by [OpenDrive](http://www.opendrive.org/) files that
-defines the road layout. Furthermore, we provide a higher level API for querying
-and navigating this information.
-
-These objects were a recent addition to our API and are still in heavy
-development, we hope to make them much more powerful soon.
-
-Let's start by getting the map of the current world
-
-```py
-map = world.get_map()
-```
-
-For starters, the map has a [`name`](python_api.md#carla.Map.name) attribute that matches
-the name of the currently loaded city, e.g. Town01. And, as we've seen before, we can also ask
-the map to provide a list of recommended locations for spawning vehicles,
-[`map.get_spawn_points()`](python_api.md#carla.Map.get_spawn_points).
-
-However, the real power of this map API comes apparent when we introduce
-[`waypoints`](python_api.md#carla.Waypoint). We can tell the map to give us a waypoint on
-the road closest to our vehicle
-
-```py
-waypoint = map.get_waypoint(vehicle.get_location())
-```
-
-This waypoint's [`transform`](python_api.md#carla.Waypoint.transform) is located on a drivable lane,
-and it's oriented according to the road direction at that point.
-
-Waypoints have their unique identifier [`carla.Waypoint.id`](python_api.md#carla.Waypoint.id)
-based on the hash of its [`road_id`](python_api.md#carla.Waypoint.road_id),
-[`section_id`](python_api.md#carla.Waypoint.section_id),
-[`lane_id`](python_api.md#carla.Waypoint.lane_id) and [`s`](python_api.md#carla.Waypoint.s).
-They also provide more information about lanes, such as the
-[`lane_type`](python_api.md#carla.Waypoint.lane_type) of the current waypoint
-and if a [`lane_change`](python_api.md#carla.Waypoint.lane_change) is possible and in which direction.
-
-```py
-# Nearest waypoint on the center of a Driving or Sidewalk lane.
-waypoint = map.get_waypoint(vehicle.get_location(),project_to_road=True, lane_type=(carla.LaneType.Driving | carla.LaneType.Sidewalk))
-# Get the current lane type (driving or sidewalk).
-lane_type = waypoint.lane_type
-# Get available lane change.
-lane_change = waypoint.lane_change
-```
-
-Surrounding lane markings _(right / left)_ can also be accessed through the waypoint API.
-Therefore, it is possible to know all the information provided by a
-[`carla.LaneMarking`](python_api.md#carla.LaneMarking),
-like the lane marking [`type`](python_api.md#carla.LaneMarkingType) and its
-[`lane_change`](python_api.md#carla.LaneChange) availability.
-
-```py
-# Get right lane marking type
-right_lm_type = waypoint.right_lane_marking.type
-```
-
-Waypoints also have function to query the "next" waypoints; this method returns
-a list of waypoints at a certain distance that can be accessed from this
-waypoint following the traffic rules. In other words, if a vehicle is placed in
-this waypoint, give me the list of posible locations that this vehicle can drive
-to. Let's see a practical example:
-
-```py
-# Retrieve the closest waypoint.
-waypoint = map.get_waypoint(vehicle.get_location())
-
-# Disable physics, in this example we're just teleporting the vehicle.
-vehicle.set_simulate_physics(False)
-
-while True:
- # Find next waypoint 2 meters ahead.
- waypoint = random.choice(waypoint.next(2.0))
- # Teleport the vehicle.
- vehicle.set_transform(waypoint.transform)
-```
-
-The map object also provides methods for generating in bulk waypoints all over
-the map at an approximated distance between them
-
-```py
-waypoint_list = map.generate_waypoints(2.0)
-```
-
-For routing purposes, it is also possible to retrieve a topology graph of the
-roads
-
-```py
-waypoint_tuple_list = map.get_topology()
-```
-
-This method returns a list of pairs (tuples) of waypoints, for each pair, the
-first element connects with the second one. Only the minimal set of waypoints to
-define the topology are generated by this method, only a waypoint for each lane
-for each road segment in the map.
-
-Finally, to allow access to the whole road information, the map object can be
-converted to OpenDrive format, and saved to disk as such.
-
-### Recording and Replaying system
-
-CARLA includes now a recording and replaying API, that allows to record a simulation in a file and
-later replay that simulation. The file is written on server side only, and it includes which
-**actors are created or destroyed** in the simulation, the **state of the traffic lights**
-and the **position** and **orientation** of all vehicles and pedestrians.
-
-To start recording we only need to supply a file name:
-
-```py
-client.start_recorder("recording01.log")
-```
-
-To stop the recording, we need to call:
-
-```py
-client.stop_recorder()
-```
-
-At any point we can replay a simulation, specifying the filename:
-
-```py
-client.replay_file("recording01.log")
-```
-
-The replayer replicates the actor and traffic light information of the recording each frame.
-
-For more details, [Recorder and Playback system](recorder_and_playback.md)
-
-#### Pedestrians
-
-![pedestrian types](img/pedestrian_types.png)
-
-We can get a lit of all pedestrians from the blueprint library and choose one:
-
-```py
-world = client.get_world()
-blueprintsWalkers = world.get_blueprint_library().filter("walker.pedestrian.*")
-walker_bp = random.choice(blueprintsWalkers)
-```
-
-We can **get a list of random points** where to spawn the pedestrians. Those points are always
-from the areas where the pedestrian can walk:
-
-```py
-# 1. take all the random locations to spawn
-spawn_points = []
-for i in range(50):
- spawn_point = carla.Transform()
- spawn_point.location = world.get_random_location_from_navigation()
- if (spawn_point.location != None):
- spawn_points.append(spawn_point)
-
-```
-
-Now we can **spawn the pedestrians** at those positions using a batch of commands:
-
-```py
-# 2. build the batch of commands to spawn the pedestrians
-batch = []
-for spawn_point in spawn_points:
- walker_bp = random.choice(blueprintsWalkers)
- batch.append(carla.command.SpawnActor(walker_bp, spawn_point))
-
-# apply the batch
-results = client.apply_batch_sync(batch, True)
-for i in range(len(results)):
- if results[i].error:
- logging.error(results[i].error)
- else:
- walkers_list.append({"id": results[i].actor_id})
-```
-
-We save the id of each walker from the results of the batch, in a dictionary because we will
-assign to them also a controller.
-We need to **create the controller** that will manage the pedestrian automatically:
-
-```py
-# 3. we spawn the walker controller
-batch = []
-walker_controller_bp = world.get_blueprint_library().find('controller.ai.walker')
-for i in range(len(walkers_list)):
- batch.append(carla.command.SpawnActor(walker_controller_bp, carla.Transform(), walkers_list[i]["id"]))
-
-# apply the batch
-results = client.apply_batch_sync(batch, True)
-for i in range(len(results)):
- if results[i].error:
- logging.error(results[i].error)
- else:
- walkers_list[i]["con"] = results[i].actor_id
-```
-
-We create the controller as a child of the walker, so the location we pass is (0,0,0).
-
-At this point we have a list of pedestrians, each with its own controller, but we still need
-to get the actual actor objects from their ids. Because each controller is a child of its
-pedestrian, we need to **put all the ids in the same list** so the parent can find its child there.
-
-```py
-# 4. put together the walker and controller ids to retrieve the actor objects
-for i in range(len(walkers_list)):
- all_id.append(walkers_list[i]["con"])
- all_id.append(walkers_list[i]["id"])
-all_actors = world.get_actors(all_id)
-```
-
-The list `all_actors` now contains all the actor objects we created.
-
-At this point it is a good idea to **wait for a tick** on the client, so the server has
-time to send all the data about the new actors we just created (we need the transform of
-each one to be updated). We can do that with a call like:
-
-```py
-# wait for a tick to ensure client receives the last transform of the walkers we have just created
-world.wait_for_tick()
-```
-
-After that, our client has up-to-date data about the actors.
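-
-Note that this applies when the server ticks on its own. If the simulation runs in
-synchronous mode, the client drives the steps itself; a minimal sketch of the equivalent call:
-
-```py
-# In synchronous mode the client advances the simulation one step itself.
-world.tick()
-```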
-
- **Using the controller** we can set the locations where we want each pedestrian to walk to:
-
-```py
-# 5. initialize each controller and set target to walk to (list is [controller, actor, controller, actor ...])
-for i in range(0, len(all_actors), 2):
- # start walker
- all_actors[i].start()
- # set walk to random point
- all_actors[i].go_to_location(world.get_random_location_from_navigation())
- # random max speed
- all_actors[i].set_max_speed(1 + random.random()) # max speed between 1 and 2 (default is 1.4 m/s)
-```
-
-There we have assigned to each pedestrian (through its controller) a random target point and a
-random speed. When they reach the target point, they automatically walk to another random point.
-
-If the target point is not reachable, they walk to the closest point in the area where they are.
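-
-If a specific destination is wanted instead of a random one, any `carla.Location` can be
-passed to the controller; the coordinates below are made up for illustration:
-
-```py
-# Hypothetical fixed destination, replacing the random point in the loop above.
-target = carla.Location(x=10.0, y=20.0, z=0.5)
-all_actors[i].go_to_location(target)
-```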
-
-![pedestrian sample](img/pedestrians_shoot.png)
-
-To **destroy the pedestrians**, we need to stop their navigation first,
-and then destroy the objects (actor and controller):
-
-```py
-# stop pedestrians (list is [controller, actor, controller, actor ...])
-for i in range(0, len(all_id), 2):
- all_actors[i].stop()
-
-# destroy pedestrian (actor and controller)
-client.apply_batch([carla.command.DestroyActor(x) for x in all_id])
-```
diff --git a/Docs/python_cookbook.md b/Docs/ref_code_recipes.md
similarity index 80%
rename from Docs/python_cookbook.md
rename to Docs/ref_code_recipes.md
index 7468ca99a..02aaf59a2 100644
--- a/Docs/python_cookbook.md
+++ b/Docs/ref_code_recipes.md
@@ -1,24 +1,24 @@
# Code recipes
-This section contains a list of recipes that complement the [tutorial](../python_api_tutorial/)
-and are used to illustrate the use of Python API methods.
+This section contains a list of recipes that complement the [first steps](core_concepts.md) section and are used to illustrate the use of Python API methods.
-Each recipe has a list of [python API classes](../python_api/),
+Each recipe has a list of [python API classes](python_api.md),
which is divided into the classes the recipe is centered on, and those that also need to be used.
There are more recipes to come!
+---
## Actor Spectator Recipe
This recipe spawns an actor and the spectator camera at the actor's location.
Focused on:
-[`carla.World`](../python_api/#carla.World)
-[`carla.Actor`](../python_api/#carla.Actor)
+[`carla.World`](python_api.md#carla.World)
+[`carla.Actor`](python_api.md#carla.Actor)
Used:
-[`carla.WorldSnapshot`](../python_api/#carla.WorldSnapshot)
-[`carla.ActorSnapshot`](../python_api/#carla.ActorSnapshot)
+[`carla.WorldSnapshot`](python_api.md#carla.WorldSnapshot)
+[`carla.ActorSnapshot`](python_api.md#carla.ActorSnapshot)
```py
# ...
@@ -40,16 +40,17 @@ spectator.set_transform(actor_snapshot.get_transform())
# ...
```
+---
## Attach Sensors Recipe
This recipe attaches different cameras / sensors to a vehicle, using different attachments.
Focused on:
-[`carla.Sensor`](../python_api/#carla.Sensor)
-[`carla.AttachmentType`](../python_api/#carla.AttachmentType)
+[`carla.Sensor`](python_api.md#carla.Sensor)
+[`carla.AttachmentType`](python_api.md#carla.AttachmentType)
Used:
-[`carla.World`](../python_api/#carla.World)
+[`carla.World`](python_api.md#carla.World)
```py
# ...
@@ -61,17 +62,18 @@ lane_invasion_sensor = world.spawn_actor(sensor_lane_invasion_bp, transform, att
# ...
```
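
A hedged sketch of the core call, using the `attach_to` and `attachment_type` parameters of `spawn_actor` (parameter names as in the CARLA example scripts; blueprint and transform values are illustrative):

```py
# Attach a camera with a SpringArm so it follows the vehicle smoothly.
camera = world.spawn_actor(
    camera_bp,
    carla.Transform(carla.Location(x=-5.5, z=2.8)),
    attach_to=vehicle,
    attachment_type=carla.AttachmentType.SpringArm)
```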
+---
## Actor Attribute Recipe
This recipe changes attributes of different types of blueprint actors.
Focused on:
-[`carla.ActorAttribute`](../python_api/#carla.ActorAttribute)
-[`carla.ActorBlueprint`](../python_api/#carla.ActorBlueprint)
+[`carla.ActorAttribute`](python_api.md#carla.ActorAttribute)
+[`carla.ActorBlueprint`](python_api.md#carla.ActorBlueprint)
Used:
-[`carla.World`](../python_api/#carla.World)
-[`carla.BlueprintLibrary`](../python_api/#carla.BlueprintLibrary)
+[`carla.World`](python_api.md#carla.World)
+[`carla.BlueprintLibrary`](python_api.md#carla.BlueprintLibrary)
```py
# ...
@@ -92,14 +94,15 @@ camera_bp.set_attribute('image_size_y', 600)
# ...
```
+---
## Converted Image Recipe
This recipe applies a color conversion to the image taken by a camera sensor,
so it is converted to a semantic segmentation image.
Focused on:
-[`carla.ColorConverter`](../python_api/#carla.ColorConverter)
-[`carla.Sensor`](../python_api/#carla.Sensor)
+[`carla.ColorConverter`](python_api.md#carla.ColorConverter)
+[`carla.Sensor`](python_api.md#carla.Sensor)
```py
# ...
@@ -110,20 +113,21 @@ camera.listen(lambda image: image.save_to_disk('output/%06d.png' % image.frame,
# ...
```
+---
## Lanes Recipe
This recipe shows the current traffic rules affecting the vehicle: the current lane type and
whether a lane change can be done in the current lane or the surrounding ones.
Focused on:
-[`carla.LaneMarking`](../python_api/#carla.LaneMarking)
-[`carla.LaneMarkingType`](../python_api/#carla.LaneMarkingType)
-[`carla.LaneChange`](../python_api/#carla.LaneChange)
-[`carla.LaneType`](../python_api/#carla.LaneType)
+[`carla.LaneMarking`](python_api.md#carla.LaneMarking)
+[`carla.LaneMarkingType`](python_api.md#carla.LaneMarkingType)
+[`carla.LaneChange`](python_api.md#carla.LaneChange)
+[`carla.LaneType`](python_api.md#carla.LaneType)
Used:
-[`carla.Waypoint`](../python_api/#carla.Waypoint)
-[`carla.World`](../python_api/#carla.World)
+[`carla.Waypoint`](python_api.md#carla.Waypoint)
+[`carla.World`](python_api.md#carla.World)
```py
# ...
@@ -141,19 +145,20 @@ print("R lane marking change: " + str(waypoint.right_lane_marking.lane_change))
![lane_marking_recipe](img/lane_marking_recipe.png)
+---
## Debug Bounding Box Recipe
This recipe shows how to draw traffic light actor bounding boxes from a world snapshot.
Focused on:
-[`carla.DebugHelper`](../python_api/#carla.DebugHelper)
-[`carla.BoundingBox`](../python_api/#carla.BoundingBox)
+[`carla.DebugHelper`](python_api.md#carla.DebugHelper)
+[`carla.BoundingBox`](python_api.md#carla.BoundingBox)
Used:
-[`carla.ActorSnapshot`](../python_api/#carla.ActorSnapshot)
-[`carla.Actor`](../python_api/#carla.Actor)
-[`carla.Vector3D`](../python_api/#carla.Vector3D)
-[`carla.Color`](../python_api/#carla.Color)
+[`carla.ActorSnapshot`](python_api.md#carla.ActorSnapshot)
+[`carla.Actor`](python_api.md#carla.Actor)
+[`carla.Vector3D`](python_api.md#carla.Vector3D)
+[`carla.Color`](python_api.md#carla.Color)
```py
# ....
@@ -169,6 +174,7 @@ for actor_snapshot in world_snapshot:
![debug_bb_recipe](img/debug_bb_recipe.png)
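
A hedged sketch of the core drawing loop (the traffic-light filter, box extent and color are illustrative; `get_actor` and `debug.draw_box` are existing API calls):

```py
for actor_snapshot in world_snapshot:
    actual_actor = world.get_actor(actor_snapshot.id)
    if actual_actor.type_id == 'traffic.traffic_light':
        # Draw a box at the snapshot's transform; extent values are made up.
        world.debug.draw_box(
            carla.BoundingBox(actor_snapshot.get_transform().location,
                              carla.Vector3D(0.5, 0.5, 2)),
            actor_snapshot.get_transform().rotation,
            0.05, carla.Color(255, 0, 0, 0), 0)
```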
+---
## Debug Vehicle Trail Recipe
This recipe is a modification of
@@ -176,16 +182,16 @@ This recipe is a modification of
It draws the path of an actor through the world, printing information at each waypoint.
Focused on:
-[`carla.DebugHelper`](../python_api/#carla.DebugHelper)
-[`carla.Waypoint`](../python_api/#carla.Waypoint)
-[`carla.Actor`](../python_api/#carla.Actor)
+[`carla.DebugHelper`](python_api.md#carla.DebugHelper)
+[`carla.Waypoint`](python_api.md#carla.Waypoint)
+[`carla.Actor`](python_api.md#carla.Actor)
Used:
-[`carla.ActorSnapshot`](../python_api/#carla.ActorSnapshot)
-[`carla.Vector3D`](../python_api/#carla.Vector3D)
-[`carla.LaneType`](../python_api/#carla.LaneType)
-[`carla.Color`](../python_api/#carla.Color)
-[`carla.Map`](../python_api/#carla.Map)
+[`carla.ActorSnapshot`](python_api.md#carla.ActorSnapshot)
+[`carla.Vector3D`](python_api.md#carla.Vector3D)
+[`carla.LaneType`](python_api.md#carla.LaneType)
+[`carla.Color`](python_api.md#carla.Color)
+[`carla.Map`](python_api.md#carla.Map)
```py
# ...
@@ -215,15 +221,16 @@ path it was following and the speed at each waypoint.
![debug_trail_recipe](img/debug_trail_recipe.png)
+---
## Parse client creation arguments
This recipe appears in every script provided in `PythonAPI/Examples`; it is used to parse the client creation arguments when running the script.
Focused on:
-[`carla.Client`](../python_api/#carla.Client)
+[`carla.Client`](python_api.md#carla.Client)
Used:
-[`carla.Client`](../python_api/#carla.Client)
+[`carla.Client`](python_api.md#carla.Client)
```py
argparser = argparse.ArgumentParser(
@@ -253,17 +260,18 @@ Used:
client = carla.Client(args.host, args.port)
```
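
A condensed, self-contained sketch of that pattern (the defaults follow the CARLA example scripts):

```py
import argparse

import carla

argparser = argparse.ArgumentParser(description=__doc__)
argparser.add_argument('--host', metavar='H', default='127.0.0.1',
                       help='IP of the host server (default: 127.0.0.1)')
argparser.add_argument('-p', '--port', metavar='P', default=2000, type=int,
                       help='TCP port to listen to (default: 2000)')
args = argparser.parse_args()

client = carla.Client(args.host, args.port)
```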
+---
## Traffic lights Recipe
This recipe changes the traffic light that affects the vehicle from red to green.
This is done by detecting if the vehicle actor is at a traffic light.
Focused on:
-[`carla.TrafficLight`](../python_api/#carla.TrafficLight)
-[`carla.TrafficLightState`](../python_api/#carla.TrafficLightState)
+[`carla.TrafficLight`](python_api.md#carla.TrafficLight)
+[`carla.TrafficLightState`](python_api.md#carla.TrafficLightState)
Used:
-[`carla.Vehicle`](../python_api/#carla.Vehicle)
+[`carla.Vehicle`](python_api.md#carla.Vehicle)
```py
# ...
@@ -277,7 +285,7 @@ if vehicle_actor.is_at_traffic_light():
![tl_recipe](img/tl_recipe.gif)
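
A sketch of the logic described above, built from existing `carla.Vehicle` and `carla.TrafficLight` calls (`vehicle_actor` is assumed to be the ego vehicle):

```py
# If the vehicle is being affected by a red light, force it to green.
if vehicle_actor.is_at_traffic_light():
    traffic_light = vehicle_actor.get_traffic_light()
    if traffic_light.get_state() == carla.TrafficLightState.Red:
        traffic_light.set_state(carla.TrafficLightState.Green)
```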
-
+---
## Walker batch recipe
```py
diff --git a/Docs/cpp_reference.md b/Docs/ref_cpp.md
similarity index 99%
rename from Docs/cpp_reference.md
rename to Docs/ref_cpp.md
index ae9ea4513..075b97a7d 100644
--- a/Docs/cpp_reference.md
+++ b/Docs/ref_cpp.md
@@ -1,4 +1,3 @@
-
# C++ Reference
We use Doxygen to generate the documentation of our C++ code:
diff --git a/Docs/recorder_binary_file_format.md b/Docs/ref_recorder_binary_file_format.md
similarity index 99%
rename from Docs/recorder_binary_file_format.md
rename to Docs/ref_recorder_binary_file_format.md
index c674c74a0..d42b97cd3 100644
--- a/Docs/recorder_binary_file_format.md
+++ b/Docs/ref_recorder_binary_file_format.md
@@ -13,6 +13,7 @@ In summary, the file format has a small header with general info
![global file format](img/RecorderFileFormat3.png)
+---
## 1. Strings in binary
Strings are encoded first with their length, followed by their characters without null
@@ -21,6 +22,7 @@ as hex values: 06 00 54 6f 77 6e 30 36
![binary dynamic string](img/RecorderString.png)
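
A quick illustrative sketch of that encoding, assuming the length is a 2-byte little-endian field (which matches the `06 00` prefix in the example):

```py
import struct

def encode_string(s):
    # Length first (2 bytes, little-endian), then the raw characters.
    data = s.encode('utf-8')
    return struct.pack('<H', len(data)) + data

print(encode_string('Town06').hex(' '))  # 06 00 54 6f 77 6e 30 36
```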
+---
## 2. Info header
The info header has general information about the recorded file. Basically, it contains the version
@@ -34,6 +36,7 @@ A sample info header is:
![info header sample](img/RecorderHeader.png)
+---
## 3. Packets
Each packet starts with a little header of two fields (5 bytes):
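
A hedged reading sketch of such a header, assuming the two fields are a 1-byte packet id followed by a 4-byte little-endian size:

```py
import struct

def read_packet_header(f):
    # 1-byte id + 4-byte payload size, little-endian, no padding.
    packet_id, packet_size = struct.unpack('<BI', f.read(5))
    return packet_id, packet_size
```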
@@ -173,6 +176,7 @@ that is used in the animation.
![state](img/RecorderWalker.png)
+---
## 4. Frame Layout
A frame consists of several packets, where all of them are optional, except the ones that
@@ -190,6 +194,7 @@ or set the state of traffic lights.
The **animation** packets are also optional, but by default they are recorded. That way the walkers
are animated and also the vehicle wheels follow the direction of the vehicles.
+---
## 5. File Layout
The layout of the file starts with the **info header**, followed by a collection of packets in
diff --git a/Docs/ref_sensors.md b/Docs/ref_sensors.md
index 94841f901..54c0edeb6 100644
--- a/Docs/ref_sensors.md
+++ b/Docs/ref_sensors.md
@@ -12,8 +12,8 @@
* [__Semantic segmentation camera__](#semantic-segmentation-camera)
----------------
-##Collision detector
+---
+## Collision detector
* __Blueprint:__ sensor.other.collision
* __Output:__ [carla.CollisionEvent](python_api.md#carla.CollisionEvent) per collision.
@@ -34,8 +34,8 @@ Collision detectors do not have any configurable attribute.
| `other_actor` | [carla.Actor](python_api.md#carla.Actor) | Actor against whom the parent collided. |
| `normal_impulse` | [carla.Vector3D](python_api.md#carla.Vector3D) | Normal impulse result of the collision. |
----------------
-##Depth camera
+---
+## Depth camera
* __Blueprint:__ sensor.camera.depth
* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
@@ -92,8 +92,8 @@ There are two options in [carla.colorConverter](python_api.md#carla.ColorConvert
| `fov` | float | Horizontal field of view in degrees. |
| `raw_data` | bytes | Array of BGRA 32-bit pixels. |
----------------
-##GNSS sensor
+---
+## GNSS sensor
* __Blueprint:__ sensor.other.gnss
* __Output:__ [carla.GNSSMeasurement](python_api.md#carla.GNSSMeasurement) per step (unless `sensor_tick` says otherwise).
@@ -126,8 +126,8 @@ Reports current [gnss position](https://www.gsa.europa.eu/european-gnss/what-gns
| `longitude` | double | Longitude of the actor. |
| `altitude` | double | Altitude of the actor. |
----------------
-##IMU sensor
+---
+## IMU sensor
* __Blueprint:__ sensor.other.imu
* __Output:__ [carla.IMUMeasurement](python_api.md#carla.IMUMeasurement) per step (unless `sensor_tick` says otherwise).
@@ -163,8 +163,8 @@ Provides measures that accelerometer, gyroscope and compass would retrieve for t
| `gyroscope` | [carla.Vector3D](python_api.md#carla.Vector3D) | Measures angular velocity in `rad/sec`. |
| `compass` | float | Orientation in radians. North is `(0.0, -1.0, 0.0)` in UE. |
----------------
-##Lane invasion detector
+---
+## Lane invasion detector
* __Blueprint:__ sensor.other.lane_invasion
* __Output:__ [carla.LaneInvasionEvent](python_api.md#carla.LaneInvasionEvent) per crossing.
@@ -192,8 +192,8 @@ This sensor does not have any configurable attribute.
| `crossed_lane_markings` | list([carla.LaneMarking](python_api.md#carla.LaneMarking)) | List of lane markings that have been crossed. |
----------------
-##Lidar raycast sensor
+---
+## Lidar raycast sensor
* __Blueprint:__ sensor.lidar.ray_cast
* __Output:__ [carla.LidarMeasurement](python_api.md#carla.LidarMeasurement) per step (unless `sensor_tick` says otherwise).
@@ -212,8 +212,7 @@ for location in lidar_measurement:
```
!!! Tip
- Running the simulator at [fixed time-step](configuring_the_simulation.md#fixed-time-step) it is possible to tune the rotation for each measurement. Adjust the
- step and the rotation frequency to get, for instance, a 360 view each measurement.
+ Running the simulator at a [fixed time-step](adv_synchrony_timestep.md) makes it possible to tune the rotation of each measurement. Adjust the time-step and the rotation frequency to get, for instance, a 360º view on each measurement.
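
A hedged sketch of that tuning, using the `fixed_delta_seconds` world setting and the lidar blueprint's `rotation_frequency` attribute (the values are illustrative):

```py
# One full rotation per simulation step: 20 steps/s and 20 rotations/s.
settings = world.get_settings()
settings.fixed_delta_seconds = 0.05
world.apply_settings(settings)

lidar_bp.set_attribute('rotation_frequency', '20')
```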
![LidarPointCloud](img/lidar_point_cloud.gif)
@@ -243,7 +242,7 @@ for location in lidar_measurement:
| `get_point_count(channel)` | int | Number of points per channel captured this frame. |
| `raw_data` | bytes | Array of 32-bits floats (XYZ of each point). |
----------------
+---
## Obstacle detector
* __Blueprint:__ sensor.other.obstacle
@@ -274,7 +273,7 @@ To ensure that collisions with any kind of object are detected, the server creat
| `other_actor` | [carla.Actor](python_api.md#carla.Actor) | Actor detected as an obstacle. |
| `distance` | float | Distance from `actor` to `other_actor`. |
----------------
+---
## Radar sensor
* __Blueprint:__ sensor.other.radar
@@ -317,7 +316,7 @@ The provided script `manual_control.py` uses this sensor to show the points bein
| `depth` | float | Distance in meters. |
| `velocity` | float | Velocity towards the sensor. |
----------------
+---
## RGB camera
* __Blueprint:__ sensor.camera.rgb
@@ -425,8 +424,8 @@ Since these effects are provided by UE, please make sure to check their document
| `fov` | float | Horizontal field of view in degrees. |
| `raw_data` | bytes | Array of BGRA 32-bit pixels. |
----------------
-##Semantic segmentation camera
+---
+## Semantic segmentation camera
* __Blueprint:__ sensor.camera.semantic_segmentation
* __Output:__ [carla.Image](python_api.md#carla.Image) per step (unless `sensor_tick` says otherwise).
diff --git a/Docs/release_readme.md b/Docs/release_readme.md
deleted file mode 100644
index d00e264ed..000000000
--- a/Docs/release_readme.md
+++ /dev/null
@@ -1,54 +0,0 @@
-CARLA Simulator
-===============
-
-Thanks for downloading CARLA!
-
-AD Responsibility Sensitive Safety model (RSS) integration
-
-> _This feature is a work in progress; only a Linux build variant is available._
-
-This feature integrates the [C++ Library for Responsibility Sensitive Safety](https://github.com/intel/ad-rss-lib) into the CARLA Client library.
-
-**As the _ad-rss-lib_ library is licensed under LGPL-2.1-only, building the variant which includes this feature, and therefore the library, might have some implications for the outgoing license of the resulting binary!**
-
-It provides basic implementations of an **RssSensor**, the situation analysis and response generation by the **ad-rss-lib**, and a basic **RssRestrictor** class which applies the restrictions to given vehicle commands.
-
-The **RssSensor** results can be visualized within CARLA.
-[![RSS safety sensor in CARLA](img/rss_carla_integration.png)](https://www.youtube.com/watch?v=UxKPXPT2T8Q)
-
-
-Please see [C++ Library for Responsibility Sensitive Safety documentation](https://intel.github.io/ad-rss-lib/) and especially the [Background documentation](https://intel.github.io/ad-rss-lib/documentation/Main.html) for further details.
-
-
-Compilation
-
-RSS integration is a Linux-only build variant.
-Please see [Build System](dev/build_system.md) for general information.
-*LibCarla* with RSS has to be explicitly compiled by
-
-```sh
-make LibCarla.client.rss
-```
-
-The *PythonAPI* with RSS is built by
-
-```sh
-make PythonAPI.rss
-```
-
-
-Current state
-RssSensor
-The RssSensor currently only considers vehicles within the same road segment, but on all lanes within that segment. Intersections are not yet supported!
-
-RssRestrictor
-The current implementation of the RssRestrictor checks and potentially modifies a given *VehicleControl* generated e.g. by an Automated Driving stack or by user input via a *manual_control* client (see *PythonAPI/examples/manual_control_rss.py*).
-
-Due to the structure of *VehicleControl* (just throttle, brake and steering values for the car under control), the Restrictor modifies and sets these values to best reach the desired accelerations or decelerations given by a restriction. Due to car physics and the simple control options, these might not be met.
diff --git a/Docs/getting_started/introduction.md b/Docs/start_introduction.md
similarity index 92%
rename from Docs/getting_started/introduction.md
rename to Docs/start_introduction.md
index 9e70b95d5..545a7863c 100644
--- a/Docs/getting_started/introduction.md
+++ b/Docs/start_introduction.md
@@ -1,6 +1,6 @@
-#CARLA
+# CARLA
-![Welcome to CARLA](../img/welcome.png)
+![Welcome to CARLA](img/welcome.png)
!!! important
This documentation refers to the latest development versions of CARLA, 0.9.0 or
@@ -10,14 +10,14 @@ CARLA is an open-source autonomous driving simulator. It was built from scratch
In order to smooth the process of developing, training and validating driving systems, CARLA evolved to become an ecosystem of projects built around the main platform by the community. In this context, it is important to understand some things about how CARLA works, so as to fully comprehend its capabilities.
----------------
-##The simulator
+---
+## The simulator
The CARLA simulator consists of a scalable client-server architecture.
The server is responsible for everything related to the simulation itself: sensor rendering, computation of physics, updates on the world-state and its actors, and much more. As it aims for realistic results, the best fit would be running the server with a dedicated GPU, especially when dealing with machine learning.
The client side consists of a sum of client modules controlling the logic of actors on the scene and setting world conditions. This is achieved by leveraging the CARLA API (in Python or C++), a layer that mediates between server and client and is constantly evolving to provide new functionalities.
-![CARLA Modules](../img/carla_modules.png)
+![CARLA Modules](img/carla_modules.png)
That summarizes the basic structure of the simulator. Understanding CARLA, though, is much more than that, as many different features and elements coexist within it. Some of these are listed hereunder, so as to gain perspective on what CARLA can achieve.
@@ -28,8 +28,8 @@ That summarizes the basic structure of the simulator. Understanding CARLA though
* __Open assets:__ CARLA provides different maps for urban settings with control over weather conditions, and a blueprint library with a wide set of actors to be used. However, these elements can be customized and new ones can be generated following simple guidelines.
* __Scenario runner:__ In order to ease the learning process for vehicles, CARLA provides a series of routes describing different situations to iterate on. These also set the basis for the [CARLA challenge](https://carlachallenge.org/), open for everybody to test their solutions and make it to the leaderboard.
----------------
-##The project
+---
+## The project
CARLA grows fast and steadily, widening the range of solutions provided and opening the way for different approaches to autonomous driving. It does so while never forgetting its open-source nature. The project is transparent, acting as a white box where anybody is granted access to the tools and the development community. That democratization is where CARLA finds its value.
Talking about how CARLA grows means talking about a community of developers who dive together into the complex question of autonomous driving. Everybody is free to explore with CARLA, find their own solutions and then share their achievements with the rest of the community.
@@ -40,11 +40,11 @@ Welcome to CARLA.
@@ -340,7 +340,7 @@ Then build RecastDemo. Follow their [instructions][buildrecastlink] on how to bu
Now pedestrians will be able to spawn randomly and walk on the selected meshes!
-----------
+---
## Tips and Tricks
* Traffic light group controls which traffic light is active (green state) at each moment.
diff --git a/Docs/dev/map_customization.md b/Docs/tuto_A_map_customization.md
similarity index 96%
rename from Docs/dev/map_customization.md
rename to Docs/tuto_A_map_customization.md
index 63f0c205b..d4fc046c4 100644
--- a/Docs/dev/map_customization.md
+++ b/Docs/tuto_A_map_customization.md
@@ -2,8 +2,8 @@
> _This document is a work in progress and might be incomplete._
-Creating a new map
-------------------
+---
+## Creating a new map
!!! Bug
Creating a map from scratch with the Carla tools causes a crash with
@@ -13,7 +13,7 @@ Creating a new map
#### Requirements
- - Checkout and build Carla from source on [Linux](../how_to_build_on_linux.md) or [Windows](../how_to_build_on_windows.md)
+ - Checkout and build Carla from source on [Linux](build_linux.md) or [Windows](build_windows.md).
#### Creating
@@ -25,7 +25,7 @@ Creating a new map
- You can change the seed until you have a map you are satisfied with.
- After that you can place new PlayerStarts at the places you want the cars to be spawned.
- The AI already works, but the cars won't act randomly. Vehicles will follow the instructions given by the RoadMapGenerator. They will follow the road easily on straight stretches, but not so well when entering intersections:
-![road_instructions_example.png](../img/road_instructions_example.png)
+![road_instructions_example.png](img/road_instructions_example.png)
> (This is a debug view of the instructions the road gives to the vehicle. It will always follow the green arrows. The white points are shared between one or more routes and, by default, order the vehicle to continue straight. Black points are off the road; there the vehicle gets no instructions and drives to the left, trying to get back to the road.)
- To get a random behavior, you have to place IntersectionEntrances; these let you redefine the direction the vehicle will take, overwriting the directions given by the road map (until they finish their given order).
@@ -37,8 +37,8 @@ Creating a new map
Every street at a crossing should have its own turn at green, without the other streets being green at the same time.
- Then you can populate the world with landscape and buildings.
-MultipleFloorBuilding
----------------------
+---
+## MultipleFloorBuilding
The purpose of this blueprint is to make repeating and varying tall buildings a
bit easier. Provided a Base, a MiddleFloor and a Roof, this blueprint repeats
@@ -60,8 +60,9 @@ This blueprint is controlled by this 6 specific Parameters:
All of these parameters can be modified once this blueprint is placed in the
world.
-SplinemeshRepeater
-------------------
+---
+## SplinemeshRepeater
+
!!! Bug
See [#35 SplineMeshRepeater loses its collider mesh](https://github.com/carla-simulator/carla/issues/35)
that all the meshes have their pivot placed wherever the repetition starts, at
the lowest point possible, with the rest of the mesh pointing positive (preferably
along the X axis)
-
#### Specific Walls (Dynamic material)
The project folder "Content/Static/Walls" includes some specific assets
The rest of the parameters are the mask, the textures and the color corrections;
they won't be modified in this instance but in the blueprint that will be
launched into the world.
-Weather
--------
+---
+## Weather
This is the actor in charge of modifying all the lighting, environmental actors
and anything that affects the impression of the climate. It runs automatically
diff --git a/Docs/asset_packages_for_dist.md b/Docs/tuto_A_standalone_packages.md
similarity index 96%
rename from Docs/asset_packages_for_dist.md
rename to Docs/tuto_A_standalone_packages.md
index 3bd10fc05..c7eb958c3 100644
--- a/Docs/asset_packages_for_dist.md
+++ b/Docs/tuto_A_standalone_packages.md
@@ -6,8 +6,8 @@ The main objective for importing and exporting assets is to reduce the size of
the distribution build. This is possible since these assets will be imported as
independent packages that can be plugged into Carla at any time and also exported.
-How to import assets inside Unreal Engine
------------------------------------------
+---
+## How to import assets inside Unreal Engine
The first step is to create an empty folder inside the Carla `Import` folder and give it any
name desired. To simplify this newly created folder structure, we recommend having
@@ -155,7 +155,7 @@ _required files and place them following the structure listed above._
_If the process doesn't work due to different names or other issues, you can always move the assets_
_manually, check this [`tutorial`][importtutorial]_ (_Section 3.2.1 - 6_).
-[importtutorial]: ../how_to_make_a_new_map/#32-importing-from-the-files
+[importtutorial]: tuto_A_map_creation.md#32-importing-from-the-files
Now we have everything ready for importing assets. To do so, you just need to run the command:
@@ -175,10 +175,10 @@ _a new one with the same name._
The imported map won't have collisions, so they should be generated manually. This
[tutorial][collisionlink] (_Section 3.2.1 - 5_) shows how to do it.
-[collisionlink]: ../how_to_make_a_new_map/#32-importing-from-the-files
+[collisionlink]: tuto_A_map_creation.md#32-importing-from-the-files
-How to export assets
---------------------
+---
+## How to export assets
Once all the packages are imported inside Unreal, users can also generate a **cooked package**
for each of them. This last step is important in order to have all packages ready to add for
diff --git a/Docs/how_to_model_vehicles.md b/Docs/tuto_A_vehicle_modelling.md
similarity index 99%
rename from Docs/how_to_model_vehicles.md
rename to Docs/tuto_A_vehicle_modelling.md
index 845da8d61..6ba949ce8 100644
--- a/Docs/how_to_model_vehicles.md
+++ b/Docs/tuto_A_vehicle_modelling.md
@@ -1,6 +1,6 @@
# How to model vehicles
-------------
+---
## 4-Wheeled Vehicles
#### Modelling
diff --git a/Docs/dev/how_to_upgrade_content.md b/Docs/tuto_D_contribute_assets.md
similarity index 100%
rename from Docs/dev/how_to_upgrade_content.md
rename to Docs/tuto_D_contribute_assets.md
diff --git a/Docs/dev/how_to_add_a_new_sensor.md b/Docs/tuto_D_create_sensor.md
similarity index 99%
rename from Docs/dev/how_to_add_a_new_sensor.md
rename to Docs/tuto_D_create_sensor.md
index 20e952e18..5bde0dd44 100644
--- a/Docs/dev/how_to_add_a_new_sensor.md
+++ b/Docs/tuto_D_create_sensor.md
@@ -5,14 +5,16 @@ the necessary steps to implement a sensor in Unreal Engine 4 (UE4) and expose
its data via CARLA's Python API. We'll follow all the steps by creating a new
sensor as an example.
+---
## Prerequisites
In order to implement a new sensor, you'll need to compile the CARLA source code;
for detailed instructions on how to achieve this, see
-[Building from source](../building_from_source.md).
+[Building from source](build_linux.md).
This tutorial also assumes the reader is fluent in C++ programming.
+---
## Introduction
Sensors in CARLA are a special type of actor that produce a stream of data. Some
@@ -31,7 +33,7 @@ In this tutorial, we'll be focusing on server-side sensors.
In order to have a sensor running inside UE4 sending data all the way to a
Python client, we need to cover the whole communication pipeline.
-![Communication pipeline](../img/pipeline.png)
+![Communication pipeline](img/pipeline.png)
Thus we'll need the following classes covering the different steps of the
pipeline
@@ -53,6 +55,7 @@ pipeline
sort of "compile-time plugin system" based on template meta-programming.
Most likely, the code won't compile until all the pieces are present.
+---
## Creating a new sensor
[**Full source code here.**](https://gist.github.com/nsubiron/011fd1b9767cd441b1d8467dc11e00f9)
@@ -62,12 +65,13 @@ that we'll create a trigger box that detects objects within, and we'll be
reporting status to the client every time a vehicle is inside our trigger box.
Let's call it _Safe Distance Sensor_.
-![Trigger box](../img/safe_distance_sensor.jpg)
+![Trigger box](img/safe_distance_sensor.jpg)
_For the sake of simplicity we're not going to take into account all the edge
cases, nor will it be implemented in the most efficient way. This is just an
illustrative example._
+---
### 1. The sensor actor
This is the most complicated class we're going to create. Here we're running
@@ -291,6 +295,7 @@ that, the data is going to travel through several layers. First of them will be
the serializer that we have to create next. We'll fully understand this part
once we have completed the `Serialize` function in the next section.
+---
### 2. The sensor data serializer
This class is actually rather simple; it's only required to have two static
@@ -360,6 +365,7 @@ SharedPtr
- See [How to upgrade content](how_to_upgrade_content.md).
+ See [Upgrade the content](tuto_D_contribute_assets.md).
2. **Increase CARLA version where necessary.**
Increase version in the following files: _DefaultGame.ini_, _Carla.uplugin_,
diff --git a/Docs/how_to_add_friction_triggers.md b/Docs/tuto_G_add_friction_triggers.md
similarity index 100%
rename from Docs/how_to_add_friction_triggers.md
rename to Docs/tuto_G_add_friction_triggers.md
diff --git a/Docs/how_to_control_vehicle_physics.md b/Docs/tuto_G_control_vehicle_physics.md
similarity index 99%
rename from Docs/how_to_control_vehicle_physics.md
rename to Docs/tuto_G_control_vehicle_physics.md
index 24798176e..279cf65d0 100644
--- a/Docs/how_to_control_vehicle_physics.md
+++ b/Docs/tuto_G_control_vehicle_physics.md
@@ -9,6 +9,7 @@ These properties are controlled through a
which also provides the control of each wheel's physics through a
[carla.WheelPhysicsControl](/python_api/#carla.WheelPhysicsControl) object.
+---
## Example
```py
diff --git a/Docs/walker_bone_control.md b/Docs/tuto_G_control_walker_skeletons.md
similarity index 98%
rename from Docs/walker_bone_control.md
rename to Docs/tuto_G_control_walker_skeletons.md
index caa8430c5..39bc39a2d 100644
--- a/Docs/walker_bone_control.md
+++ b/Docs/tuto_G_control_walker_skeletons.md
@@ -10,7 +10,8 @@ all classes and methods available can be found at
The user should read the first steps tutorial before reading this document:
[Core concepts](core_concepts.md).
-### Walker skeleton structure
+---
+## Walker skeleton structure
All walkers have the same skeleton hierarchy and bone names. Below is an image of the skeleton
hierarchy.
@@ -84,7 +85,8 @@ crl_root
└── crl_toeEnd__R
```
-### How to manually control a walker's bones
+---
+## How to manually control a walker's bones
The following is a detailed, step-by-step example of how to change the bone transforms of a walker
from the CARLA Python API
diff --git a/PythonAPI/docs/actor.yml b/PythonAPI/docs/actor.yml
index 58a28dd6c..5b03c5851 100644
--- a/PythonAPI/docs/actor.yml
+++ b/PythonAPI/docs/actor.yml
@@ -5,7 +5,7 @@
- class_name: Actor
# - DESCRIPTION ------------------------
doc: >
- CARLA defines actors as anything that plays a role in the simulation or can be moved around. That includes: pedestrians, vehicles, sensors and traffic signs (considering traffic lights as part of these). Actors are spawned in the simulation by carla.World and they need for a carla.ActorBlueprint to be created. These blueprints belong into a library provided by CARLA, find more about them [here](../bp_library/).
+ CARLA defines actors as anything that plays a role in the simulation or can be moved around. That includes: pedestrians, vehicles, sensors and traffic signs (considering traffic lights as part of these). Actors are spawned in the simulation by carla.World and they need a carla.ActorBlueprint to be created. These blueprints belong to a library provided by CARLA; find more about them [here](bp_library.md).
# - PROPERTIES -------------------------
instance_variables:
- var_name: attributes
@@ -23,7 +23,7 @@
- var_name: semantic_tags
type: list(int)
doc: >
- A list of semantic tags provided by the blueprint listing components for this actor. E.g. a traffic light could be tagged with "pole" and "traffic light". These tags are used by the semantic segmentation sensor. Find more about this and other sensors [here](../cameras_and_sensors/#sensor.camera.semantic_segmentation).
+ A list of semantic tags provided by the blueprint listing components for this actor. E.g. a traffic light could be tagged with "pole" and "traffic light". These tags are used by the semantic segmentation sensor. Find more about this and other sensors [here](ref_sensors.md#semantic-segmentation-camera).
- var_name: type_id
type: str
doc: >
@@ -321,7 +321,7 @@
- class_name: TrafficLightState
# - DESCRIPTION ------------------------
doc: >
- All possible states for traffic lights. These can either change at a specific time step or be changed manually. Take a look at this [recipe](../python_cookbook/#traffic-lights-recipe) to see an example.
+ All possible states for traffic lights. These can either change at a specific time step or be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to see an example.
# - PROPERTIES -------------------------
instance_variables:
- var_name: Green
@@ -337,7 +337,7 @@
doc: >
A traffic light actor, considered a specific type of traffic sign. As traffic lights will mostly appear at junctions, they belong to a group which contains the different traffic lights in it. Inside the group, traffic lights are differentiated by their pole index.
- Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually. Take a look at this [recipe](../python_cookbook/#traffic-lights-recipe) to learn how to do so.
+ Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to learn how to do so.
# - PROPERTIES -------------------------
instance_variables:
- var_name: state
diff --git a/PythonAPI/docs/blueprint.yml b/PythonAPI/docs/blueprint.yml
index 1c371a5c7..4e7c2b4a0 100644
--- a/PythonAPI/docs/blueprint.yml
+++ b/PythonAPI/docs/blueprint.yml
@@ -235,7 +235,7 @@
doc: >
A class that contains the blueprints provided for actor spawning. Its main application is to return carla.ActorBlueprint objects needed to spawn actors. Each blueprint has an identifier and attributes that may or may not be modifiable. The library is automatically created by the server and can be accessed through carla.World.
- [Here](../bp_library/) is a reference containing every available blueprint and its specifics.
+ [Here](bp_library.md) is a reference containing every available blueprint and its specifics.
# - METHODS ----------------------------
methods:
- def_name: __getitem__
diff --git a/PythonAPI/docs/client.yml b/PythonAPI/docs/client.yml
index 586424b67..86bbe9d3f 100644
--- a/PythonAPI/docs/client.yml
+++ b/PythonAPI/docs/client.yml
@@ -9,7 +9,7 @@
doc: >
The Client connects CARLA to the server which runs the simulation. Both server and client contain a CARLA library (libcarla) with some differences that allow communication between them. Many clients can be created and each of these will connect to the RPC server inside the simulation to send commands. The simulation runs server-side. Once the connection is established, the client will only receive data retrieved from the simulation. Walkers are the exception. The client is in charge of managing pedestrians so, if you are running a simulation with multiple clients, some issues may arise. For example, if you spawn walkers through different clients, collisions may happen, as each client is only aware of the ones it is in charge of.
- The client also has a recording feature that saves all the information of a simulation while running it. This allows the server to replay it at will to obtain information and experiment with it. [Here](recorder_and_playback.md) is some information about how to use this recorder.
+ The client also has a recording feature that saves all the information of a simulation while running it. This allows the server to replay it at will to obtain information and experiment with it. [Here](adv_recorder.md) is some information about how to use this recorder.
# - PROPERTIES -------------------------
instance_variables:
# - METHODS ----------------------------
@@ -223,7 +223,7 @@
doc: >
When true, will show all the details per frame (traffic light states, positions of all actors, orientation and animation data...), but by default it will only show a summary.
doc: >
- The information saved by the recorder will be parsed and shown in your terminal as text (frames, times, events, state, positions...). The information shown can be specified by using the `show_all` parameter. [Here](recorder_binary_file_format.md) is some more information about how to read the recorder file.
+ The information saved by the recorder will be parsed and shown in your terminal as text (frames, times, events, state, positions...). The information shown can be specified by using the `show_all` parameter. [Here](ref_recorder_binary_file_format.md) is some more information about how to read the recorder file.
# --------------------------------------
- def_name: start_recorder
params:
diff --git a/PythonAPI/docs/control.yml b/PythonAPI/docs/control.yml
index 39255efb8..ea5b3a90b 100644
--- a/PythonAPI/docs/control.yml
+++ b/PythonAPI/docs/control.yml
@@ -139,7 +139,7 @@
- class_name: WalkerBoneControl
# - DESCRIPTION ------------------------
doc: >
- This class grants bone specific manipulation for walker. The skeletons of walkers have been unified for clarity and the transform applied to each bone are always relative to its parent. Take a look [here](walker_bone_control.md) to learn more on how to create a walker and define its movement.
+ This class grants bone-specific manipulation for walkers. The skeletons of walkers have been unified for clarity, and the transforms applied to each bone are always relative to its parent. Take a look [here](tuto_G_control_walker_skeletons.md) to learn more on how to create a walker and define its movement.
# - PROPERTIES -------------------------
instance_variables:
- var_name: bone_transforms
diff --git a/PythonAPI/docs/geom.yml b/PythonAPI/docs/geom.yml
index 5b8d08845..f559058ad 100644
--- a/PythonAPI/docs/geom.yml
+++ b/PythonAPI/docs/geom.yml
@@ -362,7 +362,7 @@
- class_name: BoundingBox
# - DESCRIPTION ------------------------
doc: >
- Helper class defining a box location and its dimensions that will later be used by carla.DebugHelper or a carla.Client to draw shapes and detect collisions. Bounding boxes normally act for object colliders. Check out this [recipe](../python_cookbook/#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
+ Helper class defining a box location and its dimensions that will later be used by carla.DebugHelper or a carla.Client to draw shapes and detect collisions. Bounding boxes normally act as object colliders. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
# - PROPERTIES -------------------------
instance_variables:
- var_name: location
diff --git a/PythonAPI/docs/map.yml b/PythonAPI/docs/map.yml
index 583d69c7c..8d4cd4dea 100644
--- a/PythonAPI/docs/map.yml
+++ b/PythonAPI/docs/map.yml
@@ -6,7 +6,7 @@
- class_name: LaneType
# - DESCRIPTION ------------------------
doc: >
- Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standards define the road information. For instance in this [recipe](../python_cookbook/#lanes-recipe) the user creates a carla.Waypoint for the current location of a vehicle and uses it to get the current and adjacent lane types.
+ Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standard defines the road information. For instance, in this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a carla.Waypoint for the current location of a vehicle and uses it to get the current and adjacent lane types.
# - PROPERTIES -------------------------
instance_variables:
- var_name: NONE
@@ -58,7 +58,7 @@
- class_name: LaneChange
# - DESCRIPTION ------------------------
doc: >
- Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every carla.Waypoint according to the OpenDRIVE file. In this [recipe](../python_cookbook/#lanes-recipe) the user creates a waypoint for a current vehicle position and learns which turns are permitted.
+ Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every carla.Waypoint according to the OpenDRIVE file. In this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a waypoint for a current vehicle position and learns which turns are permitted.
# - PROPERTIES -------------------------
instance_variables:
- var_name: NONE
@@ -99,7 +99,7 @@
- class_name: LaneMarkingType
# - DESCRIPTION ------------------------
doc: >
- Class that defines the lane marking types accepted by OpenDRIVE 1.4. Take a look at this [recipe](../python_cookbook/#lanes-recipe) where the user creates a carla.Waypoint for a vehicle location and retrieves from it the information about adjacent lane markings.
+ Class that defines the lane marking types accepted by OpenDRIVE 1.4. Take a look at this [recipe](ref_code_recipes.md#lanes-recipe) where the user creates a carla.Waypoint for a vehicle location and retrieves from it the information about adjacent lane markings.
__Note on double types:__ Lane markings are defined under the OpenDRIVE standard that determines whether a line will be considered "BrokenSolid" or "SolidBroken". For each road there is a center lane marking, defined from left to right regarding the lane's directions. The rest of the lane markings are defined in order from the center lane to the closest outside of the road.
# - PROPERTIES -------------------------
diff --git a/PythonAPI/docs/sensor_data.yml b/PythonAPI/docs/sensor_data.yml
index 2809a46f9..b14a7169d 100644
--- a/PythonAPI/docs/sensor_data.yml
+++ b/PythonAPI/docs/sensor_data.yml
@@ -33,7 +33,7 @@
- class_name: ColorConverter
# - DESCRIPTION ------------------------
doc: >
- Class that defines conversion patterns that can be applied to a carla.Image in order to show information provided by carla.Sensor. Depth conversions cause a loss of accuracy, as sensors detect depth as float that is then converted to a grayscale value between 0 and 255. Take a look a this [recipe](../python_cookbook/#converted-image-recipe) to see an example of how to create and save image data for sensor.camera.semantic_segmentation.
+ Class that defines conversion patterns that can be applied to a carla.Image in order to show information provided by carla.Sensor. Depth conversions cause a loss of accuracy, as sensors detect depth as a float that is then converted to a grayscale value between 0 and 255. Take a look at this [recipe](ref_code_recipes.md#converted-image-recipe) to see an example of how to create and save image data for sensor.camera.semantic_segmentation.
# - PROPERTIES -------------------------
instance_variables:
- var_name: CityScapesPalette
diff --git a/PythonAPI/docs/world.yml b/PythonAPI/docs/world.yml
index a422ce77a..225424373 100644
--- a/PythonAPI/docs/world.yml
+++ b/PythonAPI/docs/world.yml
@@ -104,7 +104,7 @@
- class_name: WorldSettings
# - DESCRIPTION ------------------------
doc: >
- The simulation has some advanced configuration options that are contained in this class and can be managed using carla.World and its methods. These allow the user to choose between client-server synchrony/asynchrony, activation of "no rendering mode" and either if the simulation should run with a fixed or variable time-step. Check [this](../configuring_the_simulation/) out if you want to learn about it.
+ The simulation has some advanced configuration options that are contained in this class and can be managed using carla.World and its methods. These allow the user to choose between client-server synchrony/asynchrony, activation of "no rendering mode", and whether the simulation should run with a fixed or variable time-step. Check [this](adv_synchrony_timestep.md) out if you want to learn about it.
# - PROPERTIES -------------------------
instance_variables:
- var_name: synchronous_mode
@@ -170,7 +170,7 @@
- class_name: AttachmentType
# - DESCRIPTION ------------------------
doc: >
- Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is specially useful for cameras and sensors. [Here](../python_cookbook/#attach-sensors-recipe) is a brief recipe in which we can see how sensors can be attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
+ Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is especially useful for cameras and sensors. [Here](ref_code_recipes.md#attach-sensors-recipe) is a brief recipe in which we can see how sensors can be attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
# - PROPERTIES -------------------------
instance_variables:
@@ -179,7 +179,7 @@
With this fixed attachment the object follows its parent position strictly.
- var_name: SpringArm
doc: >
- An attachment that expands or retracts depending on camera situation. SpringArms are an Unreal Engine component so [check this out](../python_cookbook/#attach-sensors-recipe) to learn some more about them.
+ An attachment that expands or retracts depending on the camera situation. SpringArms are an Unreal Engine component, so [check this out](ref_code_recipes.md#attach-sensors-recipe) to learn some more about them.
# --------------------------------------
- class_name: World
@@ -361,7 +361,7 @@
- class_name: DebugHelper
# - DESCRIPTION ------------------------
doc: >
- Helper class part of carla.World that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Check out this [recipe](../python_cookbook/#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
+ Helper class part of carla.World that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
# - METHODS ----------------------------
methods:
- def_name: draw_point
diff --git a/mkdocs.yml b/mkdocs.yml
index 7c2b3f19b..d27b4f5d6 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -8,15 +8,15 @@ extra_css: [extra.css]
nav:
- Home: 'index.md'
- Getting started:
- - 'Introduction': 'getting_started/introduction.md'
- - 'Quick start': 'getting_started/quickstart.md'
+ - 'Introduction': 'start_introduction.md'
+ - 'Quickstart installation': 'start_quickstart.md'
- Building CARLA:
- - 'Linux build': 'how_to_build_on_linux.md'
- - 'Windows build': 'how_to_build_on_windows.md'
- - 'Update CARLA': 'update_carla.md'
- - 'Build system': 'dev/build_system.md'
- - 'Running in a Docker': 'carla_docker.md'
- - 'F.A.Q.': 'faq.md'
+ - 'Linux build': 'build_linux.md'
+ - 'Windows build': 'build_windows.md'
+ - 'Update CARLA': 'build_update.md'
+ - 'Build system': 'build_system.md'
+ - 'Running in a Docker': 'build_docker.md'
+ - 'F.A.Q.': 'build_faq.md'
- First steps:
- 'Core concepts': 'core_concepts.md'
- '1st. World and client': 'core_world.md'
@@ -24,36 +24,37 @@ nav:
- '3rd. Maps and navigation': 'core_map.md'
- '4th. Sensors and data': 'core_sensors.md'
- Advanced steps:
- - 'Recorder': 'recorder_and_playback.md'
- - 'Rendering options': 'rendering_options.md'
- - 'Synchrony and time-step': 'simulation_time_and_synchrony.md'
+ - 'Recorder': 'adv_recorder.md'
+ - 'Rendering options': 'adv_rendering_options.md'
+ - 'Synchrony and time-step': 'adv_synchrony_timestep.md'
- References:
- 'Python API reference': 'python_api.md'
- - 'Code recipes': 'python_cookbook.md'
+ - 'Code recipes': 'ref_code_recipes.md'
- 'Blueprint Library': 'bp_library.md'
- - 'C++ reference' : 'cpp_reference.md'
- - 'Recorder binary file format': 'recorder_binary_file_format.md'
+ - 'C++ reference' : 'ref_cpp.md'
+ - 'Recorder binary file format': 'ref_recorder_binary_file_format.md'
- "Sensors reference": 'ref_sensors.md'
-- How to... (general):
- - 'Add a new sensor': 'dev/how_to_add_a_new_sensor.md'
- - 'Add friction triggers': "how_to_add_friction_triggers.md"
- - 'Control vehicle physics': "how_to_control_vehicle_physics.md"
- - 'Control walker skeletons': "walker_bone_control.md"
- - 'Creating standalone asset packages for distribution': 'asset_packages_for_dist.md'
- - 'Generate pedestrian navigation': 'how_to_generate_pedestrians_navigation.md'
- - "Link Epic's Automotive Materials": 'epic_automotive_materials.md'
- - 'Map customization': 'dev/map_customization.md'
-- How to... (content):
- - 'Add assets': 'how_to_add_assets.md'
- - 'Create and import a new map': 'how_to_make_a_new_map.md'
- - 'Model vehicles': 'how_to_model_vehicles.md'
+- Tutorials (general):
+ - 'Add friction triggers': "tuto_G_add_friction_triggers.md"
+ - 'Control vehicle physics': "tuto_G_control_vehicle_physics.md"
+ - 'Control walker skeletons': "tuto_G_control_walker_skeletons.md"
+- Tutorials (assets):
+ - 'Import new assets': 'tuto_A_import_assets.md'
+ - 'Map creation': 'tuto_A_map_creation.md'
+ - 'Map customization': 'tuto_A_map_customization.md'
+ - 'Standalone asset packages': 'tuto_A_standalone_packages.md'
+ - "Use Epic's Automotive materials": 'tuto_A_epic_automotive_materials.md'
+ - 'Vehicle modelling': 'tuto_A_vehicle_modelling.md'
+- Tutorials (developers):
+ - 'Contribute with assets': 'tuto_D_contribute_assets.md'
+ - 'Create a sensor': 'tuto_D_create_sensor.md'
+ - 'Make a release': 'tuto_D_make_release.md'
+ - 'Generate pedestrian navigation': 'tuto_D_generate_pedestrian_navigation.md'
- Contributing:
- - 'Contribution guidelines': 'CONTRIBUTING.md'
- - 'Coding standard': 'coding_standard.md'
- - 'Documentation standard': 'doc_standard.md'
- - 'Make a release': 'dev/how_to_make_a_release.md'
- - 'Upgrade the content': 'dev/how_to_upgrade_content.md'
- - 'Code of conduct': 'CODE_OF_CONDUCT.md'
+ - 'Contribution guidelines': 'cont_contribution_guidelines.md'
+ - 'Code of conduct': 'cont_code_of_conduct.md'
+ - 'Coding standard': 'cont_coding_standard.md'
+ - 'Documentation standard': 'cont_doc_standard.md'
markdown_extensions:
- admonition