From 31fff28bc6d24ddae0b3977f8ac343ed16019202 Mon Sep 17 00:00:00 2001
From: "sergi.e" <59253112+sergi-e@users.noreply.github.com>
Date: Thu, 15 Oct 2020 16:48:48 +0200
Subject: [PATCH] sergi-e/api-snipets (#3409)
* First iteration with snippets, a copy button, and a snippet button for all methods
* New iteration with snippets implemented and test snippets
* New iteration without yaml intervention, with the auxiliary doc deleted and buttons responsive to window width
* New iteration with snippets copied from recipes, images added, and an invisible button for small windows
* Fixed some margin issues & code formatting
* Removed recipe reference and its instances. Snippets now scale to window width. Added button to close a snippet. doc_gen_snipets.py is now imported in doc_gen.py. Cleaned css inside .py and added it to extra.css
* New iteration with two functionalities for snippet buttons. Pop-up for small windows.
* Fixed bp_library.md
Co-authored-by: Marc Garcia Puig
---
Docs/core_actors.md | 2 +-
Docs/core_concepts.md | 2 +-
Docs/core_world.md | 4 +-
Docs/extra.css | 118 +++-
.../carla.DebugHelper.draw_box.jpg} | Bin
.../carla.DebugHelper.draw_line.jpg} | Bin
.../carla.Map.get_waypoint.jpg} | Bin
.../carla.TrafficLight.set_state.gif} | Bin
Docs/index.md | 10 +-
Docs/python_api.md | 557 ++++++++++++++++--
Docs/ref_code_recipes.md | 372 ------------
PythonAPI/docs/actor.yml | 4 +-
PythonAPI/docs/doc_gen.py | 92 ++-
PythonAPI/docs/doc_gen_snipets.py | 146 +++++
PythonAPI/docs/geom.yml | 2 +-
PythonAPI/docs/map.yml | 8 +-
PythonAPI/docs/sensor_data.yml | 2 +-
.../carla.ActorBlueprint.set_attribute.py | 19 +
.../docs/snipets/carla.Client.__init__.py | 30 +
.../snipets/carla.Client.apply_batch_sync.py | 58 ++
.../snipets/carla.DebugHelper.draw_box.py | 13 +
.../snipets/carla.DebugHelper.draw_line.py | 24 +
.../docs/snipets/carla.Map.get_waypoint.py | 15 +
PythonAPI/docs/snipets/carla.Sensor.listen.py | 10 +
.../snipets/carla.TrafficLight.set_state.py | 28 +
.../snipets/carla.WalkerAIController.stop.py | 9 +
.../docs/snipets/carla.World.get_spectator.py | 20 +
.../docs/snipets/carla.World.spawn_actor.py | 10 +
PythonAPI/docs/world.yml | 6 +-
mkdocs.yml | 3 +-
30 files changed, 1111 insertions(+), 453 deletions(-)
rename Docs/img/{recipe_debug_bb.jpg => snipets_images/carla.DebugHelper.draw_box.jpg} (100%)
rename Docs/img/{recipe_debug_trail.jpg => snipets_images/carla.DebugHelper.draw_line.jpg} (100%)
rename Docs/img/{recipe_lane_marking.jpg => snipets_images/carla.Map.get_waypoint.jpg} (100%)
rename Docs/img/{tl_recipe.gif => snipets_images/carla.TrafficLight.set_state.gif} (100%)
delete mode 100644 Docs/ref_code_recipes.md
create mode 100755 PythonAPI/docs/doc_gen_snipets.py
create mode 100755 PythonAPI/docs/snipets/carla.ActorBlueprint.set_attribute.py
create mode 100755 PythonAPI/docs/snipets/carla.Client.__init__.py
create mode 100755 PythonAPI/docs/snipets/carla.Client.apply_batch_sync.py
create mode 100755 PythonAPI/docs/snipets/carla.DebugHelper.draw_box.py
create mode 100755 PythonAPI/docs/snipets/carla.DebugHelper.draw_line.py
create mode 100755 PythonAPI/docs/snipets/carla.Map.get_waypoint.py
create mode 100755 PythonAPI/docs/snipets/carla.Sensor.listen.py
create mode 100755 PythonAPI/docs/snipets/carla.TrafficLight.set_state.py
create mode 100755 PythonAPI/docs/snipets/carla.WalkerAIController.stop.py
create mode 100755 PythonAPI/docs/snipets/carla.World.get_spectator.py
create mode 100755 PythonAPI/docs/snipets/carla.World.spawn_actor.py
diff --git a/Docs/core_actors.md b/Docs/core_actors.md
index 39102d1b6..51eea777a 100644
--- a/Docs/core_actors.md
+++ b/Docs/core_actors.md
@@ -281,7 +281,7 @@ ai_controller.stop()
```
When a walker reaches the target location, they will automatically walk to another random point. If the target point is not reachable, walkers will go to the closest point from their current location.
-[This recipe](ref_code_recipes.md#walker-batch-recipe) uses batches to spawn a lot of walkers and make them wander around.
+A snippet in [carla.Client](python_api.md#carla.Client.apply_batch_sync) uses batches to spawn many walkers and make them wander around.
!!! Important
__To destroy AI pedestrians__, stop the AI controller and destroy both the actor and the controller.
diff --git a/Docs/core_concepts.md b/Docs/core_concepts.md
index 22e0c99e1..ce87df825 100644
--- a/Docs/core_concepts.md
+++ b/Docs/core_concepts.md
@@ -2,7 +2,7 @@
This page introduces the main features and modules in CARLA. Detailed explanations of the different subjects can be found in their corresponding page.
-In order to learn about the different classes and methods in the API, take a look at the [Python API reference](python_api.md). Besides, the [Code recipes](ref_code_recipes.md) reference contains some common code chunks, specially useful during these first steps.
+In order to learn about the different classes and methods in the API, take a look at the [Python API reference](python_api.md).
* [__First steps__](#first-steps)
* [1st- World and client](#1st-world-and-client)
diff --git a/Docs/core_world.md b/Docs/core_world.md
index c0c809b14..2b6d1593d 100644
--- a/Docs/core_world.md
+++ b/Docs/core_world.md
@@ -28,7 +28,7 @@ Take a look at [__carla.Client__](python_api.md#carla.Client) in the Python API
### Client creation
-Two things are needed. The __IP__ address identifying it, and __two TCP ports__ to communicate with the server. An optional third parameter sets the amount of working threads. By default this is set to all (`0`). [This code recipe](ref_code_recipes.md#parse-client-creation-arguments) shows how to parse these as arguments when running the script.
+Two things are needed. The __IP__ address identifying it, and __two TCP ports__ to communicate with the server. An optional third parameter sets the number of worker threads. By default this is set to all (`0`). The [carla.Client](python_api.md#carla.Client.__init__) section in the Python API reference contains a snippet that shows how to parse these as arguments when running the script.
```py
client = carla.Client('localhost', 2000)
@@ -224,7 +224,7 @@ debug = world.debug
debug.draw_box(carla.BoundingBox(actor_snapshot.get_transform().location,carla.Vector3D(0.5,0.5,2)),actor_snapshot.get_transform().rotation, 0.05, carla.Color(255,0,0,0),0)
```
-This example is extended in a [code recipe](ref_code_recipes.md#debug-bounding-box-recipe) to draw boxes for every actor in a world snapshot.
+This example is extended in a snippet in [carla.DebugHelper](python_api.md#carla.DebugHelper.draw_box) that shows how to draw boxes for every actor in a world snapshot.
### World snapshots
diff --git a/Docs/extra.css b/Docs/extra.css
index 341ee511e..8fdd2827d 100644
--- a/Docs/extra.css
+++ b/Docs/extra.css
@@ -23,7 +23,7 @@ table.defTable {
}
table.defTable thead {
- background: #ffffff;
+ background: #ffffff;
border-bottom: 1px solid #444444;
}
@@ -39,6 +39,13 @@ table.defTable tbody td{
padding: 7px 13px;
}
+/************************* INHERITED PYTHON API LINE **************************/
+
+.Inherited {
+ padding-left:30px;
+ margin-top:-20px
+}
+
/************************* TOWN SLIDER **************************/
* {box-sizing:border-box}
@@ -53,7 +60,7 @@ table.defTable tbody td{
/* Hide the images by default */
.townslide {
display: none;
- text-align: center;
+ text-align: center;
}
@@ -130,3 +137,110 @@ table.defTable tbody td{
background-color: #717171;
}
+/************************* COPY SCRIPT BUTTON **************************/
+
+.CopyScript {
+ box-shadow:inset 0px 1px 0px 0px #ffffff;
+ background:linear-gradient(to bottom, #ffffff 5%, #f6f6f6 100%);
+ background-color:#ffffff;
+ border-radius:6px;
+ border:1px solid #dcdcdc;
+ display:inline-block;
+ cursor:pointer;
+ color:#666666;
+ font-family:Arial;
+ font-size:15px;
+ font-weight:bold;
+ padding:6px 6px;
+ text-decoration:none;
+ text-shadow:0px 1px 0px #ffffff;
+ margin-left: 2px;
+}
+.CopyScript:hover {
+ background:linear-gradient(to bottom, #f6f6f6 5%, #ffffff 100%);
+ background-color:#f6f6f6;
+}
+.CopyScript:active {
+ position:relative;
+ top:1px;
+}
+
+/************************* CLOSE SNIPET BUTTON **************************/
+
+.CloseSnipet {
+ box-shadow:inset 0px 1px 0px 0px #ffffff;
+ background:linear-gradient(to bottom, #ffffff 5%, #f6f6f6 100%);
+ background-color:#ffffff;
+ border-radius:6px;
+ border:1px solid #dcdcdc;
+ display:inline-block;
+ cursor:pointer;
+ color:#666666;
+ font-family:Arial;
+ font-size:15px;
+ font-weight:bold;
+ padding:6px 6px;
+ text-decoration:none;
+ text-shadow:0px 1px 0px #ffffff;
+ margin-left: 2px;
+}
+.CloseSnipet:hover {
+ background:linear-gradient(to bottom, #ffe6e6 5%, #ffffff 100%);
+ background-color:#ffffff;
+}
+.CloseSnipet:active {
+ position:relative;
+ top:1px;
+}
+
+/************************* SNIPET TITLE **************************/
+
+.SnipetFont {
+ font-family: Arial, Helvetica, sans-serif;
+ font-size: 16px;
+ letter-spacing: 0.4px;
+ margin-left: 10px;
+ margin-bottom: 0px;
+ word-spacing: -2.2px;
+ color: #4675B1;
+ font-weight: 700;
+ text-decoration: rgb(68, 68, 68);
+ font-style: normal;
+ font-variant: normal;
+ text-transform: none;
+}
+
+/************************* SNIPET BUTTON **************************/
+
+.SnipetButton {
+ background-color: #476e9e;
+ border-radius:42px;
+ border:0px;
+ display:inline-block;
+ cursor:pointer;
+ color:#ffffff;
+ font-family:Arial;
+ font-size:12px;
+ padding:2px 3px;
+ text-decoration:none;
+ text-shadow:0px 1px 0px #2f6627;
+}
+
+/************************* SNIPET CONTENT **************************/
+
+.SnipetContent {
+ width: calc(100vw - 1150px);
+ margin-left: 10px;
+}
+
+/************************* SNIPET CONTAINER **************************/
+
+.Container {
+ position: fixed;
+ margin-left: 0px;
+ overflow-y: auto;
+ padding-left: 10px;
+ height: 95%;
+ top: 70px;
+ left: 1100px;
+}
diff --git a/Docs/img/recipe_debug_bb.jpg b/Docs/img/snipets_images/carla.DebugHelper.draw_box.jpg
similarity index 100%
rename from Docs/img/recipe_debug_bb.jpg
rename to Docs/img/snipets_images/carla.DebugHelper.draw_box.jpg
diff --git a/Docs/img/recipe_debug_trail.jpg b/Docs/img/snipets_images/carla.DebugHelper.draw_line.jpg
similarity index 100%
rename from Docs/img/recipe_debug_trail.jpg
rename to Docs/img/snipets_images/carla.DebugHelper.draw_line.jpg
diff --git a/Docs/img/recipe_lane_marking.jpg b/Docs/img/snipets_images/carla.Map.get_waypoint.jpg
similarity index 100%
rename from Docs/img/recipe_lane_marking.jpg
rename to Docs/img/snipets_images/carla.Map.get_waypoint.jpg
diff --git a/Docs/img/tl_recipe.gif b/Docs/img/snipets_images/carla.TrafficLight.set_state.gif
similarity index 100%
rename from Docs/img/tl_recipe.gif
rename to Docs/img/snipets_images/carla.TrafficLight.set_state.gif
diff --git a/Docs/index.md b/Docs/index.md
index 1b762e08f..1d28f9e0b 100644
--- a/Docs/index.md
+++ b/Docs/index.md
@@ -81,8 +81,6 @@ CARLA forum
[__Python API reference__](python_api.md)
— Classes and methods in the Python API.
- [__Code recipes__](ref_code_recipes.md)
- — Some code fragments commonly used.
[__Blueprint library__](bp_library.md)
— Blueprints provided to spawn actors.
[__C++ reference__](ref_cpp.md)
@@ -91,12 +89,14 @@ CARLA forum
— Detailed explanation of the recorder file format.
[__Sensors reference__](ref_sensors.md)
— Everything about sensors and the data they retrieve.
+
## Plugins
[__carlaviz — web visualizer__](plugins_carlaviz.md)
— Plugin that listens the simulation and shows the scene and some simulation data in a web browser.
-
+
- [__Contribute with new assets__](tuto_D_contribute_assets.md)
+ [__Contribute new assets__](tuto_D_contribute_assets.md)
— Add new content to CARLA.
[__Create a sensor__](tuto_D_create_sensor.md)
— Develop a new sensor to be used in CARLA.
- [__Create semantic tags_](tuto_D_create_semantic_tags.md)
+ [__Create semantic tags__](tuto_D_create_semantic_tags.md)
— Define new semantic tags for semantic segmentation.
[__Customize vehicle suspension__](tuto_D_customize_vehicle_suspension.md)
— Modify the suspension system of a vehicle.
diff --git a/Docs/python_api.md b/Docs/python_api.md
index 2bd8dc9c7..ab4d2f62e 100644
--- a/Docs/python_api.md
+++ b/Docs/python_api.md
@@ -193,7 +193,7 @@ Returns the actor's attribute with `id` as identifier if existing.
- **Setter:** _[carla.ActorBlueprint.set_attribute](#carla.ActorBlueprint.set_attribute)_
Setters
-- **set_attribute**(**self**, **id**, **value**)
+- **set_attribute**(**self**, **id**, **value**)
If the `id` attribute is modifiable, changes its value to `value`.
- **Parameters:**
- `id` (_str_) – The identifier for the attribute that is intended to be changed.
@@ -265,13 +265,13 @@ Returns the velocity vector registered for an actor in that tick.
---
## carla.AttachmentType
-Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is specially useful for sensors. [Here](ref_code_recipes.md#attach-sensors-recipe) is a brief recipe in which we can see how sensors can be attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
+Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is especially useful for sensors. The snippet in [carla.World.spawn_actor](#carla.World.spawn_actor) shows some sensors being attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
Instance Variables
- **Rigid**
With this fixed attachment the object follows its parent position strictly. This is the recommended attachment to retrieve precise data from the simulation.
- **SpringArm**
-An attachment that expands or retracts the position of the actor, depending on its parent. This attachment is only recommended to record videos from the simulation where a smooth movement is needed. SpringArms are an Unreal Engine component so [check this out](ref_code_recipes.md#attach-sensors-recipe) to learn some more about them. Warning: The SpringArm attachment presents weird behaviors when an actor is spawned with a relative translation in the Z-axis (e.g. child_location = Location(0,0,2)).
+An attachment that expands or retracts the position of the actor, depending on its parent. This attachment is only recommended to record videos from the simulation where a smooth movement is needed. SpringArms are an Unreal Engine component so [check the UE docs](https://docs.unrealengine.com/en-US/Gameplay/HowTo/UsingCameras/SpringArmComponents/index.html) to learn more about them. Warning: The SpringArm attachment presents weird behaviors when an actor is spawned with a relative translation in the Z-axis (e.g. child_location = Location(0,0,2)).
---
@@ -308,7 +308,7 @@ Parses the identifiers for every blueprint to string.
---
## carla.BoundingBox
-Bounding boxes contain the geometry of an actor or an element in the scene. They can be used by [carla.DebugHelper](#carla.DebugHelper) or a [carla.Client](#carla.Client) to draw their shapes for debugging. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
+Bounding boxes contain the geometry of an actor or an element in the scene. They can be used by [carla.DebugHelper](#carla.DebugHelper) or a [carla.Client](#carla.Client) to draw their shapes for debugging. Check out the snippet in [carla.DebugHelper.draw_box](#carla.DebugHelper.draw_box), where a snapshot of the world is used to draw bounding boxes for traffic lights.
Instance Variables
- **extent** (_[carla.Vector3D](#carla.Vector3D) – meters_)
@@ -390,7 +390,7 @@ The Client connects CARLA to the server which runs the simulation. Both server a
The client also has a recording feature that saves all the information of a simulation while running it. This allows the server to replay it at will to obtain information and experiment with it. [Here](adv_recorder.md) is some information about how to use this recorder.
Methods
-- **\__init__**(**self**, **host**=127.0.0.1, **port**=2000, **worker_threads**=0)
+- **\__init__**(**self**, **host**=127.0.0.1, **port**=2000, **worker_threads**=0)
Client constructor.
- **Parameters:**
- `host` (_str_) – IP address where a CARLA Simulator instance is running. Default is localhost (127.0.0.1).
@@ -400,7 +400,7 @@ Client constructor.
Executes a list of commands on a single simulation step and retrieves no information. If you need information about the response of each command, use the __apply_batch_sync()__ method. [Here](https://github.com/carla-simulator/carla/blob/10c5f6a482a21abfd00220c68c7f12b4110b7f63/PythonAPI/examples/spawn_npc.py#L126) is an example on how to delete the actors that appear in [carla.ActorList](#carla.ActorList) all at once.
- **Parameters:**
- `commands` (_list_) – A list of commands to execute in batch. Each command is different and has its own parameters. They appear listed at the bottom of this page.
-- **apply_batch_sync**(**self**, **commands**, **due_tick_cue**=False)
+- **apply_batch_sync**(**self**, **commands**, **due_tick_cue**=False)
Executes a list of commands on a single simulation step, blocks until the commands are linked, and returns a list of command.Response that can be used to determine whether a single command succeeded or not. [Here](https://github.com/carla-simulator/carla/blob/10c5f6a482a21abfd00220c68c7f12b4110b7f63/PythonAPI/examples/spawn_npc.py#L112-L116) is an example of it being used to spawn actors.
- **Parameters:**
- `commands` (_list_) – A list of commands to execute in batch. The commands available are listed right above, in the method **apply_batch()**.
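The success-checking pattern around __apply_batch_sync()__ can be sketched without a running server. `FakeResponse` below is a hypothetical stand-in for the real command.Response objects (which carry an `actor_id` and an `error` field), not part of the carla API:

```py
class FakeResponse:
    """Hypothetical stand-in for command.Response: an actor id or an error string."""
    def __init__(self, actor_id=0, error=""):
        self.actor_id = actor_id
        self.error = error

def collect_spawned(responses):
    """Keep the ids of commands that succeeded, report the rest (sketch)."""
    actor_ids = []
    for response in responses:
        if response.error:
            print("spawn failed:", response.error)
        else:
            actor_ids.append(response.actor_id)
    return actor_ids
```

With the real API, the list passed in would be the return value of `client.apply_batch_sync(batch, True)`.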
@@ -503,7 +503,7 @@ Sets the maxixum time a network call is allowed before blocking it and raising a
---
## carla.CollisionEvent
-
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines a collision data for sensor.other.collision. The sensor creates one of this for every collision detected which may be many for one simulation step. Learn more about this [here](ref_sensors.md#collision-detector).
+
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines collision data for sensor.other.collision. The sensor creates one of these for every collision detected, which may be many for one simulation step. Learn more about this [here](ref_sensors.md#collision-detector).
Instance Variables
- **actor** (_[carla.Actor](#carla.Actor)_)
@@ -545,7 +545,7 @@ Initializes a color, black by default.
---
## carla.ColorConverter
-Class that defines conversion patterns that can be applied to a [carla.Image](#carla.Image) in order to show information provided by [carla.Sensor](#carla.Sensor). Depth conversions cause a loss of accuracy, as sensors detect depth as float that is then converted to a grayscale value between 0 and 255. Take a look a this [recipe](ref_code_recipes.md#converted-image-recipe) to see an example of how to create and save image data for sensor.camera.semantic_segmentation.
+Class that defines conversion patterns that can be applied to a [carla.Image](#carla.Image) in order to show information provided by [carla.Sensor](#carla.Sensor). Depth conversions cause a loss of accuracy, as sensors detect depth as a float that is then converted to a grayscale value between 0 and 255. Take a look at the snippet in [carla.Sensor.listen](#carla.Sensor.listen) to see an example of how to create and save image data for sensor.camera.semantic_segmentation.
Instance Variables
- **CityScapesPalette**
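The accuracy loss mentioned above can be sketched in plain Python. This is a minimal sketch assuming CARLA's documented depth encoding (depth spread across the R, G and B channels, scaled by a 1000 m far plane); the function names are illustrative, not part of the API:

```py
def depth_to_meters(r, g, b):
    """Decode one CARLA-style depth pixel into meters (illustrative helper).

    Depth is encoded across three 8-bit channels:
    normalized = (R + G*256 + B*256^2) / (256^3 - 1), then scaled by 1000 m.
    """
    normalized = (r + g * 256 + b * 256 ** 2) / (256 ** 3 - 1)
    return 1000.0 * normalized

def to_grayscale(depth_m):
    """Map meters back to a single 0-255 gray value; precision is lost here."""
    return int(min(depth_m / 1000.0, 1.0) * 255)
```

Collapsing the 24-bit encoding into a single 8-bit gray value is the loss of accuracy the paragraph refers to.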
@@ -616,7 +616,7 @@ Iterate over the [carla.DVSEvent](#carla.DVSEvent) retrieved as data.
---
## carla.DebugHelper
-Helper class part of [carla.World](#carla.World) that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
+Helper class part of [carla.World](#carla.World) that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Take a look at the snippets available for this class to learn how to debug easily in CARLA.
Methods
- **draw_arrow**(**self**, **begin**, **end**, **thickness**=0.1, **arrow_size**=0.1, **color**=(255,0,0), **life_time**=-1.0)
@@ -628,7 +628,7 @@ Draws an arrow from `begin` to `end` pointing in that direction.
- `arrow_size` (_float – meters_) – Size of the tip of the arrow.
- `color` (_[carla.Color](#carla.Color)_) – RGB code to color the object. Red by default.
- `life_time` (_float – seconds_) – Shape's lifespan. By default it only lasts one frame. Set this to 0 for permanent shapes.
-- **draw_box**(**self**, **box**, **rotation**, **thickness**=0.1, **color**=(255,0,0), **life_time**=-1.0)
+- **draw_box**(**self**, **box**, **rotation**, **thickness**=0.1, **color**=(255,0,0), **life_time**=-1.0)
Draws a box, usually to act as object colliders.
- **Parameters:**
- `box` (_[carla.BoundingBox](#carla.BoundingBox)_) – Object containing a location and the length of a box for every axis.
@@ -636,7 +636,7 @@ Draws a box, ussually to act for object colliders.
- `thickness` (_float – meters_) – Density of the lines that define the box.
- `color` (_[carla.Color](#carla.Color)_) – RGB code to color the object. Red by default.
- `life_time` (_float – seconds_) – Shape's lifespan. By default it only lasts one frame. Set this to 0 for permanent shapes.
-- **draw_line**(**self**, **begin**, **end**, **thickness**=0.1, **color**=(255,0,0), **life_time**=-1.0)
+- **draw_line**(**self**, **begin**, **end**, **thickness**=0.1, **color**=(255,0,0), **life_time**=-1.0)
Draws a line in between `begin` and `end`.
- **Parameters:**
- `begin` (_[carla.Location](#carla.Location) – meters_) – Point in the coordinate system where the line starts.
@@ -713,7 +713,7 @@ Height regarding ground level.
---
## carla.GnssMeasurement
-
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines the Gnss data registered by a sensor.other.gnss. It essentially reports its position with the position of the sensor and an OpenDRIVE geo-reference.
+
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines the Gnss data registered by a sensor.other.gnss. It essentially reports the position of the sensor using an OpenDRIVE geo-reference.
Instance Variables
- **altitude** (_float – meters_)
@@ -731,7 +731,7 @@ West/East value of a point on the map.
---
## carla.IMUMeasurement
-
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines the data registered by a sensor.other.imu, regarding the sensor's transformation according to the current [carla.World](#carla.World). It essentially acts as accelerometer, gyroscope and compass.
+
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines the data registered by a sensor.other.imu, regarding the sensor's transformation according to the current [carla.World](#carla.World). It essentially acts as an accelerometer, gyroscope and compass.
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines an image of 32-bit BGRA colors that will be used as initial data retrieved by camera sensors. There are different camera sensors (currently three, RGB, depth and semantic segmentation) and each of these makes different use for the images. Learn more about them [here](ref_sensors.md).
+
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines an image of 32-bit BGRA colors that will be used as initial data retrieved by camera sensors. There are different camera sensors (currently three, RGB, depth and semantic segmentation) and each of these makes different use for the images. Learn more about them [here](ref_sensors.md).
Instance Variables
- **fov** (_float – degrees_)
@@ -952,7 +952,7 @@ Type 381.
---
## carla.LaneChange
-Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every [carla.Waypoint](#carla.Waypoint) according to the OpenDRIVE file. In this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a waypoint for a current vehicle position and learns which turns are permitted.
+Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every [carla.Waypoint](#carla.Waypoint) according to the OpenDRIVE file. The snippet in [carla.Map.get_waypoint](#carla.Map.get_waypoint) shows how a waypoint can be used to learn which turns are permitted.
Instance Variables
- **NONE**
@@ -967,7 +967,7 @@ Traffic rules allow turning either right or left.
---
## carla.LaneInvasionEvent
-
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines lanes invasion for sensor.other.lane_invasion. It works only client-side and is dependant on OpenDRIVE to provide reliable information. The sensor creates one of this every time there is a lane invasion, which may be more than once per simulation step. Learn more about this [here](ref_sensors.md#lane-invasion-detector).
+
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines lane invasions for sensor.other.lane_invasion. It works only client-side and is dependent on OpenDRIVE to provide reliable information. The sensor creates one of these every time there is a lane invasion, which may be more than once per simulation step. Learn more about this [here](ref_sensors.md#lane-invasion-detector).
Instance Variables
- **actor** (_[carla.Actor](#carla.Actor)_)
@@ -1013,8 +1013,7 @@ White by default.
---
## carla.LaneMarkingType
-Class that defines the lane marking types accepted by OpenDRIVE 1.4. Take a look at this [recipe](ref_code_recipes.md#lanes-recipe) where the user creates a [carla.Waypoint](#carla.Waypoint) for a vehicle location and retrieves from it the information about adjacent lane markings.
-__Note on double types:__ Lane markings are defined under the OpenDRIVE standard that determines whereas a line will be considered "BrokenSolid" or "SolidBroken". For each road there is a center lane marking, defined from left to right regarding the lane's directions. The rest of the lane markings are defined in order from the center lane to the closest outside of the road.
+Class that defines the lane marking types accepted by OpenDRIVE 1.4. The snippet in [carla.Map.get_waypoint](#carla.Map.get_waypoint) shows how a waypoint can be used to retrieve information about adjacent lane markings.
__Note on double types:__ Lane markings are defined under the OpenDRIVE standard that determines whether a line will be considered "BrokenSolid" or "SolidBroken". For each road there is a center lane marking, defined from left to right regarding the lane's directions. The rest of the lane markings are defined in order from the center lane to the closest outside of the road.
Instance Variables
- **NONE**
@@ -1032,7 +1031,7 @@ __Note on double types:__ Lane markings are defined under the OpenDRIVE standard
---
## carla.LaneType
-Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standards define the road information. For instance in this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a [carla.Waypoint](#carla.Waypoint) for the current location of a vehicle and uses it to get the current and adjacent lane types.
+Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standard defines the road information. The snippet in [carla.Map.get_waypoint](#carla.Map.get_waypoint) makes use of a waypoint to get the current and adjacent lane types.
Instance Variables
- **NONE**
@@ -1078,7 +1077,7 @@ Computed intensity for this point as a scalar value between [0.0 , 1.0].
---
## carla.LidarMeasurement
-
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines the LIDAR data retrieved by a sensor.lidar.ray_cast. This essentially simulates a rotating LIDAR using ray-casting. Learn more about this [here](ref_sensors.md#lidar-raycast-sensor).
+
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines the LIDAR data retrieved by a sensor.lidar.ray_cast. This essentially simulates a rotating LIDAR using ray-casting. Learn more about this [here](ref_sensors.md#lidar-raycast-sensor).
Instance Variables
- **channels** (_int_)
@@ -1309,7 +1308,7 @@ Switch of a light. It is __True__ when the light is on.
---
## carla.Location
-
Inherited from _[carla.Vector3D](#carla.Vector3D)_
Represents a spot in the world.
+
Inherited from _[carla.Vector3D](#carla.Vector3D)_
Represents a spot in the world.
Instance Variables
- **x** (_float – meters_)
@@ -1401,7 +1400,7 @@ Returns a list of recommendations made by the creators of the map to be used as
- **get_topology**(**self**)
Returns a list of tuples describing a minimal graph of the topology of the OpenDRIVE file. The tuples contain pairs of waypoints located either at the point a road begins or ends. The first one is the origin and the second one represents another road end that can be reached. This graph can be loaded into [NetworkX](https://networkx.github.io/) to work with. Output could look like this: [(w0, w1), (w0, w2), (w1, w3), (w2, w3), (w0, w4)].
- **Return:** _list(tuple([carla.Waypoint](#carla.Waypoint), [carla.Waypoint](#carla.Waypoint)))_
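The tuple list returned by __get_topology()__ can also be turned into a simple adjacency map without any extra dependency. A minimal sketch, using strings in place of the [carla.Waypoint](#carla.Waypoint) objects the real method returns:

```py
def build_graph(topology):
    """Build an adjacency map from (origin, destination) waypoint pairs."""
    graph = {}
    for origin, destination in topology:
        graph.setdefault(origin, []).append(destination)
    return graph

# Strings stand in for the carla.Waypoint objects of a real topology.
pairs = [("w0", "w1"), ("w0", "w2"), ("w1", "w3"), ("w2", "w3"), ("w0", "w4")]
graph = build_graph(pairs)
```

For heavier graph work (shortest paths, connectivity), the same pairs can be fed to NetworkX as the paragraph suggests.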
-- **get_waypoint**(**self**, **location**, **project_to_road**=True, **lane_type**=[carla.LaneType.Driving](#carla.LaneType.Driving))
+- **get_waypoint**(**self**, **location**, **project_to_road**=True, **lane_type**=[carla.LaneType.Driving](#carla.LaneType.Driving))
Returns a waypoint that can be located in an exact location or translated to the center of the nearest lane. Said lane type can be defined using flags such as `LaneType.Driving & LaneType.Shoulder`.
The method will return None if the waypoint is not found, which may happen only when trying to retrieve a waypoint for an exact location. That eases checking if a point is inside a certain road, as otherwise, it will return the corresponding waypoint.
- **Parameters:**
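Since `lane_type` is a flag mask, several lane types can be combined in a single query. A minimal sketch of how such flag filtering behaves, using a hypothetical `enum.IntFlag` mirror of a few values rather than the real `carla.LaneType` enum:

```py
import enum

class LaneType(enum.IntFlag):
    # Hypothetical mirror of a few carla.LaneType members, for illustration only.
    NONE = 0
    Driving = 1 << 1
    Shoulder = 1 << 3
    Sidewalk = 1 << 4

# Combine flags to ask for waypoints on driving lanes or shoulders.
mask = LaneType.Driving | LaneType.Shoulder

def matches(lane_type, mask):
    """True if a lane type is included in the requested mask."""
    return bool(lane_type & mask)
```

A waypoint whose lane type is in the mask would be returned; one on a sidewalk would not.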
@@ -1423,7 +1422,7 @@ Returns a waypoint if all the parameters passed are correct. Otherwise, returns
---
## carla.ObstacleDetectionEvent
-
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines the obstacle data for sensor.other.obstacle. Learn more about this [here](ref_sensors.md#obstacle-detector).
+
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines the obstacle data for sensor.other.obstacle. Learn more about this [here](ref_sensors.md#obstacle-detector).
Instance Variables
- **actor** (_[carla.Actor](#carla.Actor)_)
@@ -1512,7 +1511,7 @@ The velocity of the detected object towards the sensor.
---
## carla.RadarMeasurement
-
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines and gathers the measures registered by a sensor.other.radar, representing a wall of points in front of the sensor with a distance, angle and velocity in relation to it. The data consists of a [carla.RadarDetection](#carla.RadarDetection) array. Learn more about this [here](ref_sensors.md#radar-sensor).
+
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines and gathers the measures registered by a sensor.other.radar, representing a wall of points in front of the sensor with a distance, angle and velocity in relation to it. The data consists of a [carla.RadarDetection](#carla.RadarDetection) array. Learn more about this [here](ref_sensors.md#radar-sensor).
Instance Variables
- **raw_data** (_bytes_)
@@ -1678,7 +1677,7 @@ Enum declaration used in [carla.RssSensor](#carla.RssSensor) to set the log leve
---
## carla.RssResponse
-
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that contains the output of a [carla.RssSensor](#carla.RssSensor). This is the result of the RSS calculations performed for the parent vehicle of the sensor.
+
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that contains the output of a [carla.RssSensor](#carla.RssSensor). This is the result of the RSS calculations performed for the parent vehicle of the sensor.
A [carla.RssRestrictor](#carla.RssRestrictor) will use the data to modify the [carla.VehicleControl](#carla.VehicleControl) of the vehicle.
@@ -1730,7 +1729,7 @@ Disables the _stay on road_ feature.
---
## carla.RssSensor
-
Inherited from _[carla.Sensor](#carla.Sensor)_
This sensor works a bit differently than the rest. Take look at the [specific documentation](adv_rss.md), and the [rss sensor reference](ref_sensors.md#rss-sensor) to gain full understanding of it.
+
Inherited from _[carla.Sensor](#carla.Sensor)_
This sensor works a bit differently than the rest. Take a look at the [specific documentation](adv_rss.md) and the [rss sensor reference](ref_sensors.md#rss-sensor) to gain a full understanding of it.
The RSS sensor uses world information, and a [RSS library](https://github.com/intel/ad-rss-lib) to make safety checks on a vehicle. The output retrieved by the sensor is a [carla.RssResponse](#carla.RssResponse). This will be used by a [carla.RssRestrictor](#carla.RssRestrictor) to modify a [carla.VehicleControl](#carla.VehicleControl) before applying it to a vehicle.
@@ -1796,7 +1795,7 @@ ID of the actor hit by the ray.
---
## carla.SemanticLidarMeasurement
-
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines the semantic LIDAR data retrieved by a sensor.lidar.ray_cast_semantic. This essentially simulates a rotating LIDAR using ray-casting. Learn more about this [here](ref_sensors.md#semanticlidar-raycast-sensor).
+
Inherited from _[carla.SensorData](#carla.SensorData)_
Class that defines the semantic LIDAR data retrieved by a sensor.lidar.ray_cast_semantic. This essentially simulates a rotating LIDAR using ray-casting. Learn more about this [here](ref_sensors.md#semanticlidar-raycast-sensor).
Instance Variables
- **channels** (_int_)
@@ -1829,7 +1828,7 @@ Iterate over the [carla.SemanticLidarDetection](#carla.SemanticLidarDetection) r
---
## carla.Sensor
-
Inherited from _[carla.Actor](#carla.Actor)_
Sensors compound a specific family of actors quite diverse and unique. They are normally spawned as attachment/sons of a vehicle (take a look at [carla.World](#carla.World) to learn about actor spawning). Sensors are thoroughly designed to retrieve different types of data that they are listening to. The data they receive is shaped as different subclasses inherited from [carla.SensorData](#carla.SensorData) (depending on the sensor).
+
Inherited from _[carla.Actor](#carla.Actor)_
Sensors make up a diverse and unique family of actors. They are normally spawned as attachments/children of a vehicle (take a look at [carla.World](#carla.World) to learn about actor spawning). Sensors are designed to retrieve the type of data they are listening for, which is shaped as different subclasses inherited from [carla.SensorData](#carla.SensorData) (depending on the sensor).
Most sensors can be divided into two groups: those that receive data on every tick (cameras, point clouds and some specific sensors) and those that only receive it under certain circumstances (trigger detectors). CARLA provides a specific set of sensors and their blueprints can be found in [carla.BlueprintLibrary](#carla.BlueprintLibrary). All the information on their attributes and configuration can be found [here](ref_sensors.md), but the list of those available in CARLA so far goes as follows.
Receive data on every tick.
@@ -1852,7 +1851,7 @@ Iterate over the [carla.SemanticLidarDetection](#carla.SemanticLidarDetection) r
When True the sensor will be waiting for data.
Methods
-- **listen**(**self**, **callback**)
+- **listen**(**self**, **callback**)
The function the sensor will call every time a new measurement is received. This function must take as an argument an object of type [carla.SensorData](#carla.SensorData) to work with.
- **Parameters:**
- `callback` (_function_) – The called function with one argument containing the sensor data.
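The callback mechanism can be sketched without a simulator. The `FakeSensor` below is a hypothetical stand-in (not part of the CARLA API) showing how a function registered with `listen()` is invoked once per new measurement:

```python
class FakeSensor:
    """Hypothetical stand-in for carla.Sensor, simulator-free."""

    def __init__(self):
        self._callback = None

    def listen(self, callback):
        # Register the function to call on every new measurement.
        self._callback = callback

    def push_measurement(self, data):
        # In CARLA the server triggers this; here we do it by hand.
        if self._callback is not None:
            self._callback(data)

frames = []
sensor = FakeSensor()
sensor.listen(lambda data: frames.append(data["frame"]))
sensor.push_measurement({"frame": 42})
print(frames)  # [42]
```

In the real API the argument passed to the callback is a [carla.SensorData](#carla.SensorData) subclass instead of a plain dict.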
@@ -1916,9 +1915,9 @@ Time register of the frame at which this measurement was taken given by the OS i
---
## carla.TrafficLight
-
Inherited from _[carla.TrafficSign](#carla.TrafficSign)_
A traffic light actor, considered a specific type of traffic sign. As traffic lights will mostly appear at junctions, they belong to a group which contains the different traffic lights in it. Inside the group, traffic lights are differenciated by their pole index.
+
Inherited from _[carla.TrafficSign](#carla.TrafficSign)_
A traffic light actor, considered a specific type of traffic sign. As traffic lights will mostly appear at junctions, they belong to a group which contains the different traffic lights in it. Inside the group, traffic lights are differentiated by their pole index.
- Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to learn how to do so.
+ Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually.
Instance Variables
- **state** (_[carla.TrafficLightState](#carla.TrafficLightState)_)
@@ -1967,7 +1966,7 @@ The client returns the time set for the traffic light to be yellow, according to
- **Setter:** _[carla.TrafficLight.set_yellow_time](#carla.TrafficLight.set_yellow_time)_
Setters
-- **set_state**(**self**, **state**)
+- **set_state**(**self**, **state**)
Sets a given state to a traffic light actor.
- **Parameters:**
- `state` (_[carla.TrafficLightState](#carla.TrafficLightState)_)
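The cyclic behaviour of a traffic light group described above can be sketched in plain Python (a hypothetical three-light group, not the CARLA API): one pole runs green, yellow and red while the rest stay frozen in red, which produces the all-red gap at the end of each turn.

```python
def group_cycle(num_lights):
    """Return the sequence of group states over one full cycle."""
    states = []
    for active in range(num_lights):
        for phase in ("Green", "Yellow", "Red"):
            snapshot = ["Red"] * num_lights   # everyone else stays red
            snapshot[active] = phase
            states.append(snapshot)
    return states

cycle = group_cycle(3)
print(cycle[0])  # ['Green', 'Red', 'Red']
print(cycle[2])  # ['Red', 'Red', 'Red']  <- the all-red gap
```

Calling `set_state` manually overrides this pattern for the chosen light.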
@@ -1993,7 +1992,7 @@ Sets a given time for the yellow light to be active.
---
## carla.TrafficLightState
-All possible states for traffic lights. These can either change at a specific time step or be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to see an example.
+All possible states for traffic lights. These can either change at a specific time step or be changed manually. The snippet in [carla.TrafficLight.set_state](#carla.TrafficLight.set_state) changes the state of a traffic light on the fly.
Instance Variables
- **Red**
@@ -2085,7 +2084,7 @@ Enables or disables the OSM mode. This mode allows the user to run TM in a map c
---
## carla.TrafficSign
-
Inherited from _[carla.Actor](#carla.Actor)_
Traffic signs appearing in the simulation except for traffic lights. These have their own class inherited from this in [carla.TrafficLight](#carla.TrafficLight). Right now, speed signs, stops and yields are mainly the ones implemented, but many others are borne in mind.
+
Inherited from _[carla.Actor](#carla.Actor)_
Traffic signs appearing in the simulation except for traffic lights. These have their own class inherited from this one in [carla.TrafficLight](#carla.TrafficLight). Right now, speed signs, stops and yields are the main ones implemented, though many others are planned.
Instance Variables
- **trigger_volume**
@@ -2210,7 +2209,7 @@ Returns the axis values for the vector parsed as string.
---
## carla.Vehicle
-
Inherited from _[carla.Actor](#carla.Actor)_
One of the most important group of actors in CARLA. These include any type of vehicle from cars to trucks, motorbikes, vans, bycicles and also official vehicles such as police cars. A wide set of these actors is provided in [carla.BlueprintLibrary](#carla.BlueprintLibrary) to facilitate differente requirements. Vehicles can be either manually controlled or set to an autopilot mode that will be conducted client-side by the traffic manager.
+
Inherited from _[carla.Actor](#carla.Actor)_
One of the most important groups of actors in CARLA. These include any type of vehicle from cars to trucks, motorbikes, vans, bicycles and also official vehicles such as police cars. A wide set of these actors is provided in [carla.BlueprintLibrary](#carla.BlueprintLibrary) to facilitate different requirements. Vehicles can be either manually controlled or set to an autopilot mode that will be conducted client-side by the traffic manager.
This class inherits from the [carla.Actor](#carla.Actor) and defines pedestrians in the simulation. Walkers are a special type of actor that can be controlled either by an AI ([carla.WalkerAIController](#carla.WalkerAIController)) or manually via script, using a series of [carla.WalkerControl](#carla.WalkerControl) to move these and their skeletons.
+
Inherited from _[carla.Actor](#carla.Actor)_
This class inherits from the [carla.Actor](#carla.Actor) and defines pedestrians in the simulation. Walkers are a special type of actor that can be controlled either by an AI ([carla.WalkerAIController](#carla.WalkerAIController)) or manually via script, using a series of [carla.WalkerControl](#carla.WalkerControl) to move these and their skeletons.
Instance Variables
- **bounding_box** (_[carla.BoundingBox](#carla.BoundingBox)_)
@@ -2421,7 +2420,7 @@ The client returns the control applied to this walker during last tick. The meth
---
## carla.WalkerAIController
-
Inherited from _[carla.Actor](#carla.Actor)_
Class that conducts AI control for a walker. The controllers are defined as actors, but they are quite different from the rest. They need to be attached to a parent actor during their creation, which is the walker they will be controlling (take a look at [carla.World](#carla.World) if you are yet to learn on how to spawn actors). They also need for a special blueprint (already defined in [carla.BlueprintLibrary](#carla.BlueprintLibrary) as "controller.ai.walker"). This is an empty blueprint, as the AI controller will be invisible in the simulation but will follow its parent around to dictate every step of the way.
+
Inherited from _[carla.Actor](#carla.Actor)_
Class that conducts AI control for a walker. The controllers are defined as actors, but they are quite different from the rest. They need to be attached to a parent actor during their creation, which is the walker they will be controlling (take a look at [carla.World](#carla.World) if you are yet to learn how to spawn actors). They also need a special blueprint (already defined in [carla.BlueprintLibrary](#carla.BlueprintLibrary) as "controller.ai.walker"). This is an empty blueprint, as the AI controller will be invisible in the simulation but will follow its parent around to dictate every step of the way.
Methods
- **go_to_location**(**self**, **destination**)
@@ -2430,7 +2429,7 @@ Sets the destination that the pedestrian will reach.
- `destination` (_[carla.Location](#carla.Location) – meters_)
- **start**(**self**)
Enables AI control for its parent walker.
-- **stop**(**self**)
+- **stop**(**self**)
Disables AI control for its parent walker.
Setters
@@ -2702,7 +2701,7 @@ This method is used in [__asynchronous__ mode](https://[carla.readthedocs.io](#c
- **Parameters:**
- `seconds` (_float – seconds_) – Maximum time the server should wait for a tick. It is set to 10.0 by default.
- **Return:** _[carla.WorldSnapshot](#carla.WorldSnapshot)_
-- **spawn_actor**(**self**, **blueprint**, **transform**, **attach_to**=None, **attachment**=Rigid)
+- **spawn_actor**(**self**, **blueprint**, **transform**, **attach_to**=None, **attachment**=Rigid)
The method creates, spawns into the world and returns an actor. To be created, the actor needs an available blueprint and a transform (location and rotation). It can also be attached to a parent with a certain attachment type.
- **Parameters:**
- `blueprint` (_[carla.ActorBlueprint](#carla.ActorBlueprint)_) – The reference from which the actor will be created.
@@ -2774,7 +2773,7 @@ Returns an object containing some data about the simulation such as synchrony be
- **get_snapshot**(**self**)
Returns a snapshot of the world at a certain moment comprising all the information about the actors.
- **Return:** _[carla.WorldSnapshot](#carla.WorldSnapshot)_
-- **get_spectator**(**self**)
+- **get_spectator**(**self**)
Returns the spectator actor. The spectator is a special type of actor created by Unreal Engine, usually with ID=0, that acts as a camera and controls the view in the simulator window.
- **Return:** _[carla.Actor](#carla.Actor)_
- **get_weather**(**self**)
@@ -3148,4 +3147,480 @@ Links another command to be executed right after. It allows to ease very common
- **Parameters:**
- `command` (_any carla Command_) – a Carla command.
----
\ No newline at end of file
+---
+[comment]: <> (==========================)
+[comment]: <> (PYTHON API SCRIPT SNIPPETS)
+[comment]: <> (==========================)
+
+
+
+
+
+
+Snippet for carla.Map.get_waypoint
+
+
+
+```py
+
+
+# This recipe shows the current traffic rules affecting the vehicle.
+# Shows the current lane type and whether a lane change can be done in the current lane or the surrounding ones.
+
+# ...
+waypoint = world.get_map().get_waypoint(vehicle.get_location(), project_to_road=True, lane_type=(carla.LaneType.Driving | carla.LaneType.Shoulder | carla.LaneType.Sidewalk))
+print("Current lane type: " + str(waypoint.lane_type))
+# Check current lane change allowed
+print("Current Lane change: " + str(waypoint.lane_change))
+# Left and Right lane markings
+print("L lane marking type: " + str(waypoint.left_lane_marking.type))
+print("L lane marking change: " + str(waypoint.left_lane_marking.lane_change))
+print("R lane marking type: " + str(waypoint.right_lane_marking.type))
+print("R lane marking change: " + str(waypoint.right_lane_marking.lane_change))
+# ...
+
+
+```
+
+
+
+
+
+
+
+
+
+Snippet for carla.Sensor.listen
+
+
+
+```py
+
+
+# This recipe applies a color conversion to the image taken by a camera sensor,
+# so it is converted to a semantic segmentation image.
+
+# ...
+camera_bp = world.get_blueprint_library().find('sensor.camera.semantic_segmentation')
+# ...
+cc = carla.ColorConverter.CityScapesPalette
+camera.listen(lambda image: image.save_to_disk('output/%06d.png' % image.frame, cc))
+# ...
+
+
+```
+
+
+
+
+
+
+Snippet for carla.World.spawn_actor
+
+
+
+```py
+
+
+# This recipe attaches different camera / sensors to a vehicle with different attachments.
+
+# ...
+camera = world.spawn_actor(rgb_camera_bp, transform, attach_to=vehicle, attachment_type=Attachment.Rigid)
+# Default attachment: Attachment.Rigid
+gnss_sensor = world.spawn_actor(sensor_gnss_bp, transform, attach_to=vehicle)
+collision_sensor = world.spawn_actor(sensor_collision_bp, transform, attach_to=vehicle)
+lane_invasion_sensor = world.spawn_actor(sensor_lane_invasion_bp, transform, attach_to=vehicle)
+# ...
+
+
+```
+
+
+
+
+
+
+Snippet for carla.ActorBlueprint.set_attribute
+
+
+
+```py
+
+
+# This recipe changes attributes of different types of blueprint actors.
+
+# ...
+walker_bp = world.get_blueprint_library().find('walker.pedestrian.0002')
+walker_bp.set_attribute('is_invincible', 'true')
+
+# ...
+# Changes the attribute to a random value among the recommended ones
+vehicle_bp = random.choice(world.get_blueprint_library().filter('vehicle.bmw.*'))
+color = random.choice(vehicle_bp.get_attribute('color').recommended_values)
+vehicle_bp.set_attribute('color', color)
+
+# ...
+
+camera_bp = world.get_blueprint_library().find('sensor.camera.rgb')
+camera_bp.set_attribute('image_size_x', '600')
+camera_bp.set_attribute('image_size_y', '600')
+# ...
+
+
+```
+
+
+
+
+
+
+Snippet for carla.World.get_spectator
+
+
+
+```py
+
+
+# This recipe spawns an actor and the spectator camera at the actor's location.
+
+# ...
+world = client.get_world()
+spectator = world.get_spectator()
+
+vehicle_bp = random.choice(world.get_blueprint_library().filter('vehicle.bmw.*'))
+transform = random.choice(world.get_map().get_spawn_points())
+vehicle = world.try_spawn_actor(vehicle_bp, transform)
+
+# Wait for world to get the vehicle actor
+world.tick()
+
+world_snapshot = world.wait_for_tick()
+actor_snapshot = world_snapshot.find(vehicle.id)
+
+# Set spectator at given transform (vehicle transform)
+spectator.set_transform(actor_snapshot.get_transform())
+# ...
+
+
+```
+
+
+
+
+
+
+Snippet for carla.DebugHelper.draw_line
+
+
+
+```py
+
+
+# This recipe is a modification of the lane_explorer.py example.
+# It draws the path of an actor through the world, printing information at each waypoint.
+
+# ...
+current_w = map.get_waypoint(vehicle.get_location())
+while True:
+
+    next_w = map.get_waypoint(vehicle.get_location(), lane_type=carla.LaneType.Driving | carla.LaneType.Shoulder | carla.LaneType.Sidewalk)
+    # Check if the vehicle is moving
+    if next_w.id != current_w.id:
+        vector = vehicle.get_velocity()
+        # Check if the vehicle is on a sidewalk
+        if current_w.lane_type == carla.LaneType.Sidewalk:
+            draw_waypoint_union(debug, current_w, next_w, cyan if current_w.is_junction else red, 60)
+        else:
+            draw_waypoint_union(debug, current_w, next_w, cyan if current_w.is_junction else green, 60)
+        debug.draw_string(current_w.transform.location, str('%15.0f km/h' % (3.6 * math.sqrt(vector.x**2 + vector.y**2 + vector.z**2))), False, orange, 60)
+        draw_transform(debug, current_w.transform, white, 60)
+
+    # Update the current waypoint and sleep for some time
+    current_w = next_w
+    time.sleep(args.tick_time)
+# ...
+
+
+```
+
+
+
+
+
+
+
+
+
+Snippet for carla.DebugHelper.draw_box
+
+
+
+```py
+
+
+# This recipe shows how to draw traffic light actor bounding boxes from a world snapshot.
+
+# ....
+debug = world.debug
+world_snapshot = world.get_snapshot()
+
+for actor_snapshot in world_snapshot:
+    actual_actor = world.get_actor(actor_snapshot.id)
+    if actual_actor.type_id == 'traffic.traffic_light':
+        debug.draw_box(carla.BoundingBox(actor_snapshot.get_transform().location, carla.Vector3D(0.5,0.5,2)), actor_snapshot.get_transform().rotation, 0.05, carla.Color(255,0,0,0), 0)
+# ...
+
+
+
+```
+
+
+
+
+
+
+
+
+
+Snippet for carla.TrafficLight.set_state
+
+
+
+```py
+
+
+# This recipe changes the traffic light that affects the vehicle from red to green.
+# This is done by detecting if the vehicle actor is at a traffic light.
+
+# ...
+world = client.get_world()
+spectator = world.get_spectator()
+
+vehicle_bp = random.choice(world.get_blueprint_library().filter('vehicle.bmw.*'))
+transform = random.choice(world.get_map().get_spawn_points())
+vehicle = world.try_spawn_actor(vehicle_bp, transform)
+
+# Wait for world to get the vehicle actor
+world.tick()
+
+world_snapshot = world.wait_for_tick()
+actor_snapshot = world_snapshot.find(vehicle.id)
+
+# Set spectator at given transform (vehicle transform)
+spectator.set_transform(actor_snapshot.get_transform())
+# ...
+if vehicle_actor.is_at_traffic_light():
+    traffic_light = vehicle_actor.get_traffic_light()
+    if traffic_light.get_state() == carla.TrafficLightState.Red:
+        # world.hud.notification("Traffic light changed! Good to go!")
+        traffic_light.set_state(carla.TrafficLightState.Green)
+# ...
+
+
+
+```
+
+
+
+
+
+
+
+
+
+Snippet for carla.Client.apply_batch_sync
+
+
+
+```py
+
+# 0. Choose a blueprint for the walkers
+world = client.get_world()
+blueprintsWalkers = world.get_blueprint_library().filter("walker.pedestrian.*")
+walker_bp = random.choice(blueprintsWalkers)
+
+# These lists will hold the walkers and their controllers
+walkers_list = []
+all_id = []
+
+# 1. Take all the random locations to spawn
+spawn_points = []
+for i in range(50):
+    spawn_point = carla.Transform()
+    spawn_point.location = world.get_random_location_from_navigation()
+    if spawn_point.location is not None:
+        spawn_points.append(spawn_point)
+
+# 2. Build the batch of commands to spawn the pedestrians
+batch = []
+for spawn_point in spawn_points:
+    walker_bp = random.choice(blueprintsWalkers)
+    batch.append(carla.command.SpawnActor(walker_bp, spawn_point))
+
+# 2.1 apply the batch
+results = client.apply_batch_sync(batch, True)
+for i in range(len(results)):
+    if results[i].error:
+        logging.error(results[i].error)
+    else:
+        walkers_list.append({"id": results[i].actor_id})
+
+# 3. Spawn walker AI controllers for each walker
+batch = []
+walker_controller_bp = world.get_blueprint_library().find('controller.ai.walker')
+for i in range(len(walkers_list)):
+    batch.append(carla.command.SpawnActor(walker_controller_bp, carla.Transform(), walkers_list[i]["id"]))
+
+# 3.1 apply the batch
+results = client.apply_batch_sync(batch, True)
+for i in range(len(results)):
+    if results[i].error:
+        logging.error(results[i].error)
+    else:
+        walkers_list[i]["con"] = results[i].actor_id
+
+# 4. Put altogether the walker and controller ids
+for i in range(len(walkers_list)):
+    all_id.append(walkers_list[i]["con"])
+    all_id.append(walkers_list[i]["id"])
+all_actors = world.get_actors(all_id)
+
+# wait for a tick to ensure client receives the last transform of the walkers we have just created
+world.wait_for_tick()
+
+# 5. initialize each controller and set target to walk to (list is [controller, actor, controller, actor ...])
+for i in range(0, len(all_actors), 2):
+    # start walker
+    all_actors[i].start()
+    # set walk to random point
+    all_actors[i].go_to_location(world.get_random_location_from_navigation())
+    # random max speed
+    all_actors[i].set_max_speed(1 + random.random())    # max speed between 1 and 2 (default is 1.4 m/s)
+
+
+```
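The `[controller, actor, controller, actor, ...]` layout built in step 4 is the reason step 5 iterates with a stride of 2. A small, simulator-free sketch of that pairing (the ids are made up for illustration):

```python
# Hypothetical walker entries as built in steps 2.1 and 3.1.
walkers_list = [{"id": 10, "con": 20}, {"id": 11, "con": 21}]

all_id = []
for w in walkers_list:
    all_id.append(w["con"])  # controller first...
    all_id.append(w["id"])   # ...then its walker

# A stride of 2 starting at index 0 visits only the controllers.
controllers = [all_id[i] for i in range(0, len(all_id), 2)]
print(all_id)       # [20, 10, 21, 11]
print(controllers)  # [20, 21]
```

This is why `all_actors[i]` with even `i` is always a controller in the snippet above.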
+
+
+
+
+
+
+Snippet for carla.WalkerAIController.stop
+
+
+
+```py
+
+
+# To destroy the pedestrians, stop them from the navigation and then destroy the objects (actor and controller).
+
+# stop pedestrians (list is [controller, actor, controller, actor ...])
+for i in range(0, len(all_id), 2):
+    all_actors[i].stop()
+
+# destroy pedestrian (actor and controller)
+client.apply_batch([carla.command.DestroyActor(x) for x in all_id])
+
+
+```
+
+
+
+
+
+
+Snippet for carla.Client.__init__
+
+
+
+```py
+
+
+# This recipe appears in every script provided in PythonAPI/Examples
+# and is used to parse the client creation arguments when running the script.
+
+    argparser = argparse.ArgumentParser(
+        description=__doc__)
+    argparser.add_argument(
+        '--host',
+        metavar='H',
+        default='127.0.0.1',
+        help='IP of the host server (default: 127.0.0.1)')
+    argparser.add_argument(
+        '-p', '--port',
+        metavar='P',
+        default=2000,
+        type=int,
+        help='TCP port to listen to (default: 2000)')
+    argparser.add_argument(
+        '-s', '--speed',
+        metavar='FACTOR',
+        default=1.0,
+        type=float,
+        help='rate at which the weather changes (default: 1.0)')
+    args = argparser.parse_args()
+
+    speed_factor = args.speed
+    update_freq = 0.1 / speed_factor
+
+    client = carla.Client(args.host, args.port)
+
+
+
+```
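The argument parsing above can be exercised outside CARLA by feeding `parse_args` an explicit list instead of `sys.argv` (a reproducible sketch; the `carla.Client` call itself is omitted):

```python
import argparse

argparser = argparse.ArgumentParser(description="Client bootstrap sketch")
argparser.add_argument('--host', metavar='H', default='127.0.0.1',
                       help='IP of the host server (default: 127.0.0.1)')
argparser.add_argument('-p', '--port', metavar='P', default=2000, type=int,
                       help='TCP port to listen to (default: 2000)')

# Parse an explicit argument list so the sketch is reproducible.
args = argparser.parse_args(['--port', '3000'])
print(args.host, args.port)  # 127.0.0.1 3000
```

With no arguments supplied, the defaults match the standard local simulator endpoint, 127.0.0.1:2000.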
+
+
+
+
+
+
+
+
\ No newline at end of file
diff --git a/Docs/ref_code_recipes.md b/Docs/ref_code_recipes.md
deleted file mode 100644
index 03959b4ed..000000000
--- a/Docs/ref_code_recipes.md
+++ /dev/null
@@ -1,372 +0,0 @@
-# Code recipes
-
-This section contains a list of recipes that complement the [first steps](core_concepts.md) section and are used to illustrate the use of Python API methods.
-
-Each recipe has a list of [python API classes](python_api.md),
-which is divided into those in which the recipe is centered, and those that need to be used.
-
-There are more recipes to come!
-
-* [__Actor Spectator Recipe__](#actor-spectator-recipe)
-* [__Attach Sensors Recipe__](#attach-sensors-recipe)
-* [__Actor Attribute Recipe__](#actor-attribute-recipe)
-* [__Converted Image Recipe__](#converted-image-recipe)
-* [__Lanes Recipe__](#lanes-recipe)
-* [__Debug Bounding Box Recipe__](#debug-bounding-box-recipe)
-* [__Debug Vehicle Trail Recipe__](#debug-vehicle-trail-recipe)
-* [__Parsing Client Arguments Recipe__](#parsing-client-arguments-recipe)
-* [__Traffic Light Recipe__](#traffic-light-recipe)
-* [__Walker Batch Recipe__](#walker-batch-recipe)
-
----
-## Actor Spectator Recipe
-
-This recipe spawns an actor and the spectator camera at the actor's location.
-
-Focused on:
-[`carla.World`](python_api.md#carla.World)
-[`carla.Actor`](python_api.md#carla.Actor)
-
-Used:
-[`carla.WorldSnapshot`](python_api.md#carla.WorldSnapshot)
-[`carla.ActorSnapshot`](python_api.md#carla.ActorSnapshot)
-
-```py
-# ...
-world = client.get_world()
-spectator = world.get_spectator()
-
-vehicle_bp = random.choice(world.get_blueprint_library().filter('vehicle.bmw.*'))
-transform = random.choice(world.get_map().get_spawn_points())
-vehicle = world.try_spawn_actor(vehicle_bp, transform)
-
-# Wait for world to get the vehicle actor
-world.tick()
-
-world_snapshot = world.wait_for_tick()
-actor_snapshot = world_snapshot.find(vehicle.id)
-
-# Set spectator at given transform (vehicle transform)
-spectator.set_transform(actor_snapshot.get_transform())
-# ...
-```
-
----
-## Attach Sensors Recipe
-
-This recipe attaches different camera / sensors to a vehicle with different attachments.
-
-Focused on:
-[`carla.Sensor`](python_api.md#carla.Sensor)
-[`carla.AttachmentType`](python_api.md#carla.AttachmentType)
-
-Used:
-[`carla.World`](python_api.md#carla.World)
-
-```py
-# ...
-camera = world.spawn_actor(rgb_camera_bp, transform, attach_to=vehicle, attachment_type=Attachment.Rigid)
-# Default attachment: Attachment.Rigid
-gnss_sensor = world.spawn_actor(sensor_gnss_bp, transform, attach_to=vehicle)
-collision_sensor = world.spawn_actor(sensor_collision_bp, transform, attach_to=vehicle)
-lane_invasion_sensor = world.spawn_actor(sensor_lane_invasion_bp, transform, attach_to=vehicle)
-# ...
-```
-
----
-## Actor Attribute Recipe
-
-This recipe changes attributes of different type of blueprint actors.
-
-Focused on:
-[`carla.ActorAttribute`](python_api.md#carla.ActorAttribute)
-[`carla.ActorBlueprint`](python_api.md#carla.ActorBlueprint)
-
-Used:
-[`carla.World`](python_api.md#carla.World)
-[`carla.BlueprintLibrary`](python_api.md#carla.BlueprintLibrary)
-
-```py
-# ...
-walker_bp = world.get_blueprint_library().filter('walker.pedestrian.0002')
-walker_bp.set_attribute('is_invincible', True)
-
-# ...
-# Changes attribute randomly by the recommended value
-vehicle_bp = wolrd.get_blueprint_library().filter('vehicle.bmw.*')
-color = random.choice(vehicle_bp.get_attribute('color').recommended_values)
-vehicle_bp.set_attribute('color', color)
-
-# ...
-
-camera_bp = world.get_blueprint_library().filter('sensor.camera.rgb')
-camera_bp.set_attribute('image_size_x', 600)
-camera_bp.set_attribute('image_size_y', 600)
-# ...
-```
-
----
-## Converted Image Recipe
-
-This recipe applies a color conversion to the image taken by a camera sensor,
-so it is converted to a semantic segmentation image.
-
-Focused on:
-[`carla.ColorConverter`](python_api.md#carla.ColorConverter)
-[`carla.Sensor`](python_api.md#carla.Sensor)
-
-```py
-# ...
-camera_bp = world.get_blueprint_library().filter('sensor.camera.semantic_segmentation')
-# ...
-cc = carla.ColorConverter.CityScapesPalette
-camera.listen(lambda image: image.save_to_disk('output/%06d.png' % image.frame, cc))
-# ...
-```
-
----
-## Lanes Recipe
-
-This recipe shows the current traffic rules affecting the vehicle. Shows the current lane type and
-if a lane change can be done in the actual lane or the surrounding ones.
-
-Focused on:
-[`carla.LaneMarking`](python_api.md#carla.LaneMarking)
-[`carla.LaneMarkingType`](python_api.md#carla.LaneMarkingType)
-[`carla.LaneChange`](python_api.md#carla.LaneChange)
-[`carla.LaneType`](python_api.md#carla.LaneType)
-
-Used:
-[`carla.Waypoint`](python_api.md#carla.Waypoint)
-[`carla.World`](python_api.md#carla.World)
-
-```py
-# ...
-waypoint = world.get_map().get_waypoint(vehicle.get_location(),project_to_road=True, lane_type=(carla.LaneType.Driving | carla.LaneType.Shoulder | carla.LaneType.Sidewalk))
-print("Current lane type: " + str(waypoint.lane_type))
-# Check current lane change allowed
-print("Current Lane change: " + str(waypoint.lane_change))
-# Left and Right lane markings
-print("L lane marking type: " + str(waypoint.left_lane_marking.type))
-print("L lane marking change: " + str(waypoint.left_lane_marking.lane_change))
-print("R lane marking type: " + str(waypoint.right_lane_marking.type))
-print("R lane marking change: " + str(waypoint.right_lane_marking.lane_change))
-# ...
-```
-
-
-
----
-## Debug Bounding Box Recipe
-
-This recipe shows how to draw traffic light actor bounding boxes from a world snapshot.
-
-Focused on:
-[`carla.DebugHelper`](python_api.md#carla.DebugHelper)
-[`carla.BoundingBox`](python_api.md#carla.BoundingBox)
-
-Used:
-[`carla.ActorSnapshot`](python_api.md#carla.ActorSnapshot)
-[`carla.Actor`](python_api.md#carla.Actor)
-[`carla.Vector3D`](python_api.md#carla.Vector3D)
-[`carla.Color`](python_api.md#carla.Color)
-
-```py
-# ....
-debug = world.debug
-world_snapshot = world.get_snapshot()
-
-for actor_snapshot in world_snapshot:
-    actual_actor = world.get_actor(actor_snapshot.id)
-    if actual_actor.type_id == 'traffic.traffic_light':
-        debug.draw_box(carla.BoundingBox(actor_snapshot.get_transform().location,carla.Vector3D(0.5,0.5,2)),actor_snapshot.get_transform().rotation, 0.05, carla.Color(255,0,0,0),0)
-# ...
-```
-
-
-
----
-## Debug Vehicle Trail Recipe
-
-This recipe is a modification of
-[`lane_explorer.py`](https://github.com/carla-simulator/carla/blob/master/PythonAPI/util/lane_explorer.py) example.
-It draws the path of an actor through the world, printing information at each waypoint.
-
-Focused on:
-[`carla.DebugHelper`](python_api.md#carla.DebugHelper)
-[`carla.Waypoint`](python_api.md#carla.Waypoint)
-[`carla.Actor`](python_api.md#carla.Actor)
-
-Used:
-[`carla.ActorSnapshot`](python_api.md#carla.ActorSnapshot)
-[`carla.Vector3D`](python_api.md#carla.Vector3D)
-[`carla.LaneType`](python_api.md#carla.LaneType)
-[`carla.Color`](python_api.md#carla.Color)
-[`carla.Map`](python_api.md#carla.Map)
-
-```py
-# ...
-current_w = map.get_waypoint(vehicle.get_location())
-while True:
-
-    next_w = map.get_waypoint(vehicle.get_location(), lane_type=carla.LaneType.Driving | carla.LaneType.Shoulder | carla.LaneType.Sidewalk )
-    # Check if the vehicle is moving
-    if next_w.id != current_w.id:
-        vector = vehicle.get_velocity()
-        # Check if the vehicle is on a sidewalk
-        if current_w.lane_type == carla.LaneType.Sidewalk:
-            draw_waypoint_union(debug, current_w, next_w, cyan if current_w.is_junction else red, 60)
-        else:
-            draw_waypoint_union(debug, current_w, next_w, cyan if current_w.is_junction else green, 60)
-        debug.draw_string(current_w.transform.location, str('%15.0f km/h' % (3.6 * math.sqrt(vector.x**2 + vector.y**2 + vector.z**2))), False, orange, 60)
-        draw_transform(debug, current_w.transform, white, 60)
-
-    # Update the current waypoint and sleep for some time
-    current_w = next_w
-    time.sleep(args.tick_time)
-# ...
-```
-
-The image below shows how a vehicle loses control and drives on a sidewalk. The trail shows the
-path it was following and the speed at each waypoint.
-
-
-
----
-## Parsing Client Arguments Recipe
-
-This recipe shows in every script provided in `PythonAPI/Examples` and it is used to parse the client creation arguments when running the script.
-
-Focused on:
-[`carla.Client`](python_api.md#carla.Client)
-
-Used:
-[`carla.Client`](python_api.md#carla.Client)
-
-```py
-    argparser = argparse.ArgumentParser(
-        description=__doc__)
-    argparser.add_argument(
-        '--host',
-        metavar='H',
-        default='127.0.0.1',
-        help='IP of the host server (default: 127.0.0.1)')
-    argparser.add_argument(
-        '-p', '--port',
-        metavar='P',
-        default=2000,
-        type=int,
-        help='TCP port to listen to (default: 2000)')
-    argparser.add_argument(
-        '-s', '--speed',
-        metavar='FACTOR',
-        default=1.0,
-        type=float,
-        help='rate at which the weather changes (default: 1.0)')
-    args = argparser.parse_args()
-
-    speed_factor = args.speed
-    update_freq = 0.1 / speed_factor
-
-    client = carla.Client(args.host, args.port)
-```
-
----
-## Traffic Light Recipe
-
-This recipe changes from red to green the traffic light that affects the vehicle.
-This is done by detecting if the vehicle actor is at a traffic light.
-
-Focused on:
-[`carla.TrafficLight`](python_api.md#carla.TrafficLight)
-[`carla.TrafficLightState`](python_api.md#carla.TrafficLightState)
-
-Used:
-[`carla.Vehicle`](python_api.md#carla.Vehicle)
-
-```py
-# ...
-if vehicle_actor.is_at_traffic_light():
- traffic_light = vehicle_actor.get_traffic_light()
- if traffic_light.get_state() == carla.TrafficLightState.Red:
- # world.hud.notification("Traffic light changed! Good to go!")
- traffic_light.set_state(carla.TrafficLightState.Green)
-# ...
-```
-
-
-
----
-## Walker Batch Recipe
-
-```py
-# 0. Choose a blueprint fo the walkers
-world = client.get_world()
-blueprintsWalkers = world.get_blueprint_library().filter("walker.pedestrian.*")
-walker_bp = random.choice(blueprintsWalkers)
-
-# 1. Take all the random locations to spawn
-spawn_points = []
-for i in range(50):
- spawn_point = carla.Transform()
- spawn_point.location = world.get_random_location_from_navigation()
- if (spawn_point.location != None):
- spawn_points.append(spawn_point)
-
-# 2. Build the batch of commands to spawn the pedestrians
-batch = []
-for spawn_point in spawn_points:
- walker_bp = random.choice(blueprintsWalkers)
- batch.append(carla.command.SpawnActor(walker_bp, spawn_point))
-
-# 2.1 apply the batch
-results = client.apply_batch_sync(batch, True)
-for i in range(len(results)):
- if results[i].error:
- logging.error(results[i].error)
- else:
- walkers_list.append({"id": results[i].actor_id})
-
-# 3. Spawn walker AI controllers for each walker
-batch = []
-walker_controller_bp = world.get_blueprint_library().find('controller.ai.walker')
-for i in range(len(walkers_list)):
- batch.append(carla.command.SpawnActor(walker_controller_bp, carla.Transform(), walkers_list[i]["id"]))
-
-# 3.1 apply the batch
-results = client.apply_batch_sync(batch, True)
-for i in range(len(results)):
- if results[i].error:
- logging.error(results[i].error)
- else:
- walkers_list[i]["con"] = results[i].actor_id
-
-# 4. Put altogether the walker and controller ids
-for i in range(len(walkers_list)):
- all_id.append(walkers_list[i]["con"])
- all_id.append(walkers_list[i]["id"])
-all_actors = world.get_actors(all_id)
-
-# wait for a tick to ensure client receives the last transform of the walkers we have just created
-world.wait_for_tick()
-
-# 5. initialize each controller and set target to walk to (list is [controller, actor, controller, actor ...])
-for i in range(0, len(all_actors), 2):
- # start walker
- all_actors[i].start()
- # set walk to random point
- all_actors[i].go_to_location(world.get_random_location_from_navigation())
- # random max speed
- all_actors[i].set_max_speed(1 + random.random()) # max speed between 1 and 2 (default is 1.4 m/s)
-```
-
-To **destroy the pedestrians**, stop them from the navigation, and then destroy the objects (actor and controller):
-
-```py
-# stop pedestrians (list is [controller, actor, controller, actor ...])
-for i in range(0, len(all_id), 2):
- all_actors[i].stop()
-
-# destroy pedestrian (actor and controller)
-client.apply_batch([carla.command.DestroyActor(x) for x in all_id])
-```
\ No newline at end of file
diff --git a/PythonAPI/docs/actor.yml b/PythonAPI/docs/actor.yml
index 4791e2a87..f8157a1c9 100644
--- a/PythonAPI/docs/actor.yml
+++ b/PythonAPI/docs/actor.yml
@@ -387,7 +387,7 @@
- class_name: TrafficLightState
# - DESCRIPTION ------------------------
doc: >
- All possible states for traffic lights. These can either change at a specific time step or be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to see an example.
+ All possible states for traffic lights. These can either change at a specific time step or be changed manually. The snippet in carla.TrafficLight.set_state changes the state of a traffic light on the fly.
# - PROPERTIES -------------------------
instance_variables:
- var_name: Red
@@ -403,7 +403,7 @@
doc: >
A traffic light actor, considered a specific type of traffic sign. As traffic lights will mostly appear at junctions, they belong to a group which contains the different traffic lights in it. Inside the group, traffic lights are differenciated by their pole index.
- Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually. Take a look at this [recipe](ref_code_recipes.md#traffic-lights-recipe) to learn how to do so.
+ Within a group the state of traffic lights is changed in a cyclic pattern: one index is chosen and it spends a few seconds in green, yellow and eventually red. The rest of the traffic lights remain frozen in red this whole time, meaning that there is a gap in the last seconds of the cycle where all the traffic lights are red. However, the state of a traffic light can be changed manually.
# - PROPERTIES -------------------------
instance_variables:
- var_name: state
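The cyclic pattern described in the doc text above can be pictured with a plain-Python sketch. The enum values here are illustrative, not carla's actual constants, and nothing below touches the simulator:

```python
from enum import Enum

class TrafficLightState(Enum):
    """Mirrors the states named in the doc above (illustrative values)."""
    Red = 0
    Yellow = 1
    Green = 2

# One full cycle for the chosen pole in a traffic light group:
# green -> yellow -> red, while the rest of the group stays red.
CYCLE = [TrafficLightState.Green, TrafficLightState.Yellow, TrafficLightState.Red]

def next_state(state):
    """Advance a light one step along the cyclic pattern."""
    i = CYCLE.index(state)
    return CYCLE[(i + 1) % len(CYCLE)]
```

Calling set_state, as the snippet file added below does, is precisely a manual override of this cycle.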
diff --git a/PythonAPI/docs/doc_gen.py b/PythonAPI/docs/doc_gen.py
index d8247a422..2f17c03b3 100755
--- a/PythonAPI/docs/doc_gen.py
+++ b/PythonAPI/docs/doc_gen.py
@@ -10,6 +10,7 @@
import os
import yaml
import re
+import doc_gen_snipets
COLOR_METHOD = '#7fb800'
COLOR_PARAM = '#00a6ed'
@@ -35,7 +36,7 @@ class MarkdownFile:
self._data = ""
self._list_depth = 0
self.endl = ' \n'
-
+
def data(self):
return self._data
@@ -77,9 +78,7 @@ class MarkdownFile:
self._data, '#Python API reference\n'])
def button_apis(self):
- self._data = join([
- self._data,
- ''])
+ self._data = join([self._data, ''])
def title(self, strongness, buf):
self._data = join([
@@ -95,7 +94,7 @@ class MarkdownFile:
def inherit_join(self, inh):
self._data = join([
- self._data,'
Inherited from ',inh,'
'])
+ self._data,'
Inherited from ',inh,'
'])
def note(self, buf):
self._data = join([self._data, buf])
@@ -117,6 +116,9 @@ def italic(buf):
def bold(buf):
return join(['**', buf, '**'])
+def snipet(name,class_key):
+
+ return join(["'])
def code(buf):
return join(['`', buf, '`'])
@@ -221,6 +223,52 @@ class YamlFile:
def get_modules(self):
return [module for module in self.data]
+def append_snipet_button_script(md):
+ md.textn("\n\n\n")
+
+def append_code_snipets(md):
+ current_folder = os.path.dirname(os.path.abspath(__file__))
+ snipets_path = os.path.join(current_folder, '../../Docs/python_api_snipets.md')
+ snipets = open(snipets_path, 'r')
+ md.text(snipets.read())
+ os.remove(snipets_path)
+
def gen_stub_method_def(method):
"""Return python def as it should be written in stub files"""
@@ -235,14 +283,16 @@ def gen_stub_method_def(method):
return join([method_name, parentheses(param), return_type])
-def gen_doc_method_def(method, is_indx=False, with_self=True):
+def gen_doc_method_def(method, class_key, is_indx=False, with_self=True):
"""Return python def as it should be written in docs"""
param = ''
+ snipet_link = ''
method_name = method['def_name']
+ full_method_name = method_name
if valid_dic_val(method, 'static'):
with_self = False
- # to correclty render methods like __init__ in md
+ # to correctly render methods like __init__ in md
if method_name[0] == '_':
method_name = '\\' + method_name
if is_indx:
@@ -269,7 +319,15 @@ def gen_doc_method_def(method, is_indx=False, with_self=True):
del method['params']
param = param[:-2] # delete the last ', '
- return join([method_name, parentheses(param)])
+
+ # Add snipet
+ current_folder = os.path.dirname(os.path.abspath(__file__))
+ snipets_path = os.path.join(current_folder, '../../Docs/python_api_snipets.md')
+ snipets = open(snipets_path, 'r')
+ if class_key+'.'+full_method_name+'-snipet' in snipets.read():
+ snipet_link = snipet(full_method_name, class_key)
+
+ return join([method_name, parentheses(param),snipet_link])
def gen_doc_dunder_def(dunder, is_indx=False, with_self=True):
"""Return python def as it should be written in docs"""
@@ -278,7 +336,7 @@ def gen_doc_dunder_def(dunder, is_indx=False, with_self=True):
if valid_dic_val(dunder, 'static'):
with_self = False
- # to correclty render methods like __init__ in md
+ # to correctly render methods like __init__ in md
if dunder_name[0] == '_':
dunder_name = '\\' + dunder_name
if is_indx:
@@ -320,7 +378,7 @@ def gen_inst_var_indx(inst_var, class_key):
def gen_method_indx(method, class_key):
method_name = method['def_name']
method_key = join([class_key, method_name], '.')
- method_def = gen_doc_method_def(method, True)
+ method_def = gen_doc_method_def(method, class_key, True)
return join([
brackets(method_def),
parentheses(method_key), ' ',
@@ -352,7 +410,7 @@ def add_doc_method_param(md, param):
def add_doc_method(md, method, class_key):
method_name = method['def_name']
method_key = join([class_key, method_name], '.')
- method_def = gen_doc_method_def(method, False)
+ method_def = gen_doc_method_def(method, class_key, False)
md.list_pushn(join([html_key(method_key), method_def]))
# Method doc
@@ -403,10 +461,10 @@ def add_doc_method(md, method, class_key):
md.list_pop()
-def add_doc_getter_setter(md, method,class_key,is_getter,other_list):
+def add_doc_getter_setter(md, method, class_key, is_getter, other_list):
method_name = method['def_name']
method_key = join([class_key, method_name], '.')
- method_def = gen_doc_method_def(method, False)
+ method_def = gen_doc_method_def(method, class_key, False)
md.list_pushn(join([html_key(method_key), method_def]))
# Method doc
@@ -535,7 +593,6 @@ def add_doc_inst_var(md, inst_var, class_key):
md.list_pop()
-
class Documentation:
"""Main documentation class"""
@@ -644,16 +701,18 @@ class Documentation:
if len(get_list)>0:
md.title_html(5, 'Getters')
for method in get_list:
- add_doc_getter_setter(md, method,class_key,True,set_list)
+ add_doc_getter_setter(md, method, class_key, True, set_list)
if len(set_list)>0:
md.title_html(5, 'Setters')
for method in set_list:
- add_doc_getter_setter(md, method,class_key,False,get_list)
+ add_doc_getter_setter(md, method, class_key, False, get_list)
if len(dunder_list)>0:
md.title_html(5, 'Dunder methods')
for method in dunder_list:
add_doc_dunder(md, method, class_key)
md.separator()
+ append_code_snipets(md)
+ append_snipet_button_script(md)
return md.data().strip()
def gen_markdown(self):
@@ -665,6 +724,7 @@ def main():
"""Main function"""
print("Generating PythonAPI documentation...")
script_path = os.path.dirname(os.path.abspath(__file__))
+ doc_gen_snipets.main()
docs = Documentation(script_path)
with open(os.path.join(script_path, '../../Docs/python_api.md'), 'w') as md_file:
md_file.write(docs.gen_markdown())
diff --git a/PythonAPI/docs/doc_gen_snipets.py b/PythonAPI/docs/doc_gen_snipets.py
new file mode 100755
index 000000000..cade46456
--- /dev/null
+++ b/PythonAPI/docs/doc_gen_snipets.py
@@ -0,0 +1,146 @@
+#!/usr/bin/env python
+# -*- coding: utf-8 -*-
+
+# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
+# Barcelona (UAB).
+#
+# This work is licensed under the terms of the MIT license.
+# For a copy, see .
+
+import os
+import yaml
+import re
+
+COLOR_METHOD = '#7fb800'
+COLOR_PARAM = '#00a6ed'
+COLOR_INSTANCE_VAR = '#f8805a'
+COLOR_NOTE = '#8E8E8E'
+COLOR_WARNING = '#ED2F2F'
+
+QUERY = re.compile(r'([cC]arla(\.[a-zA-Z0-9_]+)+)')
+
+
+def create_hyperlinks(text):
+ return re.sub(QUERY, r'[\1](#\1)', text)
+
+def create_getter_setter_hyperlinks(text):
+ return re.sub(QUERY, r'[\1](#\1)', text)
+
+def join(elem, separator=''):
+ return separator.join(elem)
+
+
+class MarkdownFile:
+ def __init__(self):
+ self._data = ""
+ self._list_depth = 0
+ self.endl = ' \n'
+
+ def data(self):
+ return self._data
+
+ def list_depth(self):
+ if self._data.strip()[-1:] != '\n' or self._list_depth == 0:
+ return ''
+ return join([' ' * self._list_depth])
+
+ def textn(self, buf):
+ self._data = join([self._data, self.list_depth(), buf, self.endl])
+
+
+
+class Documentation:
+ """Main documentation class"""
+
+ def __init__(self, path, images_path):
+ self._snipets_path = os.path.join(os.path.dirname(path), 'snipets')
+ self._files = [f for f in os.listdir(self._snipets_path) if f.endswith('.py')]
+ self._snipets = list()
+ for snipet_file in self._files:
+ current_snipet_path = os.path.join(self._snipets_path, snipet_file)
+ self._snipets.append(current_snipet_path)
+ # Gather snipet images
+ self._snipets_images_path = images_path
+ self._files_images = [f for f in os.listdir(self._snipets_images_path)]
+ self._snipets_images = list()
+ for snipet_image in self._files_images:
+ current_image_path = os.path.join(self._snipets_images_path, snipet_image)
+ self._snipets_images.append(current_image_path)
+
+
+ def gen_body(self):
+ """Generates the documentation body"""
+ md = MarkdownFile()
+ # Create header for snipets (div container and script to copy)
+ md.textn(
+ "[comment]: <> (=========================)\n"+
+ "[comment]: <> (PYTHON API SCRIPT SNIPETS)\n"+
+ "[comment]: <> (=========================)\n"+
+ "\n"+
+ "\n"+
+ "\n")
+ # Create content for every snipet
+ for snipet_path in self._snipets:
+ current_snipet = open(snipet_path, 'r')
+ snipet_name = os.path.basename(current_snipet.name) # Remove path
+ snipet_name = os.path.splitext(snipet_name)[0] # Remove extension
+ # Header for a snipet
+ md.textn("
\n"+
+ "
\n"+
+ "Snipet for "+snipet_name+"\n"+
+ "
\n"+
+ "
\n\n```py\n")
+ # The snipet code
+ md.textn(current_snipet.read())
+ # Closing for a snipet
+ md.textn("\n```\n
\n")
+ # Check if snipet image exists, and add it
+ for snipet_path_to_image in self._snipets_images:
+ snipet_image_name = os.path.splitext(os.path.basename(snipet_path_to_image))[0]
+ if snipet_name == snipet_image_name:
+ md.textn("\n\n")
+ md.textn("
\n")
+ # Closing div
+ md.textn("\n
\n")
+ return md.data().strip()
+
+
+ def gen_markdown(self):
+ """Generates the whole markdown file"""
+ return join([self.gen_body()], '\n').strip()
+
+
+def main():
+ """Main function"""
+ print("Generating PythonAPI snipets...")
+ script_path = os.path.dirname(os.path.abspath(__file__)+'/snipets')
+ snipets_images_path = os.path.dirname(os.path.dirname(os.path.dirname(
+ os.path.abspath(__file__)))) + '/Docs/img/snipets_images'
+ docs = Documentation(script_path, snipets_images_path)
+ snipets_md_path = os.path.join(os.path.dirname(os.path.dirname(
+ os.path.dirname(script_path))), 'Docs/python_api_snipets.md')
+ with open(snipets_md_path, 'w') as md_file:
+ md_file.write(docs.gen_markdown())
+ print("Done snipets!")
+
+
+if __name__ == "__main__":
+ main()
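The name each snippet is published under is derived purely from its filename, as `gen_body` above does with `os.path.basename` and `os.path.splitext`; the derivation can be checked in isolation:

```python
import os

def snipet_name_from_path(path):
    """Strip directory and extension, exactly as gen_body does."""
    name = os.path.basename(path)      # remove path
    return os.path.splitext(name)[0]   # remove extension
```

This is why the snippet files must be named after the fully qualified method (e.g. `carla.Sensor.listen.py`): the generator matches that name against the method keys in the API docs.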
diff --git a/PythonAPI/docs/geom.yml b/PythonAPI/docs/geom.yml
index 29e92d9a9..db9ccb096 100644
--- a/PythonAPI/docs/geom.yml
+++ b/PythonAPI/docs/geom.yml
@@ -383,7 +383,7 @@
- class_name: BoundingBox
# - DESCRIPTION ------------------------
doc: >
- Bounding boxes contain the geometry of an actor or an element in the scene. They can be used by carla.DebugHelper or a carla.Client to draw their shapes for debugging. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
+ Bounding boxes contain the geometry of an actor or an element in the scene. They can be used by carla.DebugHelper or a carla.Client to draw their shapes for debugging. Check out the snippet in carla.DebugHelper.draw_box where a snapshot of the world is used to draw bounding boxes for traffic lights.
# - PROPERTIES -------------------------
instance_variables:
- var_name: extent
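The `extent` variable referenced above stores half the box size on each axis. A minimal, simulator-free sketch of that convention (the real carla.BoundingBox API differs, e.g. its contains check takes a transform):

```python
class BoundingBox:
    """Toy model of the carla.BoundingBox idea: a center location
    plus an extent that stores *half* the size on each axis."""
    def __init__(self, location, extent):
        self.location = location  # (x, y, z) center
        self.extent = extent      # (ex, ey, ez) half-sizes

    def contains(self, point):
        # A point is inside when each coordinate is within the half-size
        # of the corresponding center coordinate.
        return all(abs(p - c) <= e
                   for p, c, e in zip(point, self.location, self.extent))

box = BoundingBox((0.0, 0.0, 1.0), (0.5, 0.5, 2.0))
```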
diff --git a/PythonAPI/docs/map.yml b/PythonAPI/docs/map.yml
index 77bede69e..6aa2daf76 100644
--- a/PythonAPI/docs/map.yml
+++ b/PythonAPI/docs/map.yml
@@ -5,7 +5,7 @@
- class_name: LaneType
# - DESCRIPTION ------------------------
doc: >
- Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standards define the road information. For instance in this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a carla.Waypoint for the current location of a vehicle and uses it to get the current and adjacent lane types.
+ Class that defines the possible lane types accepted by OpenDRIVE 1.4. This standard defines the road information. The snippet in carla.Map.get_waypoint makes use of a waypoint to get the current and adjacent lane types.
# - PROPERTIES -------------------------
instance_variables:
- var_name: NONE
@@ -57,7 +57,7 @@
- class_name: LaneChange
# - DESCRIPTION ------------------------
doc: >
- Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every carla.Waypoint according to the OpenDRIVE file. In this [recipe](ref_code_recipes.md#lanes-recipe) the user creates a waypoint for a current vehicle position and learns which turns are permitted.
+ Class that defines the permission to turn either left, right, both or none (meaning only going straight is allowed). This information is stored for every carla.Waypoint according to the OpenDRIVE file. The snippet in carla.Map.get_waypoint shows how a waypoint can be used to learn which turns are permitted.
# - PROPERTIES -------------------------
instance_variables:
- var_name: NONE
@@ -98,8 +98,8 @@
- class_name: LaneMarkingType
# - DESCRIPTION ------------------------
doc: >
- Class that defines the lane marking types accepted by OpenDRIVE 1.4. Take a look at this [recipe](ref_code_recipes.md#lanes-recipe) where the user creates a carla.Waypoint for a vehicle location and retrieves from it the information about adjacent lane markings.
-
+ Class that defines the lane marking types accepted by OpenDRIVE 1.4. The snippet in carla.Map.get_waypoint shows how a waypoint can be used to retrieve information about adjacent lane markings.
+
__Note on double types:__ Lane markings are defined under the OpenDRIVE standard that determines whereas a line will be considered "BrokenSolid" or "SolidBroken". For each road there is a center lane marking, defined from left to right regarding the lane's directions. The rest of the lane markings are defined in order from the center lane to the closest outside of the road.
# - PROPERTIES -------------------------
instance_variables:
diff --git a/PythonAPI/docs/sensor_data.yml b/PythonAPI/docs/sensor_data.yml
index 961cde57f..d17ff758b 100644
--- a/PythonAPI/docs/sensor_data.yml
+++ b/PythonAPI/docs/sensor_data.yml
@@ -36,7 +36,7 @@
- class_name: ColorConverter
# - DESCRIPTION ------------------------
doc: >
- Class that defines conversion patterns that can be applied to a carla.Image in order to show information provided by carla.Sensor. Depth conversions cause a loss of accuracy, as sensors detect depth as float that is then converted to a grayscale value between 0 and 255. Take a look a this [recipe](ref_code_recipes.md#converted-image-recipe) to see an example of how to create and save image data for sensor.camera.semantic_segmentation.
+ Class that defines conversion patterns that can be applied to a carla.Image in order to show information provided by carla.Sensor. Depth conversions cause a loss of accuracy, as sensors detect depth as a float that is then converted to a grayscale value between 0 and 255. Take a look at the snippet in carla.Sensor.listen to see an example of how to create and save image data for sensor.camera.semantic_segmentation.
# - PROPERTIES -------------------------
instance_variables:
- var_name: CityScapesPalette
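The accuracy loss mentioned in the ColorConverter doc above comes from quantizing a float depth into one of 256 gray levels. A minimal sketch of where the error appears (carla's actual depth encoding packs the value across three channels, so this is illustrative only):

```python
def depth_to_grayscale(normalized_depth):
    """Quantize a depth in [0, 1] to a single 0-255 gray level.
    This truncation is the source of the accuracy loss."""
    return min(255, int(normalized_depth * 255.0))

def grayscale_to_depth(gray):
    """Best possible reconstruction after quantization."""
    return gray / 255.0
```

The round-trip error is bounded by one quantization step, 1/255 of the depth range.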
diff --git a/PythonAPI/docs/snipets/carla.ActorBlueprint.set_attribute.py b/PythonAPI/docs/snipets/carla.ActorBlueprint.set_attribute.py
new file mode 100755
index 000000000..84e81960c
--- /dev/null
+++ b/PythonAPI/docs/snipets/carla.ActorBlueprint.set_attribute.py
@@ -0,0 +1,19 @@
+
+# This recipe changes attributes of different types of blueprint actors.
+
+# ...
+walker_bp = world.get_blueprint_library().find('walker.pedestrian.0002')
+walker_bp.set_attribute('is_invincible', 'true')
+
+# ...
+# Change the color attribute to a random recommended value
+vehicle_bp = random.choice(world.get_blueprint_library().filter('vehicle.bmw.*'))
+color = random.choice(vehicle_bp.get_attribute('color').recommended_values)
+vehicle_bp.set_attribute('color', color)
+
+# ...
+
+camera_bp = world.get_blueprint_library().find('sensor.camera.rgb')
+camera_bp.set_attribute('image_size_x', '600')
+camera_bp.set_attribute('image_size_y', '600')
+# ...
diff --git a/PythonAPI/docs/snipets/carla.Client.__init__.py b/PythonAPI/docs/snipets/carla.Client.__init__.py
new file mode 100755
index 000000000..eec6b305c
--- /dev/null
+++ b/PythonAPI/docs/snipets/carla.Client.__init__.py
@@ -0,0 +1,30 @@
+
+# This recipe is shown in every script provided in PythonAPI/Examples
+# and is used to parse the client creation arguments when running the script.
+
+ argparser = argparse.ArgumentParser(
+ description=__doc__)
+ argparser.add_argument(
+ '--host',
+ metavar='H',
+ default='127.0.0.1',
+ help='IP of the host server (default: 127.0.0.1)')
+ argparser.add_argument(
+ '-p', '--port',
+ metavar='P',
+ default=2000,
+ type=int,
+ help='TCP port to listen to (default: 2000)')
+ argparser.add_argument(
+ '-s', '--speed',
+ metavar='FACTOR',
+ default=1.0,
+ type=float,
+ help='rate at which the weather changes (default: 1.0)')
+ args = argparser.parse_args()
+
+ speed_factor = args.speed
+ update_freq = 0.1 / speed_factor
+
+ client = carla.Client(args.host, args.port)
+
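The `update_freq = 0.1 / speed_factor` line in the snippet above ties the update period to the weather-change rate. The argument parsing itself can be exercised without a simulator by handing `parse_args` an explicit list:

```python
import argparse

def parse_client_args(argv):
    """Same arguments as the snippet above, parsed from an explicit list."""
    argparser = argparse.ArgumentParser(description=__doc__)
    argparser.add_argument(
        '--host', metavar='H', default='127.0.0.1',
        help='IP of the host server (default: 127.0.0.1)')
    argparser.add_argument(
        '-p', '--port', metavar='P', default=2000, type=int,
        help='TCP port to listen to (default: 2000)')
    argparser.add_argument(
        '-s', '--speed', metavar='FACTOR', default=1.0, type=float,
        help='rate at which the weather changes (default: 1.0)')
    return argparser.parse_args(argv)

# Doubling the speed factor halves the update period.
args = parse_client_args(['-s', '2.0'])
update_freq = 0.1 / args.speed
```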
diff --git a/PythonAPI/docs/snipets/carla.Client.apply_batch_sync.py b/PythonAPI/docs/snipets/carla.Client.apply_batch_sync.py
new file mode 100755
index 000000000..e761afdc4
--- /dev/null
+++ b/PythonAPI/docs/snipets/carla.Client.apply_batch_sync.py
@@ -0,0 +1,58 @@
+# 0. Choose a blueprint for the walkers
+world = client.get_world()
+blueprintsWalkers = world.get_blueprint_library().filter("walker.pedestrian.*")
+walker_bp = random.choice(blueprintsWalkers)
+
+# 1. Take all the random locations to spawn
+spawn_points = []
+for i in range(50):
+ spawn_point = carla.Transform()
+ spawn_point.location = world.get_random_location_from_navigation()
+ if (spawn_point.location != None):
+ spawn_points.append(spawn_point)
+
+# 2. Build the batch of commands to spawn the pedestrians
+batch = []
+for spawn_point in spawn_points:
+ walker_bp = random.choice(blueprintsWalkers)
+ batch.append(carla.command.SpawnActor(walker_bp, spawn_point))
+
+# 2.1 apply the batch
+results = client.apply_batch_sync(batch, True)
+for i in range(len(results)):
+ if results[i].error:
+ logging.error(results[i].error)
+ else:
+ walkers_list.append({"id": results[i].actor_id})
+
+# 3. Spawn walker AI controllers for each walker
+batch = []
+walker_controller_bp = world.get_blueprint_library().find('controller.ai.walker')
+for i in range(len(walkers_list)):
+ batch.append(carla.command.SpawnActor(walker_controller_bp, carla.Transform(), walkers_list[i]["id"]))
+
+# 3.1 apply the batch
+results = client.apply_batch_sync(batch, True)
+for i in range(len(results)):
+ if results[i].error:
+ logging.error(results[i].error)
+ else:
+ walkers_list[i]["con"] = results[i].actor_id
+
+# 4. Put together the walker and controller ids
+for i in range(len(walkers_list)):
+ all_id.append(walkers_list[i]["con"])
+ all_id.append(walkers_list[i]["id"])
+all_actors = world.get_actors(all_id)
+
+# wait for a tick to ensure client receives the last transform of the walkers we have just created
+world.wait_for_tick()
+
+# 5. initialize each controller and set target to walk to (list is [controller, actor, controller, actor ...])
+for i in range(0, len(all_actors), 2):
+ # start walker
+ all_actors[i].start()
+ # set walk to random point
+ all_actors[i].go_to_location(world.get_random_location_from_navigation())
+ # random max speed
+ all_actors[i].set_max_speed(1 + random.random()) # max speed between 1 and 2 (default is 1.4 m/s)
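Step 4 above interleaves controller and walker ids as `[controller, actor, controller, actor, ...]`, which is why step 5 iterates with a stride of 2. The indexing can be checked on its own, away from the simulator:

```python
def interleave_ids(walkers_list):
    """Flatten [{'id': ..., 'con': ...}, ...] into
    [controller, actor, controller, actor, ...] as step 4 does."""
    all_id = []
    for walker in walkers_list:
        all_id.append(walker["con"])
        all_id.append(walker["id"])
    return all_id

all_id = interleave_ids([{"id": 10, "con": 11}, {"id": 20, "con": 21}])
controllers = all_id[0::2]  # the entries step 5 calls start()/stop() on
```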
diff --git a/PythonAPI/docs/snipets/carla.DebugHelper.draw_box.py b/PythonAPI/docs/snipets/carla.DebugHelper.draw_box.py
new file mode 100755
index 000000000..1bdc967b4
--- /dev/null
+++ b/PythonAPI/docs/snipets/carla.DebugHelper.draw_box.py
@@ -0,0 +1,13 @@
+
+# This recipe shows how to draw traffic light actor bounding boxes from a world snapshot.
+
+# ....
+debug = world.debug
+world_snapshot = world.get_snapshot()
+
+for actor_snapshot in world_snapshot:
+ actual_actor = world.get_actor(actor_snapshot.id)
+ if actual_actor.type_id == 'traffic.traffic_light':
+ debug.draw_box(carla.BoundingBox(actor_snapshot.get_transform().location,carla.Vector3D(0.5,0.5,2)),actor_snapshot.get_transform().rotation, 0.05, carla.Color(255,0,0,0),0)
+# ...
+
diff --git a/PythonAPI/docs/snipets/carla.DebugHelper.draw_line.py b/PythonAPI/docs/snipets/carla.DebugHelper.draw_line.py
new file mode 100755
index 000000000..38be0feb3
--- /dev/null
+++ b/PythonAPI/docs/snipets/carla.DebugHelper.draw_line.py
@@ -0,0 +1,24 @@
+
+# This recipe is a modification of lane_explorer.py example.
+# It draws the path of an actor through the world, printing information at each waypoint.
+
+# ...
+current_w = map.get_waypoint(vehicle.get_location())
+while True:
+
+ next_w = map.get_waypoint(vehicle.get_location(), lane_type=carla.LaneType.Driving | carla.LaneType.Shoulder | carla.LaneType.Sidewalk )
+ # Check if the vehicle is moving
+ if next_w.id != current_w.id:
+ vector = vehicle.get_velocity()
+ # Check if the vehicle is on a sidewalk
+ if current_w.lane_type == carla.LaneType.Sidewalk:
+ draw_waypoint_union(debug, current_w, next_w, cyan if current_w.is_junction else red, 60)
+ else:
+ draw_waypoint_union(debug, current_w, next_w, cyan if current_w.is_junction else green, 60)
+ debug.draw_string(current_w.transform.location, str('%15.0f km/h' % (3.6 * math.sqrt(vector.x**2 + vector.y**2 + vector.z**2))), False, orange, 60)
+ draw_transform(debug, current_w.transform, white, 60)
+
+ # Update the current waypoint and sleep for some time
+ current_w = next_w
+ time.sleep(args.tick_time)
+# ...
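The `draw_string` call in the snippet above converts the velocity vector from m/s to km/h with the `3.6 *` factor; that computation is easy to verify in isolation:

```python
import math

def speed_kmh(vector):
    """Magnitude of an (x, y, z) velocity in m/s, converted to km/h."""
    x, y, z = vector
    return 3.6 * math.sqrt(x**2 + y**2 + z**2)

# A 3-4-0 velocity has magnitude 5 m/s, i.e. 18 km/h.
label = '%15.0f km/h' % speed_kmh((3.0, 4.0, 0.0))
```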
diff --git a/PythonAPI/docs/snipets/carla.Map.get_waypoint.py b/PythonAPI/docs/snipets/carla.Map.get_waypoint.py
new file mode 100755
index 000000000..0e93dcde4
--- /dev/null
+++ b/PythonAPI/docs/snipets/carla.Map.get_waypoint.py
@@ -0,0 +1,15 @@
+
+# This recipe shows the current traffic rules affecting the vehicle.
+# It shows the current lane type and whether a lane change is allowed in the current lane or the surrounding ones.
+
+# ...
+waypoint = world.get_map().get_waypoint(vehicle.get_location(),project_to_road=True, lane_type=(carla.LaneType.Driving | carla.LaneType.Shoulder | carla.LaneType.Sidewalk))
+print("Current lane type: " + str(waypoint.lane_type))
+# Check current lane change allowed
+print("Current Lane change: " + str(waypoint.lane_change))
+# Left and Right lane markings
+print("L lane marking type: " + str(waypoint.left_lane_marking.type))
+print("L lane marking change: " + str(waypoint.left_lane_marking.lane_change))
+print("R lane marking type: " + str(waypoint.right_lane_marking.type))
+print("R lane marking change: " + str(waypoint.right_lane_marking.lane_change))
+# ...
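The `lane_type=(carla.LaneType.Driving | carla.LaneType.Shoulder | carla.LaneType.Sidewalk)` argument above works because the lane types behave as bit flags. A standalone `IntFlag` sketch of that filtering (the values here are illustrative, not carla's own constants):

```python
from enum import IntFlag

class LaneType(IntFlag):
    """Illustrative flag values; carla defines its own constants."""
    NONE = 0
    Driving = 1
    Shoulder = 2
    Sidewalk = 4
    Parking = 8

# Combine several lane types into one query mask, as in the snippet above.
mask = LaneType.Driving | LaneType.Shoulder | LaneType.Sidewalk

def matches(lane_type, mask):
    """A waypoint query keeps a lane when its type is in the mask."""
    return bool(lane_type & mask)
```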
diff --git a/PythonAPI/docs/snipets/carla.Sensor.listen.py b/PythonAPI/docs/snipets/carla.Sensor.listen.py
new file mode 100755
index 000000000..f6acde605
--- /dev/null
+++ b/PythonAPI/docs/snipets/carla.Sensor.listen.py
@@ -0,0 +1,10 @@
+
+# This recipe applies a color conversion to the image taken by a camera sensor,
+# so it is converted to a semantic segmentation image.
+
+# ...
+camera_bp = world.get_blueprint_library().find('sensor.camera.semantic_segmentation')
+# ...
+cc = carla.ColorConverter.CityScapesPalette
+camera.listen(lambda image: image.save_to_disk('output/%06d.png' % image.frame, cc))
+# ...
diff --git a/PythonAPI/docs/snipets/carla.TrafficLight.set_state.py b/PythonAPI/docs/snipets/carla.TrafficLight.set_state.py
new file mode 100755
index 000000000..6bcb71fb3
--- /dev/null
+++ b/PythonAPI/docs/snipets/carla.TrafficLight.set_state.py
@@ -0,0 +1,28 @@
+
+# This recipe changes the traffic light that affects the vehicle from red to green.
+# This is done by detecting whether the vehicle actor is at a traffic light.
+
+# ...
+world = client.get_world()
+spectator = world.get_spectator()
+
+vehicle_bp = random.choice(world.get_blueprint_library().filter('vehicle.bmw.*'))
+transform = random.choice(world.get_map().get_spawn_points())
+vehicle = world.try_spawn_actor(vehicle_bp, transform)
+
+# Wait for world to get the vehicle actor
+world.tick()
+
+world_snapshot = world.wait_for_tick()
+actor_snapshot = world_snapshot.find(vehicle.id)
+
+# Set spectator at given transform (vehicle transform)
+spectator.set_transform(actor_snapshot.get_transform())
+# ...
+
+# ...
+if vehicle_actor.is_at_traffic_light():
+ traffic_light = vehicle_actor.get_traffic_light()
+ if traffic_light.get_state() == carla.TrafficLightState.Red:
+ # world.hud.notification("Traffic light changed! Good to go!")
+ traffic_light.set_state(carla.TrafficLightState.Green)
+# ...
+
diff --git a/PythonAPI/docs/snipets/carla.WalkerAIController.stop.py b/PythonAPI/docs/snipets/carla.WalkerAIController.stop.py
new file mode 100755
index 000000000..de79b1292
--- /dev/null
+++ b/PythonAPI/docs/snipets/carla.WalkerAIController.stop.py
@@ -0,0 +1,9 @@
+
+# To destroy the pedestrians, stop them from navigating and then destroy the objects (actor and controller).
+
+# stop pedestrians (list is [controller, actor, controller, actor ...])
+for i in range(0, len(all_id), 2):
+ all_actors[i].stop()
+
+# destroy pedestrian (actor and controller)
+client.apply_batch([carla.command.DestroyActor(x) for x in all_id])
diff --git a/PythonAPI/docs/snipets/carla.World.get_spectator.py b/PythonAPI/docs/snipets/carla.World.get_spectator.py
new file mode 100755
index 000000000..8c94f9511
--- /dev/null
+++ b/PythonAPI/docs/snipets/carla.World.get_spectator.py
@@ -0,0 +1,20 @@
+
+# This recipe spawns an actor and the spectator camera at the actor's location.
+
+# ...
+world = client.get_world()
+spectator = world.get_spectator()
+
+vehicle_bp = random.choice(world.get_blueprint_library().filter('vehicle.bmw.*'))
+transform = random.choice(world.get_map().get_spawn_points())
+vehicle = world.try_spawn_actor(vehicle_bp, transform)
+
+# Wait for world to get the vehicle actor
+world.tick()
+
+world_snapshot = world.wait_for_tick()
+actor_snapshot = world_snapshot.find(vehicle.id)
+
+# Set spectator at given transform (vehicle transform)
+spectator.set_transform(actor_snapshot.get_transform())
+# ...
diff --git a/PythonAPI/docs/snipets/carla.World.spawn_actor.py b/PythonAPI/docs/snipets/carla.World.spawn_actor.py
new file mode 100755
index 000000000..21115ab10
--- /dev/null
+++ b/PythonAPI/docs/snipets/carla.World.spawn_actor.py
@@ -0,0 +1,10 @@
+
+# This recipe attaches different cameras and sensors to a vehicle, using different attachment types.
+
+# ...
+camera = world.spawn_actor(rgb_camera_bp, transform, attach_to=vehicle, attachment_type=Attachment.Rigid)
+# Default attachment: Attachment.Rigid
+gnss_sensor = world.spawn_actor(sensor_gnss_bp, transform, attach_to=vehicle)
+collision_sensor = world.spawn_actor(sensor_collision_bp, transform, attach_to=vehicle)
+lane_invasion_sensor = world.spawn_actor(sensor_lane_invasion_bp, transform, attach_to=vehicle)
+# ...
diff --git a/PythonAPI/docs/world.yml b/PythonAPI/docs/world.yml
index af98f9bad..19e698a66 100644
--- a/PythonAPI/docs/world.yml
+++ b/PythonAPI/docs/world.yml
@@ -174,7 +174,7 @@
- class_name: AttachmentType
# - DESCRIPTION ------------------------
doc: >
- Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is specially useful for sensors. [Here](ref_code_recipes.md#attach-sensors-recipe) is a brief recipe in which we can see how sensors can be attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
+ Class that defines attachment options between an actor and its parent. When spawning actors, these can be attached to another actor so their position changes accordingly. This is especially useful for sensors. The snippet in carla.World.spawn_actor shows some sensors being attached to a car when spawned. Note that the attachment type is declared as an enum within the class.
# - PROPERTIES -------------------------
instance_variables:
@@ -183,7 +183,7 @@
With this fixed attachment the object follows its parent position strictly. This is the recommended attachment to retrieve precise data from the simulation.
- var_name: SpringArm
doc: >
- An attachment that expands or retracts the position of the actor, depending on its parent. This attachment is only recommended to record videos from the simulation where a smooth movement is needed. SpringArms are an Unreal Engine component so [check this out](ref_code_recipes.md#attach-sensors-recipe) to learn some more about them. Warning: The SpringArm attachment presents weird behaviors when an actor is spawned with a relative translation in the Z-axis (e.g. child_location = Location(0,0,2)).
+ An attachment that expands or retracts the position of the actor, depending on its parent. This attachment is only recommended for recording videos from the simulation, where smooth movement is needed. SpringArms are an Unreal Engine component, so [check the UE docs](https://docs.unrealengine.com/en-US/Gameplay/HowTo/UsingCameras/SpringArmComponents/index.html) to learn more about them. Warning: The SpringArm attachment presents weird behaviors when an actor is spawned with a relative translation in the Z-axis (e.g. child_location = Location(0,0,2)).
# --------------------------------------
- class_name: World
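The SpringArm doc above says the attachment "expands or retracts" relative to its parent: the camera lags behind a moving target instead of copying its transform rigidly. A dependency-free 1-D sketch of that behavior (the smoothing factor is an assumption for illustration, not an engine value):

```python
def spring_arm_follow(targets, alpha=0.3, start=0.0):
    """Exponentially lag a camera position toward a moving target.

    Toy 1-D model of spring-arm smoothing: each step moves a fraction
    `alpha` of the remaining distance. A Rigid attachment would instead
    copy the target exactly every step.
    """
    pos = start
    history = []
    for target in targets:
        pos += alpha * (target - pos)
        history.append(round(pos, 3))
    return history

# Target jumps to 10 and stays there: the camera eases toward it
# instead of snapping, which is what makes the footage smooth.
print(spring_arm_follow([10.0] * 5))  # [3.0, 5.1, 6.57, 7.599, 8.319]
```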
@@ -430,7 +430,7 @@
- class_name: DebugHelper
# - DESCRIPTION ------------------------
doc: >
- Helper class part of carla.World that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Check out this [recipe](ref_code_recipes.md#debug-bounding-box-recipe) where the user takes a snapshot of the world and then proceeds to draw bounding boxes for traffic lights.
+ Helper class part of carla.World that defines methods for creating debug shapes. By default, shapes last one second. They can be permanent, but take into account the resources needed to do so. Take a look at the snippets available for this class to learn how to debug easily in CARLA.
# - METHODS ----------------------------
methods:
- def_name: draw_arrow
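The DebugHelper description above says shapes last one second by default and can be made permanent at a resource cost. A dependency-free sketch of the lifetime bookkeeping that implies (function names are illustrative, not CARLA API; in CARLA you pass a life_time argument to draw_box, draw_line, etc.):

```python
# Toy model of debug-shape lifetimes. In this sketch, life_time <= 0
# means permanent; anything else expires after that many seconds.
# Illustrative only, not the CARLA implementation.

def draw(shapes, name, now, life_time=1.0):
    """Record a shape with its expiry time (None = permanent)."""
    expiry = None if life_time <= 0 else now + life_time
    shapes.append((name, expiry))

def alive(shapes, now):
    """Shapes still visible at time `now`; permanent ones never expire."""
    return [name for name, expiry in shapes if expiry is None or expiry > now]

shapes = []
draw(shapes, "box", now=0.0)                 # default: lasts one second
draw(shapes, "line", now=0.0, life_time=5.0)
draw(shapes, "arrow", now=0.0, life_time=0)  # permanent

print(alive(shapes, now=0.5))   # ['box', 'line', 'arrow']
print(alive(shapes, now=2.0))   # ['line', 'arrow']
print(alive(shapes, now=10.0))  # ['arrow']
```

Permanent shapes accumulate for the rest of the simulation, which is the resource cost the description warns about.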
diff --git a/mkdocs.yml b/mkdocs.yml
index 3cfaa9db5..e694596e5 100644
--- a/mkdocs.yml
+++ b/mkdocs.yml
@@ -34,7 +34,6 @@ nav:
- 'Traffic Manager': 'adv_traffic_manager.md'
- References:
- 'Python API reference': 'python_api.md'
- - 'Code recipes': 'ref_code_recipes.md'
- 'Blueprint Library': 'bp_library.md'
- 'C++ reference' : 'ref_cpp.md'
- 'Recorder binary file format': 'ref_recorder_binary_file_format.md'
@@ -60,7 +59,7 @@ nav:
- "Material customization": 'tuto_A_material_customization.md'
- 'Vehicle modelling': 'tuto_A_vehicle_modelling.md'
- Tutorials (developers):
- - 'Contribute with assets': 'tuto_D_contribute_assets.md'
+ - 'Contribute assets': 'tuto_D_contribute_assets.md'
- 'Create a sensor': 'tuto_D_create_sensor.md'
- 'Create semantic tags': 'tuto_D_create_semantic_tags.md'
- 'Customize vehicle suspension': 'tuto_D_customize_vehicle_suspension.md'