New draft with steps 1 to 3 (#2457)

sergi.e 2020-02-19 10:10:48 +01:00 committed by GitHub
parent 18e65f3eca
commit db81cb6c91
9 changed files with 849 additions and 16 deletions

Docs/core_actors.md Normal file

@@ -0,0 +1,268 @@
<h1>2nd. Actors and blueprints</h1>
The actors in CARLA include almost everything playing a role in the simulation. That includes not only vehicles and walkers, but also sensors, traffic signs, traffic lights and the spectator, the camera that provides the simulation's point of view. They are crucial, so it is important to fully understand how to operate them.
This section covers the basics, from spawning to destruction, as well as the different types of actors. However, the possibilities they present are almost endless. This is only the first step: from here, experiment, take a look at the __How to's__ in this documentation and share doubts and ideas in the [CARLA forum](https://forum.carla.org/).
* [__Blueprints__](#blueprints)
* Managing the blueprint library
* [__Actor life cycle__](#actor-life-cycle):
* Spawning
* Handling
* Destruction
* [__Types of actors__](#types-of-actors):
* Sensors
* Spectator
* Traffic signs and traffic lights
* Vehicles
* Walkers
---------------
##Blueprints
These layouts allow the user to smoothly add new actors into the simulation. They are basically already-made models with a series of listed attributes, some of which are modifiable and others are not. Modifiable attributes include vehicle color, amount of channels in a lidar sensor, _fov_ in a camera or a walker's speed, and these can be changed at will. All the available blueprints are listed in the [blueprint library](bp_library.md) with their attributes and a tag identifying which ones can be set by the user.
<h4>Managing the blueprint library</h4>
There is a [carla.BlueprintLibrary](python_api.md#carla.BlueprintLibrary) class containing a list of [carla.ActorBlueprint](python_api.md#carla.ActorBlueprint) elements. The world object provides access to an instance of it:
```py
blueprint_library = world.get_blueprint_library()
```
Each blueprint has its own ID, useful to identify it and the actors spawned with it. The library can be read to find a certain ID, to choose a blueprint at random, or to filter results using a [wildcard pattern](https://tldp.org/LDP/GNU-Linux-Tools-Summary/html/x11655.htm):
```py
# Find a specific blueprint.
collision_sensor_bp = blueprint_library.find('sensor.other.collision')
# Choose a vehicle blueprint at random.
vehicle_bp = random.choice(blueprint_library.filter('vehicle.*.*'))
```
Besides that, each [carla.ActorBlueprint](python_api.md#carla.ActorBlueprint) has a series of [carla.ActorAttribute](python_api.md#carla.ActorAttribute) elements that can be read, modified, and checked for existence:
```py
# Attribute values are returned as carla.ActorAttribute, so cast before comparing.
is_bike = vehicle_bp.get_attribute('number_of_wheels').as_int() == 2
if is_bike:
    vehicle_bp.set_attribute('color', '255,0,0')
```
!!! Note
Some of the attributes cannot be modified. Check it out in the [blueprint library](bp_library.md).
Attributes have a helper class [carla.ActorAttributeType](python_api.md#carla.ActorAttributeType) which defines possible types as enums. Also, modifiable attributes come with a __list of recommended values__:
```py
for attr in blueprint:
    if attr.is_modifiable:
        blueprint.set_attribute(attr.id, random.choice(attr.recommended_values))
```
!!! Note
Users can create their own vehicles. Take a look at the tutorials in __How to... (content)__ to learn how. Contributors can [add their new content to CARLA](dev/how_to_upgrade_content.md).
---------------
##Actor life cycle
!!! Important
Throughout this section, many different functions and methods regarding actors will be covered. The Python API provides __[commands](python_api.md#command.SpawnActor)__ to apply batches of these common functions (such as spawning or destroying actors) in just one frame.
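As a minimal sketch of the idea (assuming a connected `client`, the `blueprint_library` from above and a list of transforms named `spawn_points` already exist), several actors can be spawned in a single batch:
```py
import random

# One SpawnActor command per transform, all applied during the same frame.
batch = [carla.command.SpawnActor(random.choice(blueprint_library.filter('vehicle.*.*')), point)
         for point in spawn_points]
responses = client.apply_batch_sync(batch)  # one carla.command.Response per command
```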
<h4>Spawning</h4>
The world object is responsible for spawning actors and keeping track of those currently in the scene. Spawning only requires a blueprint and a [carla.Transform](python_api.md#carla.Transform), which basically defines a location and rotation for the actor.
```py
transform = Transform(Location(x=230, y=195, z=40), Rotation(yaw=180))
actor = world.spawn_actor(blueprint, transform)
```
In case of collision at the specified location, the actor will not spawn. This may happen when trying to spawn inside a static object, but also if there is another actor currently at said point. The world has two different methods to spawn actors: [`spawn_actor()`](python_api.md#carla.World.spawn_actor) and [`try_spawn_actor()`](python_api.md#carla.World.try_spawn_actor).
The former raises an exception if the actor could not be spawned; the latter returns `None` instead.
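A short sketch of the difference, assuming `blueprint` and `transform` are already defined:
```py
# try_spawn_actor() fails silently, which makes occupied locations easy to handle.
actor = world.try_spawn_actor(blueprint, transform)
if actor is None:
    print('Spawn failed: the location is probably occupied.')
```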
To minimize these collisions, the world can ask the map for a list of recommended transforms to use as spawning points __for vehicles__:
```py
spawn_points = world.get_map().get_spawn_points()
```
Much the same goes __for walkers__, but in this case the world provides a random point on a sidewalk (this same method is used to set goal locations for walkers):
```py
spawn_point = carla.Transform()
spawn_point.location = world.get_random_location_from_navigation()
```
Finally, an actor can be attached to another one when spawned, meaning it will follow the parent object around. This is especially useful for sensors. The attachment can be rigid or smooth, as defined by the helper class [carla.AttachmentType](python_api.md#carla.AttachmentType).
The next example attaches a camera rigidly to a vehicle, so their relative position remains fixed.
```py
camera = world.spawn_actor(camera_bp, relative_transform, attach_to=my_vehicle)
```
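For a smoother follow, the attachment type can be set explicitly. A sketch reusing the same `camera_bp`, `relative_transform` and `my_vehicle`; the keyword argument name is assumed to match the examples shipped with the simulator:
```py
# SpringArm eases the camera movement instead of keeping a perfectly rigid offset.
camera = world.spawn_actor(camera_bp, relative_transform, attach_to=my_vehicle,
                           attachment_type=carla.AttachmentType.SpringArm)
```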
!!! Note
When spawning attached actors, the transform provided must be relative to the parent actor.
Once spawned, the world object adds the actors to a list, keeping track of the current state of the simulation. This list can be iterated on or searched easily:
```py
actor_list = world.get_actors()
# Find an actor by id.
actor = actor_list.find(id)
# Print the location of all the speed limit signs in the world.
for speed_sign in actor_list.filter('traffic.speed_limit.*'):
    print(speed_sign.get_location())
```
<h4>Handling</h4>
Once an actor is spawned, handling it is quite straightforward. The [carla.Actor](python_api.md#carla.Actor) class mostly consists of _get()_ and _set()_ methods to manage the actors around the map:
```py
location = actor.get_location()
location.z += 10.0
actor.set_location(location)
print(actor.get_acceleration())
print(actor.get_velocity())
```
The actor's physics can be disabled to freeze it in place.
```py
actor.set_simulate_physics(False)
```
Besides that, actors also have tags provided by their blueprints that are mostly useful for semantic segmentation sensors, though this will be covered later in this documentation.
!!! Important
Most of the methods send requests to the simulator asynchronously. The simulator queues these, but has a limited amount of time each update to parse them. Flooding the simulator with lots of "set" methods will accumulate a significant lag.
<h4>Destruction</h4>
To remove an actor from the simulation, the actor can destroy itself and report whether it succeeded:
```py
destroyed_successfully = actor.destroy()
```
Actors are not destroyed when the Python script finishes; they remain in the simulation, and the world keeps track of them until they are explicitly destroyed.
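A common cleanup pattern is to keep a list of everything a script spawned and destroy it before exiting. A sketch, where `spawned_actors` is a hypothetical list kept by the script:
```py
# Destroy every actor this script created.
for actor in spawned_actors:
    actor.destroy()
```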
!!! Important
Destroying an actor blocks the simulator until the process finishes.
---------------
##Types of actors
<h4>Sensors</h4>
Sensors are actors that produce a stream of data. They are so important and varied that they are covered in depth in their own section: [4th. Sensors and data](cameras_and_sensors.md).
So far, let's just take a look at a common sensor spawning routine:
```py
camera_bp = blueprint_library.find('sensor.camera.rgb')
camera = world.spawn_actor(camera_bp, relative_transform, attach_to=my_vehicle)
camera.listen(lambda image: image.save_to_disk('output/%06d.png' % image.frame))
```
This example spawns a camera sensor, attaches it to a vehicle, and tells the camera to save each of the images it generates to disk. Three main highlights here:
* They have blueprints too. Each sensor works differently so setting attributes is especially important.
* Most of the sensors will be attached to a vehicle to gather information on its surroundings.
* Sensors __listen__ to data. When receiving it, they call a function described with __[Lambda expressions](https://docs.python.org/3/reference/expressions.html)__, so it is advisable to learn about them beforehand <small>(6.13 in the link provided)</small>.
<h4>Spectator</h4>
This unique actor is the one element placed by Unreal Engine to provide an in-game point of view. It can be used to move the view of the simulator window.
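A quick sketch of how the spectator can be moved; assuming a `vehicle` actor already exists, the view is placed above it, looking down:
```py
spectator = world.get_spectator()
transform = vehicle.get_transform()
transform.location.z += 30        # 30 meters above the vehicle
transform.rotation.pitch = -90    # looking straight down
spectator.set_transform(transform)
```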
<h4>Traffic signs and traffic lights</h4>
So far, CARLA is only aware of some signs: stop, yield and speed limit, described with [carla.TrafficSign](python_api.md#carla.TrafficSign). Traffic lights are considered an inherited class named [carla.TrafficLight](python_api.md#carla.TrafficLight). __None of these can be found in the blueprint library__ and thus, they cannot be spawned. They set traffic conditions, so they are mindfully placed by developers.
Traffic signs are not defined in the road map itself (more on that in the following page). Instead, they have a [carla.BoundingBox](python_api.md#carla.BoundingBox) that is triggered when a vehicle is inside of it, meaning that vehicle is affected by the traffic sign.
```py
# Get the traffic light affecting a vehicle
if vehicle_actor.is_at_traffic_light():
    traffic_light = vehicle_actor.get_traffic_light()
```
!!! Note
Traffic lights will only affect a vehicle if the light is red.
Traffic lights are found in junctions, so they belong to a `group` and are identified by a `pole` number inside that group.
A group of traffic lights follows a cycle in which a pole is set to green, yellow and then red while the rest remain frozen in red. After spending a certain amount of seconds per state (including red, meaning there is a period of time in which all lights are red), the next pole starts its cycle and the previous one is frozen with the rest.
State and time per state can be easily accessed using _get()_ and _set()_ methods described in said class. Possible states are described with [carla.TrafficLightState](python_api.md#carla.TrafficLightState) as a series of enum values.
```py
# Change a red traffic light to green
if traffic_light.get_state() == carla.TrafficLightState.Red:
    traffic_light.set_state(carla.TrafficLightState.Green)
```
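The timing methods work in a similar fashion. A small sketch (times are in seconds):
```py
# Extend the green period of this traffic light and check the new value.
traffic_light.set_green_time(10.0)
print(traffic_light.get_green_time())
```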
__Speed limit signs__ are a special case: the speed limit is codified in their `type_id`.
<h4>Vehicles</h4>
[carla.Vehicle](python_api.md#carla.Vehicle) is a special type of actor that provides finer physics control. This is achieved by applying four different types of controls:
* __[carla.VehicleControl](python_api.md#carla.VehicleControl)__: provides input for driving commands such as throttle, steering, brake, etc.
```py
vehicle.apply_control(carla.VehicleControl(throttle=1.0, steer=-1.0))
```
* __[carla.VehiclePhysicsControl](python_api.md#carla.VehiclePhysicsControl)__: defines many physical attributes of the vehicle, from mass and drag coefficient to torque, maximum rpm, clutch strength and many more. This controller contains two more controllers within: one for the gears, [carla.GearPhysicsControl](python_api.md#carla.GearPhysicsControl), and a list of [carla.WheelPhysicsControl](python_api.md#carla.WheelPhysicsControl) that provides specific control over the different wheels of the vehicle for a realistic result.
```py
vehicle.apply_physics_control(carla.VehiclePhysicsControl(max_rpm = 5000.0, center_of_mass = carla.Vector3D(0.0, 0.0, 0.0), torque_curve=[[0,400],[5000,400]]))
```
Vehicles can be set to an __autopilot mode__, which subscribes them to the Traffic Manager, an advanced CARLA module that needs its own section to be fully understood. For now, it is enough to know that it conducts all the vehicles set to autopilot in order to simulate real urban conditions:
```py
vehicle.set_autopilot(True)
```
!!! Note
The traffic manager module is hard-coded, not based on machine learning.
Finally, vehicles also have a [carla.BoundingBox](python_api.md#carla.BoundingBox) that encapsulates them and is used for collision detection:
```py
box = vehicle.bounding_box
print(box.location) # Location relative to the vehicle.
print(box.extent) # XYZ half-box extents in meters.
```
<h4>Walkers</h4>
[carla.Walker](python_api.md#carla.Walker) actors are moving actors, so they work in quite a similar way as vehicles do. Control over them is provided by controllers:
* __[carla.WalkerControl](python_api.md#carla.WalkerControl)__: moves the pedestrian around with a certain direction and speed. It also allows them to jump (see the sketch after this list).
* __[carla.WalkerBoneControl](python_api.md#carla.WalkerBoneControl)__: provides control over the specific bones of the 3D model. The skeleton structure and how to control it are summarized in this __[How to](walker_bone_control.md)__.
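A minimal sketch of applying a direct control to a walker, assuming a `walker` actor has already been spawned:
```py
control = carla.WalkerControl()
control.direction = carla.Vector3D(1.0, 0.0, 0.0)  # walk along the x axis
control.speed = 1.5                                 # meters per second
walker.apply_control(control)
```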
Walkers can be AI controlled. They do not have an autopilot mode, but there is another actor, [carla.WalkerAIController](python_api.md#carla.WalkerAIController), that, when spawned attached to a walker, can move it around:
```py
walker_controller_bp = world.get_blueprint_library().find('controller.ai.walker')
world.spawn_actor(walker_controller_bp, carla.Transform(), attach_to=parent_walker)
```
The AI controller is bodiless, so it will not appear in the scene. Passing location (0,0,0) relative to its parent will not cause a collision.
For the walkers to start wandering around, each AI controller has to be initialized and given a goal and, optionally, a speed. Stopping the controller works in the same manner:
```py
ai_controller.start()
ai_controller.go_to_location(world.get_random_location_from_navigation())
ai_controller.set_max_speed(1 + random.random()) # Between 1 and 2 m/s (default is 1.4 m/s).
...
ai_controller.stop()
```
When a walker reaches the target location, it automatically walks to another random point. If the target point is not reachable, the walker goes to the closest reachable point in its current area.
For a more advanced reference on how to use this, take a look at [this recipe](python_cookbook.md#walker-batch-recipe), where a lot of walkers are spawned and set to wander around using batches.
!!! Note
To **destroy the pedestrians**, the AI controller needs to be stopped first and then, both actor and controller should be destroyed.
---------------
That is a wrap on actors in CARLA.
The next step should be learning more about the map, roads and traffic in CARLA. Keep reading to learn more or visit the forum to post any doubts or suggestions that have come to mind during this reading:
<div style="text-align: center">
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="forum.carla.org" target="_blank" class="btn btn-neutral" title="CARLA forum">
CARLA forum</a>
</p>
</div>
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="../core_actors" target="_blank" class="btn btn-neutral" title="3rd. Maps and navigation">
3rd. Maps and navigation</a>
</p>
</div>
</div>

Docs/core_concepts.md Normal file

@@ -0,0 +1,66 @@
<h1>Core concepts</h1>
This section summarizes the main features and modules in CARLA. While this page is just an overview, the rest of the information can be found in the respective pages, including fragments of code and in-depth explanations.
In order to learn everything about the different classes and methods in the API, take a look at the [Python API reference](python_api.md). There is also another reference named [Code recipes](python_cookbook.md) containing some of the most common fragments of code regarding different functionalities that could be especially useful during these first steps.
* [__First steps__](#first-steps)
* 1st. World and client
* 2nd. Actors and blueprints
* 3rd. Maps and navigation
* 4th. Sensors and data
* [__Advanced steps__](#advanced-steps)
!!! Important
**This documentation refers to CARLA 0.9.X**. <br>
The API changed significantly from previous versions (0.8.X). There is another documentation regarding those versions that can be found [here](https://carla.readthedocs.io/en/stable/getting_started/).
---------------
##First steps
<h4>1st. World and client</h4>
The client is the module the user runs to ask for information or changes in the simulation. It communicates with the server over a TCP connection, using an IP address and a specific port. There can be many clients running at the same time, although managing multiple clients requires a thorough understanding of CARLA to make things work properly.
The world is an object representing the simulation, an abstract layer containing the main methods to manage it: spawn actors, change the weather, get the current state of the world, etc. There is only one world per simulation, but it will be destroyed and replaced by a new one when the map is changed.
<h4>2nd. Actors and blueprints</h4>
In CARLA, an actor is anything that plays a role in the simulation. That includes:
* Vehicles.
* Walkers.
* Sensors.
* The spectator.
* Traffic signs and traffic lights.
A blueprint is needed in order to spawn an actor. __Blueprints__ are a set of already-made actor layouts: models with animations and different attributes. Some of these attributes can be set by the user, others cannot. There is a library provided by CARLA containing all the available blueprints and the information regarding them. Visit the [Blueprint library](bp_library.md) to learn more about this.
<h4>3rd. Maps and navigation</h4>
The map is the object representing the model of the world. There are many maps available, seven at the time of writing, and all of them use the OpenDRIVE 1.4 standard to describe the roads.
Roads, lanes and junctions are managed by the API to be accessed using different classes and methods. These are later used along with the waypoint class to provide vehicles with a navigation path.
Traffic signs and traffic lights have bounding boxes placed on the road that make vehicles aware of them and their current state in order to set traffic conditions.
<h4>4th. Sensors and data</h4>
Sensors are one of the most important actors in CARLA and their use can be quite complex. A sensor is attached to a parent vehicle and follows it around, gathering information about its surroundings for the sake of learning. Sensors, as any other actor, have blueprints available in the [Blueprint library](bp_library.md) that correspond to the types available. Currently, these are:
* Cameras (RGB, depth and semantic segmentation).
* Collision detector.
* GNSS sensor.
* IMU sensor.
* Lidar raycast.
* Lane invasion detector.
* Obstacle detector.
* Radar.
Sensors wait for some event to happen, gather data, and then call a function defining what to do with it. Depending on the sensor, the data retrieved and the way to use it vary substantially.
---------------
##Advanced steps
Some more complex elements and features in CARLA are listed here to make newcomers familiar with their existence. However, it is highly encouraged to first take a closer look at the first steps pages in order to learn the basics.
- **Recorder:** CARLA feature that allows for reenacting previous simulations using snapshots of the world.
- **Rendering options:** Some advanced configuration options in CARLA that allow for different graphics quality, off-screen rendering and a no-rendering mode.
- **Simulation time and synchrony:** Everything regarding the simulation time and how the server runs the simulation depending on the clients.
- **Traffic manager:** This module is in charge of every vehicle set to autopilot mode. It conducts the traffic in the city for the simulation to look like a real urban environment.

Docs/core_map.md Normal file

@@ -0,0 +1,164 @@
<h1>3rd. Maps and navigation</h1>
After discussing the world and its actors, it is time to put everything in place and understand the map and how the actors navigate it.
* [__The map__](#the-map)
* Map changing
* Lanes
* Junctions
* Waypoints
* [__Map navigation__](#map-navigation)
---------------
##The map
Understanding the map in CARLA is equivalent to understanding the road. All of the maps have an OpenDRIVE file defining the road layout fully annotated. The way the [OpenDRIVE standard 1.4](http://www.opendrive.org/docs/OpenDRIVEFormatSpecRev1.4H.pdf) defines roads, lanes, junctions, etc. is extremely important. It determines the possibilities of the API and the reasoning behind many decisions made.
The Python API provides a higher level querying system to navigate these roads. It is constantly evolving to provide a wider set of tools.
<h4>Changing the map</h4>
This was briefly mentioned in [1st. World and client](core_world.md), so let's expand a bit on it: __To change the map, the world has to change too__. Everything will be rebooted and created from scratch, besides the Unreal Editor itself.
Using `reload_world()` creates a new instance of the world with the same map while `load_world()` is used to change the current one:
```py
world = client.load_world('Town01')
```
The client can also get a list of available maps. Each map has a `name` attribute that matches the name of the city, e.g. _Town01_:
```py
print(client.get_available_maps())
```
So far there are seven different maps available. Each of these has a specific structure or unique features that are useful for different purposes, so here is a brief summary:
Town | Summary
-- | --
__Town 01__ | As __Town 02__, a basic town layout with all "T junctions". These are the most stable.
__Town 02__ | As __Town 01__, a basic town layout with all "T junctions". These are the most stable.
__Town 03__ | The most complex town with a roundabout, unevenness, a tunnel. Essentially a medley.
__Town 04__ | An infinite loop in a highway.
__Town 05__ | Squared-grid town with cross junctions and a bridge.
__Town 06__ | Long highways with a lane exit and a [Michigan left](https://en.wikipedia.org/wiki/Michigan_left).
__Town 07__ | A rural environment with narrow roads, hardly any traffic lights, and barns.
Users can also [customize a map](dev/map_customization.md) or even [create a new map](how_to_make_a_new_map.md) to be used in CARLA. These are more advanced steps and have been developed in their own tutorials.
<h4>Lanes</h4>
The different types of lane as defined by the [OpenDRIVE standard 1.4](http://www.opendrive.org/docs/OpenDRIVEFormatSpecRev1.4H.pdf) are translated to the API in [carla.LaneType](python_api.md#carla.LaneType). The lane markings surrounding each lane can also be accessed using [carla.LaneMarking](python_api.md#carla.LaneMarking).
A lane marking is defined by: a [carla.LaneMarkingType](python_api.md#carla.LaneMarkingType) and a [carla.LaneMarkingColor](python_api.md#carla.LaneMarkingColor), a __width__ to state thickness and a variable stating lane changing permissions with [carla.LaneChange](python_api.md#carla.LaneChange).
Both lanes and lane markings are accessed by waypoints to locate a vehicle within the road and acknowledge traffic permissions.
```py
# Get the lane type where the waypoint is.
lane_type = waypoint.lane_type
# Get the type of lane marking on the left.
left_lanemarking_type = waypoint.left_lane_marking.type
# Get available lane changes for this waypoint.
lane_change = waypoint.lane_change
```
<h4>Junctions</h4>
To ease managing junctions with OpenDRIVE, the [carla.Junction](python_api.md#carla.Junction) class provides a bounding box to state whether lanes or vehicles are inside of it.
There is also a method to get a pair of waypoints per lane determining the starting and ending point inside the junction boundaries for each lane:
```py
waypoints_junc = my_junction.get_waypoints()
```
<h4>Waypoints</h4>
[carla.Waypoint](python_api.md#carla.Waypoint) objects are 3D-directed points that are prepared to mediate between the world and the OpenDRIVE definition of the road.
Each waypoint contains a [carla.Transform](python_api.md#carla.Transform) summarizing a point on the map inside a lane and the orientation of the lane. The variables `road_id`, `section_id`, `lane_id` and `s` translate this transform to the OpenDRIVE road and are used to create the waypoint's __identifier__.
!!! Note
Due to granularity, waypoints closer than __2cm within the same road__ will share the same `id`.
Besides that, each waypoint also contains some information regarding the __lane__ it is contained in and its left and right __lane markings__ and a boolean to determine when it is inside a junction:
```py
inside_junction = waypoint.is_junction
width = waypoint.lane_width
# Get right lane marking color
right_lm_color = waypoint.right_lane_marking.color
```
Finally, regarding navigation, waypoints have a set of methods to ease the flow along the road:
`next(d)` creates a new waypoint at an approximate distance `d` following the direction of the current lane, while `previous(d)` does so in the opposite direction.
`next_until_lane_end(d)` and `previous_until_lane_start(d)` use said distance to return a list of equally distant waypoints contained in the lane. All of these methods follow traffic rules to determine only places where the vehicle can go:
```py
# Disable physics, in this example the vehicle is teleported.
vehicle.set_simulate_physics(False)
while True:
    # Find next waypoint 2 meters ahead.
    waypoint = random.choice(waypoint.next(2.0))
    # Teleport the vehicle.
    vehicle.set_transform(waypoint.transform)
```
!!! Note
These methods return a list. If there is more than one possible location (for example at junctions where the lane diverges), the returned list will contain a waypoint for each of them.
Waypoints can also find their equivalent at the center of an adjacent lane (if said lane exists) using `get_right_lane()` and `get_left_lane()`. This is useful to find the next waypoint on a neighbouring lane and then perform a lane change.
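A minimal sketch of that idea, reusing the physics-disabled, teleporting vehicle from the previous example:
```py
# Jump to the adjacent driving lane on the right, if there is one.
right_wp = waypoint.get_right_lane()
if right_wp is not None and right_wp.lane_type == carla.LaneType.Driving:
    vehicle.set_transform(right_wp.transform)
```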
---------------
##Map navigation
The instance of the map is provided by the world. Once it is retrieved, it provides access to different methods that are useful to create routes and make vehicles roam around the city and reach goal destinations:
```py
map = world.get_map()
```
* __Get recommended spawn points for vehicles__: assigned by developers, with no guarantee that the spot is free:
```py
spawn_points = world.get_map().get_spawn_points()
```
* __Get a waypoint__: returns the closest waypoint for a specific location in the simulation or the one belonging to a certain `road_id`, `lane_id` and `s` in OpenDRIVE:
```py
# Nearest waypoint on the center of a Driving or Sidewalk lane.
waypoint01 = map.get_waypoint(vehicle.get_location(),project_to_road=True, lane_type=(carla.LaneType.Driving | carla.LaneType.Sidewalk))
# Nearest waypoint, but specifying OpenDRIVE parameters.
waypoint02 = map.get_waypoint_xodr(road_id,lane_id,s)
```
* __Generate a collection of waypoints__: to visualize the city lanes. Creates waypoints all over the map, for every road and lane, at an approximate distance between them:
```py
waypoint_list = map.generate_waypoints(2.0)
```
* __Generate road topology__: useful for routing. Returns a list of pairs (tuples) of waypoints. For each pair, the first element connects with the second one and both define the starting and ending point of each lane in the map:
```py
waypoint_tuple_list = map.get_topology()
```
* __Simulation point to world coordinates__: transforms a certain location to geographic coordinates, with latitude and longitude, defined with [carla.GeoLocation](python_api.md#carla.GeoLocation):
```py
my_geolocation = map.transform_to_geolocation(vehicle.get_location())
```
* __Road information__: converts the road information of the map to OpenDRIVE format and returns it as a string:
```py
info_map = map.to_opendrive()
```
---------------
That is a wrap on maps and navigation around the cities in CARLA.
The next step should be learning more about sensors, the different types and the data they retrieve. Keep reading to learn more or visit the forum to post any doubts or suggestions that have come to mind during this reading:
<div style="text-align: center">
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="forum.carla.org" target="_blank" class="btn btn-neutral" title="CARLA forum">
CARLA forum</a>
</p>
</div>
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="../core_actors" target="_blank" class="btn btn-neutral" title="4th. Sensors and data">
4th. Sensors and data</a>
</p>
</div>
</div>

Docs/core_world.md Normal file

@@ -0,0 +1,191 @@
<h1>1st. World and client</h1>
This is bound to be one of the first topics to learn about when entering CARLA. The client and the world are two of the fundamentals of CARLA, a necessary abstraction to operate the simulation and its actors.
This tutorial goes from defining the basics and creation of these elements to describing their possibilities, without going into more complex matters. If any doubt or issue arises during the reading, the [CARLA forum](https://forum.carla.org/) is there to solve it.
* [__The client__](#the-client):
* Client creation
* World connection
* Other client utilities
* [__The world__](#the-world):
* World life cycle
* Get() from the world
* Weather
* World snapshots
* Settings
---------------
##The client
Clients are one of the main elements in the CARLA architecture. Using these, the user can connect to the server, retrieve information from the simulation and command changes. That is done via scripts where the client identifies itself and connects to the world to then operate with the simulation.
Besides that, the client is also able to access other CARLA modules and features, and to apply command batches. Command batches are relevant at this point in the documentation, as they become useful as soon as spawning actors is required. The rest, though, are more advanced features of CARLA and will not be covered yet in this section.
The __carla.Client__ class is explained thoroughly in the [PythonAPI reference](python_api.md#carla.Client).
<h4>Client creation</h4>
Two things are needed: the IP address identifying the server and two TCP ports the client will use to communicate with it. There is an optional third parameter, an `int` setting the number of working threads, which defaults to all (`0`). [This code recipe](python_cookbook.md#parse-client-creation-arguments) shows how to parse these as arguments when running the script.
```py
client = carla.Client('localhost', 2000)
```
By default, CARLA uses localhost and port 2000 to connect, but these can be changed at will. The second port will always be `n+1` (2001 in this case).
Once the client is created, set its time-out. This limits all networking operations so that they do not block the client forever; if the connection fails, an error is returned instead.
```py
client.set_timeout(10.0) # seconds
```
It is possible to have many clients connected, as it is common to have more than one script running at a time. Just take note that working in a multiclient scheme with more advanced CARLA features such as the traffic manager or the synchronous mode is bound to make communication more complex.
!!! Note
Client and server have different `libcarla` modules. If the versions differ due to different origin commits, issues may arise. This is not normally the case, but it can be checked using the `get_client_version()` and `get_server_version()` methods.
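A quick sanity-check sketch:
```py
# Both strings should match; a mismatch hints at builds from different commits.
print(client.get_client_version())
print(client.get_server_version())
```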
<h4>World connection</h4>
With the simulation running, a configured client can easily connect and retrieve the current world:
```py
world = client.get_world()
```
Using `reload_world()` the client creates a new instance of the world with the same map. Kind of a reboot method.
The client can also get a list of available maps to change the current one. This will destroy the current world and create a new one.
```py
print(client.get_available_maps())
...
world = client.load_world('Town01')
```
Every world object has an `id`, or episode. Every time the client calls `load_world()` or `reload_world()`, the previous world is destroyed and a new one is created from scratch without rebooting Unreal Engine, so the episode changes.
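A sketch showing how the episode changes when the world is reloaded:
```py
print(world.id)                # current episode
world = client.reload_world()  # same map, new world
print(world.id)                # a different episode id
```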
<h4>Other client utilities</h4>
The main purpose of the client object is to get or change the world, and many times it is no longer used after that. However, this object is in charge of two other main tasks: accessing advanced CARLA features and applying command batches.
The list of features that are accessed from the client object are:
* __Traffic manager:__ this module is in charge of every vehicle set to autopilot to recreate an urban environment.
* __[Recorder](recorder_and_playback.md):__ allows reenacting a previous simulation using the information stored in the snapshots summarizing the simulation state per frame.
As far as batches are concerned, the latest sections in the Python API describe the [available commands](python_api.md#command.ApplyAngularVelocity). These are common functions that have been prepared to be executed in batches or lots so that they are applied during the same step of the simulation.
The following example would destroy all the vehicles contained in `vehicles_list` at once:
```py
client.apply_batch([carla.command.DestroyActor(x) for x in vehicles_list])
```
The method `apply_batch_sync()` is only available when running CARLA in synchronous mode and returns a __command.Response__ per command applied.
---------------
##The world
This class acts as the major ruler of the simulation and its instance should be retrieved by the client. It does not contain the model of the world itself (that is part of the [Map](core_map.md) class), but rather is an anchor for the simulation. Most of the information and general settings can be accessed from this class, for example:
* Actors and the spectator.
* Blueprint library.
* Map.
* Settings.
* Snapshots.
In fact, some of the most important methods of this class are the _getters_. They summarize all the information the world has access to. More explicit information regarding the World class can be found in the [Python API reference](python_api.md#carla.World).
<h4>Actors</h4>
The world has different methods related to actors that allow it to do the following (see the sketch after the list):
* Spawn actors (but not destroy them).
* Get every actor on scene or find one in particular.
* Access the blueprint library used for spawning these.
* Access the spectator actor that manages the simulation's point of view.
* Retrieve a random location suitable for spawning an actor.
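A brief sketch of some of these getters:
```py
blueprint_library = world.get_blueprint_library()   # blueprints used for spawning
spectator = world.get_spectator()                   # the simulation's point of view
vehicles = world.get_actors().filter('vehicle.*')   # every vehicle currently on scene
```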
Spawning will be explained in the second step of this guide, [2nd. Actors and blueprints](core_actors.md), as it requires some understanding of the blueprint library, attributes, etc. Keep reading or visit the [Python API reference](python_api.md) to learn more about this matter.
<h4>Weather</h4>
The weather is not a class on its own, but a world setting. However, there is a helper class named [carla.WeatherParameters](python_api.md#carla.WeatherParameters) that allows defining a series of visual characteristics such as sun orientation, cloudiness, precipitation, wind and much more. The changes can then be applied using the world object, as the following example does:
```py
weather = carla.WeatherParameters(
cloudiness=80.0,
precipitation=30.0,
sun_altitude_angle=70.0)
world.set_weather(weather)
print(world.get_weather())
```
For convenience, there are a series of weather presets that can be directly applied to the world. These are listed in the [Python API reference](python_api.md#carla.WeatherParameters) with all the information regarding the class and are quite straightforward to use:
```py
world.set_weather(carla.WeatherParameters.WetCloudySunset)
```
!!! Note
Changes in the weather do not affect physics. They are only visuals that can be captured by the camera sensors.
<h4>Debugging</h4>
World objects have a public attribute that defines a [carla.DebugHelper](python_api.md#carla.DebugHelper) object. It allows for different shapes to be drawn during the simulation in order to trace the events happening. The following example would access the attribute to draw a red box at an actor's location and rotation.
```py
debug = world.debug
debug.draw_box(carla.BoundingBox(actor_snapshot.get_transform().location,carla.Vector3D(0.5,0.5,2)),actor_snapshot.get_transform().rotation, 0.05, carla.Color(255,0,0,0),0)
```
This example is extended in this [code recipe](python_cookbook.md#debug-bounding-box-recipe) to draw boxes for every actor in a world snapshot. Take a look at it and at the Python API reference to learn more about this.
<h4>World snapshots</h4>
A world snapshot contains the state of every actor in the simulation at a single frame: a sort of still image of the world with a time reference. This feature ensures that all the information contained comes from the same simulation step, without the need to use synchronous mode.
```py
# Retrieve a snapshot of the world at current frame.
world_snapshot = world.get_snapshot()
```
The [carla.WorldSnapshot](python_api.md#carla.WorldSnapshot) contains a [carla.Timestamp](python_api.md#carla.Timestamp) and a list of [carla.ActorSnapshot](python_api.md#carla.ActorSnapshot). Actor snapshots can be searched for using the `id` of an actor, and the other way round: an actor can be retrieved using the `id` stored in its snapshot.
```py
timestamp = world_snapshot.timestamp # Get the time reference
for actor_snapshot in world_snapshot: # Get the actor and the snapshot information
    actual_actor = world.get_actor(actor_snapshot.id)
    actor_snapshot.get_transform()
    actor_snapshot.get_velocity()
    actor_snapshot.get_angular_velocity()
    actor_snapshot.get_acceleration()
actor_snapshot = world_snapshot.find(actual_actor.id) # Get an actor's snapshot
```
<h4>World settings</h4>
The world also has access to some advanced configurations for the simulation that determine rendering conditions, steps in the simulation time, and synchrony between clients and server. These are advanced concepts that are best left untouched by newcomers.
For the time being, let's just say that by default CARLA runs with its best quality settings, a variable time-step and asynchronously. The helper class is [carla.WorldSettings](python_api.md#carla.WorldSettings). To dive further into these matters, take a look at the __Advanced steps__ section of the documentation and read about [configuring the simulation](configuring_the_simulation.md).
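A short sketch of how these settings are read and applied back; the field values here are just the defaults, shown for illustration:
```py
settings = world.get_settings()
settings.synchronous_mode = False    # stay asynchronous (the default)
settings.no_rendering_mode = False   # keep rendering enabled (the default)
world.apply_settings(settings)
```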
__THIS IS TO BE DELETED AND MOVED TO SYNC AND TIME-STEP__
```py
# Wait for the next tick and retrieve the snapshot of the tick.
world_snapshot = world.wait_for_tick()
# Register a callback to get called every time we receive a new snapshot.
world.on_tick(lambda world_snapshot: do_something(world_snapshot))
```
---------------
That is a wrap on the world and client objects, the very first steps in CARLA.
The next step should be learning more about actors and blueprints to give life to the simulation. Keep reading to learn more or visit the forum to post any doubts or suggestions that have come to mind during this reading:
<div style="text-align: center">
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="forum.carla.org" target="_blank" class="btn btn-neutral" title="CARLA forum">
CARLA forum</a>
</p>
</div>
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="../core_actors" target="_blank" class="btn btn-neutral" title="2nd. Actors and blueprints">
2nd. Actors and blueprints</a>
</p>
</div>
</div>


@@ -7,7 +7,7 @@
<h3>Quick start</h3>
* [Python API tutorial](python_api_tutorial.md)
* [Python API tutorial](core_concepts.md)
* [Configuring the simulation](configuring_the_simulation.md)
* [Cameras and sensors](cameras_and_sensors.md)
* [F.A.Q.](faq.md)


@@ -216,6 +216,44 @@ path it was following and the speed at each waypoint.
![debug_trail_recipe](img/debug_trail_recipe.png)
## Parse client creation arguments
This recipe is used in every script provided in `PythonAPI/Examples` to parse the client creation arguments when running the script.
Focused on:<br>
[`carla.Client`](../python_api/#carla.Client)<br>
Used:<br>
[`carla.Client`](../python_api/#carla.Client)
```py
argparser = argparse.ArgumentParser(
description=__doc__)
argparser.add_argument(
'--host',
metavar='H',
default='127.0.0.1',
help='IP of the host server (default: 127.0.0.1)')
argparser.add_argument(
'-p', '--port',
metavar='P',
default=2000,
type=int,
help='TCP port to listen to (default: 2000)')
argparser.add_argument(
'-s', '--speed',
metavar='FACTOR',
default=1.0,
type=float,
help='rate at which the weather changes (default: 1.0)')
args = argparser.parse_args()
speed_factor = args.speed
update_freq = 0.1 / speed_factor
client = carla.Client(args.host, args.port)
```
## Traffic lights Recipe
This recipe changes the traffic light that affects the vehicle from red to green.
@@ -239,3 +277,78 @@ if vehicle_actor.is_at_traffic_light():
```
![tl_recipe](img/tl_recipe.gif)
## Walker batch recipe
```py
# 0. Choose a blueprint for the walkers
world = client.get_world()
blueprintsWalkers = world.get_blueprint_library().filter("walker.pedestrian.*")
walker_bp = random.choice(blueprintsWalkers)
# 1. Take all the random locations to spawn
spawn_points = []
walkers_list = []   # ids of spawned walkers and their controllers
all_id = []
for i in range(50):
    spawn_point = carla.Transform()
    spawn_point.location = world.get_random_location_from_navigation()
    if spawn_point.location is not None:
        spawn_points.append(spawn_point)
# 2. Build the batch of commands to spawn the pedestrians
batch = []
for spawn_point in spawn_points:
    walker_bp = random.choice(blueprintsWalkers)
    batch.append(carla.command.SpawnActor(walker_bp, spawn_point))
# 2.1 apply the batch
results = client.apply_batch_sync(batch, True)
for i in range(len(results)):
    if results[i].error:
        logging.error(results[i].error)
    else:
        walkers_list.append({"id": results[i].actor_id})
# 3. Spawn walker AI controllers for each walker
batch = []
walker_controller_bp = world.get_blueprint_library().find('controller.ai.walker')
for i in range(len(walkers_list)):
    batch.append(carla.command.SpawnActor(walker_controller_bp, carla.Transform(), walkers_list[i]["id"]))
# 3.1 apply the batch
results = client.apply_batch_sync(batch, True)
for i in range(len(results)):
    if results[i].error:
        logging.error(results[i].error)
    else:
        walkers_list[i]["con"] = results[i].actor_id
# 4. Put together the walker and controller ids
for i in range(len(walkers_list)):
all_id.append(walkers_list[i]["con"])
all_id.append(walkers_list[i]["id"])
all_actors = world.get_actors(all_id)
# wait for a tick to ensure client receives the last transform of the walkers we have just created
world.wait_for_tick()
# 5. initialize each controller and set target to walk to (list is [controller, actor, controller, actor ...])
for i in range(0, len(all_actors), 2):
    # start walker
    all_actors[i].start()
    # set walk to random point
    all_actors[i].go_to_location(world.get_random_location_from_navigation())
    # random max speed
    all_actors[i].set_max_speed(1 + random.random())    # max speed between 1 and 2 (default is 1.4 m/s)
```
To **destroy the pedestrians**, stop them from the navigation, and then destroy the objects (actor and controller):
```py
# stop pedestrians (list is [controller, actor, controller, actor ...])
for i in range(0, len(all_id), 2):
    all_actors[i].stop()
# destroy pedestrian (actor and controller)
client.apply_batch([carla.command.DestroyActor(x) for x in all_id])
```


@@ -1,5 +1,32 @@
# Recording and Replaying system
CARLA now includes a recording and replaying API that allows recording a simulation in a file and
replaying it later. The file is written on the server side only, and it includes which
**actors are created or destroyed** in the simulation, the **state of the traffic lights**
and the **position** and **orientation** of all vehicles and pedestrians.
To start recording we only need to supply a file name:
```py
client.start_recorder("recording01.log")
```
To stop the recording, we need to call:
```py
client.stop_recorder()
```
At any point we can replay a simulation, specifying the filename:
```py
client.replay_file("recording01.log")
```
The replayer replicates the actor and traffic light information of the recording each frame.
For more details, see the [Recorder and Playback system](recorder_and_playback.md).


@@ -7,8 +7,8 @@ all classes and methods available can be found at
!!! note
**This document assumes the user is familiar with the Python API**. <br>
The user should read the Python API tutorial before reading this document.
[Python API tutorial](python_api_tutorial.md).
The user should read the first steps tutorial before reading this document.
[Core concepts](core_concepts.md).
### Walker skeleton structure


@@ -18,36 +18,40 @@ nav:
- 'Running in a Docker': 'carla_docker.md'
- 'F.A.Q.': 'faq.md'
- First steps:
- 'Python API tutorial': 'python_api_tutorial.md'
- 'Core concepts': 'core_concepts.md'
- '1st. World and client': 'core_world.md'
- '2nd. Actors and blueprints': 'core_actors.md'
- '3rd. Maps and navigation': 'core_map.md'
- '4th. Sensors and data': 'cameras_and_sensors.md'
- Advanced steps:
- 'Recorder': 'recorder_and_playback.md'
- 'Configuring the simulation': 'configuring_the_simulation.md'
- 'Cameras and sensors': 'cameras_and_sensors.md'
- 'Python Cookbook': 'python_cookbook.md'
- References:
- 'Python API reference': 'python_api.md'
- 'Code recipes': 'python_cookbook.md'
- 'Blueprint Library': 'bp_library.md'
- 'C++ reference' : 'cpp_reference.md'
- 'Recorder binary file format': 'recorder_binary_file_format.md'
- How to... (general):
- 'Add a new sensor': 'dev/how_to_add_a_new_sensor.md'
- "Add friction triggers": "how_to_add_friction_triggers.md"
- "Control vehicle physics": "how_to_control_vehicle_physics.md"
- "Control walker skeletons": "walker_bone_control.md"
- "Creating standalone asset packages for distribution": 'asset_packages_for_dist.md'
- "Generate pedestrian navigation": 'how_to_generate_pedestrians_navigation.md'
- "How to record and replay": 'recorder_and_playback.md'
- 'Add friction triggers': "how_to_add_friction_triggers.md"
- 'Control vehicle physics': "how_to_control_vehicle_physics.md"
- 'Control walker skeletons': "walker_bone_control.md"
- 'Creating standalone asset packages for distribution': 'asset_packages_for_dist.md'
- 'Generate pedestrian navigation': 'how_to_generate_pedestrians_navigation.md'
- "Link Epic's Automotive Materials": 'epic_automotive_materials.md'
- 'Map customization': 'dev/map_customization.md'
- "Recorder binary file format": 'recorder_binary_file_format.md'
- 'Running without display and selecting GPUs': 'carla_headless.md'
- How to... (content):
- 'Add assets': 'how_to_add_assets.md'
- "Create and import a new map": 'how_to_make_a_new_map.md'
- 'Create and import a new map': 'how_to_make_a_new_map.md'
- 'Model vehicles': 'how_to_model_vehicles.md'
- Contributing:
- 'Contribution guidelines': 'CONTRIBUTING.md'
- 'Coding standard': 'coding_standard.md'
- 'Documentation standard': 'doc_standard.md'
- "Make a release": 'dev/how_to_make_a_release.md'
- "Upgrade the content": 'dev/how_to_upgrade_content.md'
- 'Make a release': 'dev/how_to_make_a_release.md'
- 'Upgrade the content': 'dev/how_to_upgrade_content.md'
- 'Code of conduct': 'CODE_OF_CONDUCT.md'
markdown_extensions: