Sergi e/adv recorder (#2496)

* Recorder first draft

* Recorder second draft

* Prepare to rebase

* Command-line options are back

* Now are back

* New draft for recorder and command line options added to quickstart installation

* Final draft to merge
sergi.e 2020-02-26 16:51:42 +01:00 committed by GitHub
parent 83acc8ec24
commit 0822c30595
6 changed files with 192 additions and 254 deletions


@ -160,16 +160,8 @@ actor_snapshot = world_snapshot.find(actual_actor.id) #Get an actor's snapshot
<h4>World settings</h4>
The world also has access to some advanced configurations for the simulation that determine rendering conditions, steps in simulation time, and synchrony between clients and server. These are advanced concepts best left untouched by newcomers.
For the time being, let's say that CARLA by default runs with its best quality, a variable time-step, and asynchronously. The helper class is [carla.WorldSettings](python_api.md#carla.WorldSettings). To dive further into these matters, take a look at the __Advanced steps__ section of the documentation and read about [synchrony and time-step](simulation_time_and_synchrony.md) or [rendering options](rendering_options.md).
---------------
That is a wrap on the world and client objects, the very first steps in CARLA.
The next step should be learning more about actors and blueprints to give life to the simulation. Keep reading to learn more or visit the forum to post any doubts or suggestions that have come to mind during this reading:


@ -3,6 +3,7 @@
* [Requirements](#requirements)
* [Downloading CARLA](#downloading-carla)
* [Running CARLA](#running-carla)
* Command-line options
* [Updating CARLA](#updating-carla)
* [Summary](#summary)
---------------
@ -62,6 +63,32 @@ A window will open, containing a view over the city. This is the "spectator" vie
!!! note
    If the firewall or any other application is blocking the TCP ports needed, these can be changed manually by adding the argument `-carla-port=N` to the previous command, `N` being the desired port. The second port will be automatically set to `N+1`.
<h4>Command-line options</h4>
There are some configuration options available when launching CARLA:
* `-carla-rpc-port=N` Listen for client connections at port N, streaming port is set to N+1 by default.
* `-carla-streaming-port=N` Specify the port for sensor data streaming, use 0 to get a random unused port.
* `-quality-level={Low,Epic}` Change graphics quality level.
* [Full list of UE4 command-line arguments][ue4clilink] (note that many of these won't work in the release version).
[ue4clilink]: https://docs.unrealengine.com/en-US/Programming/Basics/CommandLineArguments
```sh
> ./CarlaUE4.sh -carla-rpc-port=3000
```
However, some of these may not be available (especially those provided by UE). For this reason, the script `PythonAPI/util/config.py` provides some more configuration options:
```sh
> ./config.py --no-rendering # Disable rendering
> ./config.py --map Town05 # Change map
> ./config.py --weather ClearNoon # Change weather
```
To check all the available configurations, run the following command:
```sh
> ./config.py --help
```
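Once the server is launched on a custom port, the client side must connect to the same RPC port. The following is a minimal sketch, not part of any provided script; the `streaming_port` helper simply mirrors the documented default of `N+1`, and the connection code assumes a simulator already running with `-carla-rpc-port=3000`:

```py
def streaming_port(rpc_port):
    """Sensor streaming port CARLA picks by default: rpc_port + 1."""
    return rpc_port + 1


def connect(host="localhost", rpc_port=3000, timeout=10.0):
    """Connect a client to a server launched with -carla-rpc-port=<rpc_port>."""
    import carla  # requires the CARLA Python package and a running simulator
    client = carla.Client(host, rpc_port)
    client.set_timeout(timeout)  # seconds to wait for the server before failing
    return client
```

For a server launched with `-carla-rpc-port=3000`, sensors would stream on port `3001` unless `-carla-streaming-port` overrides it.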
--------------- ---------------
##Updating CARLA


@ -1,120 +1,105 @@
<h1>Recorder</h1>
This is one of the advanced CARLA features. It allows recording and reenacting a simulation, providing a complete log of the events that happened and a few queries to ease tracing and studying them.
To learn about the generated file and its specifics, take a look at this [reference](recorder_binary_file_format.md).
* [__Recording__](#recording)
* [__Simulation playback__](#simulation-playback):
* Setting a time factor
* [__Recorded file__](#recorded-file)
* [__Queries__](#queries):
* Collisions
* Blocked actors
* [__Sample Python scripts__](#sample-python-scripts)
---------------
##Recording
All the data is written in a binary file on the server side only. However, the recorder is managed using the [carla.Client](python_api.md#carla.Client).
To reenact the simulation, actors will be updated on every frame according to the data contained in the recorded file. Actors that appear in the simulation will be either moved or re-spawned to emulate the recording. Those that do not appear in the recording will continue their way as if nothing happened.
!!! Important
    By the end of the playback, vehicles will be set to autopilot, but __pedestrians will stop at their current location__.
The information registered by the recorder includes:
* __Actors:__ Creation and destruction.
* __Traffic lights:__ State changes.
* __Vehicles and pedestrians:__ Position and orientation.
To start recording, only a file name is needed. Using `\`, `/` or `:` characters in the file name will define it as an absolute path. If no path is specified, the file will be saved in `CarlaUE4/Saved`.
```py
client.start_recorder("/home/carla/recording01.log")
```
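The path rule above can be sketched as a small helper (hypothetical, for illustration only, not part of the CARLA API): any of `\`, `/` or `:` in the name makes the recorder treat it as an absolute path; otherwise the file lands in `CarlaUE4/Saved`.

```py
def is_absolute_recording_path(name):
    """Mirror the recorder's rule: any of \\, / or : marks an absolute path."""
    return any(sep in name for sep in ("\\", "/", ":"))
```

For instance, `/home/carla/recording01.log` and `c:\records\recording01.log` are treated as absolute paths, while `recording01.log` ends up in `CarlaUE4/Saved`.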
To stop the recording, the call is also straightforward:
```py
client.stop_recorder()
```
!!! Note
    As an estimate, 1h of recording with 50 traffic lights and 100 vehicles takes around 200MB in size.
---------------
##Simulation playback
A playback can be started at any point during a simulation by simply specifying the file name.
```py
client.replay_file("recording01.log")
```
Additionally, this method allows for some parameters to specify which segment of the recording is reenacted:
```py
client.replay_file("recording01.log", start, duration, camera)
```
| Parameters | Description | Notes |
| ---------- | ----------------------------------------------------- | ----- |
| `start` | Recording time in seconds to start the simulation at. | If positive, time will be considered from the beginning of the recording. <br>If negative, it will be considered from the end. |
| `duration` | Seconds to playback. 0 is all the recording. | By the end of the playback, vehicles will be set to autopilot and pedestrians will stop. |
| `camera` | ID of the actor that the camera will focus on. | By default the spectator will move freely. |
!!! Note
    These parameters allow recalling an event and then letting the simulation run free, as vehicles will be set to autopilot when the playback ends.
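The sign convention for `start` can be made concrete with a small helper (hypothetical, for illustration only): it resolves the absolute second of the recording at which playback would begin.

```py
def resolve_start(start, recording_length):
    """Absolute second playback begins at, following replay_file's convention.

    Positive values count from the beginning of the recording;
    negative values count from the end.
    """
    if start >= 0:
        return min(start, recording_length)
    return max(recording_length + start, 0.0)
```

For a 60-second recording, a `start` of `10` begins playback at second 10, while `-10` replays only the last 10 seconds (beginning at second 50).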
<h4>Setting a time factor</h4>
The time factor determines the playback speed. It can be changed at any moment without stopping the playback, using the following API call:
```py
client.set_replayer_time_factor(2.0)
```
| Parameters | Default | Fast motion | Slow motion |
| ------------- | ------- | ----------- | ----------- |
| `time_factor` | __1.0__ | __>1.0__ | __<1.0__ |
!!! Important
    Above 2.0, position interpolation is disabled; positions are simply updated. Pedestrians' animations are not affected by the time factor.
For instance, with a time factor of __20x__, traffic flow is easily appreciated:
![flow](img/RecorderFlow2.gif)
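The interpolation threshold described above can be captured in a tiny predicate (hypothetical, mirroring the documented behavior only, not a CARLA API call):

```py
def positions_interpolated(time_factor):
    """True while the replayer interpolates positions between frames.

    Per the docs, interpolation is disabled for time factors above 2.0;
    positions are then simply updated each recorded frame.
    """
    return time_factor <= 2.0
```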
---------------
##Recorded file
The details of a recording can be retrieved using a simple API call. By default, it only retrieves those frames where an event was registered, but setting the parameter `show_all` to `True` would return all the information for every frame. The specifics on how the data is stored are detailed in the [recorder's reference](recorder_binary_file_format.md).
The following example would only retrieve remarkable events:
```py
client.show_recorder_file_info("recording01.log")
```
* __Opening information:__ map, date and time when the simulation was recorded.
* __Frame information:__ any event that could happen (actor spawning/destruction, collisions...). The output contains the actor's ID and some additional information.
* __Closing information:__ a summary of the number of frames and the total time recorded.
The output result should be similar to this one:
```
@ -126,99 +111,52 @@ Frame 1 at 0 seconds
Create 2190: spectator (0) at (-260, -200, 382.001)
Create 2191: traffic.traffic_light (3) at (4255, 10020, 0)
Create 2192: traffic.traffic_light (3) at (4025, 7860, 0)
...
Create 2258: traffic.speed_limit.90 (0) at (21651.7, -1347.59, 15)
Create 2259: traffic.speed_limit.90 (0) at (5357, 21457.1, 15)
Frame 2 at 0.0254253 seconds
Create 2276: vehicle.mini.cooperst (1) at (4347.63, -8409.51, 120)
number_of_wheels = 4
object_type =
color = 255,241,0
role_name = autopilot
...
Frame 2350 at 60.2805 seconds
Destroy 2276
Frame 2351 at 60.3057 seconds
Destroy 2277
...
Frames: 2354
Duration: 60.3753 seconds
```
---------------
##Queries
<h4>Collisions</h4>
In order to record collisions, vehicles must have a [collision detector](cameras_and_sensors.md) attached. The collisions registered by the recorder can be queried using arguments that filter by the type of the actors involved. For example, `h` identifies actors whose `role_name = hero`, usually assigned to the vehicle managed by the user.
Currently, the actor types that can be used in the query are:
* __h__ = Hero
* __v__ = Vehicle
* __w__ = Walker
* __t__ = Traffic light
* __o__ = Other
* __a__ = Any
!!! Note
    The `manual_control.py` script automatically sets the vehicle's `role_name` to `hero`, besides providing control over it.
The API call to query collisions requires two of the previous flags to filter the collisions. The following example would show collisions registered between vehicles and any other object:
```py
client.show_recorder_collisions("recording01.log", "v", "a")
```
The output summarizes the time of the collision and the type, ID, and description of the actors involved. It should be similar to this one:
```
Version: 1
@ -233,45 +171,42 @@ Frames: 790
Duration: 46 seconds
```
!!! Important
    As it is the `hero` or `ego` vehicle who registers the collision, this will always be `Actor 1`.
To understand how a collision happened, it could be a good idea to replay it just moments before the event:
```py
client.replay_file("col2.log", 13, 0, 122)
```
In this case, the playback showed this:
![collision](img/collision1.gif)
<h4>Blocked actors</h4>
This query is used to detect vehicles that were stuck during the recording. An actor is considered blocked if it does not move a minimum distance in a certain time. This definition is made by the user during the query:
```py
client.show_recorder_actors_blocked("recording01.log", min_time, min_distance)
```
| Parameters | Description | Default |
| -------------- | ----------------------------------------------------------------------- | -------- |
| `min_time` | Minimum seconds to move `min_distance` before being considered blocked. | 30 secs. |
| `min_distance` | Minimum centimeters to move to not be considered blocked. | 10 cm. |
!!! Note
    Take into account that vehicles are sometimes stopped at traffic lights for longer than expected.
For the sake of comprehension, let's run an example looking for vehicles that moved less than 1 meter during 60 seconds:
```py
client.show_recorder_actors_blocked("col3.log", 60, 100)
```
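Since `min_distance` is expressed in centimeters, a hypothetical wrapper (for illustration only, not part of the CARLA API) makes the intent of the example explicit:

```py
def blocked_query_args(min_time_s, min_distance_m):
    """Build (min_time, min_distance) arguments for
    show_recorder_actors_blocked: seconds stay as-is, meters are
    converted to the centimeters the API expects."""
    return min_time_s, min_distance_m * 100.0
```

With it, "stopped for 60 seconds while moving less than 1 meter" becomes `blocked_query_args(60, 1.0)`, which yields the `(60, 100)` used in the call above.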
The output is sorted by __duration__, which states how long it took the actor to stop being "blocked" and move the `min_distance`:
```
Version: 1
@ -280,47 +215,14 @@ Date: 02/19/19 15:45:01
Time Id Actor Duration
36 173 vehicle.nissan.patrol 336
75 214 vehicle.chevrolet.impala 295
302 143 vehicle.bmw.grandtourer 67
Frames: 6985
Duration: 374 seconds
```
In this example, the vehicle `173` was stopped for `336` seconds at time `36` seconds. To check the cause, it would be useful to see how it arrived at that situation by replaying a few seconds before second `36`:
```py
client.replay_file("col3.log", 34, 0, 173)
```
@ -328,53 +230,59 @@ client.replay_file("col3.log", 34, 0, 173)
![accident](img/accident.gif)
---------------
##Sample Python scripts
Some of the provided scripts in `PythonAPI/examples` facilitate the use of the recorder:
* __start_recording.py__: starts the recording. Optionally, actors can be spawned at the beginning, and the duration of the recording can be set.

| Parameters | Description |
| -------------- | ------------ |
| `-f` | Filename. |
| `-n` <small>(optional)</small> | Vehicles to spawn. Default is 10. |
| `-t` <small>(optional)</small> | Duration of the recording. |
* __start_replaying.py__: starts the playback of a recording. Starting time, duration, and the actor to follow can be set.

| Parameters | Description |
| -------------- | ------------ |
| `-f` | Filename. |
| `-s` <small>(optional)</small> | Starting time. Default is 0. |
| `-d` <small>(optional)</small> | Duration. Default is all. |
| `-c` <small>(optional)</small> | ID of the actor to follow. |
* __show_recorder_file_info.py__: shows all the information in the recording file. It has two modes of detail: by default, it only shows frames where an event is recorded; the second shows all information for all frames.

| Parameters | Description |
| -------------- | ------------ |
| `-f` | Filename. |
| `-s` <small>(optional)</small> | Flag to show all details. |
* __show_recorder_collisions.py__: shows the collisions recorded between two actors of types __A__ and __B__, defined using a series of flags. For example, `-t vv` would show all collisions between vehicles.

| Parameters | Description |
| ----------- | ------------- |
| `-f` | Filename. |
| `-t` | Flags of the actors involved: <br> `h` = hero <br> `v` = vehicle <br> `w` = walker <br> `t` = traffic light <br> `o` = other <br> `a` = any |
* __show_recorder_actors_blocked.py__: shows a register of vehicles considered blocked. Actors are considered blocked when they do not move a minimum distance in a certain time.

| Parameters | Description |
| --------------- | ------------ |
| `-f` | Filename. |
| `-t` <small>(optional)</small> | Time within which `-d` must be moved before being considered blocked. |
| `-d` <small>(optional)</small> | Distance to move to not be considered blocked. |
---------------
Now it is time to experiment for a while. Use the recorder to play back a simulation, trace back events, and make changes to see new outcomes. Feel free to share your thoughts about this matter in the CARLA forum:
<div class="build-buttons">
<!-- Latest release button -->
<p>
<a href="https://forum.carla.org/" target="_blank" class="btn btn-neutral" title="Go to the CARLA forum">
CARLA forum</a>
</p>
</div>


@ -13,7 +13,6 @@ the most important ones.
* [__Running off-screen using a preferred GPU__](#running-off-screen-using-a-preferred-gpu):
* Docker: recommended approach
* Deprecated: emulate the virtual display
!!! Important !!! Important


@ -1,4 +1,4 @@
<h1>Synchrony and time-step</h1>
This section deals with two concepts that are fundamental to fully comprehend CARLA and gain control over it to achieve the desired results. There are different configurations that define how time goes by in the simulation and how the server running said simulation works. The following sections will dive deep into these concepts:
@ -10,7 +10,7 @@ This section deals with two concepts that are fundamental to fully comprehend CA
* [__Client-server synchrony__](#client-server-synchrony)
* Setting synchronous mode
* Using synchronous mode
* [__Possible configurations__](#possible-configurations)
---------------
##Simulation time-step
@ -100,6 +100,7 @@ Must be mentioned that synchronous mode cannot be enabled using the script, only
The synchronous mode becomes especially relevant when running slow client applications and when synchrony between different elements, such as sensors, is needed. If the client is too slow and the server does not wait for it, the amount of information received will be impossible to manage, and it can easily get mixed up. Similarly, if ten sensors are waiting to retrieve data and the server sends all this information without waiting for all of them to have the previous one, it would be impossible to know whether all the sensors are using data from the same moment in the simulation.
As a little extension to the previous code, in the following fragment the client creates a camera sensor that puts the image data received in the current step in a queue, and sends ticks to the server only after retrieving it from the queue. A more complex example regarding several sensors can be found [here][syncmodelink].
```py
settings = world.get_settings()
settings.synchronous_mode = True
@ -115,12 +116,23 @@ while True:
```
[syncmodelink]: https://github.com/carla-simulator/carla/blob/master/PythonAPI/examples/synchronous_mode.py
!!! Important
    Data coming from GPU-based sensors (cameras) is usually generated with a delay of a couple of frames compared with CPU-based sensors, so synchrony is essential here.
The world also has asynchrony methods to make the client wait for a server tick or do something when it is received:
```py
# Wait for the next tick and retrieve the snapshot of the tick.
world_snapshot = world.wait_for_tick()
# Register a callback to get called every time we receive a new snapshot.
world.on_tick(lambda world_snapshot: do_something(world_snapshot))
```
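The `do_something` callback above is user-defined. A minimal sketch of one such callback that could be registered with `world.on_tick` (the frame-counting logic is an assumption for illustration; only the snapshot's `frame` attribute is taken from the API):

```py
class TickLogger:
    """Collects the frame number of every world snapshot it receives."""

    def __init__(self):
        self.frames = []

    def __call__(self, world_snapshot):
        # carla.WorldSnapshot exposes the simulation frame as `.frame`
        self.frames.append(world_snapshot.frame)

# Registration would look like: world.on_tick(TickLogger())
```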
----------------
##Possible configurations
The configuration of both concepts explained in this page, simulation time-step and client-server synchrony, leads to different types of simulation and results. Here is a brief summary of the possibilities and a better explanation of the reasoning behind them:


@ -26,7 +26,7 @@ nav:
  - Advanced steps:
    - 'Recorder': 'recorder_and_playback.md'
    - 'Rendering options': 'rendering_options.md'
    - 'Synchrony and time-step': 'simulation_time_and_synchrony.md'
  - References:
    - 'Python API reference': 'python_api.md'
    - 'Code recipes': 'python_cookbook.md'