Documentation about Recorder system

This commit is contained in:
bernatx 2019-04-12 17:21:54 +02:00 committed by Néstor Subirón
parent e4dd26a50e
commit 948dae364c
23 changed files with 444 additions and 275 deletions

BIN Docs/img/RecorderHeader.png Normal file (27 KiB, binary file not shown)
BIN Docs/img/RecorderLayout.png Normal file (13 KiB, binary file not shown)
BIN Docs/img/RecorderString.png Normal file (5.5 KiB, binary file not shown)
(17 other new image files, binary files not shown)
View File

@ -1,4 +1,5 @@
<h1>Python API tutorial</h1>
<h1>Python
API tutorial</h1>
In this tutorial we introduce the basic concepts of the CARLA Python API, as
well as an overview of its most important functionalities. The reference of all
@ -468,277 +469,3 @@ for each road segment in the map.
Finally, to allow access to the whole road information, the map object can be
converted to OpenDrive format, and saved to disk as such.

View File

@ -0,0 +1,291 @@
### Recording and Replaying system
CARLA now includes a recording and replaying API that allows recording a simulation to a file and replaying it later. The file is written only on the server side, and it records which **actors are created or destroyed** in the simulation, the **state of the traffic lights** and the **position/orientation** of all vehicles and walkers.
All data is written to a binary file on the server. Filenames can be given with or without a path. If the filename does not contain any of the characters '\\', '/' or ':', it is treated as a plain filename and the file is saved in the **CarlaUE4/Saved** folder. If it contains any of those characters, it is treated as an absolute path (for example '/home/carla/recording01.log' or 'c:\\records\\recording01.log').
As an estimate, recording a simulation with about 150 actors (50 traffic lights, 100 vehicles) for 1 hour takes around 200 MB.
#### Recording
To start recording we only need to supply a file name:
```py
client.start_recorder("recording01.log")
```
To stop the recording, we need to call:
```py
client.stop_recorder()
```
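For orientation, a minimal recording session could look like the sketch below. It assumes a CARLA server running on localhost:2000; the `carla` client calls other than `start_recorder`/`stop_recorder` are just the standard way to connect, and the `time.sleep` is only there to let the simulation run for a while.
```py
import time
import carla

client = carla.Client("localhost", 2000)
client.set_timeout(10.0)

# starts writing CarlaUE4/Saved/recording01.log on the server
client.start_recorder("recording01.log")

time.sleep(60.0)  # let the simulation run for one minute

client.stop_recorder()
```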
#### Playback
At any point we can replay a simulation, specifying the filename:
```py
client.replay_file("recording01.log")
```
The replayer will create and destroy all actors that were recorded, move every actor along its recorded path, and set the traffic lights to the states they had at each moment.
When replaying there are some additional options; the full API call is:
```py
client.replay_file("recording01.log", start, duration, camera)
```
* **start**: time at which to start the replay.
    * If the value is positive, it is the number of seconds from the beginning.
    Ex: a value of 10 will start the replay at second 10.
    * If the value is negative, it is the number of seconds from the end.
    Ex: a value of -10 will replay only the last 10 seconds of the simulation.
* **duration**: number of seconds to play back. If playback stops before the recording reaches its end, all actors get autopilot enabled automatically. The intention is to replay a fragment of a simulation and then let all actors continue driving on autopilot.
* **camera**: the Id of an actor that the camera will follow while replaying. Keep reading to learn how to find an actor's Id (a concrete call is shown below).
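As an illustration, the call below uses hypothetical values: it would replay 30 seconds of the recording, starting at second 10, while the camera follows the actor whose recorded Id is 2276 (an id taken from the sample output shown later):
```py
# start = 10 s from the beginning, duration = 30 s, camera follows actor 2276
client.replay_file("recording01.log", 10.0, 30.0, 2276)
```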
#### Playback time factor (speed)
We can change the time factor (speed) of the replayer at any moment, using the following API:
```py
client.set_replayer_time_factor(2.0)
```
A value greater than 1.0 plays in fast motion and a value below 1.0 plays in slow motion; 1.0 is the default value for normal playback.
As a performance optimization, position interpolation is disabled for values over 2.0.
Calling this API does not stop the replay in progress; it only changes the speed, so it can be called several times while the replayer is running.
#### Info about the recorded file
We can get details about a recorded simulation, using this API:
```py
client.show_recorder_file_info("recording01.log")
```
The output looks something like this:
```
Version: 1
Map: Town05
Date: 02/21/19 10:46:20
Frame 1 at 0 seconds
Create 2190: spectator (0) at (-260, -200, 382.001)
Create 2191: traffic.traffic_light (3) at (4255, 10020, 0)
Create 2192: traffic.traffic_light (3) at (4025, 7860, 0)
Create 2193: traffic.traffic_light (3) at (1860, 7975, 0)
Create 2194: traffic.traffic_light (3) at (1915, 10170, 0)
...
Create 2258: traffic.speed_limit.90 (0) at (21651.7, -1347.59, 15)
Create 2259: traffic.speed_limit.90 (0) at (5357, 21457.1, 15)
Create 2260: traffic.speed_limit.90 (0) at (858, 18176.7, 15)
Frame 2 at 0.0254253 seconds
Create 2276: vehicle.mini.cooperst (1) at (4347.63, -8409.51, 120)
number_of_wheels = 4
object_type =
color = 255,241,0
role_name = autopilot
Frame 4 at 0.0758538 seconds
Create 2277: vehicle.diamondback.century (1) at (4017.26, 14489.8, 123.86)
number_of_wheels = 2
object_type =
color = 50,96,242
role_name = autopilot
Frame 6 at 0.122666 seconds
Create 2278: vehicle.seat.leon (1) at (3508.17, 7611.85, 120.002)
number_of_wheels = 4
object_type =
color = 237,237,237
role_name = autopilot
Frame 8 at 0.171718 seconds
Create 2279: vehicle.diamondback.century (1) at (3160, 3020.07, 120.002)
number_of_wheels = 2
object_type =
color = 50,96,242
role_name = autopilot
Frame 10 at 0.219568 seconds
Create 2280: vehicle.bmw.grandtourer (1) at (-5405.99, 3489.52, 125.545)
number_of_wheels = 4
object_type =
color = 0,0,0
role_name = autopilot
Frame 2350 at 60.2805 seconds
Destroy 2276
Frame 2351 at 60.3057 seconds
Destroy 2277
Frame 2352 at 60.3293 seconds
Destroy 2278
Frame 2353 at 60.3531 seconds
Destroy 2279
Frame 2354 at 60.3753 seconds
Destroy 2280
Frames: 2354
Duration: 60.3753 seconds
```
From here we know the **date** and the **map** where the simulation was recorded.
Then, for each frame that contains an event (actor creation or destruction, collisions), that info is shown. For created actors we see the **Id** assigned to them and some information about the actor. This is the **Id** to pass in the **camera** option of `replay_file` if we want to follow that actor during the replay.
At the end we can see the **total time** of the recording and also the number of **frames** that were recorded.
#### Info about collisions
In simulations with a **hero actor**, collisions are saved automatically, so we can query a recorded file to see whether any **hero actor** had collisions with some other actor. Currently the actor types we can use in the query are:
* **h** = Hero
* **v** = Vehicle
* **w** = Walker
* **t** = Traffic light
* **o** = Other
* **a** = Any
The collision query needs to know the type of actors involved in the collision. If we don't care we can specify **a** (any) for both. These are some examples:
* **a** **a**: will show all collisions recorded
* **v** **v**: will show all collisions between vehicles
* **v** **t**: will show all collisions between a vehicle and a traffic light
* **v** **w**: will show all collisions between a vehicle and a walker
* **v** **o**: will show all collisions between a vehicle and any other actor, such as static meshes
* **h** **w**: will show all collisions between a hero and a walker
Currently only **hero actors** record collisions, so the first actor will always be a hero.
The API for querying the collisions is:
```py
client.show_recorder_collisions("recording01.log", "a", "a")
```
The output is something similar to this:
```
Version: 1
Map: Town05
Date: 02/19/19 15:36:08
Time Types Id Actor 1 Id Actor 2
16 v v 122 vehicle.yamaha.yzf 118 vehicle.dodge_charger.police
27 v o 122 vehicle.yamaha.yzf 0
Frames: 790
Duration: 46 seconds
```
For each collision we can see the **time** at which it happened, the **types** of the actors involved, and the **id and description** of each actor.
So, if we want to see what happened in that recording around the first collision, where the hero actor collided with a vehicle, we could use this API:
```py
client.replay_file("col2.log", 13, 0, 122)
```
We start the replayer a bit before the time of the collision, so we can see how it happened.
Also, a **duration** of 0 (the default value) means replaying the whole file.
We then see something like this:
![collision](img/collision1.gif)
#### Info about blocked actors
There is another API to get information about actors that were blocked by something and could not continue on their way. This can be useful for finding incidents in the simulation. The API is:
```py
client.show_recorder_actors_blocked("recording01.log", min_time, min_distance)
```
The parameters are:
* **min_time**: the minimum time (in seconds) that an actor needs to be stopped to be considered blocked.
* **min_distance**: the distance threshold (in cm); an actor moving less than this is considered stopped.
So, if we want to know which actors were stopped (moving less than 1 meter over a period of 60 seconds), we could use something like:
```py
client.show_recorder_actors_blocked("col3.log", 60, 100)
```
The result looks something like this (sorted by duration):
```
Version: 1
Map: Town05
Date: 02/19/19 15:45:01
Time Id Actor Duration
36 173 vehicle.nissan.patrol 336
75 104 vehicle.dodge_charger.police 295
75 214 vehicle.chevrolet.impala 295
234 76 vehicle.nissan.micra 134
241 162 vehicle.audi.a2 128
302 143 vehicle.bmw.grandtourer 67
303 133 vehicle.nissan.micra 67
303 167 vehicle.audi.a2 66
302 80 vehicle.nissan.micra 67
Frames: 6985
Duration: 374 seconds
```
These lines tell us when an actor was stopped for at least the minimum time specified.
For example, in the 6th line, actor 143 got stopped at second 302 and remained blocked for 67 seconds.
We could check what happened at that time with the following API call:
```py
client.replay_file("col3.log", 302, 0, 143)
```
![actor blocked](img/actor_blocked1.png)
We can see there is some obstruction that is actually blocking the actor (the red vehicle in the image).
We can also check another actor with:
```py
client.replay_file("col3.log", 75, 0, 104)
```
![actor blocked](img/actor_blocked2.png)
We can see it is the same incident, but seen from another actor involved (the police car).
The result is sorted by duration, so the actor blocked for the longest time comes first. Checking the first line, the actor with Id 173 got stopped at second 36 for 336 seconds. We could see how it got into that situation by replaying from a few seconds before second 36.
```py
client.replay_file("col3.log", 34, 0, 173)
```
![accident](img/accident.gif)
We can then see the actor responsible for the incident.
### Sample Python scripts
There are some scripts you could use:
* **start_recording.py**: this will start recording, and optionally you can spawn several actors and define how much time you want to record.
* **-f**: filename to write
* **-n**: vehicles to spawn (optional, 10 by default)
* **-t**: duration of the recording (optional)
<br>
* **start_replaying.py**: this will start a replay of a file. We can define the starting time, duration and also an actor to follow.
* **-f**: filename
* **-s**: starting time (optional, by default from start)
* **-d**: duration (optional, by default all)
* **-c**: actor to follow (id) (optional)
<br>
* **show_recorder_file_info.py**: this will show all the information recorded in a file. It has two levels of detail: by default it only shows the frames where some event is recorded; with the flag below it shows info about all frames (all positions and traffic light states).
* **-f**: filename
* **-a**: flag to show all details (optional)
<br>
* **show_recorder_collisions.py**: this will show all the collisions that happened while recording (currently only collisions involving hero actors are registered).
* **-f**: filename
* **-t**: two letters defining the types of the actors involved, for example: -t aa
* **h** = Hero
* **v** = Vehicle
* **w** = Walker
* **t** = Traffic light
* **o** = Other
* **a** = Any
<br>
* **show_recorder_actors_blocked.py**: this will show all the actors that got blocked (stopped) in the recording. We can define the time and distance thresholds used to consider an actor blocked.
* **-f**: filename
* **-t**: minimum seconds stopped to be considered blocked (optional)
* **-d**: minimum distance to be considered stopped (optional)
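As a quick reference, a hypothetical command-line session using these scripts (with the flags described above) could look like this:
```
python start_recording.py -f recording01.log -n 20 -t 120
python start_replaying.py -f recording01.log -s 10 -d 30 -c 2276
python show_recorder_file_info.py -f recording01.log
python show_recorder_collisions.py -f recording01.log -t vv
python show_recorder_actors_blocked.py -f recording01.log -t 60 -d 100
```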

View File

@ -0,0 +1,151 @@
## Recorder Binary File Format
The recorder system saves all the info needed to replay the simulation in a binary file, using little-endian byte order for multibyte values. A detailed overview of the file format follows; each part is explained in the sections below:
![file format 1](img/RecorderFileFormat1.png)
In summary, the file format has a small header with general info (version, magic string, date and the map used) and a collection of packets of different types (currently there are 8 types, but the list will grow in the future).
![global file format](img/RecorderFileFormat3.png)
### Strings in binary
Strings are saved with the length of the string first, followed by its characters, without a null terminator. For example, the string 'Town06' is saved as the hex values: 06 00 54 6f 77 6e 30 36
![binary dynamic string](img/RecorderString.png)
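As an illustration, a minimal Python sketch of writing and reading such strings with the standard `struct` module (assuming a 2-byte little-endian length prefix, as in the example above, and ASCII/UTF-8 characters) could be:
```py
import struct

def write_string(f, text):
    data = text.encode("utf-8")
    f.write(struct.pack("<H", len(data)))  # 2-byte little-endian length
    f.write(data)                          # characters, no null terminator

def read_string(f):
    (length,) = struct.unpack("<H", f.read(2))
    return f.read(length).decode("utf-8")
```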
### Info header
The info header contains general information about the recorded file, such as the version and a magic string that identifies the file as a recorder file. If the header layout changes, the version changes as well. Next comes a date timestamp, with the number of seconds from the Epoch 1900, and then a string with the name of the map used, like 'Town04'.
![info header](img/RecorderInfoHeader.png)
A sample info header is:
![info header sample](img/RecorderHeader.png)
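A reading sketch, reusing `read_string` from the section above, could look like this. The field widths used here (2-byte version, 4-byte timestamp) are assumptions for illustration; the exact sizes are the ones shown in the header figures.
```py
def read_info_header(f):
    # uses struct and read_string from the previous sketch
    (version,) = struct.unpack("<H", f.read(2))   # assumed 2-byte version
    magic = read_string(f)                        # string identifying a recorder file
    (date,) = struct.unpack("<I", f.read(4))      # assumed 4-byte timestamp in seconds
    map_name = read_string(f)                     # e.g. 'Town04'
    return version, magic, date, map_name
```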
### Packets
Each packet starts with a small header of two fields (5 bytes):
![packet header](img/RecorderPacketHeader.png)
* **id**: the type of the packet
* **size**: the size of the data the packet carries
* **data**: the data bytes of the packet (optional)
If the **size** is greater than 0, the packet has **data** bytes. The **data** is optional and needs to be interpreted according to the type of the packet.
The packet header is useful because during playback we can simply ignore the packets we are not interested in. We only need to read the header (the first 5 bytes) of the packet and then jump to the next packet, skipping its data:
![packets size](img/RecorderPackets.png)
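A sketch of that skipping logic, based only on the 5-byte header described above (a 1-byte **id** plus a 4-byte **size** is assumed here), could be:
```py
import struct

def scan_packets(f):
    # Yield (packet_id, data_offset, data_size) for every packet, skipping the data.
    while True:
        header = f.read(5)
        if len(header) < 5:
            break
        packet_id, size = struct.unpack("<BI", header)  # assumed 1-byte id + 4-byte size
        offset = f.tell()
        yield packet_id, offset, size
        f.seek(size, 1)  # jump over the packet data to the next header
```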
The types of packets are:
![packets type list](img/RecorderPacketsList.png)
We suggest using an **id** above 100 for custom user packets, because this list will keep growing sequentially in the future.
#### Packet 0: Frame Start
This packet marks the start of a new frame, and it needs to be the first packet of each frame. All other packets need to be placed between a **Frame Start** and a **Frame End**.
![frame start](img/RecorderFrameStart.png)
So, **elapsed** + **durationThis** = elapsed time of the next frame.
#### Packet 1: Frame End
This packet has no data; it only marks the end of the current frame. That helps the replayer know where each frame ends, just before the next one starts.
Usually the next packet will be a **Frame Start** packet starting a new frame.
![frame end](img/RecorderFrameEnd.png)
#### Packet 2: Event Add
This packet says how many actors need to be created in the current frame.
![event add](img/RecorderEventAdd.png)
The field **total** says how many records follow. Each record starts with the **id** field, which is the id the actor had when it was recorded (on playback that id could change internally, but the recorded id is the one to use). The **type** of the actor is one of:
* 0 = Other
* 1 = Vehicle
* 2 = Walker
* 3 = TrafficLight
* 4 = INVALID
Next follow the **location** and the **rotation** where the actor is to be created.
Then comes the **description** of the actor. The description **uid** is the numeric id of the description and the **id** is the textual id, like 'vehicle.seat.leon'.
Then comes a collection of its **attributes** (color, number of wheels, role, ...). The number of attributes is variable, and they could look like this:
* number_of_wheels = 4
* sticky_control = true
* color = 79,33,85
* role_name = autopilot
#### Packet 3: Event Del
This packet says how many actors need to be destroyed in this frame.
![event del](img/RecorderEventDel.png)
It has the **total** number of records, and each record holds the **id** of an actor to remove.
For example, the packet could look like this:
![event del](img/RecorderPacketSampleEventDel.png)
The first 3 identifies the packet type (Event Del). The 16 is the size of the packet data (4 fields of 4 bytes each), so if we don't want to process this packet we can skip the next 16 bytes and land directly at the start of the next packet.
The next 3 says how many records follow, and each record is the id of an actor to remove. So in this frame we need to remove the actors 100, 101 and 120.
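For illustration, decoding that payload (a 4-byte total followed by that many 4-byte actor ids, as described above) could be sketched like this:
```py
import struct

def read_event_del(data):
    # 'data' is the 16-byte payload of the sample packet
    (total,) = struct.unpack_from("<I", data, 0)
    return list(struct.unpack_from("<%dI" % total, data, 4))

# for the sample packet above this returns [100, 101, 120]
```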
#### Packet 4: Event Parent
This packet says which actor is the child of another actor (the parent).
![event parent](img/RecorderEventParent.png)
The first id is the child actor and the second one is the parent actor.
#### Packet 5: Event Collision
If a collision happens between two actors, it is registered in this packet. Currently only actors with a collision sensor report collisions, so at the moment only hero vehicles have that sensor attached automatically.
![event collision](img/RecorderCollision.png)
The **id** is just a sequential number used to identify each collision internally.
Several collisions between the same pair of actors can be registered in the same frame, because the physics frame rate is fixed and there are usually several physics frames within one render frame.
#### Packet 6: Position
This packet records the position and orientation of all actors of type **vehicle** and **walker** that exist in the scene.
![position](img/RecorderPosition.png)
#### Packet 7: TrafficLight
This packet records the state of all **traffic lights** in the scene. That means storing the state (red, orange or green) and the time it is waiting before changing to a new state.
![state](img/RecorderTrafficLight.png)
#### File Layout
The layout of the file starts with the **info header**, followed by a collection of packets grouped by frame. The first packet in each group is the **Frame Start** packet and the last one is the **Frame End** packet, with all other packets in between.
![layout](img/RecorderLayout.png)
Usually it is a good idea to put all the event packets first, and the position and state packets after them.
The event packets are optional, since they only appear when the events happen, so we could have a layout like this:
![layout](img/RecorderLayoutSample.png)
In frame 1 some actors are created and reparented.
In frame 2 some collisions have been detected.
In frame 3 some actors are removed.