From fc832d9d2602359a853c87ad8109172034b8cc90 Mon Sep 17 00:00:00 2001 From: bernatx Date: Fri, 5 Jul 2019 15:02:17 +0200 Subject: [PATCH] Corrections from PR --- Docs/recorder_and_playback.md | 16 ++++---- Docs/recorder_binary_file_format.md | 62 ++++++++++++++--------------- 2 files changed, 39 insertions(+), 39 deletions(-) diff --git a/Docs/recorder_and_playback.md b/Docs/recorder_and_playback.md index 2f42ed91f..978fa3567 100644 --- a/Docs/recorder_and_playback.md +++ b/Docs/recorder_and_playback.md @@ -1,6 +1,6 @@ ### Recording and Replaying system -CARLA includes now a recording and replaying API, that allows to record a simulation in a file and later replay that simulation. The file is written on server side only, and it includes which **actors are created or destroyed** in the simulation, the **state of the traffic lights** and the **position** and **orientation** of all vehicles and pedestrians. +CARLA now includes a recording and replaying API that allows recording a simulation to a file and replaying it later. The file is written on the server side only, and it includes which **actors are created or destroyed** in the simulation, the **state of the traffic lights** and the **position** and **orientation** of all vehicles and pedestrians. All data is written in a binary file on the server. We can use filenames with or without a path. If we specify a filename without any of '\\', '/' or ':' characters, then it is considered to be only a filename and will be saved in the folder **CarlaUE4/Saved**. If we use any of the previous characters then the filename will be considered as an absolute filename with path (for example: '/home/carla/recording01.log' or 'c:\\records\\recording01.log'). @@ -57,7 +57,7 @@ E.g. With a time factor of 20x we can see traffic flow: ![flow](img/RecorderFlow2.gif) -Pedestrian's animations will not be affected by this time factor and will remain at normal speed. Therefore, animations are not accurate yet.
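The filename rule described above can be sketched as a small helper. Note this function is hypothetical and for illustration only; it is not part of the CARLA API:

```python
# Hypothetical helper mirroring the rule described above: a filename that
# contains any of '\', '/' or ':' is treated as an absolute path with a
# folder; otherwise the file is saved in CarlaUE4/Saved on the server.
def is_absolute_recording_path(name: str) -> bool:
    return any(char in name for char in ('\\', '/', ':'))

print(is_absolute_recording_path('recording01.log'))              # False
print(is_absolute_recording_path('/home/carla/recording01.log'))  # True
print(is_absolute_recording_path('c:\\records\\recording01.log')) # True
```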
+Pedestrians' animations will not be affected by this time factor and will remain at normal speed. Therefore, animations are not accurate yet. This API call will not stop the replayer in progress; it will just change the speed, so you can change it several times while the replayer is running. @@ -136,7 +136,7 @@ At the end, we can see as well the **total time** of the recording and also the #### Info about collisions -In simulations whith a **hero actor**, the collisions are automatically saved, so we can query a recorded file to see if any **hero actor** had collisions with some other actor. Currently, the actor types we can use in the query are these: +In simulations with a **hero actor**, the collisions are automatically saved, so we can query a recorded file to see if any **hero actor** had collisions with some other actor. Currently, the actor types we can use in the query are these: * **h** = Hero * **v** = Vehicle @@ -193,7 +193,7 @@ The output result is similar to this: #### Info about blocked actors -There is another API to get information about actors that have been blocked by an obstacle, not letting them follow their way. That could be helpful for finding incidences. The API call is: +There is another API function to get information about actors that have been blocked by an obstacle, preventing them from following their way. That could be helpful for finding incidents. The API call is: ```py client.show_recorder_actors_blocked("recording01.log", min_time, min_distance) @@ -242,7 +242,7 @@ client.replay_file("col3.log", 302, 0, 143) ![actor blocked](img/actor_blocked1.png) -As we can observe, there is an obstacle that is actually blocking the actor (red vehicle in the image). +As we can observe, there is an obstacle that is actually blocking the actor (see the red vehicle in the image).
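The blocked-actor query above takes a `min_time` and a `min_distance`. Its criterion can be sketched as a hypothetical predicate; the function name and signature are illustrative, not part of the API:

```python
# Hypothetical predicate mirroring the query described above: an actor
# counts as blocked when it has stayed "stopped" for at least min_time
# seconds, where "stopped" means it travelled less than min_distance.
def is_blocked(stopped_seconds: float, travelled_distance: float,
               min_time: float, min_distance: float) -> bool:
    return stopped_seconds >= min_time and travelled_distance < min_distance

print(is_blocked(60.0, 0.5, min_time=30.0, min_distance=10.0))  # True
print(is_blocked(10.0, 0.5, min_time=30.0, min_distance=10.0))  # False
```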
Looking at another actor using: ```py @@ -251,7 +251,7 @@ client.replay_file("col3.log", 75, 0, 104) ![actor blocked](img/actor_blocked2.png) -It is worth noting that it is the same incident but with another vehicle involved in it (police car). +It is worth noting that it is the same incident but with another vehicle involved in it (the police car in this case). The result is sorted by duration, so the actor that is blocked for the longest time comes first. By checking the vehicle with Id 173 at time 36 seconds, it is evident that it stopped for 336 seconds. To check the cause of it, it would be useful to see how it arrived at that situation by replaying a few seconds before second 36: @@ -265,7 +265,7 @@ And easily determine who was responsible for that incident. ### Sample Python scripts -There are some scripts you could use: +Here you can find a list of sample scripts you could use: * **start_recording.py**: This will start recording, and optionally you can spawn several actors and define how much time you want to record. * **-f**: Filename to write @@ -292,7 +292,7 @@ There are some scripts you could use: * **o** = Other * **a** = Any
-* **show_recorder_actors_blocked.py**: This will show all the actors that are blocked (stopped) in the recorder. We can define the time and distance to be considered as blocked. +* **show_recorder_actors_blocked.py**: This will show all the actors that are blocked or stopped in the recording. We can define thresholds for the *time* an actor has not been moving and for the distance it has *travelled*, to determine whether a vehicle is considered blocked or not. * **-f**: Filename * **-t**: Minimum seconds stopped to be considered as blocked (optional) * **-d**: Minimum distance to be considered stopped (optional) diff --git a/Docs/recorder_binary_file_format.md b/Docs/recorder_binary_file_format.md index b33e28944..73b7b702e 100644 --- a/Docs/recorder_binary_file_format.md +++ b/Docs/recorder_binary_file_format.md @@ -1,16 +1,16 @@ ## Recorder Binary File Format -The recorder system saves all the info needed to replay the simulation in a binary file, using little endian byte order for the multibyte values. A detailed view of the file format follows as a quick view. Each part will be explained in the following sections: +The recorder system saves all the info needed to replay the simulation in a binary file, using little endian byte order for the multibyte values. The next image gives a quick view of the file format; each part shown in the image will be explained in the following sections: ![file format 1](img/RecorderFileFormat1.png) -In summary, the file format has a small header with general info (version, magic string, date and the map used) and a collection of packets of different types (currently we use 10 types, but that will be growing up in the future). +In summary, the file format has a small header with general info (version, magic string, date and the map used) and a collection of packets of different types (currently we use 10 types, but the list will continue to grow in the future).
![global file format](img/RecorderFileFormat3.png) ### 1. Strings in binary -Strings are saved with the length of the string first, and then the characters, without ending with a null. For example the string 'Town06' will be saved as hex values: 06 00 54 6f 77 6e 30 36 +Strings are encoded with their length first, followed by their characters, without a terminating null character. For example, the string 'Town06' will be saved as hex values: 06 00 54 6f 77 6e 30 36 ![binary dynamic string](img/RecorderString.png) @@ -18,7 +18,7 @@ Strings are saved with the length of the string first, and then the characters, ### 2. Info header -The info header only has general information about the recorded file, like the version and a magic string to identify the file as a recorder file. If the header changes then the version will change also. Next is a date timestamp, with the number of seconds from the Epoch 1900, and then a string with the name of the map used, like 'Town04'. +The info header has general information about the recorded file. Basically, it contains the version and a magic string to identify the file as a recorder file. If the header changes then the version will change also. Furthermore, it contains a date timestamp with the number of seconds since the Epoch 1900, and a string with the name of the map that has been used for the recording. ![info header](img/RecorderInfoHeader.png) @@ -33,12 +33,12 @@ Each packet starts with a little header of two fields (5 bytes): ![packet header](img/RecorderPacketHeader.png) -* **id**: Is the type of the packet -* **size**: Is the size of the data that has the packet
-If the **size** is greater than 0 means that the packet has **data** bytes. The **data** needs to be reinterpreted in function of the type of the packet. +If the **size** is greater than 0 it means that the packet has **data** bytes. Therefore, the **data** needs to be reinterpreted depending on the type of the packet. The header of the packet is useful because we can just ignore those packets we are not interested in when doing playback. We only need to read the header (first 5 bytes) of the packet and jump to the next packet just skipping the data of the packet: @@ -48,7 +48,7 @@ The types of packets are: ![packets type list](img/RecorderPacketsList.png) -I suggest to use **id** over 100 for user custom packets, because this list will grow up in sequence in the future. +We suggest to use **id** over 100 for user custom packets, because this list will keep growing in the future. #### 3.1 Packet 0: Frame Start @@ -61,18 +61,18 @@ So, elapsed + durationThis = elapsed time for next frame #### 3.2 Packet 1: Frame End -This frame has no data and it only marks the end of the current frame. That helps replayer to know the end of each frame just before the new one starts. -Usually the next frame should be a Frame Start packet to start a new frame. +This frame has no data and it only marks the end of the current frame. That helps the replayer to know the end of each frame just before the new one starts. +Usually, the next frame should be a Frame Start packet to start a new frame. ![frame end](img/RecorderFrameEnd.png) #### 3.3 Packet 2: Event Add -This packet sais how many actors we need to create at current frame. +This packet says how many actors we need to create at current frame. ![event add](img/RecorderEventAdd.png) -The field **total** sais how many records follow. Each record starts with the **id** field, that is the id the actor has when it was recorded (on playback that id could change internally, but we need to use this id ). 
The **type** of actor is one value of: +The field **total** says how many records follow. Each record starts with the **id** field, which is the id the actor had when it was recorded (on playback that id could change internally, but we need to use this one). The **type** of actor can have these possible values: * 0 = Other * 1 = Vehicle @@ -80,11 +80,11 @@ The field **total** sais how many records follow. Each record starts with the ** * 3 = TrafficLight * 4 = INVALID -Next follows the **location** and the **rotation** where we want to create the actor. +After that follow the **location** and the **rotation** where we want to create the actor. -Then we have the **description** of the actor. The description **uid** is the numeric id of the description and the **id** is the textual id, like 'vehicle.seat.leon'. +Right after, we have the **description** of the actor. The description **uid** is the numeric id of the description and the **id** is the textual id, like 'vehicle.seat.leon'. -Then comes a collection of its **attributes** (like the color, number of wheels, role, ...). The number of attributes is variable and could be something like: +Then comes a collection of its **attributes**, like color, number of wheels, role, etc. The number of attributes is variable and could look similar to this: * number_of_wheels = 4 * sticky_control = true @@ -93,7 +93,7 @@ Then comes a collection of its **attributes** (like the color, number of wheels, #### 3.4 Packet 3: Event Del -This packet sais how many actors need to be destroyed this frame. +This packet says how many actors need to be destroyed in this frame. ![event del](img/RecorderEventDel.png) @@ -103,12 +103,12 @@ For example, this packet could be like this: ![event del](img/RecorderPacketSampleEventDel.png) -The 3 identify the packet as (Event Del). The 16 is the size of the data of the packet (4 fields of 4 bytes each).
So if we don't want to process this packet, we could skip the next 16 bytes and will be directly to the start of the next packet. -The next 3 sais the total records that follows, and each record is the id of the actor to remove. So, we need to remove at this frame the actors 100, 101 and 120. +The number 3 identifies the packet as an Event Del. The number 16 is the size of the data of the packet (4 fields of 4 bytes each). So if we don't want to process this packet, we can skip the next 16 bytes and land directly at the start of the next packet. +The next 3 says how many records follow, and each record is the id of an actor to remove. So, at this frame we need to remove the actors 100, 101 and 120. #### 3.5 Packet 4: Event Parent -This packet sais which actor is the child of another (the parent). +This packet says which actor is the child of another (the parent). ![event parent](img/RecorderEventParent.png) The first id is the child actor, and the second one will be the parent actor. #### 3.6 Packet 5: Event Collision -If a collision happens between two actors, it will be registered in this packet. Currently only actors with a collision sensor will report collisions, so currently only hero vehicles has that sensor attached automatically. +If a collision happens between two actors, it will be registered in this packet. Currently, only actors with a collision sensor will report collisions, so only hero vehicles have that sensor attached automatically. ![event collision](img/RecorderCollision.png) The **id** is just a sequence number to identify each collision internally. -Several collisions between the same pair of actors can happen in the same frame, because physics frame rate is fixed and usually there are several in the same render frame. +Several collisions between the same pair of actors can happen in the same frame, because the physics frame rate is fixed and there are usually several physics substeps in the same rendered frame.
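The Event Del example discussed above can be decoded in a few lines of Python. The exact field widths are assumptions consistent with the 5-byte header and the "4 fields of 4 bytes each" description: a 1-byte packet id followed by 4-byte little-endian unsigned integers.

```python
import struct

# Sample Event Del packet from the text above: packet id 3, data size 16,
# then total = 3 records, each record the id of an actor to remove.
# Field widths are assumed: 1-byte packet id, 4-byte little-endian ints.
packet = bytes([0x03]) + struct.pack('<IIIII', 16, 3, 100, 101, 120)

def parse_event_del(buf: bytes):
    packet_id, size = struct.unpack_from('<BI', buf, 0)  # 5-byte header
    assert packet_id == 3 and size == 16
    total = struct.unpack_from('<I', buf, 5)[0]          # number of records
    return list(struct.unpack_from('<%dI' % total, buf, 9))

print(parse_event_del(packet))  # [100, 101, 120]
```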
#### 3.7 Packet 6: Position @@ -131,29 +131,29 @@ This packet records the position and orientation of all actors of type **vehicle #### 3.8 Packet 7: TrafficLight -This packet records the state of all **traffic lights** in the scene. That means to store the state (red, orange or green) and the time it is waitting to change to a new state. +This packet records the state of all **traffic lights** in the scene. That means it stores the state (red, orange or green) and the time it is waiting to change to a new state. ![state](img/RecorderTrafficLight.png) #### 3.9 Packet 8: Vehicle animation -This packet records the animation of the vehicles, bikes and cycles. This packet store the **throttle**, **sterring**, **brake**, **handbrake** and **gear** inputs, and then set them at playback. +This packet records the animation of the vehicles, bikes and cycles. It stores the **throttle**, **steering**, **brake**, **handbrake** and **gear** inputs, and then sets them at playback. ![state](img/RecorderVehicle.png) #### 3.10 Packet 9: Walker animation -This packet records the animation for the walker. It just saves the **speed** of the walker that is used for the animation. +This packet records the animation of the walker. It just saves the **speed** of the walker that is used in the animation. ![state](img/RecorderWalker.png) ### 4. Frame Layout -A frame consist on several packets, all of them optional, unless the packets that **start** and **end** the frame, that must be there always. +A frame consists of several packets, all of them optional, except the **Frame Start** and **Frame End** packets, which must always be present. ![layout](img/RecorderFrameLayout.png) -**Event** packets exist only in the frame where they happens. +**Event** packets exist only in the frame where they happen. **Position** and **traffic light** packets should exist in all frames, because they are required to move all actors and set the traffic lights to their state.
They are optional but if they are not present then the replayer will not be able to move or set the state of traffic lights. @@ -161,16 +161,16 @@ The **animation** packets are also optional, but by default they are recorded. T ### 5. File Layout -The layout of the file starts with the **info header** and then follows a collection of packets in groups. The first in each group is the **Frame Start** packet, and the last in the group is the **Frame End** packet. In the middle can go all other packets. +The layout of the file starts with the **info header**, followed by a collection of packets in groups. The first in each group is the **Frame Start** packet, and the last in the group is the **Frame End** packet. In between, we can find the rest of the packets. ![layout](img/RecorderLayout.png) -Usually it is a good idea to have all packets about events first, and then the packets about position and state later. +Usually, it is a good idea to have all packets regarding events first, and the packets regarding position and state later. -The events packets are optional, only appears when they happen, so we could have a layout like this: +The event packets are optional, since they only appear when the corresponding events happen, so we could have a layout like this: ![layout](img/RecorderLayoutSample.png) -In **frame 1** some actors are created and reparented, so the events are there. In **frame 2** there are no events. In **frame 3** some actors have collided so the collision event appears with that info. In **frame 4** the actors are destroyed. +In **frame 1** some actors are created and reparented, so we can observe their events in the image. In **frame 2** there are no events. In **frame 3** some actors have collided, so the collision event appears with that info. In **frame 4** the actors are destroyed.
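The skip-unknown-packets idea described earlier (read the 5-byte header, then jump over the data) can be sketched as follows. The split of the header into a 1-byte id and a 4-byte little-endian size is an assumption consistent with the two-field, 5-byte header described above; the sample stream is synthetic.

```python
import io
import struct

def iter_packets(stream):
    # Read only the 5-byte packet header (assumed: 1-byte id plus a
    # 4-byte little-endian size) and yield each packet's raw data,
    # so a reader can simply ignore packet types it does not handle.
    while True:
        header = stream.read(5)
        if len(header) < 5:
            return
        packet_id, size = struct.unpack('<BI', header)
        yield packet_id, stream.read(size)  # size == 0 -> empty data

# Synthetic stream: a Frame Start-like packet (id 0) carrying 8 bytes of
# data, followed by a Frame End-like packet (id 1) with no data at all.
raw = b'\x00' + struct.pack('<I', 8) + b'\x00' * 8 + b'\x01' + struct.pack('<I', 0)
for packet_id, data in iter_packets(io.BytesIO(raw)):
    print(packet_id, len(data))
```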