New fixes on M and D comments
commit c9e8c09b41
parent d50cb35a9f
@@ -74,12 +74,12 @@ The world has two different methods to spawn actors.
 * [`try_spawn_actor()`](python_api.md#carla.World.try_spawn_actor) returns `None` if the spawning fails.

 ```py
-transform = Transform(Location(x=230, y=195, z=40), Rotation(0,180,0))
+transform = Transform(Location(x=230, y=195, z=40), Rotation(yaw=180))
 actor = world.spawn_actor(blueprint, transform)
 ```

-!!! Note
-    CARLA uses the [Unreal Engine coordinates system](https://carla.readthedocs.io/en/latest/python_api/#carlarotation), and rotations are declared in a specific order, `(pitch,yaw,roll)`, which is different in the Unreal Engine Editor.
+!!! Important
+    CARLA uses the [Unreal Engine coordinates system](https://carla.readthedocs.io/en/latest/python_api/#carlarotation). Remember that the [`carla.Rotation`](https://carla.readthedocs.io/en/latest/python_api/#carlarotation) constructor is defined as `(pitch, yaw, roll)`, which differs from the Unreal Engine Editor order `(roll, pitch, yaw)`.

 The actor will not be spawned in case of collision at the specified location, no matter whether this happens with a static object or another actor. It is possible to try to avoid these undesired spawning collisions.

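A minimal sketch of the collision-safe spawning pattern described above, assuming a simulator reachable at `localhost:2000`; the blueprint filter and the fallback over the map's recommended spawn points are illustrative choices, not part of the documented example:

```py
import carla

# Connect to a locally running simulator (assumed host/port).
client = carla.Client('localhost', 2000)
client.set_timeout(10.0)
world = client.get_world()

# Any vehicle blueprint will do for the sketch.
blueprint = world.get_blueprint_library().filter('vehicle.*')[0]

# try_spawn_actor() returns None instead of raising when the location is occupied,
# so we can simply walk through the recommended spawn points until one is free.
vehicle = None
for transform in world.get_map().get_spawn_points():
    vehicle = world.try_spawn_actor(blueprint, transform)
    if vehicle is not None:
        break

if vehicle is None:
    print('Every recommended spawn point was blocked.')
else:
    print('Spawned %s at %s' % (vehicle.type_id, vehicle.get_transform().location))
```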
@@ -1728,9 +1728,9 @@ Data contained inside a [carla.SemanticLidarMeasurement](#carla.SemanticLidarMea
 - <a name="carla.SemanticLidarDetection.cos_inc_angle"></a>**<font color="#f8805a">cos_inc_angle</font>** (_float_)
 Cosine of the incident angle between the ray, and the normal of the hit object.
 - <a name="carla.SemanticLidarDetection.object_idx"></a>**<font color="#f8805a">object_idx</font>** (_uint_)
-[CARLA index](https://carla.readthedocs.io/en/latest/ref_sensors/#semantic-segmentation-camera) of the hit actor.
+ID of the actor hit by the ray.
 - <a name="carla.SemanticLidarDetection.object_tag"></a>**<font color="#f8805a">object_tag</font>** (_uint_)
-[Semantic tag](https://carla.readthedocs.io/en/latest/ref_sensors/#semantic-segmentation-camera) of the hit component.
+[Semantic tag](https://carla.readthedocs.io/en/latest/ref_sensors/#semantic-segmentation-camera) of the component hit by the ray.

 <h3>Methods</h3>

@@ -1745,8 +1745,8 @@ Cosine of the incident angle between the ray, and the normal of the hit object.
 <h3>Instance Variables</h3>
 - <a name="carla.SemanticLidarMeasurement.channels"></a>**<font color="#f8805a">channels</font>** (_int_)
 Number of lasers shot.
-- <a name="carla.SemanticLidarMeasurement.horizontal_angle"></a>**<font color="#f8805a">horizontal_angle</font>** (_float_)
-Horizontal angle the LIDAR is rotated at the time of the measurement (in radians).
+- <a name="carla.SemanticLidarMeasurement.horizontal_angle"></a>**<font color="#f8805a">horizontal_angle</font>** (_float<small> – radians</small>_)
+Horizontal angle the LIDAR is rotated at the time of the measurement.
 - <a name="carla.SemanticLidarMeasurement.raw_data"></a>**<font color="#f8805a">raw_data</font>** (_bytes_)
 Received list of raw detection points. Each point consists of [x,y,z] coordinates plus the cosine of the incident angle, the index of the hit actor, and its semantic tag.

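A hedged sketch of how the raw detection buffer described above could be decoded with NumPy; the 32-bit field widths are an assumption inferred from the field list, not stated in this diff:

```py
import numpy as np

# Assumed per-point layout: x, y, z and cos(incident angle) as float32,
# followed by the hit actor index and the semantic tag as uint32.
SEMANTIC_LIDAR_DTYPE = np.dtype([
    ('x', np.float32), ('y', np.float32), ('z', np.float32),
    ('cos_inc_angle', np.float32),
    ('object_idx', np.uint32), ('object_tag', np.uint32),
])

def parse_semantic_lidar(measurement):
    """Decode carla.SemanticLidarMeasurement.raw_data into a structured array."""
    return np.frombuffer(measurement.raw_data, dtype=SEMANTIC_LIDAR_DTYPE)

# Example use inside a sensor callback:
# semantic_lidar.listen(lambda data: print(parse_semantic_lidar(data)['object_tag']))
```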
@@ -1774,20 +1774,20 @@ Retrieves the number of points sorted by channel that are generated by this meas
 ## carla.Sensor<a name="carla.Sensor"></a>
 <div style="padding-left:30px;margin-top:-20px"><small><b>Inherited from _[carla.Actor](#carla.Actor)_</b></small></div></p><p>Sensors compound a specific family of actors quite diverse and unique. They are normally spawned as attachment/sons of a vehicle (take a look at [carla.World](#carla.World) to learn about actor spawning). Sensors are thoroughly designed to retrieve different types of data that they are listening to. The data they receive is shaped as different subclasses inherited from [carla.SensorData](#carla.SensorData) (depending on the sensor).

-Most sensors can be divided in two groups: those receiving data on every tick (cameras, point clouds and some specific sensors) and those who only receive under certain circumstances (trigger detectors). CARLA provides a specific set of sensors and their blueprint can be found in [carla.BlueprintLibrary](#carla.BlueprintLibrary). All the information on their preferences and settlement can be found [here](ref_sensors.md), but the list of those available in CARLA so far goes as follow.
-<b>Receive data on every tick.</b>
-- [Depth camera](ref_sensors.md#depth-camera).
-- [Gnss sensor](ref_sensors.md#gnss-sensor).
-- [IMU sensor](ref_sensors.md#imu-sensor).
-- [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
-- [SemanticLidar raycast](ref_sensors.md#semanticlidar-raycast-sensor).
-- [Radar](ref_sensors.md#radar-sensor).
-- [RGB camera](ref_sensors.md#rgb-camera).
-- [RSS sensor](ref_sensors.md#rss-sensor).
-- [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera).
-<b>Only receive data when triggered.</b>
-- [Collision detector](ref_sensors.md#collision-detector).
-- [Lane invasion detector](ref_sensors.md#lane-invasion-detector).
+Most sensors can be divided in two groups: those receiving data on every tick (cameras, point clouds and some specific sensors) and those who only receive under certain circumstances (trigger detectors). CARLA provides a specific set of sensors and their blueprint can be found in [carla.BlueprintLibrary](#carla.BlueprintLibrary). All the information on their preferences and settlement can be found [here](ref_sensors.md), but the list of those available in CARLA so far goes as follow.
+<br><b>Receive data on every tick.</b>
+- [Depth camera](ref_sensors.md#depth-camera).
+- [Gnss sensor](ref_sensors.md#gnss-sensor).
+- [IMU sensor](ref_sensors.md#imu-sensor).
+- [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
+- [SemanticLidar raycast](ref_sensors.md#semanticlidar-raycast-sensor).
+- [Radar](ref_sensors.md#radar-sensor).
+- [RGB camera](ref_sensors.md#rgb-camera).
+- [RSS sensor](ref_sensors.md#rss-sensor).
+- [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera).
+<br><b>Only receive data when triggered.</b>
+- [Collision detector](ref_sensors.md#collision-detector).
+- [Lane invasion detector](ref_sensors.md#lane-invasion-detector).
+- [Obstacle detector](ref_sensors.md#obstacle-detector).

 <h3>Instance Variables</h3>

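A short, hedged sketch of the attach-and-listen pattern this class description refers to; the blueprint id, camera placement and output path are illustrative, and `vehicle` is assumed to be an already spawned carla.Vehicle:

```py
import carla

def attach_rgb_camera(world, vehicle):
    """Spawn an RGB camera attached to a vehicle and save every frame it produces."""
    blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
    # Place the camera slightly above and behind the vehicle's origin.
    transform = carla.Transform(carla.Location(x=-5.5, z=2.8), carla.Rotation(pitch=-15))
    camera = world.spawn_actor(blueprint, transform, attach_to=vehicle)

    # The callback receives a carla.Image (a carla.SensorData subclass) on every tick.
    camera.listen(lambda image: image.save_to_disk('_out/%06d.png' % image.frame))
    return camera
```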
@@ -129,6 +129,9 @@ def brackets(buf):
 def parentheses(buf):
     return join(['(', buf, ')'])

+def small_html(buf):
+    return join(['<small>'+buf+'</small>'])
+

 def small(buf):
     return join(['<sub><sup>', buf, '</sup></sub>'])

@@ -334,7 +337,7 @@ def add_doc_method_param(md, param):
     if valid_dic_val(param, 'doc'):
         param_doc = create_hyperlinks(md.prettify_doc(param['doc']))
     if valid_dic_val(param, 'param_units'):
-        param_units = '<small> – '+create_hyperlinks(param['param_units']+'</small>')
+        param_units = small_html(' – '+param['param_units'])
     param_type = '' if not param_type else parentheses(italic(param_type+param_units))
     md.list_push(code(param_name))
     if param_type:

@@ -377,7 +380,7 @@ def add_doc_method(md, method, class_key):
     md.list_push(bold('Return:') + ' ')
     return_units = ''
     if valid_dic_val(method, 'return_units'):
-        return_units = '<small> – '+create_hyperlinks(method['return_units']+'</small>')
+        return_units = small_html(' – '+method['return_units'])
     md.textn(italic(create_hyperlinks(method['return'])+return_units))
     md.list_pop()

@@ -431,7 +434,7 @@ def add_doc_getter_setter(md, method,class_key,is_getter,other_list):
     md.list_push(bold('Return:') + ' ')
     return_units = ''
     if valid_dic_val(method, 'return_units'):
-        return_units = '<small> – '+create_hyperlinks(method['return_units']+'</small>')
+        return_units = small_html(' – '+method['return_units'])
     md.textn(italic(create_hyperlinks(method['return'])+return_units))
     md.list_pop()

@@ -507,7 +510,7 @@ def add_doc_inst_var(md, inst_var, class_key):
     # Instance variable type
     if valid_dic_val(inst_var, 'type'):
         if valid_dic_val(inst_var, 'var_units'):
-            var_units = '<small> – '+create_hyperlinks(inst_var['var_units']+'</small>')
+            var_units = small_html(' – '+inst_var['var_units'])
         var_type = ' ' + parentheses(italic(create_hyperlinks(inst_var['type']+var_units)))
     md.list_pushn(
         html_key(var_key) +

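To make the effect of these script changes concrete, here is a standalone sketch with stand-in versions of `join`, `italic`, `parentheses` and `small_html` (redefined only for the example, not copied from the script), showing how a type and its units end up rendered in the generated Markdown:

```py
# Minimal stand-ins for the helpers used by the documentation generator.
def join(buf, sep=''):
    return sep.join(buf)

def italic(buf):
    return join(['_', buf, '_'])

def parentheses(buf):
    return join(['(', buf, ')'])

def small_html(buf):
    return join(['<small>'+buf+'</small>'])

# After the change, units are appended to the type through small_html():
var_type = 'float'
var_units = small_html(' – ' + 'radians')
print(parentheses(italic(var_type + var_units)))
# -> (_float<small> – radians</small>_)
```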
@@ -8,21 +8,21 @@
   doc: >
     Sensors compound a specific family of actors quite diverse and unique. They are normally spawned as attachment/sons of a vehicle (take a look at carla.World to learn about actor spawning). Sensors are thoroughly designed to retrieve different types of data that they are listening to. The data they receive is shaped as different subclasses inherited from carla.SensorData (depending on the sensor).

-    Most sensors can be divided in two groups: those receiving data on every tick (cameras, point clouds and some specific sensors) and those who only receive under certain circumstances (trigger detectors). CARLA provides a specific set of sensors and their blueprint can be found in carla.BlueprintLibrary. All the information on their preferences and settlement can be found [here](ref_sensors.md), but the list of those available in CARLA so far goes as follow.
-    <b>Receive data on every tick.</b>
-    - [Depth camera](ref_sensors.md#depth-camera).
-    - [Gnss sensor](ref_sensors.md#gnss-sensor).
-    - [IMU sensor](ref_sensors.md#imu-sensor).
-    - [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
-    - [SemanticLidar raycast](ref_sensors.md#semanticlidar-raycast-sensor).
-    - [Radar](ref_sensors.md#radar-sensor).
-    - [RGB camera](ref_sensors.md#rgb-camera).
-    - [RSS sensor](ref_sensors.md#rss-sensor).
-    - [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera).
-    <b>Only receive data when triggered.</b>
-    - [Collision detector](ref_sensors.md#collision-detector).
-    - [Lane invasion detector](ref_sensors.md#lane-invasion-detector).
-    - [Obstacle detector](ref_sensors.md#obstacle-detector).
+    Most sensors can be divided in two groups: those receiving data on every tick (cameras, point clouds and some specific sensors) and those who only receive under certain circumstances (trigger detectors). CARLA provides a specific set of sensors and their blueprint can be found in carla.BlueprintLibrary. All the information on their preferences and settlement can be found [here](ref_sensors.md), but the list of those available in CARLA so far goes as follow.
+    <br><b>Receive data on every tick.</b>
+    - [Depth camera](ref_sensors.md#depth-camera).
+    - [Gnss sensor](ref_sensors.md#gnss-sensor).
+    - [IMU sensor](ref_sensors.md#imu-sensor).
+    - [Lidar raycast](ref_sensors.md#lidar-raycast-sensor).
+    - [SemanticLidar raycast](ref_sensors.md#semanticlidar-raycast-sensor).
+    - [Radar](ref_sensors.md#radar-sensor).
+    - [RGB camera](ref_sensors.md#rgb-camera).
+    - [RSS sensor](ref_sensors.md#rss-sensor).
+    - [Semantic Segmentation camera](ref_sensors.md#semantic-segmentation-camera).
+    <br><b>Only receive data when triggered.</b>
+    - [Collision detector](ref_sensors.md#collision-detector).
+    - [Lane invasion detector](ref_sensors.md#lane-invasion-detector).
+    - [Obstacle detector](ref_sensors.md#obstacle-detector).

   # - PROPERTIES -------------------------
   instance_variables:

@@ -205,8 +205,9 @@
       Number of lasers shot.
   - var_name: horizontal_angle
     type: float
+    var_units: radians
     doc: >
-      Horizontal angle the LIDAR is rotated at the time of the measurement (in radians).
+      Horizontal angle the LIDAR is rotated at the time of the measurement.
   - var_name: raw_data
     type: bytes
     doc: >

@@ -266,12 +267,12 @@
   - var_name: object_idx
     type: uint
     doc: >
-      [CARLA index](https://carla.readthedocs.io/en/latest/ref_sensors/#semantic-segmentation-camera) of the hit actor.
+      ID of the actor hit by the ray.
   # --------------------------------------
   - var_name: object_tag
     type: uint
    doc: >
-      [Semantic tag](https://carla.readthedocs.io/en/latest/ref_sensors/#semantic-segmentation-camera) of the hit component.
+      [Semantic tag](https://carla.readthedocs.io/en/latest/ref_sensors/#semantic-segmentation-camera) of the component hit by the ray.
   # - METHODS ----------------------------
   methods:
   - def_name: __str__