Several documentation fixes and rewording

This commit is contained in:
nsubiron 2018-12-16 00:06:03 +01:00
parent 4661c24899
commit c19495d72a
4 changed files with 93 additions and 46 deletions


@ -1,14 +1,15 @@
<h1>Cameras and sensors</h1>
This document describes the details of the different cameras/sensors currently
available as well as the resulting images produced by them.
![Client window](img/client_window.png)
Sensors are one type of actor with the characteristic of having a listen
function, you can subscribe to the sensor by providing a callback function. This
callback is called each time a new measurement is received from the sensor.
Sensors are a special type of actor able to measure and stream data. All the
sensors have a `listen` method that registers the callback function that will
be called each time the sensor produces a new measurement. Sensors are typically
attached to vehicles and produce data either each simulation update, or when a
certain event is registered.
You typically add a sensor to a vehicle with the following Python code, here we
are adding an HD camera
The following Python excerpt shows how you would typically attach a sensor to a
vehicle; in this case, we are adding a dashboard HD camera.
```py
# Find the blueprint of the sensor.
@ -16,14 +17,14 @@ blueprint = world.get_blueprint_library().find('sensor.camera.rgb')
# Modify the attributes of the blueprint to set image resolution and field of view.
blueprint.set_attribute('image_size_x', '1920')
blueprint.set_attribute('image_size_y', '1080')
blueprint.set_attribute('fov', '100')
blueprint.set_attribute('fov', '110')
# Provide the position of the sensor relative to the vehicle.
transform = carla.Transform(carla.Location(x=0.8, z=1.7))
# Tell the world to spawn the sensor, don't forget to attach it to your vehicle actor.
sensor = world.spawn_actor(blueprint, transform, attach_to=my_vehicle)
# Subscribe to the sensor stream by providing a callback function, this function is
# called each time a new image is generated by the sensor.
sensor.listen(lambda image: do_something(image))
sensor.listen(lambda data: do_something(data))
```
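As a sketch of what the callback might do (assuming the `save_to_disk` method mentioned later in this document for the produced data objects, plus a hypothetical `output_path` helper), you could save every image as it arrives:

```python
# Hypothetical helper (not part of the CARLA API): builds a zero-padded file
# name from the frame number so the saved images sort in frame order.
def output_path(frame_number):
    return 'output/%06d.png' % frame_number

print(output_path(42))  # output/000042.png

# With it, the callback could save each incoming image, e.g.:
# sensor.listen(lambda image: image.save_to_disk(output_path(image.frame_number)))
```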
Note that each sensor has a different set of attributes and produces different
@ -32,7 +33,10 @@ type of data. However, the data produced by a sensor is always tagged with the
frame at which the measurement took place; the transform gives you the
transformation in world coordinates of the sensor at that same frame.
This is the list of sensors currently available in CARLA
Most sensor data objects, like images and lidar measurements, have a function
for saving the measurements to disk.
This is the list of sensors currently available
* [sensor.camera.rgb](#sensorcamerargb)
* [sensor.camera.depth](#sensorcameradepth)
@ -66,7 +70,8 @@ applied to the image to create a more realistic feel
* **Lens flares** Simulates the reflection of bright objects on the lens.
* **Depth of field** Blurs objects near or very far away from the camera.
This sensor produces `carla.Image` objects.
This sensor produces [`carla.Image`](python_api.md#carlaimagecarlasensordata)
objects.
| Sensor data attribute | Type | Description |
| --------------------- | ---- | ----------- |
@ -91,7 +96,8 @@ pixel to the camera (also known as **depth buffer** or **z-buffer**).
| `image_size_y` | int | 600 | Image height in pixels |
| `fov` | float | 90.0 | Field of view in degrees |
This sensor produces `carla.Image` objects.
This sensor produces [`carla.Image`](python_api.md#carlaimagecarlasensordata)
objects.
| Sensor data attribute | Type | Description |
| --------------------- | ---- | ----------- |
@ -127,7 +133,8 @@ pedestrians appear in a different color than vehicles.
| `image_size_y` | int | 600 | Image height in pixels |
| `fov` | float | 90.0 | Field of view in degrees |
This sensor produces `carla.Image` objects.
This sensor produces [`carla.Image`](python_api.md#carlaimagecarlasensordata)
objects.
| Sensor data attribute | Type | Description |
| --------------------- | ---- | ----------- |
@ -180,7 +187,7 @@ This sensor simulates a rotating Lidar implemented using ray-casting. The points
are computed by adding a laser for each channel distributed in the vertical FOV,
then the rotation is simulated computing the horizontal angle that the Lidar
rotated this frame, and doing a ray-cast for each point that each laser was
supposed to generate this frame; `PointsPerSecond / (FPS * Channels)`.
supposed to generate this frame; `points_per_second / (FPS * channels)`.
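To make the formula concrete, here is a small sketch computing how many points each laser casts per frame; the numbers used are illustrative values, not necessarily the blueprint defaults:

```python
def points_per_laser_per_frame(points_per_second, fps, channels):
    # points_per_second / (FPS * channels), as described above.
    return points_per_second / (fps * channels)

# Illustrative values: 56000 points/s, 20 FPS, 32 channels.
print(points_per_laser_per_frame(56000, 20, 32))  # 87.5
```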
| Blueprint attribute | Type | Default | Description |
| -------------------- | ---- | ------- | ----------- |
@ -191,7 +198,9 @@ supposed to generate this frame; `PointsPerSecond / (FPS * Channels)`.
| `upper_fov` | float | 10.0 | Angle in degrees of the upper most laser |
| `lower_fov` | float | -30.0 | Angle in degrees of the lower most laser |
This sensor produces `carla.LidarMeasurement` objects.
This sensor produces
[`carla.LidarMeasurement`](python_api.md#carlalidarmeasurementcarlasensordata)
objects.
| Sensor data attribute | Type | Description |
| -------------------------- | ---------- | ----------- |
@ -227,8 +236,9 @@ This sensor, when attached to an actor, registers an event each time the
actor collides with something in the world. This sensor does not have any
configurable attribute.
This sensor produces a `carla.CollisionEvent` object for each collision
registered
This sensor produces a
[`carla.CollisionEvent`](python_api.md#carlacollisioneventcarlasensordata)
object for each collision registered
| Sensor data attribute | Type | Description |
| ---------------------- | ----------- | ----------- |
@ -257,8 +267,9 @@ by this sensor.
This sensor does not have any configurable attribute.
This sensor produces a `carla.LaneInvasionEvent` object for each lane marking
crossed by the actor
This sensor produces a
[`carla.LaneInvasionEvent`](python_api.md#carlalaneinvasioneventcarlasensordata)
object for each lane marking crossed by the actor
| Sensor data attribute | Type | Description |
| ----------------------- | ----------- | ----------- |


@ -1,12 +1,5 @@
# Download
### Stable [[Documentation](https://carla.readthedocs.io/en/stable/)]
> The most tested and robust release out there!
- [CARLA 0.8.2](https://github.com/carla-simulator/carla/releases/tag/0.8.2) -
[[Blog post](http://carla.org/2018/04/23/release-0.8.2/)] - _Driving Benchmark_
### Development [[Documentation](https://carla.readthedocs.io/en/latest/)]
> These are the versions of CARLA, more frequently updated and with the latest
@ -21,3 +14,10 @@
[[Blog post](http://carla.org/2018/06/18/release-0.8.4/)] - _Fixes And More!_
- [CARLA 0.8.3](https://github.com/carla-simulator/carla/releases/tag/0.8.3) -
[[Blog post](http://carla.org/2018/06/08/release-0.8.3/)] - _Now with bikes!_
### Stable [[Documentation](https://carla.readthedocs.io/en/stable/)]
> The most tested and robust release out there!
- [CARLA 0.8.2](https://github.com/carla-simulator/carla/releases/tag/0.8.2) -
[[Blog post](http://carla.org/2018/04/23/release-0.8.2/)] - _Driving Benchmark_


@ -14,11 +14,11 @@ CARLA consists mainly of two modules, the **CARLA Simulator** and the **CARLA
Python API** module. The simulator does most of the heavy work, controls the
logic, physics, and rendering of all the actors and sensors in the scene; it
requires a machine with a dedicated GPU to run. The CARLA Python API is a module
that you can import into your Python scripts and provides and interface for
controlling the simulator and retrieving data. Through this Python API you can
for instance control any vehicle in the simulation, attach sensors to it, and
read back the data these sensors generate. Most of the aspects of the simulation
are accessible from our Python API, and more will be in future releases.
that you can import into your Python scripts, it provides an interface for
controlling the simulator and retrieving data. With this Python API you can, for
instance, control any vehicle in the simulation, attach sensors to it, and read
back the data these sensors generate. Most of the aspects of the simulation are
accessible from our Python API, and more will be in future releases.
<h2>How to run CARLA</h2>
@ -61,15 +61,16 @@ waiting for a client app to connect and interact with the world.
the simulator with the command-line argument `-carla-port=N`, the second
port will be automatically set to `N+1`.
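The two-port rule above can be sketched as a tiny helper; `rpc_port` is an illustrative name for the value passed in `-carla-port=N`:

```python
def carla_ports(rpc_port):
    # The simulator uses the given port N and, automatically, N+1.
    return (rpc_port, rpc_port + 1)

print(carla_ports(2000))  # (2000, 2001)
```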
Let's add now some live to the city, open a new terminal window and execute
Let's now add some life to the city: open a new terminal window and execute
```sh
python spawn_npc.py -n 80
```
This adds 80 vehicles to the world driving in "autopilot" mode. Back to the
simulator window we should see these vehicles driving around the city. They will
keep driving randomly until we stop the script. Let's leave them there for now.
With this script we are adding 80 vehicles to the world driving in "autopilot"
mode. Back to the simulator window we should see these vehicles driving around
the city. They will keep driving randomly until we stop the script. Let's leave
them there for now.
Now, it's nice and sunny in CARLA, but that's not a very interesting driving
condition. One of the cool features of CARLA is that you can control the weather
@ -91,3 +92,15 @@ This should open a new window with a 3rd person view of a car, you can drive
this car with the WASD/arrow keys. Press 'h' to see all the options available.
![manual_control.py](img/manual_control.png)
As you have noticed, we can connect as many scripts as we want to control the
simulation and gather data; someone on a different computer can even jump into
your simulation now and drive along with you
```sh
python manual_control.py --host=<your-ip-address-here>
```
<br>
Now that we covered the basics, in the next section we'll take a look at some of
the details of the Python API to help you write your own scripts.


@ -8,8 +8,9 @@ Install the build tools and dependencies
```
sudo add-apt-repository ppa:ubuntu-toolchain-r/test
sudo apt-get update
sudo apt-get install build-essential clang-5.0 lld-5.0 g++-7 ninja-build python python-pip python-dev libpng16-dev libtiff5-dev libjpeg-dev tzdata sed curl wget unzip autoconf libtool
pip install --user setuptools nose2
sudo apt-get install build-essential clang-5.0 lld-5.0 g++-7 cmake ninja-build python python-pip python-dev python3-dev python3-pip libpng16-dev libtiff5-dev libjpeg-dev tzdata sed curl wget unzip autoconf libtool
pip2 install --user setuptools nose2
pip3 install --user setuptools nose2
```
To avoid compatibility issues between Unreal Engine and the CARLA dependencies,
@ -23,8 +24,6 @@ sudo update-alternatives --install /usr/bin/clang++ clang++ /usr/lib/llvm-5.0/bi
sudo update-alternatives --install /usr/bin/clang clang /usr/lib/llvm-5.0/bin/clang 101
```
[cmakelink]: https://cmake.org/download/
Build Unreal Engine
-------------------
@ -61,7 +60,7 @@ Note that the `master` branch contains the latest fixes and features; for the
latest stable code, it may be best to switch to the `stable` branch.
Now you need to download the assets package. To do so, we provide a handy script
that downloads and extracts the latest version (note that the package is >10GB,
that downloads and extracts the latest version (note that this package is >3GB,
this step might take some time depending on your connection)
```sh
@ -77,19 +76,21 @@ export UE4_ROOT=~/UnrealEngine_4.19
You can also add this variable to your `~/.bashrc` or `~/.profile`.
Now that the environment is set up, you can run make to run different commands
Now that the environment is set up, you can use make to run different commands
and build the different modules
```sh
make launch # Compiles CARLA and launches Unreal Engine's Editor.
make package # Compiles CARLA and creates a packaged version for distribution.
make help # Print all available commands.
make launch # Compiles the simulator and launches Unreal Engine's Editor.
make PythonAPI # Compiles the PythonAPI module necessary for running the Python examples.
make package # Compiles everything and creates a packaged version able to run without UE4 editor.
make help # Print all available commands.
```
Updating CARLA
--------------
Every new release of CARLA we release a new package with the latest changes in
the CARLA assets. To download the latest version and recompile CARLA, run
With every new release of CARLA, we also release a new package with the latest
changes in the CARLA assets. To download the latest version and recompile CARLA, run
```sh
make clean
@ -97,3 +98,25 @@ git pull
./Update.sh
make launch
```
- - -
<h2>Assets repository (development only)</h2>
Our 3D assets, models, and maps also have a
[publicly available git repository][contentrepolink]. We regularly push the
latest updates to this repository. However, using this version of the content
is only recommended for developers, as we often have work-in-progress maps and
models.
Handling this repository requires [git-lfs][gitlfslink] installed on your
machine. Clone this repository to "Unreal/CarlaUE4/Content/Carla"
```sh
git lfs clone https://bitbucket.org/carla-simulator/carla-content Unreal/CarlaUE4/Content/Carla
```
It is recommended to clone with "git lfs clone" as this is significantly faster
in older versions of git.
[contentrepolink]: https://bitbucket.org/carla-simulator/carla-content
[gitlfslink]: https://git-lfs.github.com/