Merge branch 'sensor-interface#200' into pointCloud

This commit is contained in:
Néstor Subirón 2018-02-03 14:26:19 +01:00 committed by GitHub
commit 66af651711
216 changed files with 4339 additions and 465 deletions

4
.gitignore vendored

@ -1,9 +1,11 @@
Dist
Doxygen
PythonClient/dist
Util/Build
*.VC.db
*.VC.opendb
*.egg-info
*.kdev4
*.log
*.pb.cc
@ -22,5 +24,7 @@ Util/Build
.tags*
.vs
__pycache__
_benchmarks_results
_images*
_out
core


@ -24,4 +24,4 @@ matrix:
packages:
- cppcheck
script:
- cppcheck Unreal/CarlaUE4/Source Unreal/CarlaUE4/Plugins/Carla/Source Util/ -iUtil/Build -iUtil/CarlaServer/source/carla/server/carla_server.pb.cc --quiet --error-exitcode=1 --enable=warning
- cppcheck . -iBuild -i.pb.cc --error-exitcode=1 --enable=warning --quiet


@ -5,18 +5,19 @@
"path": ".",
"folder_exclude_patterns":
[
"__pycache__",
".clang",
".codelite",
".kdev4",
".vs",
"Build",
"Binaries",
"Unreal/CarlaUE4/Content*",
"Build",
"DerivedDataCache",
"Dist",
"Doxygen",
"Intermediate",
"Saved"
"Saved",
"Unreal/CarlaUE4/Content*",
"__pycache__"
],
"file_exclude_patterns":
[
@ -47,6 +48,18 @@
},
"build_systems":
[
{
"name": "CARLA - Pylint",
"working_dir": "${project_path}",
"file_regex": "^\\[([^:]*):([0-9]+):?([0-9]+)?\\]:? (.*)$",
"shell_cmd": "pylint --disable=R,C --rcfile=PythonClient/.pylintrc PythonClient/carla PythonClient/*.py --msg-template='[{path}:{line:3d}:{column}]: {msg_id} {msg}'"
},
{
"name": "CARLA - CppCheck",
"working_dir": "${project_path}",
"file_regex": "^\\[([^:]*):([0-9]+):?([0-9]+)?\\]:? (.*)$",
"shell_cmd": "cppcheck . -iBuild -i.pb.cc --error-exitcode=0 --enable=warning --quiet"
},
{
"name": "CARLA - Rebuild script",
"working_dir": "${project_path}",


@ -1,3 +1,21 @@
## CARLA 0.7.1
* New Python API module: Benchmark
- Defines a set of tasks and conditions to test a certain agent
- Contains a starting benchmark, CoRL2017
- Contains Agent Class: Interface for benchmarking AIs
* New Python API module: Basic Planner (Temporary Hack)
- Provide routes for the agent
- Contains AStar module to find the shortest route
* Other Python API improvements
- Converter class to convert between Unreal world and map units
- Metrics module to summarize benchmark results
* Send vehicle's roll, pitch, and yaw to client (orientation is now deprecated)
* New RoutePlanner class for assigning fixed routes to autopilot (IntersectionEntrance has been removed)
* Create a random engine for each vehicle, which greatly improves repeatability
* Add option to skip content download in Setup.sh
* Few small fixes to the city assets
## CARLA 0.7.0
* New Python client API
@ -9,7 +27,7 @@
- Better documentation
- Protocol: renamed "ai_control" to "autopilot_control"
- Merged testing client
- Added the maps for both cities, the client can now access the car position within the lane.
- Added the maps for both cities, the client can now access the car position within the lane
* Make CARLA start without client by default
* Added wind effect to some trees and plants
* Improvements to the existing weather presets

73
Docs/CODE_OF_CONDUCT.md Normal file

@ -0,0 +1,73 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race,
religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at [INSERT EMAIL ADDRESS]. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org


@ -1,23 +1,25 @@
Contributing to CARLA
=====================
> _This document is a work in progress and might be incomplete._
We are more than happy to accept contributions!
How can I contribute?
* Reporting bugs
* Feature requests
* Improving documentation
* Code contributions
Reporting bugs
--------------
Use our [issue section](issueslink) on GitHub. Please check before that the
issue is not already reported.
Use our [issue section][issueslink] on GitHub. Please check first that the
issue has not already been reported, and make sure you have read our
[Documentation][docslink] and [FAQ][faqlink].
[issueslink]: https://github.com/carla-simulator/carla/issues
[docslink]: http://carla.readthedocs.io
[faqlink]: http://carla.readthedocs.io/en/latest/faq/
Feature requests
----------------
@ -28,6 +30,25 @@ your request as a new issue.
[frlink]: https://github.com/carla-simulator/carla/issues?q=is%3Aissue+is%3Aopen+label%3A%22feature+request%22
Improving documentation
-----------------------
If you feel something is missing in the documentation, please don't hesitate to
open an issue to let us know. Even better, if you think you can improve it
yourself, it would be a great contribution to the community!
We build our documentation with [MkDocs](http://www.mkdocs.org/) based on the
Markdown files inside the "Docs" folder. You can either directly modify them on
GitHub or locally on your machine.
Once you are done with your changes, please submit a pull-request.
**TIP:** You can build and serve the documentation locally by running
`mkdocs serve` in the project's main folder
$ sudo pip install mkdocs
$ mkdocs serve
Code contributions
------------------
@ -42,6 +63,17 @@ please contact one of us (or send an email to carla.simulator@gmail.com).
[projectslink]: https://github.com/carla-simulator/carla/projects/1
#### Where can I learn more about Unreal Engine?
A basic introduction to C++ programming with UE4 can be found at Unreal's
[C++ Programming Tutorials][ue4tutorials]. Then, if you want to dive into UE4
C++ development, there are a few paid options online. The
[Unreal C++ Course at Udemy][ue4course] is pretty complete, and there are
usually offers that make it very affordable.
[ue4tutorials]: https://docs.unrealengine.com/latest/INT/Programming/Tutorials/
[ue4course]: https://www.udemy.com/unrealcourse/
#### What should I know before I get started?
Check out the ["CARLA Design"](carla_design.md) document to get an idea on the
@ -50,36 +82,56 @@ the new feature. We are aware the developers documentation is still scarce,
please ask us in case of doubt, and of course don't hesitate to improve the
current documentation if you feel confident enough.
#### Are there any examples in CARLA to see how Unreal programming works?
You can find an example of how C++ classes work in UE4 in
[`ASceneCaptureToDiskCamera`][capturelink] (and its parent class
`ASceneCaptureCamera`). This class creates an actor that can be dropped into the
scene. In the editor, type _"Scene Capture To Disk"_ in the Modes tab, and drag
and drop the actor into the scene. Now, looking at its Details tab, you can find
all the `UPROPERTY` members reflected. This shows the basic mechanism for using C++
classes in Unreal Editor.
For a more advanced example on how to extend C++ classes with blueprints, you
can take a look at the _"VehicleSpawner"_ blueprint. It derives from the C++
class `AVehicleSpawnerBase`. The C++ class decides where and when it spawns a
vehicle, then calls the function `SpawnVehicle()`, which is implemented in the
blueprint. The blueprint then decides model and color of the vehicle being
spawned. Note that the blueprint can call back C++ functions, for instance for
getting the random engine. This way there is a back-and-forth communication
between C++ code and blueprints.
[capturelink]: https://github.com/carla-simulator/carla/blob/master/Unreal/CarlaUE4/Plugins/Carla/Source/Carla/SceneCaptureToDiskCamera.h
#### Coding standard
Please follow the current coding style when submitting new code.
###### General
* Use spaces, not tabs.
* Avoid adding trailing whitespace as it creates noise in the diffs.
* Comments should not exceed 80 columns; code may exceed this limit a bit on rare occasions if it results in clearer code.
###### Python
* All code must be compatible with Python 2.7, 3.5, and 3.6.
* [Pylint](https://www.pylint.org/) should not give any error or warning (a few exceptions apply with external classes like `numpy`, see our `.pylintrc`).
* Python code follows [PEP8 style guide](https://www.python.org/dev/peps/pep-0008/) (use `autopep8` whenever possible).
###### C++
* Compilation should not give any error or warning (`clang++ -Wall -Wextra -std=C++14`).
* Unreal C++ code (CarlaUE4 and Carla plugin) follows the [Unreal Engine's Coding Standard](https://docs.unrealengine.com/latest/INT/Programming/Development/CodingStandard/) with the exception of using spaces instead of tabs.
* CarlaServer uses [Google's style guide](https://google.github.io/styleguide/cppguide.html).
Please follow the current [coding standard](coding_standard.md) when submitting
new code.
#### Pull-requests
Once you think your contribution is ready to be added to CARLA, please submit a
pull-request to the `dev` branch.
pull-request.
Try to be as descriptive as possible when filling the pull-request description.
Adding images and gifs may help people to understand your changes or new
features.
Please note that there are some checks the new code is required to pass before
we can merge it. The checks are automatically run by the continuous integration
system; you will see a green tick mark if all the checks succeed. If you see a
red mark, please correct your code accordingly.
###### Checklist
<!--
If you modify this list please keep it up-to-date with pull_request_template.md
-->
- [ ] Your branch is up-to-date with the `master` branch and tested with latest changes
- [ ] Extended the README / documentation, if necessary
- [ ] Code compiles correctly
- [ ] All tests passing
- [ ] `make check`
- [ ] `pylint --disable=R,C --rcfile=PythonClient/.pylintrc PythonClient/carla PythonClient/*.py`
- [ ] `cppcheck . -iBuild -i.pb.cc --enable=warning`


@ -1,4 +1,6 @@
; Example of settings file for CARLA.
;
; Use it with `./CarlaUE4.sh -carla-settings=Path/To/This/File`.
[CARLA/Server]
; If set to false, a mock controller will be used instead of waiting for a real

66
Docs/benchmark.md Normal file

@ -0,0 +1,66 @@
CARLA Benchmark
===============
Running the Benchmark
---------------------
The "carla" api provides a basic benchmarking system, that allows making several
tests on a certain agent. We already provide the same benchmark used in the CoRL
2017 paper. By running this benchmark you can compare the results of your agent
to the results obtained by the agents show in the paper.
Besides the requirements of the CARLA client, the benchmark package also needs
the future package
$ sudo pip install future
By running the benchmark a default agent that just go straight will be tested.
To run the benchmark you need a server running. For a default localhost server
on port 2000, to run the benchmark you just need to run
$ ./run_benchmark.py
or
$ python run_benchmark.py
Run the help command to see options available
$ ./run_benchmark.py --help
Benchmarking your Agent
---------------------
The benchmark works by running three lines of code
corl = CoRL2017(city_name=args.city_name, name_to_save=args.log_name)
agent = Manual(args.city_name)
results = corl.benchmark_agent(agent, client)
This excerpt is executed in the [run_benchmark.py](https://github.com/carla-simulator/carla/blob/master/PythonClient/run_benchmark.py) example.
First, a *benchmark* object is defined; in this case, a CoRL2017 benchmark. This object is used to benchmark a certain Agent. <br>
On the second line of our sample code, an object of the Manual class is instantiated. This class inherits from the Agent base class
that is used by the *benchmark* object.
To be benchmarked, an Agent subclass must redefine the *run_step* function as it is done in the following excerpt:
def run_step(self, measurements, sensor_data, target):
"""
Function to run a control step in the CARLA vehicle.
:param measurements: object of the Measurements type
:param sensor_data: images list object
:param target: target position of Transform type
:return: an object of the control type.
"""
control = VehicleControl()
control.throttle = 0.9
return control
The function receives measurements from the world, sensor data and a target position. With these, the function must return the control to apply to the car, *i.e.* steering value, throttle value, brake value, etc.
The [measurements](measurements.md), [target](measurements.md), [sensor_data](cameras_and_sensors.md) and [control](measurements.md) types are described on the documentation.
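Putting the pieces together, a complete trivial agent could look like the sketch
below (the `ForwardAgent` name is ours, and the import path of the `Agent` base
class is an assumption; adjust it to the actual package layout):

```python
from carla.agent import Agent  # assumed import path for the Agent base class
from carla.client import VehicleControl


class ForwardAgent(Agent):
    """Trivial agent that always drives straight at a fixed throttle."""

    def run_step(self, measurements, sensor_data, target):
        # Ignore measurements, sensor data and target: full speed ahead.
        control = VehicleControl()
        control.throttle = 0.9
        return control
```

An instance of this class can then be passed to `benchmark_agent` in the same
way as the Manual agent of the excerpt above.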
Creating your Benchmark
---------------------
Tutorial to be added


@ -9,7 +9,7 @@ This document describes the details of the different cameras/sensors currently
available as well as the resulting images produced by them.
Although we plan to extend the sensor suite of CARLA in the near future, at the
moment there are three different sensors available. These three sensors are
moment there are only three different sensors available. These three sensors are
implemented as different post-processing effects applied to scene capture
cameras.
@ -38,6 +38,8 @@ more human readable palette of colors. It can be found at
Scene final
-----------
![SceneFinal](img/capture_scenefinal.png)<br>
The "scene final" camera provides a view of the scene after applying some
post-processing effects to create a more realistic feel. These are actually
stored on the Level, in an actor called [PostProcessVolume][postprolink] and not
@ -55,6 +57,8 @@ in the Camera. We use the following post process effects:
Depth map
---------
![Depth](img/capture_depth.png)
The "depth map" camera provides an image with 24 bit floating precision point
codified in the 3 channels of the RGB color space. The order from less to more
significant bytes is R -> G -> B.
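For illustration, a decoding sketch (the helper name is ours; it assumes the
image is already an H x W x 3 `uint8` array with channels in R, G, B order):

```python
import numpy as np


def depth_to_meters(image, far=1000.0):
    """Decode an H x W x 3 uint8 depth image into meters.

    R is the least significant byte; `far` is the maximum render
    distance in meters (1 km, as noted below).
    """
    img = image.astype(np.float64)
    normalized = (img[..., 0] +
                  img[..., 1] * 256.0 +
                  img[..., 2] * 256.0 ** 2) / (256.0 ** 3 - 1)
    return normalized * far
```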
@ -81,12 +85,14 @@ Our max render distance (far) is 1km.
Semantic segmentation
---------------------
![SemanticSegmentation](img/capture_semseg.png)
The "semantic segmentation" camera classifies every object in the view by
displaying it in a different color according to the object class. E.g.,
pedestrians appear in a different color than vehicles.
The server provides an image with the tag information encoded in the red
channel. A pixel with a red value of x displays an object with tag x. The
The server provides an image with the tag information **encoded in the red
channel**. A pixel with a red value of x displays an object with tag x. The
following tags are currently available
Value | Tag
@ -106,6 +112,13 @@ Value | Tag
12 | TrafficSigns
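As a short illustration (the helper below is ours, not part of the API),
selecting every pixel of a given class is a single comparison on the red
channel:

```python
import numpy as np


def tag_mask(semseg, tag):
    """Boolean mask of the pixels whose red channel equals `tag`.

    `semseg` is an H x W x 3 uint8 array; channel 0 (red) carries the
    object tag.
    """
    return semseg[..., 0] == tag


# E.g., pixels tagged as TrafficSigns (value 12 in the table above):
frame = np.zeros((600, 800, 3), dtype=np.uint8)  # dummy image
signs = tag_mask(frame, 12)
```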
This is implemented by tagging every object in the scene beforehand (either at
begin play or on spawn). The objects are classified by their relative file
system path in the project. E.g., every mesh stored in the "pedestrians" folder
is tagged as pedestrian.
begin play or on spawn). The objects are classified by their relative file path
in the project. E.g., every mesh stored in the
_"Unreal/CarlaUE4/Content/Static/Pedestrians"_ folder is tagged as pedestrian.
!!! note
**Adding new tags**:
At the moment adding new tags is not very flexible and requires modifying
the C++ code. Add a new label to the `ECityObjectLabel` enum in "Tagger.h",
and its corresponding filepath check inside the `GetLabelByFolderName()`
function in "Tagger.cpp".


@ -12,6 +12,10 @@ CARLA is composed by the following modules
- Carla plugin for Unreal Engine: "Unreal/CarlaUE4/Plugins/Carla"
- CarlaServer: "Util/CarlaServer"
!!! tip
Documentation for the C++ code can be generated by running
[Doxygen](http://www.doxygen.org) in the main folder of the CARLA project.
Python client API
-----------------

89
Docs/carla_headless.md Normal file

@ -0,0 +1,89 @@
Running CARLA without Display and Selecting GPUs
------
This tutorial is designed for:
- Remote server users that have several NVIDIA graphics cards and want to use CARLA effectively on all of their GPUs.
- Desktop users who want to use the GPU that is not plugged to the screen for rendering CARLA.
In this tutorial you will learn:
- How to configure your server so NVIDIA can render without a display attached.
- How to use VNC + VGL to simulate a display connected to any GPU you have in your machine.
- And finally, how to run CARLA in this environment.
This tutorial was tested on Ubuntu 16.04 with NVIDIA 384.11 drivers.
## Preliminaries
A few things need to be working on your server first:
the latest NVIDIA drivers, OpenGL, VirtualGL (VGL), and TurboVNC 2.1.1.
#### NVIDIA Drivers
Download and install the NVIDIA drivers following a standard tutorial:
http://www.nvidia.es/Download/index.aspx
#### OpenGL
OpenGL is necessary for VirtualGL. Normally OpenGL
can be installed through apt:
sudo apt-get install freeglut3-dev mesa-utils
#### VGL
Follow this tutorial to install VGL: <br>
[Installing VGL](https://virtualgl.org/vgldoc/2_2_1/#hd004001)
#### TurboVNC
Follow the tutorial below to install TurboVNC 2.1.1:<br>
[Installing TurboVNC](https://cdn.rawgit.com/TurboVNC/turbovnc/2.1.1/doc/index.html#hd005001)
WARNING: Take care with which VNC you install, as it may not be compatible with Unreal. The one above was the only one that worked for me.
#### Extra Packages
These extra packages were necessary to make Unreal work:
sudo apt install x11-xserver-utils libxrandr-dev
#### Configure your X
You must generate an X configuration compatible with your NVIDIA cards that can run without a display. For that, the following command worked:
sudo nvidia-xconfig -a --use-display-device=None --virtual=1280x1024
## Emulating The Virtual Display
Run your own Xorg. Here I use number 7, but it could be labeled with any free number.
sudo nohup Xorg :7 &
Run an auxiliary remote VNC-Xserver. This will create a
virtual display "8".
/opt/TurboVNC/bin/vncserver :8
If everything is working fine, the following command
should run smoothly:
DISPLAY=:8 vglrun -d :7.0 glxinfo
Note: this will run glxinfo on X server 7, device 0, meaning you are selecting GPU 0 on your machine. To run on another GPU, such as GPU 1, run:
DISPLAY=:8 vglrun -d :7.1 glxinfo
#### Extra
If you want to disable the need for sudo when creating the 'nohup Xorg',
go to the '/etc/X11/Xwrapper.config' file and change 'allowed_users=console' to 'allowed_users=anybody'.
It may be necessary to stop all Xorg servers before running nohup Xorg.
The command for that can change depending on your system. Generally, for Ubuntu 16.04,
you should use:
sudo service lightdm stop
## Running CARLA
Now, finally, to run CARLA on a certain <gpu_number>, with CARLA placed at a certain $CARLA_PATH, run:
DISPLAY=:8 vglrun -d :7.<gpu_number> $CARLA_PATH/CarlaUE4/Binaries/Linux/CarlaUE4
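For scripted runs, the same environment can be reproduced from Python; a minimal
sketch under the display numbers used above (the helper name and arguments are
ours):

```python
import os
import subprocess


def launch_carla(carla_path, gpu_number, vnc_display=8, xorg_display=7):
    """Start CARLA through VirtualGL on the chosen GPU (sketch)."""
    env = dict(os.environ, DISPLAY=':{}'.format(vnc_display))
    binary = os.path.join(carla_path, 'CarlaUE4/Binaries/Linux/CarlaUE4')
    cmd = ['vglrun', '-d', ':{}.{}'.format(xorg_display, gpu_number), binary]
    return subprocess.Popen(cmd, env=env)


# E.g., render on GPU 1:
# proc = launch_carla('/path/to/carla', gpu_number=1)
```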

25
Docs/coding_standard.md Normal file

@ -0,0 +1,25 @@
Coding standard
===============
> _This document is a work in progress and might be incomplete._
General
-------
* Use spaces, not tabs.
* Avoid adding trailing whitespace as it creates noise in the diffs.
* Comments should not exceed 80 columns; code may exceed this limit a bit on rare occasions if it results in clearer code.
Python
------
* All code must be compatible with Python 2.7, 3.5, and 3.6.
* [Pylint](https://www.pylint.org/) should not give any error or warning (a few exceptions apply with external classes like `numpy`, see our `.pylintrc`).
* Python code follows [PEP8 style guide](https://www.python.org/dev/peps/pep-0008/) (use `autopep8` whenever possible).
C++
---
* Compilation should not give any error or warning (`clang++ -Wall -Wextra -std=C++14`).
* Unreal C++ code (CarlaUE4 and Carla plugin) follows the [Unreal Engine's Coding Standard](https://docs.unrealengine.com/latest/INT/Programming/Development/CodingStandard/) with the exception of using spaces instead of tabs.
* CarlaServer uses [Google's style guide](https://google.github.io/styleguide/cppguide.html).


@ -1,7 +1,20 @@
CARLA F.A.Q.
============
<!-- ======================================================================= -->
<details>
<summary><h5 style="display:inline">
What is the recommended hardware to run CARLA?
</h5></summary>
#### What is the expected disk space needed for building CARLA?
CARLA is very performance-demanding software; at the very minimum you would
need a computer with a dedicated GPU capable of running Unreal Engine. See
[Unreal Engine's recommended hardware](https://wiki.unrealengine.com/Recommended_Hardware).
</details>
<!-- ======================================================================= -->
<details>
<summary><h5 style="display:inline">
What is the expected disk space needed for building CARLA?
</h5></summary>
Building CARLA from source requires about 15GB of disk space, not counting
Unreal Engine installation.
@ -10,7 +23,109 @@ However, you will also need to build and install Unreal Engine, which on Linux
requires much more disk space as it keeps all the intermediate files,
[see this thread](https://answers.unrealengine.com/questions/430541/linux-engine-size.html).
#### Is it possible to dump images from the CARLA server view?
</details>
<!-- ======================================================================= -->
<details>
<summary><h5 style="display:inline">
I downloaded CARLA source from GitHub, where is the "CarlaUE4.sh" script?
</h5></summary>
There is no "CarlaUE4.sh" script in the source version of CARLA, you need to
follow the instructions in the [documentation](http://carla.readthedocs.io) for
building CARLA from source.
Once you open the project in the Unreal Editor, you can hit Play to test CARLA.
</details>
<!-- ======================================================================= -->
<details>
<summary><h5 style="display:inline">
Setup.sh fails to download content, can I skip this step?
</h5></summary>
It is possible to skip the download step by passing the `-s` argument to the
setup script
$ ./Setup.sh -s
Bear in mind that if you do so, you are supposed to manually download and
extract the content package yourself; check the last output of Setup.sh
for instructions, or run
$ ./Update.sh -s
</details>
<!-- ======================================================================= -->
<details>
<summary><h5 style="display:inline">
Can I run the server from within Unreal Editor?
</h5></summary>
Yes, you can connect the Python client to a server running within Unreal Editor
as if it was the standalone server.
Go to **"Unreal/CarlaUE4/Config/CarlaSettings.ini"** (this file should have been
created by the Setup.sh) and enable networking. If for whatever reason you don't
have this file, just create it and add the following
```ini
[CARLA/Server]
UseNetworking=true
```
Now when you hit Play the editor will hang until a client connects.
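Connecting then works the same as against the standalone server; a quick sketch,
assuming the `make_carla_client` helper shipped with the PythonClient examples:

```python
from carla.client import make_carla_client

# Connect to the editor-hosted server on the default port.
with make_carla_client('localhost', 2000) as client:
    print('Connected to the CARLA server running in the editor.')
```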
</details>
<!-- ======================================================================= -->
<details>
<summary><h5 style="display:inline">
Why does Unreal Editor hang after hitting Play?
</h5></summary>
This is most probably happening because CARLA is starting in server mode. Check
your **"Unreal/CarlaUE4/Config/CarlaSettings.ini"** and set
```ini
[CARLA/Server]
UseNetworking=false
```
</details>
<!-- ======================================================================= -->
<details>
<summary><h5 style="display:inline">
How can I create a binary version of CARLA?
</h5></summary>
To compile a binary (packaged) version of CARLA, open the CarlaUE4 project with
Unreal Editor, go to the menu "File -> Package Project", and select your
platform. This takes a while, but in the end it should generate a packaged
version of CARLA to execute without Unreal Editor.
</details>
<!-- ======================================================================= -->
<details>
<summary><h5 style="display:inline">
Why do I have very low FPS when running the server in Unreal Editor?
</h5></summary>
UE4 Editor goes to a low performance mode when out of focus. It can be disabled
in the editor preferences. Go to "Edit->Editor Preferences->Performance" and
disable the "Use Less CPU When in Background" option.
</details>
<!-- ======================================================================= -->
<details>
<summary><h5 style="display:inline">
Is it possible to dump images from the CARLA server view?
</h5></summary>
Yes, this is an Unreal Engine feature. You can dump the images of the server
camera by running CARLA with
@ -19,17 +134,70 @@ camera by running CARLA with
Images are saved to "CarlaUE4/Saved/Screenshots/LinuxNoEditor".
#### I downloaded CARLA source from GitHub, where is the "CarlaUE4.sh" script?
</details>
There is no "CarlaUE4.sh" script in the source version of CARLA, you need to
follow the instructions in the [documentation](http://carla.readthedocs.io) on
building CARLA from source.
<!-- ======================================================================= -->
<details>
<summary><h5 style="display:inline">
Fatal error: 'version.h' has been modified since the precompiled header.
</h5></summary>
Once you open the project in the Unreal Editor, you can hit Play to test CARLA.
This happens from time to time due to Linux updates. It is possible to force a
rebuild of all the project files with
#### How can I create a binary version of CARLA?
$ cd Unreal/CarlaUE4/
$ make CarlaUE4Editor ARGS=-clean
$ make CarlaUE4Editor
To compile a binary (packaged) version of CARLA, open the CarlaUE4 project with
Unreal Editor, go to the menu “File -> Package Project”, and select your
platform. This takes a while, but in the end it should generate a packaged
version of CARLA to execute without Unreal Editor.
It takes a long time but fixes the issue. Sometimes a reboot is also needed.
</details>
<!-- ======================================================================= -->
<details>
<summary><h5 style="display:inline">
Fatal error: 'carla/carla_server.h' file not found.
</h5></summary>
This indicates that the CarlaServer dependency failed to compile.
Please follow the instructions at
[How to build on Linux](http://carla.readthedocs.io/en/latest/how_to_build_on_linux/).
Make sure that the Setup script prints _"Success!"_ at the end
$ ./Setup.sh
...
...
****************
*** Success! ***
****************
Then check that CarlaServer compiles without errors by running make
$ make
It should end by printing something like
```
[1/1] Install the project...
-- Install configuration: "Release"
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++abi.so.1
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++abi.so.1.0
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++.so.1
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++.so.1.0
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++.so
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/shared/libc++abi.so
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/lib/libc++abi.a
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/lib/libboost_system.a
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/lib/libprotobuf.a
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/include/carla
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/include/carla/carla_server.h
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/lib/libcarlaserver.a
-- Installing: Unreal/CarlaUE4/Plugins/Carla/CarlaServer/bin/test_carlaserver
-- Set runtime path of "Unreal/CarlaUE4/Plugins/Carla/CarlaServer/bin/test_carlaserver" to ""
```
If so, you can safely run Rebuild.sh.
</details>


@ -20,13 +20,15 @@ change your default clang version to compile Unreal
Build Unreal Engine
-------------------
!!! note
Unreal Engine repositories are set to private. In order to gain access you
need to add your GitHub username when you sign up at
[www.unrealengine.com](https://www.unrealengine.com).
Download and compile Unreal Engine 4.17. Here we will assume you install it at
"~/UnrealEngine_4.17", but you can install it anywhere, just replace the path
where necessary.
Unreal Engine repositories are set to private. In order to gain access you need
to add your GitHub username when you sign up at https://unrealengine.com.
$ git clone --depth=1 -b 4.17 https://github.com/EpicGames/UnrealEngine.git ~/UnrealEngine_4.17
$ cd ~/UnrealEngine_4.17
$ ./Setup.sh && ./GenerateProjectFiles.sh && make


@ -38,15 +38,9 @@ The "carla" Python module provides a basic API for communicating with the CARLA
server. In the "PythonClient" folder we provide a couple of examples on how to
use this API. We recommend Python 3, but they are also compatible with Python 2.
The basic functionality requires only the protobuf module to be installed
Install the dependencies with
$ sudo apt-get install python3 python3-pip
$ sudo pip3 install protobuf
However, other operations as handling images require some extra modules, and the
"manual_control.py" example requires pygame
$ sudo pip3 install numpy Pillow pygame
$ pip install -r PythonClient/requirements.txt
The script "PythonClient/client_example.py" provides basic functionality for
controlling the vehicle and saving images to disk. Run the help command to see

BIN (binary file not shown) | After: 1.5 MiB
BIN Docs/img/capture_depth.png (new file, binary not shown) | After: 93 KiB
BIN (binary file not shown) | After: 128 KiB
BIN Docs/img/capture_semseg.png (new file, binary not shown) | After: 7.3 KiB


@ -7,8 +7,9 @@ CARLA Documentation
* [CARLA settings](carla_settings.md)
* [Measurements](measurements.md)
* [Cameras and sensors](cameras_and_sensors.md)
* [CARLA without Display and Selecting GPUs](carla_headless.md)
* [Benchmark](benchmark.md)
* [F.A.Q.](faq.md)
* [Troubleshooting](troubleshooting.md)
#### Building from source
@ -19,6 +20,8 @@ CARLA Documentation
#### Contributing
* [Contribution guidelines](CONTRIBUTING.md)
* [Coding standard](coding_standard.md)
* [Code of conduct](CODE_OF_CONDUCT.md)
#### Development

12
Docs/issue_template.md Normal file

@ -0,0 +1,12 @@
<!--
Thanks for contributing to CARLA!
If you are asking a question please make sure your question was not asked before
by searching among the existing issues. Also make sure you have read our
documentation and FAQ at carla.readthedocs.io.
If you are reporting an issue, please provide the CARLA version number you are
using and your Platform/OS.
-->


@ -1,10 +1,50 @@
# Map customization
## Creating a new map
!!! Bug
Creating a map from scratch with the Carla tools causes a crash with
UE4.17.2 ([Issue #99](https://github.com/carla-simulator/carla/issues/99)),
this guide will suggest duplicating an existing level instead of creating
one from scratch.
### Requirements
- Checkout and build Carla from source on [Linux](how_to_build_on_linux.md) or [Windows](how_to_build_on_windows.md)
### Creating
- Duplicate an existing map
- Remove everything you don't need from the map
- Keep the folder "Lighting", "AtmosphericFog", "PostProcessVol" and "CarlaMapGenerator" this will keep the climate working as intended and the post process saved.
- It might be interesting to keep the empty level as a template and duplicate it before starting to populate it.
- In the CarlaMapGenerator, there is a field "seed". You can change the map by altering that seed and clicking "Trigger Road Map Generation". "Save Road Map To Disk" should also be checked.
- You can change the seed until you have a map you are satisfied with.
- After that you can place new PlayerStarts at the places you want the cars to be spawned.
- The AI already works, but the cars won't act randomly. Vehicles will follow the instructions given by the RoadMapGenerator. They will follow the road easily on straight roads, but not so much when entering intersections:
![Road_Instructions_Example.png](img/Road_Instructions_Example.png)
> (This is a debug view of the instructions the road gives to the vehicle. Cars will always follow the green arrows. The white points are shared points between one or more routes; by default they order the vehicle to continue straight. Black points are off the road; there the vehicle gets no instructions and drives to the left, trying to get back to the road.)
- To get random behavior, you have to place IntersectionEntrances; these let you redefine the direction the vehicle will take, overriding the directions given by the road map (until it finishes the given order).
(See the two example towns for how exactly this works.)
- Before version 0.7.1: for every entrance you'll have to create a series of empty actors that will be the waypoints to guide the car through the intersection; then you'll have to assign the corresponding actors to every Path.
- After version 0.7.1: every IntersectionEntrance has an array called routes. Adding an element to it creates an editable spline in the world with its first point on the IntersectionEntrance (you might have to select another object before you can see it). This spline defines the possible routes any car will take when entering the intersection (as the empty actors did before); you can configure these routes as you would edit any Unreal spline. Each route creates an element in the "Probabilities" field below; every number in this array defines the chances of any vehicle taking the corresponding route.
- To change the speed of the cars, use the SpeedLimiters. They are straightforward to use. (Make sure you limit the speed for the corners, otherwise the cars will try, and fail, to take them at full speed.)
- Traffic lights need to be scripted to avoid traffic accidents.
Every street at a crossing should have its own turn at green without the other streets having green.
- Then you can populate the world with landscape and buildings.
## Blueprint Assets
These are the specific blueprint assets created to help build the environment.
## MultipleFloorBuilding:
The purpose of this blueprint is to make repeating and varying tall buildings a bit easier. Provided a Base, a MiddleFloor and a roof; this blueprint repeats the middle floor to the desired number of stores and tops it whith the last floor given some conditions:
The purpose of this blueprint is to make repeating and varying tall buildings a
bit easier. Provided a Base, a MiddleFloor and a Roof, this blueprint repeats
the middle floor up to the desired number of stories and tops it with the roof,
given some conditions:
- All model pivots should be in the bottom center of the specific mesh.
- All models must start and end exactly where the repetition happens.
@ -14,53 +54,78 @@ This blueprint is controlled by this 6 specific Parameters:
- Floor: The mesh to be repeated along the building.
- Roof: Final mesh to top the building.
- FloorNumber: Number of stories of the building.
- FloorHeightOffset: Adjust The placement of every floor verticaly.
- RoofOffset: Adjust the placement of the roof verticaly.
- FloorHeightOffset: Adjust The placement of every floor vertically.
- RoofOffset: Adjust the placement of the roof vertically.
All of This parameters can be modified once this blueprint is placed in the world.
All of these parameters can be modified once this blueprint is placed in the
world.
## SplinemeshRepeater:
!!! Bug
See [#35 SplineMeshRepeater loses its collider mesh](https://github.com/carla-simulator/carla/issues/35)
### Standard use:
SplineMeshRepeater "Content/Blueprints/SplineMeshRepeater" is a tool included in the Carla Project to help building urban environments; It repeats and aligns a specific choosen mesh along a [Spline](https://docs.unrealengine.com/latest/INT/Engine/BlueprintSplines/Reference/SplineEditorTool/index.html) (Unreal Component). Its principal function is to build Tipicaly tiled and repetitive structures as Walls, Roads, Bridges, Fences... Once the actor is placed into the world the spline can be modified so the object gets the desired form. Each Point Defining the spline Generates a new tile so that as more points the Spline has, the more defined it will be, but also heavyer on the world. This actor is defined by the following parameters:
SplineMeshRepeater "Content/Blueprints/SplineMeshRepeater" is a tool included in
the Carla Project to help building urban environments; It repeats and aligns a
specific chosen mesh along a
[Spline](https://docs.unrealengine.com/latest/INT/Engine/BlueprintSplines/Reference/SplineEditorTool/index.html)
(Unreal Component). Its principal function is to build Typically tiled and
repetitive structures as Walls, Roads, Bridges, Fences... Once the actor is
placed into the world the spline can be modified so the object gets the desired
form. Each Point Defining the spline Generates a new tile so that as more points
the Spline has, the more defined it will be, but also heavier on the world. This
actor is defined by the following parameters:
- StaticMesh: The mesh to be repeated along the spline.
- ForWardAxis: Changes the mesh axis to be alingned with the spline.
- ForWardAxis: Changes the mesh axis to be aligned with the spline.
- Material: Overrides the mesh' default material.
- Colission Enabled: Chooses the tipe of colission to use.
- Collision Enabled: Chooses the type of collision to use.
- Gap distance: Places a gap between each repeated mesh, for repetitive non-continuous walls: bush chains, bollards...
(Last three variables are specific for some particular assets to be defined in the next point) A requisite to create assets compatibles with this componenis is that all the meshes have their pivot placed wherever the repetition starts in the lower point possible with the rest of the mesh pointing positive (Preferably by the X axis)
(The last three variables are specific to some particular assets, defined in
the next point.) A requisite for creating assets compatible with this component
is that all the meshes have their pivot placed wherever the repetition starts,
at the lowest point possible, with the rest of the mesh pointing positive
(preferably along the X axis).
### Specific Walls (Dynamic material)
In the project folder "Content/Static/Walls" are included some specific assets to be used with this SplineMeshRepeater with a series of special caracteristics. The uv space of this meshes and their materials are the same for all of them, making them excangeable. each material is composed of three different surfaces the last three parameters slightly modify the color of this surfaces:
In the project folder "Content/Static/Walls" are included some specific assets
to be used with this SplineMeshRepeater with a series of special
characteristics. The UV space of this meshes and their materials are the same
for all of them, making them exchangeable. each material is composed of three
different surfaces the last three parameters slightly modify the color of this
surfaces:
- MainMaterialColor: Change the main material of the Wall
- DetailsColor: Change the color of the details (if any)
- TopWallColor: Cambia el color de la cubierta del muro (if any)
- TopWallColor: Change the color of the wall cover (if any)
To add elements that profit from this functions exist in the (Carpeta) folder the GardenWallMask File that defines the uv space to place the materials: (Blue space: MainMaterial; green space: Details; red space TopWall).
Between the material masters is WallMaster which is going to be the master of the materials using this function. An instance of this material will be created and the correspondent textures will be added. This material includes the following parameters to be modified by the material to use:
- Normal Flattener: Slightly modifies the normal map values to exagerate it or flatten it.
- RoughnessCorrection: Para corregir El valor de rugosidad dado por la textura.
The rest of the parameters are the mask the textures and the color corrections that won't be modified in this instance but in the blueprint that will be launched into the world.
To add elements that profit from these functions, there is the GardenWallMask file that defines the UV space where the materials are placed (blue space: MainMaterial; green space: Details; red space: TopWall).
Among the material masters is WallMaster, which is going to be the master of
the materials using this function. An instance of this material will be created
and the corresponding textures will be added. This material includes the
following parameters to be modified by the material to use:
- Normal Flattener: Slightly modifies the normal map values to exaggerate them or flatten them.
- RoughnessCorrection: Changes the Roughness value given by the texture.
The rest of the parameters are the mask, the textures and the color corrections,
which won't be modified in this instance but in the blueprint that will be
launched into the world.
## Weather
This is the actor in charge of modifying all the lighting, environmental actors an anything that affects the impression of the climate. It runs automaticaly with the game when is not specified otherwise In the Config.Ini but has Its own actor to launch in editor mode to configure the climatic conditions. To fuly work It will need One of each of the folowing actors: SkySphere, Skylight, Postprocess Volume (Boundless) And Light Source to exist in the world.
This is the actor in charge of modifying all the lighting, environmental actors
and anything that affects the impression of the climate. It runs automatically
with the game when not specified otherwise in the config INI, but it has its own
actor that can be launched in editor mode to configure the climatic conditions.
To fully work it needs one of each of the following actors to exist in the
world: SkySphere, Skylight, PostProcess Volume (boundless) and Light Source.
- SunPolarAngle: polar angle of the sun, determines time of the day
- SunAzimuthAngle: adds to the location of the sun in the current level
@ -68,7 +133,7 @@ This is the actor in charge of modifying all the lighting, environmental actors
- SunDirectionalLightIntensity: Intensity of the sunlight
- SunDirectionalLightColor: Color of the sunlight
- SunIndirectLightIntensity: intensity of the bounces of the main light
- CloudOpacity: visivility of the cloud rendering on the skybox
- CloudOpacity: visibility of the cloud rendering on the skybox
- HorizontFalloff: determines the height of the gradient between the zenith and horizon color
- ZenithColor: Defines the color of the zenith.
- HorizonColor: Defines the color of the horizon.
@ -79,7 +144,7 @@ This is the actor in charge of modifying all the lighting, environmental actors
- Precipitation: Defines if any precipitation is active.
- PrecipitationType: the type of precipitation to active.
- PrecipitationAmount: the quantity of the chosen precipitation.
- PrecipitationAccumulation: the acumulation of the chosen precipitation.
- PrecipitationAccumulation: the accumulation of the chosen precipitation.
- bWind: defines if there is any wind.
- WindIntensity: defines the wind intensity.
- WindAngle: defines the wind direction.
@ -89,5 +154,7 @@ This is the actor in charge of modifying all the lighting, environmental actors
- CameraPostProcessParameters.AutoExposureMaxBrightness: defines the maximum brightness the autoexposure will count as right in the final image.
- CameraPostProcessParameters.AutoExposureBias: Darkens or brightens the final image towards a defined bias.
You can have as many different configurations saved in the proyect as you want and choose the configuration to aply while on the build, through the [settings file](carla_settings.md); or in the editor while building the level or testing.
You can have as many different configurations saved in the project as you want,
and choose which configuration to apply: while on the build, through the
[settings file](carla_settings.md); or in the editor, while building the level
or testing.


@ -0,0 +1,32 @@
<!--
Thanks for sending a pull request! Please make sure you click the link above to
view the contribution guidelines, then fill out the blanks below.
Checklist:
- [ ] Your branch is up-to-date with the `master` branch and tested with latest changes
- [ ] Extended the README / documentation, if necessary
- [ ] Code compiles correctly
- [ ] All tests passing
- [ ] `make check`
- [ ] `pylint --disable=R,C --rcfile=PythonClient/.pylintrc PythonClient/carla PythonClient/*.py`
- [ ] `cppcheck . -iBuild -i.pb.cc --enable=warning`
-->
#### Description
<!-- Please explain the changes you made here as detailed as possible. -->
Fixes # <!-- If fixes an issue, please add here the issue number. -->
#### Where has this been tested?
* **Platform(s):** ...
* **Python version(s):** ...
* **Unreal Engine version(s):** ...
#### Possible Drawbacks
<!-- What are the possible side-effects or negative impacts of the code change? -->


@ -1,28 +0,0 @@
Troubleshooting
===============
#### Editor hangs after hitting Play
This is most probably happening because CARLA is started in server mode. Check
in your CarlaSettings.ini file ("./Unreal/CarlaUE4/Config/CarlaSettings.ini")
and set
```ini
[CARLA/Server]
UseNetworking=false
```
#### Very low FPS in editor when not in focus
UE4 Editor goes to a low performance mode when out of focus. It can be disabled
in the editor preferences. Go to "Edit->Editor Preferences->Performance" and
disable the "Use Less CPU When in Background" option.
#### Fatal error: file '/usr/include/linux/version.h' has been modified since the precompiled header
This happens from time to time due to Linux updates. It is possible to force a
rebuild of all the project files with
$ cd Unreal/CarlaUE4/
$ make CarlaUE4Editor ARGS=-clean
$ make CarlaUE4Editor


@ -1,7 +1,7 @@
MIT License
Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
Barcelona (UAB), and the INTEL Visual Computing Lab.
Barcelona (UAB).
Permission is hereby granted, free of charge, to any person obtaining a copy
of this software and associated documentation files (the "Software"), to deal

176
Package.sh Executable file

@ -0,0 +1,176 @@
#! /bin/bash
################################################################################
# Packages a CARLA build.
################################################################################
set -e
DOC_STRING="Makes a packaged version of CARLA for distribution.
Please make sure to run Rebuild.sh before!"
USAGE_STRING="Usage: $0 [-h|--help] [--no-packaging] [--no-zip] [--clean-intermediate]"
# ==============================================================================
# -- Parse arguments -----------------------------------------------------------
# ==============================================================================
DO_PACKAGE=true
DO_COPY_FILES=true
DO_TARBALL=true
DO_CLEAN_INTERMEDIATE=false
OPTS=`getopt -o h --long help,no-packaging,no-zip,clean-intermediate -n 'parse-options' -- "$@"`
if [ $? != 0 ] ; then echo "$USAGE_STRING" ; exit 2 ; fi
eval set -- "$OPTS"
while true; do
case "$1" in
--no-packaging )
DO_PACKAGE=false
shift ;;
--no-zip )
DO_TARBALL=false
shift ;;
--clean-intermediate )
DO_CLEAN_INTERMEDIATE=true
shift ;;
-h | --help )
echo "$DOC_STRING"
echo "$USAGE_STRING"
exit 1
;;
* )
break ;;
esac
done
# ==============================================================================
# -- Set up environment --------------------------------------------------------
# ==============================================================================
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
pushd "$SCRIPT_DIR" >/dev/null
REPOSITORY_TAG=`git describe --tags --dirty --always`
echo "Packaging version '$REPOSITORY_TAG'."
UNREAL_PROJECT_FOLDER=${PWD}/Unreal/CarlaUE4
DIST_FOLDER=${PWD}/Dist
BUILD_FOLDER=${DIST_FOLDER}/${REPOSITORY_TAG}
function fatal_error {
echo -e "\033[0;31mERROR: $1\033[0m"
exit 1
}
function log {
echo -e "\033[0;33m$1\033[0m"
}
# ==============================================================================
# -- Package project -----------------------------------------------------------
# ==============================================================================
if $DO_PACKAGE ; then
pushd "$UNREAL_PROJECT_FOLDER" >/dev/null
log "Packaging the project..."
if [ ! -d "${UE4_ROOT}" ]; then
fatal_error "UE4_ROOT is not defined, or points to a non-existant directory, please set this environment variable."
fi
rm -Rf ${BUILD_FOLDER}
mkdir -p ${BUILD_FOLDER}
${UE4_ROOT}/Engine/Build/BatchFiles/RunUAT.sh BuildCookRun \
-project="${PWD}/CarlaUE4.uproject" \
-nocompileeditor -nop4 -cook -stage -archive -package \
-clientconfig=Development -ue4exe=UE4Editor \
-pak -prereqs -nodebuginfo \
-targetplatform=Linux -build -CrashReporter -utf8output \
-archivedirectory="${BUILD_FOLDER}"
popd >/dev/null
fi
if [[ ! -d ${BUILD_FOLDER}/LinuxNoEditor ]] ; then
fatal_error "Failed to package the project!"
fi
# ==============================================================================
# -- Copy files (Python server, README, etc) -----------------------------------
# ==============================================================================
if $DO_COPY_FILES ; then
DESTINATION=${BUILD_FOLDER}/LinuxNoEditor
log "Copying extra files..."
cp -v ./LICENSE ${DESTINATION}/LICENSE
cp -v ./CHANGELOG.md ${DESTINATION}/CHANGELOG
cp -v ./Docs/release_readme.md ${DESTINATION}/README
cp -v ./Docs/Example.CarlaSettings.ini ${DESTINATION}/Example.CarlaSettings.ini
rsync -vhr --delete --delete-excluded \
--exclude "__pycache__" \
--exclude "*.pyc" \
--exclude ".*" \
PythonClient/ ${DESTINATION}/PythonClient
echo
fi
# ==============================================================================
# -- Zip the project -----------------------------------------------------------
# ==============================================================================
if $DO_TARBALL ; then
DESTINATION=${DIST_FOLDER}/CARLA_${REPOSITORY_TAG}.tar.gz
SOURCE=${BUILD_FOLDER}/LinuxNoEditor
pushd "$SOURCE" >/dev/null
log "Packaging build..."
rm -f ./Manifest_NonUFSFiles_Linux.txt
rm -Rf ./CarlaUE4/Saved
rm -Rf ./Engine/Saved
tar -czvf ${DESTINATION} *
popd >/dev/null
fi
# ==============================================================================
# -- Remove intermediate files -------------------------------------------------
# ==============================================================================
if $DO_CLEAN_INTERMEDIATE ; then
log "Removing intermediate build..."
rm -Rf ${BUILD_FOLDER}
fi
# ==============================================================================
# -- ...and we are done --------------------------------------------------------
# ==============================================================================
echo ""
echo "****************"
echo "*** Success! ***"
echo "****************"
popd >/dev/null

2
PythonClient/.pep8 Normal file

@ -0,0 +1,2 @@
[pep8]
max-line-length = 120

2
PythonClient/MANIFEST.in Normal file

@ -0,0 +1,2 @@
include carla/planner/*.txt
include carla/planner/*.png


@ -0,0 +1,38 @@
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
# @author: german,felipecode
from __future__ import print_function

import abc

from carla.planner.planner import Planner


class Agent(object):
    def __init__(self, city_name):
        self.__metaclass__ = abc.ABCMeta
        self._planner = Planner(city_name)

    def get_distance(self, start_point, end_point):
        path_distance = self._planner.get_shortest_path_distance(
            [start_point.location.x, start_point.location.y, 22],
            [start_point.orientation.x, start_point.orientation.y, 22],
            [end_point.location.x, end_point.location.y, 22],
            [end_point.orientation.x, end_point.orientation.y, 22])
        # We calculate the timeout based on the distance.
        return path_distance

    @abc.abstractmethod
    def run_step(self, measurements, sensor_data, target):
        """
        Function to be redefined by an agent.
        :param measurements: the measurements (e.g. speed) of the vehicle
        :param sensor_data: the images from the sensors
        :param target: the target position to drive to
        :returns: a carla Control object with the steering/gas/brake for the agent
        """


@ -0,0 +1,377 @@
#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import csv
import datetime
import math
import os
import abc
import logging
from builtins import input as input_data
from carla.client import VehicleControl
def sldist(c1, c2):
return math.sqrt((c2[0] - c1[0])**2 + (c2[1] - c1[1])**2)
class Benchmark(object):
"""
The Benchmark class, controls the execution of the benchmark by an
Agent class.
The benchmark class must be inherited
"""
def __init__(
self,
city_name,
name_to_save,
continue_experiment=False,
save_images=False
):
self.__metaclass__ = abc.ABCMeta
self._city_name = city_name
self._base_name = name_to_save
self._dict_stats = {'exp_id': -1,
'rep': -1,
'weather': -1,
'start_point': -1,
'end_point': -1,
'result': -1,
'initial_distance': -1,
'final_distance': -1,
'final_time': -1,
'time_out': -1
}
self._dict_rewards = {'exp_id': -1,
'rep': -1,
'weather': -1,
'collision_gen': -1,
'collision_ped': -1,
'collision_car': -1,
'lane_intersect': -1,
'sidewalk_intersect': -1,
'pos_x': -1,
'pos_y': -1
}
self._experiments = self._build_experiments()
# Create the log files and get the names
self._suffix_name, self._full_name = self._create_log_record(name_to_save, self._experiments)
# Get the line for the experiment to be continued
self._line_on_file = self._continue_experiment(continue_experiment)
self._save_images = save_images
self._image_filename_format = os.path.join(
self._full_name, '_images/episode_{:s}/{:s}/image_{:0>5d}.jpg')
def run_navigation_episode(
self,
agent,
carla,
time_out,
target,
episode_name):
measurements, sensor_data = carla.read_data()
carla.send_control(VehicleControl())
t0 = measurements.game_timestamp
t1 = t0
success = False
measurement_vec = []
frame = 0
distance = 10000
while(t1 - t0) < (time_out * 1000) and not success:
measurements, sensor_data = carla.read_data()
control = agent.run_step(measurements, sensor_data, target)
logging.info("Controller is Inputting:")
logging.info('Steer = %f Throttle = %f Brake = %f ',
control.steer, control.throttle, control.brake)
carla.send_control(control)
# measure distance to target
if self._save_images:
for name, image in sensor_data.items():
image.save_to_disk(self._image_filename_format.format(
episode_name, name, frame))
curr_x = measurements.player_measurements.transform.location.x
curr_y = measurements.player_measurements.transform.location.y
measurement_vec.append(measurements.player_measurements)
t1 = measurements.game_timestamp
distance = sldist([curr_x, curr_y],
[target.location.x, target.location.y])
logging.info('Status:')
logging.info(
'[d=%f] c_x = %f, c_y = %f ---> t_x = %f, t_y = %f',
float(distance), curr_x, curr_y, target.location.x,
target.location.y)
if distance < 200.0:
success = True
frame += 1
if success:
return 1, measurement_vec, float(t1 - t0) / 1000.0, distance
return 0, measurement_vec, time_out, distance
def benchmark_agent(self, agent, carla):
if self._line_on_file == 0:
# The fixed name considering all the experiments being run
with open(os.path.join(self._full_name,
self._suffix_name), 'w') as ofd:
w = csv.DictWriter(ofd, self._dict_stats.keys())
w.writeheader()
with open(os.path.join(self._full_name,
'details_' + self._suffix_name), 'w') as rfd:
rw = csv.DictWriter(rfd, self._dict_rewards.keys())
rw.writeheader()
start_task = 0
start_pose = 0
else:
(start_task, start_pose) = self._get_pose_and_task(self._line_on_file)
logging.info(' START ')
for experiment in self._experiments[start_task:]:
positions = carla.load_settings(
experiment.conditions).player_start_spots
for pose in experiment.poses[start_pose:]:
for rep in range(experiment.repetitions):
start_point = pose[0]
end_point = pose[1]
carla.start_episode(start_point)
logging.info('======== !!!! ==========')
logging.info(' Start Position %d End Position %d ',
start_point, end_point)
path_distance = agent.get_distance(
positions[start_point], positions[end_point])
euclidean_distance = \
sldist([positions[start_point].location.x, positions[start_point].location.y],
[positions[end_point].location.x, positions[end_point].location.y])
time_out = self._calculate_time_out(path_distance)
# running the agent
(result, reward_vec, final_time, remaining_distance) = \
self.run_navigation_episode(
agent, carla, time_out, positions[end_point],
str(experiment.Conditions.WeatherId) + '_'
+ str(experiment.id) + '_' + str(start_point)
+ '.' + str(end_point))
# compute stats for the experiment
self._write_summary_results(
experiment, pose, rep, euclidean_distance,
remaining_distance, final_time, time_out, result)
self._write_details_results(experiment, rep, reward_vec)
if result > 0:
logging.info('+++++ Target achieved in %f seconds! +++++',
final_time)
else:
logging.info('----- Timeout! -----')
return self.get_all_statistics()
def _write_summary_results(self, experiment, pose, rep,
path_distance, remaining_distance,
final_time, time_out, result):
self._dict_stats['exp_id'] = experiment.id
self._dict_stats['rep'] = rep
self._dict_stats['weather'] = experiment.Conditions.WeatherId
self._dict_stats['start_point'] = pose[0]
self._dict_stats['end_point'] = pose[1]
self._dict_stats['result'] = result
self._dict_stats['initial_distance'] = path_distance
self._dict_stats['final_distance'] = remaining_distance
self._dict_stats['final_time'] = final_time
self._dict_stats['time_out'] = time_out
with open(os.path.join(self._full_name, self._suffix_name), 'a+') as ofd:
w = csv.DictWriter(ofd, self._dict_stats.keys())
w.writerow(self._dict_stats)
def _write_details_results(self, experiment, rep, reward_vec):
with open(os.path.join(self._full_name,
'details_' + self._suffix_name), 'a+') as rfd:
rw = csv.DictWriter(rfd, self._dict_rewards.keys())
for i in range(len(reward_vec)):
self._dict_rewards['exp_id'] = experiment.id
self._dict_rewards['rep'] = rep
self._dict_rewards['weather'] = experiment.Conditions.WeatherId
self._dict_rewards['collision_gen'] = reward_vec[
i].collision_other
self._dict_rewards['collision_ped'] = reward_vec[
i].collision_pedestrians
self._dict_rewards['collision_car'] = reward_vec[
i].collision_vehicles
self._dict_rewards['lane_intersect'] = reward_vec[
i].intersection_otherlane
self._dict_rewards['sidewalk_intersect'] = reward_vec[
i].intersection_offroad
self._dict_rewards['pos_x'] = reward_vec[
i].transform.location.x
self._dict_rewards['pos_y'] = reward_vec[
i].transform.location.y
rw.writerow(self._dict_rewards)
def _create_log_record(self, base_name, experiments):
"""
This function creates the log files for the benchmark.
"""
suffix_name = self._get_experiments_names(experiments)
full_name = os.path.join('_benchmarks_results',
base_name + '_'
+ self._get_details() + '/')
folder = os.path.dirname(full_name)
if not os.path.isdir(folder):
os.makedirs(folder)
# Make a date file: it shows when the benchmark was run and how
# many times the experiments were repeated
now = datetime.datetime.now()
open(os.path.join(full_name, now.strftime("%Y%m%d%H%M")), 'w').close()
return suffix_name, full_name
def _continue_experiment(self, continue_experiment):
if self._experiment_exist():
if continue_experiment:
line_on_file = self._get_last_position()
else:
# Ask question, to avoid mistaken override situations
answer = input_data("An experiment with this name was already found"
+ ". Do you want to continue it (y/n)? \n"
)
if answer.lower() in ('y', 'yes'):
line_on_file = self._get_last_position()
else:
line_on_file = 0
else:
line_on_file = 0
return line_on_file
def _experiment_exist(self):
# The benchmark is considered started once its summary file exists
return os.path.isfile(os.path.join(self._full_name, self._suffix_name))
def _get_last_position(self):
with open(os.path.join(self._full_name, self._suffix_name)) as f:
return sum(1 for _ in f)
# To be redefined in subclasses: how to calculate the timeout for an episode
@abc.abstractmethod
def _calculate_time_out(self, distance):
pass
@abc.abstractmethod
def _get_details(self):
"""
Get the details of this benchmark
:return: a string with the name and town of the subclass
"""
@abc.abstractmethod
def _build_experiments(self):
"""
Returns a set of experiments to be evaluated
Must be redefined in an inherited class.
"""
@abc.abstractmethod
def get_all_statistics(self):
"""
Get the statistics of the evaluated experiments
:return: a dictionary of summary metrics
"""
@abc.abstractmethod
def _get_pose_and_task(self, line_on_file):
"""
Parse the line number into the corresponding pose and task
"""
@abc.abstractmethod
def plot_summary_train(self):
"""
Returns the summary for the train weather/task episodes
"""
@abc.abstractmethod
def plot_summary_test(self):
"""
Returns the summary for the test weather/task episodes
"""
@staticmethod
def _get_experiments_names(experiments):
name_cat = 'w'
for experiment in experiments:
name_cat += str(experiment.Conditions.WeatherId) + '.'
return name_cat

View File

@ -0,0 +1,203 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
# CORL experiment set.
from __future__ import print_function
import os
from .benchmark import Benchmark
from .experiment import Experiment
from carla.sensor import Camera
from carla.settings import CarlaSettings
from .metrics import compute_summary
class CoRL2017(Benchmark):
def get_all_statistics(self):
summary = compute_summary(os.path.join(
self._full_name, self._suffix_name), [3])
return summary
def plot_summary_train(self):
self._plot_summary([1.0, 3.0, 6.0, 8.0])
def plot_summary_test(self):
self._plot_summary([4.0, 14.0])
def _plot_summary(self, weathers):
"""
Plot the summary of the testing for the selected set of weathers.
The test weathers are [4, 14]
"""
metrics_summary = compute_summary(os.path.join(
self._full_name, self._suffix_name), [3])
for metric, values in metrics_summary.items():
print('Metric : ', metric)
for weather, tasks in values.items():
if weather in set(weathers):
print(' Weather: ', weather)
count = 0
for t in tasks:
print(' Task ', count, ' -> ', t)
count += 1
print(' AvG -> ', float(sum(tasks)) / float(len(tasks)))
def _calculate_time_out(self, distance):
"""
Function to return the timeout (in seconds) that is calculated based on the distance to the goal.
This is the same timeout as used in the CoRL paper.
"""
return ((distance / 100000.0) / 10.0) * 3600.0 + 10.0
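# A quick sanity check of this formula (sketch; distances are assumed to
# be in centimetres, consistent with the rest of this client): a 1 km
# route has distance == 100000.0, so the timeout is
#     ((100000.0 / 100000.0) / 10.0) * 3600.0 + 10.0 == 370.0
# seconds, i.e. the time to cover the route at 10 km/h plus a 10 s buffer.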
def _poses_town01(self):
"""
Each list of poses is a new task. All four tasks are defined here
"""
def _poses_straight():
return [[36, 40], [39, 35], [110, 114], [7, 3], [0, 4],
[68, 50], [61, 59], [47, 64], [147, 90], [33, 87],
[26, 19], [80, 76], [45, 49], [55, 44], [29, 107],
[95, 104], [84, 34], [53, 67], [22, 17], [91, 148],
[20, 107], [78, 70], [95, 102], [68, 44], [45, 69]]
def _poses_one_curve():
return [[138, 17], [47, 16], [26, 9], [42, 49], [140, 124],
[85, 98], [65, 133], [137, 51], [76, 66], [46, 39],
[40, 60], [0, 29], [4, 129], [121, 140], [2, 129],
[78, 44], [68, 85], [41, 102], [95, 70], [68, 129],
[84, 69], [47, 79], [110, 15], [130, 17], [0, 17]]
def _poses_navigation():
return [[105, 29], [27, 130], [102, 87], [132, 27], [24, 44],
[96, 26], [34, 67], [28, 1], [140, 134], [105, 9],
[148, 129], [65, 18], [21, 16], [147, 97], [42, 51],
[30, 41], [18, 107], [69, 45], [102, 95], [18, 145],
[111, 64], [79, 45], [84, 69], [73, 31], [37, 81]]
return [_poses_straight(),
_poses_one_curve(),
_poses_navigation(),
_poses_navigation()]
def _poses_town02(self):
def _poses_straight():
return [[38, 34], [4, 2], [12, 10], [62, 55], [43, 47],
[64, 66], [78, 76], [59, 57], [61, 18], [35, 39],
[12, 8], [0, 18], [75, 68], [54, 60], [45, 49],
[46, 42], [53, 46], [80, 29], [65, 63], [0, 81],
[54, 63], [51, 42], [16, 19], [17, 26], [77, 68]]
def _poses_one_curve():
return [[37, 76], [8, 24], [60, 69], [38, 10], [21, 1],
[58, 71], [74, 32], [44, 0], [71, 16], [14, 24],
[34, 11], [43, 14], [75, 16], [80, 21], [3, 23],
[75, 59], [50, 47], [11, 19], [77, 34], [79, 25],
[40, 63], [58, 76], [79, 55], [16, 61], [27, 11]]
def _poses_navigation():
return [[19, 66], [79, 14], [19, 57], [23, 1],
[53, 76], [42, 13], [31, 71], [33, 5],
[54, 30], [10, 61], [66, 3], [27, 12],
[79, 19], [2, 29], [16, 14], [5, 57],
[70, 73], [46, 67], [57, 50], [61, 49], [21, 12],
[51, 81], [77, 68], [56, 65], [43, 54]]
return [_poses_straight(),
_poses_one_curve(),
_poses_navigation(),
_poses_navigation()
]
def _build_experiments(self):
"""
Creates the whole set of experiment objects.
The experiments created depend on the selected town.
"""
# We set the camera
# This single RGB camera is used on every experiment
camera = Camera('CameraRGB')
camera.set(CameraFOV=100)
camera.set_image_size(800, 600)
camera.set_position(200, 0, 140)
camera.set_rotation(-15.0, 0, 0)
weathers = [1, 3, 6, 8, 4, 14]
if self._city_name == 'Town01':
poses_tasks = self._poses_town01()
vehicles_tasks = [0, 0, 0, 20]
pedestrians_tasks = [0, 0, 0, 50]
else:
poses_tasks = self._poses_town02()
vehicles_tasks = [0, 0, 0, 15]
pedestrians_tasks = [0, 0, 0, 50]
experiments_vector = []
for weather in weathers:
for iteration in range(len(poses_tasks)):
poses = poses_tasks[iteration]
vehicles = vehicles_tasks[iteration]
pedestrians = pedestrians_tasks[iteration]
conditions = CarlaSettings()
conditions.set(
SynchronousMode=True,
SendNonPlayerAgentsInfo=True,
NumberOfVehicles=vehicles,
NumberOfPedestrians=pedestrians,
WeatherId=weather,
SeedVehicles=123456789,
SeedPedestrians=123456789
)
# Add all the cameras that were set for these experiments
conditions.add_sensor(camera)
experiment = Experiment()
experiment.set(
Conditions=conditions,
Poses=poses,
Id=iteration,
Repetitions=1
)
experiments_vector.append(experiment)
return experiments_vector
def _get_details(self):
# Returns an identifying string for the experiment, used to name the output files
return 'corl2017_' + self._city_name
def _get_pose_and_task(self, line_on_file):
"""
Returns the pose and task of this experiment, based on the line it
occupies in the log file.
"""
# We assume that the number of poses is constant
return int(line_on_file / len(self._experiments)), line_on_file % 25
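With the pieces above in place, a full benchmark run is a short script. The sketch below is illustrative only: the host, port and the `agent` variable are assumptions for the example, and CoRL2017 is the suite defined in this file:

from carla.client import make_carla_client

# Hypothetical usage sketch; `agent` is any Agent subclass instance.
with make_carla_client('localhost', 2000) as client:
    benchmark = CoRL2017(city_name='Town01', name_to_save='my_test')
    results = benchmark.benchmark_agent(agent, client)
    benchmark.plot_summary_test()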

View File

@ -0,0 +1,38 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
from carla.settings import CarlaSettings
class Experiment(object):
def __init__(self):
self.Id = ''
self.Conditions = CarlaSettings()
self.Poses = [[]]
self.Repetitions = 1
def set(self, **kwargs):
for key, value in kwargs.items():
if not hasattr(self, key):
raise ValueError('Experiment: no key named %r' % key)
setattr(self, key, value)
@property
def id(self):
return self.Id
@property
def conditions(self):
return self.Conditions
@property
def poses(self):
return self.Poses
@property
def repetitions(self):
return self.Repetitions

View File

@ -0,0 +1,205 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import numpy as np
import math
import os
sldist = lambda c1, c2: math.sqrt((c2[0] - c1[0])**2 + (c2[1] - c1[1])**2)
flatten = lambda l: [item for sublist in l for item in sublist]
def get_colisions(selected_matrix, header):
count_gen = 0
count_ped = 0
count_car = 0
i = 1
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_gen')]
- selected_matrix[(i-10), header.index('collision_gen')]) > 40000:
count_gen += 1
i += 20
i += 1
i = 1
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_car')]
- selected_matrix[(i-10), header.index('collision_car')]) > 40000:
count_car += 1
i += 30
i += 1
i = 1
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_ped')]
- selected_matrix[i-5, header.index('collision_ped')]) > 30000:
count_ped += 1
i += 100
i += 1
return count_gen, count_car, count_ped
def get_distance_traveled(selected_matrix, header):
prev_x = selected_matrix[0, header.index('pos_x')]
prev_y = selected_matrix[0, header.index('pos_y')]
i = 1
accumulated_distance = 0
while i < selected_matrix.shape[0]:
x = selected_matrix[i, header.index('pos_x')]
y = selected_matrix[i, header.index('pos_y')]
# Here we define a maximum distance per tick, in this case 8 meters or 288 km/h
if sldist((x, y), (prev_x, prev_y)) < 800:
accumulated_distance += sldist((x, y), (prev_x, prev_y))
prev_x = x
prev_y = y
i += 1
return float(accumulated_distance) / float(100 * 1000)  # centimetres to kilometres
def get_out_of_road_lane(selected_matrix, header):
count_road = 0
count_lane = 0
i = 0
while i < selected_matrix.shape[0]:
# print selected_matrix[i,6]
if (selected_matrix[i, header.index('sidewalk_intersect')]
- selected_matrix[(i-10), header.index('sidewalk_intersect')]) > 0.3:
count_road += 1
i += 20
if i >= selected_matrix.shape[0]:
break
if (selected_matrix[i, header.index('lane_intersect')]
- selected_matrix[(i-10), header.index('lane_intersect')]) > 0.4:
count_lane += 1
i += 20
i += 1
return count_lane, count_road
def compute_summary(filename, dynamic_episodes):
# Separate the PATH and the basename
path = os.path.dirname(filename)
base_name = os.path.basename(filename)
f = open(filename, "rb")
header = f.readline()
header = header.split(',')
header[-1] = header[-1][:-2]
f.close()
f = open(os.path.join(path, 'details_' + base_name), "rb")
header_details = f.readline()
header_details = header_details.split(',')
header_details[-1] = header_details[-1][:-2]
f.close()
data_matrix = np.loadtxt(open(filename, "rb"), delimiter=",", skiprows=1)
# Corner Case: The presented test just had one episode
if data_matrix.ndim == 1:
data_matrix = np.expand_dims(data_matrix, axis=0)
tasks = np.unique(data_matrix[:, header.index('exp_id')])
all_weathers = np.unique(data_matrix[:, header.index('weather')])
reward_matrix = np.loadtxt(open(os.path.join(
path, 'details_' + base_name), "rb"), delimiter=",", skiprows=1)
metrics_dictionary = {'average_completion': {w: [0.0]*len(tasks) for w in all_weathers},
'intersection_offroad': {w: [0.0]*len(tasks) for w in all_weathers},
'intersection_otherlane': {w: [0.0]*len(tasks) for w in all_weathers},
'collision_pedestrians': {w: [0.0]*len(tasks) for w in all_weathers},
'collision_vehicles': {w: [0.0]*len(tasks) for w in all_weathers},
'collision_other': {w: [0.0]*len(tasks) for w in all_weathers},
'average_fully_completed': {w: [0.0]*len(tasks) for w in all_weathers},
'average_speed': {w: [0.0]*len(tasks) for w in all_weathers},
'driven_kilometers': {w: [0.0]*len(tasks) for w in all_weathers}
}
for t in tasks:
task_data_matrix = data_matrix[
data_matrix[:, header.index('exp_id')] == t]
weathers = np.unique(task_data_matrix[:, header.index('weather')])
for w in weathers:
t = int(t)
task_data_matrix = data_matrix[np.logical_and(data_matrix[:, header.index(
'exp_id')] == t, data_matrix[:, header.index('weather')] == w)]
task_reward_matrix = reward_matrix[np.logical_and(reward_matrix[:, header_details.index(
'exp_id')] == float(t), reward_matrix[:, header_details.index('weather')] == float(w))]
km_run = get_distance_traveled(
task_reward_matrix, header_details)
metrics_dictionary['average_fully_completed'][w][t] = sum(
task_data_matrix[:, header.index('result')])/task_data_matrix.shape[0]
metrics_dictionary['average_completion'][w][t] = sum(
(task_data_matrix[:, header.index('initial_distance')]
- task_data_matrix[:, header.index('final_distance')])
/ task_data_matrix[:, header.index('initial_distance')]) \
/ len(task_data_matrix[:, header.index('final_distance')])
metrics_dictionary['driven_kilometers'][w][t] = km_run
metrics_dictionary['average_speed'][w][t] = km_run / \
((sum(task_data_matrix[:, header.index('final_time')])) / 3600.0)
if list(tasks).index(t) in set(dynamic_episodes):
lane_road = get_out_of_road_lane(
task_reward_matrix, header_details)
colisions = get_colisions(task_reward_matrix, header_details)
# get_out_of_road_lane returns (count_lane, count_road)
metrics_dictionary['intersection_offroad'][
w][t] = lane_road[1] / km_run
metrics_dictionary['intersection_otherlane'][
w][t] = lane_road[0] / km_run
metrics_dictionary['collision_pedestrians'][
w][t] = colisions[2]/km_run
metrics_dictionary['collision_vehicles'][
w][t] = colisions[1]/km_run
metrics_dictionary['collision_other'][
w][t] = colisions[0]/km_run
return metrics_dictionary
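As a usage sketch (the CSV path below is hypothetical), the summary of a finished run can be recomputed offline; the second argument lists which task indices are dynamic, i.e. contain other vehicles and pedestrians:

# Hypothetical path; [3] marks the dynamic-navigation task, as in CoRL2017.
summary = compute_summary('path/to/benchmark_summary.csv', [3])
for metric, per_weather in summary.items():
    print(metric, per_weather)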

View File

@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
@ -171,20 +171,28 @@ class CarlaClient(object):
@staticmethod
def _iterate_sensor_data(raw_data):
# At this point the only sensors available are images, the raw_data
# consists of images only.
image_types = ['None', 'SceneFinal', 'Depth', 'SemanticSegmentation']
image_types = ['None', 'SceneFinal', 'Depth', 'SemanticSegmentation', 'Lidar']
gettype = lambda id: image_types[id] if len(image_types) > id else 'Unknown'
getint = lambda index: struct.unpack('<L', raw_data[index*4:index*4+4])[0]
getfloat = lambda index: struct.unpack('<f', raw_data[index*4:index*4+4])[0]
getdouble = lambda index: struct.unpack('<d', raw_data[index*4:index*4+8])[0]
total_size = len(raw_data) // 4
index = 0
while index < total_size:
width = getint(index)
height = getint(index + 1)
image_type = gettype(getint(index + 2))
fov = getfloat(index + 3)
begin = index + 4
end = begin + width * height
index = end
yield sensor.Image(width, height, image_type, fov, raw_data[begin*4:end*4])
sensor_type = gettype(getint(index + 2))
if sensor_type == 'Lidar':
horizontal_angle = getdouble(index)
channels_count = getint(index + 3)
lm = sensor.LidarMeasurement(
horizontal_angle, channels_count,
sensor_type, raw_data[index*4:])
index += lm.size_in_bytes
yield lm
else:
width = getint(index)
height = getint(index + 1)
fov = getfloat(index + 3)
begin = index + 4
end = begin + width * height
index = end
yield sensor.Image(width, height, sensor_type, fov, raw_data[begin*4:end*4])

View File

@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.

Binary file not shown (added, 27 KiB)

Binary file not shown (added, 10 KiB)

View File

@ -0,0 +1,156 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import heapq
class Cell(object):
def __init__(self, x, y, reachable):
"""Initialize new cell.
@param reachable is cell reachable? not a wall?
@param x cell x coordinate
@param y cell y coordinate
@param g cost to move from the starting cell to this cell.
@param h estimation of the cost to move from this cell
to the ending cell.
@param f f = g + h
"""
self.reachable = reachable
self.x = x
self.y = y
self.parent = None
self.g = 0
self.h = 0
self.f = 0
def __lt__(self, other):
return self.g < other.g
class AStar(object):
def __init__(self):
# open list
self.opened = []
heapq.heapify(self.opened)
# visited cells list
self.closed = set()
# grid cells
self.cells = []
self.grid_height = None
self.grid_width = None
self.start = None
self.end = None
def init_grid(self, width, height, walls, start, end):
"""Prepare grid cells, walls.
@param width grid's width.
@param height grid's height.
@param walls list of wall x,y tuples.
@param start grid starting point x,y tuple.
@param end grid ending point x,y tuple.
"""
self.grid_height = height
self.grid_width = width
for x in range(self.grid_width):
for y in range(self.grid_height):
if (x, y) in walls:
reachable = False
else:
reachable = True
self.cells.append(Cell(x, y, reachable))
self.start = self.get_cell(*start)
self.end = self.get_cell(*end)
def get_heuristic(self, cell):
"""Compute the heuristic value H for a cell.
Distance between this cell and the ending cell, multiplied by 10.
@returns heuristic value H
"""
return 10 * (abs(cell.x - self.end.x) + abs(cell.y - self.end.y))
def get_cell(self, x, y):
"""Returns a cell from the cells list.
@param x cell x coordinate
@param y cell y coordinate
@returns cell
"""
return self.cells[x * self.grid_height + y]
def get_adjacent_cells(self, cell):
"""Returns adjacent cells to a cell.
Clockwise starting from the one on the right.
@param cell get adjacent cells for this cell
@returns adjacent cells list.
"""
cells = []
if cell.x < self.grid_width - 1:
cells.append(self.get_cell(cell.x + 1, cell.y))
if cell.y > 0:
cells.append(self.get_cell(cell.x, cell.y - 1))
if cell.x > 0:
cells.append(self.get_cell(cell.x - 1, cell.y))
if cell.y < self.grid_height - 1:
cells.append(self.get_cell(cell.x, cell.y + 1))
return cells
def get_path(self):
cell = self.end
path = [(cell.x, cell.y)]
while cell.parent is not self.start:
cell = cell.parent
path.append((cell.x, cell.y))
path.append((self.start.x, self.start.y))
path.reverse()
return path
def update_cell(self, adj, cell):
"""Update adjacent cell.
@param adj adjacent cell to current cell
@param cell current cell being processed
"""
adj.g = cell.g + 10
adj.h = self.get_heuristic(adj)
adj.parent = cell
adj.f = adj.h + adj.g
def solve(self):
"""Solve maze, find path to ending cell.
@returns path or None if not found.
"""
# add starting cell to open heap queue
heapq.heappush(self.opened, (self.start.f, self.start))
while len(self.opened):
# pop cell from heap queue
_, cell = heapq.heappop(self.opened)
# add cell to closed list so we don't process it twice
self.closed.add(cell)
# if ending cell, return found path
if cell is self.end:
return self.get_path()
# get adjacent cells for cell
adj_cells = self.get_adjacent_cells(cell)
for adj_cell in adj_cells:
if adj_cell.reachable and adj_cell not in self.closed:
if (adj_cell.f, adj_cell) in self.opened:
# if adj cell in open list, check if current path is
# better than the one previously found
# for this adj cell.
if adj_cell.g > cell.g + 10:
self.update_cell(adj_cell, cell)
else:
self.update_cell(adj_cell, cell)
# add adj cell to open list
heapq.heappush(self.opened, (adj_cell.f, adj_cell))
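Since the solver is self-contained, a small grid illustrates the API (the values below are arbitrary):

# Arbitrary 6x6 example: walls form a short barrier between the corners.
a_star = AStar()
walls = ((1, 1), (1, 2), (1, 3), (2, 3), (3, 3))
a_star.init_grid(6, 6, walls, (0, 0), (5, 5))
path = a_star.solve()
print(path)  # a list of (x, y) tuples from (0, 0) to (5, 5), or None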

View File

@ -0,0 +1,136 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
from carla.planner.graph import sldist
from carla.planner.astar import AStar
from carla.planner.map import CarlaMap
class CityTrack(object):
def __init__(self, city_name):
self._node_density = 50.0
self._pixel_density = 16.43
self._map = CarlaMap(city_name, self._pixel_density, self._node_density)
self._astar = AStar()
# Refers to the start position of the previous route computation
self._previous_node = []
# The current computed route
self._route = None
def project_node(self, position):
"""
Project a position onto the closest node of the city road graph
"""
node = self._map.convert_to_node(position)
# Cast to an integer node, following the map conventions
node = tuple([int(x) for x in node])
# Set to zero if it is less than zero.
node = (max(0, node[0]), max(0, node[1]))
node = (min(self._map.get_graph_resolution()[0] - 1, node[0]),
min(self._map.get_graph_resolution()[1] - 1, node[1]))
node = self._map.search_on_grid(node)
return node
def get_intersection_nodes(self):
return self._map.get_intersection_nodes()
def get_pixel_density(self):
return self._pixel_density
def get_node_density(self):
return self._node_density
def is_at_goal(self, source, target):
return source == target
def is_at_new_node(self, current_node):
return current_node != self._previous_node
def is_away_from_intersection(self, current_node):
return self._closest_intersection_position(current_node) > 1
def is_far_away_from_route_intersection(self, current_node):
# Check for the empty case
if self._route is None:
raise RuntimeError('Impossible to find route.'
+ ' The current planner is limited;'
+ ' try to select start points away from intersections.')
return self._closest_intersection_route_position(current_node,
self._route) > 4
def compute_route(self, node_source, source_ori, node_target, target_ori):
self._previous_node = node_source
a_star = AStar()
a_star.init_grid(self._map.get_graph_resolution()[0],
self._map.get_graph_resolution()[1],
self._map.get_walls_directed(node_source, source_ori,
node_target, target_ori), node_source,
node_target)
route = a_star.solve()
# Just a corner case: if no directed route exists, retry with
# undirected walls instead of failing outright
if route is None:
a_star = AStar()
a_star.init_grid(self._map.get_graph_resolution()[0],
self._map.get_graph_resolution()[1], self._map.get_walls(),
node_source, node_target)
route = a_star.solve()
self._route = route
return route
def get_distance_closest_node_route(self, pos, route):
distance = []
for node_iter in route:
if node_iter in self._map.get_intersection_nodes():
distance.append(sldist(node_iter, pos))
if not distance:
return sldist(route[-1], pos)
return sorted(distance)[0]
def _closest_intersection_position(self, current_node):
distance_vector = []
for node_iterator in self._map.get_intersection_nodes():
distance_vector.append(sldist(node_iterator, current_node))
return sorted(distance_vector)[0]
def _closest_intersection_route_position(self, current_node, route):
distance_vector = []
# Note: the route argument is currently unused; the distance is taken
# to the closest intersection node anywhere on the map
for node_iterator in self._map.get_intersection_nodes():
distance_vector.append(sldist(node_iterator, current_node))
return sorted(distance_vector)[0]

View File

@ -0,0 +1,166 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import math
import numpy as np
from carla.planner.graph import string_to_floats
# Constant definition enumeration
PIXEL = 0
WORLD = 1
NODE = 2
class Converter(object):
def __init__(self, city_file, pixel_density, node_density):
self._node_density = node_density
self._pixel_density = pixel_density
with open(city_file, 'r') as f:
# The offset of the world from the zero coordinates (the
# coordinate we consider zero)
self._worldoffset = string_to_floats(f.readline())
angles = string_to_floats(f.readline())
# The rotation between the world and map coordinates, if any
self._worldrotation = np.array([
[math.cos(math.radians(angles[2])), -math.sin(math.radians(angles[2])), 0.0],
[math.sin(math.radians(angles[2])), math.cos(math.radians(angles[2])), 0.0],
[0.0, 0.0, 1.0]])
# Ignore for now, these are offsets for map coordinates and scale
# (not used).
_ = f.readline()
# The offset of the map zero coordinate.
self._mapoffset = string_to_floats(f.readline())
def convert_to_node(self, input_data):
"""
Receives a data type (can be pixel or world)
:param input_data: position in some coordinate
:return: A vector representing a node
"""
input_type = self._check_input_type(input_data)
if input_type == PIXEL:
return self._pixel_to_node(input_data)
elif input_type == WORLD:
return self._world_to_node(input_data)
else:
raise ValueError('Invalid node to be converted')
def convert_to_pixel(self, input_data):
"""
Receives a data type (can be node or world)
:param input_data: position in some coordinate
:return: A vector with pixel coordinates
"""
input_type = self._check_input_type(input_data)
if input_type == NODE:
return self._node_to_pixel(input_data)
elif input_type == WORLD:
return self._world_to_pixel(input_data)
else:
raise ValueError('Invalid node to be converted')
def convert_to_world(self, input_data):
"""
Receives a data type (can be pixel or node)
:param input_data: position in some coordinate
:return: vector with world coordinates
"""
input_type = self._check_input_type(input_data)
if input_type == NODE:
return self._node_to_world(input_data)
elif input_type == PIXEL:
return self._pixel_to_world(input_data)
else:
raise ValueError('Invalid node to be converted')
def _node_to_pixel(self, node):
"""
Conversion from node format (graph) to pixel (image)
:param node:
:return: pixel
"""
pixel = [(node[0] + 2) * self._node_density,
(node[1] + 2) * self._node_density]
return pixel
def _pixel_to_node(self, pixel):
"""
Conversion from pixel format (image) to node (graph)
:param pixel:
:return: node
"""
node = [int((pixel[0] / self._node_density) - 2),
int((pixel[1] / self._node_density) - 2)]
return tuple(node)
def _pixel_to_world(self, pixel):
"""
Conversion from pixel format (image) to world (3D)
:param pixel:
:return: world
"""
relative_location = [pixel[0] * self._pixel_density,
pixel[1] * self._pixel_density]
world = [
relative_location[0] + self._mapoffset[0] - self._worldoffset[0],
relative_location[1] + self._mapoffset[1] - self._worldoffset[1],
22
]
return world
def _world_to_pixel(self, world):
"""
Conversion from world format (3D) to pixel
:param world:
:return: pixel
"""
rotation = np.array([world[0], world[1], world[2]])
rotation = rotation.dot(self._worldrotation)
relative_location = [rotation[0] + self._worldoffset[0] - self._mapoffset[0],
rotation[1] + self._worldoffset[1] - self._mapoffset[1],
rotation[2] + self._worldoffset[2] - self._mapoffset[2]]
pixel = [math.floor(relative_location[0] / float(self._pixel_density)),
math.floor(relative_location[1] / float(self._pixel_density))]
return pixel
def _world_to_node(self, world):
return self._pixel_to_node(self._world_to_pixel(world))
def _node_to_world(self, node):
return self._pixel_to_world(self._node_to_pixel(node))
def _check_input_type(self, input_data):
if len(input_data) > 2:
return WORLD
elif type(input_data[0]) is int:
return NODE
else:
return PIXEL
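The node/pixel mapping above is a simple affine relation, as the standalone round trip below illustrates (the node_density value is arbitrary):

# pixel = (node + 2) * node_density, node = pixel / node_density - 2
node_density = 50.0
node = (3, 7)
pixel = [(node[0] + 2) * node_density, (node[1] + 2) * node_density]
back = (int(pixel[0] / node_density - 2), int(pixel[1] / node_density - 2))
assert back == node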

View File

@ -0,0 +1,141 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import math
import numpy as np
def string_to_node(string):
vec = string.split(',')
return (int(vec[0]), int(vec[1]))
def string_to_floats(string):
vec = string.split(',')
return (float(vec[0]), float(vec[1]), float(vec[2]))
def sldist(c1, c2):
return math.sqrt((c2[0] - c1[0]) ** 2 + (c2[1] - c1[1]) ** 2)
def sldist3(c1, c2):
return math.sqrt((c2[0] - c1[0]) ** 2 + (c2[1] - c1[1])
** 2 + (c2[2] - c1[2]) ** 2)
class Graph(object):
"""
A simple directed, weighted graph
"""
def __init__(self, graph_file=None, node_density=50):
self._nodes = set()
self._angles = {}
self._edges = {}
self._distances = {}
self._node_density = node_density
if graph_file is not None:
with open(graph_file, 'r') as f:
# Skip the first four header lines of the file
lines_after_4 = f.readlines()[4:]
# The next line contains the graph resolution
linegraphres = lines_after_4[0]
self._resolution = string_to_node(linegraphres)
for line in lines_after_4[1:]:
from_node, to_node, d = line.split()
from_node = string_to_node(from_node)
to_node = string_to_node(to_node)
if from_node not in self._nodes:
self.add_node(from_node)
if to_node not in self._nodes:
self.add_node(to_node)
self._edges.setdefault(from_node, [])
self._edges[from_node].append(to_node)
self._distances[(from_node, to_node)] = float(d)
def add_node(self, value):
self._nodes.add(value)
def make_orientations(self, node, heading):
import collections
distance_dic = {}
for node_iter in self._nodes:
if node_iter != node:
distance_dic[sldist(node, node_iter)] = node_iter
distance_dic = collections.OrderedDict(
sorted(distance_dic.items()))
self._angles[node] = heading
for _, v in distance_dic.items():
start_to_goal = np.array([node[0] - v[0], node[1] - v[1]])
self._angles[v] = start_to_goal / np.linalg.norm(start_to_goal)
def add_edge(self, from_node, to_node, distance):
self._add_edge(from_node, to_node, distance)
def _add_edge(self, from_node, to_node, distance):
self._edges.setdefault(from_node, [])
self._edges[from_node].append(to_node)
self._distances[(from_node, to_node)] = distance
def get_resolution(self):
return self._resolution
def get_edges(self):
return self._edges
def intersection_nodes(self):
intersect_nodes = []
for node in self._nodes:
if len(self._edges[node]) > 2:
intersect_nodes.append(node)
return intersect_nodes
# This also contains the non-intersection turns...
def turn_nodes(self):
return self._nodes
def plot_ori(self, c):
from matplotlib import collections as mc
import matplotlib.pyplot as plt
line_len = 1
lines = [[(p[0], p[1]), (p[0] + line_len * self._angles[p][0],
p[1] + line_len * self._angles[p][1])] for p in self._nodes]
lc = mc.LineCollection(lines, linewidth=2, color='green')
_, ax = plt.subplots()
ax.add_collection(lc)
ax.autoscale()
ax.margins(0.1)
xs = [p[0] for p in self._nodes]
ys = [p[1] for p in self._nodes]
plt.scatter(xs, ys, color=c)
def plot(self, c):
import matplotlib.pyplot as plt
xs = [p[0] for p in self._nodes]
ys = [p[1] for p in self._nodes]
plt.scatter(xs, ys, color=c)
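A graph can also be built by hand, which makes the intersection query easy to see (an intersection is any node with more than two outgoing edges):

# Minimal hand-built example: (0, 0) connects to three neighbours.
g = Graph()
nodes = [(0, 0), (1, 0), (0, 1), (1, 1)]
for n in nodes:
    g.add_node(n)
for n in nodes[1:]:
    g.add_edge((0, 0), n, 1.0)
    g.add_edge(n, (0, 0), 1.0)
print(g.intersection_nodes())  # [(0, 0)]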

View File

@ -0,0 +1,135 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import copy
import numpy as np
def angle_between(v1, v2):
return np.arccos(np.dot(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2))
class Grid(object):
def __init__(self, graph):
self._graph = graph
self._structure = self._make_structure()
self._walls = self._make_walls()
def search_on_grid(self, x, y):
visit = [[0, 1], [0, -1], [1, 0], [1, 1],
[1, -1], [-1, 0], [-1, 1], [-1, -1]]
c_x, c_y = x, y
scale = 1
while self._structure[c_x, c_y] != 0:
for offset in visit:
c_x, c_y = x + offset[0] * scale, y + offset[1] * scale
if c_x >= 0 and c_x < self._graph.get_resolution()[
0] and c_y >= 0 and c_y < self._graph.get_resolution()[1]:
if self._structure[c_x, c_y] == 0:
break
else:
c_x, c_y = x, y
scale += 1
return c_x, c_y
def get_walls(self):
return self._walls
def get_wall_source(self, pos, pos_ori, target):
free_nodes = self._get_adjacent_free_nodes(pos)
# print self._walls
final_walls = copy.copy(self._walls)
# print final_walls
heading_start = np.array([pos_ori[0], pos_ori[1]])
for adj in free_nodes:
start_to_goal = np.array([adj[0] - pos[0], adj[1] - pos[1]])
angle = angle_between(heading_start, start_to_goal)
if (angle > 1.6 and adj != target):
final_walls.add((adj[0], adj[1]))
return final_walls
def get_wall_target(self, pos, pos_ori, source):
free_nodes = self._get_adjacent_free_nodes(pos)
final_walls = copy.copy(self._walls)
heading_start = np.array([pos_ori[0], pos_ori[1]])
for adj in free_nodes:
start_to_goal = np.array([adj[0] - pos[0], adj[1] - pos[1]])
angle = angle_between(heading_start, start_to_goal)
if (angle < 1.0 and adj != source):
final_walls.add((adj[0], adj[1]))
return final_walls
def _draw_line(self, grid, xi, yi, xf, yf):
if xf < xi:
xi, xf = xf, xi
if yf < yi:
yi, yf = yf, yi
for i in range(xi, xf + 1):
for j in range(yi, yf + 1):
grid[i, j] = 0.0
return grid
def _make_structure(self):
structure = np.ones(
(self._graph.get_resolution()[0],
self._graph.get_resolution()[1]))
for key, connections in self._graph.get_edges().items():
# draw a line
for con in connections:
# print key[0],key[1],con[0],con[1]
structure = self._draw_line(
structure, key[0], key[1], con[0], con[1])
# print grid
return structure
def _make_walls(self):
walls = set()
for i in range(self._structure.shape[0]):
for j in range(self._structure.shape[1]):
if self._structure[i, j] == 1.0:
walls.add((i, j))
return walls
def _get_adjacent_free_nodes(self, pos):
""" Eight nodes in total """
visit = [[0, 1], [0, -1], [1, 0], [1, 1],
[1, -1], [-1, 0], [-1, 1], [-1, -1]]
adjacent = set()
for offset in visit:
node = (pos[0] + offset[0], pos[1] + offset[1])
if (node[0] >= 0 and node[0] < self._graph.get_resolution()[0]
and node[1] >= 0 and node[1] < self._graph.get_resolution()[1]):
if self._structure[node[0], node[1]] == 0.0:
adjacent.add(node)
return adjacent

View File

@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
@ -19,54 +19,36 @@ try:
except ImportError:
raise RuntimeError('cannot import PIL, make sure pillow package is installed')
def string_to_node(string):
vec = string.split(',')
return (int(vec[0]), int(vec[1]))
from carla.planner.graph import Graph
from carla.planner.graph import sldist
from carla.planner.grid import Grid
from carla.planner.converter import Converter
def string_to_floats(string):
vec = string.split(',')
return (float(vec[0]), float(vec[1]), float(vec[2]))
def color_to_angle(color):
return (float(color) / 255.0) * 2 * math.pi
class CarlaMap(object):
def __init__(self, city):
def __init__(self, city, pixel_density, node_density):
dir_path = os.path.dirname(__file__)
city_file = os.path.join(dir_path, city + '.txt')
city_map_file = os.path.join(dir_path, city + '.png')
city_map_file_lanes = os.path.join(dir_path, city + 'Lanes.png')
city_map_file_center = os.path.join(dir_path, city + 'Central.png')
with open(city_file, 'r') as file_object:
# The built graph. This is the exact same graph that unreal builds. This
# is a generic structure used for many cases
self._graph = Graph(city_file, node_density)
linewordloffset = file_object.readline()
# The offset of the world from the zero coordinates ( The
# coordinate we consider zero)
self.worldoffset = string_to_floats(linewordloffset)
self._pixel_density = pixel_density
self._grid = Grid(self._graph)
# The number of game units per pixel. For now this is fixed.
lineworldangles = file_object.readline()
self.angles = string_to_floats(lineworldangles)
self._converter = Converter(city_file, pixel_density, node_density)
self.worldrotation = np.array([
[math.cos(math.radians(self.angles[2])), -math.sin(math.radians(self.angles[2])), 0.0],
[math.sin(math.radians(self.angles[2])), math.cos(math.radians(self.angles[2])), 0.0],
[0.0, 0.0, 1.0]])
# Ignore for now, these are offsets for map coordinates and scale
# (not used).
_ = file_object.readline()
linemapoffset = file_object.readline()
# The offset of the map zero coordinate.
self.mapoffset = string_to_floats(linemapoffset)
# the graph resolution.
linegraphres = file_object.readline()
self.resolution = string_to_node(linegraphres)
# The number of game units per pixel.
self.pixel_density = 16.43
# Load the lanes image
self.map_image_lanes = Image.open(city_map_file_lanes)
self.map_image_lanes.load()
@ -76,72 +58,95 @@ class CarlaMap(object):
self.map_image.load()
self.map_image = np.asarray(self.map_image, dtype="int32")
# Load the lanes image
self.map_image_center = Image.open(city_map_file_center)
self.map_image_center.load()
self.map_image_center = np.asarray(self.map_image_center, dtype="int32")
def get_graph_resolution(self):
return self._graph.get_resolution()
def get_map(self, height=None):
if height is not None:
img = Image.fromarray(self.map_image.astype(np.uint8))
aspect_ratio = height/float(self.map_image.shape[0])
aspect_ratio = height / float(self.map_image.shape[0])
img = img.resize((int(aspect_ratio*self.map_image.shape[1]),height), Image.ANTIALIAS)
img = img.resize((int(aspect_ratio * self.map_image.shape[1]), height), Image.ANTIALIAS)
img.load()
return np.asarray(img, dtype="int32")
return np.fliplr(self.map_image)
def get_map_lanes(self, height=None):
# if size is not None:
# img = Image.fromarray(self.map_image_lanes.astype(np.uint8))
# img = img.resize((size[1], size[0]), Image.ANTIALIAS)
# img.load()
# return np.fliplr(np.asarray(img, dtype="int32"))
# return np.fliplr(self.map_image_lanes)
raise NotImplementedError
def get_position_on_map(self, world):
"""Get the position on the map for a certain world position."""
relative_location = []
pixel = []
rotation = np.array([world[0], world[1], world[2]])
rotation = rotation.dot(self.worldrotation)
relative_location.append(rotation[0] + self.worldoffset[0] - self.mapoffset[0])
relative_location.append(rotation[1] + self.worldoffset[1] - self.mapoffset[1])
relative_location.append(rotation[2] + self.worldoffset[2] - self.mapoffset[2])
pixel.append(math.floor(relative_location[0] / float(self.pixel_density)))
pixel.append(math.floor(relative_location[1] / float(self.pixel_density)))
return pixel
def get_position_on_world(self, pixel):
"""Get world position of a certain map position."""
relative_location = []
world_vertex = []
relative_location.append(pixel[0] * self.pixel_density)
relative_location.append(pixel[1] * self.pixel_density)
world_vertex.append(relative_location[0] + self.mapoffset[0] - self.worldoffset[0])
world_vertex.append(relative_location[1] + self.mapoffset[1] - self.worldoffset[1])
world_vertex.append(22) # Z does not matter for now.
return world_vertex
def get_map_lanes(self, size=None):
if size is not None:
img = Image.fromarray(self.map_image_lanes.astype(np.uint8))
img = img.resize((size[1], size[0]), Image.ANTIALIAS)
img.load()
return np.fliplr(np.asarray(img, dtype="int32"))
return np.fliplr(self.map_image_lanes)
def get_lane_orientation(self, world):
"""Get the lane orientation of a certain world position."""
relative_location = []
pixel = []
rotation = np.array([world[0], world[1], world[2]])
rotation = rotation.dot(self.worldrotation)
relative_location.append(rotation[0] + self.worldoffset[0] - self.mapoffset[0])
relative_location.append(rotation[1] + self.worldoffset[1] - self.mapoffset[1])
relative_location.append(rotation[2] + self.worldoffset[2] - self.mapoffset[2])
pixel.append(math.floor(relative_location[0] / float(self.pixel_density)))
pixel.append(math.floor(relative_location[1] / float(self.pixel_density)))
pixel = self.convert_to_pixel(world)
ori = self.map_image_lanes[int(pixel[1]), int(pixel[0]), 2]
ori = ((float(ori) / 255.0)) * 2 * math.pi
ori = color_to_angle(ori)
return (-math.cos(ori), -math.sin(ori))
def convert_to_node(self, input_data):
"""
Receives a data type (Can Be Pixel or World )
:param input_data: position in some coordinate
:return: A node object
"""
return self._converter.convert_to_node(input_data)
def convert_to_pixel(self, input_data):
"""
Receives a data type (Can Be Node or World )
:param input_data: position in some coordinate
:return: A node object
"""
return self._converter.convert_to_pixel(input_data)
def convert_to_world(self, input_data):
"""
Receives a data type (Can Be Pixel or Node )
:param input_data: position in some coordinate
:return: A node object
"""
return self._converter.convert_to_world(input_data)
def get_walls_directed(self, node_source, source_ori, node_target, target_ori):
"""
This is the most hacky function. Instead of planning on a two-way
road, we basically use a one-way road and interrupt the other
direction by adding an artificial wall.
"""
final_walls = self._grid.get_wall_source(node_source, source_ori, node_target)
final_walls = final_walls.union(self._grid.get_wall_target(
node_target, target_ori, node_source))
return final_walls
def get_walls(self):
return self._grid.get_walls()
def get_distance_closest_node(self, pos):
distance = []
for node_iter in self._graph.intersection_nodes():
distance.append(sldist(node_iter, pos))
return sorted(distance)[0]
def get_intersection_nodes(self):
return self._graph.intersection_nodes()
def search_on_grid(self, node):
return self._grid.search_on_grid(node[0], node[1])

View File

@ -0,0 +1,173 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import collections
import math
import numpy as np
from . import city_track
def compare(x, y):
return collections.Counter(x) == collections.Counter(y)
# Constants used for the high-level commands
REACH_GOAL = 0.0
GO_STRAIGHT = 5.0
TURN_RIGHT = 4.0
TURN_LEFT = 3.0
LANE_FOLLOW = 2.0
# Auxiliary algebra function
def angle_between(v1, v2):
return np.arccos(np.dot(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2))
def sldist(c1, c2): return math.sqrt((c2[0] - c1[0]) ** 2 + (c2[1] - c1[1]) ** 2)
def signal(v1, v2):
return np.cross(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2)
class Planner(object):
def __init__(self, city_name):
self._city_track = city_track.CityTrack(city_name)
self._commands = []
def get_next_command(self, source, source_ori, target, target_ori):
"""
Computes the full plan and returns the next command.
:param source: source position
:param source_ori: source orientation
:param target: target position
:param target_ori: target orientation
:return: a command (Straight, Lane Follow, Left or Right)
"""
track_source = self._city_track.project_node(source)
track_target = self._city_track.project_node(target)
# reach the goal
if self._city_track.is_at_goal(track_source, track_target):
return REACH_GOAL
if (self._city_track.is_at_new_node(track_source)
and self._city_track.is_away_from_intersection(track_source)):
route = self._city_track.compute_route(track_source, source_ori,
track_target, target_ori)
if route is None:
raise RuntimeError('Impossible to find route')
self._commands = self._route_to_commands(route)
if self._city_track.is_far_away_from_route_intersection(
track_source):
return LANE_FOLLOW
else:
if self._commands:
return self._commands[0]
else:
return LANE_FOLLOW
else:
if self._city_track.is_far_away_from_route_intersection(
track_source):
return LANE_FOLLOW
# If there are computed commands
if self._commands:
return self._commands[0]
else:
return LANE_FOLLOW
def get_shortest_path_distance(
self,
source,
source_ori,
target,
target_ori):
distance = 0
track_source = self._city_track.project_node(source)
track_target = self._city_track.project_node(target)
current_pos = track_source
route = self._city_track.compute_route(track_source, source_ori,
track_target, target_ori)
# No route: the distance is zero
if route is None:
return 0.0
for node_iter in route:
distance += sldist(node_iter, current_pos)
current_pos = node_iter
# We multiply by these values to convert distance to world coordinates
return distance * self._city_track.get_pixel_density() \
* self._city_track.get_node_density()
def is_there_posible_route(self, source, source_ori, target, target_ori):
track_source = self._city_track.project_node(source)
track_target = self._city_track.project_node(target)
return not self._city_track.compute_route(
track_source, source_ori, track_target, target_ori) is None
def test_position(self, source):
node_source = self._city_track.project_node(source)
return self._city_track.is_away_from_intersection(node_source)
def _route_to_commands(self, route):
"""
From the shortest-path route, compute the list of commands
:param route: the subgraph containing the shortest path
:return: list of commands encoded from 0-5
"""
commands_list = []
# Skip the endpoints: they have no valid past/future neighbours
for i in range(1, len(route) - 1):
if route[i] not in self._city_track.get_intersection_nodes():
continue
current = route[i]
past = route[i - 1]
future = route[i + 1]
past_to_current = np.array(
[current[0] - past[0], current[1] - past[1]])
current_to_future = np.array(
[future[0] - current[0], future[1] - current[1]])
angle = signal(current_to_future, past_to_current)
if angle < -0.1:
command = TURN_RIGHT
elif angle > 0.1:
command = TURN_LEFT
else:
command = GO_STRAIGHT
commands_list.append(command)
return commands_list
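The turn decision reduces to the sign of a normalised 2D cross product; the standalone check below walks one step of the loop above, on made-up grid nodes (the sign convention follows the map's pixel frame):

import numpy as np

# One step of _route_to_commands: heading +x, then turning to +y.
past, current, future = (0, 0), (1, 0), (1, 1)
past_to_current = np.array([current[0] - past[0], current[1] - past[1]])
current_to_future = np.array([future[0] - current[0], future[1] - current[1]])
s = np.cross(current_to_future, past_to_current) \
    / np.linalg.norm(current_to_future) / np.linalg.norm(past_to_current)
print(s)  # -1.0 -> TURN_RIGHT under the planner's sign convention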

View File

@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
@ -8,6 +8,8 @@
import os
import numpy as np
import json
try:
import numpy
@ -39,7 +41,12 @@ class Sensor(object):
"""
Base class for sensor descriptions. Used to add sensors to CarlaSettings.
"""
pass
def set(self, **kwargs):
for key, value in kwargs.items():
if not hasattr(self, key):
raise ValueError('CarlaSettings.Sensor: no key named %r' % key)
setattr(self, key, value)
class Camera(Sensor):
@ -62,12 +69,6 @@ class Camera(Sensor):
self.CameraRotationYaw = 0
self.set(**kwargs)
def set(self, **kwargs):
for key, value in kwargs.items():
if not hasattr(self, key):
raise ValueError('CarlaSettings.Camera: no key named %r' % key)
setattr(self, key, value)
def set_image_size(self, pixels_x, pixels_y):
self.ImageSizeX = pixels_x
self.ImageSizeY = pixels_y
@ -83,6 +84,51 @@ class Camera(Sensor):
self.CameraRotationYaw = yaw
class Lidar(Sensor):
"""
Lidar description. This class can be added to a CarlaSettings object to add
a Lidar to the player vehicle.
"""
def __init__(self, name, **kwargs):
self.LidarName = name
# Number of lasers
self.Channels = 32
# Maximum measuring distance (in centimeters)
self.Range = 5000
# Points generated by all lasers per second
self.PointsPerSecond = 100000
# Lidar rotation frequency
self.RotationFrequency = 10
# Upper laser angle, measured from horizontal;
# positive values mean above the horizontal line
self.UpperFovLimit = 10
# Lower laser angle, measured from horizontal;
# negative values mean below the horizontal line
self.LowerFovLimit = -30
# Whether to show debug points of laser hits in the simulator
self.ShowDebugPoints = False
# Position relative to the player.
self.LidarPositionX = 0
self.LidarPositionY = 0
self.LidarPositionZ = 250
# Rotation relative to the player.
self.LidarRotationPitch = 0
self.LidarRotationRoll = 0
self.LidarRotationYaw = 0
self.set(**kwargs)
def set_position(self, x, y, z):
self.LidarPositionX = x
self.LidarPositionY = y
self.LidarPositionZ = z
def set_rotation(self, pitch, roll, yaw):
self.LidarRotationPitch = pitch
self.LidarRotationRoll = roll
self.LidarRotationYaw = yaw
# ==============================================================================
# -- SensorData ----------------------------------------------------------
# ==============================================================================
@ -252,3 +298,62 @@ class PointCloud(SensorData):
def __str__(self):
return str(self.array)
class LidarMeasurement(SensorData):
"""Data generated by a Lidar."""
def __init__(self, horizontal_angle, channels_count, lidar_type, raw_data):
self.horizontal_angle = horizontal_angle
self.channels_count = channels_count
self.type = lidar_type
self._converted_data = None
points_count_by_channel_size = channels_count * 4
points_count_by_channel_bytes = raw_data[4*4:4*4 + points_count_by_channel_size]
self.points_count_by_channel = np.frombuffer(points_count_by_channel_bytes, dtype=np.dtype('uint32'))
self.points_size = int(np.sum(self.points_count_by_channel) * 3 * 8) # three floats X, Y, Z
begin = 4*4 + points_count_by_channel_size  # 16-byte header: horizontal_angle (double), type, channels_count
end = begin + self.points_size
self.points_data = raw_data[begin:end]
self._data_size = 4*4 + points_count_by_channel_size + self.points_size
@property
def size_in_bytes(self):
return self._data_size
@property
def data(self):
"""
Lazy initialization for data property, stores converted data in its
default format.
"""
if self._converted_data is None:
points_in_one_channel = self.points_count_by_channel[0]
points = np.frombuffer(self.points_data[:self.points_size], dtype='float')
points = np.reshape(points, (self.channels_count, points_in_one_channel, 3))
self._converted_data = {
'horizontal_angle' : self.horizontal_angle,
'channels_count' : self.channels_count,
'points_count_by_channel' : self.points_count_by_channel,
'points' : points
}
return self._converted_data
def save_to_disk(self, filename):
"""Save lidar measurements to disk"""
folder = os.path.dirname(filename)
if not os.path.isdir(folder):
os.makedirs(folder)
with open(filename, 'wt') as f:
f.write(json.dumps({
'horizontal_angle' : self.horizontal_angle,
'channels_count' : self.channels_count,
'points_count_by_channel' : self.points_count_by_channel.tolist(),
'points' : self.data['points'].tolist()
}))
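A saved measurement can be loaded back into numpy like this (the path is hypothetical):

import json
import numpy as np

# Hypothetical file written by save_to_disk above.
with open('_out/episode_0000/Lidar32/000010.json') as f:
    frame = json.load(f)
points = np.asarray(frame['points'])
print(frame['channels_count'], points.shape)  # (channels, points_per_channel, 3)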

View File

@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
@ -46,6 +46,7 @@ class CarlaSettings(object):
self.randomize_weather()
self.set(**kwargs)
self._cameras = []
self._lidars = []
def set(self, **kwargs):
for key, value in kwargs.items():
@ -69,13 +70,15 @@ class CarlaSettings(object):
"""Add a sensor to the player vehicle (see sensor.py)."""
if isinstance(sensor, carla_sensor.Camera):
self._cameras.append(sensor)
elif isinstance(sensor, carla_sensor.Lidar):
self._lidars.append(sensor)
else:
raise ValueError('Sensor not supported')
def __str__(self):
"""Converts this object to an INI formatted string."""
ini = ConfigParser()
ini.optionxform=str
ini.optionxform = str
S_SERVER = 'CARLA/Server'
S_LEVEL = 'CARLA/LevelSettings'
S_CAPTURE = 'CARLA/SceneCapture'
@ -99,6 +102,7 @@ class CarlaSettings(object):
ini.add_section(S_CAPTURE)
ini.set(S_CAPTURE, 'Cameras', ','.join(c.CameraName for c in self._cameras))
ini.set(S_CAPTURE, 'Lidars', ','.join(l.LidarName for l in self._lidars))
for camera in self._cameras:
add_section(S_CAPTURE + '/' + camera.CameraName, camera, [
@ -113,6 +117,22 @@ class CarlaSettings(object):
'CameraRotationRoll',
'CameraRotationYaw'])
for lidar in self._lidars:
add_section(S_CAPTURE + '/' + lidar.LidarName, lidar, [
'Channels',
'Range',
'PointsPerSecond',
'RotationFrequency',
'UpperFovLimit',
'LowerFovLimit',
'ShowDebugPoints',
'LidarPositionX',
'LidarPositionY',
'LidarPositionZ',
'LidarRotationPitch',
'LidarRotationRoll',
'LidarRotationYaw'])
if sys.version_info >= (3, 0):
text = io.StringIO()
else:
@ -129,14 +149,15 @@ def get_sensor_names(settings):
"""
if isinstance(settings, CarlaSettings):
# pylint: disable=protected-access
return [camera.CameraName for camera in settings._cameras]
return [camera.CameraName for camera in settings._cameras] + \
[lidar.LidarName for lidar in settings._lidars]
ini = ConfigParser()
if sys.version_info >= (3, 2):
ini.read_string(settings)
elif sys.version_info >= (3, 0):
ini.readfp(io.StringIO(settings)) # pylint: disable=deprecated-method
ini.readfp(io.StringIO(settings)) # pylint: disable=deprecated-method
else:
ini.readfp(io.BytesIO(settings)) # pylint: disable=deprecated-method
ini.readfp(io.BytesIO(settings)) # pylint: disable=deprecated-method
section_name = 'CARLA/SceneCapture'
option_name = 'Cameras'
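As a usage sketch (assuming the carla PythonClient package is importable), adding a Lidar now surfaces both in the sensor-name list and in the generated INI text:

from carla.sensor import Lidar
from carla.settings import CarlaSettings, get_sensor_names

settings = CarlaSettings()
settings.add_sensor(Lidar('Lidar32', Channels=32, Range=5000))
ini_text = str(settings)
assert 'Lidars' in ini_text  # [CARLA/SceneCapture] now lists the lidar
assert 'Lidar32' in get_sensor_names(settings)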

View File

@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
@ -41,7 +41,7 @@ class TCPClient(object):
self._socket.settimeout(self._timeout)
logging.debug('%sconnected', self._logprefix)
return
except OSError as exception:
except socket.error as exception:
error = exception
logging.debug('%sconnection attempt %d: %s', self._logprefix, attempt, error)
time.sleep(1)
@ -65,7 +65,7 @@ class TCPClient(object):
header = struct.pack('<L', len(message))
try:
self._socket.sendall(header + message)
except OSError as exception:
except socket.error as exception:
self._reraise_exception_as_tcp_error('failed to write data', exception)
def read(self):
@ -85,7 +85,7 @@ class TCPClient(object):
while length > 0:
try:
data = self._socket.recv(length)
except OSError as exception:
except socket.error as exception:
self._reraise_exception_as_tcp_error('failed to read data', exception)
if not data:
raise TCPConnectionError(self._logprefix + 'connection closed')
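The switch from OSError to socket.error is a Python 2 compatibility fix: on Python 3.3+ the two names are aliases, but on Python 2 socket.error subclasses IOError, so catching OSError there would miss socket failures. A quick check of the alias claim (Python 3):

    import socket

    # Since Python 3.3, socket.error is literally the same class as OSError.
    assert socket.error is OSError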

View File

@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,7 +1,7 @@
#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
@ -16,13 +16,13 @@ import random
import time
from carla.client import make_carla_client
from carla.sensor import Camera
from carla.sensor import Camera, Lidar, LidarMeasurement
from carla.settings import CarlaSettings
from carla.tcp import TCPConnectionError
from carla.util import print_over_same_line
def run_carla_client(host, port, autopilot_on, save_images_to_disk, image_filename_format, settings_filepath):
def run_carla_client(host, port, autopilot_on, save_images_to_disk, image_filename_format, lidar_filename_format, settings_filepath):
# Here we will run 3 episodes with 300 frames each.
number_of_episodes = 3
frames_per_episode = 300
@ -70,6 +70,19 @@ def run_carla_client(host, port, autopilot_on, save_images_to_disk, image_filena
camera1.set_position(30, 0, 130)
settings.add_sensor(camera1)
lidar0 = Lidar('Lidar32')
lidar0.set_position(0, 0, 250)
lidar0.set_rotation(0, 0, 0)
lidar0.set(
Channels = 32,
Range = 5000,
PointsPerSecond = 100000,
RotationFrequency = 10,
UpperFovLimit = 10,
LowerFovLimit = -30,
ShowDebugPoints = False)
settings.add_sensor(lidar0)
else:
# Alternatively, we can load these settings from a file.
@ -103,8 +116,11 @@ def run_carla_client(host, port, autopilot_on, save_images_to_disk, image_filena
# Save the images to disk if requested.
if save_images_to_disk:
for name, image in sensor_data.items():
image.save_to_disk(image_filename_format.format(episode, name, frame))
for name, measurement in sensor_data.items():
if isinstance(measurement, LidarMeasurement):
measurement.save_to_disk(lidar_filename_format.format(episode, name, frame))
else:
measurement.save_to_disk(image_filename_format.format(episode, name, frame))
# We can access the encoded data of a given image as numpy
# array using its "data" property. For instance, to get the
@ -209,9 +225,11 @@ def main():
autopilot_on=args.autopilot,
save_images_to_disk=args.images_to_disk,
image_filename_format='_out/episode_{:0>4d}/{:s}/{:0>6d}.png',
lidar_filename_format='_out/episode_{:0>4d}/{:s}/{:0>6d}.json',
settings_filepath=args.carla_settings)
print('Done.')
return
except TCPConnectionError as error:
logging.error(error)
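For clarity, a worked expansion of the two filename format strings used above:

    # {:0>4d} zero-pads the episode to four digits, {:0>6d} the frame to six.
    print('_out/episode_{:0>4d}/{:s}/{:0>6d}.png'.format(0, 'CameraRGB', 42))
    # -> _out/episode_0000/CameraRGB/000042.png
    print('_out/episode_{:0>4d}/{:s}/{:0>6d}.json'.format(0, 'Lidar32', 42))
    # -> _out/episode_0000/Lidar32/000042.json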

View File

@ -1,7 +1,7 @@
#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
@ -93,6 +93,19 @@ def make_carla_settings():
camera2.set_position(200, 0, 140)
camera2.set_rotation(0.0, 0.0, 0.0)
settings.add_sensor(camera2)
lidar0 = sensor.Lidar('Lidar32')
lidar0.set_position(0, 0, 250)
lidar0.set_rotation(0, 0, 0)
lidar0.set(
Channels = 32,
Range = 5000,
PointsPerSecond = 100000,
RotationFrequency = 10,
UpperFovLimit = 10,
LowerFovLimit = -30,
ShowDebugPoints = False)
settings.add_sensor(lidar0)
return settings
@ -124,10 +137,11 @@ class CarlaGame(object):
self._main_image = None
self._mini_view_image1 = None
self._mini_view_image2 = None
self._lidar_measurement = None
self._map_view = None
self._is_on_reverse = False
self._city_name = city_name
self._map = CarlaMap(city_name) if city_name is not None else None
self._map = CarlaMap(city_name, 16.43, 50.0) if city_name is not None else None
self._map_shape = self._map.map_image.shape if city_name is not None else None
self._map_view = self._map.get_map(WINDOW_HEIGHT) if city_name is not None else None
self._position = None
@ -177,12 +191,13 @@ class CarlaGame(object):
self._main_image = sensor_data['CameraRGB']
self._mini_view_image1 = sensor_data['CameraDepth']
self._mini_view_image2 = sensor_data['CameraSemSeg']
self._lidar_measurement = sensor_data['Lidar32']
# Print measurements every second.
if self._timer.elapsed_seconds_since_lap() > 1.0:
if self._city_name is not None:
# Function to get car position on map.
map_position = self._map.get_position_on_map([
map_position = self._map.convert_to_pixel([
measurements.player_measurements.transform.location.x,
measurements.player_measurements.transform.location.y,
measurements.player_measurements.transform.location.z])
@ -206,7 +221,7 @@ class CarlaGame(object):
control = self._get_keyboard_control(pygame.key.get_pressed())
# Set the player position
if self._city_name is not None:
self._position = self._map.get_position_on_map([
self._position = self._map.convert_to_pixel([
measurements.player_measurements.transform.location.x,
measurements.player_measurements.transform.location.y,
measurements.player_measurements.transform.location.z])
@ -295,25 +310,45 @@ class CarlaGame(object):
self._display.blit(
surface, (2 * gap_x + MINI_WINDOW_WIDTH, mini_image_y))
if self._lidar_measurement is not None:
lidar_data = np.array(self._lidar_measurement.data['points'][:, :, :2])
lidar_data /= 50.0
lidar_data += 100.0
lidar_data = np.fabs(lidar_data)
lidar_data = lidar_data.astype(np.int32)
lidar_data = np.reshape(lidar_data, (-1, 2))
# Draw the lidar point cloud as a top-down 200x200 image.
lidar_img_size = (200, 200, 3)
lidar_img = np.zeros(lidar_img_size)
lidar_img[tuple(lidar_data.T)] = (255, 255, 255)
surface = pygame.surfarray.make_surface(
lidar_img
)
self._display.blit(surface, (10, 10))
if self._map_view is not None:
array = self._map_view
array = array[:, :, :3]
new_window_width = (float(WINDOW_HEIGHT)/float(self._map_shape[0]))*float(self._map_shape[1])
surface = pygame.surfarray.make_surface(array.swapaxes(0, 1))
w_pos = int(self._position[0]*(float(WINDOW_HEIGHT)/float(self._map_shape[0])))
h_pos =int(self._position[1] *(new_window_width/float(self._map_shape[1])))
h_pos = int(self._position[1] *(new_window_width/float(self._map_shape[1])))
pygame.draw.circle(surface, [255, 0, 0, 255], (w_pos, h_pos), 6, 0)
for agent in self._agent_positions:
if agent.HasField('vehicle'):
agent_position = self._map.get_position_on_map([
agent_position = self._map.convert_to_pixel([
agent.vehicle.transform.location.x,
agent.vehicle.transform.location.y,
agent.vehicle.transform.location.z])
w_pos = int(agent_position[0]*(float(WINDOW_HEIGHT)/float(self._map_shape[0])))
h_pos =int(agent_position[1] *(new_window_width/float(self._map_shape[1])))
pygame.draw.circle(surface, [255, 0, 255, 255], (w_pos,h_pos), 4, 0)
h_pos = int(agent_position[1] *(new_window_width/float(self._map_shape[1])))
pygame.draw.circle(surface, [255, 0, 255, 255], (w_pos, h_pos), 4, 0)
self._display.blit(surface, (WINDOW_WIDTH, 0))
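The lidar drawing above packs a lot into a few lines: dividing x/y by 50 and shifting by 100 maps points within roughly +/-5000 sensor units (the Range configured earlier) into a 200x200 image centred on the vehicle, and fabs folds any leftover negatives back into range. A standalone sketch of the same math, with hypothetical points:

    import numpy as np

    points_xy = np.array([[0.0, 0.0], [2500.0, -2500.0]])  # hypothetical x/y pairs
    img_coords = np.fabs(points_xy / 50.0 + 100.0).astype(np.int32)
    lidar_img = np.zeros((200, 200, 3))
    lidar_img[tuple(img_coords.T)] = (255, 255, 255)  # origin lands at (100, 100)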

View File

@ -2,3 +2,5 @@ Pillow
numpy
protobuf
pygame
matplotlib
future
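The two new dependencies line up with the new modules: matplotlib presumably backs the benchmark plotting (plot_summary_test / plot_summary_train in run_benchmark.py below), and future provides Python 2/3 compatibility helpers for the client.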

94
PythonClient/run_benchmark.py Executable file
View File

@ -0,0 +1,94 @@
#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import argparse
import logging
import time
from carla.benchmarks.agent import Agent
from carla.benchmarks.corl_2017 import CoRL2017
from carla.client import make_carla_client, VehicleControl
from carla.tcp import TCPConnectionError
class Manual(Agent):
"""
Sample redefinition of the Agent:
an agent that goes straight.
"""
def run_step(self, measurements, sensor_data, target):
control = VehicleControl()
control.throttle = 0.9
return control
if __name__ == '__main__':
argparser = argparse.ArgumentParser(description=__doc__)
argparser.add_argument(
'-v', '--verbose',
action='store_true',
dest='verbose',
help='print some extra status information')
argparser.add_argument(
'-db', '--debug',
action='store_true',
dest='debug',
help='print debug information')
argparser.add_argument(
'--host',
metavar='H',
default='localhost',
help='IP of the host server (default: localhost)')
argparser.add_argument(
'-p', '--port',
metavar='P',
default=2000,
type=int,
help='TCP port to listen to (default: 2000)')
argparser.add_argument(
'-c', '--city-name',
metavar='C',
default='Town01',
help='The town that is going to be used in the benchmark'
+ ' (needs to match the active town on the server; options: Town01 or Town02)')
argparser.add_argument(
'-n', '--log_name',
metavar='T',
default='test',
help='The name of the log file to be created by the benchmark'
)
args = argparser.parse_args()
if args.debug:
log_level = logging.DEBUG
elif args.verbose:
log_level = logging.INFO
else:
log_level = logging.WARNING
logging.basicConfig(format='%(levelname)s: %(message)s', level=log_level)
logging.info('listening to server %s:%s', args.host, args.port)
while True:
try:
with make_carla_client(args.host, args.port) as client:
corl = CoRL2017(city_name=args.city_name, name_to_save=args.log_name)
agent = Manual(args.city_name)
results = corl.benchmark_agent(agent, client)
corl.plot_summary_test()
corl.plot_summary_train()
break
except TCPConnectionError as error:
logging.error(error)
time.sleep(1)
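Manual above shows the minimal Agent contract: implement run_step(measurements, sensor_data, target) and return a VehicleControl. A slightly less trivial, hypothetical sketch; the forward_speed field on player_measurements is an assumption here, taken from this client's measurement protocol:

    from carla.benchmarks.agent import Agent
    from carla.client import VehicleControl

    class SlowAndSteady(Agent):
        """Hypothetical agent: full throttle, easing off at higher speeds."""

        def run_step(self, measurements, sensor_data, target):
            control = VehicleControl()
            speed = measurements.player_measurements.forward_speed
            # Threshold in whatever speed units the server reports.
            control.throttle = 0.5 if speed > 10.0 else 0.9
            return control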

15
PythonClient/setup.py Normal file
View File

@ -0,0 +1,15 @@
from setuptools import setup
# @todo Dependencies are missing.
setup(
name='carla_client',
version='0.7.1',
packages=['carla', 'carla.benchmarks', 'carla.planner'],
license='MIT License',
description='Python API for communicating with the CARLA server.',
url='https://github.com/carla-simulator/carla',
author='The CARLA team',
author_email='carla.simulator@gmail.com',
include_package_data=True
)
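With this file in place, the client should be installable in development mode with pip install -e PythonClient from the repository root; as the @todo notes, the dependencies from requirements.txt still have to be installed separately.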

View File

@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,7 +1,7 @@
#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -0,0 +1,168 @@
#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
"""Client that runs two servers simultaneously to test repeatability."""
import argparse
import logging
import os
import random
import sys
import time
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
from carla.client import make_carla_client
from carla.sensor import Camera, Image
from carla.settings import CarlaSettings
from carla.tcp import TCPConnectionError
def run_carla_clients(args):
filename = '_images_repeatability/server{:d}/{:0>6d}.png'
with make_carla_client(args.host1, args.port1) as client1:
logging.info('1st client connected')
with make_carla_client(args.host2, args.port2) as client2:
logging.info('2nd client connected')
settings = CarlaSettings()
settings.set(
SynchronousMode=True,
SendNonPlayerAgentsInfo=True,
NumberOfVehicles=50,
NumberOfPedestrians=50,
WeatherId=random.choice([1, 3, 7, 8, 14]))
settings.randomize_seeds()
if args.images_to_disk:
camera = Camera('DefaultCamera')
camera.set_image_size(800, 600)
settings.add_sensor(camera)
scene1 = client1.load_settings(settings)
scene2 = client2.load_settings(settings)
number_of_player_starts = len(scene1.player_start_spots)
assert number_of_player_starts == len(scene2.player_start_spots)
player_start = random.randint(0, max(0, number_of_player_starts - 1))
logging.info(
'start episode at %d/%d player start (run forever, press ctrl+c to cancel)',
player_start,
number_of_player_starts)
client1.start_episode(player_start)
client2.start_episode(player_start)
frame = 0
while True:
frame += 1
meas1, sensor_data1 = client1.read_data()
meas2, sensor_data2 = client2.read_data()
player1 = meas1.player_measurements
player2 = meas2.player_measurements
images1 = [x for x in sensor_data1.values() if isinstance(x, Image)]
images2 = [x for x in sensor_data2.values() if isinstance(x, Image)]
control1 = player1.autopilot_control
control2 = player2.autopilot_control
try:
assert len(images1) == len(images2)
assert len(meas1.non_player_agents) == len(meas2.non_player_agents)
assert player1.transform.location.x == player2.transform.location.x
assert player1.transform.location.y == player2.transform.location.y
assert player1.transform.location.z == player2.transform.location.z
assert control1.steer == control2.steer
assert control1.throttle == control2.throttle
assert control1.brake == control2.brake
assert control1.hand_brake == control2.hand_brake
assert control1.reverse == control2.reverse
except AssertionError:
logging.exception('assertion failed')
if args.images_to_disk:
assert len(images1) == 1
images1[0].save_to_disk(filename.format(1, frame))
images2[0].save_to_disk(filename.format(2, frame))
client1.send_control(control1)
client2.send_control(control2)
def main():
argparser = argparse.ArgumentParser(description=__doc__)
argparser.add_argument(
'-v', '--verbose',
action='store_true',
dest='debug',
help='print debug information')
argparser.add_argument(
'--log',
metavar='LOG_FILE',
default=None,
help='print output to file')
argparser.add_argument(
'--host1',
metavar='H',
default='127.0.0.1',
help='IP of the first host server (default: 127.0.0.1)')
argparser.add_argument(
'-p1', '--port1',
metavar='P',
default=2000,
type=int,
help='TCP port to listen to the first server (default: 2000)')
argparser.add_argument(
'--host2',
metavar='H',
default='127.0.0.1',
help='IP of the second host server (default: 127.0.0.1)')
argparser.add_argument(
'-p2', '--port2',
metavar='P',
default=3000,
type=int,
help='TCP port to listen to the second server (default: 3000)')
argparser.add_argument(
'-i', '--images-to-disk',
action='store_true',
help='save images to disk')
args = argparser.parse_args()
logging_config = {
'format': '%(levelname)s: %(message)s',
'level': logging.DEBUG if args.debug else logging.INFO
}
if args.log:
logging_config['filename'] = args.log
logging_config['filemode'] = 'w+'
logging.basicConfig(**logging_config)
logging.info('listening to 1st server at %s:%s', args.host1, args.port1)
logging.info('listening to 2nd server at %s:%s', args.host2, args.port2)
while True:
try:
run_carla_clients(args)
except TCPConnectionError as error:
logging.error(error)
time.sleep(1)
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
print('\nCancelled by user. Bye!')
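Note that the assertions above compare positions and controls with exact floating-point equality. That is deliberate: the test's definition of repeatability is that two servers given identical settings and seeds produce bitwise-identical simulations, not merely approximately similar ones.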

View File

@ -1,7 +1,7 @@
#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,6 +1,10 @@
CARLA Simulator
===============
[![Build Status](https://travis-ci.org/carla-simulator/carla.svg?branch=master)](https://travis-ci.org/carla-simulator/carla)
[![Documentation](https://readthedocs.org/projects/docs/badge/?version=latest)](https://carla.readthedocs.io)
[![Waffle.io](https://badge.waffle.io/carla-simulator/carla.svg?columns=Next,In%20Progress,Review)](https://waffle.io/carla-simulator/carla)
CARLA is an open-source simulator for autonomous driving research. CARLA has
been developed from the ground up to support development, training, and
validation of autonomous urban driving systems. In addition to open-source code
@ -16,9 +20,15 @@ environmental conditions.
For instructions on how to use and compile CARLA, check out
[CARLA Documentation](http://carla.readthedocs.io).
If you want to benchmark your model in the same conditions as in our CoRL17
paper, check out
[Benchmarking](http://carla.readthedocs.io/en/latest/benchmark/).
News
----
- 25.01.2018 CARLA 0.7.1 released: [change log](https://github.com/carla-simulator/carla/blob/master/CHANGELOG.md#carla-071), [release](https://github.com/carla-simulator/carla/releases/tag/0.7.1).
- 22.01.2018 Job opening: [C++ (UE4) Programmer](https://drive.google.com/open?id=1Hx0eUgpXl95d4IL9meEGhJECgSRos1T1).
- 28.11.2017 CARLA 0.7.0 released: [change log](https://github.com/carla-simulator/carla/blob/master/CHANGELOG.md#carla-070), [release](https://github.com/carla-simulator/carla/releases/tag/0.7.0).
- 15.11.2017 CARLA 0.6.0 released: [change log](https://github.com/carla-simulator/carla/blob/master/CHANGELOG.md#carla-060), [release](https://github.com/carla-simulator/carla/releases/tag/0.6.0).
@ -55,6 +65,42 @@ Felipe Codevilla, Antonio Lopez, Vladlen Koltun; PMLR 78:1-16
}
```
Building CARLA
--------------
Use `git clone` or download the project from this page. Note that the master
branch contains the latest fixes and features; for the latest stable code it is
best to switch to the `stable` branch.
Then follow the instructions at [How to build on Linux][buildlink].
Unfortunately we don't yet have official instructions for building on other
platforms; please check the progress for [Windows][issue21] and [Mac][issue150].
[buildlink]: http://carla.readthedocs.io/en/latest/how_to_build_on_linux
[issue21]: https://github.com/carla-simulator/carla/issues/21
[issue150]: https://github.com/carla-simulator/carla/issues/150
Contributing
------------
Please take a look at our [Contribution guidelines][contriblink].
[contriblink]: http://carla.readthedocs.io/en/latest/CONTRIBUTING
F.A.Q.
------
If you run into problems, check our
[FAQ](http://carla.readthedocs.io/en/latest/faq/).
Jobs
----
We are currently looking for a new programmer to join our team:
* [C++ (UE4) Programmer](https://drive.google.com/open?id=1Hx0eUgpXl95d4IL9meEGhJECgSRos1T1)
License
-------

View File

@ -1,6 +1,46 @@
#!/bin/bash
################################################################################
# Updates CARLA content.
################################################################################
set -e
DOC_STRING="Update CARLA content to the latest version, to be run after 'git pull'."
USAGE_STRING="Usage: $0 [-h|--help] [--no-editor]"
# ==============================================================================
# -- Parse arguments -----------------------------------------------------------
# ==============================================================================
LAUNCH_UE4_EDITOR=true
OPTS=`getopt -o h --long help,no-editor -n 'parse-options' -- "$@"`
if [ $? != 0 ] ; then echo "$USAGE_STRING" ; exit 2 ; fi
eval set -- "$OPTS"
while true; do
case "$1" in
--no-editor )
LAUNCH_UE4_EDITOR=false;
shift ;;
-h | --help )
echo "$DOC_STRING"
echo "$USAGE_STRING"
exit 1
;;
* )
break ;;
esac
done
# ==============================================================================
# -- Set up environment --------------------------------------------------------
# ==============================================================================
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
pushd "$SCRIPT_DIR" >/dev/null
@ -60,8 +100,15 @@ set -e
log "Build CarlaUE4 project..."
make CarlaUE4Editor
log "Launching UE4Editor..."
${UE4_ROOT}/Engine/Binaries/Linux/UE4Editor "${PWD}/CarlaUE4.uproject"
if $LAUNCH_UE4_EDITOR ; then
log "Launching UE4Editor..."
${UE4_ROOT}/Engine/Binaries/Linux/UE4Editor "${PWD}/CarlaUE4.uproject"
else
echo ""
echo "****************"
echo "*** Success! ***"
echo "****************"
fi
popd >/dev/null

View File

@ -1,10 +1,14 @@
#! /bin/bash
################################################################################
# CARLA Util Setup
# CARLA Setup.sh
#
# This downloads and compiles libc++. So we can build and compile our
# dependencies with libc++ for linking against Unreal.
# This script sets up the environment and dependencies for compiling CARLA on
# Linux.
#
# 1) Download CARLA Content if necessary.
# 2) Download and compile libc++.
# 3) Download other third-party libraries and compile them with libc++.
#
# Thanks to the people at https://github.com/Microsoft/AirSim for providing the
# important parts of this script.
@ -12,6 +16,10 @@
set -e
# ==============================================================================
# -- Set up environment --------------------------------------------------------
# ==============================================================================
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" && pwd )"
pushd "$SCRIPT_DIR" >/dev/null
@ -22,9 +30,6 @@ command -v clang++-3.9 >/dev/null 2>&1 || {
exit 1;
}
# Update content.
./Update.sh
mkdir -p Util/Build
pushd Util/Build >/dev/null
@ -161,6 +166,13 @@ fi
./Util/Protoc.sh
# ==============================================================================
# -- Update CARLA Content ------------------------------------------------------
# ==============================================================================
echo
./Update.sh "$@"
# ==============================================================================
# -- ...and we are done --------------------------------------------------------
# ==============================================================================

View File

@ -2,8 +2,8 @@
ProjectID=675BF8694238308FA9368292CC440350
ProjectName=CARLA UE4
CompanyName=CVC
CopyrightNotice="Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonomade Barcelona (UAB), and the INTEL Visual Computing Lab.This work is licensed under the terms of the MIT license.For a copy, see <https://opensource.org/licenses/MIT>."
ProjectVersion=0.7.0
CopyrightNotice="Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de Barcelona (UAB). This work is licensed under the terms of the MIT license. For a copy, see <https://opensource.org/licenses/MIT>."
ProjectVersion=0.7.1
[/Script/UnrealEd.ProjectPackagingSettings]
BuildConfiguration=PPBC_Development

View File

@ -1,7 +1,7 @@
{
"FileVersion": 3,
"Version": 1,
"VersionName": "0.7.0",
"VersionName": "0.7.1",
"FriendlyName": "CARLA",
"Description": "",
"Category": "Science",

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.
@ -104,7 +104,7 @@ void AVehicleSpawnerBase::SpawnVehicleAtSpawnPoint(
Vehicle->SpawnDefaultController();
auto Controller = GetController(Vehicle);
if (Controller != nullptr) { // Sometimes fails...
Controller->SetRandomEngine(GetRandomEngine());
Controller->GetRandomEngine()->Seed(GetRandomEngine()->GenerateSeed());
Controller->SetRoadMap(GetRoadMap());
Controller->SetAutopilot(true);
Vehicles.Add(Vehicle);

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.
@ -73,6 +73,8 @@ static void ClearQueue(std::queue<T> &Queue)
AWheeledVehicleAIController::AWheeledVehicleAIController(const FObjectInitializer& ObjectInitializer) :
Super(ObjectInitializer)
{
RandomEngine = CreateDefaultSubobject<URandomEngine>(TEXT("RandomEngine"));
PrimaryActorTick.bCanEverTick = true;
PrimaryActorTick.TickGroup = TG_PrePhysics;
}

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.
@ -98,14 +98,10 @@ public:
/// @{
public:
void SetRandomEngine(URandomEngine *InRandomEngine)
{
RandomEngine = InRandomEngine;
}
UFUNCTION(Category = "Random Engine", BlueprintCallable)
URandomEngine *GetRandomEngine()
{
check(RandomEngine != nullptr);
return RandomEngine;
}
@ -221,13 +217,13 @@ private:
private:
UPROPERTY()
ACarlaWheeledVehicle *Vehicle;
ACarlaWheeledVehicle *Vehicle = nullptr;
UPROPERTY()
URoadMap *RoadMap;
URoadMap *RoadMap = nullptr;
UPROPERTY()
URandomEngine *RandomEngine;
URandomEngine *RandomEngine = nullptr;
UPROPERTY(VisibleAnywhere)
bool bAutopilotEnabled = false;

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -0,0 +1,29 @@
// Fill out your copyright notice in the Description page of Project Settings.
#pragma once
#include "CapturedLidarSegment.generated.h"
///
/// Lidar segment captured during a single tick.
///
USTRUCT()
struct FCapturedLidarLaserSegment
{
GENERATED_USTRUCT_BODY()
UPROPERTY(VisibleAnywhere)
TArray<FVector> Points;
};
USTRUCT()
struct FCapturedLidarSegment
{
GENERATED_USTRUCT_BODY()
UPROPERTY(VisibleAnywhere)
float HorizontalAngle = 0;
UPROPERTY(VisibleAnywhere)
TArray<FCapturedLidarLaserSegment> LidarLasersSegments;
};
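For intuition, a hedged Python mirror of the two USTRUCTs above (illustrative only, not part of the client API); each captured segment covers one horizontal angle and holds one point array per laser channel:

    from dataclasses import dataclass, field
    from typing import List, Tuple

    Vector = Tuple[float, float, float]  # stands in for UE4's FVector

    @dataclass
    class CapturedLidarLaserSegment:
        points: List[Vector] = field(default_factory=list)

    @dataclass
    class CapturedLidarSegment:
        horizontal_angle: float = 0.0
        lidar_lasers_segments: List[CapturedLidarLaserSegment] = field(
            default_factory=list)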

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.
@ -176,7 +176,8 @@ void ACarlaGameModeBase::BeginPlay()
VehicleSpawner->SetSeed(CarlaSettings.SeedVehicles);
VehicleSpawner->SetRoadMap(RoadMap);
if (PlayerController != nullptr) {
PlayerController->SetRandomEngine(VehicleSpawner->GetRandomEngine());
PlayerController->GetRandomEngine()->Seed(
VehicleSpawner->GetRandomEngine()->GenerateSeed());
}
} else {
UE_LOG(LogCarla, Error, TEXT("Missing vehicle spawner actor!"));
@ -224,6 +225,10 @@ void ACarlaGameModeBase::AttachCaptureCamerasToPlayer()
for (const auto &Item : Settings.CameraDescriptions) {
PlayerController->AddSceneCaptureCamera(Item.Value, OverridePostProcessParameters);
}
for (const auto &Item : Settings.LidarDescriptions) {
PlayerController->AddSceneCaptureLidar(Item.Value);
}
}
void ACarlaGameModeBase::TagActorsForSemanticSegmentation()

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab.
// de Barcelona (UAB).
//
// This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>.

Some files were not shown because too many files have changed in this diff.