Merge branch 'master' into build-sytem

Commit fcb32a96b2 by nsubiron, 2018-01-23 16:02:24 +01:00
187 changed files with 2744 additions and 347 deletions

.gitignore

@@ -1,9 +1,11 @@
 Dist
 Doxygen
+PythonClient/dist
 Util/Build
 *.VC.db
 *.VC.opendb
+*.egg-info
 *.kdev4
 *.log
 *.pb.cc
@@ -22,5 +24,6 @@ Util/Build
 .tags*
 .vs
 __pycache__
-_images
+_benchmarks_results
+_images*
 core

.travis.yml

@@ -24,4 +24,4 @@ matrix:
     packages:
       - cppcheck
 script:
-  - cppcheck Unreal/CarlaUE4/Source Unreal/CarlaUE4/Plugins/Carla/Source Util/ -iUtil/Build -iUtil/CarlaServer/source/carla/server/carla_server.pb.cc --quiet --error-exitcode=1 --enable=warning
+  - cppcheck . -iBuild -i.pb.cc --error-exitcode=1 --enable=warning --quiet

Docs/CODE_OF_CONDUCT.md (new file)

@@ -0,0 +1,73 @@
# Contributor Covenant Code of Conduct
## Our Pledge
In the interest of fostering an open and welcoming environment, we as
contributors and maintainers pledge to making participation in our project and
our community a harassment-free experience for everyone, regardless of age, body
size, disability, ethnicity, gender identity and expression, level of experience,
education, socio-economic status, nationality, personal appearance, race,
religion, or sexual identity and orientation.
## Our Standards
Examples of behavior that contributes to creating a positive environment
include:
* Using welcoming and inclusive language
* Being respectful of differing viewpoints and experiences
* Gracefully accepting constructive criticism
* Focusing on what is best for the community
* Showing empathy towards other community members
Examples of unacceptable behavior by participants include:
* The use of sexualized language or imagery and unwelcome sexual attention or
advances
* Trolling, insulting/derogatory comments, and personal or political attacks
* Public or private harassment
* Publishing others' private information, such as a physical or electronic
address, without explicit permission
* Other conduct which could reasonably be considered inappropriate in a
professional setting
## Our Responsibilities
Project maintainers are responsible for clarifying the standards of acceptable
behavior and are expected to take appropriate and fair corrective action in
response to any instances of unacceptable behavior.
Project maintainers have the right and responsibility to remove, edit, or
reject comments, commits, code, wiki edits, issues, and other contributions
that are not aligned to this Code of Conduct, or to ban temporarily or
permanently any contributor for other behaviors that they deem inappropriate,
threatening, offensive, or harmful.
## Scope
This Code of Conduct applies both within project spaces and in public spaces
when an individual is representing the project or its community. Examples of
representing a project or community include using an official project e-mail
address, posting via an official social media account, or acting as an appointed
representative at an online or offline event. Representation of a project may be
further defined and clarified by project maintainers.
## Enforcement
Instances of abusive, harassing, or otherwise unacceptable behavior may be
reported by contacting the project team at [INSERT EMAIL ADDRESS]. All
complaints will be reviewed and investigated and will result in a response that
is deemed necessary and appropriate to the circumstances. The project team is
obligated to maintain confidentiality with regard to the reporter of an incident.
Further details of specific enforcement policies may be posted separately.
Project maintainers who do not follow or enforce the Code of Conduct in good
faith may face temporary or permanent repercussions as determined by other
members of the project's leadership.
## Attribution
This Code of Conduct is adapted from the [Contributor Covenant][homepage], version 1.4,
available at https://www.contributor-covenant.org/version/1/4/code-of-conduct.html
[homepage]: https://www.contributor-covenant.org

Docs/CONTRIBUTING.md

@@ -1,23 +1,25 @@
 Contributing to CARLA
 =====================
+> _This document is a work in progress and might be incomplete._
 We are more than happy to accept contributions!
 How can I contribute?
 * Reporting bugs
 * Feature requests
+* Improving documentation
 * Code contributions
 Reporting bugs
 --------------
-Use our [issue section](issueslink) on GitHub. Please check before that the
-issue is not already reported.
+Use our [issue section][issueslink] on GitHub. Please check before that the
+issue is not already reported, and make sure you have read our
+[Documentation][docslink] and [FAQ][faqlink].
 [issueslink]: https://github.com/carla-simulator/carla/issues
+[docslink]: http://carla.readthedocs.io
+[faqlink]: http://carla.readthedocs.io/en/latest/faq/
 Feature requests
 ----------------
@@ -28,6 +30,25 @@ your request as a new issue.
 [frlink]: https://github.com/carla-simulator/carla/issues?q=is%3Aissue+is%3Aopen+label%3A%22feature+request%22
+Improving documentation
+-----------------------
+If you feel something is missing in the documentation, please don't hesitate to
+open an issue to let us know. Even better, if you think you can improve it
+yourself, it would be a great contribution to the community!
+We build our documentation with [MkDocs](http://www.mkdocs.org/) based on the
+Markdown files inside the "Docs" folder. You can either directly modify them on
+GitHub or locally in your machine.
+Once you are done with your changes, please submit a pull-request.
+**TIP:** You can build and serve it locally by running `mkdocs` in the project's
+main folder
+    $ sudo pip install mkdocs
+    $ mkdocs serve
 Code contributions
 ------------------
@@ -52,34 +73,33 @@ current documentation if you feel confident enough.
 #### Coding standard
-Please follow the current coding style when submitting new code.
+Please follow the current [coding standard](coding_standard.md) when submitting
+new code.
-###### General
-* Use spaces, not tabs.
-* Avoid adding trailing whitespace as it creates noise in the diffs.
-* Comments should not exceed 80 columns, code may exceed this limit a bit in rare occasions if it results in clearer code.
-###### Python
-* All code must be compatible with Python 2.7, 3.5, and 3.6.
-* [Pylint](https://www.pylint.org/) should not give any error or warning (few exceptions apply with external classes like `numpy`, see our `.pylintrc`).
-* Python code follows [PEP8 style guide](https://www.python.org/dev/peps/pep-0008/) (use `autopep8` whenever possible).
-###### C++
-* Compilation should not give any error or warning (`clang++ -Wall -Wextra -std=C++14`).
-* Unreal C++ code (CarlaUE4 and Carla plugin) follow the [Unreal Engine's Coding Standard](https://docs.unrealengine.com/latest/INT/Programming/Development/CodingStandard/) with the exception of using spaces instead of tabs.
-* CarlaServer uses [Google's style guide](https://google.github.io/styleguide/cppguide.html).
 #### Pull-requests
 Once you think your contribution is ready to be added to CARLA, please submit a
-pull-request to the `dev` branch.
+pull-request.
 Try to be as descriptive as possible when filling the pull-request description.
+Adding images and gifs may help people to understand your changes or new
+features.
 Please note that there are some checks that the new code is required to pass
 before we can do the merge. The checks are automatically run by the continuous
 integration system, you will see a green tick mark if all the checks succeeded.
 If you see a red mark, please correct your code accordingly.
+###### Checklist
+<!--
+If you modify this list please keep it up-to-date with pull_request_template.md
+-->
+- [ ] Your branch is up-to-date with the `master` branch and tested with latest changes
+- [ ] Extended the README / documentation, if necessary
+- [ ] Code compiles correctly
+- [ ] All tests passing
+- [ ] `make check`
+- [ ] `pylint --disable=R,C --rcfile=PythonClient/.pylintrc PythonClient/carla PythonClient/*.py`
+- [ ] `cppcheck . -iBuild -i.pb.cc --enable=warning`

Docs/benchmark.md (new file)

@@ -0,0 +1,66 @@
CARLA Benchmark
===============
Running the Benchmark
---------------------
The "carla" api provides a basic benchmarking system, that allows making several
tests on a certain agent. We already provide the same benchmark used in the CoRL
2017 paper. By running this benchmark you can compare the results of your agent
to the results obtained by the agents show in the paper.
Besides the requirements of the CARLA client, the benchmark package also needs
the `future` package, which can be installed with
$ sudo pip install future
Running the benchmark will test a default agent that just goes straight. To run
the benchmark you need a server running; for a default localhost server on port
2000, you just need to run
$ ./run_benchmark.py
or
$ python run_benchmark.py
Run the help command to see the available options
$ ./run_benchmark.py --help
Benchmarking your Agent
-----------------------
The benchmark works by calling three lines of code
corl = CoRL2017(city_name=args.city_name, name_to_save=args.log_name)
agent = Manual(args.city_name)
results = corl.benchmark_agent(agent, client)
This excerpt is executed in the [run_benchmark.py](https://github.com/carla-simulator/carla/blob/master/PythonClient/run_benchmark.py) example.
First, a *benchmark* object is defined; in this case, a CoRL2017 benchmark. This object is used to benchmark a certain Agent. <br>
On the second line of our sample code, an object of the Manual class is instantiated. This class inherits from the Agent base class
that is used by the *benchmark* object.
To be benchmarked, an Agent subclass must redefine the *run_step* function as it is done in the following excerpt:
def run_step(self, measurements, sensor_data, target):
"""
Function to run a control step in the CARLA vehicle.
:param measurements: object of the Measurements type
:param sensor_data: images list object
:param target: target position of Transform type
:return: an object of the control type.
"""
control = VehicleControl()
control.throttle = 0.9
return control
The function receives measurements from the world, sensor data and a target position. With this, the function must return the control for the car, *i.e.* the steering value, throttle value, brake value, etc.
The [measurements](measurements.md), [target](measurements.md), [sensor_data](cameras_and_sensors.md) and [control](measurements.md) types are described on the documentation.
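For instance, a minimal hard-coded agent that always drives straight could look
like the following sketch (the class name `ForwardAgent` and the exact import
path of the `Agent` base class are assumptions for illustration):

    from carla.benchmarks.agent import Agent  # import path assumed
    from carla.client import VehicleControl

    class ForwardAgent(Agent):
        """Hypothetical agent that ignores its input and drives straight."""

        def run_step(self, measurements, sensor_data, target):
            control = VehicleControl()
            control.throttle = 0.9  # constant throttle, no steering or braking
            return control

Such an agent can then be passed to *benchmark_agent* in place of the Manual
agent shown above.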
Creating your Benchmark
-----------------------
Tutorial to be added

Docs/coding_standard.md (new file)

@@ -0,0 +1,25 @@
Coding standard
===============
> _This document is a work in progress and might be incomplete._
General
-------
* Use spaces, not tabs.
* Avoid adding trailing whitespace as it creates noise in the diffs.
* Comments should not exceed 80 columns, code may exceed this limit a bit in rare occasions if it results in clearer code.
Python
------
* All code must be compatible with Python 2.7, 3.5, and 3.6.
* [Pylint](https://www.pylint.org/) should not give any error or warning (few exceptions apply with external classes like `numpy`, see our `.pylintrc`).
* Python code follows [PEP8 style guide](https://www.python.org/dev/peps/pep-0008/) (use `autopep8` whenever possible).
C++
---
* Compilation should not give any error or warning (`clang++ -Wall -Wextra -std=c++14`).
* Unreal C++ code (CarlaUE4 and Carla plugin) follows the [Unreal Engine's Coding Standard](https://docs.unrealengine.com/latest/INT/Programming/Development/CodingStandard/) with the exception of using spaces instead of tabs.
* CarlaServer uses [Google's style guide](https://google.github.io/styleguide/cppguide.html).

Docs/faq.md

@@ -1,7 +1,20 @@
-CARLA F.A.Q.
-============
+<!-- ======================================================================= -->
+<details>
+<summary><h5 style="display:inline">
+What is the recommended hardware to run CARLA?
+</h5></summary>
-#### What is the expected disk space needed for building CARLA?
+CARLA is a very performance demanding software, at the very minimum you would
+need a computer with a dedicated GPU capable of running Unreal Engine. See
+[Unreal Engine's recommended hardware](https://wiki.unrealengine.com/Recommended_Hardware).
+</details>
+<!-- ======================================================================= -->
+<details>
+<summary><h5 style="display:inline">
+What is the expected disk space needed for building CARLA?
+</h5></summary>
 Building CARLA from source requires about 15GB of disk space, not counting
 Unreal Engine installation.
@@ -10,24 +23,27 @@ However, you will also need to build and install Unreal Engine, which on Linux
 requires much more disk space as it keeps all the intermediate files,
 [see this thread](https://answers.unrealengine.com/questions/430541/linux-engine-size.html).
-#### Is it possible to dump images from the CARLA server view?
+</details>
-Yes, this is an Unreal Engine feature. You can dump the images of the server
-camera by running CARLA with
+<!-- ======================================================================= -->
+<details>
+<summary><h5 style="display:inline">
-    $ ./CarlaUE4.sh -benchmark -fps=30 -dumpmovie
+I downloaded CARLA source from GitHub, where is the "CarlaUE4.sh" script?
+</h5></summary>
-Images are saved to "CarlaUE4/Saved/Screenshots/LinuxNoEditor".
-#### I downloaded CARLA source from GitHub, where is the "CarlaUE4.sh" script?
 There is no "CarlaUE4.sh" script in the source version of CARLA, you need to
-follow the instructions in the [documentation](http://carla.readthedocs.io) on
+follow the instructions in the [documentation](http://carla.readthedocs.io) for
 building CARLA from source.
 Once you open the project in the Unreal Editor, you can hit Play to test CARLA.
-#### Can I skip the download step in Setup.sh?
+</details>
+<!-- ======================================================================= -->
+<details>
+<summary><h5 style="display:inline">
+Setup.sh fails to download content, can I skip this step?
+</h5></summary>
 It is possible to skip the download step by passing the `-s` argument to the
 setup script
@@ -40,9 +56,99 @@ for instructions or run
     $ ./Update.sh -s
-#### How can I create a binary version of CARLA?
+</details>
+<!-- ======================================================================= -->
+<details>
+<summary><h5 style="display:inline">
+Can I run the server from within Unreal Editor?
+</h5></summary>
+Yes, you can connect the Python client to a server running within Unreal Editor
+as if it was the standalone server.
+Go to **"Unreal/CarlaUE4/Config/CarlaSettings.ini"** (this file should have been
+created by the Setup.sh) and enable networking. If for whatever reason you don't
+have this file, just create it and add the following
+```ini
+[CARLA/Server]
+UseNetworking=true
+```
+Now when you hit Play the editor will hang until a client connects.
+</details>
+<!-- ======================================================================= -->
+<details>
+<summary><h5 style="display:inline">
+Why does Unreal Editor hang after hitting Play?
+</h5></summary>
+This is most probably happening because CARLA is starting in server mode. Check
+your **"Unreal/CarlaUE4/Config/CarlaSettings.ini"** and set
+```ini
+[CARLA/Server]
+UseNetworking=false
+```
+</details>
+<!-- ======================================================================= -->
+<details>
+<summary><h5 style="display:inline">
+How can I create a binary version of CARLA?
+</h5></summary>
 To compile a binary (packaged) version of CARLA, open the CarlaUE4 project with
-Unreal Editor, go to the menu “File -> Package Project”, and select your
+Unreal Editor, go to the menu "File -> Package Project", and select your
 platform. This takes a while, but in the end it should generate a packaged
 version of CARLA to execute without Unreal Editor.
+</details>
+<!-- ======================================================================= -->
+<details>
+<summary><h5 style="display:inline">
+Why do I have very low FPS when running the server in Unreal Editor?
+</h5></summary>
+UE4 Editor goes to a low performance mode when out of focus. It can be disabled
+in the editor preferences. Go to "Edit->Editor Preferences->Performance" and
+disable the "Use Less CPU When in Background" option.
+</details>
+<!-- ======================================================================= -->
+<details>
+<summary><h5 style="display:inline">
+Is it possible to dump images from the CARLA server view?
+</h5></summary>
+Yes, this is an Unreal Engine feature. You can dump the images of the server
+camera by running CARLA with
+    $ ./CarlaUE4.sh -benchmark -fps=30 -dumpmovie
+Images are saved to "CarlaUE4/Saved/Screenshots/LinuxNoEditor".
+</details>
+<!-- ======================================================================= -->
+<details>
+<summary><h5 style="display:inline">
+Fatal error: 'version.h' has been modified since the precompiled header.
+</h5></summary>
+This happens from time to time due to Linux updates. It is possible to force a
+rebuild of all the project files with
+    $ cd Unreal/CarlaUE4/
+    $ make CarlaUE4Editor ARGS=-clean
+    $ make CarlaUE4Editor
+It takes a long time but fixes the issue. Sometimes a reboot is also needed.
+</details>


@@ -38,15 +38,9 @@ The "carla" Python module provides a basic API for communicating with the CARLA
 server. In the "PythonClient" folder we provide a couple of examples on how to
 use this API. We recommend Python 3, but they are also compatible with Python 2.
-The basic functionality requires only the protobuf module to be installed
-    $ sudo apt-get install python3 python3-pip
-    $ sudo pip3 install protobuf
-However, other operations as handling images require some extra modules, and the
-"manual_control.py" example requires pygame
-    $ sudo pip3 install numpy Pillow pygame
+Install the dependencies with
+    $ pip install -r PythonClient/requirements.txt
 The script "PythonClient/client_example.py" provides basic functionality for
 controlling the vehicle and saving images to disk. Run the help command to see

Docs/index.md

@@ -7,8 +7,8 @@ CARLA Documentation
 * [CARLA settings](carla_settings.md)
 * [Measurements](measurements.md)
 * [Cameras and sensors](cameras_and_sensors.md)
+* [Benchmark](benchmark.md)
 * [F.A.Q.](faq.md)
-* [Troubleshooting](troubleshooting.md)
 #### Building from source
@@ -19,6 +19,8 @@ CARLA Documentation
 #### Contributing
 * [Contribution guidelines](CONTRIBUTING.md)
+* [Coding standard](coding_standard.md)
+* [Code of conduct](CODE_OF_CONDUCT.md)
 #### Development

Docs/issue_template.md (new file)

@@ -0,0 +1,9 @@
<!--
Thanks for contributing to CARLA!
If you are asking a question please make sure your question was not asked before
by searching among the existing issues. Also make sure you have read our
documentation and FAQ at carla.readthedocs.io.
-->

pull_request_template.md (new file)

@@ -0,0 +1,32 @@
<!--
Thanks for sending a pull request! Please make sure you click the link above to
view the contribution guidelines, then fill out the blanks below.
Checklist:
- [ ] Your branch is up-to-date with the `master` branch and tested with latest changes
- [ ] Extended the README / documentation, if necessary
- [ ] Code compiles correctly
- [ ] All tests passing
- [ ] `make check`
- [ ] `pylint --disable=R,C --rcfile=PythonClient/.pylintrc PythonClient/carla PythonClient/*.py`
- [ ] `cppcheck . -iBuild -i.pb.cc --enable=warning`
-->
#### Description
<!-- Please explain the changes you made here as detailed as possible. -->
Fixes # <!-- If fixes an issue, please add here the issue number. -->
#### Where has this been tested?
* **Platform(s):** ...
* **Python version(s):** ...
* **Unreal Engine version(s):** ...
#### Possible Drawbacks
<!-- What are the possible side-effects or negative impacts of the code change? -->

Docs/troubleshooting.md (deleted)

@ -1,41 +0,0 @@
Troubleshooting
===============
#### Editor hangs after hitting Play
This is most probably happening because CARLA is started in server mode. Check
in your CarlaSettings.ini file ("./Unreal/CarlaUE4/Config/CarlaSettings.ini")
and set
```ini
[CARLA/Server]
UseNetworking=false
```
#### Very low FPS in editor when not in focus
UE4 Editor goes to a low performance mode when out of focus. It can be disabled
in the editor preferences. Go to "Edit->Editor Preferences->Performance" and
disable the "Use Less CPU When in Background" option.
#### Fatal error: file '/usr/include/linux/version.h' has been modified since the precompiled header
This happens from time to time due to Linux updates. It is possible to force a
rebuild of all the project files with
$ cd Unreal/CarlaUE4/
$ make CarlaUE4Editor ARGS=-clean
$ make CarlaUE4Editor
#### Setup.sh fails to download content
It is possible to skip the download step by passing the `-s` argument to the
setup script
$ ./Setup.sh -s
Bear in mind that if you do so, you are supposed to manually download and
extract the content package yourself, check out the last output of the Setup.sh
for instructions or run
$ ./Update.sh -s

LICENSE

@@ -1,7 +1,7 @@
 MIT License
 Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
-Barcelona (UAB), and the INTEL Visual Computing Lab.
+Barcelona (UAB).
 Permission is hereby granted, free of charge, to any person obtaining a copy
 of this software and associated documentation files (the "Software"), to deal

PythonClient/.pep8 (new file)

@@ -0,0 +1,2 @@
[pep8]
max-line-length = 120

PythonClient/MANIFEST.in (new file)

@@ -0,0 +1,2 @@
include carla/planner/*.txt
include carla/planner/*.png

PythonClient/carla/benchmarks/agent.py (new file)

@@ -0,0 +1,38 @@
#!/usr/bin/env python2
# -*- coding: utf-8 -*-
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
# @author: german,felipecode
from __future__ import print_function
import abc
from carla.planner.planner import Planner
class Agent(object):
def __init__(self, city_name):
self.__metaclass__ = abc.ABCMeta
self._planner = Planner(city_name)
def get_distance(self, start_point, end_point):
path_distance = self._planner.get_shortest_path_distance(
[start_point.location.x, start_point.location.y, 22]
, [start_point.orientation.x, start_point.orientation.y, 22]
, [end_point.location.x, end_point.location.y, 22]
, [end_point.orientation.x, end_point.orientation.y, 22])
        # The episode timeout is later calculated based on this distance
return path_distance
@abc.abstractmethod
def run_step(self, measurements, sensor_data, target):
"""
Function to be redefined by an agent.
:param The measurements like speed, the image data and a target
:returns A carla Control object, with the steering/gas/brake for the agent
"""

PythonClient/carla/benchmarks/benchmark.py (new file)

@@ -0,0 +1,377 @@
#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import csv
import datetime
import math
import os
import abc
import logging
from builtins import input as input_data
from carla.client import VehicleControl
def sldist(c1, c2):
return math.sqrt((c2[0] - c1[0])**2 + (c2[1] - c1[1])**2)
class Benchmark(object):
"""
The Benchmark class, controls the execution of the benchmark by an
Agent class.
The benchmark class must be inherited
"""
def __init__(
self,
city_name,
name_to_save,
continue_experiment=False,
save_images=False
):
self.__metaclass__ = abc.ABCMeta
self._city_name = city_name
self._base_name = name_to_save
self._dict_stats = {'exp_id': -1,
'rep': -1,
'weather': -1,
'start_point': -1,
'end_point': -1,
'result': -1,
'initial_distance': -1,
'final_distance': -1,
'final_time': -1,
'time_out': -1
}
self._dict_rewards = {'exp_id': -1,
'rep': -1,
'weather': -1,
'collision_gen': -1,
'collision_ped': -1,
'collision_car': -1,
'lane_intersect': -1,
'sidewalk_intersect': -1,
'pos_x': -1,
'pos_y': -1
}
self._experiments = self._build_experiments()
# Create the log files and get the names
self._suffix_name, self._full_name = self._create_log_record(name_to_save, self._experiments)
# Get the line for the experiment to be continued
self._line_on_file = self._continue_experiment(continue_experiment)
self._save_images = save_images
self._image_filename_format = os.path.join(
self._full_name, '_images/episode_{:s}/{:s}/image_{:0>5d}.jpg')
def run_navigation_episode(
self,
agent,
carla,
time_out,
target,
episode_name):
measurements, sensor_data = carla.read_data()
carla.send_control(VehicleControl())
t0 = measurements.game_timestamp
t1 = t0
success = False
measurement_vec = []
frame = 0
distance = 10000
        while (t1 - t0) < (time_out * 1000) and not success:
measurements, sensor_data = carla.read_data()
control = agent.run_step(measurements, sensor_data, target)
logging.info("Controller is Inputting:")
logging.info('Steer = %f Throttle = %f Brake = %f ',
control.steer, control.throttle, control.brake)
carla.send_control(control)
# measure distance to target
if self._save_images:
for name, image in sensor_data.items():
image.save_to_disk(self._image_filename_format.format(
episode_name, name, frame))
curr_x = measurements.player_measurements.transform.location.x
curr_y = measurements.player_measurements.transform.location.y
measurement_vec.append(measurements.player_measurements)
t1 = measurements.game_timestamp
distance = sldist([curr_x, curr_y],
[target.location.x, target.location.y])
logging.info('Status:')
logging.info(
'[d=%f] c_x = %f, c_y = %f ---> t_x = %f, t_y = %f',
float(distance), curr_x, curr_y, target.location.x,
target.location.y)
if distance < 200.0:
success = True
frame += 1
if success:
return 1, measurement_vec, float(t1 - t0) / 1000.0, distance
return 0, measurement_vec, time_out, distance
def benchmark_agent(self, agent, carla):
if self._line_on_file == 0:
# The fixed name considering all the experiments being run
with open(os.path.join(self._full_name,
self._suffix_name), 'w') as ofd:
w = csv.DictWriter(ofd, self._dict_stats.keys())
w.writeheader()
with open(os.path.join(self._full_name,
'details_' + self._suffix_name), 'w') as rfd:
rw = csv.DictWriter(rfd, self._dict_rewards.keys())
rw.writeheader()
start_task = 0
start_pose = 0
else:
(start_task, start_pose) = self._get_pose_and_task(self._line_on_file)
logging.info(' START ')
for experiment in self._experiments[start_task:]:
positions = carla.load_settings(
experiment.conditions).player_start_spots
for pose in experiment.poses[start_pose:]:
for rep in range(experiment.repetitions):
start_point = pose[0]
end_point = pose[1]
carla.start_episode(start_point)
logging.info('======== !!!! ==========')
logging.info(' Start Position %d End Position %d ',
start_point, end_point)
path_distance = agent.get_distance(
positions[start_point], positions[end_point])
euclidean_distance = \
sldist([positions[start_point].location.x, positions[start_point].location.y],
[positions[end_point].location.x, positions[end_point].location.y])
time_out = self._calculate_time_out(path_distance)
# running the agent
(result, reward_vec, final_time, remaining_distance) = \
self.run_navigation_episode(
agent, carla, time_out, positions[end_point],
str(experiment.Conditions.WeatherId) + '_'
+ str(experiment.id) + '_' + str(start_point)
+ '.' + str(end_point))
# compute stats for the experiment
self._write_summary_results(
experiment, pose, rep, euclidean_distance,
remaining_distance, final_time, time_out, result)
self._write_details_results(experiment, rep, reward_vec)
                    if result > 0:
logging.info('+++++ Target achieved in %f seconds! +++++',
final_time)
else:
logging.info('----- Timeout! -----')
return self.get_all_statistics()
def _write_summary_results(self, experiment, pose, rep,
path_distance, remaining_distance,
final_time, time_out, result):
self._dict_stats['exp_id'] = experiment.id
self._dict_stats['rep'] = rep
self._dict_stats['weather'] = experiment.Conditions.WeatherId
self._dict_stats['start_point'] = pose[0]
self._dict_stats['end_point'] = pose[1]
self._dict_stats['result'] = result
self._dict_stats['initial_distance'] = path_distance
self._dict_stats['final_distance'] = remaining_distance
self._dict_stats['final_time'] = final_time
self._dict_stats['time_out'] = time_out
with open(os.path.join(self._full_name, self._suffix_name), 'a+') as ofd:
w = csv.DictWriter(ofd, self._dict_stats.keys())
w.writerow(self._dict_stats)
def _write_details_results(self, experiment, rep, reward_vec):
with open(os.path.join(self._full_name,
'details_' + self._suffix_name), 'a+') as rfd:
rw = csv.DictWriter(rfd, self._dict_rewards.keys())
for i in range(len(reward_vec)):
self._dict_rewards['exp_id'] = experiment.id
self._dict_rewards['rep'] = rep
self._dict_rewards['weather'] = experiment.Conditions.WeatherId
self._dict_rewards['collision_gen'] = reward_vec[
i].collision_other
self._dict_rewards['collision_ped'] = reward_vec[
i].collision_pedestrians
self._dict_rewards['collision_car'] = reward_vec[
i].collision_vehicles
self._dict_rewards['lane_intersect'] = reward_vec[
i].intersection_otherlane
self._dict_rewards['sidewalk_intersect'] = reward_vec[
i].intersection_offroad
self._dict_rewards['pos_x'] = reward_vec[
i].transform.location.x
self._dict_rewards['pos_y'] = reward_vec[
i].transform.location.y
rw.writerow(self._dict_rewards)
def _create_log_record(self, base_name, experiments):
"""
This function creates the log files for the benchmark.
"""
suffix_name = self._get_experiments_names(experiments)
full_name = os.path.join('_benchmarks_results',
base_name + '_'
+ self._get_details() + '/')
folder = os.path.dirname(full_name)
if not os.path.isdir(folder):
os.makedirs(folder)
        # Make a date file: it shows when this was modified and
        # the number of times the experiments were run.
        now = datetime.datetime.now()
        open(os.path.join(full_name, now.strftime("%Y%m%d%H%M")), 'w').close()
return suffix_name, full_name
def _continue_experiment(self, continue_experiment):
if self._experiment_exist():
if continue_experiment:
line_on_file = self._get_last_position()
else:
                # Ask first, to avoid accidentally overwriting previous results
answer = input_data("The experiment was already found in the files"
+ ", Do you want to continue (y/n)? \n"
)
if answer == 'Yes' or answer == 'y':
line_on_file = self._get_last_position()
else:
line_on_file = 0
else:
line_on_file = 0
return line_on_file
def _experiment_exist(self):
return os.path.isfile(self._full_name)
def _get_last_position(self):
with open(os.path.join(self._full_name, self._suffix_name)) as f:
return sum(1 for _ in f)
    # To be redefined in subclasses: how to calculate the timeout for an episode
@abc.abstractmethod
def _calculate_time_out(self, distance):
pass
@abc.abstractmethod
def _get_details(self):
"""
Get details
:return: a string with name and town of the subclass
"""
@abc.abstractmethod
def _build_experiments(self):
"""
Returns a set of experiments to be evaluated
Must be redefined in an inherited class.
"""
@abc.abstractmethod
def get_all_statistics(self):
"""
Get the statistics of the evaluated experiments
:return:
"""
@abc.abstractmethod
def _get_pose_and_task(self, line_on_file):
"""
Parse the experiment depending on number of poses and tasks
"""
@abc.abstractmethod
def plot_summary_train(self):
"""
returns the summary for the train weather/task episodes
"""
@abc.abstractmethod
def plot_summary_test(self):
"""
returns the summary for the test weather/task episodes
"""
@staticmethod
def _get_experiments_names(experiments):
name_cat = 'w'
for experiment in experiments:
name_cat += str(experiment.Conditions.WeatherId) + '.'
return name_cat
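
To make the interface above concrete, a subclass has to provide the hooks marked
with `@abc.abstractmethod`. A minimal skeleton might look like the sketch below
(the class name and the import path are assumptions; the CoRL2017 class in the
next file is the real example shipped with this commit):

```python
from carla.benchmarks.benchmark import Benchmark  # import path assumed

class StraightBenchmark(Benchmark):
    """Hypothetical benchmark skeleton."""

    def _calculate_time_out(self, distance):
        # e.g. allow an average speed of 10 km/h plus a 10 second margin
        return ((distance / 100000.0) / 10.0) * 3600.0 + 10.0

    def _get_details(self):
        return 'straight_' + self._city_name

    def _build_experiments(self):
        return []  # should return a list of Experiment objects

    def get_all_statistics(self):
        pass

    def _get_pose_and_task(self, line_on_file):
        return 0, 0

    def plot_summary_train(self):
        pass

    def plot_summary_test(self):
        pass
```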

PythonClient/carla/benchmarks/corl_2017.py (new file)

@@ -0,0 +1,203 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
# CORL experiment set.
from __future__ import print_function
import os
from .benchmark import Benchmark
from .experiment import Experiment
from carla.sensor import Camera
from carla.settings import CarlaSettings
from .metrics import compute_summary
class CoRL2017(Benchmark):
def get_all_statistics(self):
summary = compute_summary(os.path.join(
self._full_name, self._suffix_name), [3])
return summary
def plot_summary_train(self):
self._plot_summary([1.0, 3.0, 6.0, 8.0])
def plot_summary_test(self):
self._plot_summary([4.0, 14.0])
def _plot_summary(self, weathers):
"""
        We plot the summary of the testing for the selected set of weathers.
The test weathers are [4,14]
"""
metrics_summary = compute_summary(os.path.join(
self._full_name, self._suffix_name), [3])
for metric, values in metrics_summary.items():
print('Metric : ', metric)
for weather, tasks in values.items():
if weather in set(weathers):
print(' Weather: ', weather)
count = 0
for t in tasks:
print(' Task ', count, ' -> ', t)
count += 1
print(' AvG -> ', float(sum(tasks)) / float(len(tasks)))
    def _calculate_time_out(self, distance):
        """
        Function to return the timeout (in seconds) that is calculated based on distance to goal.
        This is the same timeout as used on the CoRL paper: the distance (in centimetres)
        is converted to kilometres and divided by an average speed of 10 km/h, plus a
        10 second margin.
        """
        return ((distance / 100000.0) / 10.0) * 3600.0 + 10.0
def _poses_town01(self):
"""
        Each matrix is a new task. We have all four tasks
"""
def _poses_straight():
return [[36, 40], [39, 35], [110, 114], [7, 3], [0, 4],
[68, 50], [61, 59], [47, 64], [147, 90], [33, 87],
[26, 19], [80, 76], [45, 49], [55, 44], [29, 107],
[95, 104], [84, 34], [53, 67], [22, 17], [91, 148],
[20, 107], [78, 70], [95, 102], [68, 44], [45, 69]]
def _poses_one_curve():
return [[138, 17], [47, 16], [26, 9], [42, 49], [140, 124],
[85, 98], [65, 133], [137, 51], [76, 66], [46, 39],
[40, 60], [0, 29], [4, 129], [121, 140], [2, 129],
[78, 44], [68, 85], [41, 102], [95, 70], [68, 129],
[84, 69], [47, 79], [110, 15], [130, 17], [0, 17]]
def _poses_navigation():
return [[105, 29], [27, 130], [102, 87], [132, 27], [24, 44],
[96, 26], [34, 67], [28, 1], [140, 134], [105, 9],
[148, 129], [65, 18], [21, 16], [147, 97], [42, 51],
[30, 41], [18, 107], [69, 45], [102, 95], [18, 145],
[111, 64], [79, 45], [84, 69], [73, 31], [37, 81]]
return [_poses_straight(),
_poses_one_curve(),
_poses_navigation(),
_poses_navigation()]
def _poses_town02(self):
def _poses_straight():
return [[38, 34], [4, 2], [12, 10], [62, 55], [43, 47],
[64, 66], [78, 76], [59, 57], [61, 18], [35, 39],
[12, 8], [0, 18], [75, 68], [54, 60], [45, 49],
[46, 42], [53, 46], [80, 29], [65, 63], [0, 81],
[54, 63], [51, 42], [16, 19], [17, 26], [77, 68]]
def _poses_one_curve():
return [[37, 76], [8, 24], [60, 69], [38, 10], [21, 1],
[58, 71], [74, 32], [44, 0], [71, 16], [14, 24],
[34, 11], [43, 14], [75, 16], [80, 21], [3, 23],
[75, 59], [50, 47], [11, 19], [77, 34], [79, 25],
[40, 63], [58, 76], [79, 55], [16, 61], [27, 11]]
def _poses_navigation():
return [[19, 66], [79, 14], [19, 57], [23, 1],
[53, 76], [42, 13], [31, 71], [33, 5],
[54, 30], [10, 61], [66, 3], [27, 12],
[79, 19], [2, 29], [16, 14], [5, 57],
[70, 73], [46, 67], [57, 50], [61, 49], [21, 12],
[51, 81], [77, 68], [56, 65], [43, 54]]
return [_poses_straight(),
_poses_one_curve(),
_poses_navigation(),
_poses_navigation()
]
def _build_experiments(self):
"""
Creates the whole set of experiment objects,
The experiments created depend on the selected Town.
"""
# We set the camera
# This single RGB camera is used on every experiment
camera = Camera('CameraRGB')
camera.set(CameraFOV=100)
camera.set_image_size(800, 600)
camera.set_position(200, 0, 140)
camera.set_rotation(-15.0, 0, 0)
weathers = [1, 3, 6, 8, 4, 14]
if self._city_name == 'Town01':
poses_tasks = self._poses_town01()
vehicles_tasks = [0, 0, 0, 20]
pedestrians_tasks = [0, 0, 0, 50]
else:
poses_tasks = self._poses_town02()
vehicles_tasks = [0, 0, 0, 15]
pedestrians_tasks = [0, 0, 0, 50]
experiments_vector = []
for weather in weathers:
for iteration in range(len(poses_tasks)):
poses = poses_tasks[iteration]
vehicles = vehicles_tasks[iteration]
pedestrians = pedestrians_tasks[iteration]
conditions = CarlaSettings()
conditions.set(
SynchronousMode=True,
SendNonPlayerAgentsInfo=True,
NumberOfVehicles=vehicles,
NumberOfPedestrians=pedestrians,
WeatherId=weather,
SeedVehicles=123456789,
SeedPedestrians=123456789
)
                # Add all the cameras that were set for these experiments
conditions.add_sensor(camera)
experiment = Experiment()
experiment.set(
Conditions=conditions,
Poses=poses,
Id=iteration,
Repetitions=1
)
experiments_vector.append(experiment)
return experiments_vector
def _get_details(self):
# Function to get automatic information from the experiment for writing purposes
return 'corl2017_' + self._city_name
def _get_pose_and_task(self, line_on_file):
"""
Returns the pose and task this experiment is, based on the line it was
on the log file.
"""
# We assume that the number of poses is constant
return int(line_on_file / len(self._experiments)), line_on_file % 25

PythonClient/carla/benchmarks/experiment.py (new file)

@@ -0,0 +1,38 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
from carla.settings import CarlaSettings
class Experiment(object):
def __init__(self):
self.Id = ''
self.Conditions = CarlaSettings()
self.Poses = [[]]
self.Repetitions = 1
def set(self, **kwargs):
for key, value in kwargs.items():
if not hasattr(self, key):
raise ValueError('Experiment: no key named %r' % key)
setattr(self, key, value)
@property
def id(self):
return self.Id
@property
def conditions(self):
return self.Conditions
@property
def poses(self):
return self.Poses
@property
def repetitions(self):
return self.Repetitions

PythonClient/carla/benchmarks/metrics.py (new file)

@@ -0,0 +1,205 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import numpy as np
import math
import os
sldist = lambda c1, c2: math.sqrt((c2[0] - c1[0])**2 + (c2[1] - c1[1])**2)
flatten = lambda l: [item for sublist in l for item in sublist]
def get_colisions(selected_matrix, header):
count_gen = 0
count_ped = 0
count_car = 0
i = 1
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_gen')]
- selected_matrix[(i-10), header.index('collision_gen')]) > 40000:
count_gen += 1
i += 20
i += 1
i = 1
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_car')]
- selected_matrix[(i-10), header.index('collision_car')]) > 40000:
count_car += 1
i += 30
i += 1
i = 1
while i < selected_matrix.shape[0]:
if (selected_matrix[i, header.index('collision_ped')]
- selected_matrix[i-5, header.index('collision_ped')]) > 30000:
count_ped += 1
i += 100
i += 1
return count_gen, count_car, count_ped
def get_distance_traveled(selected_matrix, header):
prev_x = selected_matrix[0, header.index('pos_x')]
prev_y = selected_matrix[0, header.index('pos_y')]
i = 1
acummulated_distance = 0
while i < selected_matrix.shape[0]:
x = selected_matrix[i, header.index('pos_x')]
y = selected_matrix[i, header.index('pos_y')]
        # Here we define a maximum distance per tick, in this case 8 meters (i.e. 288 km/h)
if sldist((x, y), (prev_x, prev_y)) < 800:
acummulated_distance += sldist((x, y), (prev_x, prev_y))
prev_x = x
prev_y = y
i += 1
return float(acummulated_distance)/float(100*1000)
def get_out_of_road_lane(selected_matrix, header):
count_road = 0
count_lane = 0
i = 0
while i < selected_matrix.shape[0]:
# print selected_matrix[i,6]
if (selected_matrix[i, header.index('sidewalk_intersect')]
- selected_matrix[(i-10), header.index('sidewalk_intersect')]) > 0.3:
count_road += 1
i += 20
if i >= selected_matrix.shape[0]:
break
if (selected_matrix[i, header.index('lane_intersect')]
- selected_matrix[(i-10), header.index('lane_intersect')]) > 0.4:
count_lane += 1
i += 20
i += 1
return count_lane, count_road
def compute_summary(filename, dynamic_episodes):
# Separate the PATH and the basename
path = os.path.dirname(filename)
base_name = os.path.basename(filename)
f = open(filename, "rb")
header = f.readline()
header = header.split(',')
header[-1] = header[-1][:-2]
f.close()
f = open(os.path.join(path, 'details_' + base_name), "rb")
header_details = f.readline()
header_details = header_details.split(',')
header_details[-1] = header_details[-1][:-2]
f.close()
data_matrix = np.loadtxt(open(filename, "rb"), delimiter=",", skiprows=1)
# Corner Case: The presented test just had one episode
if data_matrix.ndim == 1:
data_matrix = np.expand_dims(data_matrix, axis=0)
tasks = np.unique(data_matrix[:, header.index('exp_id')])
all_weathers = np.unique(data_matrix[:, header.index('weather')])
reward_matrix = np.loadtxt(open(os.path.join(
path, 'details_' + base_name), "rb"), delimiter=",", skiprows=1)
metrics_dictionary = {'average_completion': {w: [0.0]*len(tasks) for w in all_weathers},
'intersection_offroad': {w: [0.0]*len(tasks) for w in all_weathers},
'intersection_otherlane': {w: [0.0]*len(tasks) for w in all_weathers},
'collision_pedestrians': {w: [0.0]*len(tasks) for w in all_weathers},
'collision_vehicles': {w: [0.0]*len(tasks) for w in all_weathers},
'collision_other': {w: [0.0]*len(tasks) for w in all_weathers},
'average_fully_completed': {w: [0.0]*len(tasks) for w in all_weathers},
'average_speed': {w: [0.0]*len(tasks) for w in all_weathers},
'driven_kilometers': {w: [0.0]*len(tasks) for w in all_weathers}
}
for t in tasks:
task_data_matrix = data_matrix[
data_matrix[:, header.index('exp_id')] == t]
weathers = np.unique(task_data_matrix[:, header.index('weather')])
for w in weathers:
t = int(t)
task_data_matrix = data_matrix[np.logical_and(data_matrix[:, header.index(
'exp_id')] == t, data_matrix[:, header.index('weather')] == w)]
task_reward_matrix = reward_matrix[np.logical_and(reward_matrix[:, header_details.index(
'exp_id')] == float(t), reward_matrix[:, header_details.index('weather')] == float(w))]
km_run = get_distance_traveled(
task_reward_matrix, header_details)
metrics_dictionary['average_fully_completed'][w][t] = sum(
task_data_matrix[:, header.index('result')])/task_data_matrix.shape[0]
metrics_dictionary['average_completion'][w][t] = sum(
(task_data_matrix[:, header.index('initial_distance')]
- task_data_matrix[:, header.index('final_distance')])
/ task_data_matrix[:, header.index('initial_distance')]) \
/ len(task_data_matrix[:, header.index('final_distance')])
metrics_dictionary['driven_kilometers'][w][t]= km_run
metrics_dictionary['average_speed'][w][t]= km_run/ \
((sum(task_data_matrix[:, header.index('final_time')]))/3600.0)
if list(tasks).index(t) in set(dynamic_episodes):
lane_road = get_out_of_road_lane(
task_reward_matrix, header_details)
colisions = get_colisions(task_reward_matrix, header_details)
metrics_dictionary['intersection_offroad'][
w][t] = lane_road[0]/km_run
metrics_dictionary['intersection_otherlane'][
w][t] = lane_road[1]/km_run
metrics_dictionary['collision_pedestrians'][
w][t] = colisions[2]/km_run
metrics_dictionary['collision_vehicles'][
w][t] = colisions[1]/km_run
metrics_dictionary['collision_other'][
w][t] = colisions[0]/km_run
return metrics_dictionary
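
As a usage sketch, `compute_summary` can be pointed at the summary CSV written
by a benchmark run. The path below is hypothetical and the import path is an
assumption; the suffix name follows the `w<weather ids>` pattern produced by
`Benchmark._get_experiments_names`:

```python
from carla.benchmarks.metrics import compute_summary  # import path assumed

# '_benchmarks_results/<base name>/<suffix name>' is written by the Benchmark
# class; the exact file name below is made up for illustration.
summary = compute_summary(
    '_benchmarks_results/test_corl2017_Town01/w1.3.6.8.4.14.', [3])
for metric, values in summary.items():
    print(metric, values)
```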


@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.

(Binary file not shown: image added, 27 KiB)

(Binary file not shown: image added, 10 KiB)

PythonClient/carla/planner/astar.py (new file)

@@ -0,0 +1,156 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import heapq
class Cell(object):
def __init__(self, x, y, reachable):
"""Initialize new cell.
@param reachable is cell reachable? not a wall?
@param x cell x coordinate
@param y cell y coordinate
@param g cost to move from the starting cell to this cell.
@param h estimation of the cost to move from this cell
to the ending cell.
@param f f = g + h
"""
self.reachable = reachable
self.x = x
self.y = y
self.parent = None
self.g = 0
self.h = 0
self.f = 0
def __lt__(self, other):
return self.g < other.g
class AStar(object):
def __init__(self):
# open list
self.opened = []
heapq.heapify(self.opened)
# visited cells list
self.closed = set()
# grid cells
self.cells = []
self.grid_height = None
self.grid_width = None
self.start = None
self.end = None
def init_grid(self, width, height, walls, start, end):
"""Prepare grid cells, walls.
@param width grid's width.
@param height grid's height.
@param walls list of wall x,y tuples.
@param start grid starting point x,y tuple.
@param end grid ending point x,y tuple.
"""
self.grid_height = height
self.grid_width = width
for x in range(self.grid_width):
for y in range(self.grid_height):
if (x, y) in walls:
reachable = False
else:
reachable = True
self.cells.append(Cell(x, y, reachable))
self.start = self.get_cell(*start)
self.end = self.get_cell(*end)
def get_heuristic(self, cell):
"""Compute the heuristic value H for a cell.
        Distance between this cell and the ending cell, multiplied by 10.
@returns heuristic value H
"""
return 10 * (abs(cell.x - self.end.x) + abs(cell.y - self.end.y))
def get_cell(self, x, y):
"""Returns a cell from the cells list.
@param x cell x coordinate
@param y cell y coordinate
@returns cell
"""
return self.cells[x * self.grid_height + y]
def get_adjacent_cells(self, cell):
"""Returns adjacent cells to a cell.
Clockwise starting from the one on the right.
@param cell get adjacent cells for this cell
@returns adjacent cells list.
"""
cells = []
if cell.x < self.grid_width - 1:
cells.append(self.get_cell(cell.x + 1, cell.y))
if cell.y > 0:
cells.append(self.get_cell(cell.x, cell.y - 1))
if cell.x > 0:
cells.append(self.get_cell(cell.x - 1, cell.y))
if cell.y < self.grid_height - 1:
cells.append(self.get_cell(cell.x, cell.y + 1))
return cells
def get_path(self):
cell = self.end
path = [(cell.x, cell.y)]
while cell.parent is not self.start:
cell = cell.parent
path.append((cell.x, cell.y))
path.append((self.start.x, self.start.y))
path.reverse()
return path
def update_cell(self, adj, cell):
"""Update adjacent cell.
@param adj adjacent cell to current cell
@param cell current cell being processed
"""
adj.g = cell.g + 10
adj.h = self.get_heuristic(adj)
adj.parent = cell
adj.f = adj.h + adj.g
def solve(self):
"""Solve maze, find path to ending cell.
@returns path or None if not found.
"""
# add starting cell to open heap queue
heapq.heappush(self.opened, (self.start.f, self.start))
while len(self.opened):
# pop cell from heap queue
_, cell = heapq.heappop(self.opened)
# add cell to closed list so we don't process it twice
self.closed.add(cell)
# if ending cell, return found path
if cell is self.end:
return self.get_path()
# get adjacent cells for cell
adj_cells = self.get_adjacent_cells(cell)
for adj_cell in adj_cells:
if adj_cell.reachable and adj_cell not in self.closed:
if (adj_cell.f, adj_cell) in self.opened:
# if adj cell in open list, check if current path is
# better than the one previously found
# for this adj cell.
if adj_cell.g > cell.g + 10:
self.update_cell(adj_cell, cell)
else:
self.update_cell(adj_cell, cell)
# add adj cell to open list
heapq.heappush(self.opened, (adj_cell.f, adj_cell))
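
A small usage sketch of this solver (the 3x3 grid and the wall position are
made up for illustration):

```python
# Find a path on a 3x3 grid with a single unreachable cell in the middle.
astar = AStar()
astar.init_grid(3, 3, walls=[(1, 1)], start=(0, 0), end=(2, 2))
path = astar.solve()
print(path)  # e.g. [(0, 0), (0, 1), (0, 2), (1, 2), (2, 2)]
```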

PythonClient/carla/planner/city_track.py (new file)

@@ -0,0 +1,136 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
from carla.planner.graph import sldist
from carla.planner.astar import AStar
from carla.planner.map import CarlaMap
class CityTrack(object):
def __init__(self, city_name):
self._node_density = 50.0
self._pixel_density = 16.43
self._map = CarlaMap(city_name, self._pixel_density, self._node_density)
self._astar = AStar()
# Refers to the start position of the previous route computation
self._previous_node = []
# The current computed route
self._route = None
def project_node(self, position):
"""
        Projects the given position into a node on the city road graph
"""
node = self._map.convert_to_node(position)
# To change the orientation with respect to the map standards
node = tuple([int(x) for x in node])
# Set to zero if it is less than zero.
node = (max(0, node[0]), max(0, node[1]))
node = (min(self._map.get_graph_resolution()[0] - 1, node[0]),
min(self._map.get_graph_resolution()[1] - 1, node[1]))
node = self._map.search_on_grid(node)
return node
def get_intersection_nodes(self):
return self._map.get_intersection_nodes()
def get_pixel_density(self):
return self._pixel_density
def get_node_density(self):
return self._node_density
def is_at_goal(self, source, target):
return source == target
def is_at_new_node(self, current_node):
return current_node != self._previous_node
def is_away_from_intersection(self, current_node):
return self._closest_intersection_position(current_node) > 1
def is_far_away_from_route_intersection(self, current_node):
# CHECK FOR THE EMPTY CASE
if self._route is None:
            raise RuntimeError('Impossible to find route.'
                               + ' The current planner is limited.'
                               + ' Try to select start points away from intersections.')
return self._closest_intersection_route_position(current_node,
self._route) > 4
def compute_route(self, node_source, source_ori, node_target, target_ori):
self._previous_node = node_source
a_star = AStar()
a_star.init_grid(self._map.get_graph_resolution()[0],
self._map.get_graph_resolution()[1],
self._map.get_walls_directed(node_source, source_ori,
node_target, target_ori), node_source,
node_target)
route = a_star.solve()
        # Just a corner case;
        # clean this to avoid having to use this fallback
if route is None:
a_star = AStar()
a_star.init_grid(self._map.get_graph_resolution()[0],
self._map.get_graph_resolution()[1], self._map.get_walls(),
node_source, node_target)
route = a_star.solve()
self._route = route
return route
def get_distance_closest_node_route(self, pos, route):
distance = []
for node_iter in route:
if node_iter in self._map.get_intersection_nodes():
distance.append(sldist(node_iter, pos))
if not distance:
return sldist(route[-1], pos)
return sorted(distance)[0]
def _closest_intersection_position(self, current_node):
distance_vector = []
for node_iterator in self._map.get_intersection_nodes():
distance_vector.append(sldist(node_iterator, current_node))
return sorted(distance_vector)[0]
def _closest_intersection_route_position(self, current_node, route):
distance_vector = []
for _ in route:
for node_iterator in self._map.get_intersection_nodes():
distance_vector.append(sldist(node_iterator, current_node))
return sorted(distance_vector)[0]

PythonClient/carla/planner/converter.py (new file)

@@ -0,0 +1,163 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import math
import numpy as np
from carla.planner.graph import string_to_floats
# Constant definition enumeration
PIXEL = 0
WORLD = 1
NODE = 2
class Converter(object):
def __init__(self, city_file, node_density, pixel_density):
self._node_density = node_density
self._pixel_density = pixel_density
with open(city_file, 'r') as f:
            # The offset of the world from the zero coordinates (the
            # coordinate we consider zero)
self._worldoffset = string_to_floats(f.readline())
angles = string_to_floats(f.readline())
            # If there is a rotation between the world and map coordinates.
self._worldrotation = np.array([
[math.cos(math.radians(angles[2])), -math.sin(math.radians(angles[2])), 0.0],
[math.sin(math.radians(angles[2])), math.cos(math.radians(angles[2])), 0.0],
[0.0, 0.0, 1.0]])
# Ignore for now, these are offsets for map coordinates and scale
# (not used).
_ = f.readline()
# The offset of the map zero coordinate.
self._mapoffset = string_to_floats(f.readline())
def convert_to_node(self, input_data):
"""
        Receives a data type (can be PIXEL or WORLD)
:param input_data: position in some coordinate
:return: A vector representing a node
"""
input_type = self._check_input_type(input_data)
if input_type == PIXEL:
return self._pixel_to_node(input_data)
elif input_type == WORLD:
return self._world_to_node(input_data)
else:
raise ValueError('Invalid node to be converted')
def convert_to_pixel(self, input_data):
"""
        Receives a data type (can be NODE or WORLD)
:param input_data: position in some coordinate
:return: A vector with pixel coordinates
"""
input_type = self._check_input_type(input_data)
if input_type == NODE:
return self._node_to_pixel(input_data)
elif input_type == WORLD:
return self._world_to_pixel(input_data)
else:
raise ValueError('Invalid node to be converted')
def convert_to_world(self, input_data):
"""
        Receives a data type (can be PIXEL or NODE)
:param input_data: position in some coordinate
:return: vector with world coordinates
"""
input_type = self._check_input_type(input_data)
if input_type == NODE:
return self._node_to_world(input_data)
elif input_type == PIXEL:
return self._pixel_to_world(input_data)
else:
raise ValueError('Invalid node to be converted')
def _node_to_pixel(self, node):
"""
Conversion from node format (graph) to pixel (image)
:param node:
:return: pixel
"""
pixel = [((node[0] + 2) * self._node_density)
, ((node[1] + 2) * self._node_density)]
return pixel
def _pixel_to_node(self, pixel):
"""
Conversion from pixel format (image) to node (graph)
        :param pixel:
        :return: node
"""
node = [int(((pixel[0]) / self._node_density) - 2)
, int(((pixel[1]) / self._node_density) - 2)]
return tuple(node)
def _pixel_to_world(self, pixel):
"""
Conversion from pixel format (image) to world (3D)
:param pixel:
:return: world
"""
relative_location = [pixel[0] * self._pixel_density,
pixel[1] * self._pixel_density]
world = [
relative_location[0] + self._mapoffset[0] - self._worldoffset[0],
relative_location[1] + self._mapoffset[1] - self._worldoffset[1],
            22  # Z does not matter for these conversions; fixed for now.
]
return world
def _world_to_pixel(self, world):
"""
Conversion from world format (3D) to pixel
:param world:
:return: pixel
"""
rotation = np.array([world[0], world[1], world[2]])
rotation = rotation.dot(self._worldrotation)
relative_location = [rotation[0] + self._worldoffset[0] - self._mapoffset[0],
rotation[1] + self._worldoffset[1] - self._mapoffset[1],
rotation[2] + self._worldoffset[2] - self._mapoffset[2]]
pixel = [math.floor(relative_location[0] / float(self._pixel_density)),
math.floor(relative_location[1] / float(self._pixel_density))]
return pixel
def _world_to_node(self, world):
return self._pixel_to_node(self._world_to_pixel(world))
def _node_to_world(self, node):
return self._pixel_to_world(self._node_to_pixel(node))
def _check_input_type(self, input_data):
if len(input_data) > 2:
return WORLD
elif type(input_data[0]) is int:
return NODE
else:
return PIXEL
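
The type dispatch in `_check_input_type` is purely structural: a length-three input is a world position, a length-two input whose first element is an int is a node, and anything else is a pixel. A minimal usage sketch (not part of the commit), assuming a hypothetical city file holding the four header lines that `__init__` reads, all zeroed out:

```python
# Sketch only: the zeroed offsets and angles below are made up; real
# city files ship with the carla package.
import tempfile

from carla.planner.converter import Converter

with tempfile.NamedTemporaryFile('w', suffix='.txt', delete=False) as f:
    f.write('0.0,0.0,0.0\n')  # world offset
    f.write('0.0,0.0,0.0\n')  # rotation angles (only the z angle is used)
    f.write('0.0,0.0,0.0\n')  # ignored line (map offsets and scale)
    f.write('0.0,0.0,0.0\n')  # map offset
    city_file = f.name

conv = Converter(city_file, pixel_density=16.43, node_density=50)

node = (3, 4)  # two ints -> treated as a NODE
print(conv.convert_to_pixel(node))           # [250, 300]
print(conv.convert_to_world(node))           # [4107.5, 4929.0, 22]
print(conv.convert_to_node([250.0, 300.0]))  # floats -> PIXEL -> (3, 4)
```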


@ -0,0 +1,141 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import collections
import math
import numpy as np
def string_to_node(string):
vec = string.split(',')
return (int(vec[0]), int(vec[1]))
def string_to_floats(string):
vec = string.split(',')
return (float(vec[0]), float(vec[1]), float(vec[2]))
def sldist(c1, c2):
return math.sqrt((c2[0] - c1[0]) ** 2 + (c2[1] - c1[1]) ** 2)
def sldist3(c1, c2):
    return math.sqrt((c2[0] - c1[0]) ** 2 + (c2[1] - c1[1]) ** 2
                     + (c2[2] - c1[2]) ** 2)
class Graph(object):
"""
A simple directed, weighted graph
"""
def __init__(self, graph_file=None, node_density=50):
self._nodes = set()
self._angles = {}
self._edges = {}
self._distances = {}
self._node_density = node_density
if graph_file is not None:
with open(graph_file, 'r') as f:
                # Skip the first four lines (world offsets and rotation);
                # the next line contains the graph resolution.
                lines_after_4 = f.readlines()[4:]
                linegraphres = lines_after_4[0]
self._resolution = string_to_node(linegraphres)
for line in lines_after_4[1:]:
from_node, to_node, d = line.split()
from_node = string_to_node(from_node)
to_node = string_to_node(to_node)
if from_node not in self._nodes:
self.add_node(from_node)
if to_node not in self._nodes:
self.add_node(to_node)
self._edges.setdefault(from_node, [])
self._edges[from_node].append(to_node)
self._distances[(from_node, to_node)] = float(d)
def add_node(self, value):
self._nodes.add(value)
def make_orientations(self, node, heading):
        distance_dic = {}
        for node_iter in self._nodes:
            if node_iter != node:
                distance_dic[sldist(node, node_iter)] = node_iter
        distance_dic = collections.OrderedDict(
            sorted(distance_dic.items()))
        self._angles[node] = heading
        for _, v in distance_dic.items():
            start_to_goal = np.array([node[0] - v[0], node[1] - v[1]])
            self._angles[v] = start_to_goal / np.linalg.norm(start_to_goal)
def add_edge(self, from_node, to_node, distance):
self._add_edge(from_node, to_node, distance)
def _add_edge(self, from_node, to_node, distance):
self._edges.setdefault(from_node, [])
self._edges[from_node].append(to_node)
self._distances[(from_node, to_node)] = distance
def get_resolution(self):
return self._resolution
def get_edges(self):
return self._edges
def intersection_nodes(self):
intersect_nodes = []
for node in self._nodes:
if len(self._edges[node]) > 2:
intersect_nodes.append(node)
return intersect_nodes
    # Note that this also contains the non-intersection turns.
def turn_nodes(self):
return self._nodes
def plot_ori(self, c):
from matplotlib import collections as mc
import matplotlib.pyplot as plt
line_len = 1
lines = [[(p[0], p[1]), (p[0] + line_len * self._angles[p][0],
p[1] + line_len * self._angles[p][1])] for p in self._nodes]
lc = mc.LineCollection(lines, linewidth=2, color='green')
_, ax = plt.subplots()
ax.add_collection(lc)
ax.autoscale()
ax.margins(0.1)
xs = [p[0] for p in self._nodes]
ys = [p[1] for p in self._nodes]
plt.scatter(xs, ys, color=c)
def plot(self, c):
import matplotlib.pyplot as plt
xs = [p[0] for p in self._nodes]
ys = [p[1] for p in self._nodes]
plt.scatter(xs, ys, color=c)
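
A small sketch (not part of the commit) of building a `Graph` programmatically instead of from a graph file; `intersection_nodes` then reports every node with more than two outgoing edges. Since `intersection_nodes` indexes `self._edges[node]` directly, each node here is given at least one outgoing edge:

```python
from carla.planner.graph import Graph, sldist

g = Graph()  # no graph_file: start empty
for n in [(0, 0), (1, 0), (1, 1), (2, 0)]:
    g.add_node(n)

# (1, 0) gets three outgoing edges, every other node gets one.
for a, b in [((0, 0), (1, 0)), ((1, 1), (1, 0)), ((2, 0), (1, 0)),
             ((1, 0), (0, 0)), ((1, 0), (1, 1)), ((1, 0), (2, 0))]:
    g.add_edge(a, b, sldist(a, b))

print(g.get_edges()[(1, 0)])   # [(0, 0), (1, 1), (2, 0)]
print(g.intersection_nodes())  # [(1, 0)]
```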


@ -0,0 +1,135 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import copy
import numpy as np
def angle_between(v1, v2):
return np.arccos(np.dot(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2))
class Grid(object):
def __init__(self, graph):
self._graph = graph
self._structure = self._make_structure()
self._walls = self._make_walls()
def search_on_grid(self, x, y):
visit = [[0, 1], [0, -1], [1, 0], [1, 1],
[1, -1], [-1, 0], [-1, 1], [-1, -1]]
c_x, c_y = x, y
scale = 1
while self._structure[c_x, c_y] != 0:
for offset in visit:
c_x, c_y = x + offset[0] * scale, y + offset[1] * scale
                if (0 <= c_x < self._graph.get_resolution()[0]
                        and 0 <= c_y < self._graph.get_resolution()[1]):
                    if self._structure[c_x, c_y] == 0:
                        break
                else:
                    c_x, c_y = x, y
scale += 1
return c_x, c_y
def get_walls(self):
return self._walls
def get_wall_source(self, pos, pos_ori, target):
free_nodes = self._get_adjacent_free_nodes(pos)
        final_walls = copy.copy(self._walls)
heading_start = np.array([pos_ori[0], pos_ori[1]])
for adj in free_nodes:
start_to_goal = np.array([adj[0] - pos[0], adj[1] - pos[1]])
angle = angle_between(heading_start, start_to_goal)
            # Wall off free nodes more than ~1.6 rad (just past 90 deg)
            # away from the heading, unless the node is the target.
            if angle > 1.6 and adj != target:
                final_walls.add((adj[0], adj[1]))
return final_walls
def get_wall_target(self, pos, pos_ori, source):
free_nodes = self._get_adjacent_free_nodes(pos)
final_walls = copy.copy(self._walls)
heading_start = np.array([pos_ori[0], pos_ori[1]])
for adj in free_nodes:
start_to_goal = np.array([adj[0] - pos[0], adj[1] - pos[1]])
angle = angle_between(heading_start, start_to_goal)
            # Wall off free nodes within ~1.0 rad of the target heading,
            # unless the node is the source.
            if angle < 1.0 and adj != source:
                final_walls.add((adj[0], adj[1]))
return final_walls
    def _draw_line(self, grid, xi, yi, xf, yf):
        if xf < xi:
            xi, xf = xf, xi
        if yf < yi:
            yi, yf = yf, yi
for i in range(xi, xf + 1):
for j in range(yi, yf + 1):
grid[i, j] = 0.0
return grid
def _make_structure(self):
structure = np.ones(
(self._graph.get_resolution()[0],
self._graph.get_resolution()[1]))
for key, connections in self._graph.get_edges().items():
            # Clear a line of grid cells between the node and each of
            # its connections.
            for con in connections:
                structure = self._draw_line(
                    structure, key[0], key[1], con[0], con[1])
return structure
def _make_walls(self):
walls = set()
for i in range(self._structure.shape[0]):
for j in range(self._structure.shape[1]):
if self._structure[i, j] == 1.0:
walls.add((i, j))
return walls
def _get_adjacent_free_nodes(self, pos):
""" Eight nodes in total """
visit = [[0, 1], [0, -1], [1, 0], [1, 1],
[1, -1], [-1, 0], [-1, 1], [-1, -1]]
adjacent = set()
for offset in visit:
node = (pos[0] + offset[0], pos[1] + offset[1])
            if (0 <= node[0] < self._graph.get_resolution()[0]
                    and 0 <= node[1] < self._graph.get_resolution()[1]):
if self._structure[node[0], node[1]] == 0.0:
adjacent.add(node)
return adjacent
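
The wall carving above hinges on `angle_between`: `get_wall_source` walls off adjacent free nodes more than 1.6 rad (just past 90 degrees) from the current heading, while `get_wall_target` walls off nodes within 1.0 rad of the target heading. A quick sketch (not part of the commit) of the quantity being thresholded:

```python
import numpy as np

from carla.planner.grid import angle_between

heading = np.array([1.0, 0.0])                        # facing +x
print(angle_between(heading, np.array([1.0, 0.0])))   # 0.0: node ahead
print(angle_between(heading, np.array([0.0, 1.0])))   # ~1.57: node to the side
print(angle_between(heading, np.array([-1.0, 0.0])))  # ~3.14: node behind, > 1.6
```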


@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.
@ -19,54 +19,36 @@ try:
except ImportError: except ImportError:
raise RuntimeError('cannot import PIL, make sure pillow package is installed') raise RuntimeError('cannot import PIL, make sure pillow package is installed')
from carla.planner.graph import Graph
def string_to_node(string): from carla.planner.graph import sldist
vec = string.split(',') from carla.planner.grid import Grid
return (int(vec[0]), int(vec[1])) from carla.planner.converter import Converter
def string_to_floats(string): def color_to_angle(color):
vec = string.split(',') return ((float(color) / 255.0)) * 2 * math.pi
return (float(vec[0]), float(vec[1]), float(vec[2]))
class CarlaMap(object): class CarlaMap(object):
def __init__(self, city):
def __init__(self, city, pixel_density, node_density):
dir_path = os.path.dirname(__file__) dir_path = os.path.dirname(__file__)
city_file = os.path.join(dir_path, city + '.txt') city_file = os.path.join(dir_path, city + '.txt')
city_map_file = os.path.join(dir_path, city + '.png') city_map_file = os.path.join(dir_path, city + '.png')
city_map_file_lanes = os.path.join(dir_path, city + 'Lanes.png') city_map_file_lanes = os.path.join(dir_path, city + 'Lanes.png')
city_map_file_center = os.path.join(dir_path, city + 'Central.png')
with open(city_file, 'r') as file_object: # The built graph. This is the exact same graph that unreal builds. This
# is a generic structure used for many cases
self._graph = Graph(city_file, node_density)
linewordloffset = file_object.readline() self._pixel_density = pixel_density
# The offset of the world from the zero coordinates ( The self._grid = Grid(self._graph)
# coordinate we consider zero) # The number of game units per pixel. For now this is fixed.
self.worldoffset = string_to_floats(linewordloffset)
lineworldangles = file_object.readline() self._converter = Converter(city_file, pixel_density, node_density)
self.angles = string_to_floats(lineworldangles)
self.worldrotation = np.array([
[math.cos(math.radians(self.angles[2])), -math.sin(math.radians(self.angles[2])), 0.0],
[math.sin(math.radians(self.angles[2])), math.cos(math.radians(self.angles[2])), 0.0],
[0.0, 0.0, 1.0]])
# Ignore for now, these are offsets for map coordinates and scale
# (not used).
_ = file_object.readline()
linemapoffset = file_object.readline()
# The offset of the map zero coordinate.
self.mapoffset = string_to_floats(linemapoffset)
# the graph resolution.
linegraphres = file_object.readline()
self.resolution = string_to_node(linegraphres)
# The number of game units per pixel.
self.pixel_density = 16.43
# Load the lanes image # Load the lanes image
self.map_image_lanes = Image.open(city_map_file_lanes) self.map_image_lanes = Image.open(city_map_file_lanes)
self.map_image_lanes.load() self.map_image_lanes.load()
@ -76,72 +58,95 @@ class CarlaMap(object):
self.map_image.load() self.map_image.load()
self.map_image = np.asarray(self.map_image, dtype="int32") self.map_image = np.asarray(self.map_image, dtype="int32")
# Load the lanes image
self.map_image_center = Image.open(city_map_file_center)
self.map_image_center.load()
self.map_image_center = np.asarray(self.map_image_center, dtype="int32")
def get_graph_resolution(self):
return self._graph.get_resolution()
def get_map(self, height=None): def get_map(self, height=None):
if height is not None: if height is not None:
img = Image.fromarray(self.map_image.astype(np.uint8)) img = Image.fromarray(self.map_image.astype(np.uint8))
aspect_ratio = height/float(self.map_image.shape[0]) aspect_ratio = height / float(self.map_image.shape[0])
img = img.resize((int(aspect_ratio*self.map_image.shape[1]),height), Image.ANTIALIAS) img = img.resize((int(aspect_ratio * self.map_image.shape[1]), height), Image.ANTIALIAS)
img.load() img.load()
return np.asarray(img, dtype="int32") return np.asarray(img, dtype="int32")
return np.fliplr(self.map_image) return np.fliplr(self.map_image)
def get_map_lanes(self, height=None): def get_map_lanes(self, size=None):
# if size is not None: if size is not None:
# img = Image.fromarray(self.map_image_lanes.astype(np.uint8)) img = Image.fromarray(self.map_image_lanes.astype(np.uint8))
# img = img.resize((size[1], size[0]), Image.ANTIALIAS) img = img.resize((size[1], size[0]), Image.ANTIALIAS)
# img.load() img.load()
# return np.fliplr(np.asarray(img, dtype="int32")) return np.fliplr(np.asarray(img, dtype="int32"))
# return np.fliplr(self.map_image_lanes) return np.fliplr(self.map_image_lanes)
raise NotImplementedError
def get_position_on_map(self, world):
"""Get the position on the map for a certain world position."""
relative_location = []
pixel = []
rotation = np.array([world[0], world[1], world[2]])
rotation = rotation.dot(self.worldrotation)
relative_location.append(rotation[0] + self.worldoffset[0] - self.mapoffset[0])
relative_location.append(rotation[1] + self.worldoffset[1] - self.mapoffset[1])
relative_location.append(rotation[2] + self.worldoffset[2] - self.mapoffset[2])
pixel.append(math.floor(relative_location[0] / float(self.pixel_density)))
pixel.append(math.floor(relative_location[1] / float(self.pixel_density)))
return pixel
def get_position_on_world(self, pixel):
"""Get world position of a certain map position."""
relative_location = []
world_vertex = []
relative_location.append(pixel[0] * self.pixel_density)
relative_location.append(pixel[1] * self.pixel_density)
world_vertex.append(relative_location[0] + self.mapoffset[0] - self.worldoffset[0])
world_vertex.append(relative_location[1] + self.mapoffset[1] - self.worldoffset[1])
world_vertex.append(22) # Z does not matter for now.
return world_vertex
def get_lane_orientation(self, world): def get_lane_orientation(self, world):
"""Get the lane orientation of a certain world position.""" """Get the lane orientation of a certain world position."""
relative_location = [] pixel = self.convert_to_pixel(world)
pixel = []
rotation = np.array([world[0], world[1], world[2]])
rotation = rotation.dot(self.worldrotation)
relative_location.append(rotation[0] + self.worldoffset[0] - self.mapoffset[0])
relative_location.append(rotation[1] + self.worldoffset[1] - self.mapoffset[1])
relative_location.append(rotation[2] + self.worldoffset[2] - self.mapoffset[2])
pixel.append(math.floor(relative_location[0] / float(self.pixel_density)))
pixel.append(math.floor(relative_location[1] / float(self.pixel_density)))
ori = self.map_image_lanes[int(pixel[1]), int(pixel[0]), 2] ori = self.map_image_lanes[int(pixel[1]), int(pixel[0]), 2]
ori = ((float(ori) / 255.0)) * 2 * math.pi ori = color_to_angle(ori)
return (-math.cos(ori), -math.sin(ori)) return (-math.cos(ori), -math.sin(ori))
def convert_to_node(self, input_data):
"""
        Receives a position in pixel or world coordinates
        :param input_data: position in some coordinate system
        :return: A vector representing a node
"""
return self._converter.convert_to_node(input_data)
def convert_to_pixel(self, input_data):
"""
        Receives a position in node or world coordinates
        :param input_data: position in some coordinate system
        :return: A vector with pixel coordinates
"""
return self._converter.convert_to_pixel(input_data)
def convert_to_world(self, input_data):
"""
        Receives a position in pixel or node coordinates
        :param input_data: position in some coordinate system
        :return: A vector with world coordinates
"""
return self._converter.convert_to_world(input_data)
def get_walls_directed(self, node_source, source_ori, node_target, target_ori):
"""
        This is a hack. Instead of planning over a two-way road, we treat
        it as one-way and block travel in the opposite direction with
        artificial walls around the source and target.
"""
final_walls = self._grid.get_wall_source(node_source, source_ori, node_target)
final_walls = final_walls.union(self._grid.get_wall_target(
node_target, target_ori, node_source))
return final_walls
def get_walls(self):
return self._grid.get_walls()
def get_distance_closest_node(self, pos):
distance = []
for node_iter in self._graph.intersection_nodes():
distance.append(sldist(node_iter, pos))
return sorted(distance)[0]
def get_intersection_nodes(self):
return self._graph.intersection_nodes()
    def search_on_grid(self, node):
return self._grid.search_on_grid(node[0], node[1])
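
A short usage sketch (not part of the commit), assuming the Town01 map data shipped with the package is available; 16.43 game units per pixel and 50.0 pixels per graph node are the densities that `manual_control.py` passes further down, and the world coordinates are arbitrary illustration values:

```python
from carla.planner.map import CarlaMap

carla_map = CarlaMap('Town01', pixel_density=16.43, node_density=50.0)

# The removed get_position_on_map/get_position_on_world pair is now
# covered by the Converter-backed helpers:
pixel = carla_map.convert_to_pixel([235.0, 59.0, 38.0])  # world -> pixel
node = carla_map.convert_to_node([235.0, 59.0, 38.0])    # world -> graph node
world = carla_map.convert_to_world(node)                 # node -> world (z fixed)
```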


@ -0,0 +1,173 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import collections
import math
import numpy as np
from . import city_track
def compare(x, y):
return collections.Counter(x) == collections.Counter(y)
# Constants used for the high-level commands
REACH_GOAL = 0.0
GO_STRAIGHT = 5.0
TURN_RIGHT = 4.0
TURN_LEFT = 3.0
LANE_FOLLOW = 2.0
# Auxiliary algebra functions
def angle_between(v1, v2):
return np.arccos(np.dot(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2))
def sldist(c1, c2):
    return math.sqrt((c2[0] - c1[0]) ** 2 + (c2[1] - c1[1]) ** 2)
def signal(v1, v2):
return np.cross(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2)
class Planner(object):
def __init__(self, city_name):
self._city_track = city_track.CityTrack(city_name)
self._commands = []
def get_next_command(self, source, source_ori, target, target_ori):
"""
        Computes the full plan and returns the next command.
:param source: source position
:param source_ori: source orientation
:param target: target position
:param target_ori: target orientation
        :return: a command (Go Straight, Lane Follow, Turn Left or Turn Right)
"""
track_source = self._city_track.project_node(source)
track_target = self._city_track.project_node(target)
# reach the goal
if self._city_track.is_at_goal(track_source, track_target):
return REACH_GOAL
if (self._city_track.is_at_new_node(track_source)
and self._city_track.is_away_from_intersection(track_source)):
route = self._city_track.compute_route(track_source, source_ori,
track_target, target_ori)
if route is None:
raise RuntimeError('Impossible to find route')
self._commands = self._route_to_commands(route)
if self._city_track.is_far_away_from_route_intersection(
track_source):
return LANE_FOLLOW
else:
if self._commands:
return self._commands[0]
else:
return LANE_FOLLOW
else:
if self._city_track.is_far_away_from_route_intersection(
track_source):
return LANE_FOLLOW
            # If there are computed commands, return the first one.
if self._commands:
return self._commands[0]
else:
return LANE_FOLLOW
def get_shortest_path_distance(
self,
source,
source_ori,
target,
target_ori):
distance = 0
track_source = self._city_track.project_node(source)
track_target = self._city_track.project_node(target)
current_pos = track_source
route = self._city_track.compute_route(track_source, source_ori,
track_target, target_ori)
        # No route: distance is zero.
if route is None:
return 0.0
for node_iter in route:
distance += sldist(node_iter, current_pos)
current_pos = node_iter
# We multiply by these values to convert distance to world coordinates
return distance * self._city_track.get_pixel_density() \
* self._city_track.get_node_density()
def is_there_posible_route(self, source, source_ori, target, target_ori):
track_source = self._city_track.project_node(source)
track_target = self._city_track.project_node(target)
        return self._city_track.compute_route(
            track_source, source_ori, track_target, target_ori) is not None
def test_position(self, source):
node_source = self._city_track.project_node(source)
return self._city_track.is_away_from_intersection(node_source)
def _route_to_commands(self, route):
"""
        Transform the shortest-path route into a list of commands.
        :param route: the subgraph (list of nodes) containing the shortest path
        :return: list of commands encoded as floats from 0 to 5
"""
commands_list = []
        # Note: indexing route[i + 1] below assumes the route does not
        # end at an intersection node.
        for i in range(0, len(route)):
if route[i] not in self._city_track.get_intersection_nodes():
continue
current = route[i]
past = route[i - 1]
future = route[i + 1]
past_to_current = np.array(
[current[0] - past[0], current[1] - past[1]])
current_to_future = np.array(
[future[0] - current[0], future[1] - current[1]])
angle = signal(current_to_future, past_to_current)
if angle < -0.1:
command = TURN_RIGHT
elif angle > 0.1:
command = TURN_LEFT
else:
command = GO_STRAIGHT
commands_list.append(command)
return commands_list
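
The sign convention in `_route_to_commands` comes from `signal`, the normalized 2D cross product of `current_to_future` with `past_to_current`. A minimal sketch (not part of the commit) of the three cases and the commands they select, with `signal` re-defined locally exactly as above:

```python
import numpy as np

def signal(v1, v2):
    return np.cross(v1, v2) / np.linalg.norm(v1) / np.linalg.norm(v2)

past_to_current = np.array([1, 0])  # arrived at the node heading +x

print(signal(np.array([1, 0]), past_to_current))   #  0.0 -> GO_STRAIGHT
print(signal(np.array([0, 1]), past_to_current))   # -1.0 -> < -0.1: TURN_RIGHT
print(signal(np.array([0, -1]), past_to_current))  #  1.0 -> >  0.1: TURN_LEFT
```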


@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.
@ -41,7 +41,7 @@ class TCPClient(object):
self._socket.settimeout(self._timeout) self._socket.settimeout(self._timeout)
logging.debug('%sconnected', self._logprefix) logging.debug('%sconnected', self._logprefix)
return return
except OSError as exception: except socket.error as exception:
error = exception error = exception
logging.debug('%sconnection attempt %d: %s', self._logprefix, attempt, error) logging.debug('%sconnection attempt %d: %s', self._logprefix, attempt, error)
time.sleep(1) time.sleep(1)
@ -65,7 +65,7 @@ class TCPClient(object):
header = struct.pack('<L', len(message)) header = struct.pack('<L', len(message))
try: try:
self._socket.sendall(header + message) self._socket.sendall(header + message)
except OSError as exception: except socket.error as exception:
self._reraise_exception_as_tcp_error('failed to write data', exception) self._reraise_exception_as_tcp_error('failed to write data', exception)
def read(self): def read(self):
@ -85,7 +85,7 @@ class TCPClient(object):
while length > 0: while length > 0:
try: try:
data = self._socket.recv(length) data = self._socket.recv(length)
except OSError as exception: except socket.error as exception:
self._reraise_exception_as_tcp_error('failed to read data', exception) self._reraise_exception_as_tcp_error('failed to read data', exception)
if not data: if not data:
raise TCPConnectionError(self._logprefix + 'connection closed') raise TCPConnectionError(self._logprefix + 'connection closed')


@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,7 +1,7 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,7 +1,7 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.
@ -127,7 +127,7 @@ class CarlaGame(object):
self._map_view = None self._map_view = None
self._is_on_reverse = False self._is_on_reverse = False
self._city_name = city_name self._city_name = city_name
self._map = CarlaMap(city_name) if city_name is not None else None self._map = CarlaMap(city_name, 16.43, 50.0) if city_name is not None else None
self._map_shape = self._map.map_image.shape if city_name is not None else None self._map_shape = self._map.map_image.shape if city_name is not None else None
self._map_view = self._map.get_map(WINDOW_HEIGHT) if city_name is not None else None self._map_view = self._map.get_map(WINDOW_HEIGHT) if city_name is not None else None
self._position = None self._position = None


@ -2,3 +2,5 @@ Pillow
numpy numpy
protobuf protobuf
pygame pygame
matplotlib
future

PythonClient/run_benchmark.py Executable file

@ -0,0 +1,94 @@
#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB).
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
import argparse
import logging
import time
from carla.benchmarks.agent import Agent
from carla.benchmarks.corl_2017 import CoRL2017
from carla.client import make_carla_client, VehicleControl
from carla.tcp import TCPConnectionError
class Manual(Agent):
"""
    Sample redefinition of the Agent:
    an agent that just goes straight.
"""
def run_step(self, measurements, sensor_data, target):
control = VehicleControl()
control.throttle = 0.9
return control
if __name__ == '__main__':
argparser = argparse.ArgumentParser(description=__doc__)
argparser.add_argument(
'-v', '--verbose',
action='store_true',
dest='verbose',
help='print some extra status information')
argparser.add_argument(
'-db', '--debug',
action='store_true',
dest='debug',
help='print debug information')
argparser.add_argument(
'--host',
metavar='H',
default='localhost',
help='IP of the host server (default: localhost)')
argparser.add_argument(
'-p', '--port',
metavar='P',
default=2000,
type=int,
help='TCP port to listen to (default: 2000)')
argparser.add_argument(
'-c', '--city-name',
metavar='C',
default='Town01',
        help='The town that is going to be used on benchmark '
             '(needs to match the active town in the server; '
             'options: Town01 or Town02)')
argparser.add_argument(
'-n', '--log_name',
metavar='T',
default='test',
help='The name of the log file to be created by the benchmark'
)
args = argparser.parse_args()
if args.debug:
log_level = logging.DEBUG
elif args.verbose:
log_level = logging.INFO
else:
log_level = logging.WARNING
logging.basicConfig(format='%(levelname)s: %(message)s', level=log_level)
logging.info('listening to server %s:%s', args.host, args.port)
while True:
try:
with make_carla_client(args.host, args.port) as client:
corl = CoRL2017(city_name=args.city_name, name_to_save=args.log_name)
agent = Manual(args.city_name)
results = corl.benchmark_agent(agent, client)
corl.plot_summary_test()
corl.plot_summary_train()
break
except TCPConnectionError as error:
logging.error(error)
time.sleep(1)
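
The `Manual` class above shows the minimal `Agent` contract: implement `run_step(measurements, sensor_data, target)` and return a `VehicleControl`. A slightly richer sketch (not part of the commit) that also reads the player's `forward_speed` measurement; the 10.0 threshold is an arbitrary illustration value, and the class would be plugged in exactly like `Manual`:

```python
from carla.benchmarks.agent import Agent
from carla.client import VehicleControl

class CappedSpeed(Agent):
    """Goes straight, but eases off above a speed threshold."""

    def run_step(self, measurements, sensor_data, target):
        control = VehicleControl()
        if measurements.player_measurements.forward_speed < 10.0:
            control.throttle = 0.9
        else:
            control.brake = 0.3  # stop accelerating instead of running away
        return control
```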

PythonClient/setup.py Normal file

@ -0,0 +1,15 @@
from setuptools import setup
# @todo Dependencies are missing.
setup(
name='carla_client',
version='0.7.1',
packages=['carla', 'carla.benchmarks', 'carla.planner'],
license='MIT License',
description='Python API for communicating with the CARLA server.',
url='https://github.com/carla-simulator/carla',
author='The CARLA team',
author_email='carla.simulator@gmail.com',
include_package_data=True
)


@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,7 +1,7 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.


@ -0,0 +1,168 @@
#!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab.
#
# This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>.
"""Client that runs two servers simultaneously to test repeatability."""
import argparse
import logging
import os
import random
import sys
import time
sys.path.append(os.path.join(os.path.dirname(__file__), '..'))
from carla.client import make_carla_client
from carla.sensor import Camera, Image
from carla.settings import CarlaSettings
from carla.tcp import TCPConnectionError
def run_carla_clients(args):
filename = '_images_repeatability/server{:d}/{:0>6d}.png'
with make_carla_client(args.host1, args.port1) as client1:
logging.info('1st client connected')
with make_carla_client(args.host2, args.port2) as client2:
logging.info('2nd client connected')
settings = CarlaSettings()
settings.set(
SynchronousMode=True,
SendNonPlayerAgentsInfo=True,
NumberOfVehicles=50,
NumberOfPedestrians=50,
WeatherId=random.choice([1, 3, 7, 8, 14]))
settings.randomize_seeds()
if args.images_to_disk:
camera = Camera('DefaultCamera')
camera.set_image_size(800, 600)
settings.add_sensor(camera)
scene1 = client1.load_settings(settings)
scene2 = client2.load_settings(settings)
number_of_player_starts = len(scene1.player_start_spots)
assert number_of_player_starts == len(scene2.player_start_spots)
player_start = random.randint(0, max(0, number_of_player_starts - 1))
logging.info(
'start episode at %d/%d player start (run forever, press ctrl+c to cancel)',
player_start,
number_of_player_starts)
client1.start_episode(player_start)
client2.start_episode(player_start)
frame = 0
while True:
frame += 1
meas1, sensor_data1 = client1.read_data()
meas2, sensor_data2 = client2.read_data()
player1 = meas1.player_measurements
player2 = meas2.player_measurements
images1 = [x for x in sensor_data1.values() if isinstance(x, Image)]
images2 = [x for x in sensor_data2.values() if isinstance(x, Image)]
control1 = player1.autopilot_control
control2 = player2.autopilot_control
try:
assert len(images1) == len(images2)
assert len(meas1.non_player_agents) == len(meas2.non_player_agents)
assert player1.transform.location.x == player2.transform.location.x
assert player1.transform.location.y == player2.transform.location.y
assert player1.transform.location.z == player2.transform.location.z
assert control1.steer == control2.steer
assert control1.throttle == control2.throttle
assert control1.brake == control2.brake
assert control1.hand_brake == control2.hand_brake
assert control1.reverse == control2.reverse
except AssertionError:
logging.exception('assertion failed')
if args.images_to_disk:
assert len(images1) == 1
images1[0].save_to_disk(filename.format(1, frame))
images2[0].save_to_disk(filename.format(2, frame))
client1.send_control(control1)
client2.send_control(control2)
def main():
argparser = argparse.ArgumentParser(description=__doc__)
argparser.add_argument(
'-v', '--verbose',
action='store_true',
dest='debug',
help='print debug information')
argparser.add_argument(
'--log',
metavar='LOG_FILE',
default=None,
help='print output to file')
argparser.add_argument(
'--host1',
metavar='H',
default='127.0.0.1',
help='IP of the first host server (default: 127.0.0.1)')
argparser.add_argument(
'-p1', '--port1',
metavar='P',
default=2000,
type=int,
help='TCP port to listen to the first server (default: 2000)')
argparser.add_argument(
'--host2',
metavar='H',
default='127.0.0.1',
help='IP of the second host server (default: 127.0.0.1)')
argparser.add_argument(
'-p2', '--port2',
metavar='P',
default=3000,
type=int,
help='TCP port to listen to the second server (default: 3000)')
argparser.add_argument(
'-i', '--images-to-disk',
action='store_true',
help='save images to disk')
args = argparser.parse_args()
logging_config = {
'format': '%(levelname)s: %(message)s',
'level': logging.DEBUG if args.debug else logging.INFO
}
if args.log:
logging_config['filename'] = args.log
logging_config['filemode'] = 'w+'
logging.basicConfig(**logging_config)
logging.info('listening to 1st server at %s:%s', args.host1, args.port1)
logging.info('listening to 2nd server at %s:%s', args.host2, args.port2)
while True:
try:
run_carla_clients(args)
except TCPConnectionError as error:
logging.error(error)
time.sleep(1)
if __name__ == '__main__':
try:
main()
except KeyboardInterrupt:
print('\nCancelled by user. Bye!')


@ -1,7 +1,7 @@
#!/usr/bin/env python3 #!/usr/bin/env python3
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
# Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de # Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de
# Barcelona (UAB), and the INTEL Visual Computing Lab. # Barcelona (UAB).
# #
# This work is licensed under the terms of the MIT license. # This work is licensed under the terms of the MIT license.
# For a copy, see <https://opensource.org/licenses/MIT>. # For a copy, see <https://opensource.org/licenses/MIT>.


@ -16,9 +16,14 @@ environmental conditions.
For instructions on how to use and compile CARLA, check out For instructions on how to use and compile CARLA, check out
[CARLA Documentation](http://carla.readthedocs.io). [CARLA Documentation](http://carla.readthedocs.io).
If you want to benchmark your model in the same conditions as in our CoRL17
paper, check out
[Benchmarking](http://carla.readthedocs.io/en/latest/benchmark/).
News News
---- ----
- 22.01.2018 Job opening: [C++ (UE4) Programmer](https://drive.google.com/open?id=1Hx0eUgpXl95d4IL9meEGhJECgSRos1T1).
- 28.11.2017 CARLA 0.7.0 released: [change log](https://github.com/carla-simulator/carla/blob/master/CHANGELOG.md#carla-070), [release](https://github.com/carla-simulator/carla/releases/tag/0.7.0). - 28.11.2017 CARLA 0.7.0 released: [change log](https://github.com/carla-simulator/carla/blob/master/CHANGELOG.md#carla-070), [release](https://github.com/carla-simulator/carla/releases/tag/0.7.0).
- 15.11.2017 CARLA 0.6.0 released: [change log](https://github.com/carla-simulator/carla/blob/master/CHANGELOG.md#carla-060), [release](https://github.com/carla-simulator/carla/releases/tag/0.6.0). - 15.11.2017 CARLA 0.6.0 released: [change log](https://github.com/carla-simulator/carla/blob/master/CHANGELOG.md#carla-060), [release](https://github.com/carla-simulator/carla/releases/tag/0.6.0).
@ -55,6 +60,42 @@ Felipe Codevilla, Antonio Lopez, Vladlen Koltun; PMLR 78:1-16
} }
``` ```
Building CARLA
--------------
Use `git clone` or download the project from this page. Note that the master
branch contains the latest fixes and features; for the latest stable code it may
be best to switch to the `stable` branch.
Then follow the instruction at [How to build on Linux][buildlink].
Unfortunately we don't yet have official instructions to build on other
platforms; please check the progress for [Windows][issue21] and [Mac][issue150].
[buildlink]: http://carla.readthedocs.io/en/latest/how_to_build_on_linux
[issue21]: https://github.com/carla-simulator/carla/issues/21
[issue150]: https://github.com/carla-simulator/carla/issues/150
Contributing
------------
Please take a look at our [Contribution guidelines][contriblink].
[contriblink]: http://carla.readthedocs.io/en/latest/CONTRIBUTING
F.A.Q.
------
If you run into problems, check our
[FAQ](http://carla.readthedocs.io/en/latest/faq/).
Jobs
----
We are currently looking for a new programmer to join our team.
* [C++ (UE4) Programmer](https://drive.google.com/open?id=1Hx0eUgpXl95d4IL9meEGhJECgSRos1T1)
License License
------- -------


@ -2,7 +2,7 @@
ProjectID=675BF8694238308FA9368292CC440350 ProjectID=675BF8694238308FA9368292CC440350
ProjectName=CARLA UE4 ProjectName=CARLA UE4
CompanyName=CVC CompanyName=CVC
CopyrightNotice="Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonomade Barcelona (UAB), and the INTEL Visual Computing Lab.This work is licensed under the terms of the MIT license.For a copy, see <https://opensource.org/licenses/MIT>." CopyrightNotice="Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma de Barcelona (UAB). This work is licensed under the terms of the MIT license. For a copy, see <https://opensource.org/licenses/MIT>."
ProjectVersion=0.7.0 ProjectVersion=0.7.0
[/Script/UnrealEd.ProjectPackagingSettings] [/Script/UnrealEd.ProjectPackagingSettings]


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.
@ -104,7 +104,7 @@ void AVehicleSpawnerBase::SpawnVehicleAtSpawnPoint(
Vehicle->SpawnDefaultController(); Vehicle->SpawnDefaultController();
auto Controller = GetController(Vehicle); auto Controller = GetController(Vehicle);
if (Controller != nullptr) { // Sometimes fails... if (Controller != nullptr) { // Sometimes fails...
Controller->SetRandomEngine(GetRandomEngine()); Controller->GetRandomEngine()->Seed(GetRandomEngine()->GenerateSeed());
Controller->SetRoadMap(GetRoadMap()); Controller->SetRoadMap(GetRoadMap());
Controller->SetAutopilot(true); Controller->SetAutopilot(true);
Vehicles.Add(Vehicle); Vehicles.Add(Vehicle);


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.
@ -73,6 +73,8 @@ static void ClearQueue(std::queue<T> &Queue)
AWheeledVehicleAIController::AWheeledVehicleAIController(const FObjectInitializer& ObjectInitializer) : AWheeledVehicleAIController::AWheeledVehicleAIController(const FObjectInitializer& ObjectInitializer) :
Super(ObjectInitializer) Super(ObjectInitializer)
{ {
RandomEngine = CreateDefaultSubobject<URandomEngine>(TEXT("RandomEngine"));
PrimaryActorTick.bCanEverTick = true; PrimaryActorTick.bCanEverTick = true;
PrimaryActorTick.TickGroup = TG_PrePhysics; PrimaryActorTick.TickGroup = TG_PrePhysics;
} }


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.
@ -98,14 +98,10 @@ public:
/// @{ /// @{
public: public:
void SetRandomEngine(URandomEngine *InRandomEngine)
{
RandomEngine = InRandomEngine;
}
UFUNCTION(Category = "Random Engine", BlueprintCallable) UFUNCTION(Category = "Random Engine", BlueprintCallable)
URandomEngine *GetRandomEngine() URandomEngine *GetRandomEngine()
{ {
check(RandomEngine != nullptr);
return RandomEngine; return RandomEngine;
} }
@ -221,13 +217,13 @@ private:
private: private:
UPROPERTY() UPROPERTY()
ACarlaWheeledVehicle *Vehicle; ACarlaWheeledVehicle *Vehicle = nullptr;
UPROPERTY() UPROPERTY()
URoadMap *RoadMap; URoadMap *RoadMap = nullptr;
UPROPERTY() UPROPERTY()
URandomEngine *RandomEngine; URandomEngine *RandomEngine = nullptr;
UPROPERTY(VisibleAnywhere) UPROPERTY(VisibleAnywhere)
bool bAutopilotEnabled = false; bool bAutopilotEnabled = false;


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.
@ -176,7 +176,8 @@ void ACarlaGameModeBase::BeginPlay()
VehicleSpawner->SetSeed(CarlaSettings.SeedVehicles); VehicleSpawner->SetSeed(CarlaSettings.SeedVehicles);
VehicleSpawner->SetRoadMap(RoadMap); VehicleSpawner->SetRoadMap(RoadMap);
if (PlayerController != nullptr) { if (PlayerController != nullptr) {
PlayerController->SetRandomEngine(VehicleSpawner->GetRandomEngine()); PlayerController->GetRandomEngine()->Seed(
VehicleSpawner->GetRandomEngine()->GenerateSeed());
} }
} else { } else {
UE_LOG(LogCarla, Error, TEXT("Missing vehicle spawner actor!")); UE_LOG(LogCarla, Error, TEXT("Missing vehicle spawner actor!"));


@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

View File

@ -1,5 +1,5 @@
// Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma // Copyright (c) 2017 Computer Vision Center (CVC) at the Universitat Autonoma
// de Barcelona (UAB), and the INTEL Visual Computing Lab. // de Barcelona (UAB).
// //
// This work is licensed under the terms of the MIT license. // This work is licensed under the terms of the MIT license.
// For a copy, see <https://opensource.org/licenses/MIT>. // For a copy, see <https://opensource.org/licenses/MIT>.

Some files were not shown because too many files have changed in this diff.