CARLA Benchmark
===============
Running the Benchmark
---------------------

The "carla" API provides a basic benchmarking system that allows running
several tests on a certain agent. We already provide the same benchmark used in
the CoRL 2017 paper. By running this benchmark you can compare the results of
your agent to the results obtained by the agents shown in the paper.

Besides the requirements of the CARLA client, the benchmark package also needs
the `future` package

    $ sudo pip install future

By default, the benchmark tests an agent that simply drives straight. To run
the benchmark you need a server running. For a default localhost server on
port 2000, you just need to run

    $ ./run_benchmark.py

or

    $ python run_benchmark.py

Run the help command to see the options available

    $ ./run_benchmark.py --help

Benchmarking your Agent
---------------------
The benchmark works by calling three lines of code

    corl = CoRL2017(city_name=args.city_name, name_to_save=args.log_name)
    agent = Manual(args.city_name)
    results = corl.benchmark_agent(agent, client)

This excerpt is executed in the [run_benchmark.py](https://github.com/carla-simulator/carla/blob/master/PythonClient/run_benchmark.py) example.
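
Conceptually, *benchmark_agent* drives the interaction: for every step of every
test episode it asks the agent for a control command. The following pure-Python
sketch illustrates that interaction pattern only; all class and function names
here are illustrative stand-ins, not the real `carla` API:

```python
# All names below are illustrative stand-ins, not the real carla API.
class Control:
    """Stand-in for the control object the agent returns each step."""
    def __init__(self, steer=0.0, throttle=0.0, brake=0.0):
        self.steer = steer
        self.throttle = throttle
        self.brake = brake

class StraightAgent:
    """Mimics the default agent: always drive straight at a fixed throttle."""
    def run_step(self, measurements, sensor_data, target):
        return Control(throttle=0.9)

def run_episode(agent, num_steps):
    """Sketch of one benchmark episode: the benchmark queries the agent
    once per simulation step and records the returned controls."""
    controls = []
    for _ in range(num_steps):
        # In the real benchmark, measurements, sensor_data and target
        # come from the running CARLA server.
        controls.append(agent.run_step(None, None, None))
    return controls

controls = run_episode(StraightAgent(), num_steps=3)
```

The real benchmark additionally resets the simulation between episodes and
aggregates the results per experiment.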

First, a *benchmark* object is defined; in this case, a CoRL2017 benchmark. This object is used to benchmark a certain Agent. <br>

On the second line of our sample code, an object of the Manual class is instantiated. This class inherits from the Agent base class
that is used by the *benchmark* object.
To be benchmarked, an Agent subclass must redefine the *run_step* function as it is done in the following excerpt:

    def run_step(self, measurements, sensor_data, target):
        """
        Function to run a control step in the CARLA vehicle.
        :param measurements: object of the Measurements type
        :param sensor_data: images list object
        :param target: target position of Transform type
        :return: an object of the control type.
        """
        control = VehicleControl()
        control.throttle = 0.9
        return control

The function receives measurements from the world, sensor data and a target position. With these, the function must return a control for the car, *i.e.* steering value, throttle value, brake value, etc.
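
As an illustration, a *run_step* could use the target position to compute a
steering command. The snippet below is a self-contained sketch: `VehicleControl`
here is a minimal stand-in for the real CARLA type, and the proportional
steering rule is only an example, not the method used by any of the paper's
agents:

```python
import math

class VehicleControl:
    """Minimal stand-in for CARLA's vehicle control type."""
    def __init__(self):
        self.steer = 0.0
        self.throttle = 0.0
        self.brake = 0.0

def run_step_towards(x, y, yaw, target_x, target_y):
    """Steer proportionally to the heading error towards the target.

    x, y, yaw describe the vehicle pose (yaw in radians); target_x,
    target_y is the goal position. Returns a VehicleControl with steer
    clamped to [-1, 1].
    """
    desired_yaw = math.atan2(target_y - y, target_x - x)
    # Wrap the heading error to [-pi, pi] so the car turns the short way.
    error = math.atan2(math.sin(desired_yaw - yaw), math.cos(desired_yaw - yaw))
    control = VehicleControl()
    control.steer = max(-1.0, min(1.0, error / math.pi))
    control.throttle = 0.5
    return control
```

With the vehicle at the origin facing along the x-axis, a target straight ahead
yields zero steering, while a target to the left produces a positive steer
value.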

The [measurements](measurements.md), [target](measurements.md), [sensor_data](cameras_and_sensors.md) and [control](measurements.md) types are described in the documentation.
Creating your Benchmark
---------------------
Tutorial to be added.