Integrated new PBR branch into VR

This commit is contained in:
kenzomenzo 2020-12-08 17:57:31 +00:00
commit 045fff928a
257 changed files with 16054 additions and 14030 deletions

10
.gitignore vendored
View File

@ -20,6 +20,7 @@ dev/transfer.c
*.zip
.*.swp
docs/apidoc
# vscode
.vscode/
@ -30,11 +31,9 @@ physics/*.json
# Physics assets
physics/models
*/events*
0_VRDemoSettings.txt
# VR
# This is the folder where the pyd files get put after setup.py builds gibson
Release
@ -47,7 +46,6 @@ checkpoint
*.model.*
examples/train/models/
log_*/
baselines/
aws/
@ -64,11 +62,7 @@ gibson/cpp-household
# Plotting
examples/scripts/plot*
#pycharm
.idea*
#models
# Models
nav_models
gibson/assets

8
Jenkinsfile vendored
View File

@ -3,7 +3,7 @@ pipeline {
agent {
docker {
image 'gibsonchallenge/gibsonv2:jenkins'
args '--runtime=nvidia -u root:root -v ${WORKSPACE}/../ig_dataset:${WORKSPACE}/gibson2/ig_dataset'
args '--runtime=nvidia -u root:root -v ${WORKSPACE}/../ig_dataset:${WORKSPACE}/gibson2/data/ig_dataset'
}
}
@ -14,12 +14,12 @@ pipeline {
sh 'pwd'
sh 'printenv'
sh 'pip install -e .'
sh 'ls gibson2/ig_dataset'
}
}
stage('Build Docs') {
steps {
sh 'sphinx-apidoc -o docs/apidoc gibson2 gibson2/external gibson2/utils/data_utils/'
sh 'sphinx-build -b html docs _sites'
}
}
@ -32,10 +32,12 @@ pipeline {
sh 'pytest test/test_pbr.py --junitxml=test_result/test_pbr.py.xml'
sh 'pytest test/test_object.py --junitxml=test_result/test_object.py.xml'
sh 'pytest test/test_simulator.py --junitxml=test_result/test_simulator.py.xml'
sh 'pytest test/test_navigate_env.py --junitxml=test_result/test_navigate_env.py.xml'
sh 'pytest test/test_igibson_env.py --junitxml=test_result/test_igibson_env.py.xml'
sh 'pytest test/test_scene_importing.py --junitxml=test_result/test_scene_importing.py.xml'
sh 'pytest test/test_robot.py --junitxml=test_result/test_robot.py.xml'
sh 'pytest test/test_igsdf_scene_importing.py --junitxml=test_result/test_igsdf_scene_importing.py.xml'
sh 'pytest test/test_sensors.py --junitxml=test_result/test_sensors.py.xml'
sh 'pytest test/test_motion_planning.py --junitxml=test_result/test_motion_planning.py.xml'
}
}

View File

@ -7,6 +7,15 @@
iGibson, the Interactive Gibson Environment, is a simulation environment providing fast visual rendering and physics simulation (based on Bullet). It is packed with a dataset with hundreds of large 3D environments reconstructed from real homes and offices, and interactive objects that can be pushed and actuated. iGibson allows researchers to train and evaluate robotic agents that use RGB images and/or other visual sensors to solve indoor (interactive) navigation and manipulation tasks such as opening doors, picking and placing objects, or searching in cabinets.
### Latest Updates
[12/4/2020] We created a [Slack workspace](https://join.slack.com/t/igibsonuser/shared_invite/zt-jz8x6wgh-2usPj6nMz7mawWyr1tmNfQ) to support iGibson users.
[12/1/2020] Major update to iGibson to reach iGibson v1.0, for details please refer to our [technical report](TBA).
- Release of iGibson dataset, which consists of 15 fully interactive scenes and 500+ object models.
- New features of the Simulator: Physically based rendering; 1-beam and 16-beam LiDAR simulation; Domain randomization support.
- Code refactoring and cleanup.
[05/14/2020] Added dynamic light support :flashlight:
[04/28/2020] Added support for Mac OSX :computer:
@ -15,33 +24,46 @@ iGibson, the Interactive Gibson Environment, is a simulation environment providi
If you use iGibson or its assets and models, consider citing the following publication:
```
@article{xia2020interactive,
title={Interactive Gibson Benchmark: A Benchmark for Interactive Navigation in Cluttered Environments},
author={Xia, Fei and Shen, William B and Li, Chengshu and Kasimbeg, Priya and Tchapmi, Micael Edmond and Toshev, Alexander and Mart{\'\i}n-Mart{\'\i}n, Roberto and Savarese, Silvio},
journal={IEEE Robotics and Automation Letters},
volume={5},
number={2},
pages={713--720},
year={2020},
publisher={IEEE}
@article{shenigibson,
title={iGibson, a Simulation Environment for Interactive Tasks in Large Realistic Scenes},
author={Shen, Bokui and Xia, Fei and Li, Chengshu and Mart{\i}n-Mart{\i}n, Roberto and Fan, Linxi and Wang, Guanzhi and Buch, Shyamal and DArpino, Claudia and Srivastava, Sanjana and Tchapmi, Lyne P and Vainio, Kent and Fei-Fei, Li and Savarese, Silvio},
journal={arXiv preprint},
year={2020}
}
```
### Release
This is the repository for iGibson (gibson2) 0.0.4 release. Bug reports, suggestions for improvement, as well as community developments are encouraged and appreciated. The support for our previous version of the environment, [Gibson v1](http://github.com/StanfordVL/GibsonEnv/), will be moved to this repository.
This is the repository for the iGibson (pip package `gibson2`) 1.0 release. Bug reports, suggestions for improvement, as well as community developments are encouraged and appreciated. The support for our previous version of the environment, [Gibson Environment](http://github.com/StanfordVL/GibsonEnv/), will be moved to this repository.
### Documentation
The documentation for this repository can be found here: [iGibson Environment Documentation](http://svl.stanford.edu/igibson/docs/). It includes installation guide (including data download), quickstart guide, code examples, and APIs.
If you want to know more about iGibson, you can also check out [our webpage](http://svl.stanford.edu/igibson), [our RAL+ICRA20 paper](https://arxiv.org/abs/1910.14442) and [our (outdated) technical report](http://svl.stanford.edu/igibson/assets/gibsonv2paper.pdf).
If you want to know more about iGibson, you can also check out [our webpage](http://svl.stanford.edu/igibson), [our updated technical report](TBA) and [our RAL+ICRA20 paper](https://arxiv.org/abs/1910.14442).
### Downloading Dataset of 3D Environments
There are several datasets of 3D reconstructed large real-world environments (homes and offices) that you can download and use with iGibson. All of them will be accessible once you fill in this [form](https://forms.gle/36TW9uVpjrE1Mkf9A).
There are several datasets of 3D reconstructed large real-world environments (homes and offices) that you can download and use with iGibson. All of them will be accessible once you fill in this <a href="https://forms.gle/36TW9uVpjrE1Mkf9A" target="_blank">[form]</a>.
You will have access to ten environments with annotated instances of furniture (chairs, tables, desks, doors, sofas) that can be interacted with, and to the original 572 reconstructed 3D environments without annotated objects from [Gibson v1](http://github.com/StanfordVL/GibsonEnv/).
Additionally, with the iGibson v1.0 release, you will have access to 15 fully interactive scenes (100+ rooms) that can be used in simulation, along with 500+ object models. The main features of these scenes are listed below.
You will also have access to a [fully annotated environment: Rs_interactive](https://storage.googleapis.com/gibson_scenes/Rs_interactive.tar.gz) where close to 200 articulated objects are placed in their original locations of a real house and ready for interaction. ([The original environment: Rs](https://storage.googleapis.com/gibson_scenes/Rs.tar.gz) is also available). More info can be found in the [installation guide](http://svl.stanford.edu/igibson/docs/installation.html).
- Scenes are the result of converting 3D reconstructions of real homes into fully interactive, simulatable environments.
- Each scene corresponds to one floor of a real-world home. The scenes are annotated with bounding box location and size of different objects, mostly furniture, e.g. cabinets, doors, stoves, tables, chairs, beds, showers, toilets, sinks...
- Scenes include layout information (occupancy, semantics)
- Each scene's lighting effect is designed manually, and the texture of the building elements (walls, floors, ceilings) is baked offline with high-performance ray tracing
- Scenes are defined in iGSDF (iGibson Scene Definition Format), an extension of URDF, and shapes are OBJ files with associated materials
For instructions to install iGibson and download dataset, you can visit [installation guide](http://svl.stanford.edu/igibson/docs/installation.html).
# VR Information
@ -154,4 +176,4 @@ Have fun in VR!
Helpful tips:
Press ESCAPE to force the fullscreen rendering window to close during program execution.
Before using SRAnipal eye tracking, you may want to re-calibrate the eye tracker. Please go to the Vive system settings to perform this calibration.
Before using SRAnipal eye tracking, you may want to re-calibrate the eye tracker. Please go to the Vive system settings to perform this calibration.

View File

@ -6,3 +6,5 @@ iGibson uses code from a few open source repositories. Without the efforts of th
- Syoyo Fujita: [tinyobjloader](https://github.com/syoyo/tinyobjloader)
- Erwin Coumans: [egl_example](https://github.com/erwincoumans/egl_example)
- Caelan Garrett: [ss-pybullet](https://github.com/caelan/ss-pybullet)
- Sean Barrett: [stb](https://github.com/nothings/stb)
- Michał Siejak: [PBR](https://github.com/Nadrin/PBR)

View File

@ -1,64 +0,0 @@
# Algorithms
### Overview
iGibson can be used with any algorithm (from optimal control to model-free reinforcement learning) that accommodates the OpenAI gym interface. Feel free to use your favorite algorithms and deep learning frameworks.
### Examples
#### TF-Agents
In this example, we show an environment wrapper of [TF-Agents](https://github.com/tensorflow/agents) for iGibson and an example training code for [SAC agent](https://arxiv.org/abs/1801.01290). The code can be found in [our fork of TF-Agents](https://github.com/StanfordVL/agents/): [agents/blob/gibson_sim2real/tf_agents/environments/suite_gibson.py](https://github.com/StanfordVL/agents/blob/gibson_sim2real/tf_agents/environments/suite_gibson.py) and [agents/blob/gibson_sim2real/tf_agents/agents/sac/examples/v1/train_single_env.sh](https://github.com/StanfordVL/agents/blob/gibson_sim2real/tf_agents/agents/sac/examples/v1/train_single_env.sh).
```python
def load(config_file,
scene_id=None,
env_type='gibson',
sim2real_track='static',
env_mode='headless',
action_timestep=1.0 / 5.0,
physics_timestep=1.0 / 40.0,
device_idx=0,
random_position=False,
random_height=False,
gym_env_wrappers=(),
env_wrappers=(),
spec_dtype_map=None):
config_file = os.path.join(os.path.dirname(gibson2.__file__), config_file)
if env_type == 'gibson':
if random_position:
env = NavigateRandomEnv(config_file=config_file,
mode=env_mode,
action_timestep=action_timestep,
physics_timestep=physics_timestep,
device_idx=device_idx,
random_height=random_height)
else:
env = NavigateEnv(config_file=config_file,
mode=env_mode,
action_timestep=action_timestep,
physics_timestep=physics_timestep,
device_idx=device_idx)
elif env_type == 'gibson_sim2real':
env = NavigateRandomEnvSim2Real(config_file=config_file,
mode=env_mode,
action_timestep=action_timestep,
physics_timestep=physics_timestep,
device_idx=device_idx,
track=sim2real_track)
else:
assert False, 'unknown env_type: {}'.format(env_type)
discount = env.discount_factor
max_episode_steps = env.max_step
return wrap_env(
env,
discount=discount,
max_episode_steps=max_episode_steps,
gym_env_wrappers=gym_env_wrappers,
time_limit_wrapper=wrappers.TimeLimit,
env_wrappers=env_wrappers,
spec_dtype_map=spec_dtype_map,
auto_reset=True
)
```

View File

@ -1,29 +0,0 @@
Env
=============
.. automodule:: gibson2.envs
.. automodule:: gibson2.envs.base_env
BaseEnv
------------
.. autoclass:: gibson2.envs.base_env.BaseEnv
.. automethod:: __init__
.. automethod:: reload
.. automethod:: load
.. automethod:: clean
.. automethod:: simulator_step
.. automodule:: gibson2.envs.locomotor_env
NavigateEnv
-------------
.. autoclass:: gibson2.envs.locomotor_env.NavigateEnv
.. automethod:: __init__
.. automethod:: step
.. automethod:: reset

View File

@ -1,57 +0,0 @@
MeshRenderer
==============
.. automodule:: gibson2.core.render
.. automodule:: gibson2.core.render.mesh_renderer
.. automodule:: gibson2.core.render.mesh_renderer.mesh_renderer_cpu
MeshRenderer
--------------
.. autoclass:: gibson2.core.render.mesh_renderer.mesh_renderer_cpu.MeshRenderer
.. automethod:: __init__
.. automethod:: setup_framebuffer
.. automethod:: load_object
.. automethod:: add_instance
.. automethod:: add_instance_group
.. automethod:: add_robot
.. automethod:: set_camera
.. automethod:: set_fov
.. automethod:: get_intrinsics
.. automethod:: readbuffer
.. automethod:: render
.. automethod:: clean
.. automethod:: release
MeshRendererG2G
----------------
.. autoclass:: gibson2.core.render.mesh_renderer.mesh_renderer_tensor.MeshRendererG2G
.. automethod:: __init__
.. automethod:: render
VisualObject
--------------
.. autoclass:: gibson2.core.render.mesh_renderer.mesh_renderer_cpu.VisualObject
.. automethod:: __init__
InstanceGroup
--------------
.. autoclass:: gibson2.core.render.mesh_renderer.mesh_renderer_cpu.InstanceGroup
.. automethod:: __init__
.. automethod:: render
Instance
--------------
.. autoclass:: gibson2.core.render.mesh_renderer.mesh_renderer_cpu.Instance
.. automethod:: __init__
.. automethod:: render

View File

@ -1,53 +0,0 @@
Object
==========
YCBObject
--------------
.. autoclass:: gibson2.core.physics.interactive_objects.YCBObject
.. automethod:: __init__
ShapeNetObject
--------------
.. autoclass:: gibson2.core.physics.interactive_objects.ShapeNetObject
.. automethod:: __init__
Pedestrian
--------------
.. autoclass:: gibson2.core.physics.interactive_objects.Pedestrian
.. automethod:: __init__
VisualMarker
--------------
.. autoclass:: gibson2.core.physics.interactive_objects.VisualMarker
.. automethod:: __init__
CubeObject
--------------
.. autoclass:: gibson2.objects.cube.CubeObject
.. automethod:: __init__
ArticulatedObject
--------------
.. autoclass:: gibson2.articulated_object.ArticulatedObject
.. automethod:: __init__
RBOObject
--------------
.. autoclass:: gibson2.articulated_object.RBOObject
.. automethod:: __init__

View File

@ -1,115 +0,0 @@
Robot
================
BaseRobot
--------------
.. autoclass:: gibson2.core.physics.robot_bases.BaseRobot
.. automethod:: __init__
.. automethod:: load
.. automethod:: parse_robot
BodyPart
----------
.. autoclass:: gibson2.core.physics.robot_bases.BodyPart
.. automethod:: __init__
Joint
------------
.. autoclass:: gibson2.core.physics.robot_bases.Joint
.. automethod:: __init__
LocomotorRobot
-------------------
.. autoclass:: gibson2.robots.robot_locomotor.LocomotorRobot
.. automethod:: __init__
Robot Implementations
----------------------
Ant
^^^^^^^^^^
.. autoclass:: gibson2.robots.ant_robot.Ant
.. automethod:: __init__
Humanoid
^^^^^^^^^^
.. autoclass:: gibson2.robots.humanoid_robot.Humanoid
.. automethod:: __init__
Husky
^^^^^^^^^^
.. autoclass:: gibson2.robots.husky_robot.Husky
.. automethod:: __init__
Quadrotor
^^^^^^^^^^
.. autoclass:: gibson2.robots.quadrotor_robot.Quadrotor
.. automethod:: __init__
Turtlebot
^^^^^^^^^^
.. autoclass:: gibson2.robots.turtlebot_robot.Turtlebot
.. automethod:: __init__
Freight
^^^^^^^^^^
.. autoclass:: gibson2.robots.freight_robot.Freight
.. automethod:: __init__
Fetch
^^^^^^^^^^
.. autoclass:: gibson2.robots.fetch_robot.Fetch
.. automethod:: __init__
JR2
^^^^^^^^^^
.. autoclass:: gibson2.robots.jr2_robot.JR2
.. automethod:: __init__
JR2_Kinova
^^^^^^^^^^
.. autoclass:: gibson2.robots.jr2_kinova_robot.JR2_Kinova
.. automethod:: __init__
Locobot
^^^^^^^^^^
.. autoclass:: gibson2.robots.locobot_robot.Locobot
.. automethod:: __init__

View File

@ -1,20 +0,0 @@
Scene
=============
Scene manages the environment where the agent trains. We include a `StadiumScene` and a `BuildingScene`.
.. automodule:: gibson2.core.physics.scene
StadiumScene
--------------
.. autoclass:: gibson2.core.physics.scene.StadiumScene
.. automethod:: load
BuildingScene
--------------
.. autoclass:: gibson2.core.physics.scene.BuildingScene
.. automethod:: __init__
.. automethod:: load

View File

@ -1,22 +0,0 @@
Simulator
============
.. automodule:: gibson2.core.simulator
Simulator
------------
.. autoclass:: gibson2.core.simulator.Simulator
.. automethod:: __init__
.. automethod:: set_timestep
.. automethod:: add_viewer
.. automethod:: reload
.. automethod:: load
.. automethod:: import_scene
.. automethod:: import_object
.. automethod:: import_robot
.. automethod:: import_object
.. automethod:: step
.. automethod:: update_position
.. automethod:: isconnected
.. automethod:: disconnect

View File

@ -3,6 +3,7 @@
## Introduction
Assets include the necessary files for constructing a scene in the iGibson simulator: robot models, interactive objects, articulated objects and mesh files for tests. These files are too large to include in a version control system, so we distribute them separately. The assets can be downloaded to the path set in `your_installation_path/gibson2/global_config.yaml` (default: `your_installation_path/gibson2/assets`) by running
```bash
python -m gibson2.utils.assets_utils --download_assets
```
@ -44,7 +45,7 @@ assets
## Models
The robots folders correspond to [robot](robots.md) models.
The robots folders correspond to [robot](./robots.md) models.
| Agent Name | Folder |
|:-------------: | :-------------: |
@ -59,7 +60,7 @@ The robots folders correspond to [robot](robots.md) models.
| JackRabbot | `jr2_urdf` |
| LocoBot | `locobot` |
We also include [YCB objects](http://www.ycbbenchmarks.com/object-models/) in `ycb` folder, [RBO models](https://tu-rbo.github.io/articulated-objects/) in `rbo` folder, and a few commonly used primitives for home environments such as doors (in `scene_components`) and cabinets (in `cabinet` and `cabinet2`). You can refer to [objects](objects.md) page to see how to use these models in gibson scenes. Don't forget to cite related papers when using these assets.
We also include [YCB objects](http://www.ycbbenchmarks.com/object-models/) in `ycb` folder, [RBO models](https://tu-rbo.github.io/articulated-objects/) in `rbo` folder, and a few commonly used primitives for home environments such as doors (in `scene_components`) and cabinets (in `cabinet` and `cabinet2`). You can refer to [objects](./objects.md) page to see how to use these models in gibson scenes. Don't forget to cite related papers when using these assets.
## Pretrained network

View File

@ -1,12 +1,40 @@
Dataset
==========================================
This page covers two parts. First, we introduce the new iGibson dataset included in this release. Second, we explain how to download the previous Gibson dataset, which has been updated to be compatible with iGibson.
- [Download iGibson Data](#download-igibson-data)
- [Download Gibson Data](#download-gibson-data)
Download iGibson Data
------------------------
The link will first take you to the license agreement and then to the data.
We annotate fifteen 3D reconstructions of real-world scans and convert them into fully interactive scene models. In this process, we respect the original object-instance layout and object-category distribution. The object models are extended from open-source datasets ([ShapeNet Dataset](https://www.shapenet.org/), [Motion Dataset](http://motiondataset.zbuaa.com/), [SAPIEN Dataset](https://sapien.ucsd.edu/)) enriched with annotations of material and dynamic properties.
[[ Get download link for iGibson Data ]](https://forms.gle/36TW9uVpjrE1Mkf9A)
The fifteen fully interactive models are visualized below.
![placeholder.jpg](images/ig_scene.png)
#### Download Instruction
To download the dataset, you need to first configure where the dataset is to be stored. You can change it in `your_installation_path/gibson2/global_config.yaml` (default and recommended: `ig_dataset: your_installation_path/gibson2/data/ig_dataset`). iGibson scenes can be downloaded with one single line:
```bash
python -m gibson2.utils.assets_utils --download_ig_dataset
```
#### Dataset Format
The new dataset format can be found [here](https://github.com/StanfordVL/iGibson/tree/master/gibson2/utils/data_utils).
#### Cubicasa / 3D Front Dataset
We provide support for the Cubicasa and 3D Front datasets. To import them into iGibson, follow the guide [here](https://github.com/StanfordVL/iGibson/tree/master/gibson2/utils/data_utils/ext_scene).
Download Gibson Data
------------------------
The original Gibson Environment dataset has been updated for use with the iGibson simulator. The link will first take you to the license agreement and then to the data.
<a href="https://forms.gle/36TW9uVpjrE1Mkf9A" target="_blank">[[ Get download link for Gibson Data ]]</a>.
License Note: The dataset license is included in the above link. The license in this repository covers only the provided software.
@ -14,50 +42,25 @@ Files included in this distribution:
1. All scenes, 572 scenes (108GB): gibson_v2_all.tar.gz
2. 4+ partition, 106 scenes, with textures better packed (2.6GB): gibson_v2_4+.tar.gz
3. 10 scenes with interactive objects, 10 Scenes (<1GB): interactive_dataset.tar.gz
4. Demo scenes, `Rs` and `Rs_interactive`
To download 1, 2 and 3, you need to fill in the agreement and get the download link `URL`, after which you can manually download and store them in the path set in `your_installation_path/gibson2/global_config.yaml` (default and recommended: `your_installation_path/gibson2/dataset`). You can run a single command to download the dataset; the script automatically downloads, decompresses, and puts the dataset in the correct place.
3. Demo scene `Rs`
To download 1 and 2, you need to fill in the agreement and get the download link `URL`, after which you can manually download and store them in the path set in `your_installation_path/gibson2/global_config.yaml` (default and recommended: `dataset: your_installation_path/gibson2/data/g_dataset`). You can run a single command to download the dataset; the script automatically downloads, decompresses, and puts the dataset in the correct place.
```bash
python -m gibson2.utils.assets_utils --download_dataset URL
```
To download 4, you can run:
To download 3, you can run:
```bash
python -m gibson2.utils.assets_utils --download_demo_data
```
New Interactive Gibson Environment Dataset
--------------------------------------------------
Using a semi-automatic pipeline introduced in our [ICRA20 paper](https://ieeexplore.ieee.org/document/8954627), we annotated five object categories (chairs, desks, doors, sofas, and tables) in ten buildings (more coming soon!).
### Original Gibson Environment Dataset Description (Non-interactive)
Replaced objects are visualized in these topdown views:
![topdown.jpg](images/topdown.jpg)
#### Dataset Format
The dataset format is similar to the original Gibson dataset, with the addition of a cleaned scene mesh, floor planes and replaced objects. Files in one folder are listed as below:
```
mesh_z_up.obj # 3d mesh of the environment, it is also associated with an mtl file and a texture file, omitted here
mesh_z_up_cleaned.obj # 3d mesh of the environment, with annotated furnitures removed
alignment_centered_{}.urdf # replaced furniture models as urdf files
pos_{}.txt # xyz position to load above urdf models
floors.txt # floor height
plane_z_up_{}.obj # floor plane for each floor, used for filling holes
floor_render_{}.png # top down views of each floor
floor_{}.png # top down views of obstacles for each floor
floor_trav_{}.png # top down views of traversable areas for each floor
```
Original Gibson Environment Dataset (Non-interactive)
-------------------------------------------------------
The original Gibson Environment dataset has been updated for use with the iGibson simulator.
Full Gibson Environment Dataset consists of 572 models and 1440 floors. We cover a diverse set of models including households, offices, hotels, venues, museums, hospitals, construction sites, etc. A diverse set of visualization of all spaces in Gibson can be seen [here](http://gibsonenv.stanford.edu/database/).

View File

@ -2,43 +2,93 @@
### Overview
We provide **Environments** that follow the [OpenAI gym](https://github.com/openai/gym) interface for applications such as reinforcement learning algorithms. Generally speaking, an **Environment** instantiates **Scene**, **Object** and **Robot** and import them into its **Simulator**. An **Environment** can also be interpreted as a task definition, which includes observation_space, action space, reward, and termination condition. Most of the code can be found here:
[gibson2/envs/locomotor_env.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/envs/locomotor_env.py).
We provide **Environments** that follow the [OpenAI gym](https://github.com/openai/gym) interface for applications such as reinforcement learning algorithms. Generally speaking, an **Environment** instantiates **Scene**, **Object** and **Robot** and imports them into its **Simulator**. An **Environment** also instantiates a list of **Sensors** (usually as part of the observation space) and a **Task**, which further includes a list of **Reward Functions** and **Termination Conditions**.
#### Config
To instantiate an **Environment**, we first need to create a YAML config file. It will specify a number of parameters for the **Environment**, such as which scenes, robots, objects to load, what the sensor specs are, etc. Examples of config files can be found here: [examples/configs](https://github.com/StanfordVL/iGibson/tree/master/examples/configs).
Most of the code can be found here: [gibson2/envs/igibson_env.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/envs/igibson_env.py).
Here is one example: [examples/configs/turtlebot_p2p_nav.yaml](https://github.com/StanfordVL/iGibson/blob/master/examples/configs/turtlebot_p2p_nav.yaml)
#### Sensors
We provide different types of sensors as lightweight wrappers around the renderer. Currently we support RGB, surface normal, segmentation, 3D point cloud, depth map, optical flow, and scene flow, and also 1-beam and 16-beam LiDAR signals. Additionally, we provide a sensor noise model with random dropout (currently only for depth map and 1-beam LiDAR) to simulate real-world sensor noise. The amount of noise can be controlled by `depth_noise_rate` and `scan_noise_rate` in the config files. Contribution of more noise models is most welcome.
Most of the code can be found in [gibson2/sensors](https://github.com/StanfordVL/iGibson/tree/master/gibson2/sensors).
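To make the dropout idea concrete, here is a small illustrative sketch (not iGibson's actual implementation) of a rate-controlled dropout applied to a depth map; the helper name and the invalid value are assumptions for illustration only.
```python
import numpy as np

# Illustrative sketch of random-dropout sensor noise: a `noise_rate` fraction of
# readings is replaced with an invalid value before the observation is returned.
# This mirrors the idea behind `depth_noise_rate`/`scan_noise_rate`, not the
# library's exact code.
def apply_dropout_noise(obs, noise_rate, invalid_value=0.0):
    if noise_rate <= 0.0:
        return obs
    mask = np.random.random_sample(obs.shape) < noise_rate
    noisy_obs = obs.copy()
    noisy_obs[mask] = invalid_value
    return noisy_obs

# Example: drop roughly 5% of the readings of a 128x128 depth map.
depth = np.random.uniform(0.5, 3.0, size=(128, 128))
noisy_depth = apply_dropout_noise(depth, noise_rate=0.05)
```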
#### Tasks
Each **Task** should implement `reset_scene`, `reset_agent`, `step` and `get_task_obs`.
- `reset_scene` and `reset_agent` will be called during `env.reset` and should include task-specific details to reset the scene and the agent, respectively.
- `step` will be called during `env.step` and should include task-specific details of what needs to be done at every timestep.
- `get_task_obs` returns task-specific observation (non-sensory observation) as a numpy array. For instance, typical goal-oriented robotics tasks should include goal information and proprioceptive states in `get_task_obs`.
Each **Task** should also include a list of **Reward Functions** and **Termination Conditions** defined below.
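As an illustration of this interface, here is a hypothetical, self-contained skeleton of a point-goal-style task. The method signatures and the absence of the real base class are assumptions made for illustration; the actual implementations live in `gibson2/tasks`.
```python
import numpy as np

class MyPointGoalTask:
    """Hypothetical skeleton of the Task interface described above (illustration only)."""

    def __init__(self):
        self.target_pos = np.zeros(3)
        self.reward_functions = []        # list of reward terms (see below)
        self.termination_conditions = []  # list of termination conditions (see below)

    def reset_scene(self, env):
        # Task-specific scene reset, e.g. open doors or re-place task objects.
        pass

    def reset_agent(self, env):
        # Task-specific agent reset, e.g. sample new initial and target positions.
        self.target_pos = np.random.uniform(-5.0, 5.0, size=3)

    def step(self, env):
        # Task-specific per-timestep bookkeeping, e.g. move dynamic obstacles.
        pass

    def get_task_obs(self, env):
        # Non-sensory observation, e.g. goal location (plus, for a real robot,
        # proprioceptive states) returned as a numpy array.
        return self.target_pos.astype(np.float32)
```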
We provide a few Embodied AI tasks.
- PointGoalFixedTask
- PointGoalRandomTask
- InteractiveNavRandomTask
- DynamicNavRandomTask
- ReachingRandomTask
- RoomRearrangementTask
Most of the code can be found in [gibson2/tasks](https://github.com/StanfordVL/iGibson/tree/master/gibson2/tasks).
#### Reward Functions
At each timestep, `env.step` will call `task.get_reward`, which in turn sums up all the reward terms.
We provide a few common reward functions for robotics tasks.
- PointGoalReward
- ReachingGoalReward
- PotentialReward
- CollisionReward
Most of the code can be found in [gibson2/reward_functions](https://github.com/StanfordVL/iGibson/tree/master/gibson2/reward_functions).
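The summation step can be pictured with a short hypothetical sketch; the class name, the `get_reward(task, env)` signature and the `env` attribute used here are assumptions for illustration, not the library's exact API.
```python
class CollisionPenalty:
    """Hypothetical reward term: a fixed penalty whenever the robot is in collision."""

    def __init__(self, penalty=-0.1):
        self.penalty = penalty

    def get_reward(self, task, env):
        # `collision_links` is assumed to hold the collisions of the last step.
        in_collision = len(getattr(env, 'collision_links', [])) > 0
        return self.penalty if in_collision else 0.0


def sum_reward_terms(task, env):
    # Conceptually, task.get_reward reduces to summing every configured term.
    return sum(term.get_reward(task, env) for term in task.reward_functions)
```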
#### Termination Conditions
At each timestep, `env.step` will call `task.get_termination`, which in turn checks each of the termination conditions to see if the episode is done and/or successful.
We provide a few common termination conditions for robotics tasks.
- PointGoal
- ReachingGoal
- MaxCollision
- Timeout
- OutOfBound
Most of the code can be found in [gibson2/termination_conditions](https://github.com/StanfordVL/iGibson/tree/master/gibson2/termination_conditions).
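A termination condition follows the same pattern; the sketch below is hypothetical (names, attributes and the `(done, success)` return convention are assumptions) and only illustrates how the per-condition checks could be aggregated.
```python
class Timeout:
    """Hypothetical termination condition: end the episode after a step budget."""

    def __init__(self, max_step=500):
        self.max_step = max_step

    def get_termination(self, task, env):
        done = env.current_step >= self.max_step  # `current_step` is assumed here
        success = False                           # timing out is never a success
        return done, success


def check_termination(task, env):
    # Conceptually, task.get_termination aggregates all configured conditions.
    done, success = False, False
    for condition in task.termination_conditions:
        d, s = condition.get_termination(task, env)
        done = done or d
        success = success or s
    return done, success
```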
#### Configs
To instantiate an **Environment**, we first need to create a YAML config file. It will specify parameters for the **Environment** (e.g. robot type, action frequency, etc), the **Sensors** (e.g. sensor types, image resolution, noise rate, etc), the **Task** (e.g. task type, goal distance range, etc), the **Reward Functions** (e.g. reward types, reward scale, etc) and the **Termination Conditions** (e.g. goal convergence threshold, time limit, etc). Examples of config files can be found here: [examples/configs](https://github.com/StanfordVL/iGibson/tree/master/examples/configs).
Here is one example: [examples/configs/turtlebot_point_nav.yaml](https://github.com/StanfordVL/iGibson/blob/master/examples/configs/turtlebot_point_nav.yaml)
```yaml
# scene
scene: building
scene_id: Rs
scene: igibson
scene_id: Rs_int
build_graph: true
load_texture: true
pybullet_load_texture: true
trav_map_type: no_obj
trav_map_resolution: 0.1
trav_map_erosion: 2
should_open_all_doors: true
# domain randomization
texture_randomization_freq: null
object_randomization_freq: null
# robot
robot: Turtlebot
is_discrete: false
velocity: 1.0
# task, observation and action
task: pointgoal # pointgoal|objectgoal|areagoal|reaching
# task
task: point_nav_random
target_dist_min: 1.0
target_dist_max: 10.0
initial_pos_z_offset: 0.1
is_discrete: false
additional_states_dim: 4
goal_format: polar
task_obs_dim: 4
# reward
reward_type: geodesic
success_reward: 10.0
slack_reward: -0.01
potential_reward_weight: 1.0
collision_reward_weight: -0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# discount factor
discount_factor: 0.99
@ -47,16 +97,19 @@ discount_factor: 0.99
dist_tol: 0.36 # body width
max_step: 500
max_collisions_allowed: 500
goal_format: polar
# misc config
initial_pos_z_offset: 0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# sensor spec
output: [sensor, rgb, depth, scan]
output: [task_obs, rgb, depth, scan]
# image
# ASUS Xtion PRO LIVE
# https://www.asus.com/us/3D-Sensor/Xtion_PRO_LIVE
fisheye: false
image_width: 640
image_height: 480
image_width: 160
image_height: 120
vertical_fov: 45
# depth
depth_low: 0.8
@ -79,39 +132,43 @@ scan_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: false
```
Parameters of this config file is explained below:
Parameters of this config file are explained below:
| Attribute | Example Value | Explanation |
| ----------| ------------- | ------------ |
| scene | building | which type of scene: [empty, stadium, building] |
| scene_id | Rs | scene_id for the building scene |
| scene | igibson | which type of scene: [empty, stadium, gibson, igibson] |
| scene_id | Rs_int | scene_id for the gibson or igibson scene |
| build_graph | true | whether to build traversability graph for the building scene |
| load_texture | true | whether to load texture into MeshRenderer. Can be set to false if RGB is not needed |
| pybullet_load_texture | true | whether to load texture into PyBullet, for debugging purpose only |
| trav_map_resolution | 0.1 | resolution of the traversability map. 0.1 means each pixel represents 0.1 meter |
| trav_map_erosion | 2 | number of pixels to erode the traversability map. trav_map_resolution * trav_map_erosion should be almost equal to the radius of the robot base |
| should_open_all_doors | True | whether to open all doors in the scene during episode reset (e.g. useful for cross-room navigation tasks) |
| texture_randomization_freq | null | whether to perform material/texture randomization (null means no randomization, 10 means randomize every 10 episodes) |
| object_randomization_freq | null | whether to perform object randomization (null means no randomization, 10 means randomize every 10 episodes) |
| robot | Turtlebot | which type of robot, e.g. Turtlebot, Fetch, Locobot, etc |
| is_discrete | false | whether to use discrete action space for the robot |
| velocity | 1.0 | maximum normalized joint velocity. 0.5 means maximum robot action will actuate half of maximum joint velocities that are allowed in the robot URDF file |
| task | pointgoal | which type of task, e.g. pointgoal, objectgoal, etc |
| task | point_nav_random | which type of task, e.g. point_nav_random, room_rearrangement, etc |
| target_dist_min | 1.0 | minimum distance (in meters) between the initial and target positions for the navigation task |
| target_dist_max | 10.0 | maximum distance (in meters) between the initial and target positions for the navigation task |
| initial_pos_z_offset | 0.1 | z-offset (in meters) when placing the robots and the objects to accommodate uneven floor surface |
| additional_states_dim | 4 | the dimension of proprioceptive observation such as odometry and joint states. It should exactly match the dimension of the output of `get_additional_states()` |
| goal_format | polar | which format to represent the navigation goals: [polar, cartesian] |
| task_obs_dim | 4 | the dimension of task-specific observation returned by task.get_task_obs |
| reward_type | geodesic | which type of reward: [geodesic, l2, sparse], or define your own |
| success_reward | 10.0 | scaling factor of the success reward |
| slack_reward | -0.01 | scaling factor of the slack reward (negative because it should be a penalty) |
| potential_reward_weight | 1.0 | scaling factor of the potential reward |
| collision_reward_weight | -0.1 | scaling factor of the collision reward (negative because it should be a penalty) |
| collision_ignore_link_a_ids | [1, 2, 3, 4] | collision with these robot links will not result in collision penalty. These usually are links of wheels and caster wheels of the robot |
| discount_factor | 0.99 | discount factor for the MDP |
| dist_tol | 0.36 | the distance tolerance for converging to the navigation goal. This is usually equal to the diameter of the robot base |
| max_step | 500 | maximum number of timesteps allowed in an episode |
| max_collisions_allowed | 500 | maximum number of timesteps with robot collision allowed in an episode |
| goal_format | polar | which format to represent the navigation goals: [polar, cartesian] |
| output | [sensor, rgb, depth, scan] | what observation space is. sensor means proprioceptive info, rgb and depth mean RGBD camera sensing, scan means LiDAR sensing |
| initial_pos_z_offset | 0.1 | z-offset (in meters) when placing the robots and the objects to accommodate uneven floor surface |
| collision_ignore_link_a_ids | [1, 2, 3, 4] | collision with these robot links will not result in collision penalty. These usually are links of wheels and caster wheels of the robot |
| output | [task_obs, rgb, depth, scan] | what observation space is. sensor means task-specific, non-sensory information (e.g. goal info, proprioceptive state), rgb and depth mean RGBD camera sensing, scan means LiDAR sensing |
| fisheye | false | whether to use fisheye camera |
| image_width | 640 | image width for the camera |
| image_height | 480 | image height for the camera |
@ -129,23 +186,10 @@ Parameters of this config file is explained below:
| visual_object_at_initial_target_pos | true | whether to show visual markers for the initial and target positions |
| target_visual_object_visible_to_agent | false | whether these visual markers are visible to the agents |
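Since the config is plain YAML, it is also easy to inspect or tweak these parameters programmatically. The snippet below is a minimal sketch (the file path is just an example); in practice you would usually copy the YAML file, edit it, and pass its path to the environment.
```python
import yaml

# Load an example config, override a couple of parameters and inspect the result.
with open('examples/configs/turtlebot_point_nav.yaml') as f:
    config = yaml.safe_load(f)

config['image_width'] = 320   # render wider RGB/depth images
config['max_step'] = 250      # shorter episodes
print(config['task'], config['output'])
```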
#### Task Definition
The main **Environment** classes (`NavigateEnv` and `NavigateRandomEnv`) that use the YAML config files can be found here: [gibson2/envs/locomotor_env.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/envs/locomotor_env.py).
`NavigateEnv` provides an environment to train PointGoal navigation task for fixed locations. `NavigateRandomEnv` builds on top of `NavigateEnv` and includes a mechanism to randomly sample initial and target positions. Following the OpenAI gym convention, they can be readily used to train RL agents.
It's also fairly straightforward to customize your own environment.
- Inherit `NavigateEnv` or `NavigateRandomEnv` and reuse as much functionality as possible.
- Want to change the observation space? Modify `load_observation_space`, `get_state` and its helper functions.
- Want to change reward function? Modify `get_reward`.
- Want to change termination condition? Modify `get_termination`.
- Want to modify episode reset logic? Modify `reset` and `reset_agent`.
- Want to add additional objects or robots into the scene? Check out `load_interactive_objects` and `load_dynamic_objects` in `NavigateRandomEnvSim2Real`. If these are brand-new objects and robots that are not in iGibson yet, you might also need to change [gibson2/robots/robot_locomotor.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/robots/robot_locomotor.py) and [gibson2/physics/interactive_objects.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/physics/interactive_objects.py).
### Examples
#### Static Environments
In this example, we show how to instantiate `NavigateRandomEnv` and how to step through the environment. At the beginning of each episode, we need to call `nav_env.reset()`. Then we need to call `nav_env.step(action)` to step through the environment and retrieve `(state, reward, done, info)`.
In this example, we show how to instantiate `iGibsonEnv` and how to step through the environment. At the beginning of each episode, we need to call `env.reset()`. Then we need to call `env.step(action)` to step through the environment and retrieve `(state, reward, done, info)`.
- `state`: a python dictionary of observations, e.g. `state['rgb']` will be a H x W x 3 numpy array that represents the current image
- `reward`: a scalar that represents the current reward
- `done`: a boolean that indicates whether the episode should terminate
@ -153,45 +197,35 @@ In this example, we show how to instantiate `NavigateRandomEnv` and how to step
The code can be found here: [examples/demo/env_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/env_example.py).
```python
from gibson2.envs.locomotor_env import NavigationEnv, NavigationRandomEnv
from time import time
import numpy as np
from gibson2.envs.igibson_env import iGibsonEnv
from time import time
import gibson2
import os
from gibson2.render.profiler import Profiler
import logging
def main():
config_filename = os.path.join(os.path.dirname(gibson2.__file__),
'../examples/configs/turtlebot_demo.yaml')
nav_env = NavigateRandomEnv(config_file=config_filename, mode='gui')
config_filename = os.path.join(
os.path.dirname(gibson2.__file__),
'../examples/configs/turtlebot_demo.yaml')
env = iGibsonEnv(config_file=config_filename, mode='gui')
for j in range(10):
nav_env.reset()
env.reset()
for i in range(100):
with Profiler('Env action step'):
action = nav_env.action_space.sample()
state, reward, done, info = nav_env.step(action)
with Profiler('Environment action step'):
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print("Episode finished after {} timesteps".format(i + 1))
logging.info(
"Episode finished after {} timesteps".format(i + 1))
break
env.close()
if __name__ == "__main__":
main()
```
You actually have already run this in [Quickstart](quickstart.md)!
#### Interactive Environments
In this example, we show how to instantiate `NavigateRandomEnv` with an interactive scene `Placida`. In this scene, the robot can interact with many objects (chairs, tables, couches, etc) by pushing them around. The code can be found here: [examples/demo/env_interactive_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/env_interactive_example.py).
#### Customized Environments
In this example, we show a customized environment `NavigateRandomEnvSim2Real` that builds on top of `NavigateRandomEnv`. We created this environment for [our CVPR2020 Sim2Real Challenge with iGibson](http://svl.stanford.edu/igibson/challenge.html). You should consider participating. :)
Here are the customizations that we did:
- We added a new robot `Locobot` to [gibson2/physics/robot_locomotors.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/physics/robot_locomotors.py)
- We added additional objects into the scene: `load_interactive_objects` in `NavigateRandomEnvSim2Real`
- We added dynamic objects (another Turtlebot) into the scene: `reset_dynamic_objects` and `step_dynamic_objects` in `NavigateRandomEnvSim2Real`
The code can be found here: [gibson2/envs/locomotor_env.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/envs/locomotor_env.py) and [examples/demo/env_customized_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/env_customized_example.py).
In this example, we show how to instantiate `iGibsonEnv` with a fully interactive scene `Rs_int`. In this scene, the robot can interact with all the objects in the scene (chairs, tables, couches, etc). The code can be found here: [examples/demo/env_interactive_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/env_interactive_example.py).

Binary image files changed (not shown). New files include docs/images/ig_scene.png (1.8 MiB), docs/images/pbr_render.png (1.0 MiB) and docs/images/viewer.png (2.4 MiB); several other images were added or updated.

View File

@ -27,8 +27,9 @@ Welcome to iGibson's documentation!
objects.md
robots.md
simulators.md
viewer.md
environments.md
algorithms.md
learning_framework.md
ros_integration.md
tests.md
@ -36,12 +37,7 @@ Welcome to iGibson's documentation!
:maxdepth: 1
:caption: API
api_mesh_renderer.rst
api_simulator.rst
api_envs.rst
api_scene.rst
api_robot.rst
api_object.rst
apidoc/modules.rst
.. toctree::
:maxdepth: 1

View File

@ -11,6 +11,7 @@ The minimum system requirements are the following:
- Nvidia GPU with VRAM > 6.0GB
- Nvidia driver >= 384
- CUDA >= 9.0, CuDNN >= v7
- CMake >= 2.8.12 (can install with `pip install cmake`)
Other system configurations may work, but we haven't tested them extensively and we probably won't be able to provide as much support as we want.
@ -23,12 +24,14 @@ We provide 3 methods to install the simulator.
iGibson's simulator can be installed as a python package using pip:
```bash
pip install gibson2
pip install gibson2 # This step takes about 4 minutes
# run the demo
python -m gibson2.envs.demo
python -m gibson2.envs.demo_interactive
python -m gibson2.scripts.demo_static
```
Note: we use a custom version of pybullet, so if you have a previous version of pybullet installed, it will not be compatible with iGibson. We recommend starting from a fresh virtualenv/conda environment. Since installing iGibson (`gibson2`) requires compiling pybullet, this step takes about 4 minutes.
### 2. Docker image
Docker provides an easy way to reproduce the development environment across platforms without manually installing the software dependencies. We have prepared docker images that contain everything you need to get started with iGibson.
@ -60,41 +63,76 @@ cd iGibson/docker/headless-gui
### 3. Compile from source
Alternatively, iGibson can be compiled from source: [iGibson GitHub Repo](https://github.com/StanfordVL/iGibson)
Alternatively, iGibson can be compiled from source: [iGibson GitHub Repo](https://github.com/StanfordVL/iGibson). First, you need to install anaconda following the guide on [their website](https://www.anaconda.com/).
```bash
git clone https://github.com/StanfordVL/iGibson --recursive
cd iGibson
conda create -n py3-igibson python=3.6 anaconda
conda create -n py3-igibson python=3.6 anaconda # we support python 3.5, 3.6, 3.7, 3.8
source activate py3-igibson
pip install -e .
pip install -e . # This step takes about 4 minutes
```
Note: we use a custom version of pybullet, so if you have a previous version of pybullet installed, it will not be compatible with iGibson. We recommend starting from a fresh virtualenv/conda environment. Since installing iGibson (`gibson2`) requires compiling pybullet, this step takes about 4 minutes.
We recommend the third method if you plan to modify iGibson in your project. If you plan to use it as it is to train navigation and manipulation agents, the pip installation or docker image should meet your requirements.
## Downloading the Assets
First, create a folder to contain all the iGibson's assets (robotic agents, objects, 3D environments, etc.) and set the path in `your_installation_path/gibson2/global_config.yaml` (default and recommended: `your_installation_path/gibson2/assets`).
First, configure where iGibson's assets (robotic agents, objects, 3D environments, etc.) are going to be stored. This is configured in `your_installation_path/gibson2/global_config.yaml`.
Second, you can download our robot models and objects from [here](https://storage.googleapis.com/gibson_scenes/assets_igibson.tar.gz) and unpack it in the assets folder.
To make things easier, the default place to store the data is:
```bash
assets_path: your_installation_path/gibson2/data/assets
g_dataset_path: your_installation_path/gibson2/data/g_dataset
ig_dataset_path: your_installation_path/gibson2/data/ig_dataset
threedfront_dataset_path: your_installation_path/gibson2/data/threedfront_dataset
cubicasa_dataset_path: your_installation_path/gibson2/data/cubicasa_dataset
```
Third, you need to download some large 3D reconstructed real-world environments (houses, offices) from [our dataset](dataset.md) for your agents to be trained in. Create a new folder for those environments and set the path in `your_installation_path/gibson2/global_config.yaml` (default and recommended: `your_installation_path/gibson2/dataset`). You can get access and download the full Gibson and iGibson (interactive furniture) datasets by filling up the following [license agreement](https://forms.gle/36TW9uVpjrE1Mkf9A). Alternatively, you can download a single [high quality small environment](https://storage.googleapis.com/gibson_scenes/Rs.tar.gz), R's, together with a [fully interactive version](https://storage.googleapis.com/gibson_scenes/Rs_interactive.tar.gz).
If you are happy with the default paths, you don't have to do anything; otherwise, you can run this script:
```bash
python -m gibson2.utils.assets_utils --change_data_path
```
The robot and object models, together with the R's interactive and non-interactive versions can be downloaded and extracted in the assets folder indicated in `your_installation_path/gibson2/global_config.yaml` with two commands:
Second, you can download our robot models and objects from [here](https://storage.googleapis.com/gibson_scenes/assets_igibson.tar.gz) and unpack it in the assets folder, or simply run this download script:
```bash
python -m gibson2.utils.assets_utils --download_assets
```
Third, you need to download some large 3D reconstructed real-world environments (e.g. houses and offices) from [our dataset](dataset.md) for your agents to be trained in. Create a new folder for those environments and set the path in `your_installation_path/gibson2/global_config.yaml` (default and recommended: `your_installation_path/gibson2/data/g_dataset` and `your_installation_path/gibson2/data/ig_dataset`). You can get access and download the Gibson and iGibson datasets by filling out the following [license agreement](https://forms.gle/36TW9uVpjrE1Mkf9A). In addition, you can download a single [high-quality small environment, Rs](https://storage.googleapis.com/gibson_scenes/Rs.tar.gz), for demo purposes.
To download the demo data, run:
```bash
python -m gibson2.utils.assets_utils --download_demo_data
```
The full Gibson and iGibson dataset can be downloaded using the following command, this script automatically download, decompress, and put the dataset to correct place. You will get `URL` after filling in the agreement form.
The full Gibson and iGibson datasets can be downloaded using the following commands; the script automatically downloads, decompresses, and puts the dataset in the correct place. You will get the `URL` after filling in the agreement form.
Download iGibson dataset
```bash
python -m gibson2.utils.assets_utils --download_ig_dataset
```
Download Gibson dataset ([agreement signing](https://forms.gle/36TW9uVpjrE1Mkf9A) required to get `URL`)
```bash
python -m gibson2.utils.assets_utils --download_dataset URL
```
## Testing
### Uninstalling
To test that gibson2 is properly installed, you can run
```bash
python
>> import gibson2
```
For a full suite of tests and benchmarks, you can refer to [tests](tests.md) for more details.
## Uninstalling
Uninstalling iGibson is easy: `pip uninstall gibson2`

View File

@ -9,31 +9,24 @@ iGibson, the Interactive Gibson Environment, is a simulation environment providi
If you use iGibson or its assets and models, consider citing the following publication:
```
@article{xia2020interactive,
title={Interactive Gibson Benchmark: A Benchmark for Interactive Navigation in Cluttered Environments},
author={Xia, Fei and Shen, William B and Li, Chengshu and Kasimbeg, Priya and Tchapmi, Micael Edmond and Toshev, Alexander and Mart{\'\i}n-Mart{\'\i}n, Roberto and Savarese, Silvio},
journal={IEEE Robotics and Automation Letters},
volume={5},
number={2},
pages={713--720},
year={2020},
publisher={IEEE}
@article{shenigibson,
title={iGibson, a Simulation Environment for Interactive Tasks in Large Realistic Scenes},
author={Shen, Bokui and Xia, Fei and Li, Chengshu and Mart{\i}n-Mart{\i}n, Roberto and Fan, Linxi and Wang, Guanzhi and Buch, Shyamal and DArpino, Claudia and Srivastava, Sanjana and Tchapmi, Lyne P and Vainio, Kent and Fei-Fei, Li and Savarese, Silvio},
journal={arXiv preprint},
year={2020}
}
```
### Code Release
The GitHub repository of iGibson can be found here: [iGibson GitHub Repo](https://github.com/StanfordVL/iGibson). Bug reports, suggestions for improvement, as well as community developments are encouraged and appreciated. The support for our previous version of the environment, [Gibson v1](http://github.com/StanfordVL/GibsonEnv/), will be moved there.
### Documentation
This is the documentation webpage for iGibson. It includes installation guide (including data download), quickstart guide, code examples, and APIs.
If you want to know more about iGibson, you can also check out [our webpage](http://svl.stanford.edu/igibson), [our RAL+ICRA20 paper](https://arxiv.org/abs/1910.14442) and [our (outdated) technical report](http://svl.stanford.edu/igibson/assets/gibsonv2paper.pdf).
If you want to know more about iGibson, you can also check out [our webpage](http://svl.stanford.edu/igibson), [our updated technical report](TBA) and [our RAL+ICRA20 paper](https://arxiv.org/abs/1910.14442).
### Downloading Dataset of 3D Environments
There are several datasets of 3D reconstructed large real-world environments (homes and offices) that you can download and use with iGibson. All of them will be accessible once you fill in this [form](https://forms.gle/36TW9uVpjrE1Mkf9A).
There are several datasets of 3D reconstructed large real-world environments (homes and offices) that you can download and use with iGibson. All of them will be accessible once you fill in this <a href="https://forms.gle/36TW9uVpjrE1Mkf9A" target="_blank">[form]</a>.
You will have access to ten environments with annotated instances of furniture (chairs, tables, desks, doors, sofas) that can be interacted with, and to the original 572 reconstructed 3D environments without annotated objects from [Gibson v1](http://github.com/StanfordVL/GibsonEnv/).
You will also have access to a [fully annotated environment: Rs_interactive](https://storage.googleapis.com/gibson_scenes/Rs_interactive.tar.gz) where close to 200 articulated objects are placed in their original locations of a real house and ready for interaction. ([The original environment: Rs](https://storage.googleapis.com/gibson_scenes/Rs.tar.gz) is also available). More info can be found in the [installation guide](installation.md).
You will have access to **fifteen** fully interactive environments populated with a wide variety of furniture and objects that can be actuated, and to the original 572 reconstructed, static 3D environments from [Gibson v1](http://github.com/StanfordVL/GibsonEnv/). We provide download utilities in the [installation guide](installation.md).

View File

@ -28,3 +28,12 @@ If the original installation doesn't work, try the following:
4. If you want to render in headless mode, make sure `$DISPLAY` environment variable is unset, otherwise you might have error `Failed to EGL with glad`, because EGL is sensitive to `$DISPLAY` environment variable.
Also, the EGL setup part is borrowed from Erwin Coumans [egl_example](https://github.com/erwincoumans/egl_example). It would be informative to see if that repository can run on your machine.
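One way to guarantee this from inside a Python script is to clear the variable before the renderer is created; this is just an illustrative workaround, not an official recommendation.
```python
import os

# Make sure headless EGL rendering is not confused by a forwarded X display.
os.environ.pop('DISPLAY', None)

# ...import gibson2 and create the simulator/renderer after this point...
```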
### Pybullet error
#### `ValueError: not enough values to unpack (expected 13, got 12)`
This is because we require a custom version of pybullet. If pybullet is already installed in your virtual/conda environment, you need to first uninstall it with `pip uninstall pybullet` and then reinstall iGibson with
```bash
pip install gibson2
```
or `pip install -e .` if you installed from source.

View File

@ -0,0 +1,44 @@
# Learning Frameworks
### Overview
iGibson can be used with any learning framework that accommodates the OpenAI gym interface. Feel free to use your favorite ones.
### Examples
#### TF-Agents
In this example, we show an environment wrapper of [TF-Agents](https://github.com/tensorflow/agents) for iGibson and an example training code for [SAC agent](https://arxiv.org/abs/1801.01290). The code can be found in [our fork of TF-Agents](https://github.com/StanfordVL/agents/): [agents/blob/igibson/tf_agents/environments/suite_gibson.py](https://github.com/StanfordVL/agents/blob/igibson/tf_agents/environments/suite_gibson.py) and [agents/blob/igibson/tf_agents/agents/sac/examples/v1/train_single_env.sh](https://github.com/StanfordVL/agents/blob/igibson/tf_agents/agents/sac/examples/v1/train_single_env.sh).
```python
def load(config_file,
model_id=None,
env_mode='headless',
action_timestep=1.0 / 10.0,
physics_timestep=1.0 / 40.0,
device_idx=0,
gym_env_wrappers=(),
env_wrappers=(),
spec_dtype_map=None):
config_file = os.path.join(os.path.dirname(gibson2.__file__), config_file)
env = iGibsonEnv(config_file=config_file,
scene_id=model_id,
mode=env_mode,
action_timestep=action_timestep,
physics_timestep=physics_timestep,
device_idx=device_idx)
discount = env.config.get('discount_factor', 0.99)
max_episode_steps = env.config.get('max_step', 500)
return wrap_env(
env,
discount=discount,
max_episode_steps=max_episode_steps,
gym_env_wrappers=gym_env_wrappers,
time_limit_wrapper=wrappers.TimeLimit,
env_wrappers=env_wrappers,
spec_dtype_map=spec_dtype_map,
auto_reset=True
)
```

View File

@ -7,13 +7,21 @@ We provide a wide variety of **Objects** that can be imported into the **Simulat
- `ShapeNetObject`
- `Pedestrian`
- `ArticulatedObject`
- `URDFObject`
- `SoftObject`
- `Cube`
- `VisualMarker`
- `VisualShape`
Typically, they take in the name or the path of an object (in `gibson2.assets_path`) and provide a `load` function that can be invoked externally (usually by `import_object` or `import_robot` of `Simulator`). The `load` function imports the object into PyBullet. Some **Objects** (e.g. `ArticulatedObject`) also provide APIs to get and set the object pose.
Most of the code can be found here: [gibson2/physics/interactive_objects.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/physics/interactive_objects.py).
Most of the code can be found here: [gibson2/objects](https://github.com/StanfordVL/iGibson/blob/master/gibson2/objects).
### Adding other objects to iGibson
We provide detailed instructions and scripts to import your own (non-articulated) objects into iGibson.
Instructions can be found here: [External Objects](https://github.com/StanfordVL/iGibson/blob/master/gibson2/utils/data_utils/ext_object).
### Examples
In this example, we import three objects into PyBullet, two of which are articulated objects. The code can be found here: [examples/demo/object_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/object_example.py).
@ -24,30 +32,34 @@ from gibson2.objects.articulated_object import ArticulatedObject
import gibson2
import os
import pybullet as p
import pybullet_data
import time
def main():
p.connect(p.GUI)
p.setGravity(0,0,-9.8)
p.setGravity(0, 0, -9.8)
p.setTimeStep(1./240.)
floor = os.path.join(pybullet_data.getDataPath(), "mjcf/ground_plane.xml")
p.loadMJCF(floor)
cabinet_0007 = os.path.join(gibson2.assets_path, 'models/cabinet2/cabinet_0007.urdf')
cabinet_0004 = os.path.join(gibson2.assets_path, 'models/cabinet/cabinet_0004.urdf')
cabinet_0007 = os.path.join(
gibson2.assets_path, 'models/cabinet2/cabinet_0007.urdf')
cabinet_0004 = os.path.join(
gibson2.assets_path, 'models/cabinet/cabinet_0004.urdf')
obj1 = ArticulatedObject(filename=cabinet_0007)
obj1.load()
obj1.set_position([0,0,0.5])
obj1.set_position([0, 0, 0.5])
obj2 = ArticulatedObject(filename=cabinet_0004)
obj2.load()
obj2.set_position([0,0,2])
obj2.set_position([0, 0, 2])
obj3 = YCBObject('003_cracker_box')
obj3.load()
p.resetBasePositionAndOrientation(obj3.body_id, [0,0,1.2], [0,0,0,1])
obj3.set_position_orientation([0, 0, 1.2], [0, 0, 0, 1])
for _ in range(24000): # at least 100 seconds
p.stepSimulation()
@ -58,6 +70,7 @@ def main():
if __name__ == '__main__':
main()
```
You can open the cabinet and the drawer by dragging your mouse over them. You can even put the cereal box into the drawer like this:

View File

@ -1,17 +1,16 @@
# Overview
Next, we will give an overview of iGibson and briefly explain the different layers of abstraction in our system. In general, the modules from one layer will use and instantiate those from the layer immediately below.
Next, we will give an overview of iGibson and briefly explain the different modules in our system.
![quickstart.png](images/overview.png)
At the bottom layer, we have **Dataset** and **Assets**. **Dataset** contains 3D reconstructed real-world environments. **Assets** contain models of robots and objects. The download guide can be found [here](installation.html#downloading-the-assets). More info can be found here: [Dataset](dataset.md) and [Assets](assets.md).
First of all, we have **Dataset** and **Assets**. **Dataset** contains 3D reconstructed real-world environments. **Assets** contain models of robots and objects. The download guide can be found [here](installation.html#downloading-the-assets). More info can be found here: [Dataset](dataset.md) and [Assets](assets.md).
In the next layer, we have **Renderer** and **PhysicsEngine**. These are the two pillars that ensure the visual and physics fidelity of iGibson. We developed our own MeshRenderer that supports customizable camera configuration and various image modalities, and renders at lightning speed. We use the open-sourced [PyBullet](http://www.pybullet.org/) as our underlying physics engine. It can simulate rigid body collision and joint actuation for robots and articulated objects in an accurate and efficient manner. Since we are using MeshRenderer for rendering and PyBullet for physics simulation, we need to keep them synchronized at all times. Our code has already handled this for you. More info can be found here: [Renderer](renderer.md) and [PhysicsEngine](physics_engine.md).
Next, we have **Renderer** and **PhysicsEngine**. These are the two pillars that ensure the visual and physics fidelity of iGibson. We developed our own MeshRenderer that supports customizable camera configuration, physics-based rendering (PBR) and various image modalities, and renders at lightning speed. We use the open-sourced [PyBullet](http://www.pybullet.org/) as our underlying physics engine. It can simulate rigid body collision and joint actuation for robots and articulated objects in an accurate and efficient manner. Since we are using MeshRenderer for rendering and PyBullet for physics simulation, we need to keep them synchronized at all times. Our code has already handled this for you. More info can be found here: [Renderer](renderer.md) and [PhysicsEngine](physics_engine.md).
In the next layer, we have **Scene**, **Object**, **Robot**, and **Simulator**. **Scene** loads 3D scene meshes from `gibson2.dataset_path`. **Object** loads interactable objects from `gibson2.assets_path`. **Robot** loads robots from `gibson2.assets_path`. **Simulator** maintains an instance of **Renderer** and **PhysicsEngine** and provides APIs to import **Scene**, **Object** and **Robot** into both of them and keep them synchronized at all times. More info can be found here: [Scene](scenes.md), [Object](objects.md), [Robot](robots.md) and [Simulator](simulators.md).
Furthermore, we have **Scene**, **Object**, **Robot**, and **Simulator**. **Scene** loads 3D scene meshes from `gibson2.g_dataset_path` or `gibson2.ig_dataset_path`. **Object** loads interactable objects from `gibson2.assets_path`. **Robot** loads robots from `gibson2.assets_path`. **Simulator** maintains an instance of **Renderer** and **PhysicsEngine** and provides APIs to import **Scene**, **Object** and **Robot** into both of them and keep them synchronized at all times. More info can be found here: [Scene](./scenes.md), [Object](./objects.md), [Robot](./robots.md), and [Simulator](simulators.md).
In the next layer, we have **Environment**. **Environment** follows the [OpenAI gym](https://github.com/openai/gym) convention and provides an API interface for applications such as **Algorithms** and **ROS**. **Environment** usually defines a task for an agent to solve, which includes observation_space, action space, reward, termination condition, etc. More info can be found here: [Environment](environments.md).
Moreover, we have **Task**, **Sensor** and **Environment**. **Task** defines the task setup and includes a list of **Reward Functions** and **Termination Conditions**. It also provides task-specific reset functions and task-relevant observation definitions. **Sensor** provides a light wrapper around **Renderer** to retrieve sensory observations. **Environment** follows the [OpenAI gym](https://github.com/openai/gym) convention and provides an API interface for external applications. More info can be found here: [Environment](environments.md).
In the top and final layer, we have **Algorithm** and **ROS**. **Algorithm** can be any algorithm (from optimal control to model-free reinforcement learning) that accommodates the OpenAI gym interface. We also provide tight integration with **ROS** that allows for evaluation and visualization of, say, the ROS Navigation Stack, in iGibson. More info can be found here: [Algorithm](algorithms.md) and [ROS](ros_integration.md).
Finally, any learning framework (e.g. RL, IL) or planning and control framework (e.g. ROS) can be used with **Environment** as long as it accommodates the OpenAI gym interface. We provide tight integration with **ROS** that allows for evaluation and visualization of, say, the ROS Navigation Stack, in iGibson. More info can be found here: [Learning Framework](learning_framework.md) and [ROS](ros_integration.md).
We highly recommend you go through each of the Modules below for more details and code examples.
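To make the layering concrete, here is a minimal, hedged sketch of the gym-style loop at the **Environment** level; the config path follows the bundled examples and the episode length is arbitrary:
```python
import os
import gibson2
from gibson2.envs.igibson_env import iGibsonEnv

# iGibsonEnv internally creates a Simulator (Renderer + PhysicsEngine) and loads
# the Scene, Robot, Task and Sensors specified in the YAML config
config_file = os.path.join(os.path.dirname(gibson2.__file__),
                           '../examples/configs/turtlebot_demo.yaml')
env = iGibsonEnv(config_file=config_file, mode='headless')

state = env.reset()
for _ in range(100):
    action = env.action_space.sample()           # any gym-compatible policy works here
    state, reward, done, info = env.step(action)
    if done:
        state = env.reset()
env.close()
```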

View File

@ -14,7 +14,7 @@ It is exciting to see people using Gibson Environment in embodied AI research. H
- Watkins-Valls, David, et al. [Learning Your Way Without a Map or Compass: Panoramic Target Driven Visual Navigation.](https://arxiv.org/pdf/1909.09295.pdf) arXiv preprint arXiv:1909.09295 (2019).
- Akinola, Iretiayo, et al. [Accelerated Robot Learning via Human Brain Signals.](https://arxiv.org/pdf/1910.00682.pdf) arXiv preprint arXiv:1910.00682(2019).
- Xia, Fei, et al. [Interactive Gibson: A Benchmark for Interactive Navigation in Cluttered Environments.](https://arxiv.org/pdf/1910.14442.pdf) arXiv preprint arXiv:1910.14442 (2019).
- Pérez-D'Arpino, Claudia, Can Liu, Patrick Goebel, Roberto Martín-Martín, and Silvio Savarese. [Robot Navigation in Constrained Pedestrian Environments using Reinforcement Learning](https://arxiv.org/pdf/2010.08600.pdf). Preprint arXiv:2010.08600, 2020.
These papers tested policies trained in Gibson v1 on real robots in the physical world:
@ -26,6 +26,15 @@ These papers tested policies trained in Gibson v1 on real robots in the physical
If you use Gibson, iGibson or their assets, please consider citing the following papers for iGibson, the Interactive Gibson Environment:
```
@article{shenigibson,
title={iGibson, a Simulation Environment for Interactive Tasks in Large Realistic Scenes},
author={Shen, Bokui and Xia, Fei and Li, Chengshu and Mart{\'i}n-Mart{\'i}n, Roberto and Fan, Linxi and Wang, Guanzhi and Buch, Shyamal and D'Arpino, Claudia and Srivastava, Sanjana and Tchapmi, Lyne P and Vainio, Kent and Fei-Fei, Li and Savarese, Silvio},
journal={arXiv preprint},
year={2020}
}
```
````
@article{xia2020interactive,
title={Interactive Gibson Benchmark: A Benchmark for Interactive Navigation in Cluttered Environments},
@ -39,16 +48,6 @@ If you use Gibson, iGibson or their assets, please consider citing the following
}
````
````text
@techreport{xiagibson2019,
title = {Gibson Env V2: Embodied Simulation Environments for Interactive Navigation},
author = {Xia, Fei and Li, Chengshu and Chen, Kevin and Shen, William B and Mart{\'i}n-Mart{\'i}n, Roberto and Hirose, Noriaki and Zamir, Amir R and Fei-Fei, Li and Savarese, Silvio},
group = {Stanford Vision and Learning Group},
year = {2019},
institution = {Stanford University},
month = {6},
}
````
and the following paper for Gibson v1:

View File

@ -1,7 +1,7 @@
# Quickstart
## iGibson in Action
Let's get our hands dirty and see iGibson in action.
Assuming you have finished the installation and downloaded the assets, let's get our hands dirty and see iGibson in action.
```bash
cd examples/demo
@ -49,12 +49,14 @@ python benchmark.py
## Benchmarks
Performance is a major design focus for iGibson. We provide a few scripts to benchmark the rendering and physics
simulation framerate in iGibson.
### Benchmark static scene (Gibson scenes)
```bash
cd examples/demo
python benchmark.py
cd test/benchmark
python benchmark_static_scene.py
```
You will see output similar to:
@ -70,3 +72,42 @@ Rendering 3d, resolution 512, render_to_tensor False: 292.0761459884919 fps
Rendering normal, resolution 512, render_to_tensor False: 265.70666134193806 fps
```
### Benchmark physics simulation in interactive scenes (iGibson scene)
```bash
cd test/benchmark
python benchmark_interactive_scene.py
```
It will generate a report like below:
![](images/scene_benchmark_Rs_int_o_True_r_True.png)
### Benchmark rendering in interactive scenes
To run a comprehensive benchmark for rendering in all iGibson scenes, you can execute the following command:
```bash
cd test/benchmark
python benchmark_interactive_scene_rendering.py
```
It benchmarks two use cases: one for training visual RL agents (low resolution, shadow mapping off), and another for
training perception tasks with the highest graphics quality possible.
```python
'VISUAL_RL': MeshRendererSettings(enable_pbr=True, enable_shadow=False, msaa=False, optimized=True),
'PERCEPTION': MeshRendererSettings(env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True, msaa=True,
light_dimming_factor=1.0,
optimized=True)
```
It will generate a report like below:
![](images/benchmark_rendering.png)

View File

@ -1,184 +0,0 @@
Quick Start
=================
Tests
----
```bash
cd test
pytest # the tests should pass, it will take a few minutes
```
Interactive Gibson Env Framerate
----
Interactive Gibson Env framerate compared with gibson v1 is shown in the table below:
| | Gibson V2 | Gibson V1 |
|---------------------|-----------|-----------|
| RGBD, pre filler network | 264.1 | 58.5 |
| RGBD, post filler network | 61.7 | 30.6 |
| Surface Normal only | 271.1 | 129.7 |
| Semantic only | 279.1 | 144.2 |
| Non-Visual Sensory | 1017.4 | 396.1 |
Rendering Semantics
----
TBA
Robotic Agents
----
Gibson provides a base set of agents. See videos of these agents and their corresponding perceptual observation [here](http://gibsonenv.stanford.edu/agents/).
<img src=misc/agents.gif>
To enable (optionally) abstracting away low-level control and robot dynamics for high-level tasks, we also provide a set of practical and ideal controllers for each agent.
| Agent Name | DOF | Information | Controller |
|:-------------: | :-------------: |:-------------: |:-------------|
| Mujoco Ant | 8 | [OpenAI Link](https://blog.openai.com/roboschool/) | Torque |
| Mujoco Humanoid | 17 | [OpenAI Link](https://blog.openai.com/roboschool/) | Torque |
| Husky Robot | 4 | [ROS](http://wiki.ros.org/Robots/Husky), [Manufacturer](https://www.clearpathrobotics.com/) | Torque, Velocity, Position |
| Minitaur Robot | 8 | [Robot Page](https://www.ghostrobotics.io/copy-of-robots), [Manufacturer](https://www.ghostrobotics.io/) | Sine Controller |
| JackRabbot | 2 | [Stanford Project Link](http://cvgl.stanford.edu/projects/jackrabbot/) | Torque, Velocity, Position |
| TurtleBot | 2 | [ROS](http://wiki.ros.org/Robots/TurtleBot), [Manufacturer](https://www.turtlebot.com/) | Torque, Velocity, Position |
| Quadrotor | 6 | [Paper](https://repository.upenn.edu/cgi/viewcontent.cgi?referer=https://www.google.com/&httpsredir=1&article=1705&context=edissertations) | Position |
### Starter Code
Demonstration examples can be found in `examples/demo` folder. `demo.py` shows the procedure of starting an environment with a random agent.
ROS Configuration
---------
We provide examples of configuring Gibson with ROS [here](https://github.com/StanfordVL/GibsonEnvV2/examples/ros/gibson-ros). We use turtlebot as an example, after a policy is trained in Gibson, it requires minimal changes to deploy onto a turtlebot. See [README](https://github.com/StanfordVL/GibsonEnvV2/examples/ros/gibson-ros) for more details.
Coding Your RL Agent
====
You can code your RL agent following our convention. The interface with our environment is very simple (see some examples in the end of this section).
First, you can create an environment by creating an instance of classes in `gibson/envs` folder.
```python
env = AntNavigateEnv(is_discrete=False, config = config_file)
```
Then do one step of the simulation with `env.step`. And reset with `env.reset()`
```python
obs, rew, env_done, info = env.step(action)
```
`obs` gives the observation of the robot. It is a dictionary with each component as a key value pair. Its keys are specified by user inside config file. E.g. `obs['nonviz_sensor']` is proprioceptive sensor data, `obs['rgb_filled']` is rgb camera data.
`rew` is the defined reward. `env_done` marks the end of one episode, for example, when the robot dies.
`info` gives some additional information of this step; sometimes we use this to pass additional non-visual sensor values.
We mostly followed [OpenAI gym](https://github.com/openai/gym) convention when designing the interface of RL algorithms and the environment. In order to help users start with the environment quicker, we
provide some examples at [examples/train](examples/train). The RL algorithms that we use are from [openAI baselines](https://github.com/openai/baselines) with some adaptation to work with hybrid visual and non-visual sensory data.
In particular, we used [PPO](https://github.com/openai/baselines/tree/master/baselines/ppo1) and a speed optimized version of [PPO](https://github.com/openai/baselines/tree/master/baselines/ppo2).
Environment Configuration
=================
Each environment is configured with a `yaml` file. Examples of `yaml` files can be found in the `examples/configs` folder. The parameters in the file are explained below. For more information specific to the Bullet physics engine, you can see the documentation [here](https://docs.google.com/document/d/10sXEhzFRSnvFcl3XxNGhnD4N2SedqwdAvK3dsihxVUA/edit).
| Argument name | Example value | Explanation |
|:-------------:|:-------------:| :-----|
| envname | AntClimbEnv | Environment name, make sure it is the same as the class name of the environment |
| scene_id | space1-space8 | Scene id, in beta release, choose from space1-space8 |
| target_orn | [0, 0, 3.14] | Eulerian angle (in radian) target orientation for navigating, the reference frame is world frame. For non-navigation tasks, this parameter is ignored. |
|target_pos | [-7, 2.6, -1.5] | target position (in meter) for navigating, the reference frame is world frame. For non-navigation tasks, this parameter is ignored. |
|initial_orn | [0, 0, 3.14] | initial orientation (in radian) for navigating, the reference frame is world frame |
|initial_pos | [-7, 2.6, 0.5] | initial position (in meter) for navigating, the reference frame is world frame|
|fov | 1.57 | field of view for the camera, in radian |
| use_filler | true/false | use neural network filler or not. It is recommended to leave this argument true. See [Gibson Environment website](http://gibson.vision/) for more information. |
|display_ui | true/false | Gibson has two ways of showing visual output, either in multiple windows, or aggregate them into a single pygame window. This argument determines whether to show pygame ui or not, if in a production environment (training), you need to turn this off |
|show_diagnostics | true/false | show diagnostics (including fps, robot position and orientation, accumulated rewards) overlaid on the RGB image |
|output | [nonviz_sensor, rgb_filled, depth] | output of the environment to the robot, choose from [nonviz_sensor, rgb_filled, depth]. These values are independent of `ui_components`, as `ui_components` determines what to show and `output` determines what the robot receives. |
|resolution | 512 | choose from [128, 256, 512] resolution of rgb/depth image |
|mode | gui/headless/web_ui | gui or headless, if in a production environment (training), you need to turn this to headless. In gui mode, there will be visual output; in headless mode, there will be no visual output. In addition to that, if you set mode to web_ui, it will behave like in headless mode but the visual will be rendered to a web UI server. ([more information](#web-user-interface))|
|verbose |true/false | show diagnostics in terminal |
|fast_lq_render| true/false| if there is fast_lq_render in yaml file, Gibson will use a smaller filler network, this will render faster but generate slightly lower quality camera output. This option is useful for training RL agents fast. |
#### Making Your Customized Environment
Gibson provides a set of methods for you to define your own environments. You can follow the existing environments inside `gibson/envs`.
| Method name | Usage |
|:------------------:|:---------------------------|
| robot.get_position() | Get current robot position. |
| robot.get_orientation() | Get current robot orientation. |
| robot.eyes.get_position() | Get current robot perceptive camera position. |
| robot.eyes.get_orientation() | Get current robot perceptive camera orientation. |
| robot.get_target_position() | Get robot target position. |
| robot.apply_action(action) | Apply action to robot. |
| robot.reset_new_pose(pos, orn) | Reset the robot to any pose. |
| robot.dist_to_target() | Get current distance from robot to target. |
Create a docker image for Interactive Gibson
=======================================
You can use the following Dockerfile to create a docker image for using gibson. `nvidia-docker` is required to run this docker image.
```text
from nvidia/cudagl:10.0-base-ubuntu18.04
ARG CUDA=10.0
ARG CUDNN=7.6.2.24-1
RUN apt-get update && apt-get install -y --no-install-recommends \
curl build-essential git cmake \
cuda-command-line-tools-10-0 \
cuda-cublas-10-0 \
cuda-cufft-10-0 \
cuda-curand-10-0 \
cuda-cusolver-10-0 \
cuda-cusparse-10-0 \
libcudnn7=${CUDNN}+cuda${CUDA} \
libhdf5-dev \
libsm6 \
libxext6 \
libxrender-dev \
wget
# Install miniconda to /miniconda
RUN curl -LO http://repo.continuum.io/miniconda/Miniconda-latest-Linux-x86_64.sh
RUN bash Miniconda-latest-Linux-x86_64.sh -p /miniconda -b
RUN rm Miniconda-latest-Linux-x86_64.sh
ENV PATH=/miniconda/bin:${PATH}
RUN conda update -y conda
RUN conda create -y -n gibson python=3.6.8
# Python packages from conda
ENV PATH /miniconda/envs/gibson/bin:$PATH
RUN pip install pytest
RUN pip install tf-nightly-gpu==1.15.0-dev20190730
RUN pip install tfp-nightly==0.8.0-dev20190730
RUN pip install tf-estimator-nightly==1.14.0.dev2019041501
RUN pip install gast==0.2.2
RUN pip install opencv-python networkx ipython
RUN git clone --branch release-cleanup https://github.com/StanfordVL/GibsonEnvV2 /opt/gibsonv2 --recursive
WORKDIR /opt/gibsonv2
RUN pip install -e .
RUN git clone https://github.com/ChengshuLi/agents/ /opt/agents
WORKDIR /opt/agents
RUN pip install -e .
WORKDIR /opt/gibsonv2/gibson2/
RUN wget -q https://storage.googleapis.com/gibsonassets/assets_dev.tar.gz && tar -zxf assets_dev.tar.gz
WORKDIR /opt/gibsonv2/gibson2/assets
RUN mkdir dataset
WORKDIR /opt/gibsonv2/gibson2/assets/dataset
RUN wget -q https://storage.googleapis.com/gibsonassets/gibson_mesh/Ohopee.tar.gz && tar -zxf Ohopee.tar.gz
WORKDIR /opt/agents
```

View File

@ -2,7 +2,7 @@
### Overview
We developed our own MeshRenderer that supports customizable camera configuration and various image modalities, and renders at lightning speed. Specifically, you can specify image width, height and vertical field of view in the constructor of `class MeshRenderer`. Then you can call `renderer.render(modes=('rgb', 'normal', 'seg', '3d'))` to retrieve the images. Currently we support four different image modalities: RGB, surface normal, semantic segmentation and 3D point cloud (z-channel can be extracted as depth map). Most of the code can be found in [gibson2/render](https://github.com/StanfordVL/iGibson/tree/master/gibson2/render).
We developed our own MeshRenderer that supports customizable camera configuration and various image modalities, and renders at lightning speed. Specifically, you can specify image width, height and vertical field of view in the constructor of `class MeshRenderer`. Then you can call `renderer.render(modes=('rgb', 'normal', 'seg', '3d', 'optical_flow', 'scene_flow'))` to retrieve the images. Currently we support six different image modalities: RGB, surface normal, segmentation, 3D point cloud (z-channel can be extracted as depth map), optical flow, and scene flow. We also support two types of LiDAR sensors: 1-beam and 16-beam (like Velodyne VLP-16). Most of the code can be found in [gibson2/render](https://github.com/StanfordVL/iGibson/tree/master/gibson2/render).
### Examples
@ -15,9 +15,10 @@ import cv2
import sys
import os
import numpy as np
from gibson2.core.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
from gibson2.utils.assets_utils import get_scene_path
def main():
if len(sys.argv) > 1:
model_path = sys.argv[1]
@ -27,16 +28,16 @@ def main():
renderer = MeshRenderer(width=512, height=512)
renderer.load_object(model_path)
renderer.add_instance(0)
camera_pose = np.array([0, 0, 1.2])
view_direction = np.array([1, 0, 0])
renderer.set_camera(camera_pose, camera_pose + view_direction, [0, 0, 1])
renderer.set_fov(90)
frames = renderer.render(modes=('rgb', 'normal', '3d'))
frames = renderer.render(
modes=('rgb', 'normal', '3d'))
frames = cv2.cvtColor(np.concatenate(frames, axis=1), cv2.COLOR_RGB2BGR)
cv2.imshow('image', frames)
cv2.waitKey()
cv2.waitKey(0)
if __name__ == '__main__':
main()
@ -55,7 +56,29 @@ python mesh_renderer_example.py
```
You may translate the camera by pressing "WASD" on your keyboard and rotate the camera by dragging your mouse. Press `Q` to exit the rendering loop. The code can be found in [examples/demo/mesh_renderer_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/mesh_renderer_example.py).
#### PBR (Physics-Based Rendering) Example
You can test the physically based renderer with the PBR demo. You can render any object included in the iG dataset; here
we show a sink as an example, as it includes several different materials. You need to pass in a folder, since the demo loads all
obj files in that folder.
```bash
cd examples/demo
python mesh_renderer_example_pbr.py <path to ig_dataset>/objects/sink/sink_1/shape/visual
```
![pbr_renderer.png](images/pbr_render.png)
You will get a nice rendering of the sink; the metal parts should show specular highlights, and shadows
should be cast.
#### Velodyne VLP-16 Example
In this example, we show a demo of a 16-beam Velodyne VLP-16 LiDAR placed on top of a virtual Turtlebot. The code can be found in [examples/demo/lidar_velodyne_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/lidar_velodyne_example.py).
The Velodyne VLP-16 LiDAR visualization will look like this:
![lidar_velodyne.png](images/lidar_velodyne.png)
#### Render to PyTorch Tensors
In this example, we show that MeshRenderer can directly render into a PyTorch tensor to maximize efficiency. PyTorch installation is required (otherwise, iGibson does not depend on PyTorch). The code can be found in [examples/demo/mesh_renderer_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/mesh_renderer_gpu_example.py).
In this example, we show that MeshRenderer can render directly into a PyTorch tensor to maximize efficiency. A PyTorch installation is required (iGibson does not otherwise depend on PyTorch). The code can be found in [examples/demo/mesh_renderer_gpu_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/mesh_renderer_gpu_example.py).
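A hedged sketch of the idea is shown below. It assumes the `MeshRendererG2G` class used by `mesh_renderer_gpu_example.py` lives at the import path given here, and that PyTorch and a CUDA-capable GPU are available:
```python
import os
from gibson2.render.mesh_renderer.mesh_renderer_tensor import MeshRendererG2G  # assumed path
from gibson2.utils.assets_utils import get_scene_path

# same setup as the CPU example above, but rendered frames stay on the GPU as torch tensors
renderer = MeshRendererG2G(width=512, height=512, device_idx=0)
renderer.load_object(os.path.join(get_scene_path('Rs'), 'mesh_z_up.obj'))
renderer.add_instance(0)
renderer.set_camera([0, 0, 1.2], [1, 0, 1.2], [0, 0, 1])
renderer.set_fov(90)

rgb, normal, threed = renderer.render(modes=('rgb', 'normal', '3d'))
print(rgb.shape, rgb.device)  # tensors live on the GPU, no host copy involved
renderer.release()
```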

View File

@ -38,13 +38,16 @@ def apply_robot_action(action):
```
Note that `robot_action` is a normalized joint velocity, i.e. `robot_action[n] == 1.0` means executing the maximum joint velocity for the nth joint. The limits of joint position, velocity and torque are extracted from the URDF file of the robot.
Most of the code can be found here: [gibson2/physics/robot_locomotors.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/physics/robot_locomotors.py).
Most of the code can be found here: [gibson2/robots](https://github.com/StanfordVL/iGibson/blob/master/gibson2/robots).
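As a small illustration (a plain NumPy sketch, not iGibson's actual implementation), the normalization described above amounts to scaling each action entry by the corresponding joint limit taken from the URDF:
```python
import numpy as np

def denormalize_action(robot_action, max_joint_velocities):
    """Map a normalized action in [-1, 1] to joint velocity targets (rad/s)."""
    robot_action = np.clip(robot_action, -1.0, 1.0)
    return robot_action * np.asarray(max_joint_velocities)

# e.g. a differential-drive base whose wheel joints allow at most 10 rad/s
print(denormalize_action([0.5, -1.0], [10.0, 10.0]))  # -> [  5. -10.]
```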
### Examples
In this example, we import four different robots into PyBullet. We keep them still for around 10 seconds and then move them with small random actions for another 10 seconds. The code can be found here: [examples/demo/robot_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/robot_example.py).
```python
from gibson2.robots.robot_locomotor import Locobot, Turtlebot, JR2_Kinova, Fetch
from gibson2.robots.locobot_robot import Locobot
from gibson2.robots.turtlebot_robot import Turtlebot
from gibson2.robots.jr2_kinova_robot import JR2_Kinova
from gibson2.robots.fetch_robot import Fetch
from gibson2.utils.utils import parse_config
import os
import time
@ -52,9 +55,10 @@ import numpy as np
import pybullet as p
import pybullet_data
def main():
p.connect(p.GUI)
p.setGravity(0,0,-9.8)
p.setGravity(0, 0, -9.8)
p.setTimeStep(1./240.)
floor = os.path.join(pybullet_data.getDataPath(), "mjcf/ground_plane.xml")

View File

@ -1,20 +1,35 @@
# Scenes
### Overview
We provide three types of scenes.
- `EmptyScene` and `StadiumScene`: they are simple scenes with flat grounds and no obstacles, very good for debugging.
- `BuildingScene`: it loads realistic 3D scenes from `gibson2.dataset_path`.
We provide four types of scenes.
- `EmptyScene` and `StadiumScene`: they are simple scenes with flat grounds and no obstacles, useful for debugging purposes.
- `StaticIndoorScene`: it loads static 3D scenes from `gibson2.g_dataset_path`.
- `InteractiveIndoorScene`: it loads fully interactive 3D scenes from `gibson2.ig_dataset_path`.
Typically, they take in the `scene_id` of a scene and provide a `load` function that can be invoked externally (usually by `import_scene` of the `Simulator`).
Typically, they take in the `scene_id` of a scene and provide a `load` function that can be invoked externally (usually by `import_scene` or `import_ig_scene` of the `Simulator`).
To be more specific, the `load` function of `BuildingScene`
To be more specific, the `load` function of `StaticIndoorScene`
- stores the floor information (we have many multistory houses in our dataset)
- loads the scene mesh into PyBullet
- builds an internal traversability graph for each floor based on the traversability maps stored in the scene folder (e.g. `dataset/Rs/floor_trav_0.png`)
- loads the scene objects and places them in their original locations if the scene is interactive
- provides APIs for sampling a random location in the scene, and for computing the shortest path between two locations in the scene.
Most of the code can be found here: [gibson2/physics/scene.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/physics/scene.py).
In addition to everything mentioned above, the `load` function of `InteractiveIndoorScene` also
- provides material/texture randomization functionality: randomize the material, texture and dynamic property of scene object models
- provides object randomization functionality: randomize scene object models while keeping object poses and categories intact
- provides scene quality check: check if object models have collisions and if fixed, articulated objects can extend their joints fully without collision
- provides partial scene loading functionality: 1) only load objects of certain categories, 2) only load objects in certain room types, 3) only load objects in certain room instances.
- provides APIs for changing the state of articulated objects (e.g. open all "fridges" and "ovens" in the scene)
Most of the code can be found here: [gibson2/scenes](https://github.com/StanfordVL/iGibson/blob/master/gibson2/scenes).
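The navigation helpers mentioned above can be exercised with a short, hedged sketch like the one below; the keyword names follow the bundled examples and may differ slightly between versions:
```python
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.simulator import Simulator

s = Simulator(mode='headless')
scene = StaticIndoorScene('Rs', build_graph=True)   # build the traversability graph
s.import_scene(scene)

floor = 0
_, source = scene.get_random_point(floor=floor)     # random reachable location
_, target = scene.get_random_point(floor=floor)
path, geodesic_dist = scene.get_shortest_path(
    floor, source[:2], target[:2], entire_path=True)
print('geodesic distance between the two points:', geodesic_dist)
s.disconnect()
```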
### Adding other scenes to iGibson
We provide detailed instructions and scripts to import scenes from the following sources into iGibson:
1. [CubiCasa5k](https://github.com/CubiCasa/CubiCasa5k): A Dataset and an Improved Multi-Task Model for Floorplan Image Analysis. (Kalervo, Ahti, et al.)
2. [3D-FRONT](https://tianchi.aliyun.com/specials/promotion/alibaba-3d-scene-dataset): 3D Furnished Rooms with layOuts and semaNTics. (Fu, Huan, et al.)
Instruction can be found here: [External Scenes](https://github.com/StanfordVL/iGibson/blob/master/gibson2/utils/data_utils/ext_scene).
### Examples
@ -93,13 +108,53 @@ if __name__ == '__main__':
```
#### Interactive Building Scenes
In this example, we import a fully interactive scene, and randomly sample points given a room type such as "living_room". This can be useful for tasks that require the robot to always be spawned in certain room types. We support fifteen such scenes right now as part of the new iGibson Dataset. The code can be found here: [examples/demo/scene_interactive_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/scene_interactive_example.py).
In this example, we import an interactive scene. We support ten such scenes right now (the list can be found in `dataset/gibson_list`). All you need to do is to turn on the flag `is_interactive=True` when you initialize `BuildingScene`. The code can be found here: [examples/demo/scene_interactive_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/scene_interactive_example.py).
Note that all objects in these scenes can be interacted with realistically.
![scene_interactive.png](images/scene_interactive.png)
The interactive scene will replace the annotated objects with very similar CAD models with their original texture, aligned to their original poses. Because removing the annotated objects will inevitably create holes on the floor, we add additional floor planes with the original floor texture as well.
```python
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.simulator import Simulator
import numpy as np
For example, in the scene `Placida` below, the couches, the coffee table, the dining table and the dining chairs are all interactive objects.
![scene_interactive](images/scene_interactive.png)
def main():
s = Simulator(mode='gui', image_width=512,
image_height=512, device_idx=0)
scene = InteractiveIndoorScene(
'Rs_int', texture_randomization=False, object_randomization=False)
s.import_ig_scene(scene)
np.random.seed(0)
for _ in range(10):
pt = scene.get_random_point_by_room_type('living_room')[1]
print('random point in living_room', pt)
for _ in range(1000):
s.step()
s.disconnect()
if __name__ == '__main__':
main()
```
##### Texture Randomization
In this example, we demonstrate material/texture randomization functionality of `InteractiveIndoorScene`. The goal is to randomize the material, texture and dynamic properties of all scene objects by calling `scene.randomize_texture` on-demand. The code can be found here: [examples/demo/scene_interactive_texture_rand_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/scene_interactive_texture_rand_example.py).
The randomized materials in the `ExternalView` window should look like this.
![scene_interactive_texture_rand](images/scene_interactive_texture_rand.png)
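A minimal sketch of the workflow, assuming the `texture_randomization` flag and the `scene.randomize_texture()` call described above (episode counts are illustrative):
```python
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.simulator import Simulator

s = Simulator(mode='headless')
scene = InteractiveIndoorScene(
    'Rs_int', texture_randomization=True, object_randomization=False)
s.import_ig_scene(scene)

for episode in range(5):
    scene.randomize_texture()   # draw a new set of materials/textures for this episode
    for _ in range(100):
        s.step()
s.disconnect()
```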
##### Object Randomization
In this example, we demonstrate the object randomization functionality of `InteractiveIndoorScene`. The goal is to randomize the object models while maintaining their poses and categories. Note that when object models are randomized, there is no guarantee that they are collision-free or that fixed, articulated objects can extend their joints fully without collision. We provide `scene.check_scene_quality` to check scene quality, and you should re-sample the object models if this function returns `False`. An alternative (recommended) way is to use the random object model configurations that we provide (10 for each scene), which guarantee scene quality, by passing in `object_randomization_idx=[0-9]`. Finally, object randomization can be expensive because the new object models need to be loaded into the simulator each time, so we recommend only using it occasionally (e.g. every 1000 training episodes). The code can be found here: [examples/demo/scene_interactive_object_rand_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/scene_interactive_object_rand_example.py).
The randomized object models in the `ExternalView` window should look like this.
![scene_interactive_object_rand](images/scene_interactive_object_rand.png)
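A hedged sketch of the recommended workflow: cycle through the pre-verified model configurations via `object_randomization_idx` and rebuild the simulator only when the models change, since reloading them is expensive:
```python
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.simulator import Simulator

for idx in range(3):  # in practice, switch only every ~1000 training episodes
    s = Simulator(mode='headless')
    scene = InteractiveIndoorScene(
        'Rs_int', object_randomization=True, object_randomization_idx=idx)
    s.import_ig_scene(scene)
    for _ in range(100):
        s.step()
    s.disconnect()
```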
##### Partial Scene Loading
In this example, we demonstrate partial scene loading functionality of `InteractiveIndoorScene`. Specifically in this example we only load "chairs" in "living rooms". This can be useful for tasks that only require certain object categories or rooms. The code can be found here: [examples/demo/scene_interactive_partial_loading_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/scene_interactive_partial_loading_example.py).
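A minimal sketch of partial loading for the "chairs in living rooms" case; `load_object_categories` matches the config option used in the example configs, while `load_room_types` and the category name are assumptions that should be checked against your dataset:
```python
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.simulator import Simulator

s = Simulator(mode='headless')
scene = InteractiveIndoorScene(
    'Rs_int',
    load_object_categories=['chair'],   # only import objects of these categories
    load_room_types=['living_room'])    # assumed keyword for restricting room types
s.import_ig_scene(scene)
for _ in range(200):
    s.step()
s.disconnect()
```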
#### Visualize Traversability Map

View File

@ -6,34 +6,38 @@
Some key functions are the following:
- `load`: initialize PyBullet physics engine and MeshRenderer
- `import_scene`: import the scene into PyBullet by calling `scene.load`, and then import it into MeshRenderer by calling `self.renderer.add_instance`. If the scene is interactive (`is_interactive=True`), all the objects in the scene will be imported as well.
- `import_{scene, ig_scene}`: import the scene into PyBullet by calling `scene.load`, and then import it into MeshRenderer by calling `self.renderer.add_instance`. If `InteractiveIndoorScene` is imported using `import_ig_scene`, all objects in the scene are also imported.
- `import_{object, articulated_object, robot}`: import the object, articulated object and robot into the simulator in a similar manner
- `sync`: synchronize the poses of the dynamic objects (including the robots) between PyBullet and MeshRenderer. Specifically, it calls `update_position` for each object, in which it retrieves the object's pose in PyBullet and then updates its pose accordingly in MeshRenderer.
If `Simulator` uses `gui` mode, by default it will also maintain a `Viewer`, which essentially is a virtual camera in the scene that can render images. More info about the `Viewer` can be found here: [gibson2/render/viewer.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/render/viewer.py).
If `Simulator` uses `gui` mode, by default it will also maintain a `Viewer`, which essentially is a virtual camera in the scene that can render images. More info about the `Viewer` can be found here: [gibson2/render/viewer.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/render/viewer.py). Notably, you can press `h` in the `ExternalView` window to show the help menu for mouse/keyboard control.
Most of the code can be found here: [gibson2/simulator.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/simulator.py).
### Examples
In this example, we import a `BuildingScene`, a `Turtlebot`, and ten `YCBObject` into the simulator. The code can be found here: [examples/demo/simulator_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/simulator_example.py)
In this example, we import a `StaticIndoorScene`, a `Turtlebot`, and ten `YCBObject` into the simulator. The code can be found here: [examples/demo/simulator_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/simulator_example.py)
```python
from gibson2.robots.robot_locomotor import Turtlebot
from gibson2.robots.turtlebot_robot import Turtlebot
from gibson2.simulator import Simulator
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.objects.ycb_object import YCBObject
from gibson2.utils.utils import parse_config
import pybullet as p
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
import numpy as np
from gibson2.render.profiler import Profiler
from IPython import embed
def main():
config = parse_config('../configs/turtlebot_demo.yaml')
s = Simulator(mode='gui', image_width=512, image_height=512)
scene = BuildingScene('Rs',
build_graph=True,
pybullet_load_texture=True)
settings = MeshRendererSettings(enable_shadow=False, msaa=False)
s = Simulator(mode='gui', image_width=256,
image_height=256, rendering_settings=settings)
scene = StaticIndoorScene('Rs',
build_graph=True,
pybullet_load_texture=True)
s.import_scene(scene)
turtlebot = Turtlebot(config)
s.import_robot(turtlebot)
@ -41,17 +45,20 @@ def main():
for _ in range(10):
obj = YCBObject('003_cracker_box')
s.import_object(obj)
obj.set_position_orientation(np.random.uniform(low=0, high=2, size=3), [0,0,0,1])
obj.set_position_orientation(np.random.uniform(
low=0, high=2, size=3), [0, 0, 0, 1])
print(s.renderer.instances)
for i in range(10000):
with Profiler('Simulator step'):
turtlebot.apply_action([0.1,0.1])
turtlebot.apply_action([0.1, 0.1])
s.step()
rgb = s.renderer.render_robot_cameras(modes=('rgb'))
s.disconnect()
if __name__ == '__main__':
main()
```

View File

@ -3,7 +3,7 @@
We provide tests in [test](https://github.com/StanfordVL/iGibson/tree/master/test). You can run them like this:
```bash
cd test
pytest
pytest --ignore disabled --ignore benchmark
```
It will take a few minutes. If all tests pass, you will see something like this
```bash
@ -22,5 +22,4 @@ test_scene_importing.py .... [ 9
test_simulator.py . [ 96% ]
test_viewer.py
```
We will further improve our test coverage in the next few weeks.

38
docs/viewer.md Normal file
View File

@ -0,0 +1,38 @@
# Viewer
### Overview
We developed an easy-to-use human interface for iGibson called **Viewer** that lets users inspect and interact with our scenes and objects. The Viewer will automatically pop up if you use `gui` or `iggui` mode in `Simulator`.
![viewer.png](images/viewer.png)
On the top left corner, you can see `px 0.4 py -0.9 pz 1.2`, which indicates the camera position, `[1.0, 0.1, -0.1]`, which indicates the camera orientation, and `manip mode`, which indicates the current control mode you are in (explained below).
Keyboard control includes the following
- `W`, `A`, `S`, `D`: translate forward/left/backward/right
- `Q`, `E`: rotate left/right
- `M`: switch between the different control modes (navigation, manipulation and planning)
- `R`: start/stop recording
- `P`: pause/resume recording
- `H`: show help menu
- `ESC`: exit
We have three control modes (navigation, manipulation and planning) and the mouse control is different for each control mode. You may switch between these control modes by pressing `M`.
Mouse control in navigation mode
- Left click and drag: rotate camera
- CTRL + left click and drag: translate camera
- Middle click and drag: translate camera closer/further away in the viewing direction
Mouse control in manipulation mode
- Left click and drag: create ball-joint connection to the clicked object and move it
- Middle click and drag: create fixed-joint connection to the clicked object and move it
- CTRL + click and drag: move the object further/closer
Mouse control in planning mode
- Left click and drag: create (click), visualize (drag) and plan / execute (release) a base motion subgoal for the robot base to reach the physical point that corresponds to the clicked pixel
- Middle click: create, plan and execute an arm motion subgoal for the robot end-effector to reach the physical point that corresponds to the clicked pixel
In manipulation and planning modes, a visual indicator is displayed in the `Viewer` to assist control (e.g. the blue sphere at the bottom in the image above).
Most of the code can be found in [gibson2/render/viewer.py](https://github.com/StanfordVL/iGibson/tree/master/gibson2/render/viewer.py).

View File

@ -1,51 +1,58 @@
# scene
scene: gibson
scene_id: Rs
scene: igibson
scene_id: Rs_int
build_graph: true
load_texture: true
pybullet_load_texture: true
trav_map_type: no_obj
trav_map_resolution: 0.1
trav_map_erosion: 3
should_open_all_doors: true
# domain randomization
texture_randomization_freq: null
object_randomization_freq: null
# robot
robot: Fetch
is_discrete: false
wheel_velocity: 1.0
torso_lift_velocity: 1.0
arm_velocity: 1.0
wheel_velocity: 0.8
torso_lift_velocity: 0.8
arm_velocity: 0.8
# task, observation and action
task: pointgoal # pointgoal|objectgoal|areagoal|reaching
# task
task: reaching_random
target_dist_min: 1.0
target_dist_max: 10.0
initial_pos_z_offset: 0.1
additional_states_dim: 4
goal_format: polar
task_obs_dim: 4
# reward
reward_type: geodesic
reward_type: l2
success_reward: 10.0
slack_reward: -0.01
potential_reward_weight: 1.0
collision_reward_weight: -0.1
collision_ignore_link_a_ids: [0, 1, 2] # ignore collisions with these robot links
# discount factor
discount_factor: 0.99
# termination condition
dist_tol: 0.5 # body width
dist_tol: 0.36 # body width
max_step: 500
max_collisions_allowed: 500
goal_format: polar
# misc config
initial_pos_z_offset: 0.1
collision_ignore_link_a_ids: [0, 1, 2] # ignore collisions with these robot links
# sensor spec
output: [sensor, rgb, depth, scan]
output: [task_obs, rgb, depth, scan]
# image
# Primesense Carmine 1.09 short-range RGBD sensor
# http://xtionprolive.com/primesense-carmine-1.09
fisheye: false
image_width: 160
image_height: 120
vertical_fov: 45
image_width: 128
image_height: 128
vertical_fov: 90
# depth
depth_low: 0.35
depth_high: 3.0
@ -67,4 +74,3 @@ scan_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: false

View File

@ -0,0 +1,81 @@
# scene
scene: igibson
scene_id: Rs_int
build_graph: true
load_texture: true
pybullet_load_texture: true
trav_map_type: no_obj
trav_map_resolution: 0.1
trav_map_erosion: 3
should_open_all_doors: true
# domain randomization
texture_randomization_freq: null
object_randomization_freq: null
# robot
robot: Fetch
wheel_velocity: 0.8
torso_lift_velocity: 0.8
arm_velocity: 0.8
# task
task: room_rearrangement
load_object_categories: [
bottom_cabinet,
bottom_cabinet_no_top,
top_cabinet,
dishwasher,
fridge,
microwave,
oven,
washer,
dryer,
]
# reward
potential_reward_weight: 1.0
prismatic_joint_reward_scale: 3.0
revolute_joint_reward_scale: 1.0
# discount factor
discount_factor: 0.99
# termination condition
max_step: 500
max_collisions_allowed: 500
# misc config
initial_pos_z_offset: 0.1
collision_ignore_link_a_ids: [0, 1, 2] # ignore collisions with these robot links
# sensor spec
output: [rgb, depth, scan]
# image
# Primesense Carmine 1.09 short-range RGBD sensor
# http://xtionprolive.com/primesense-carmine-1.09
fisheye: false
image_width: 128
image_height: 128
vertical_fov: 90
# depth
depth_low: 0.35
depth_high: 3.0
# scan
# SICK TIM571 scanning range finder
# https://docs.fetchrobotics.com/robot_hardware.html
# n_horizontal_rays is originally 661, sub-sampled 1/3
n_horizontal_rays: 220
n_vertical_beams: 1
laser_linear_range: 25.0
laser_angular_range: 220.0
min_laser_dist: 0.05
laser_link_name: laser_link
# sensor noise
depth_noise_rate: 0.0
scan_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: false

View File

@ -1,11 +1,17 @@
# scene
scene: gibson
scene_id: Rs
scene: igibson
scene_id: Rs_int
build_graph: true
load_texture: true
pybullet_load_texture: true
trav_map_type: no_obj
trav_map_resolution: 0.1
trav_map_erosion: 3
should_open_all_doors: true
# domain randomization
texture_randomization_freq: null
object_randomization_freq: null
# robot
robot: JR2_Kinova
@ -13,32 +19,33 @@ is_discrete: false
wheel_velocity: 0.3
arm_velocity: 1.0
# task, observation and action
task: pointgoal # pointgoal|objectgoal|areagoal|reaching
# task
task: reaching_random
target_dist_min: 1.0
target_dist_max: 10.0
initial_pos_z_offset: 0.1
additional_states_dim: 4
goal_format: polar
task_obs_dim: 4
# reward
reward_type: geodesic
reward_type: l2
success_reward: 10.0
slack_reward: -0.01
potential_reward_weight: 1.0
collision_reward_weight: -0.1
collision_ignore_link_a_ids: [2, 3, 5, 7] # ignore collisions with these robot links
# discount factor
discount_factor: 0.99
# termination condition
dist_tol: 0.5 # body width
dist_tol: 0.36 # body width
max_step: 500
max_collisions_allowed: 500
goal_format: polar
# misc config
initial_pos_z_offset: 0.1
collision_ignore_link_a_ids: [2, 3, 5, 7] # ignore collisions with these robot links
# sensor spec
output: [sensor, rgb, depth]
output: [task_obs, rgb, depth]
# image
# Intel Realsense Depth Camera D435
# https://store.intelrealsense.com/buy-intel-realsense-depth-camera-d435.html
@ -56,4 +63,3 @@ depth_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: false

View File

@ -1,58 +0,0 @@
# scene
scene: gibson
scene_id: Rs
build_graph: true
load_texture: true
trav_map_resolution: 0.1
trav_map_erosion: 2
# robot
robot: Locobot
is_discrete: false
linear_velocity: 0.5
angular_velocity: 1.5707963267948966
# task, observation and action
task: pointgoal # pointgoal|objectgoal|areagoal|reaching
target_dist_min: 1.0
target_dist_max: 10.0
initial_pos_z_offset: 0.1
additional_states_dim: 4
# reward
reward_type: geodesic
success_reward: 10.0
slack_reward: -0.01
potential_reward_weight: 1.0
collision_reward_weight: -0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# discount factor
discount_factor: 0.99
# termination condition
dist_tol: 0.36 # body width
max_step: 500
max_collisions_allowed: 500
goal_format: polar
# sensor spec
output: [sensor, rgb, depth]
# image
# Intel Realsense Depth Camera D435
# https://store.intelrealsense.com/buy-intel-realsense-depth-camera-d435.html
fisheye: false
image_width: 160
image_height: 90
vertical_fov: 42.5
# depth
depth_low : 0.1
depth_high: 10.0
# sensor noise
depth_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: false

View File

@ -15,24 +15,22 @@ object_randomization_freq: null
# robot
robot: Locobot
is_discrete: false
linear_velocity: 0.5
angular_velocity: 1.5707963267948966
# task, observation and action
task: pointgoal # pointgoal|objectgoal|areagoal|reaching
goal_format: polar
# task
task: point_nav_random
target_dist_min: 1.0
target_dist_max: 10.0
initial_pos_z_offset: 0.1
additional_states_dim: 4
goal_format: polar
task_obs_dim: 4
# reward
reward_type: geodesic
success_reward: 10.0
slack_reward: -0.01
potential_reward_weight: 1.0
collision_reward_weight: -0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# discount factor
discount_factor: 0.99
@ -42,25 +40,26 @@ dist_tol: 0.36 # body width
max_step: 500
max_collisions_allowed: 500
# misc config
initial_pos_z_offset: 0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# sensor spec
output: [sensor, rgb, depth]
output: [task_obs, rgb, depth]
# image
# ASUS Xtion PRO LIVE
# https://www.asus.com/us/3D-Sensor/Xtion_PRO_LIVE
# Intel Realsense Depth Camera D435
# https://store.intelrealsense.com/buy-intel-realsense-depth-camera-d435.html
fisheye: false
image_width: 160
image_height: 90
vertical_fov: 42.5
# depth
depth_low: 0.1
depth_low : 0.1
depth_high: 10.0
# sensor noise
depth_noise_rate: 0.0
scan_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: false

View File

@ -4,28 +4,32 @@ scene_id: Rs
build_graph: true
load_texture: true
pybullet_load_texture: true
trav_map_type: no_obj
trav_map_resolution: 0.1
trav_map_erosion: 2
should_open_all_doors: true
# domain randomization
texture_randomization_freq: null
object_randomization_freq: null
# robot
robot: Turtlebot
is_discrete: false
velocity: 1.0
# task, observation and action
task: pointgoal # pointgoal|objectgoal|areagoal|reaching
# task
task: point_nav_random
target_dist_min: 1.0
target_dist_max: 10.0
initial_pos_z_offset: 0.1
additional_states_dim: 4
goal_format: polar
task_obs_dim: 4
# reward
reward_type: geodesic
success_reward: 10.0
slack_reward: -0.01
potential_reward_weight: 1.0
collision_reward_weight: -0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# discount factor
discount_factor: 0.99
@ -34,16 +38,19 @@ discount_factor: 0.99
dist_tol: 0.36 # body width
max_step: 500
max_collisions_allowed: 500
goal_format: polar
# misc config
initial_pos_z_offset: 0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# sensor spec
output: [sensor, rgb, depth, scan]
output: [task_obs, rgb, depth, scan]
# image
# ASUS Xtion PRO LIVE
# https://www.asus.com/us/3D-Sensor/Xtion_PRO_LIVE
fisheye: false
image_width: 640
image_height: 480
image_width: 160
image_height: 120
vertical_fov: 45
# depth
depth_low: 0.8
@ -66,4 +73,3 @@ scan_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: false

View File

@ -4,8 +4,10 @@ scene_id: Rs_int
build_graph: true
load_texture: true
pybullet_load_texture: true
trav_map_type: no_obj
trav_map_resolution: 0.1
trav_map_erosion: 2
should_open_all_doors: true
# domain randomization
texture_randomization_freq: null
@ -16,20 +18,18 @@ robot: Turtlebot
is_discrete: false
velocity: 1.0
# task, observation and action
task: pointgoal # pointgoal|objectgoal|areagoal|reaching
# task
task: dynamic_nav_random
target_dist_min: 1.0
target_dist_max: 10.0
initial_pos_z_offset: 0.1
additional_states_dim: 4
goal_format: polar
task_obs_dim: 4
# reward
reward_type: geodesic
success_reward: 10.0
slack_reward: -0.01
potential_reward_weight: 1.0
collision_reward_weight: -0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# discount factor
discount_factor: 0.99
@ -38,16 +38,19 @@ discount_factor: 0.99
dist_tol: 0.36 # body width
max_step: 500
max_collisions_allowed: 500
goal_format: polar
# misc config
initial_pos_z_offset: 0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# sensor spec
output: [sensor, rgb, depth, scan]
output: [task_obs, rgb, depth, scan]
# image
# ASUS Xtion PRO LIVE
# https://www.asus.com/us/3D-Sensor/Xtion_PRO_LIVE
fisheye: false
image_width: 640
image_height: 480
image_width: 160
image_height: 120
vertical_fov: 45
# depth
depth_low: 0.8
@ -70,4 +73,3 @@ scan_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: false

View File

@ -0,0 +1,75 @@
# scene
scene: igibson
scene_id: Rs_int
build_graph: true
load_texture: true
pybullet_load_texture: true
trav_map_type: no_obj
trav_map_resolution: 0.1
trav_map_erosion: 2
should_open_all_doors: true
# domain randomization
texture_randomization_freq: null
object_randomization_freq: null
# robot
robot: Turtlebot
is_discrete: false
velocity: 1.0
# task
task: interactive_nav_random
target_dist_min: 1.0
target_dist_max: 10.0
goal_format: polar
task_obs_dim: 4
# reward
reward_type: geodesic
success_reward: 10.0
potential_reward_weight: 1.0
collision_reward_weight: -0.1
# discount factor
discount_factor: 0.99
# termination condition
dist_tol: 0.36 # body width
max_step: 500
max_collisions_allowed: 500
# misc config
initial_pos_z_offset: 0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# sensor spec
output: [task_obs, rgb, depth, scan]
# image
# ASUS Xtion PRO LIVE
# https://www.asus.com/us/3D-Sensor/Xtion_PRO_LIVE
fisheye: false
image_width: 160
image_height: 120
vertical_fov: 45
# depth
depth_low: 0.8
depth_high: 3.5
# scan
# Hokuyo URG-04LX-UG01
# https://www.hokuyo-aut.jp/search/single.php?serial=166
# n_horizontal_rays is originally 683, sub-sampled 1/3
n_horizontal_rays: 228
n_vertical_beams: 1
laser_linear_range: 5.6
laser_angular_range: 240.0
min_laser_dist: 0.05
laser_link_name: scan_link
# sensor noise
depth_noise_rate: 0.0
scan_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: false

View File

@ -1,30 +1,35 @@
# scene
scene: gibson
scene_id: Rs
scene: igibson
scene_id: Rs_int
build_graph: true
load_texture: true
pybullet_load_texture: true
trav_map_type: no_obj
trav_map_resolution: 0.1
trav_map_erosion: 2
should_open_all_doors: true
# domain randomization
texture_randomization_freq: null
object_randomization_freq: null
# robot
robot: Turtlebot
is_discrete: true
is_discrete: false
velocity: 1.0
# task, observation and action
task: pointgoal # pointgoal|objectgoal|areagoal|reaching
# task
task: point_nav_random
target_dist_min: 1.0
target_dist_max: 10.0
initial_pos_z_offset: 0.1
additional_states_dim: 4
goal_format: polar
task_obs_dim: 4
# reward
reward_type: geodesic
success_reward: 10.0
slack_reward: -0.01
potential_reward_weight: 1.0
collision_reward_weight: -0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# discount factor
discount_factor: 0.99
@ -33,16 +38,19 @@ discount_factor: 0.99
dist_tol: 0.36 # body width
max_step: 500
max_collisions_allowed: 500
goal_format: polar
# misc config
initial_pos_z_offset: 0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# sensor spec
output: [sensor, rgb, depth, scan]
output: [task_obs, rgb, depth, scan]
# image
# ASUS Xtion PRO LIVE
# https://www.asus.com/us/3D-Sensor/Xtion_PRO_LIVE
fisheye: false
image_width: 160
image_height: 120
image_width: 160
image_height: 120
vertical_fov: 45
# depth
depth_low: 0.8
@ -58,11 +66,10 @@ laser_angular_range: 240.0
min_laser_dist: 0.05
laser_link_name: scan_link
# sensor noise
# sensor noise
depth_noise_rate: 0.0
scan_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: false

View File

@ -1,25 +1,34 @@
# scene
scene: stadium
build_graph: false
load_texture: true
pybullet_load_texture: true
trav_map_type: no_obj
trav_map_resolution: 0.1
trav_map_erosion: 2
should_open_all_doors: true
# domain randomization
texture_randomization_freq: null
object_randomization_freq: null
# robot
robot: Turtlebot
is_discrete: false
velocity: 1.0
# task, observation and action
task: pointgoal # pointgoal|objectgoal|areagoal|reaching
# task
task: point_nav_random
target_dist_min: 1.0
target_dist_max: 10.0
initial_pos_z_offset: 0.1
additional_states_dim: 4
goal_format: polar
task_obs_dim: 4
# reward
reward_type: l2
success_reward: 10.0
slack_reward: -0.01
potential_reward_weight: 1.0
collision_reward_weight: -0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# discount factor
discount_factor: 0.99
@ -28,16 +37,19 @@ discount_factor: 0.99
dist_tol: 0.36 # body width
max_step: 500
max_collisions_allowed: 500
goal_format: polar
# misc config
initial_pos_z_offset: 0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# sensor spec
output: [sensor, rgb, depth, scan]
output: [task_obs, rgb, depth, scan]
# image
# ASUS Xtion PRO LIVE
# https://www.asus.com/us/3D-Sensor/Xtion_PRO_LIVE
fisheye: false
image_width: 160
image_height: 120
image_width: 160
image_height: 120
vertical_fov: 45
# depth
depth_low: 0.8
@ -53,11 +65,10 @@ laser_angular_range: 240.0
min_laser_dist: 0.05
laser_link_name: scan_link
# sensor noise
# sensor noise
depth_noise_rate: 0.0
scan_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: false

View File

@ -1,27 +0,0 @@
from gibson2.envs.locomotor_env import NavigationRandomEnvSim2Real
from time import time
import numpy as np
from time import time
import gibson2
import os
from gibson2.render.profiler import Profiler
def main():
config_filename = os.path.join(os.path.dirname(gibson2.__file__),
'../examples/configs/turtlebot_demo.yaml')
nav_env = NavigationRandomEnvSim2Real(config_file=config_filename,
mode='gui',
track='interactive')
for j in range(10):
nav_env.reset()
for i in range(100):
with Profiler('Env action step'):
action = nav_env.action_space.sample()
state, reward, done, info = nav_env.step(action)
if done:
print("Episode finished after {} timesteps".format(i + 1))
break
if __name__ == "__main__":
main()

View File

@ -1,27 +1,28 @@
from gibson2.envs.locomotor_env import NavigationRandomEnv
from time import time
import numpy as np
from gibson2.envs.igibson_env import iGibsonEnv
from time import time
import gibson2
import os
from gibson2.render.profiler import Profiler
import logging
# logging.getLogger().setLevel(logging.DEBUG)  # uncomment to increase the logging level
def main():
config_filename = os.path.join(os.path.dirname(gibson2.__file__),
'../examples/configs/turtlebot_demo.yaml')
nav_env = NavigationRandomEnv(config_file=config_filename, mode='gui')
config_filename = os.path.join(
os.path.dirname(gibson2.__file__),
'../examples/configs/turtlebot_demo.yaml')
env = iGibsonEnv(config_file=config_filename, mode='gui')
for j in range(10):
nav_env.reset()
env.reset()
for i in range(100):
with Profiler('Environment action step'):
action = nav_env.action_space.sample()
state, reward, done, info = nav_env.step(action)
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
logging.info("Episode finished after {} timesteps".format(i + 1))
logging.info(
"Episode finished after {} timesteps".format(i + 1))
break
env.close()
if __name__ == "__main__":
main()

View File

@ -1,25 +1,28 @@
from gibson2.envs.locomotor_env import NavigationRandomEnv
from time import time
import numpy as np
from gibson2.envs.igibson_env import iGibsonEnv
from time import time
import gibson2
import os
from gibson2.render.profiler import Profiler
import logging
def main():
config_filename = os.path.join(os.path.dirname(gibson2.__file__),
'../examples/configs/turtlebot_interactive_demo.yaml')
nav_env = NavigationRandomEnv(config_file=config_filename, mode='iggui')
config_filename = os.path.join(
os.path.dirname(gibson2.__file__),
'../examples/configs/turtlebot_point_nav.yaml')
env = iGibsonEnv(config_file=config_filename, mode='gui')
for j in range(10):
nav_env.reset()
env.reset()
for i in range(100):
with Profiler('Env action step'):
action = nav_env.action_space.sample()
state, reward, done, info = nav_env.step(action)
with Profiler('Environment action step'):
action = env.action_space.sample()
state, reward, done, info = env.step(action)
if done:
print("Episode finished after {} timesteps".format(i + 1))
logging.info(
"Episode finished after {} timesteps".format(i + 1))
break
env.close()
if __name__ == "__main__":
main()

View File

@ -1,6 +1,6 @@
from gibson2.simulator import Simulator
from gibson2.scenes.igibson_indoor_scene import iGSDFScene
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
from gibson2.render.profiler import Profiler
import argparse

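This hunk reflects the renderer refactor in this commit: MeshRendererSettings now lives in its own mesh_renderer_settings module instead of mesh_renderer_cpu. A short sketch of the updated import and how the settings object reaches the renderer, mirroring the examples below:

from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings

# Settings are built separately and passed in via rendering_settings.
settings = MeshRendererSettings(msaa=True, enable_shadow=True)
renderer = MeshRenderer(width=512, height=512, vertical_fov=90,
                        rendering_settings=settings)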
View File

@ -4,7 +4,7 @@ import os
import numpy as np
from gibson2.simulator import Simulator
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
from gibson2.render.profiler import Profiler
# from gibson2.utils.assets_utils import get_model_path
from gibson2.objects.articulated_object import ArticulatedObject

View File

@ -1,7 +1,7 @@
import gibson2
from gibson2.simulator import Simulator
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
from gibson2.render.profiler import Profiler
from gibson2.utils.assets_utils import get_ig_scene_path
import argparse

View File

@ -0,0 +1,39 @@
from gibson2.robots.turtlebot_robot import Turtlebot
from gibson2.simulator import Simulator
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.objects.ycb_object import YCBObject
from gibson2.utils.utils import parse_config
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
import numpy as np
from gibson2.render.profiler import Profiler
from IPython import embed
def main():
config = parse_config('../configs/turtlebot_demo.yaml')
settings = MeshRendererSettings()
s = Simulator(mode='gui',
image_width=256,
image_height=256,
rendering_settings=settings)
scene = StaticIndoorScene('Rs',
build_graph=True,
pybullet_load_texture=True)
s.import_scene(scene)
turtlebot = Turtlebot(config)
s.import_robot(turtlebot)
for i in range(10000):
with Profiler('Simulator step'):
turtlebot.apply_action([0.1, -0.1])
s.step()
lidar = s.renderer.get_lidar_all()
print(lidar.shape)
# TODO: visualize lidar scan
s.disconnect()
if __name__ == '__main__':
main()

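One way to serve the TODO above is a simple 3D scatter of the returned point cloud. A hypothetical sketch, assuming get_lidar_all() returns an N x 3 array of points like the one whose shape is printed in the loop:

# Hypothetical helper for the TODO above; `points` is assumed to be an N x 3 numpy array.
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D  # noqa: F401, registers the 3d projection

def plot_lidar(points):
    fig = plt.figure()
    ax = fig.add_subplot(111, projection='3d')
    ax.scatter(points[:, 0], points[:, 1], points[:, 2], s=1)
    ax.set_xlabel('x')
    ax.set_ylabel('y')
    ax.set_zlabel('z')
    plt.show()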
View File

@ -13,15 +13,14 @@ def main():
if len(sys.argv) > 1:
model_path = sys.argv[1]
else:
model_path = os.path.join(get_scene_path('Rs_int'), 'mesh_z_up.obj')
model_path = os.path.join(get_scene_path('Rs'), 'mesh_z_up.obj')
renderer = MeshRenderer(width=512, height=512)
renderer.load_object(model_path)
renderer.add_instance(0)
print(renderer.visual_objects, renderer.instances)
print(renderer.materials_mapping, renderer.mesh_materials)
px = 0
py = 0.2
@ -45,8 +44,10 @@ def main():
dy = (y - _mouse_iy) / 100.0
_mouse_ix = x
_mouse_iy = y
r1 = np.array([[np.cos(dy), 0, np.sin(dy)], [0, 1, 0], [-np.sin(dy), 0, np.cos(dy)]])
r2 = np.array([[np.cos(-dx), -np.sin(-dx), 0], [np.sin(-dx), np.cos(-dx), 0], [0, 0, 1]])
r1 = np.array([[np.cos(dy), 0, np.sin(dy)], [
0, 1, 0], [-np.sin(dy), 0, np.cos(dy)]])
r2 = np.array([[np.cos(-dx), -np.sin(-dx), 0],
[np.sin(-dx), np.cos(-dx), 0], [0, 0, 1]])
view_direction = r1.dot(r2).dot(view_direction)
elif event == cv2.EVENT_LBUTTONUP:
down = False
@ -57,7 +58,8 @@ def main():
while True:
with Profiler('Render'):
frame = renderer.render(modes=('rgb'))
cv2.imshow('test', cv2.cvtColor(np.concatenate(frame, axis=1), cv2.COLOR_RGB2BGR))
cv2.imshow('test', cv2.cvtColor(
np.concatenate(frame, axis=1), cv2.COLOR_RGB2BGR))
q = cv2.waitKey(1)
if q == ord('w'):
px += 0.01
@ -70,7 +72,8 @@ def main():
elif q == ord('q'):
break
camera_pose = np.array([px, py, 0.5])
renderer.set_camera(camera_pose, camera_pose + view_direction, [0, 0, 1])
renderer.set_camera(camera_pose, camera_pose +
view_direction, [0, 0, 1])
# start = time.time()
@ -84,4 +87,4 @@ def main():
if __name__ == '__main__':
main()
main()

View File

@ -2,12 +2,14 @@ import cv2
import sys
import os
import numpy as np
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer, MeshRendererSettings
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
from gibson2.render.profiler import Profiler
from gibson2.utils.assets_utils import get_scene_path
from PIL import Image
import gibson2
def load_obj_np(filename_obj, normalization=False, texture_size=4, load_texture=False,
def load_obj_np(filename_obj, normalization=False, texture_size=4,
texture_wrapping='REPEAT', use_bilinear=True):
"""Load Wavefront .obj file into numpy array
This function only supports vertices (v x x x) and faces (f x x x).
@ -39,33 +41,9 @@ def load_obj_np(filename_obj, normalization=False, texture_size=4, load_texture=
faces.append((v0, v1, v2))
faces = np.vstack(faces).astype(np.int32) - 1
# load textures
textures = None
assert load_texture is False # Since I commented out the block below
# if load_texture:
# for line in lines:
# if line.startswith('mtllib'):
# filename_mtl = os.path.join(os.path.dirname(filename_obj), line.split()[1])
# textures = load_textures(filename_obj, filename_mtl, texture_size,
# texture_wrapping=texture_wrapping,
# use_bilinear=use_bilinear)
# if textures is None:
# raise Exception('Failed to load textures.')
# textures = textures.cpu().numpy()
assert normalization is False # Since I commented out the block below
# # normalize into a unit cube centered zero
# if normalization:
# vertices -= vertices.min(0)[0][None, :]
# vertices /= torch.abs(vertices).max()
# vertices *= 2
# vertices -= vertices.max(0)[0][None, :] / 2
if load_texture:
return vertices, faces, textures
else:
return vertices, faces
return vertices, faces
def main():
@ -76,14 +54,10 @@ def main():
else:
model_path = os.path.join(get_scene_path('Rs_int'), 'mesh_z_up.obj')
settings = MeshRendererSettings(msaa=True, enable_shadow=True)
renderer = MeshRenderer(width=512, height=512, vertical_fov=90, rendering_settings=settings)
renderer = MeshRenderer(width=1024, height=1024, vertical_fov=70, rendering_settings=settings)
renderer.set_light_position_direction([0,0,10], [0,0,0])
renderer.load_object('plane/plane_z_up_0.obj', scale=[3,3,3])
renderer.add_instance(0)
renderer.set_pose([0,0,-1.5,1, 0, 0.0, 0.0], -1)
i = 1
i = 0
v = []
for fn in os.listdir(model_path):
@ -95,22 +69,16 @@ def main():
print(v.shape)
xlen = np.max(v[:,0]) - np.min(v[:,0])
ylen = np.max(v[:,1]) - np.min(v[:,1])
scale = 1.0/(max(xlen, ylen))
scale = 2.0/(max(xlen, ylen))
for fn in os.listdir(model_path):
if fn.endswith('obj'):
renderer.load_object(os.path.join(model_path, fn), scale=[scale, scale, scale])
renderer.add_instance(i)
i += 1
renderer.instances[-1].use_pbr = True
renderer.instances[-1].use_pbr_mapping = True
renderer.instances[-1].metalness = 1
renderer.instances[-1].roughness = 0.1
print(renderer.visual_objects, renderer.instances)
print(renderer.materials_mapping, renderer.mesh_materials)
px = 1
py = 1
@ -123,58 +91,48 @@ def main():
_mouse_ix, _mouse_iy = -1, -1
down = False
# def change_dir(event, x, y, flags, param):
# global _mouse_ix, _mouse_iy, down, view_direction
# if event == cv2.EVENT_LBUTTONDOWN:
# _mouse_ix, _mouse_iy = x, y
# down = True
# if event == cv2.EVENT_MOUSEMOVE:
# if down:
# dx = (x - _mouse_ix) / 100.0
# dy = (y - _mouse_iy) / 100.0
# _mouse_ix = x
# _mouse_iy = y
# r1 = np.array([[np.cos(dy), 0, np.sin(dy)], [0, 1, 0], [-np.sin(dy), 0, np.cos(dy)]])
# r2 = np.array([[np.cos(-dx), -np.sin(-dx), 0], [np.sin(-dx), np.cos(-dx), 0], [0, 0, 1]])
# view_direction = r1.dot(r2).dot(view_direction)
# elif event == cv2.EVENT_LBUTTONUP:
# down = False
def change_dir(event, x, y, flags, param):
global _mouse_ix, _mouse_iy, down, view_direction
if event == cv2.EVENT_LBUTTONDOWN:
_mouse_ix, _mouse_iy = x, y
down = True
if event == cv2.EVENT_MOUSEMOVE:
if down:
dx = (x - _mouse_ix) / 100.0
dy = (y - _mouse_iy) / 100.0
_mouse_ix = x
_mouse_iy = y
r1 = np.array([[np.cos(dy), 0, np.sin(dy)], [0, 1, 0], [-np.sin(dy), 0, np.cos(dy)]])
r2 = np.array([[np.cos(-dx), -np.sin(-dx), 0], [np.sin(-dx), np.cos(-dx), 0], [0, 0, 1]])
view_direction = r1.dot(r2).dot(view_direction)
elif event == cv2.EVENT_LBUTTONUP:
down = False
# cv2.namedWindow('test')
# cv2.setMouseCallback('test', change_dir)
cv2.namedWindow('test')
cv2.setMouseCallback('test', change_dir)
theta = 0
r = 1.5
imgs = []
for i in range(60):
theta += np.pi*2/60
renderer.set_pose([0,0,-1.5,np.cos(-theta/2), 0, 0.0, np.sin(-theta/2)], 0)
while True:
with Profiler('Render'):
frame = renderer.render(modes=('rgb'))
frame = renderer.render(modes=('rgb', 'normal'))
cv2.imshow('test', cv2.cvtColor(np.concatenate(frame, axis=1), cv2.COLOR_RGB2BGR))
imgs.append(Image.fromarray((255*np.concatenate(frame, axis=1)[:,:,:3]).astype(np.uint8)))
q = cv2.waitKey(1)
if q == ord('w'):
px += 0.01
px += 0.1
elif q == ord('s'):
px -= 0.01
px -= 0.1
elif q == ord('a'):
py += 0.01
py += 0.1
elif q == ord('d'):
py -= 0.01
py -= 0.1
elif q == ord('q'):
break
px = r*np.sin(theta)
py = r*np.cos(theta)
camera_pose = np.array([px, py, pz])
renderer.set_camera(camera_pose, [0,0,0], [0, 0, 1])
camera_pose = np.array([px, py, 1])
renderer.set_camera(camera_pose, camera_pose + view_direction, [0, 0, 1])
renderer.release()
imgs[0].save('{}.gif'.format('/data2/gifs/' + model_path.replace('/', '_')),
save_all=True, append_images=imgs[1:], optimize=False, duration=40, loop=0)
if __name__ == '__main__':
main()

View File

@ -11,7 +11,7 @@ def main():
if len(sys.argv) > 1:
model_path = sys.argv[1]
else:
model_path = os.path.join(get_scene_path('Rs_int'), 'mesh_z_up.obj')
model_path = os.path.join(get_scene_path('Rs'), 'mesh_z_up.obj')
renderer = MeshRendererG2G(width=512, height=512, device_idx=0)
renderer.load_object(model_path)
@ -28,8 +28,10 @@ def main():
frame = renderer.render(modes=('rgb', 'normal', '3d'))
print(frame)
img_np = frame[0].flip(0).data.cpu().numpy().reshape(renderer.height, renderer.width, 4)
normal_np = frame[1].flip(0).data.cpu().numpy().reshape(renderer.height, renderer.width, 4)
img_np = frame[0].flip(0).data.cpu().numpy().reshape(
renderer.height, renderer.width, 4)
normal_np = frame[1].flip(0).data.cpu().numpy().reshape(
renderer.height, renderer.width, 4)
plt.imshow(np.concatenate([img_np, normal_np], axis=1))
plt.show()

View File

@ -5,11 +5,12 @@ import numpy as np
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
from gibson2.utils.assets_utils import get_scene_path
def main():
if len(sys.argv) > 1:
model_path = sys.argv[1]
else:
model_path = os.path.join(get_scene_path('Rs_int'), 'mesh_z_up.obj')
model_path = os.path.join(get_scene_path('Rs'), 'mesh_z_up.obj')
renderer = MeshRenderer(width=512, height=512)
renderer.load_object(model_path)
@ -18,7 +19,8 @@ def main():
view_direction = np.array([1, 0, 0])
renderer.set_camera(camera_pose, camera_pose + view_direction, [0, 0, 1])
renderer.set_fov(90)
frames = renderer.render(modes=('rgb', 'normal', '3d'))
frames = renderer.render(
modes=('rgb', 'normal', '3d'))
frames = cv2.cvtColor(np.concatenate(frames, axis=1), cv2.COLOR_RGB2BGR)
cv2.imshow('image', frames)
cv2.waitKey(0)

View File

@ -0,0 +1,33 @@
from gibson2.envs.igibson_env import iGibsonEnv
from gibson2.utils.motion_planning_wrapper import MotionPlanningWrapper
import argparse
import numpy as np
def run_example(args):
nav_env = iGibsonEnv(config_file=args.config,
mode=args.mode,
action_timestep=1.0 / 120.0,
physics_timestep=1.0 / 120.0)
motion_planner = MotionPlanningWrapper(nav_env)
state = nav_env.reset()
while True:
action = np.zeros(nav_env.action_space.shape)
state, reward, done, _ = nav_env.step(action)
if __name__ == "__main__":
parser = argparse.ArgumentParser()
parser.add_argument(
'--config',
'-c',
help='which config file to use [default: use yaml files in examples/configs]')
parser.add_argument('--mode',
'-m',
choices=['headless', 'gui', 'iggui'],
default='headless',
help='which mode for simulation (default: headless)')
args = parser.parse_args()
run_example(args)

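As written, the example only steps the environment with zero actions after constructing the MotionPlanningWrapper; the planner itself is not exercised. Based on the argparse block above, a typical invocation would be (script name and config path are placeholders):

python <this script>.py -c <yaml from examples/configs> -m gui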
View File

@ -1,44 +1,64 @@
#!/usr/bin/env python
from gibson2.simulator import Simulator
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.utils.utils import parse_config
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
import os
import gibson2
import sys
import time
import random
import sys
import matplotlib.pyplot as plt
import gibson2
import argparse
import numpy as np
import pybullet as p
import matplotlib.pyplot as plt
from gibson2.simulator import Simulator
from gibson2.utils.utils import parse_config
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
from gibson2.utils.assets_utils import get_ig_scene_path,get_cubicasa_scene_path,get_3dfront_scene_path
# human interaction demo
def test_import_igsdf():
def test_import_igsdf(scene_name, scene_source):
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
if scene_source == "IG":
scene_dir = get_ig_scene_path(scene_name)
elif scene_source == "CUBICASA":
scene_dir = get_cubicasa_scene_path(scene_name)
else:
scene_dir = get_3dfront_scene_path(scene_name)
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
scene_dir, 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
gibson2.ig_dataset_path, 'scenes', 'background',
'urban_street_01.jpg')
scene = InteractiveIndoorScene(
'Beechwood_0_int', texture_randomization=False, object_randomization=False)
#scene._set_first_n_objects(10)
scene_name,
texture_randomization=False,
object_randomization=False,
scene_source=scene_source)
settings = MeshRendererSettings(env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True, msaa=True,
light_dimming_factor=1.0)
s = Simulator(mode='headless', image_width=960,
s = Simulator(mode='iggui', image_width=960,
image_height=720, device_idx=0, rendering_settings=settings)
#s.viewer.min_cam_z = 1.0
s.import_ig_scene(scene)
fpss = []
np.random.seed(0)
_,(px,py,pz) = scene.get_random_point()
s.viewer.px = px
s.viewer.py = py
s.viewer.pz = 1.7
s.viewer.update()
for i in range(3000):
if i == 2500:
@ -57,7 +77,16 @@ def test_import_igsdf():
plt.show()
def main():
test_import_igsdf()
parser = argparse.ArgumentParser(
description='Open a scene with iGibson interactive viewer.')
parser.add_argument('--scene', dest='scene_name',
type=str, default='Rs_int',
help='The name of the scene to load')
parser.add_argument('--source', dest='scene_source',
type=str, default='IG',
help='The name of the source dataset, among [IG,CUBICASA,THREEDFRONT]')
args = parser.parse_args()
test_import_igsdf(args.scene_name, args.scene_source)
if __name__ == "__main__":

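With the argparse block above, the viewer can be pointed at scenes from any of the three supported sources, for example (script name and the CubiCasa scene id are placeholders):

python <this script>.py --scene Rs_int --source IG
python <this script>.py --scene <cubicasa scene id> --source CUBICASA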
View File

@ -6,28 +6,31 @@ import pybullet as p
import pybullet_data
import time
def main():
p.connect(p.GUI)
p.setGravity(0,0,-9.8)
p.setGravity(0, 0, -9.8)
p.setTimeStep(1./240.)
floor = os.path.join(pybullet_data.getDataPath(), "mjcf/ground_plane.xml")
p.loadMJCF(floor)
cabinet_0007 = os.path.join(gibson2.assets_path, 'models/cabinet2/cabinet_0007.urdf')
cabinet_0004 = os.path.join(gibson2.assets_path, 'models/cabinet/cabinet_0004.urdf')
cabinet_0007 = os.path.join(
gibson2.assets_path, 'models/cabinet2/cabinet_0007.urdf')
cabinet_0004 = os.path.join(
gibson2.assets_path, 'models/cabinet/cabinet_0004.urdf')
obj1 = ArticulatedObject(filename=cabinet_0007)
obj1.load()
obj1.set_position([0,0,0.5])
obj1.set_position([0, 0, 0.5])
obj2 = ArticulatedObject(filename=cabinet_0004)
obj2.load()
obj2.set_position([0,0,2])
obj2.set_position([0, 0, 2])
obj3 = YCBObject('003_cracker_box')
obj3.load()
obj3.set_position_orientation([0,0,1.2], [0,0,0,1])
obj3.set_position_orientation([0, 0, 1.2], [0, 0, 0, 1])
for _ in range(24000): # at least 100 seconds
p.stepSimulation()

View File

@ -11,10 +11,10 @@ def main():
if len(sys.argv) > 1:
model_path = sys.argv[1]
else:
model_path = os.path.join(get_scene_path('Rs_int'), 'mesh_z_up.obj')
model_path = os.path.join(get_scene_path('Rs'), 'mesh_z_up.obj')
p.connect(p.GUI)
p.setGravity(0,0,-9.8)
p.setGravity(0, 0, -9.8)
p.setTimeStep(1./240.)
# Load scenes
@ -25,19 +25,21 @@ def main():
visual_id = p.createVisualShape(p.GEOM_MESH,
fileName=model_path,
meshScale=1.0)
texture_filename = get_texture_file(model_path)
texture_id = p.loadTexture(texture_filename)
mesh_id = p.createMultiBody(baseCollisionShapeIndex=collision_id,
baseVisualShapeIndex=visual_id)
# Load robots
turtlebot_urdf = os.path.join(gibson2.assets_path, 'models/turtlebot/turtlebot.urdf')
robot_id = p.loadURDF(turtlebot_urdf, flags=p.URDF_USE_MATERIAL_COLORS_FROM_MTL)
turtlebot_urdf = os.path.join(
gibson2.assets_path, 'models/turtlebot/turtlebot.urdf')
robot_id = p.loadURDF(
turtlebot_urdf, flags=p.URDF_USE_MATERIAL_COLORS_FROM_MTL)
# Load objects
obj_visual_filename = os.path.join(gibson2.assets_path, 'models/ycb/002_master_chef_can/textured_simple.obj')
obj_collision_filename = os.path.join(gibson2.assets_path, 'models/ycb/002_master_chef_can/textured_simple_vhacd.obj')
obj_visual_filename = os.path.join(
gibson2.assets_path, 'models/ycb/002_master_chef_can/textured_simple.obj')
obj_collision_filename = os.path.join(
gibson2.assets_path, 'models/ycb/002_master_chef_can/textured_simple_vhacd.obj')
collision_id = p.createCollisionShape(p.GEOM_MESH,
fileName=obj_collision_filename,
meshScale=1.0)
@ -57,4 +59,3 @@ def main():
if __name__ == '__main__':
main()

View File

@ -9,9 +9,10 @@ import numpy as np
import pybullet as p
import pybullet_data
def main():
p.connect(p.GUI)
p.setGravity(0,0,-9.8)
p.setGravity(0, 0, -9.8)
p.setTimeStep(1./240.)
floor = os.path.join(pybullet_data.getDataPath(), "mjcf/ground_plane.xml")
@ -63,4 +64,3 @@ def main():
if __name__ == '__main__':
main()

View File

@ -3,9 +3,10 @@ import pybullet as p
import numpy as np
import time
def main():
p.connect(p.GUI)
p.setGravity(0,0,-9.8)
p.setGravity(0, 0, -9.8)
p.setTimeStep(1./240.)
scene = StaticIndoorScene('Rs',
@ -18,7 +19,8 @@ def main():
random_floor = scene.get_random_floor()
p1 = scene.get_random_point(random_floor)[1]
p2 = scene.get_random_point(random_floor)[1]
shortest_path, geodesic_distance = scene.get_shortest_path(random_floor, p1[:2], p2[:2], entire_path=True)
shortest_path, geodesic_distance = scene.get_shortest_path(
random_floor, p1[:2], p2[:2], entire_path=True)
print('random point 1:', p1)
print('random point 2:', p2)
print('geodesic distance between p1 and p2', geodesic_distance)

View File

@ -1,33 +1,23 @@
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
import pybullet as p
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.simulator import Simulator
import numpy as np
import time
def main():
p.connect(p.GUI)
p.setGravity(0,0,-9.8)
p.setTimeStep(1./240.)
scene = StaticIndoorScene('Placida',
build_graph=True,
pybullet_load_texture=True)
scene.load()
s = Simulator(mode='gui', image_width=512,
image_height=512, device_idx=0)
scene = InteractiveIndoorScene(
'Rs_int', texture_randomization=False, object_randomization=False)
s.import_ig_scene(scene)
np.random.seed(0)
for _ in range(10):
random_floor = scene.get_random_floor()
p1 = scene.get_random_point(random_floor)[1]
p2 = scene.get_random_point(random_floor)[1]
shortest_path, geodesic_distance = scene.get_shortest_path(random_floor, p1[:2], p2[:2], entire_path=True)
print('random point 1:', p1)
print('random point 2:', p2)
print('geodesic distance between p1 and p2', geodesic_distance)
print('shortest path from p1 to p2:', shortest_path)
pt = scene.get_random_point_by_room_type('living_room')[1]
print('random point in living_room', pt)
for _ in range(24000): # at least 100 seconds
p.stepSimulation()
time.sleep(1./240.)
p.disconnect()
for _ in range(1000):
s.step()
s.disconnect()
if __name__ == '__main__':

View File

@ -0,0 +1,23 @@
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.simulator import Simulator
def main():
s = Simulator(mode='gui', image_width=512,
image_height=512, device_idx=0)
for random_seed in range(10):
scene = InteractiveIndoorScene('Rs_int',
texture_randomization=False,
object_randomization=True,
object_randomization_idx=random_seed)
s.import_ig_scene(scene)
for i in range(1000):
s.step()
s.reload()
s.disconnect()
if __name__ == '__main__':
main()

View File

@ -0,0 +1,19 @@
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.simulator import Simulator
def main():
s = Simulator(mode='gui', image_width=512,
image_height=512, device_idx=0)
scene = InteractiveIndoorScene(
'Rs_int', texture_randomization=False, object_randomization=False,
load_object_categories=['chair'], load_room_types=['living_room'])
s.import_ig_scene(scene)
for _ in range(1000):
s.step()
s.disconnect()
if __name__ == '__main__':
main()

View File

@ -0,0 +1,20 @@
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.simulator import Simulator
def main():
s = Simulator(mode='gui', image_width=512,
image_height=512, device_idx=0)
scene = InteractiveIndoorScene(
'Rs_int', texture_randomization=True, object_randomization=False)
s.import_ig_scene(scene)
for i in range(10000):
if i % 1000 == 0:
scene.randomize_texture()
s.step()
s.disconnect()
if __name__ == '__main__':
main()

View File

@ -0,0 +1,22 @@
from gibson2.core.physics.scene import BuildingScene
import pybullet as p
import numpy as np
import time
def main():
scenes = ['Bolton', 'Connellsville', 'Pleasant', 'Cantwell', 'Placida', 'Nicut', 'Brentsville', 'Samuels', 'Oyens', 'Kerrtown']
for scene in scenes:
print('scene: ', scene, '-' * 50)
p.connect(p.DIRECT)
p.setGravity(0,0,-9.8)
p.setTimeStep(1./240.)
scene = BuildingScene(scene,
is_interactive=True,
build_graph=True,
pybullet_load_texture=True)
scene.load()
p.disconnect()
if __name__ == '__main__':
main()

View File

@ -3,15 +3,17 @@ from gibson2.simulator import Simulator
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.objects.ycb_object import YCBObject
from gibson2.utils.utils import parse_config
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
import numpy as np
from gibson2.render.profiler import Profiler
from IPython import embed
def main():
config = parse_config('../configs/turtlebot_demo.yaml')
settings = MeshRendererSettings(enable_shadow=True, msaa=False)
s = Simulator(mode='gui', image_width=256, image_height=256, rendering_settings=settings)
settings = MeshRendererSettings(enable_shadow=False, msaa=False)
s = Simulator(mode='gui', image_width=256,
image_height=256, rendering_settings=settings)
scene = StaticIndoorScene('Rs',
build_graph=True,
@ -23,22 +25,16 @@ def main():
for _ in range(10):
obj = YCBObject('003_cracker_box')
s.import_object(obj)
obj.set_position_orientation(np.random.uniform(low=0, high=2, size=3), [0,0,0,1])
obj.set_position_orientation(np.random.uniform(
low=0, high=2, size=3), [0, 0, 0, 1])
print(s.renderer.instances)
for item in s.renderer.instances[1:]:
item.use_pbr = True
item.use_pbr_mapping = False
item.metalness = np.random.random()
item.roughness = np.random.random()
for i in range(10000):
with Profiler('Simulator step'):
turtlebot.apply_action([0.1,0.1])
turtlebot.apply_action([0.1, 0.1])
s.step()
rgb = s.renderer.render_robot_cameras(modes=('rgb'))
s.disconnect()

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.2 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 866 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 1.1 MiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 32 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 574 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 43 KiB

View File

@ -0,0 +1,79 @@
<html>
<head>
<title>iGibson Web Interface</title>
</head>
<style>
.img {
width: 512px;
height: 512px;
border:2px solid #fff;
box-shadow: 10px 10px 5px #ccc;
-moz-box-shadow: 10px 10px 5px #ccc;
-webkit-box-shadow: 10px 10px 5px #ccc;
-khtml-box-shadow: 10px 10px 5px #ccc;
}
.center {
display: block;
margin-left: auto;
margin-right: auto;
}
body, html {
height: 100%;
}
/* The hero image */
.hero-image {
/* Use "linear-gradient" to add a darkened background effect to the image. This makes the overlaid text easier to read */
background-image: url("{{url_for('static', filename='igibson_logo.png')}}");
/* Set a specific height */
height: 300px;
margin-left: auto;
margin-right: auto;
width: 900px;
/* Position and center the image to scale nicely on all screens */
background-position: center;
background-repeat: no-repeat;
background-size: cover;
position: relative;
}
/* Place text in the middle of the image */
.hero-text {
margin-top: 30px;
font-size: 20px;
text-align: center;
font-family: Arial, Helvetica, sans-serif;
}
</style>
<body>
<div class="hero-image">
</div>
<img class="img center" id="bg" src="{{ url_for('video_feed') }}?uuid={{uuid}}&robot={{robot}}&scene={{scene}}"> </img>
<div class="hero-text">
Use "W/A/S/D" to drive the robot and "F" to stop; everything is physically simulated in real time.
</div>
</body>
<script>
document.addEventListener('keydown', (event) => {
console.log(`key=${event.key},code=${event.code}`);
var xhr = new XMLHttpRequest();
xhr.open('POST', "{{ url_for('video_feed') }}?key=" + `${event.key}` + '&uuid={{uuid}}', true);
xhr.setRequestHeader('Content-type', 'application/x-www-form-urlencoded');
xhr.onload = function () {
// do something to response
console.log(this.responseText);
};
xhr.send('');
});
</script>
</html>

Binary file not shown.

After

Width:  |  Height:  |  Size: 12 KiB

View File

@ -0,0 +1,129 @@
<html>
<head>
<title>iGibson Web Interface</title>
<meta charset="utf-8">
<meta name="viewport" content="width=device-width, initial-scale=1">
<link rel="stylesheet" href="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.1/css/bootstrap.min.css">
<script src="https://ajax.googleapis.com/ajax/libs/jquery/3.5.1/jquery.min.js"></script>
<script src="https://maxcdn.bootstrapcdn.com/bootstrap/3.4.1/js/bootstrap.min.js"></script>
</head>
<style>
.img {
width: 512px;
height: 512px;
border:2px solid #fff;
box-shadow: 10px 10px 5px #ccc;
-moz-box-shadow: 10px 10px 5px #ccc;
-webkit-box-shadow: 10px 10px 5px #ccc;
-khtml-box-shadow: 10px 10px 5px #ccc;
}
.center {
display: block;
margin-left: auto;
margin-right: auto;
}
body, html {
height: 100%;
}
/* The hero image */
.hero-image {
/* Use "linear-gradient" to add a darkened background effect to the image. This makes the overlaid text easier to read */
background-image: url("{{url_for('static', filename='igibson_logo.png')}}");
/* Set a specific height */
height: 300px;
margin-left: auto;
margin-right: auto;
width: 900px;
/* Position and center the image to scale nicely on all screens */
background-position: center;
background-repeat: no-repeat;
background-size: cover;
position: relative;
}
/* Place text in the middle of the image */
.hero-text {
margin-top: 30px;
font-size: 20px;
text-align: center;
font-family: Arial, Helvetica, sans-serif;
}
</style>
<body>
<div class="hero-image">
</div>
<div class="container">
<form action="/demo">
<div class="row">
<div class="col-sm-6"><h4>Select Robot</h4>
<div class="form-check">
<input class="form-check-input" type="radio" name="robot" id="robot" value="fetch" checked>
<label class="form-check-label" for="fetch">
Fetch
<p> </p>
<img src="{{url_for('static', filename='fetch.jpg')}}"> </img>
</label>
</div>
<div class="form-check">
<input class="form-check-input" type="radio" name="robot" id="robot" value="turtlebot" checked>
<label class="form-check-label" for="turtlebot">
Turtlebot
<p> </p>
<img src="{{url_for('static', filename='turtlebot.jpg')}}"> </img>
</label>
</div>
</div>
<div class="col-sm-6"><h4>Select Scene</h4>
<div class="form-check">
<input class="form-check-input" type="radio" name="scene" id="scene" value="Rs_int" checked>
<label class="form-check-label" for="Rs_int">
Rs_int (1 min to load)
<p> </p>
<img src="{{url_for('static', filename='Rs.gif')}}"> </img>
</label>
</div>
<div class="form-check">
<input class="form-check-input" type="radio" name="scene" id="scene" value="Beechwood_0_int" checked>
<label class="form-check-label" for="Beechwood_0_int">
Beechwood_0_int (3 min to load)
<p> </p>
<img src="{{url_for('static', filename='Beechwood_0.gif')}}"> </img>
</label>
</div>
<div class="form-check">
<input class="form-check-input" type="radio" name="scene" id="scene" value="Merom_1_int" checked>
<label class="form-check-label" for="Merom_1_int">
Merom_1_int (3 min to load)
<p> </p>
<img src="{{url_for('static', filename='Merom_1.gif')}}"> </img>
</label>
</div>
</div>
</div>
<div class="row">
<button type="submit" class="btn btn-primary btn-block">Launch Demo</button>
</div>
</form>
</div>
</body>
</html>

Binary file not shown.

After

Width:  |  Height:  |  Size: 22 KiB

Binary file not shown.

After

Width:  |  Height:  |  Size: 29 KiB

View File

@ -0,0 +1,405 @@
from flask import Flask, render_template, Response, request, session
import sys
import pickle
from gibson2.robots.turtlebot_robot import Turtlebot
from gibson2.robots.fetch_robot import Fetch
from gibson2.simulator import Simulator
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
import gibson2
import os
from gibson2.objects.ycb_object import YCBObject
from gibson2.utils.utils import parse_config
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
import numpy as np
from gibson2.render.profiler import Profiler
import cv2
from PIL import Image
from io import BytesIO
import base64
import binascii
import multiprocessing
import traceback
import atexit
import time
import cv2
import uuid
interactive = True
def pil_image_to_base64(pil_image):
buf = BytesIO()
pil_image.save(buf, format="JPEG")
return base64.b64encode(buf.getvalue())
class ProcessPyEnvironment(object):
"""Step a single env in a separate process for lock-free parallelism."""
# Message types for communication via the pipe.
_READY = 1
_ACCESS = 2
_CALL = 3
_RESULT = 4
_EXCEPTION = 5
_CLOSE = 6
def __init__(self, env_constructor):
self._env_constructor = env_constructor
def start(self):
"""Start the process."""
self._conn, conn = multiprocessing.Pipe()
self._process = multiprocessing.Process(target=self._worker,
args=(conn, self._env_constructor))
atexit.register(self.close)
self._process.start()
result = self._conn.recv()
if isinstance(result, Exception):
self._conn.close()
self._process.join(5)
raise result
assert result is self._READY, result
def __getattr__(self, name):
"""Request an attribute from the environment.
Note that this involves communication with the external process, so it can
be slow.
:param name: attribute to access.
:return: value of the attribute.
"""
print('getting', name)
self._conn.send((self._ACCESS, name))
return self._receive()
def call(self, name, *args, **kwargs):
"""Asynchronously call a method of the external environment.
:param name: name of the method to call.
:param args: positional arguments to forward to the method.
:param kwargs: keyword arguments to forward to the method.
:return: promise object that blocks and provides the return value when called.
"""
payload = name, args, kwargs
self._conn.send((self._CALL, payload))
return self._receive
def close(self):
"""Send a close message to the external process and join it."""
try:
self._conn.send((self._CLOSE, None))
self._conn.close()
except IOError:
# The connection was already closed.
pass
self._process.join(5)
def step(self, action, blocking=True):
"""Step the environment.
:param action: the action to apply to the environment.
:param blocking: whether to wait for the result.
:return: (next_obs, reward, done, info) tuple when blocking, otherwise callable that returns that tuple
"""
promise = self.call('step', action)
if blocking:
return promise()
else:
return promise
def reset(self, blocking=True):
"""Reset the environment.
:param blocking: whether to wait for the result.
:return: next_obs when blocking, otherwise callable that returns next_obs
"""
promise = self.call('reset')
if blocking:
return promise()
else:
return promise
def _receive(self):
"""Wait for a message from the worker process and return its payload.
:raise Exception: an exception was raised inside the worker process.
:raise KeyError: the received message is of an unknown type.
:return: payload object of the message.
"""
message, payload = self._conn.recv()
# Re-raise exceptions in the main process.
if message == self._EXCEPTION:
stacktrace = payload
raise Exception(stacktrace)
if message == self._RESULT:
return payload
self.close()
raise KeyError(
'Received message of unexpected type {}'.format(message))
def _worker(self, conn, env_constructor):
"""The process waits for actions and sends back environment results.
:param conn: connection for communication to the main process.
:param env_constructor: env_constructor for the OpenAI Gym environment.
:raise KeyError: when receiving a message of unknown type.
"""
try:
np.random.seed()
env = env_constructor()
conn.send(self._READY) # Ready.
while True:
try:
# Only block for short times to have keyboard exceptions be raised.
if not conn.poll(0.1):
continue
message, payload = conn.recv()
except (EOFError, KeyboardInterrupt):
break
if message == self._ACCESS:
name = payload
result = getattr(env, name)
conn.send((self._RESULT, result))
continue
if message == self._CALL:
name, args, kwargs = payload
if name == 'step' or name == 'reset':
result = getattr(env, name)(*args, **kwargs)
conn.send((self._RESULT, result))
continue
if message == self._CLOSE:
getattr(env, 'close')()
assert payload is None
break
raise KeyError(
'Received message of unknown type {}'.format(message))
except Exception: # pylint: disable=broad-except
etype, evalue, tb = sys.exc_info()
stacktrace = ''.join(traceback.format_exception(etype, evalue, tb))
message = 'Error in environment process: {}'.format(stacktrace)
conn.send((self._EXCEPTION, stacktrace))
finally:
conn.close()
class ToyEnv(object):
def __init__(self):
config = parse_config('../../configs/turtlebot_demo.yaml')
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
settings = MeshRendererSettings(enable_shadow=False, enable_pbr=False)
self.s = Simulator(mode='headless', image_width=400,
image_height=400, rendering_settings=settings)
scene = StaticIndoorScene('Rs')
self.s.import_scene(scene)
#self.s.import_ig_scene(scene)
self.robot = Turtlebot(config)
self.s.import_robot(self.robot)
for _ in range(5):
obj = YCBObject('003_cracker_box')
self.s.import_object(obj)
obj.set_position_orientation(np.random.uniform(
low=0, high=2, size=3), [0, 0, 0, 1])
print(self.s.renderer.instances)
def step(self, a):
self.robot.apply_action(a)
self.s.step()
frame = self.s.renderer.render_robot_cameras(modes=('rgb'))[0]
return frame
def close(self):
self.s.disconnect()
class ToyEnvInt(object):
def __init__(self, robot='turtlebot', scene='Rs_int'):
config = parse_config('../../configs/turtlebot_demo.yaml')
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
scene = InteractiveIndoorScene(
scene, texture_randomization=False, object_randomization=False)
#scene._set_first_n_objects(5)
scene.open_all_doors()
settings = MeshRendererSettings(env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True, msaa=True,
light_dimming_factor=1.0,
optimized=True)
self.s = Simulator(mode='headless', image_width=400,
image_height=400, rendering_settings=settings)
self.s.import_ig_scene(scene)
if robot=='turtlebot':
self.robot = Turtlebot(config)
else:
self.robot = Fetch(config)
self.s.import_robot(self.robot)
for _ in range(5):
obj = YCBObject('003_cracker_box')
self.s.import_object(obj)
obj.set_position_orientation(np.random.uniform(
low=0, high=2, size=3), [0, 0, 0, 1])
print(self.s.renderer.instances)
def step(self, a):
action = np.zeros(self.robot.action_space.shape)
if isinstance(self.robot, Turtlebot):
action[0] = a[0]
action[1] = a[1]
else:
action[1] = a[0]
action[0] = a[1]
self.robot.apply_action(action)
self.s.step()
frame = self.s.renderer.render_robot_cameras(modes=('rgb'))[0]
return frame
def close(self):
self.s.disconnect()
class iGFlask(Flask):
def __init__(self, args, **kwargs):
super(iGFlask, self).__init__(args, **kwargs)
self.action = {}
self.envs = {}
self.envs_inception_time = {}
def cleanup(self):
print(self.envs)
for k,v in self.envs_inception_time.items():
if time.time() - v > 200:
# clean up an old environment
self.stop_app(k)
def prepare_app(self, uuid, robot, scene):
self.cleanup()
def env_constructor():
if interactive:
return ToyEnvInt(robot=robot, scene=scene)
else:
return ToyEnv()
self.envs[uuid] = ProcessPyEnvironment(env_constructor)
self.envs[uuid].start()
self.envs_inception_time[uuid] = time.time()
def stop_app(self, uuid):
self.envs[uuid].close()
del self.envs[uuid]
del self.envs_inception_time[uuid]
app = iGFlask(__name__)
@app.route('/')
def index():
id = uuid.uuid4()
return render_template('index.html', uuid=id)
@app.route('/demo')
def demo():
args = request.args
id = uuid.uuid4()
robot = args['robot']
scene = args['scene']
return render_template('demo.html', uuid=id, robot=robot, scene=scene)
def gen(app, unique_id, robot, scene):
image = np.array(Image.open("templates/loading.jpg").resize((400, 400))).astype(np.uint8)
loading_frame = pil_image_to_base64(Image.fromarray(image))
loading_frame = binascii.a2b_base64(loading_frame)
image = np.array(Image.open("templates/waiting.jpg").resize((400, 400))).astype(np.uint8)
waiting_frame = pil_image_to_base64(Image.fromarray(image))
waiting_frame = binascii.a2b_base64(waiting_frame)
image = np.array(Image.open("templates/finished.jpg").resize((400, 400))).astype(np.uint8)
finished_frame = pil_image_to_base64(Image.fromarray(image))
finished_frame = binascii.a2b_base64(finished_frame)
id = unique_id
if len(app.envs) < 3:
for i in range(5):
yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + loading_frame + b'\r\n\r\n')
app.prepare_app(id, robot, scene)
try:
start_time = time.time()
if interactive:
timeout = 200
else:
timeout = 30
while time.time() - start_time < timeout:
frame = app.envs[id].step(app.action[id])
frame = (frame[:, :, :3] * 255).astype(np.uint8)
frame = pil_image_to_base64(Image.fromarray(frame))
frame = binascii.a2b_base64(frame)
yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + frame + b'\r\n\r\n')
except:
pass
finally:
app.stop_app(id)
for i in range(5):
yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + finished_frame + b'\r\n\r\n')
else:
for i in range(5):
yield (b'--frame\r\n' b'Content-Type: image/jpeg\r\n\r\n' + waiting_frame + b'\r\n\r\n')
@app.route('/video_feed', methods=['POST', 'GET'])
def video_feed():
unique_id = request.args['uuid']
if 'robot' in request.args.keys():
robot = request.args['robot']
if 'scene' in request.args.keys():
scene = request.args['scene']
print(unique_id)
if request.method == 'POST':
key = request.args['key']
if key == 'w':
app.action[unique_id] = [1,1]
if key == 's':
app.action[unique_id] = [-1,-1]
if key == 'd':
app.action[unique_id] = [0.3,-0.3]
if key == 'a':
app.action[unique_id] = [-0.3,0.3]
if key == 'f':
app.action[unique_id] = [0,0]
return ""
else:
app.action[unique_id] = [0,0]
return Response(gen(app, unique_id, robot, scene), mimetype='multipart/x-mixed-replace; boundary=frame')
if __name__ == '__main__':
port = int(sys.argv[1])
app.run(host="0.0.0.0", port=port)

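ProcessPyEnvironment above isolates each simulator in its own worker process and exposes a small step/reset/close protocol over a pipe, which is what lets the Flask app host several concurrent demos; the app itself is launched with a port argument, e.g. python <this script>.py 8000. A minimal stand-alone sketch, assuming it runs in this module so ProcessPyEnvironment and ToyEnv are in scope:

# Hypothetical direct use of ProcessPyEnvironment with the ToyEnv defined above.
env = ProcessPyEnvironment(ToyEnv)
env.start()                          # spawns the worker and waits for the _READY message
try:
    for _ in range(10):
        frame = env.step([0.1, 0.1])     # blocking call; returns the rendered rgb frame
finally:
    env.close()                      # sends _CLOSE and joins the worker process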
View File

@ -11,7 +11,7 @@ import rospkg
import numpy as np
from cv_bridge import CvBridge
import tf
from gibson2.envs.locomotor_env import NavigationEnv
from gibson2.envs.igibson_env import iGibsonEnv
class SimNode:
@ -24,15 +24,19 @@ class SimNode:
self.cmdx = 0.0
self.cmdy = 0.0
self.image_pub = rospy.Publisher("/gibson_ros/camera/rgb/image", ImageMsg, queue_size=10)
self.depth_pub = rospy.Publisher("/gibson_ros/camera/depth/image", ImageMsg, queue_size=10)
self.lidar_pub = rospy.Publisher("/gibson_ros/lidar/points", PointCloud2, queue_size=10)
self.image_pub = rospy.Publisher(
"/gibson_ros/camera/rgb/image", ImageMsg, queue_size=10)
self.depth_pub = rospy.Publisher(
"/gibson_ros/camera/depth/image", ImageMsg, queue_size=10)
self.lidar_pub = rospy.Publisher(
"/gibson_ros/lidar/points", PointCloud2, queue_size=10)
self.depth_raw_pub = rospy.Publisher("/gibson_ros/camera/depth/image_raw",
ImageMsg,
queue_size=10)
self.odom_pub = rospy.Publisher("/odom", Odometry, queue_size=10)
self.gt_odom_pub = rospy.Publisher("/ground_truth_odom", Odometry, queue_size=10)
self.gt_odom_pub = rospy.Publisher(
"/ground_truth_odom", Odometry, queue_size=10)
self.camera_info_pub = rospy.Publisher("/gibson_ros/camera/depth/camera_info",
CameraInfo,
@ -40,13 +44,14 @@ class SimNode:
self.bridge = CvBridge()
self.br = tf.TransformBroadcaster()
self.env = NavigationEnv(config_file=config_filename,
mode='headless',
action_timestep=1 / 30.0) # assume a 30Hz simulation
self.env = iGibsonEnv(config_file=config_filename,
mode='headless',
action_timestep=1 / 30.0) # assume a 30Hz simulation
print(self.env.config)
obs = self.env.reset()
rospy.Subscriber("/mobile_base/commands/velocity", Twist, self.cmd_callback)
rospy.Subscriber("/mobile_base/commands/velocity",
Twist, self.cmd_callback)
rospy.Subscriber("/reset_pose", PoseStamped, self.tp_robot_callback)
self.tp_time = None
@ -71,8 +76,10 @@ class SimNode:
depth = obs["depth"].astype(np.float32)
image_message = self.bridge.cv2_to_imgmsg(rgb, encoding="rgb8")
depth_raw_image = (obs["depth"] * 1000).astype(np.uint16)
depth_raw_message = self.bridge.cv2_to_imgmsg(depth_raw_image, encoding="passthrough")
depth_message = self.bridge.cv2_to_imgmsg(depth, encoding="passthrough")
depth_raw_message = self.bridge.cv2_to_imgmsg(
depth_raw_image, encoding="passthrough")
depth_message = self.bridge.cv2_to_imgmsg(
depth, encoding="passthrough")
now = rospy.Time.now()
@ -103,7 +110,8 @@ class SimNode:
lidar_header = Header()
lidar_header.stamp = now
lidar_header.frame_id = 'scan_link'
lidar_message = pc2.create_cloud_xyz32(lidar_header, lidar_points.tolist())
lidar_message = pc2.create_cloud_xyz32(
lidar_header, lidar_points.tolist())
self.lidar_pub.publish(lidar_message)
# odometry
@ -116,7 +124,8 @@ class SimNode:
]
self.br.sendTransform((odom[0][0], odom[0][1], 0),
tf.transformations.quaternion_from_euler(0, 0, odom[-1][-1]),
tf.transformations.quaternion_from_euler(
0, 0, odom[-1][-1]),
rospy.Time.now(), 'base_footprint', "odom")
odom_msg = Odometry()
odom_msg.header.stamp = rospy.Time.now()
@ -126,10 +135,12 @@ class SimNode:
odom_msg.pose.pose.position.x = odom[0][0]
odom_msg.pose.pose.position.y = odom[0][1]
odom_msg.pose.pose.orientation.x, odom_msg.pose.pose.orientation.y, odom_msg.pose.pose.orientation.z, \
odom_msg.pose.pose.orientation.w = tf.transformations.quaternion_from_euler(0, 0, odom[-1][-1])
odom_msg.pose.pose.orientation.w = tf.transformations.quaternion_from_euler(
0, 0, odom[-1][-1])
odom_msg.twist.twist.linear.x = (self.cmdx + self.cmdy) * 5
odom_msg.twist.twist.angular.z = (self.cmdy - self.cmdx) * 5 * 8.695652173913043
odom_msg.twist.twist.angular.z = (
self.cmdy - self.cmdx) * 5 * 8.695652173913043
self.odom_pub.publish(odom_msg)
# Ground truth pose
@ -151,16 +162,20 @@ class SimNode:
rpy[2])
gt_odom_msg.twist.twist.linear.x = (self.cmdx + self.cmdy) * 5
gt_odom_msg.twist.twist.angular.z = (self.cmdy - self.cmdx) * 5 * 8.695652173913043
gt_odom_msg.twist.twist.angular.z = (
self.cmdy - self.cmdx) * 5 * 8.695652173913043
self.gt_odom_pub.publish(gt_odom_msg)
def cmd_callback(self, data):
self.cmdx = data.linear.x / 10.0 - data.angular.z / (10 * 8.695652173913043)
self.cmdy = data.linear.x / 10.0 + data.angular.z / (10 * 8.695652173913043)
self.cmdx = data.linear.x / 10.0 - \
data.angular.z / (10 * 8.695652173913043)
self.cmdy = data.linear.x / 10.0 + \
data.angular.z / (10 * 8.695652173913043)
def tp_robot_callback(self, data):
rospy.loginfo('Teleporting robot')
position = [data.pose.position.x, data.pose.position.y, data.pose.position.z]
position = [data.pose.position.x,
data.pose.position.y, data.pose.position.z]
orientation = [
data.pose.orientation.x, data.pose.orientation.y, data.pose.orientation.z,
data.pose.orientation.w

View File

@ -1,58 +1,75 @@
# scene
scene: gibson
scene_id: Rs
build_graph: true
load_texture: true
pybullet_load_texture: true
trav_map_type: no_obj
trav_map_resolution: 0.1
trav_map_erosion: 2
should_open_all_doors: true
scene: building
scene_id: area1
# domain randomization
texture_randomization_freq: null
object_randomization_freq: null
# robot
robot: Turtlebot
# task, observation and action
task: pointgoal # pointgoal|objectgoal|areagoal
initial_orn: [0.0, 0.0, 0.0]
initial_pos: [0.0, 0.0, 0.0]
target_orn: [0.0, 0.0, 0.0]
target_pos: [3.0, 5.0, 0.0]
dist_tol: 0.5
terminal_reward: 5000
discount_factor: 1.0
additional_states_dim: 3
fisheye: false
fov: 1.57
is_discrete: false
velocity: 1.0
debug: true
# display
# task
task: point_nav_random
target_dist_min: 1.0
target_dist_max: 10.0
goal_format: polar
task_obs_dim: 4
use_filler: true
display_ui: false
show_diagnostics: false
ui_num: 2
ui_components: [RGB_FILLED, DEPTH]
random:
random_initial_pose : false
random_target_pose : false
random_init_x_range: [-0.1, 0.1]
random_init_y_range: [-0.1, 0.1]
random_init_z_range: [-0.1, 0.1]
random_init_rot_range: [-0.1, 0.1]
# reward
reward_type: geodesic
success_reward: 10.0
potential_reward_weight: 1.0
collision_reward_weight: -0.1
output: [nonviz_sensor, rgb, depth, scan]
resolution: 256
# discount factor
discount_factor: 0.99
speed:
timestep: 0.001
frameskip: 10
# termination condition
dist_tol: 0.36 # body width
max_step: 500
max_collisions_allowed: 500
mode: web_ui #gui|headless
verbose: false
fast_lq_render: true
# misc config
initial_pos_z_offset: 0.1
collision_ignore_link_a_ids: [1, 2, 3, 4] # ignore collisions with these robot links
# sensor spec
output: [task_obs, rgb, depth]
# image
# ASUS Xtion PRO LIVE
# https://www.asus.com/us/3D-Sensor/Xtion_PRO_LIVE
fisheye: false
image_width: 160
image_height: 120
vertical_fov: 45
# depth
depth_low: 0.8
depth_high: 3.5
# scan
# Hokuyo URG-04LX-UG01
# https://www.hokuyo-aut.jp/search/single.php?serial=166
# n_horizontal_rays is originally 683, sub-sampled 1/3
n_horizontal_rays: 228
n_vertical_beams: 1
laser_linear_range: 5.6
laser_angular_range: 240.0
min_laser_dist: 0.05
laser_link_name: scan_link
# sensor noise
depth_noise_rate: 0.0
scan_noise_rate: 0.0
# visual objects
visual_object_at_initial_target_pos: true
target_visual_object_visible_to_agent: true
target_visual_object_visible_to_agent: false

View File

@ -17,10 +17,10 @@ else:
assets_path = os.path.expanduser(assets_path)
if 'GIBSON_DATASET_PATH' in os.environ:
dataset_path = os.environ['GIBSON_DATASET_PATH']
g_dataset_path = os.environ['GIBSON_DATASET_PATH']
else:
dataset_path = global_config['dataset_path']
dataset_path = os.path.expanduser(dataset_path)
g_dataset_path = global_config['g_dataset_path']
g_dataset_path = os.path.expanduser(g_dataset_path)
if 'IGIBSON_DATASET_PATH' in os.environ:
ig_dataset_path = os.environ['IGIBSON_DATASET_PATH']
@ -28,16 +28,34 @@ else:
ig_dataset_path = global_config['ig_dataset_path']
ig_dataset_path = os.path.expanduser(ig_dataset_path)
if '3DFRONT_DATASET_PATH' in os.environ:
threedfront_dataset_path = os.environ['3DFRONT_DATASET_PATH']
else:
threedfront_dataset_path = global_config['threedfront_dataset_path']
threedfront_dataset_path = os.path.expanduser(threedfront_dataset_path)
if 'CUBICASA_DATASET_PATH' in os.environ:
cubicasa_dataset_path = os.environ['CUBICASA_DATASET_PATH']
else:
cubicasa_dataset_path = global_config['cubicasa_dataset_path']
cubicasa_dataset_path = os.path.expanduser(cubicasa_dataset_path)
root_path = os.path.dirname(os.path.realpath(__file__))
if not os.path.isabs(assets_path):
assets_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), assets_path)
if not os.path.isabs(dataset_path):
dataset_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), dataset_path)
if not os.path.isabs(g_dataset_path):
g_dataset_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), g_dataset_path)
if not os.path.isabs(ig_dataset_path):
ig_dataset_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), ig_dataset_path)
if not os.path.isabs(threedfront_dataset_path):
threedfront_dataset_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), threedfront_dataset_path)
if not os.path.isabs(cubicasa_dataset_path):
cubicasa_dataset_path = os.path.join(os.path.dirname(os.path.realpath(__file__)), cubicasa_dataset_path)
logging.info('Importing iGibson (gibson2 module)')
logging.info('Assets path: {}'.format(assets_path))
logging.info('Dataset path: {}'.format(dataset_path))
logging.info('Gibson Dataset path: {}'.format(g_dataset_path))
logging.info('iG Dataset path: {}'.format(ig_dataset_path))
logging.info('3D-FRONT Dataset path: {}'.format(threedfront_dataset_path))
logging.info('CubiCasa5K Dataset path: {}'.format(cubicasa_dataset_path))

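Because these lookups run at import time, each dataset root can be overridden through its environment variable before gibson2 is imported. A small sketch with placeholder paths:

# Hypothetical override of dataset locations; must run before `import gibson2`.
import os
os.environ['IGIBSON_DATASET_PATH'] = '/data/ig_dataset'          # placeholder path
os.environ['CUBICASA_DATASET_PATH'] = '/data/cubicasa_dataset'   # placeholder path

import gibson2
print(gibson2.ig_dataset_path, gibson2.cubicasa_dataset_path)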
View File

@ -13,13 +13,15 @@ from gibson2.scenes.stadium_scene import StadiumScene
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.utils.utils import parse_config
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
import gym
class BaseEnv(gym.Env):
'''
a basic environment, step, observation and reward not implemented
Base Env class, follows OpenAI Gym interface
Handles loading scene and robot
Functions like reset and step are not implemented
'''
def __init__(self,
@ -54,9 +56,12 @@ class BaseEnv(gym.Env):
enable_shadow = self.config.get('enable_shadow', False)
enable_pbr = self.config.get('enable_pbr', True)
texture_scale = self.config.get('texture_scale', 1.0)
settings = MeshRendererSettings(enable_shadow=enable_shadow,
enable_pbr=enable_pbr,
msaa=False)
msaa=False,
texture_scale=texture_scale)
self.simulator = Simulator(mode=mode,
physics_timestep=physics_timestep,
@ -69,14 +74,13 @@ class BaseEnv(gym.Env):
'vertical_fov', 90),
device_idx=device_idx,
render_to_tensor=render_to_tensor,
rendering_settings=settings,
auto_sync=True)
rendering_settings=settings)
self.load()
def reload(self, config_file):
"""
Reload another config file, this allows one to change the environment on the fly
Reload another config file
This allows one to change the configuration on the fly
:param config_file: new config file path
"""
@ -86,7 +90,9 @@ class BaseEnv(gym.Env):
def reload_model(self, scene_id):
"""
Reload another model, this allows one to change the environment on the fly
Reload another scene model
This allows one to change the scene on the fly
:param scene_id: new scene_id
"""
self.config['scene_id'] = scene_id
@ -105,6 +111,9 @@ class BaseEnv(gym.Env):
self.load()
def get_next_scene_random_seed(self):
"""
Get the next scene random seed
"""
if self.object_randomization_freq is None:
return None
return self.scene_random_seeds[self.scene_random_seed_idx]
@ -146,6 +155,7 @@ class BaseEnv(gym.Env):
trav_map_resolution=self.config.get(
'trav_map_resolution', 0.1),
trav_map_erosion=self.config.get('trav_map_erosion', 2),
trav_map_type=self.config.get('trav_map_type', 'with_obj'),
pybullet_load_texture=self.config.get(
'pybullet_load_texture', False),
texture_randomization=self.texture_randomization_freq is not None,
@ -153,9 +163,16 @@ class BaseEnv(gym.Env):
object_randomization_idx=self.object_randomization_idx,
should_open_all_doors=self.config.get(
'should_open_all_doors', False),
trav_map_type=self.config.get('trav_map_type', 'with_obj'),
load_object_categories=self.config.get(
'load_object_categories', None),
load_room_types=self.config.get('load_room_types', None),
load_room_instances=self.config.get(
'load_room_instances', None),
)
# TODO: unify import_scene and move it out of the if-else clauses
first_n = self.config.get('_set_first_n_objects', -1)
if first_n != -1:
scene._set_first_n_objects(first_n)
self.simulator.import_ig_scene(scene)
if self.config['robot'] == 'Turtlebot':
@ -192,9 +209,17 @@ class BaseEnv(gym.Env):
if self.simulator is not None:
self.simulator.disconnect()
def close(self):
"""
Alias for clean()
"""
self.clean()
def simulator_step(self):
"""
Step the simulation, this is different from environment step where one can get observation and reward
Step the simulation.
This is different from an environment step, which also returns the next
observation, reward, done and info.
"""
self.simulator.step()
@ -211,4 +236,7 @@ class BaseEnv(gym.Env):
return NotImplementedError()
def set_mode(self, mode):
"""
Set simulator mode
"""
self.simulator.mode = mode
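For reference, a hedged sketch of building the renderer settings the way BaseEnv does in the hunk above. The config keys mirror the ones read there (enable_shadow, enable_pbr, texture_scale); importing Simulator from gibson2.simulator is an assumption, and Simulator arguments not visible in this diff are omitted:

from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
from gibson2.simulator import Simulator  # assumed import location

config = {'enable_shadow': True, 'enable_pbr': True, 'texture_scale': 0.5}
settings = MeshRendererSettings(
    enable_shadow=config.get('enable_shadow', False),
    enable_pbr=config.get('enable_pbr', True),
    msaa=False,
    texture_scale=config.get('texture_scale', 1.0))
simulator = Simulator(mode='headless',
                      physics_timestep=1 / 240.0,
                      rendering_settings=settings)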

480
gibson2/envs/igibson_env.py Normal file
View File

@ -0,0 +1,480 @@
from gibson2.utils.utils import quatToXYZW
from gibson2.envs.env_base import BaseEnv
from gibson2.tasks.room_rearrangement_task import RoomRearrangementTask
from gibson2.tasks.point_nav_fixed_task import PointNavFixedTask
from gibson2.tasks.point_nav_random_task import PointNavRandomTask
from gibson2.tasks.interactive_nav_random_task import InteractiveNavRandomTask
from gibson2.tasks.dynamic_nav_random_task import DynamicNavRandomTask
from gibson2.tasks.reaching_random_task import ReachingRandomTask
from gibson2.sensors.scan_sensor import ScanSensor
from gibson2.sensors.vision_sensor import VisionSensor
from gibson2.robots.robot_base import BaseRobot
from gibson2.external.pybullet_tools.utils import stable_z_on_aabb
from transforms3d.euler import euler2quat
from collections import OrderedDict
import argparse
import gym
import numpy as np
import pybullet as p
import time
import logging
class iGibsonEnv(BaseEnv):
"""
iGibson Environment (OpenAI Gym interface)
"""
def __init__(
self,
config_file,
scene_id=None,
mode='headless',
action_timestep=1 / 10.0,
physics_timestep=1 / 240.0,
device_idx=0,
render_to_tensor=False,
automatic_reset=False,
):
"""
:param config_file: config_file path
:param scene_id: override scene_id in config file
:param mode: headless, gui, iggui
:param action_timestep: environment executes one action per action_timestep seconds
:param physics_timestep: physics timestep for pybullet
:param device_idx: which GPU to run the simulation and rendering on
:param render_to_tensor: whether to render directly to pytorch tensors
:param automatic_reset: whether to automatically reset after an episode finishes
"""
super(iGibsonEnv, self).__init__(config_file=config_file,
scene_id=scene_id,
mode=mode,
action_timestep=action_timestep,
physics_timestep=physics_timestep,
device_idx=device_idx,
render_to_tensor=render_to_tensor)
self.automatic_reset = automatic_reset
def load_task_setup(self):
"""
Load task setup
"""
self.initial_pos_z_offset = self.config.get(
'initial_pos_z_offset', 0.1)
# s = 0.5 * G * (t ** 2)
drop_distance = 0.5 * 9.8 * (self.action_timestep ** 2)
assert drop_distance < self.initial_pos_z_offset, \
'initial_pos_z_offset is too small for collision checking'
# ignore the agent's collision with these body ids
self.collision_ignore_body_b_ids = set(
self.config.get('collision_ignore_body_b_ids', []))
# ignore the agent's collision with these link ids of itself
self.collision_ignore_link_a_ids = set(
self.config.get('collision_ignore_link_a_ids', []))
# discount factor
self.discount_factor = self.config.get('discount_factor', 0.99)
# domain randomization frequency
self.texture_randomization_freq = self.config.get(
'texture_randomization_freq', None)
self.object_randomization_freq = self.config.get(
'object_randomization_freq', None)
# task
if self.config['task'] == 'point_nav_fixed':
self.task = PointNavFixedTask(self)
elif self.config['task'] == 'point_nav_random':
self.task = PointNavRandomTask(self)
elif self.config['task'] == 'interactive_nav_random':
self.task = InteractiveNavRandomTask(self)
elif self.config['task'] == 'dynamic_nav_random':
self.task = DynamicNavRandomTask(self)
elif self.config['task'] == 'reaching_random':
self.task = ReachingRandomTask(self)
elif self.config['task'] == 'room_rearrangement':
self.task = RoomRearrangementTask(self)
else:
self.task = None
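The drop-distance assertion in load_task_setup above can be checked by hand: with the default action_timestep of 1/10 s and the default initial_pos_z_offset of 0.1 m (both defaults taken from this file), a body in free fall covers less than the offset within one action step:

action_timestep = 1 / 10.0          # default used in iGibsonEnv.__init__
initial_pos_z_offset = 0.1          # default read in load_task_setup
drop_distance = 0.5 * 9.8 * action_timestep ** 2   # s = 0.5 * G * t^2 = 0.049 m
assert drop_distance < initial_pos_z_offset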
def build_obs_space(self, shape, low, high):
"""
Helper function that builds individual observation spaces
"""
return gym.spaces.Box(
low=low,
high=high,
shape=shape,
dtype=np.float32)
def load_observation_space(self):
"""
Load observation space
"""
self.output = self.config['output']
self.image_width = self.config.get('image_width', 128)
self.image_height = self.config.get('image_height', 128)
observation_space = OrderedDict()
sensors = OrderedDict()
vision_modalities = []
scan_modalities = []
if 'task_obs' in self.output:
observation_space['task_obs'] = self.build_obs_space(
shape=(self.task.task_obs_dim,), low=-np.inf, high=np.inf)
if 'rgb' in self.output:
observation_space['rgb'] = self.build_obs_space(
shape=(self.image_height, self.image_width, 3),
low=0.0, high=1.0)
vision_modalities.append('rgb')
if 'depth' in self.output:
observation_space['depth'] = self.build_obs_space(
shape=(self.image_height, self.image_width, 1),
low=0.0, high=1.0)
vision_modalities.append('depth')
if 'pc' in self.output:
observation_space['pc'] = self.build_obs_space(
shape=(self.image_height, self.image_width, 3),
low=-np.inf, high=np.inf)
vision_modalities.append('pc')
if 'optical_flow' in self.output:
observation_space['optical_flow'] = self.build_obs_space(
shape=(self.image_height, self.image_width, 2),
low=-np.inf, high=np.inf)
vision_modalities.append('optical_flow')
if 'scene_flow' in self.output:
observation_space['scene_flow'] = self.build_obs_space(
shape=(self.image_height, self.image_width, 3),
low=-np.inf, high=np.inf)
vision_modalities.append('scene_flow')
if 'normal' in self.output:
observation_space['normal'] = self.build_obs_space(
shape=(self.image_height, self.image_width, 3),
low=-np.inf, high=np.inf)
vision_modalities.append('normal')
if 'seg' in self.output:
observation_space['seg'] = self.build_obs_space(
shape=(self.image_height, self.image_width, 1),
low=0.0, high=1.0)
vision_modalities.append('seg')
if 'rgb_filled' in self.output: # use filler
observation_space['rgb_filled'] = self.build_obs_space(
shape=(self.image_height, self.image_width, 3),
low=0.0, high=1.0)
vision_modalities.append('rgb_filled')
if 'scan' in self.output:
self.n_horizontal_rays = self.config.get('n_horizontal_rays', 128)
self.n_vertical_beams = self.config.get('n_vertical_beams', 1)
assert self.n_vertical_beams == 1, 'scan can only handle one vertical beam for now'
observation_space['scan'] = self.build_obs_space(
shape=(self.n_horizontal_rays * self.n_vertical_beams, 1),
low=0.0, high=1.0)
scan_modalities.append('scan')
if 'occupancy_grid' in self.output:
self.grid_resolution = self.config.get('grid_resolution', 128)
self.occupancy_grid_space = gym.spaces.Box(low=0.0,
high=1.0,
shape=(self.grid_resolution,
self.grid_resolution, 1))
observation_space['occupancy_grid'] = self.occupancy_grid_space
scan_modalities.append('occupancy_grid')
if len(vision_modalities) > 0:
sensors['vision'] = VisionSensor(self, vision_modalities)
if len(scan_modalities) > 0:
sensors['scan_occ'] = ScanSensor(self, scan_modalities)
self.observation_space = gym.spaces.Dict(observation_space)
self.sensors = sensors
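As an illustration of what load_observation_space assembles, a config with output = ['rgb', 'depth', 'scan'] and the default image and scan sizes would yield a Dict space like the sketch below (sizes are the illustrative defaults from this file; the real spaces are built by the helpers above):

import gym
import numpy as np
from collections import OrderedDict

H, W, n_rays = 128, 128, 128
observation_space = gym.spaces.Dict(OrderedDict([
    ('rgb',   gym.spaces.Box(low=0.0, high=1.0, shape=(H, W, 3), dtype=np.float32)),
    ('depth', gym.spaces.Box(low=0.0, high=1.0, shape=(H, W, 1), dtype=np.float32)),
    ('scan',  gym.spaces.Box(low=0.0, high=1.0, shape=(n_rays, 1), dtype=np.float32)),
]))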
def load_action_space(self):
"""
Load action space
"""
self.action_space = self.robots[0].action_space
def load_miscellaneous_variables(self):
"""
Load miscellaneous variables for bookkeeping
"""
self.current_step = 0
self.collision_step = 0
self.current_episode = 0
self.collision_links = []
def load(self):
"""
Load environment
"""
super(iGibsonEnv, self).load()
self.load_task_setup()
self.load_observation_space()
self.load_action_space()
self.load_miscellaneous_variables()
def get_state(self, collision_links=[]):
"""
Get the current observation
:param collision_links: collisions from last physics timestep
:return: observation as a dictionary
"""
state = OrderedDict()
if 'task_obs' in self.output:
state['task_obs'] = self.task.get_task_obs(self)
if 'vision' in self.sensors:
vision_obs = self.sensors['vision'].get_obs(self)
for modality in vision_obs:
state[modality] = vision_obs[modality]
if 'scan_occ' in self.sensors:
scan_obs = self.sensors['scan_occ'].get_obs(self)
for modality in scan_obs:
state[modality] = scan_obs[modality]
return state
def run_simulation(self):
"""
Run simulation for one action timestep (same as one render timestep in Simulator class)
:return: collision_links: collisions from last physics timestep
"""
self.simulator_step()
collision_links = list(p.getContactPoints(
bodyA=self.robots[0].robot_ids[0]))
return self.filter_collision_links(collision_links)
def filter_collision_links(self, collision_links):
"""
Filter out collisions that should be ignored
:param collision_links: original collisions, a list of collisions
:return: filtered collisions
"""
new_collision_links = []
for item in collision_links:
# ignore collision with body b
if item[2] in self.collision_ignore_body_b_ids:
continue
# ignore collision with robot link a
if item[3] in self.collision_ignore_link_a_ids:
continue
# ignore self collision with robot link a (body b is also robot itself)
if item[2] == self.robots[0].robot_ids[0] and item[4] in self.collision_ignore_link_a_ids:
continue
new_collision_links.append(item)
return new_collision_links
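The filter above relies on pybullet's contact-point tuple layout from p.getContactPoints, where index 1 is bodyUniqueIdA, index 2 is bodyUniqueIdB, index 3 is linkIndexA and index 4 is linkIndexB. A standalone version of the same predicate, with the ignore sets passed in explicitly (the function name is hypothetical):

def keep_contact(item, robot_id, ignore_body_b, ignore_link_a):
    if item[2] in ignore_body_b:
        return False   # contact with an ignored body b
    if item[3] in ignore_link_a:
        return False   # contact on an ignored robot link a
    if item[2] == robot_id and item[4] in ignore_link_a:
        return False   # self-collision on an ignored link
    return True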
def populate_info(self, info):
"""
Populate info dictionary with any useful information
"""
info['episode_length'] = self.current_step
info['collision_step'] = self.collision_step
def step(self, action):
"""
Apply robot's action.
Returns the next state, reward, done and info,
following OpenAI Gym's convention
:param action: robot actions
:return: state: next observation
:return: reward: reward of this time step
:return: done: whether the episode is terminated
:return: info: info dictionary with any useful information
"""
self.current_step += 1
if action is not None:
self.robots[0].apply_action(action)
collision_links = self.run_simulation()
self.collision_links = collision_links
self.collision_step += int(len(collision_links) > 0)
state = self.get_state(collision_links)
info = {}
reward, info = self.task.get_reward(
self, collision_links, action, info)
done, info = self.task.get_termination(
self, collision_links, action, info)
self.task.step(self)
self.populate_info(info)
if done and self.automatic_reset:
info['last_observation'] = state
state = self.reset()
return state, reward, done, info
def check_collision(self, body_id):
"""
Check whether the given body_id has any collision after one simulator step
:param body_id: pybullet body id
:return: whether the given body_id has no collision
"""
self.simulator_step()
collisions = list(p.getContactPoints(bodyA=body_id))
if logging.root.level <= logging.DEBUG:  # only iterate when DEBUG logging is enabled, for efficiency
for item in collisions:
logging.debug('bodyA:{}, bodyB:{}, linkA:{}, linkB:{}'.format(
item[1], item[2], item[3], item[4]))
return len(collisions) == 0
def set_pos_orn_with_z_offset(self, obj, pos, orn=None, offset=None):
"""
Reset position and orientation for the robot or the object
:param obj: an instance of robot or object
:param pos: position
:param orn: orientation
:param offset: z offset
"""
if orn is None:
orn = np.array([0, 0, np.random.uniform(0, np.pi * 2)])
if offset is None:
offset = self.initial_pos_z_offset
is_robot = isinstance(obj, BaseRobot)
body_id = obj.robot_ids[0] if is_robot else obj.body_id
# first set the correct orientation
obj.set_position_orientation(pos, quatToXYZW(euler2quat(*orn), 'wxyz'))
# compute stable z based on this orientation
stable_z = stable_z_on_aabb(body_id, [pos, pos])
# change the z-value of position with stable_z + additional offset
# in case the surface is not perfectly smooth (has bumps)
obj.set_position([pos[0], pos[1], stable_z + offset])
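The orientation conversion above is needed because transforms3d.euler.euler2quat returns quaternions in (w, x, y, z) order while pybullet expects (x, y, z, w); quatToXYZW does the reordering. An equivalent standalone snippet:

import numpy as np
from transforms3d.euler import euler2quat

roll, pitch, yaw = 0.0, 0.0, np.pi / 2
w, x, y, z = euler2quat(roll, pitch, yaw)   # transforms3d returns (w, x, y, z)
orn_xyzw = [x, y, z, w]                     # order expected by pybullet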
def test_valid_position(self, obj, pos, orn=None):
"""
Test if the robot or the object can be placed with no collision
:param obj: an instance of robot or object
:param pos: position
:param orn: orientation
:return: validity
"""
is_robot = isinstance(obj, BaseRobot)
self.set_pos_orn_with_z_offset(obj, pos, orn)
if is_robot:
obj.robot_specific_reset()
obj.keep_still()
body_id = obj.robot_ids[0] if is_robot else obj.body_id
is_valid = self.check_collision(body_id)
return is_valid
def land(self, obj, pos, orn):
"""
Land the robot or the object onto the floor, given a valid position and orientation
:param obj: an instance of robot or object
:param pos: position
:param orn: orientation
"""
is_robot = isinstance(obj, BaseRobot)
self.set_pos_orn_with_z_offset(obj, pos, orn)
if is_robot:
obj.robot_specific_reset()
obj.keep_still()
body_id = obj.robot_ids[0] if is_robot else obj.body_id
land_success = False
# land for maximum 1 second, should fall down ~5 meters
max_simulator_step = int(1.0 / self.action_timestep)
for _ in range(max_simulator_step):
self.simulator_step()
if len(p.getContactPoints(bodyA=body_id)) > 0:
land_success = True
break
if not land_success:
print("WARNING: Failed to land")
if is_robot:
obj.robot_specific_reset()
obj.keep_still()
def reset_variables(self):
"""
Reset bookkeeping variables for the next new episode
"""
self.current_episode += 1
self.current_step = 0
self.collision_step = 0
self.collision_links = []
def randomize_domain(self):
"""
Domain randomization
Object randomization loads new object models with the same poses
Texture randomization loads new materials and textures for the same object models
"""
if self.object_randomization_freq is not None:
if self.current_episode % self.object_randomization_freq == 0:
self.reload_model_object_randomization()
if self.texture_randomization_freq is not None:
if self.current_episode % self.texture_randomization_freq == 0:
self.simulator.scene.randomize_texture()
def reset(self):
"""
Reset episode
"""
self.randomize_domain()
# move robot away from the scene
self.robots[0].set_position([100.0, 100.0, 100.0])
self.task.reset_scene(self)
self.task.reset_agent(self)
self.simulator.sync()
state = self.get_state()
self.reset_variables()
return state
if __name__ == '__main__':
parser = argparse.ArgumentParser()
parser.add_argument(
'--config',
'-c',
help='which config file to use [default: use yaml files in examples/configs]')
parser.add_argument('--mode',
'-m',
choices=['headless', 'gui', 'iggui'],
default='headless',
help='which mode for simulation (default: headless)')
args = parser.parse_args()
env = iGibsonEnv(config_file=args.config,
mode=args.mode,
action_timestep=1.0 / 10.0,
physics_timestep=1.0 / 40.0)
step_time_list = []
for episode in range(100):
print('Episode: {}'.format(episode))
start = time.time()
env.reset()
for _ in range(100): # 10 seconds
action = env.action_space.sample()
state, reward, done, _ = env.step(action)
print('reward', reward)
if done:
break
print('Episode finished after {} timesteps, took {} seconds.'.format(
env.current_step, time.time() - start))
env.close()

File diff suppressed because it is too large

View File

@ -1,5 +1,5 @@
import gibson2
from gibson2.envs.locomotor_env import NavigationEnv
from gibson2.envs.igibson_env import iGibsonEnv
import atexit
import multiprocessing
import sys
@ -8,31 +8,27 @@ import numpy as np
import os
class ParallelNavEnv(NavigationEnv):
class ParallelNavEnv(iGibsonEnv):
"""Batch together environments and simulate them in external processes.
The environments are created in external processes by calling the provided
callables. This can be an environment class, or a function creating the
environment and potentially wrapping it. The returned environment should not
access global variables.
"""
The environments are created in external processes by calling the provided
callables. This can be an environment class, or a function creating the
environment and potentially wrapping it. The returned environment should not
access global variables.
"""
def __init__(self, env_constructors, blocking=False, flatten=False):
"""Batch together environments and simulate them in external processes.
The environments can be different but must use the same action and
observation specs.
The environments can be different but must use the same action and
observation specs.
Args:
env_constructors: List of callables that create environments.
blocking: Whether to step environments one after another.
flatten: Boolean, whether to use flatten action and time_steps during
communication to reduce overhead.
Raises:
ValueError: If the action or observation specs don't match.
"""
self._envs = [ProcessPyEnvironment(ctor, flatten=flatten) for ctor in env_constructors]
:param env_constructors: List of callables that create environments.
:param blocking: Whether to step environments one after another.
:param flatten: Boolean, whether to flatten actions and time_steps during
communication to reduce overhead.
:raise ValueError: If the action or observation specs don't match.
"""
self._envs = [ProcessPyEnvironment(
ctor, flatten=flatten) for ctor in env_constructors]
self._num_envs = len(env_constructors)
self.start()
self.action_space = self._envs[0].action_space
@ -41,10 +37,11 @@ class ParallelNavEnv(NavigationEnv):
self._flatten = flatten
def start(self):
#tf.logging.info('Starting all processes.')
"""
Start all children processes
"""
for env in self._envs:
env.start()
#tf.logging.info('All processes started.')
@property
def batched(self):
@ -57,9 +54,8 @@ class ParallelNavEnv(NavigationEnv):
def reset(self):
"""Reset all environments and combine the resulting observation.
Returns:
Time step with batch dimension.
"""
:return: a list of [next_obs, reward, done, info]
"""
time_steps = [env.reset(self._blocking) for env in self._envs]
if not self._blocking:
time_steps = [promise() for promise in time_steps]
@ -68,16 +64,11 @@ class ParallelNavEnv(NavigationEnv):
def step(self, actions):
"""Forward a batch of actions to the wrapped environments.
Args:
actions: Batched action, possibly nested, to apply to the environment.
Raises:
ValueError: Invalid actions.
Returns:
Batch of observations, rewards, and done flags.
"""
time_steps = [env.step(action, self._blocking) for env, action in zip(self._envs, actions)]
:param actions: batched action, possibly nested, to apply to the environment.
:return: a list of [next_obs, reward, done, info]
"""
time_steps = [env.step(action, self._blocking)
for env, action in zip(self._envs, actions)]
# When blocking is False we get promises that need to be called.
if not self._blocking:
time_steps = [promise() for promise in time_steps]
@ -88,50 +79,6 @@ class ParallelNavEnv(NavigationEnv):
for env in self._envs:
env.close()
def set_subgoal(self, subgoals):
time_steps = [env.set_subgoal(subgoal, self._blocking) for env, subgoal in zip(self._envs, subgoals)]
if not self._blocking:
time_steps = [promise() for promise in time_steps]
return time_steps
def set_subgoal_type(self,sg_types):
time_steps = [env.set_subgoal_type(np.array(sg_type), self._blocking) for env, sg_type in zip(self._envs, sg_types)]
if not self._blocking:
time_steps = [promise() for promise in time_steps]
return time_steps
#def _stack_time_steps(self, time_steps):
# """Given a list of TimeStep, combine to one with a batch dimension."""
# if self._flatten:
# return fast_map_structure_flatten(lambda *arrays: np.stack(arrays),
# self._time_step_spec,
# *time_steps)
# else:
# return fast_map_structure(lambda *arrays: np.stack(arrays), *time_steps)
#def _unstack_actions(self, batched_actions):
# """Returns a list of actions from potentially nested batch of actions."""
# flattened_actions = nest.flatten(batched_actions)
# if self._flatten:
# unstacked_actions = zip(*flattened_actions)
# else:
# unstacked_actions = [nest.pack_sequence_as(batched_actions, actions)
# for actions in zip(*flattened_actions)]
# return unstacked_actions
## TODO(sguada) Move to utils.
#def fast_map_structure_flatten(func, structure, *flat_structure):
# entries = zip(*flat_structure)
# return nest.pack_sequence_as(structure, [func(*x) for x in entries])
#def fast_map_structure(func, *structure):
# flat_structure = [nest.flatten(s) for s in structure]
# entries = zip(*flat_structure)
# return nest.pack_sequence_as(
# structure[0], [func(*x) for x in entries])
class ProcessPyEnvironment(object):
"""Step a single env in a separate process for lock free paralellism."""
@ -147,26 +94,17 @@ class ProcessPyEnvironment(object):
def __init__(self, env_constructor, flatten=False):
"""Step environment in a separate process for lock free paralellism.
The environment is created in an external process by calling the provided
callable. This can be an environment class, or a function creating the
environment and potentially wrapping it. The returned environment should
not access global variables.
The environment is created in an external process by calling the provided
callable. This can be an environment class, or a function creating the
environment and potentially wrapping it. The returned environment should
not access global variables.
Args:
env_constructor: Callable that creates and returns a Python environment.
flatten: Boolean, whether to assume flattened actions and time_steps
:param env_constructor: callable that creates and returns a Python environment.
:param flatten: boolean, whether to assume flattened actions and time_steps
during communication to avoid overhead.
Attributes:
observation_spec: The cached observation spec of the environment.
action_spec: The cached action spec of the environment.
time_step_spec: The cached time step spec of the environment.
"""
"""
self._env_constructor = env_constructor
self._flatten = flatten
#self._observation_spec = None
#self._action_spec = None
#self._time_step_spec = None
def start(self):
"""Start the process."""
@ -182,47 +120,25 @@ class ProcessPyEnvironment(object):
raise result
assert result is self._READY, result
#def observation_spec(self):
# if not self._observation_spec:
# self._observation_spec = self.call('observation_spec')()
# return self._observation_spec
#def action_spec(self):
# if not self._action_spec:
# self._action_spec = self.call('action_spec')()
# return self._action_spec
#def time_step_spec(self):
# if not self._time_step_spec:
# self._time_step_spec = self.call('time_step_spec')()
# return self._time_step_spec
def __getattr__(self, name):
"""Request an attribute from the environment.
Note that this involves communication with the external process, so it can
be slow.
Note that this involves communication with the external process, so it can
be slow.
Args:
name: Attribute to access.
Returns:
Value of the attribute.
"""
:param name: attribute to access.
:return: value of the attribute.
"""
self._conn.send((self._ACCESS, name))
return self._receive()
def call(self, name, *args, **kwargs):
"""Asynchronously call a method of the external environment.
Args:
name: Name of the method to call.
*args: Positional arguments to forward to the method.
**kwargs: Keyword arguments to forward to the method.
Returns:
Promise object that blocks and provides the return value when called.
"""
:param name: name of the method to call.
:param args: positional arguments to forward to the method.
:param kwargs: keyword arguments to forward to the method.
:return: promise object that blocks and provides the return value when called.
"""
payload = name, args, kwargs
self._conn.send((self._CALL, payload))
return self._receive
@ -240,13 +156,10 @@ class ProcessPyEnvironment(object):
def step(self, action, blocking=True):
"""Step the environment.
Args:
action: The action to apply to the environment.
blocking: Whether to wait for the result.
Returns:
time step when blocking, otherwise callable that returns the time step.
"""
:param action: the action to apply to the environment.
:param blocking: whether to wait for the result.
:return: (next_obs, reward, done, info) tuple when blocking, otherwise callable that returns that tuple
"""
promise = self.call('step', action)
if blocking:
return promise()
@ -256,45 +169,24 @@ class ProcessPyEnvironment(object):
def reset(self, blocking=True):
"""Reset the environment.
Args:
blocking: Whether to wait for the result.
Returns:
New observation when blocking, otherwise callable that returns the new
observation.
"""
:param blocking: whether to wait for the result.
:return: next_obs when blocking, otherwise callable that returns next_obs
"""
promise = self.call('reset')
if blocking:
return promise()
else:
return promise
def set_subgoal(self, subgoal, blocking=True):
promise = self.call('set_subgoal', subgoal)
if blocking:
return promise()
else:
return promise
def set_subgoal_type(self, subgoal_type, blocking=True):
promise = self.call('set_subgoal_type', subgoal_type)
if blocking:
return promise()
else:
return promise
def _receive(self):
"""Wait for a message from the worker process and return its payload.
Raises:
Exception: An exception was raised inside the worker process.
KeyError: The received message is of an unknown type.
:raise Exception: an exception was raised inside the worker process.
:raise KeyError: the received message is of an unknown type.
Returns:
Payload object of the message.
"""
:return: payload object of the message.
"""
message, payload = self._conn.recv()
#print(message, payload)
# Re-raise exceptions in the main process.
if message == self._EXCEPTION:
stacktrace = payload
@ -302,27 +194,24 @@ class ProcessPyEnvironment(object):
if message == self._RESULT:
return payload
self.close()
raise KeyError('Received message of unexpected type {}'.format(message))
raise KeyError(
'Received message of unexpected type {}'.format(message))
def _worker(self, conn, env_constructor, flatten=False):
"""The process waits for actions and sends back environment results.
Args:
conn: Connection for communication to the main process.
env_constructor: env_constructor for the OpenAI Gym environment.
flatten: Boolean, whether to assume flattened actions and time_steps
during communication to avoid overhead.
:param conn: connection for communication to the main process.
:param env_constructor: env_constructor for the OpenAI Gym environment.
:param flatten: boolean, whether to assume flattened actions and
time_steps during communication to avoid overhead.
Raises:
KeyError: When receiving a message of unknown type.
"""
:raise KeyError: when receiving a message of unknown type.
"""
try:
np.random.seed()
env = env_constructor()
#action_spec = env.action_spec()
conn.send(self._READY) # Ready.
while True:
#print(len(self._conn._cache))
try:
# Only block for short times to have keyboard exceptions be raised.
if not conn.poll(0.1):
@ -337,36 +226,31 @@ class ProcessPyEnvironment(object):
continue
if message == self._CALL:
name, args, kwargs = payload
if name == 'step' or name == 'reset' or name == 'set_subgoal' or name == 'set_subgoal_type':
if name == 'step' or name == 'reset':
result = getattr(env, name)(*args, **kwargs)
#result = []
#if flatten and name == 'step' or name == 'reset':
# args = [nest.pack_sequence_as(action_spec, args[0])]
# result = getattr(env, name)(*args, **kwargs)
#if flatten and name in ['step', 'reset']:
# result = nest.flatten(result)
conn.send((self._RESULT, result))
continue
if message == self._CLOSE:
assert payload is None
break
raise KeyError('Received message of unknown type {}'.format(message))
raise KeyError(
'Received message of unknown type {}'.format(message))
except Exception: # pylint: disable=broad-except
etype, evalue, tb = sys.exc_info()
stacktrace = ''.join(traceback.format_exception(etype, evalue, tb))
message = 'Error in environment process: {}'.format(stacktrace)
#tf.logging.error(message)
# tf.logging.error(message)
conn.send((self._EXCEPTION, stacktrace))
finally:
conn.close()
if __name__ == "__main__":
config_filename = os.path.join(os.path.dirname(gibson2.__file__), '../test/test.yaml')
config_filename = os.path.join(os.path.dirname(
gibson2.__file__), '../test/test.yaml')
def load_env():
return NavigationEnv(config_file=config_filename, mode='headless')
return iGibsonEnv(config_file=config_filename, mode='headless')
parallel_env = ParallelNavEnv([load_env] * 2, blocking=False)
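A hedged usage sketch continuing the example above: stepping both child environments with independently sampled actions. The shared action_space and the _num_envs attribute are the ones set in __init__ above; the loop lengths are illustrative.

for episode in range(2):
    parallel_env.reset()
    for _ in range(10):
        actions = [parallel_env.action_space.sample()
                   for _ in range(parallel_env._num_envs)]
        results = parallel_env.step(actions)   # list of (obs, reward, done, info) per env
parallel_env.close()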

View File

@ -1,23 +1,25 @@
from scipy.spatial.kdtree import KDTree
from heapq import heappush, heappop
from collections import namedtuple
from utils import INF, elapsed_time
from rrt_connect import direct_path
from smoothing import smooth_path
from gibson2.external.motion.motion_planners.utils import INF, elapsed_time
from gibson2.external.motion.motion_planners.rrt_connect import direct_path
from gibson2.external.motion.motion_planners.smoothing import smooth_path
import random
import time
import numpy as np
Node = namedtuple('Node', ['g', 'parent'])
unit_cost_fn = lambda v1, v2: 1.
zero_heuristic_fn = lambda v: 0
def unit_cost_fn(v1, v2): return 1.
def zero_heuristic_fn(v): return 0
def retrace_path(visited, vertex):
if vertex is None:
return []
return retrace_path(visited, visited[vertex].parent) + [vertex]
def dijkstra(start_v, neighbors_fn, cost_fn=unit_cost_fn):
# Update the heuristic over time
start_g = 0
@ -34,12 +36,13 @@ def dijkstra(start_v, neighbors_fn, cost_fn=unit_cost_fn):
heappush(queue, (next_g, next_v))
return visited
def wastar_search(start_v, end_v, neighbors_fn, cost_fn=unit_cost_fn,
heuristic_fn=zero_heuristic_fn, w=1, max_cost=INF, max_time=INF):
# TODO: lazy wastar to get different paths
#heuristic_fn = lambda v: cost_fn(v, end_v)
priority_fn = lambda g, h: g + w*h
goal_test = lambda v: v == end_v
def priority_fn(g, h): return g + w*h
def goal_test(v): return v == end_v
start_time = time.time()
start_g, start_h = 0, heuristic_fn(start_v)
@ -60,6 +63,7 @@ def wastar_search(start_v, end_v, neighbors_fn, cost_fn=unit_cost_fn,
heappush(queue, (priority_fn(next_g, next_h), next_g, next_v))
return None
def check_path(path, colliding_vertices, colliding_edges, samples, extend_fn, collision_fn):
# TODO: bisect order
vertices = list(path)
@ -82,7 +86,8 @@ def check_path(path, colliding_vertices, colliding_edges, samples, extend_fn, co
return False
return True
def lazy_prm(start_conf, end_conf, sample_fn, extend_fn, collision_fn, num_samples=100, max_degree=10,
def lazy_prm(start_conf, end_conf, distance_fn, sample_fn, extend_fn, collision_fn, num_samples=100, max_degree=10,
weights=None, p_norm=2, max_distance=INF, approximate_eps=0.0,
max_cost=INF, max_time=INF, max_paths=INF):
# TODO: multi-query motion planning
@ -90,9 +95,12 @@ def lazy_prm(start_conf, end_conf, sample_fn, extend_fn, collision_fn, num_sampl
# TODO: can embed pose and/or points on the robot for other distances
if weights is None:
weights = np.ones(len(start_conf))
embed_fn = lambda q: weights * q
distance_fn = lambda q1, q2: np.linalg.norm(embed_fn(q2) - embed_fn(q1), ord=p_norm)
cost_fn = lambda v1, v2: distance_fn(samples[v1], samples[v2])
def embed_fn(q): return weights * q
# def distance_fn(q1, q2): return np.linalg.norm(
# embed_fn(q2) - embed_fn(q1), ord=p_norm)
def cost_fn(v1, v2): return distance_fn(samples[v1], samples[v2])
# TODO: can compute cost between waypoints from extend_fn
samples = []
@ -100,6 +108,7 @@ def lazy_prm(start_conf, end_conf, sample_fn, extend_fn, collision_fn, num_sampl
conf = sample_fn()
if (distance_fn(start_conf, conf) + distance_fn(conf, end_conf)) < max_cost:
samples.append(conf)
start_index, end_index = 0, 1
samples[start_index] = start_conf
samples[end_index] = end_conf
@ -121,24 +130,25 @@ def lazy_prm(start_conf, end_conf, sample_fn, extend_fn, collision_fn, num_sampl
#print(time.time() - start_time, len(edges), float(len(edges))/len(samples))
colliding_vertices, colliding_edges = {}, {}
def neighbors_fn(v1):
for v2 in neighbors_from_index[v1]:
if not (colliding_vertices.get(v2, False) or
colliding_edges.get((v1, v2), False)):
colliding_edges.get((v1, v2), False)):
yield v2
visited = dijkstra(end_index, neighbors_fn, cost_fn)
heuristic_fn = lambda v: visited[v].g if v in visited else INF
def heuristic_fn(v): return visited[v].g if v in visited else INF
while elapsed_time(start_time) < max_time:
# TODO: extra cost to prioritize reusing checked edges
path = wastar_search(start_index, end_index, neighbors_fn=neighbors_fn,
cost_fn=cost_fn, heuristic_fn=heuristic_fn,
max_cost=max_cost, max_time=max_time-elapsed_time(start_time))
if path is None:
return None, edges, colliding_vertices, colliding_edges
return None, samples, edges, colliding_vertices, colliding_edges
cost = sum(cost_fn(v1, v2) for v1, v2 in zip(path, path[1:]))
print('Length: {} | Cost: {:.3f} | Vertices: {} | Edges: {} | Time: {:.3f}'.format(
len(path), cost, len(colliding_vertices), len(colliding_edges), elapsed_time(start_time)))
# print('Length: {} | Cost: {:.3f} | Vertices: {} | Edges: {} | Time: {:.3f}'.format(
# len(path), cost, len(colliding_vertices), len(colliding_edges), elapsed_time(start_time)))
if check_path(path, colliding_vertices, colliding_edges, samples, extend_fn, collision_fn):
break
@ -147,15 +157,18 @@ def lazy_prm(start_conf, end_conf, sample_fn, extend_fn, collision_fn, num_sampl
solution.extend(extend_fn(samples[q1], samples[q2]))
return solution, samples, edges, colliding_vertices, colliding_edges
def replan_loop(start_conf, end_conf, sample_fn, extend_fn, collision_fn, params_list, smooth=0, **kwargs):
def lazy_prm_replan_loop(start_conf, end_conf, distance_fn, sample_fn, extend_fn, collision_fn, params_list, smooth=0, **kwargs):
if collision_fn(start_conf) or collision_fn(end_conf):
return None
path = direct_path(start_conf, end_conf, extend_fn, collision_fn)
if path is not None:
return path
for num_samples in params_list:
path = lazy_prm(start_conf, end_conf, sample_fn, extend_fn, collision_fn,
num_samples=num_samples, **kwargs)
res = lazy_prm(start_conf, end_conf, distance_fn, sample_fn, extend_fn, collision_fn,
num_samples=num_samples, **kwargs)
# print(res)
path, samples, edges, colliding_vertices, colliding_edges = res
if path is not None:
return smooth_path(path, extend_fn, collision_fn, iterations=smooth)
return None
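A hedged usage sketch of the refactored entry point above, with toy 2D callbacks (the obstacle, sampling range and step size are made up for illustration; only the signature shown in this diff is relied on):

import numpy as np

def distance_fn(q1, q2):
    return float(np.linalg.norm(np.array(q2) - np.array(q1)))

def sample_fn():
    return tuple(np.random.uniform(-5.0, 5.0, size=2))

def extend_fn(q1, q2, step=0.1):
    q1, q2 = np.array(q1), np.array(q2)
    n = max(int(np.ceil(distance_fn(q1, q2) / step)), 1)
    return [tuple(q1 + (q2 - q1) * (i + 1) / n) for i in range(n)]

def collision_fn(q):
    return np.linalg.norm(np.array(q) - np.array([2.0, 0.0])) < 1.0  # circular obstacle

path = lazy_prm_replan_loop((-4.0, 0.0), (4.0, 0.0), distance_fn, sample_fn,
                            extend_fn, collision_fn, params_list=[200, 500], smooth=20)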

View File

@ -240,3 +240,8 @@ class DegreePRM(PRM):
else:
degree += 1
return new_vertices
def call_prm(start_conf, end_conf, distance_fn, sample_fn, extend_fn, collision_fn):
prm = DistancePRM(distance_fn, extend_fn, collision_fn)
return prm.call(start_conf, end_conf)

View File

@ -1,5 +1,5 @@
from random import random
import numpy as np
from .utils import irange, argmin, RRT_ITERATIONS
@ -45,6 +45,7 @@ def configs(nodes):
def rrt(start, goal_sample, distance, sample, extend, collision, goal_test=lambda q: False, iterations=RRT_ITERATIONS, goal_probability=.2):
#goal_test = lambda q: np.linalg.norm(q - goal_sample) < 0.5
if collision(start):
return None
if not callable(goal_sample):
@ -61,7 +62,7 @@ def rrt(start, goal_sample, distance, sample, extend, collision, goal_test=lambd
break
last = TreeNode(q, parent=last)
nodes.append(last)
if goal_test(last.config):
if np.linalg.norm(np.array(last.config) - goal_sample()) < 0.5:  # goal_test(last.config)
return configs(last.retrace())
else:
if goal:

View File

@ -1,6 +1,6 @@
from random import random
from time import time
import numpy as np
from .utils import INF, argmin
@ -77,7 +77,7 @@ def safe_path(sequence, collision):
return path
def rrt_star(start, goal, distance, sample, extend, collision, radius, max_time=INF, max_iterations=INF, goal_probability=.2, informed=True):
def rrt_star(start, goal, distance, sample, extend, collision, radius=0.5, max_time=INF, max_iterations=INF, goal_probability=.2, informed=True):
if collision(start) or collision(goal):
return None
nodes = [OptimalNode(start)]
@ -91,8 +91,9 @@ def rrt_star(start, goal, distance, sample, extend, collision, radius, max_time=
if informed and goal_n is not None and distance(start, s) + distance(s, goal) >= goal_n.cost:
continue
if it % 100 == 0:
print it, time() - t0, goal_n is not None, do_goal, (goal_n.cost if goal_n is not None else INF)
print(it, time() - t0, goal_n is not None, do_goal, (goal_n.cost if goal_n is not None else INF))
it += 1
print(it, len(nodes))
nearest = argmin(lambda n: distance(n.config, s), nodes)
path = safe_path(extend(nearest.config, s), collision)
@ -104,9 +105,17 @@ def rrt_star(start, goal, distance, sample, extend, collision, radius, max_time=
if do_goal and distance(new.config, goal) < 1e-6:
goal_n = new
goal_n.set_solution(True)
break
# TODO - k-nearest neighbor version
neighbors = filter(lambda n: distance(
n.config, new.config) < radius, nodes)
# neighbors = filter(lambda n: distance(
# n.config, new.config) < radius, nodes)
# print('num neighbors', len(list(neighbors)))
k = 10
k = np.min([k, len(nodes)])
dists = [distance(n.config, new.config) for n in nodes]
neighbors = [nodes[i] for i in np.argsort(dists)[:k]]
#print(neighbors)
nodes.append(new)
for n in neighbors:
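The radius-based neighbor filter was replaced above by a fixed-size k-nearest lookup; a standalone sketch of that selection (k=10 mirrors the hunk, and the node/config types are assumed to match rrt_star's OptimalNode):

import numpy as np

def k_nearest_neighbors(nodes, new_config, distance, k=10):
    k = min(k, len(nodes))
    dists = [distance(n.config, new_config) for n in nodes]
    return [nodes[i] for i in np.argsort(dists)[:k]]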

View File

@ -1,4 +1,5 @@
from random import randint
import numpy as np
def smooth_path(path, extend, collision, iterations=50):
@ -14,7 +15,36 @@ def smooth_path(path, extend, collision, iterations=50):
i, j = j, i
shortcut = list(extend(smoothed_path[i], smoothed_path[j]))
if (len(shortcut) < (j - i)) and all(not collision(q) for q in shortcut):
smoothed_path = smoothed_path[:i + 1] + shortcut + smoothed_path[j + 1:]
smoothed_path = smoothed_path[:i + 1] + \
shortcut + smoothed_path[j + 1:]
return smoothed_path
# TODO: sparsify path to just waypoints
def optimize_path(path, extend, collision, iterations=50):
def cost_fn(l):
s = 0
for i in range(len(l)-1):
s += np.sqrt((l[i][0] - l[i+1][0])**2 + (l[i][1] - l[i+1][1])**2)
return s
# smoothed_paths = []
smoothed_path = path
for _ in range(iterations):
if len(smoothed_path) <= 2:
return smoothed_path
i = randint(0, len(smoothed_path) - 1)
j = randint(0, len(smoothed_path) - 1)
if abs(i - j) <= 1:
continue
if j < i:
i, j = j, i
shortcut = list(extend(smoothed_path[i], smoothed_path[j]))
# print('short cut cost', cost_fn(shortcut),
# 'original cost:', cost_fn(smoothed_path[i:j]))
if (cost_fn(shortcut) < cost_fn(smoothed_path[i:j])) and all(not collision(q) for q in shortcut):
smoothed_path = smoothed_path[:i + 1] + \
shortcut + smoothed_path[j + 1:]
# smoothed_paths.append(np.copy(smoothed_path))
# return smoothed_paths
return smoothed_path
# TODO: sparsify path to just waypoints
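A hedged usage sketch of optimize_path with a straight-line extend and a toy circular obstacle (all callbacks and coordinates are made up). Unlike smooth_path, which accepts any collision-free shortcut with fewer waypoints, optimize_path only accepts shortcuts that reduce Euclidean path length:

import numpy as np

def extend(q1, q2, step=0.05):
    q1, q2 = np.array(q1), np.array(q2)
    n = max(int(np.ceil(np.linalg.norm(q2 - q1) / step)), 1)
    return [tuple(q1 + (q2 - q1) * (i + 1) / n) for i in range(n)]

def collision(q):
    return np.linalg.norm(np.array(q) - np.array([0.8, 0.2])) < 0.15  # obstacle off the direct route

path = [(0.0, 0.0), (0.0, 1.0), (1.0, 1.0), (1.0, 0.0)]   # a U-shaped detour
shorter = optimize_path(path, extend, collision, iterations=100)
# with enough iterations the detour typically collapses to the direct segment from (0, 0) to (1, 0)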

File diff suppressed because it is too large Load Diff

View File

@ -1,3 +1,5 @@
assets_path: assets # put either absolute path or relative to current directory
dataset_path: dataset
ig_dataset_path: ig_dataset
assets_path: data/assets # put either absolute path or relative to current directory
g_dataset_path: data/g_dataset
ig_dataset_path: data/ig_dataset
threedfront_dataset_path: data/threedfront_dataset
cubicasa_dataset_path: data/cubicasa_dataset

Some files were not shown because too many files have changed in this diff