finish example and documentation until Scene module, beef up Scene to include interactive objects

This commit is contained in:
Chengshu Li 2020-04-04 18:33:56 -07:00
parent 6f010e2ec0
commit a0b878367a
27 changed files with 683 additions and 150 deletions

View File

@ -1,2 +1,4 @@
#!/bin/sh
rm -rf build
rm -rf gibson2/core/render/mesh_renderer/build/
rm -rf gibson2/core/render/mesh_renderer/build/

View File

@ -15,7 +15,7 @@ Simulator
.. automethod:: import_scene
.. automethod:: import_object
.. automethod:: import_robot
.. automethod:: import_interactive_object
.. automethod:: import_articulated_object
.. automethod:: step
.. automethod:: update_position
.. automethod:: isconnected

BIN
docs/images/overview.png Normal file

BIN
docs/images/quickstart.png Normal file

BIN
docs/images/renderer.png Normal file

View File

@ -14,6 +14,7 @@ Welcome to iGibson's documentation!
installation.md
quickstart.md
overview.md
quickstart_old.md
.. toctree::
:maxdepth: 1
@ -29,7 +30,7 @@ Welcome to iGibson's documentation!
simulators.md
environments.md
algorithms.md
ros_integration.md
ros.md
examples_render.rst
examples_simulator.rst
examples_tasks.rst

View File

@ -35,5 +35,5 @@ There are several datasets of 3D reconstructed large real-world environments (ho
You will have access to ten environments with annotated instances of furniture (chairs, tables, desks, doors, sofas) that can be interacted with, and to the original 572 reconstructed 3D environments without annotated objects from [Gibson v1](http://github.com/StanfordVL/GibsonEnv/).
You will also have access to a [fully annotated environment: Rs_interactive](https://storage.googleapis.com/gibson_scenes/Rs_interactive.tar.gz) where close to 200 articulated objects are placed in their original locations of a real house and ready for interaction. ([The original environment: Rs](https://storage.googleapis.com/gibson_scenes/Rs.tar.gz) is also available). More info can be found in the [installation guide](http://svl.stanford.edu/igibson/docs/installation.html).
You will also have access to a [fully annotated environment: Rs_interactive](https://storage.googleapis.com/gibson_scenes/Rs_interactive.tar.gz) where close to 200 articulated objects are placed in their original locations of a real house and ready for interaction. ([The original environment: Rs](https://storage.googleapis.com/gibson_scenes/Rs.tar.gz) is also available). More info can be found in the [installation guide](installation.md).

View File

@ -1,7 +1,17 @@
# Overview
TODO: @eric
1. Give an overview of the different layers of abstraction for iGibson
2. Briefly explain each module in two sentences.
Next, we will give an overview of iGibson and briefly explain the different layers of abstraction in our system. In general, the modules in one layer use and instantiate those from the layer immediately below.
![overview.png](images/overview.png)
At the bottom layer, we have **Dataset** and **Assets**. **Dataset** contains 3D reconstructed real-world environments. **Assets** contains models of robots and objects. The download guide can be found [here](installation.html#downloading-the-assets). More info can be found here: [Dataset](dataset.md) and [Assets](assets.md).
In the next layer, we have **Renderer** and **PhysicsEngine**. These are the two pillars that ensure the visual and physics fidelity of iGibson. We developed our own MeshRenderer that supports customizable camera configurations and various image modalities, and renders at lightning speed. We use the open-source [PyBullet](http://www.pybullet.org/) as our underlying physics engine. It can simulate rigid body collision and joint actuation for robots and articulated objects in an accurate and efficient manner. Since we use MeshRenderer for rendering and PyBullet for physics simulation, the two need to be kept synchronized at all times; our code already handles this for you. More info can be found here: [Renderer](renderer.md) and [PhysicsEngine](physics_engine.md).
In the next layer, we have **Scene**, **Object**, **Robot**, and **Simulator**. **Scene** loads 3D scene meshes from `gibson2.dataset_path`. **Object** loads interactive objects from `gibson2.assets_path`. **Robot** loads robots from `gibson2.assets_path`. **Simulator** maintains an instance of **Renderer** and **PhysicsEngine**, provides APIs to import **Scene**, **Object** and **Robot** into both of them, and keeps the two synchronized at all times. More info can be found here: [Scene](scenes.md), [Object](objects.md), [Robot](robots.md) and [Simulator](simulators.md).
In the next layer, we have **Environment**. **Environment** follows the [OpenAI gym](https://github.com/openai/gym) convention and provides an API interface for applications such as **Algorithms** and **ROS**. **Env** usually defines a task for an agent to solve, which includes the observation space, action space, reward, termination condition, etc. More info can be found here: [Environment](environments.md).
At the top and final layer, we have **Algorithm** and **ROS**. **Algorithm** can be any algorithm (from optimal control to model-free reinforcement learning) that conforms to the OpenAI gym interface. We also provide tight integration with **ROS** that allows for evaluation and visualization of, say, the ROS Navigation Stack, in iGibson. More info can be found here: [Algorithm](algorithms.md) and [ROS](ros.md).
We highly recommend that you go through each of the modules below for more details and code examples!
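To make the layering concrete, here is a minimal sketch of how these modules compose. Note that the `Turtlebot` import path and the `mode='gui'` flag are assumptions based on the examples in this commit, not verbatim API documentation.
```python
import os
import gibson2
from gibson2.core.simulator import Simulator
from gibson2.core.physics.scene import BuildingScene
from gibson2.core.physics.robot_locomotors import Turtlebot  # assumed import path
from gibson2.utils.utils import parse_config

config = parse_config(os.path.join(gibson2.assets_path, '../../examples/configs/turtlebot_demo.yaml'))
s = Simulator(mode='gui')  # 'gui' mode is an assumption; see the Simulator docs
scene = BuildingScene('Rs', build_graph=True)
s.import_scene(scene)   # the Scene goes into both the Renderer and the PhysicsEngine
robot = Turtlebot(config)
s.import_robot(robot)   # same for the Robot
for _ in range(100):
    s.step()            # steps physics and keeps the renderer in sync
s.disconnect()
```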

View File

@ -1,3 +1,77 @@
# Physics Engine
TODO: @eric
### Overview
We use the open-source [PyBullet](http://www.pybullet.org/) as our underlying physics engine. It can simulate rigid body collision and joint actuation for robots and articulated objects in an accurate and efficient manner. Since we use MeshRenderer for rendering and PyBullet for physics simulation, the two need to be kept synchronized at all times; our code already handles this for you.
Typically, we use `p.createMultiBody` and `p.loadURDF` to load scenes, objects and robots into PyBullet, `p.resetBasePositionAndOrientation` to set the base pose of robots and objects, `p.resetJointState` to set the joint positions of robots and articulated objects, and `p.setJointMotorControl2` to control robots and articulated objects.
More info can be found in the [PyBullet documentation](https://docs.google.com/document/d/10sXEhzFRSnvFcl3XxNGhnD4N2SedqwdAvK3dsihxVUA).
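Below is a short, hedged sketch of those calls in isolation. The joint index 0 is a placeholder for illustration; real robots have multiple joints.
```python
import os
import gibson2
import pybullet as p

p.connect(p.DIRECT)
turtlebot_urdf = os.path.join(gibson2.assets_path, 'models/turtlebot/turtlebot.urdf')
robot_id = p.loadURDF(turtlebot_urdf)

# teleport the base to a new pose (position + orientation quaternion)
p.resetBasePositionAndOrientation(robot_id, [0, 0, 0.1], [0, 0, 0, 1])

# hard-set a joint position, bypassing dynamics (joint index 0 is a placeholder)
p.resetJointState(robot_id, 0, targetValue=0.0)

# command a joint velocity that takes effect over subsequent simulation steps
p.setJointMotorControl2(robot_id, 0,
                        controlMode=p.VELOCITY_CONTROL,
                        targetVelocity=1.0)
for _ in range(240):  # one simulated second at 240 Hz
    p.stepSimulation()
p.disconnect()
```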
### Examples
In this example, we import a scene, a robot and an object into PyBullet and step through a few seconds of simulation. The code can be found here: [examples/demo/physics_engine_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/physics_engine_example.py).
```python
import pybullet as p
from gibson2.utils.assets_utils import get_model_path, get_texture_file
import gibson2
import os
import sys
import time

def main():
    if len(sys.argv) > 1:
        model_path = sys.argv[1]
    else:
        model_path = os.path.join(get_model_path('Rs'), 'mesh_z_up.obj')

    p.connect(p.GUI)
    p.setGravity(0, 0, -9.8)
    p.setTimeStep(1. / 240.)

    # Load scenes
    collision_id = p.createCollisionShape(p.GEOM_MESH,
                                          fileName=model_path,
                                          meshScale=1.0,
                                          flags=p.GEOM_FORCE_CONCAVE_TRIMESH)
    visual_id = p.createVisualShape(p.GEOM_MESH,
                                    fileName=model_path,
                                    meshScale=1.0)
    texture_filename = get_texture_file(model_path)
    texture_id = p.loadTexture(texture_filename)
    mesh_id = p.createMultiBody(baseCollisionShapeIndex=collision_id,
                                baseVisualShapeIndex=visual_id)

    # Load robots
    turtlebot_urdf = os.path.join(gibson2.assets_path, 'models/turtlebot/turtlebot.urdf')
    robot_id = p.loadURDF(turtlebot_urdf, flags=p.URDF_USE_MATERIAL_COLORS_FROM_MTL)

    # Load objects
    obj_visual_filename = os.path.join(gibson2.assets_path, 'models/ycb/002_master_chef_can/textured_simple.obj')
    obj_collision_filename = os.path.join(gibson2.assets_path, 'models/ycb/002_master_chef_can/textured_simple_vhacd.obj')
    collision_id = p.createCollisionShape(p.GEOM_MESH,
                                          fileName=obj_collision_filename,
                                          meshScale=1.0)
    visual_id = p.createVisualShape(p.GEOM_MESH,
                                    fileName=obj_visual_filename,
                                    meshScale=1.0)
    object_id = p.createMultiBody(baseCollisionShapeIndex=collision_id,
                                  baseVisualShapeIndex=visual_id,
                                  basePosition=[1.0, 0.0, 1.0],
                                  baseMass=0.1)

    for _ in range(10000):
        p.stepSimulation()
    p.disconnect()

if __name__ == '__main__':
    main()
```
You will see a PyBullet interface like this. In the scene, there is a Turtlebot together with a blue food can next to the robot.
![physics_engine.png](images/physics_engine.png)

View File

@ -1,3 +1,61 @@
# Renderer
TODO: @eric
### Overview
We developed our own MeshRenderer that supports customizable camera configurations and various image modalities, and renders at lightning speed. Specifically, you can specify the image width, height and vertical field of view in the constructor of `class MeshRenderer`. Then you can call `renderer.render(modes=('rgb', 'normal', 'seg', '3d'))` to retrieve the images. Currently we support four image modalities: RGB, surface normal, semantic segmentation and 3D point cloud (the z-channel can be extracted as a depth map). Most of the code can be found in [gibson2/core/render](https://github.com/StanfordVL/iGibson/tree/master/gibson2/core/render).
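For instance, since the `'3d'` mode returns per-pixel points in the camera frame, a depth map is simply a slice of the z-channel. A minimal sketch (the sign convention on z is an assumption; verify against your camera setup):
```python
import os
import numpy as np
from gibson2.core.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
from gibson2.utils.assets_utils import get_model_path

renderer = MeshRenderer(width=512, height=512)
renderer.load_object(os.path.join(get_model_path('Rs'), 'mesh_z_up.obj'))
renderer.add_instance(0)
camera_pose = np.array([0, 0, 1.2])
renderer.set_camera(camera_pose, camera_pose + np.array([1, 0, 0]), [0, 0, 1])
renderer.set_fov(90)

rgb, normal, pc = renderer.render(modes=('rgb', 'normal', '3d'))
depth = -pc[:, :, 2]                    # z-channel of the camera-frame point cloud
depth = depth / (np.max(depth) + 1e-8)  # normalize to [0, 1] for visualization
```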
### Examples
#### Simple Example
In this example, we render an iGibson scene with a few lines of code. The code can be found in [examples/demo/mesh_renderer_simple_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/mesh_renderer_simple_example.py).
```python
import cv2
import sys
import os
import numpy as np
from gibson2.core.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
from gibson2.utils.assets_utils import get_model_path

def main():
    if len(sys.argv) > 1:
        model_path = sys.argv[1]
    else:
        model_path = os.path.join(get_model_path('Rs'), 'mesh_z_up.obj')

    renderer = MeshRenderer(width=512, height=512)
    renderer.load_object(model_path)
    renderer.add_instance(0)

    camera_pose = np.array([0, 0, 1.2])
    view_direction = np.array([1, 0, 0])
    renderer.set_camera(camera_pose, camera_pose + view_direction, [0, 0, 1])
    renderer.set_fov(90)

    frames = renderer.render(modes=('rgb', 'normal', '3d'))
    frames = cv2.cvtColor(np.concatenate(frames, axis=1), cv2.COLOR_RGB2BGR)
    cv2.imshow('image', frames)
    cv2.waitKey()

if __name__ == '__main__':
    main()
```
For the `Rs` scene, the rendering results will look like this:
![renderer.png](images/renderer.png)
#### Interactive Example
In this example, we show an interactive demo of MeshRenderer.
```bash
cd examples/demo
python mesh_renderer_example.py
```
You may translate the camera by pressing "WASD" on your keyboard and rotate the camera by dragging your mouse. Press `Q` to exit the rendering loop. The code can be found in [examples/demo/mesh_renderer_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/mesh_renderer_example.py).
#### Render to PyTorch Tensors
In this example, we show that MeshRenderer can render directly into a PyTorch tensor to maximize efficiency. PyTorch installation is required (iGibson does not otherwise depend on PyTorch). The code can be found in [examples/demo/mesh_renderer_gpu_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/mesh_renderer_gpu_example.py).
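A hedged sketch of what this looks like; the module and class names (`mesh_renderer_tensor`, `MeshRendererG2G`) are assumptions, so check the example file for the exact API:
```python
import os
import numpy as np
import torch
# assumed module/class for the GPU-to-GPU rendering path
from gibson2.core.render.mesh_renderer.mesh_renderer_tensor import MeshRendererG2G
from gibson2.utils.assets_utils import get_model_path

renderer = MeshRendererG2G(width=512, height=512)
renderer.load_object(os.path.join(get_model_path('Rs'), 'mesh_z_up.obj'))
renderer.add_instance(0)
camera_pose = np.array([0, 0, 1.2])
renderer.set_camera(camera_pose, camera_pose + np.array([1, 0, 0]), [0, 0, 1])
renderer.set_fov(90)

# frames come back as CUDA tensors, so no GPU-to-CPU copy is needed
frames = renderer.render(modes=('rgb', 'normal', '3d'))
assert all(isinstance(f, torch.Tensor) and f.is_cuda for f in frames)
```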

View File

@ -1,4 +1,4 @@
ROS Integration
ROS
============
TODO: @fei

View File

@ -1,2 +1,106 @@
# Scenes
TODO: @eric
### Overview
We provide three types of scenes.
- `EmptyScene` and `StadiumScene`: simple scenes with a flat ground plane and no obstacles; very good for debugging.
- `BuildingScene`: loads the realistic 3D scenes from `gibson2.dataset_path`.
To be more specific, `BuildingScene`
- stores the floor information (we have many multistory houses in our dataset)
- loads the scene mesh into PyBullet
- builds an internal traversability graph for each floor based on the traversability maps stored in the scene folder (e.g. `dataset/Rs/floor_trav_0.png`)
- loads the scene objects and places them in their original locations if the scene is interactive
- provides APIs for sampling a random location in the scene, and for computing the shortest path between two locations in the scene.
### Examples
#### Stadium Scenes
In this example, we import a simple stadium scene that is good for debugging. The code can be found here: [examples/demo/scene_stadium_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/scene_stadium_example.py).
```python
from gibson2.core.physics.scene import StadiumScene
import pybullet as p
import numpy as np
import time

def main():
    p.connect(p.GUI)
    p.setGravity(0, 0, -9.8)
    p.setTimeStep(1. / 240.)

    scene = StadiumScene()
    scene.load()

    for _ in range(24000):  # at least 100 seconds
        p.stepSimulation()
        time.sleep(1. / 240.)
    p.disconnect()

if __name__ == '__main__':
    main()
```
The stadium scene looks like this:
![scene_stadium](images/scene_stadium.png)
#### Static Building Scenes
In this example, we import a static scene, then randomly sample pairs of locations in the scene and compute the shortest path between them. The code can be found here: [examples/demo/scene_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/scene_example.py).
```python
from gibson2.core.physics.scene import BuildingScene
import pybullet as p
import numpy as np
import time

def main():
    p.connect(p.GUI)
    p.setGravity(0, 0, -9.8)
    p.setTimeStep(1. / 240.)

    scene = BuildingScene('Rs',
                          build_graph=True,
                          pybullet_load_texture=True)
    scene.load()

    np.random.seed(0)
    for _ in range(10):
        random_floor = scene.get_random_floor()
        p1 = scene.get_random_point_floor(random_floor)[1]
        p2 = scene.get_random_point_floor(random_floor)[1]
        shortest_path, geodesic_distance = scene.get_shortest_path(
            random_floor, p1[:2], p2[:2], entire_path=True)
        print('random point 1:', p1)
        print('random point 2:', p2)
        print('geodesic distance between p1 and p2', geodesic_distance)
        print('shortest path from p1 to p2:', shortest_path)

    for _ in range(24000):  # at least 100 seconds
        p.stepSimulation()
        time.sleep(1. / 240.)
    p.disconnect()

if __name__ == '__main__':
    main()
```
#### Interactive Building Scenes
In this example, we import an interactive scene. We support ten such scenes right now (the list can be found in `dataset/gibson_list`). All you need to do is turn on the flag `is_interactive=True` when initializing `BuildingScene`. The code can be found here: [examples/demo/scene_interactive_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/scene_interactive_example.py).
The interactive scene replaces the annotated objects with closely matching CAD models that keep the original textures, aligned to the original poses. Because removing the annotated objects inevitably creates holes in the floor, we also add additional floor planes with the original floor texture.
For example, in the scene `Placida` below, the couches, the coffee table, the dining table and the dining chairs are all interactive objects.
![scene_interactive](images/scene_interactive.png)
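The key lines from the example file added in this commit; everything else matches the static-scene example:
```python
from gibson2.core.physics.scene import BuildingScene

scene = BuildingScene('Placida',
                      is_interactive=True,
                      build_graph=True,
                      pybullet_load_texture=True)
scene.load()  # also loads the annotated objects and the textured per-floor planes
```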
#### Visualize Traversability Map
In this example, we visualize the traversability map of a scene. We use this map to build an internal traversability graph for each floor, so that we can compute the shortest path between two locations and place robots and objects at valid locations inside the scene. The code can be found here: [examples/demo/trav_map_vis_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/trav_map_vis_example.py).
The traversability map of the scene `Rs` looks like this:
![trav_map_vis](images/trav_map_vis.png)
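To inspect a traversability map without loading the whole scene, here is a minimal sketch that reads the per-floor map directly, using the `floor_trav_{n}.png` naming referenced above:
```python
import os
import cv2
import numpy as np
from PIL import Image
from gibson2.utils.assets_utils import get_model_path

# floor 0 of the Rs scene; white pixels are traversable
trav_map = np.array(Image.open(os.path.join(get_model_path('Rs'), 'floor_trav_0.png')))
cv2.imshow('floor_trav_0', trav_map)
cv2.waitKey(0)
```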

View File

@ -0,0 +1,28 @@
import cv2
import sys
import os
import numpy as np
from gibson2.core.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
from gibson2.utils.assets_utils import get_model_path

def main():
    if len(sys.argv) > 1:
        model_path = sys.argv[1]
    else:
        model_path = os.path.join(get_model_path('Rs'), 'mesh_z_up.obj')

    renderer = MeshRenderer(width=512, height=512)
    renderer.load_object(model_path)
    renderer.add_instance(0)

    camera_pose = np.array([0, 0, 1.2])
    view_direction = np.array([1, 0, 0])
    renderer.set_camera(camera_pose, camera_pose + view_direction, [0, 0, 1])
    renderer.set_fov(90)

    frames = renderer.render(modes=('rgb', 'normal', '3d'))
    frames = cv2.cvtColor(np.concatenate(frames, axis=1), cv2.COLOR_RGB2BGR)
    cv2.imshow('image', frames)
    cv2.waitKey(0)

if __name__ == '__main__':
    main()

View File

@ -0,0 +1,60 @@
import pybullet as p
from gibson2.utils.assets_utils import get_model_path, get_texture_file
import gibson2
import os
import sys
import time

def main():
    if len(sys.argv) > 1:
        model_path = sys.argv[1]
    else:
        model_path = os.path.join(get_model_path('Rs'), 'mesh_z_up.obj')

    p.connect(p.GUI)
    p.setGravity(0, 0, -9.8)
    p.setTimeStep(1. / 240.)

    # Load scenes
    collision_id = p.createCollisionShape(p.GEOM_MESH,
                                          fileName=model_path,
                                          meshScale=1.0,
                                          flags=p.GEOM_FORCE_CONCAVE_TRIMESH)
    visual_id = p.createVisualShape(p.GEOM_MESH,
                                    fileName=model_path,
                                    meshScale=1.0)
    texture_filename = get_texture_file(model_path)
    texture_id = p.loadTexture(texture_filename)
    mesh_id = p.createMultiBody(baseCollisionShapeIndex=collision_id,
                                baseVisualShapeIndex=visual_id)

    # Load robots
    turtlebot_urdf = os.path.join(gibson2.assets_path, 'models/turtlebot/turtlebot.urdf')
    robot_id = p.loadURDF(turtlebot_urdf, flags=p.URDF_USE_MATERIAL_COLORS_FROM_MTL)

    # Load objects
    obj_visual_filename = os.path.join(gibson2.assets_path, 'models/ycb/002_master_chef_can/textured_simple.obj')
    obj_collision_filename = os.path.join(gibson2.assets_path, 'models/ycb/002_master_chef_can/textured_simple_vhacd.obj')
    collision_id = p.createCollisionShape(p.GEOM_MESH,
                                          fileName=obj_collision_filename,
                                          meshScale=1.0)
    visual_id = p.createVisualShape(p.GEOM_MESH,
                                    fileName=obj_visual_filename,
                                    meshScale=1.0)
    object_id = p.createMultiBody(baseCollisionShapeIndex=collision_id,
                                  baseVisualShapeIndex=visual_id,
                                  basePosition=[1.0, 0.0, 1.0],
                                  baseMass=0.1)

    for _ in range(10000):
        p.stepSimulation()
    p.disconnect()

if __name__ == '__main__':
    main()

View File

@ -0,0 +1,35 @@
from gibson2.core.physics.scene import BuildingScene
import pybullet as p
import numpy as np
import time

def main():
    p.connect(p.GUI)
    p.setGravity(0, 0, -9.8)
    p.setTimeStep(1. / 240.)

    scene = BuildingScene('Rs',
                          build_graph=True,
                          pybullet_load_texture=True)
    scene.load()

    np.random.seed(0)
    for _ in range(10):
        random_floor = scene.get_random_floor()
        p1 = scene.get_random_point_floor(random_floor)[1]
        p2 = scene.get_random_point_floor(random_floor)[1]
        shortest_path, geodesic_distance = scene.get_shortest_path(
            random_floor, p1[:2], p2[:2], entire_path=True)
        print('random point 1:', p1)
        print('random point 2:', p2)
        print('geodesic distance between p1 and p2', geodesic_distance)
        print('shortest path from p1 to p2:', shortest_path)

    for _ in range(24000):  # at least 100 seconds
        p.stepSimulation()
        time.sleep(1. / 240.)
    p.disconnect()

if __name__ == '__main__':
    main()

View File

@ -0,0 +1,35 @@
from gibson2.core.physics.scene import BuildingScene
import pybullet as p
import numpy as np
import time

def main():
    p.connect(p.GUI)
    p.setGravity(0, 0, -9.8)
    p.setTimeStep(1. / 240.)

    scene = BuildingScene('Placida',
                          is_interactive=True,
                          build_graph=True,
                          pybullet_load_texture=True)
    scene.load()

    np.random.seed(0)
    for _ in range(10):
        random_floor = scene.get_random_floor()
        p1 = scene.get_random_point_floor(random_floor)[1]
        p2 = scene.get_random_point_floor(random_floor)[1]
        shortest_path, geodesic_distance = scene.get_shortest_path(
            random_floor, p1[:2], p2[:2], entire_path=True)
        print('random point 1:', p1)
        print('random point 2:', p2)
        print('geodesic distance between p1 and p2', geodesic_distance)
        print('shortest path from p1 to p2:', shortest_path)

    for _ in range(24000):  # at least 100 seconds
        p.stepSimulation()
        time.sleep(1. / 240.)
    p.disconnect()

if __name__ == '__main__':
    main()

View File

@ -0,0 +1,22 @@
from gibson2.core.physics.scene import StadiumScene
import pybullet as p
import numpy as np
import time

def main():
    p.connect(p.GUI)
    p.setGravity(0, 0, -9.8)
    p.setTimeStep(1. / 240.)

    scene = StadiumScene()
    scene.load()

    for _ in range(24000):  # at least 100 seconds
        p.stepSimulation()
        time.sleep(1. / 240.)
    p.disconnect()

if __name__ == '__main__':
    main()

View File

@ -7,6 +7,7 @@ os.sys.path.insert(0, parentdir)
import pybullet_data
from gibson2.utils.assets_utils import get_model_path, get_texture_file
from gibson2.utils.utils import l2_distance
from gibson2.core.physics.interactive_objects import InteractiveObj
import numpy as np
from PIL import Image
@ -25,9 +26,9 @@ class EmptyScene(Scene):
"""
def load(self):
planeName = os.path.join(pybullet_data.getDataPath(), "mjcf/ground_plane.xml")
self.ground_plane_mjcf = p.loadMJCF(planeName)
p.changeDynamics(self.ground_plane_mjcf[0], -1, lateralFriction=1)
return [item for item in self.ground_plane_mjcf]
self.ground = p.loadMJCF(planeName)[0]
p.changeDynamics(self.ground, -1, lateralFriction=1)
return [self.ground]
class StadiumScene(Scene):
"""
@ -38,15 +39,11 @@ class StadiumScene(Scene):
filename = os.path.join(pybullet_data.getDataPath(), "stadium_no_collision.sdf")
self.stadium = p.loadSDF(filename)
planeName = os.path.join(pybullet_data.getDataPath(), "mjcf/ground_plane.xml")
self.ground_plane_mjcf = p.loadMJCF(planeName)
for i in self.ground_plane_mjcf:
pos, orn = p.getBasePositionAndOrientation(i)
p.resetBasePositionAndOrientation(i, [pos[0], pos[1], pos[2] - 0.005], orn)
for i in self.ground_plane_mjcf:
p.changeVisualShape(i, -1, rgbaColor=[1, 1, 1, 0.5])
return [item for item in self.stadium] + [item for item in self.ground_plane_mjcf]
self.ground = p.loadMJCF(planeName)[0]
pos, orn = p.getBasePositionAndOrientation(self.ground)
p.resetBasePositionAndOrientation(self.ground, [pos[0], pos[1], pos[2] - 0.005], orn)
p.changeVisualShape(self.ground, -1, rgbaColor=[1, 1, 1, 0.5])
return self.stadium + [self.ground]
def get_random_floor(self):
return 0
@ -75,7 +72,7 @@ class StadiumScene(Scene):
def reset_floor(self, floor=0, additional_elevation=0.05, height=None):
return
# TODO: Merge InteractiveBuildingScene into BuildingScene
class InteractiveBuildingScene(Scene):
"""
A simple stadium scene for debugging
@ -98,8 +95,8 @@ class BuildingScene(Scene):
model_id,
trav_map_resolution=0.1,
trav_map_erosion=2,
build_graph=False,
should_load_replaced_objects=False,
build_graph=True,
is_interactive=False,
num_waypoints=10,
waypoint_resolution=0.2,
pybullet_load_texture=False,
@ -111,7 +108,7 @@ class BuildingScene(Scene):
:param trav_map_resolution: traversability map resolution
:param trav_map_erosion: erosion radius of traversability areas, should be robot footprint radius
:param build_graph: build connectivity graph
:param should_load_replaced_objects: load CAD objects for parts of the meshes
:param is_interactive: whether the scene is interactive. If so, we will replace the annotated objects with the corresponding CAD models and add floor planes with the original floor texture.
:param num_waypoints: number of way points returned
:param waypoint_resolution: resolution of adjacent way points
:param pybullet_load_texture: whether to load texture into pybullet. This is for debugging purpose only and does not affect what the robots see
@ -124,122 +121,180 @@ class BuildingScene(Scene):
self.trav_map_size = None
self.trav_map_erosion = trav_map_erosion
self.build_graph = build_graph
self.should_load_replaced_objects = should_load_replaced_objects
self.is_interactive = is_interactive
self.num_waypoints = num_waypoints
self.waypoint_interval = int(waypoint_resolution / trav_map_resolution)
self.mesh_body_id = None
self.pybullet_load_texture = pybullet_load_texture
def load(self):
def load_floor_metadata(self):
"""
Load the mesh into pybullet
Load floor metadata
"""
filename = os.path.join(get_model_path(self.model_id), "mesh_z_up_downsampled.obj")
if os.path.isfile(filename):
print('Using down-sampled mesh!')
floor_height_path = os.path.join(get_model_path(self.model_id), 'floors.txt')
if not os.path.isfile(floor_height_path):
raise Exception('floors.txt cannot be found in model: {}'.format(self.model_id))
with open(floor_height_path, 'r') as f:
self.floors = sorted(list(map(float, f.readlines())))
print('floors', self.floors)
def load_scene_mesh(self):
"""
Load scene mesh
"""
if self.is_interactive:
filename = os.path.join(get_model_path(self.model_id), "mesh_z_up_cleaned.obj")
else:
if self.should_load_replaced_objects:
filename = os.path.join(get_model_path(self.model_id), "mesh_z_up_cleaned.obj")
else:
filename = os.path.join(get_model_path(self.model_id), "mesh_z_up_downsampled.obj")
if not os.path.isfile(filename):
filename = os.path.join(get_model_path(self.model_id), "mesh_z_up.obj")
scaling = [1, 1, 1]
collision_id = p.createCollisionShape(p.GEOM_MESH,
fileName=filename,
meshScale=scaling,
flags=p.GEOM_FORCE_CONCAVE_TRIMESH)
fileName=filename,
flags=p.GEOM_FORCE_CONCAVE_TRIMESH)
if self.pybullet_load_texture:
visual_id = p.createVisualShape(p.GEOM_MESH,
fileName=filename,
meshScale=scaling)
fileName=filename)
texture_filename = get_texture_file(filename)
print('texture_file', texture_filename)
texture_id = p.loadTexture(texture_filename)
else:
visual_id = -1
texture_id = -1
boundary_id = p.createMultiBody(baseCollisionShapeIndex=collision_id,
baseVisualShapeIndex=visual_id)
self.mesh_body_id = boundary_id
p.changeDynamics(boundary_id, -1, lateralFriction=1)
self.mesh_body_id = p.createMultiBody(baseCollisionShapeIndex=collision_id,
baseVisualShapeIndex=visual_id)
p.changeDynamics(self.mesh_body_id, -1, lateralFriction=1)
if self.pybullet_load_texture:
p.changeVisualShape(boundary_id,
p.changeVisualShape(self.mesh_body_id,
-1,
textureUniqueId=texture_id)
planeName = os.path.join(pybullet_data.getDataPath(), "mjcf/ground_plane.xml")
self.ground_plane_mjcf = p.loadMJCF(planeName)
p.resetBasePositionAndOrientation(self.ground_plane_mjcf[0],
posObj=[0, 0, 0],
ornObj=[0, 0, 0, 1])
p.changeVisualShape(self.ground_plane_mjcf[0],
-1,
rgbaColor=[168 / 255.0, 164 / 255.0, 92 / 255.0, 0.35],
specularColor=[0.5, 0.5, 0.5])
floor_height_path = os.path.join(get_model_path(self.model_id), 'floors.txt')
if os.path.exists(floor_height_path):
self.floor_map = []
self.floor_graph = []
with open(floor_height_path, 'r') as f:
self.floors = sorted(list(map(float, f.readlines())))
print('floors', self.floors)
def load_floor_planes(self):
self.floor_body_ids = []
if self.is_interactive:
for f in range(len(self.floors)):
trav_map = np.array(Image.open(
os.path.join(get_model_path(self.model_id), 'floor_trav_{}.png'.format(f))
))
obstacle_map = np.array(Image.open(
os.path.join(get_model_path(self.model_id), 'floor_{}.png'.format(f))
))
if self.trav_map_original_size is None:
height, width = trav_map.shape
assert height == width, 'trav map is not a square'
self.trav_map_original_size = height
self.trav_map_size = int(self.trav_map_original_size *
self.trav_map_default_resolution /
self.trav_map_resolution)
trav_map[obstacle_map == 0] = 0
trav_map = cv2.resize(trav_map, (self.trav_map_size, self.trav_map_size))
trav_map = cv2.erode(trav_map, np.ones((self.trav_map_erosion, self.trav_map_erosion)))
trav_map[trav_map < 255] = 0
# load the floor plane with the original floor texture for each floor
plane_name = os.path.join(get_model_path(self.model_id), "plane_z_up_{}.obj".format(f))
collision_id = p.createCollisionShape(p.GEOM_MESH,
fileName=plane_name)
visual_id = p.createVisualShape(p.GEOM_MESH,
fileName=plane_name)
texture_filename = get_texture_file(plane_name)
texture_id = p.loadTexture(texture_filename)
floor_body_id = p.createMultiBody(baseCollisionShapeIndex=collision_id,
baseVisualShapeIndex=visual_id)
p.changeVisualShape(floor_body_id,
-1,
textureUniqueId=texture_id)
floor_height = self.floors[f]
p.resetBasePositionAndOrientation(floor_body_id,
posObj=[0, 0, floor_height],
ornObj=[0, 0, 0, 1])
if self.build_graph:
graph_file = os.path.join(get_model_path(self.model_id), 'floor_trav_{}.p'.format(f))
if os.path.isfile(graph_file):
print("load traversable graph")
with open(graph_file, 'rb') as pfile:
g = pickle.load(pfile)
else:
print("build traversable graph")
g = nx.Graph()
for i in range(self.trav_map_size):
for j in range(self.trav_map_size):
if trav_map[i, j] > 0:
g.add_node((i, j))
# 8-connected graph
neighbors = [(i - 1, j - 1), (i, j - 1), (i + 1, j - 1), (i - 1, j)]
for n in neighbors:
if 0 <= n[0] < self.trav_map_size and 0 <= n[1] < self.trav_map_size and \
trav_map[n[0], n[1]] > 0:
g.add_edge(n, (i, j), weight=l2_distance(n, (i, j)))
# Since both the floor plane and the scene mesh have mass 0 (static),
# PyBullet seems to have already disabled collision between them.
# Just to be safe, explicitly disable collision between them.
p.setCollisionFilterPair(self.mesh_body_id, floor_body_id, -1, -1, enableCollision=0)
# only take the largest connected component
largest_cc = max(nx.connected_components(g), key=len)
g = g.subgraph(largest_cc).copy()
with open(graph_file, 'wb') as pfile:
pickle.dump(g, pfile, protocol=pickle.HIGHEST_PROTOCOL)
self.floor_body_ids.append(floor_body_id)
else:
# load the default floor plane (only once) and later reset it to different floor heights
plane_name = os.path.join(pybullet_data.getDataPath(), "mjcf/ground_plane.xml")
floor_body_id = p.loadMJCF(plane_name)[0]
p.resetBasePositionAndOrientation(floor_body_id,
posObj=[0, 0, 0],
ornObj=[0, 0, 0, 1])
p.setCollisionFilterPair(self.mesh_body_id, floor_body_id, -1, -1, enableCollision=0)
self.floor_body_ids.append(floor_body_id)
self.floor_graph.append(g)
# update trav_map accordingly
trav_map[:, :] = 0
for node in g.nodes:
trav_map[node[0], node[1]] = 255
def load_trav_map(self):
self.floor_map = []
self.floor_graph = []
for f in range(len(self.floors)):
trav_map = np.array(Image.open(
os.path.join(get_model_path(self.model_id), 'floor_trav_{}.png'.format(f))
))
obstacle_map = np.array(Image.open(
os.path.join(get_model_path(self.model_id), 'floor_{}.png'.format(f))
))
if self.trav_map_original_size is None:
height, width = trav_map.shape
assert height == width, 'trav map is not a square'
self.trav_map_original_size = height
self.trav_map_size = int(self.trav_map_original_size *
self.trav_map_default_resolution /
self.trav_map_resolution)
trav_map[obstacle_map == 0] = 0
trav_map = cv2.resize(trav_map, (self.trav_map_size, self.trav_map_size))
trav_map = cv2.erode(trav_map, np.ones((self.trav_map_erosion, self.trav_map_erosion)))
trav_map[trav_map < 255] = 0
self.floor_map.append(trav_map)
if self.build_graph:
graph_file = os.path.join(get_model_path(self.model_id), 'floor_trav_{}.p'.format(f))
if os.path.isfile(graph_file):
print("load traversable graph")
with open(graph_file, 'rb') as pfile:
g = pickle.load(pfile)
else:
print("build traversable graph")
g = nx.Graph()
for i in range(self.trav_map_size):
for j in range(self.trav_map_size):
if trav_map[i, j] > 0:
g.add_node((i, j))
# 8-connected graph
neighbors = [(i - 1, j - 1), (i, j - 1), (i + 1, j - 1), (i - 1, j)]
for n in neighbors:
if 0 <= n[0] < self.trav_map_size and 0 <= n[1] < self.trav_map_size and trav_map[n[0], n[1]] > 0:
g.add_edge(n, (i, j), weight=l2_distance(n, (i, j)))
return [boundary_id] + [item for item in self.ground_plane_mjcf]
# only take the largest connected component
largest_cc = max(nx.connected_components(g), key=len)
g = g.subgraph(largest_cc).copy()
with open(graph_file, 'wb') as pfile:
pickle.dump(g, pfile, protocol=pickle.HIGHEST_PROTOCOL)
self.floor_graph.append(g)
# update trav_map accordingly
trav_map[:, :] = 0
for node in g.nodes:
trav_map[node[0], node[1]] = 255
self.floor_map.append(trav_map)
def load_scene_objects(self):
if not self.is_interactive:
return
self.scene_objects = []
self.scene_objects_pos = []
scene_path = get_model_path(self.model_id)
urdf_files = [item for item in os.listdir(scene_path) if item[-4:] == 'urdf']
position_files = [item[:-4].replace('alignment_centered', 'pos') + 'txt' for item in urdf_files]
for urdf_file, position_file in zip(urdf_files, position_files):
with open(os.path.join(scene_path, position_file)) as f:
pos = np.array([float(item) for item in f.readlines()[0].strip().split()])
# filter out duplicate annotations for the same object
if len(self.scene_objects_pos) == 0 or \
np.min(np.linalg.norm(np.array(self.scene_objects_pos) - pos, axis=1)) > 0.5:
obj = InteractiveObj(os.path.join(scene_path, urdf_file))
obj.load()
self.scene_objects.append(obj)
self.scene_objects_pos.append(pos)
def load(self):
"""
Initialize scene
"""
self.load_floor_metadata()
self.load_scene_mesh()
self.load_floor_planes()
self.load_trav_map()
self.load_scene_objects()
self.reset_scene_objects()
return [self.mesh_body_id] + self.floor_body_ids
def get_random_floor(self):
return np.random.randint(0, high=len(self.floors))
@ -303,12 +358,25 @@ class BuildingScene(Scene):
return path_world, geodesic_distance
def reset_floor(self, floor=0, additional_elevation=0.05, height=None):
def reset_floor(self, floor=0, additional_elevation=0.02, height=None):
if self.is_interactive:
# loads the floor plane with the original floor texture for each floor, no need to reset_floor
return
height = height if height is not None else self.floors[floor] + additional_elevation
p.resetBasePositionAndOrientation(self.ground_plane_mjcf[0],
p.resetBasePositionAndOrientation(self.floor_body_ids[0],
posObj=[0, 0, height],
ornObj=[0, 0, 0, 1])
def reset_scene_objects(self):
if not self.is_interactive:
# a non-interactive scene has no objects to reset
return
for obj, pos in zip(self.scene_objects, self.scene_objects_pos):
obj.set_position_orientation(pos,
[0, 0, 0, 1])
def get_floor_height(self, floor):
return self.floors[floor]

View File

@ -496,7 +496,7 @@ class MeshRenderer(object):
else:
self.mesh_materials.append(material_id + material_count)
#print('mesh_materials', self.mesh_materials)
# print('mesh_materials', self.mesh_materials)
VAO_ids.append(self.get_num_objects() - 1)
#release(scene)

View File

@ -103,6 +103,7 @@ class Simulator:
self.robots = []
self.scene = None
self.objects = []
self.next_class_id = 0
def load_without_pybullet_vis(load_func):
def wrapped_load_func(*args, **kwargs):
@ -113,7 +114,7 @@ class Simulator:
return wrapped_load_func
@load_without_pybullet_vis
def import_scene(self, scene, texture_scale=1.0, load_texture=True, class_id=0):
def import_scene(self, scene, texture_scale=1.0, load_texture=True, class_id=None):
"""
Import a scene. A scene could be a synthetic one or a realistic Gibson Environment.
@ -123,9 +124,14 @@ class Simulator:
:param class_id: Class id for rendering semantic segmentation
"""
if class_id is None:
class_id = self.next_class_id
self.next_class_id += 1
new_objects = scene.load()
for item in new_objects:
self.objects.append(item)
for new_object in new_objects:
for shape in p.getVisualShapeData(new_object):
id, link_id, type, dimensions, filename, rel_pos, rel_orn, color = shape[:8]
@ -160,25 +166,55 @@ class Simulator:
# pybullet_uuid=new_object,
# class_id=class_id)
if scene.is_interactive:
for obj in scene.scene_objects:
class_id = self.next_class_id
self.next_class_id += 1
# obj.load() should have already been called in scene.load()
new_object = obj.body_id
is_soft = False
if obj.__class__.__name__ == 'SoftObject':
is_soft = True
self.objects.append(new_object)
for shape in p.getVisualShapeData(new_object):
id, link_id, type, dimensions, filename, rel_pos, rel_orn, color = shape[:8]
if type == p.GEOM_MESH:
filename = filename.decode('utf-8')
# print(filename, self.visual_objects)
self.renderer.load_object(filename)
self.visual_objects[filename] = len(self.renderer.visual_objects) - 1
self.renderer.add_instance(len(self.renderer.visual_objects) - 1,
pybullet_uuid=new_object,
class_id=class_id,
dynamic=True,
softbody=is_soft)
self.scene = scene
return new_objects
@load_without_pybullet_vis
def import_object(self, object, class_id=0):
def import_object(self, object, class_id=None):
"""
:param object: Object to load
:param class_id: Class id for rendering semantic segmentation
"""
if class_id is None:
class_id = self.next_class_id
self.next_class_id += 1
new_object = object.load()
isSoft = False
is_soft = False
if object.__class__.__name__ == 'SoftObject':
isSoft = True
is_soft = True
self.objects.append(new_object)
for shape in p.getVisualShapeData(new_object):
id, link_id, type, dimensions, filename, rel_pos, rel_orn, color = shape[:8]
if type == p.GEOM_MESH:
filename = filename.decode('utf-8')
print(filename, self.visual_objects)
# print(filename, self.visual_objects)
if not filename in self.visual_objects.keys():
self.renderer.load_object(filename)
self.visual_objects[filename] = len(self.renderer.visual_objects) - 1
@ -186,13 +222,13 @@ class Simulator:
pybullet_uuid=new_object,
class_id=class_id,
dynamic=True,
softbody=isSoft)
softbody=is_soft)
else:
self.renderer.add_instance(self.visual_objects[filename],
pybullet_uuid=new_object,
class_id=class_id,
dynamic=True,
softbody=isSoft)
softbody=is_soft)
elif type == p.GEOM_SPHERE:
filename = os.path.join(gibson2.assets_path, 'models/mjcf_primitives/sphere8.obj')
self.renderer.load_object(
@ -230,7 +266,7 @@ class Simulator:
return new_object
@load_without_pybullet_vis
def import_robot(self, robot, class_id=0):
def import_robot(self, robot, class_id=None):
"""
Import a robot into Simulator
@ -239,6 +275,10 @@ class Simulator:
:return: id for robot in pybullet
"""
if class_id is None:
class_id = self.next_class_id
self.next_class_id += 1
ids = robot.load()
visual_objects = []
link_ids = []
@ -251,7 +291,7 @@ class Simulator:
if type == p.GEOM_MESH:
filename = filename.decode('utf-8')
if not filename in self.visual_objects.keys():
print(filename, rel_pos, rel_orn, color, dimensions)
# print(filename, rel_pos, rel_orn, color, dimensions)
self.renderer.load_object(filename,
transform_orn=rel_orn,
transform_pos=rel_pos,
@ -265,7 +305,7 @@ class Simulator:
link_ids.append(link_id)
elif type == p.GEOM_SPHERE:
filename = os.path.join(gibson2.assets_path, 'models/mjcf_primitives/sphere8.obj')
print(filename, dimensions, rel_pos, rel_orn, color)
# print(filename, dimensions, rel_pos, rel_orn, color)
self.renderer.load_object(
filename,
transform_orn=rel_orn,
@ -276,7 +316,7 @@ class Simulator:
link_ids.append(link_id)
elif type == p.GEOM_CAPSULE or type == p.GEOM_CYLINDER:
filename = os.path.join(gibson2.assets_path, 'models/mjcf_primitives/cube.obj')
print(filename, dimensions, rel_pos, rel_orn, color)
# print(filename, dimensions, rel_pos, rel_orn, color)
self.renderer.load_object(
filename,
transform_orn=rel_orn,
@ -287,7 +327,7 @@ class Simulator:
link_ids.append(link_id)
elif type == p.GEOM_BOX:
filename = os.path.join(gibson2.assets_path, 'models/mjcf_primitives/cube.obj')
print(filename, dimensions, rel_pos, rel_orn, color)
# print(filename, dimensions, rel_pos, rel_orn, color)
self.renderer.load_object(filename,
transform_orn=rel_orn,
transform_pos=rel_pos,
@ -314,7 +354,7 @@ class Simulator:
return ids
@load_without_pybullet_vis
def import_interactive_object(self, obj, class_id=0):
def import_articulated_object(self, obj, class_id=None):
"""
Import articulated objects into simulator
@ -322,6 +362,11 @@ class Simulator:
:param class_id: Class id for rendering semantic segmentation
:return: pybullet id
"""
if class_id is None:
class_id = self.next_class_id
self.next_class_id += 1
ids = obj.load()
visual_objects = []
link_ids = []
@ -333,7 +378,7 @@ class Simulator:
if type == p.GEOM_MESH:
filename = filename.decode('utf-8')
if not filename in self.visual_objects.keys():
print(filename, rel_pos, rel_orn, color, dimensions)
# print(filename, rel_pos, rel_orn, color, dimensions)
self.renderer.load_object(filename,
transform_orn=rel_orn,
transform_pos=rel_pos,
@ -347,7 +392,7 @@ class Simulator:
link_ids.append(link_id)
elif type == p.GEOM_SPHERE:
filename = os.path.join(gibson2.assets_path, 'models/mjcf_primitives/sphere8.obj')
print(filename, dimensions, rel_pos, rel_orn, color)
# print(filename, dimensions, rel_pos, rel_orn, color)
self.renderer.load_object(
filename,
transform_orn=rel_orn,
@ -355,11 +400,10 @@ class Simulator:
input_kd=color[:3],
scale=[dimensions[0] / 0.5, dimensions[0] / 0.5, dimensions[0] / 0.5])
visual_objects.append(len(self.renderer.visual_objects) - 1)
# self.visual_objects[filename] = len(self.renderer.visual_objects) - 1
link_ids.append(link_id)
elif type == p.GEOM_CAPSULE or type == p.GEOM_CYLINDER:
filename = os.path.join(gibson2.assets_path, 'models/mjcf_primitives/cube.obj')
print(filename, dimensions, rel_pos, rel_orn, color)
# print(filename, dimensions, rel_pos, rel_orn, color)
self.renderer.load_object(
filename,
transform_orn=rel_orn,
@ -370,7 +414,7 @@ class Simulator:
link_ids.append(link_id)
elif type == p.GEOM_BOX:
filename = os.path.join(gibson2.assets_path, 'models/mjcf_primitives/cube.obj')
print(filename, dimensions, rel_pos, rel_orn, color)
# print(filename, dimensions, rel_pos, rel_orn, color)
self.renderer.load_object(filename,
transform_orn=rel_orn,
transform_pos=rel_pos,

View File

@ -80,13 +80,8 @@ class BaseEnv(gym.Env):
should_load_replaced_objects=self.config.get('should_load_replaced_objects', False),
pybullet_load_texture=self.config.get('pybullet_load_texture', False),
)
self.simulator.import_scene(scene, load_texture=self.config.get('load_texture', True))
# scene: class_id = 0
# robot: class_id = 1
# objects: class_id > 1
self.simulator.import_scene(scene,
load_texture=self.config.get('load_texture', True),
class_id=0)
if self.config['robot'] == 'Turtlebot':
robot = Turtlebot(self.config)
elif self.config['robot'] == 'Husky':
@ -111,7 +106,7 @@ class BaseEnv(gym.Env):
self.scene = scene
self.robots = [robot]
for robot in self.robots:
self.simulator.import_robot(robot, class_id=1)
self.simulator.import_robot(robot)
def clean(self):
"""

View File

@ -13,14 +13,6 @@ from gibson2.utils.assets_utils import download_data
class DemoInteractive(object):
def __init__(self):
download_data()
self.download_interactive_assets()
def download_interactive_assets(self):
if not os.path.exists(gibson2.dataset_path):
os.makedirs(gibson2.dataset_path)
if not os.path.exists(os.path.join(gibson2.dataset_path, 'Rs_interactive')):
os.system('wget https://storage.googleapis.com/gibson_scenes/Rs_interactive.tar.gz -O /tmp/Rs_interactive.tar.gz')
os.system('tar -zxf /tmp/Rs_interactive.tar.gz --directory {}'.format(gibson2.dataset_path))
def run_demo(self):
config = parse_config(os.path.join(gibson2.assets_path, '../../examples/configs/turtlebot_demo.yaml'))

View File

@ -36,5 +36,10 @@ def download_data():
os.system('wget https://storage.googleapis.com/gibson_scenes/Rs.tar.gz -O /tmp/Rs.tar.gz')
os.system('tar -zxf /tmp/Rs.tar.gz --directory {}'.format(gibson2.dataset_path))
if not os.path.exists(os.path.join(gibson2.dataset_path, 'Rs_interactive')):
os.system('wget https://storage.googleapis.com/gibson_scenes/Rs_interactive.tar.gz -O /tmp/Rs_interactive.tar.gz')
os.system('tar -zxf /tmp/Rs_interactive.tar.gz --directory {}'.format(gibson2.dataset_path))
if __name__ == "__main__":
download_data()