fixing refactor

Roberto Martin-Martin 2020-09-12 18:58:10 -07:00
parent 8b045ad156
commit a4bfc716a1
24 changed files with 37 additions and 29 deletions


@@ -1 +1 @@
-gibson2/core/render/pybind11/
+gibson2/render/pybind11/


@@ -1,5 +1,5 @@
 include LICENSE
-recursive-include gibson2/core *
+recursive-include gibson2 *
 include gibson2/global_config.yaml


@@ -1,4 +1,4 @@
 #!/bin/sh
 rm -rf build
-rm -rf gibson2/core/render/mesh_renderer/build/
+rm -rf gibson2/render/mesh_renderer/build/


@@ -142,7 +142,7 @@ It's also fairly straightforward to customize your own environment.
 - Want to change reward function? Modify `get_reward`.
 - Want to change termination condition? Modify `get_termination`.
 - Want to modify episode reset logic? Modify `reset` and `reset_agent`.
-- Want to add additional objects or robots into the scene? Check out `load_interactive_objects` and `load_dynamic_objects` in `NavigateRandomEnvSim2Real`. If these are brand-new objects and robots that are not in iGibson yet, you might also need to change [gibson2/core/physics/robot_locomotors.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/core/physics/robot_locomotors.py) and [gibson2/core/physics/interactive_objects.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/core/physics/interactive_objects.py).
+- Want to add additional objects or robots into the scene? Check out `load_interactive_objects` and `load_dynamic_objects` in `NavigateRandomEnvSim2Real`. If these are brand-new objects and robots that are not in iGibson yet, you might also need to change [gibson2/physics/robot_locomotors.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/physics/robot_locomotors.py) and [gibson2/physics/interactive_objects.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/physics/interactive_objects.py).
 ### Examples
@@ -191,7 +191,7 @@ In this example, we show how to instantiate `NavigateRandomEnv` with an interact
 In this example, we show a customized environment `NavigateRandomEnvSim2Real` that builds on top of `NavigateRandomEnv`. We created this environment for [our CVPR2020 Sim2Real Challenge with iGibson](http://svl.stanford.edu/igibson/challenge.html). You should consider participating. :)
 Here are the customizations that we did:
-- We added a new robot `Locobot` to [gibson2/core/physics/robot_locomotors.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/core/physics/robot_locomotors.py)
+- We added a new robot `Locobot` to [gibson2/physics/robot_locomotors.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/physics/robot_locomotors.py)
 - We added additional objects into the scene: `load_interactive_objects` in `NavigateRandomEnvSim2Real`
 - We added dynamic objects (another Turtlebot) into the scene: `reset_dynamic_objects` and `step_dynamic_objects` in `NavigateRandomEnvSim2Real`
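The override pattern these docs describe, subclassing the base navigation environment and replacing only the hooks you need (`get_reward`, `get_termination`, `reset`, object loaders), can be sketched with toy stand-ins. Everything below (class names, the reward arithmetic, the object list) is illustrative only, not iGibson code:

```python
# Toy sketch of the customization pattern from the docs above. The hook
# names mirror the documentation; the bodies are made-up stand-ins.

class ToyNavigateEnv:
    """Minimal stand-in for a navigation environment base class."""

    def __init__(self, goal_distance=5.0):
        self.goal_distance = goal_distance
        self.position = 0.0

    def get_reward(self):
        # Base reward: negative distance to the goal.
        return -(self.goal_distance - self.position)

    def get_termination(self):
        return self.position >= self.goal_distance

    def reset(self):
        self.position = 0.0


class ToySim2RealEnv(ToyNavigateEnv):
    """Customized environment: override only the hooks you need."""

    def get_reward(self):
        # Customized reward: add a small time penalty on top of the base.
        return super().get_reward() - 0.01

    def load_interactive_objects(self):
        # Hook for adding extra objects to the scene (stub here).
        return ['chair', 'table']


env = ToySim2RealEnv()
env.reset()
print(env.get_reward())       # -5.01
print(env.get_termination())  # False
```

The point is that the base class owns the episode loop, and a customized environment changes behavior purely by overriding these hooks.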


@@ -24,7 +24,7 @@ If the original installation doesn't work, try the following:
 1. Is the nvidia driver properly installed? You can check by running `nvidia-smi`.
 2. Are OpenGL libraries visible? You can check with
 `export LD_LIBRARY_PATH=/usr/lib/nvidia-<vvv>:$LD_LIBRARY_PATH`
-3. There are two ways of setting up the OpenGL library. If the current installation doesn't work, you can try to install with USE_GLAD set to FALSE [here](https://github.com/StanfordVL/iGibson/blob/master/gibson2/core/render/CMakeLists.txt)
+3. There are two ways of setting up the OpenGL library. If the current installation doesn't work, you can try to install with USE_GLAD set to FALSE [here](https://github.com/StanfordVL/iGibson/blob/master/gibson2/render/CMakeLists.txt)
 4. If you want to render in headless mode, make sure the `$DISPLAY` environment variable is unset; otherwise you might get the error `Failed to EGL with glad`, because EGL is sensitive to the `$DISPLAY` environment variable.
 Also, the EGL setup part is borrowed from Erwin Coumans' [egl_example](https://github.com/erwincoumans/egl_example). It would be informative to see if that repository can run on your machine.


@@ -13,7 +13,7 @@ We provide a wide variety of **Objects** that can be imported into the **Simulat
 Typically, they take in the name or the path of an object (in `gibson2.assets_path`) and provide a `load` function that can be invoked externally (usually by `import_object` and `import_articulated_object` of `Simulator`). The `load` function imports the object into PyBullet. Some **Objects** (e.g. `InteractiveObj`) also provide APIs to get and set the object pose.
-Most of the code can be found here: [gibson2/core/physics/interactive_objects.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/core/physics/interactive_objects.py).
+Most of the code can be found here: [gibson2/physics/interactive_objects.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/physics/interactive_objects.py).
 ### Examples
 In this example, we import three objects into PyBullet, two of which are articulated objects. The code can be found here: [examples/demo/object_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/object_example.py).


@@ -61,7 +61,7 @@ Coding Your RL Agent
 ====
 You can code your RL agent following our convention. The interface with our environment is very simple (see some examples at the end of this section).
-First, you can create an environment by creating an instance of classes in the `gibson/core/envs` folder.
+First, you can create an environment by creating an instance of classes in the `gibson/envs` folder.
 ```python
@@ -105,7 +105,7 @@ Each environment is configured with a `yaml` file. Examples of `yaml` files can
 |fast_lq_render| true/false| If there is `fast_lq_render` in the yaml file, Gibson will use a smaller filler network; this will render faster but generate slightly lower quality camera output. This option is useful for training RL agents fast. |
 #### Making Your Customized Environment
-Gibson provides a set of methods for you to define your own environments. You can follow the existing environments inside `gibson/core/envs`.
+Gibson provides a set of methods for you to define your own environments. You can follow the existing environments inside `gibson/envs`.
 | Method name | Usage |
 |:------------------:|:---------------------------|

@@ -2,7 +2,7 @@
 ### Overview
-We developed our own MeshRenderer that supports customizable camera configuration and various image modalities, and renders at lightning speed. Specifically, you can specify image width, height and vertical field of view in the constructor of `class MeshRenderer`. Then you can call `renderer.render(modes=('rgb', 'normal', 'seg', '3d'))` to retrieve the images. Currently we support four different image modalities: RGB, surface normal, semantic segmentation and 3D point cloud (z-channel can be extracted as depth map). Most of the code can be found in [gibson2/core/render](https://github.com/StanfordVL/iGibson/tree/master/gibson2/core/render).
+We developed our own MeshRenderer that supports customizable camera configuration and various image modalities, and renders at lightning speed. Specifically, you can specify image width, height and vertical field of view in the constructor of `class MeshRenderer`. Then you can call `renderer.render(modes=('rgb', 'normal', 'seg', '3d'))` to retrieve the images. Currently we support four different image modalities: RGB, surface normal, semantic segmentation and 3D point cloud (z-channel can be extracted as depth map). Most of the code can be found in [gibson2/render](https://github.com/StanfordVL/iGibson/tree/master/gibson2/render).
 ### Examples


@@ -38,7 +38,7 @@ def apply_robot_action(action):
 ```
 Note that `robot_action` is a normalized joint velocity, i.e. `robot_action[n] == 1.0` means executing the maximum joint velocity for the nth joint. The limits of joint position, velocity and torque are extracted from the URDF file of the robot.
-Most of the code can be found here: [gibson2/core/physics/robot_locomotors.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/core/physics/robot_locomotors.py).
+Most of the code can be found here: [gibson2/physics/robot_locomotors.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/physics/robot_locomotors.py).
 ### Examples
 In this example, we import four different robots into PyBullet. We keep them still for around 10 seconds and then move them with small random actions for another 10 seconds. The code can be found here: [examples/demo/robot_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/robot_example.py).
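The normalization convention noted above (a value of `1.0` maps to the joint's maximum velocity) amounts to an element-wise clip-and-scale. A minimal sketch, with made-up velocity limits standing in for values parsed from the robot's URDF:

```python
# Sketch of the normalized-action convention described above:
# robot_action[n] == 1.0 means "maximum joint velocity for joint n".
# The limits below are invented numbers; iGibson reads them from the URDF.

def denormalize_action(robot_action, max_joint_velocities):
    """Clip each action to [-1, 1], then scale by that joint's velocity limit."""
    assert len(robot_action) == len(max_joint_velocities)
    return [max(-1.0, min(1.0, a)) * v_max
            for a, v_max in zip(robot_action, max_joint_velocities)]

max_vel = [1.0, 2.0, 0.5]                   # rad/s, one per joint (illustrative)
action = [1.0, -0.25, 3.0]                  # 3.0 is out of range and gets clipped
print(denormalize_action(action, max_vel))  # [1.0, -0.5, 0.5]
```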


@@ -14,7 +14,7 @@ To be more specific, the `load` function of `BuildingScene`
 - loads the scene objects and places them in their original locations if the scene is interactive
 - provides APIs for sampling a random location in the scene, and for computing the shortest path between two locations in the scene.
-Most of the code can be found here: [gibson2/core/physics/scene.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/core/physics/scene.py).
+Most of the code can be found here: [gibson2/physics/scene.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/physics/scene.py).
 ### Examples


@@ -10,9 +10,9 @@ Some key functions are the following:
 - `import_{object, articulated_object, robot}`: import the object, articulated object and robot into the simulator in a similar manner
 - `sync`: synchronize the poses of the dynamic objects (including the robots) between PyBullet and MeshRenderer. Specifically, it calls `update_position` for each object, in which it retrieves the object's pose in PyBullet and then updates its pose accordingly in MeshRenderer.
-If `Simulator` uses `gui` mode, by default it will also maintain a `Viewer`, which is essentially a virtual camera in the scene that can render images. More info about the `Viewer` can be found here: [gibson2/core/render/viewer.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/core/render/viewer.py).
+If `Simulator` uses `gui` mode, by default it will also maintain a `Viewer`, which is essentially a virtual camera in the scene that can render images. More info about the `Viewer` can be found here: [gibson2/render/viewer.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/render/viewer.py).
-Most of the code can be found here: [gibson2/core/simulator.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/core/simulator.py).
+Most of the code can be found here: [gibson2/simulator.py](https://github.com/StanfordVL/iGibson/blob/master/gibson2/simulator.py).
 ### Examples
 In this example, we import a `BuildingScene`, a `Turtlebot`, and ten `YCBObject` into the simulator. The code can be found here: [examples/demo/simulator_example.py](https://github.com/StanfordVL/iGibson/blob/master/examples/demo/simulator_example.py)
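The `sync` behavior described above is a one-way pose copy from the physics engine to the renderer each frame. A toy sketch, with plain dictionaries standing in for PyBullet and MeshRenderer state (none of this is the iGibson API):

```python
# Toy sketch of the sync pattern: after each physics step, copy every
# dynamic object's pose from the physics side to the renderer side.
# Dicts stand in for PyBullet and MeshRenderer; nothing here is iGibson code.

physics_poses = {'robot': (0.0, 0.0, 0.1), 'mug': (1.0, 0.5, 0.8)}
renderer_poses = {'robot': None, 'mug': None}

def update_position(obj_name):
    """Pull one object's pose from physics and push it to the renderer."""
    renderer_poses[obj_name] = physics_poses[obj_name]

def sync():
    """Synchronize every dynamic object, mirroring the docs' description."""
    for name in physics_poses:
        update_position(name)

physics_poses['robot'] = (0.2, 0.0, 0.1)  # the physics engine moved the robot
sync()
print(renderer_poses['robot'])  # (0.2, 0.0, 0.1)
```

Keeping the copy one-way means the renderer never mutates simulation state; it only mirrors it.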


@@ -6,7 +6,7 @@ import pybullet as p
 import gibson2.render.mesh_renderer as mesh_renderer
 from gibson2.render.mesh_renderer.get_available_devices import get_available_devices
 from gibson2.render.mesh_renderer import EGLRendererContext
-from gibson2.render.mesh_renderer import perspective, lookat, xyz2mat, quat2rotmat, mat2xyz, \
+from gibson2.render.mesh_renderer.glutils.meshutil import perspective, lookat, xyz2mat, quat2rotmat, mat2xyz, \
     safemat2quat, xyzw2wxyz
 import numpy as np
 import os


@@ -1,5 +1,5 @@
 import numpy as np
-from gibson2.render.mesh_renderer import MeshRenderer
+from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
 from gibson2.render.mesh_renderer.get_available_devices import get_cuda_device
 import logging


@@ -1,7 +1,6 @@
-from gibson2.render.mesh_renderer import MeshRenderer, InstanceGroup, Instance, quat2rotmat,\
-    xyz2mat, xyzw2wxyz
-from gibson2.render.mesh_renderer import MeshRendererG2G
-from gibson2.render import Viewer
+from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer, InstanceGroup, Instance
+from gibson2.render.mesh_renderer.mesh_renderer_tensor import MeshRendererG2G
+from gibson2.render.viewer import Viewer
 import pybullet as p
 import gibson2
 import os
@@ -9,7 +8,6 @@ import numpy as np
 import platform
 import logging
 class Simulator:
     def __init__(self,
                  gravity=9.8,


@@ -7,7 +7,7 @@ import os
 import time
 from multiprocessing import Pool
-dll = np.ctypeslib.load_library('../core/render/render_cuda_f', '')
+dll = np.ctypeslib.load_library('../render/render_cuda_f', '')
 # In[6]:


@@ -40,7 +40,7 @@ Configuration Argument
 | semantic_color | 2 | Semantic label color coding scheme |
 ## Semantic Color Coding
-There are two ways for rendering rgb semantic maps in semantic mode, defined inside `gibson/core/channels/common/semantic_color.hpp`. Each is defined below:
+There are two ways for rendering rgb semantic maps in semantic mode, defined inside `gibson/channels/common/semantic_color.hpp`. Each is defined below:
 ### Instance-by-Instance Color Coding


@@ -136,12 +136,12 @@ setup(
         'pytest',
         'future',
     ],
-    ext_modules=[CMakeExtension('MeshRendererContext', sourcedir='gibson2/core/render')],
+    ext_modules=[CMakeExtension('MeshRendererContext', sourcedir='gibson2/render')],
     cmdclass=dict(build_ext=CMakeBuild),
     tests_require=[],
     package_data={'': [
         'gibson2/global_config.yaml',
-        'gibson2/core/render/mesh_renderer/shaders/*'
+        'gibson2/render/mesh_renderer/shaders/*'
     ]},
     include_package_data=True,
 ) #yapf: disable


@@ -1,4 +1,9 @@
-from gibson2.render.mesh_renderer import MeshRendererContext
+import platform
+if platform.system() == 'Darwin':
+    from gibson2.render.mesh_renderer import GLFWRendererContext as MeshRendererContext
+else:
+    from gibson2.render.mesh_renderer import EGLRendererContext as MeshRendererContext
 from gibson2.render.mesh_renderer.get_available_devices import get_available_devices
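The replacement code above selects the renderer context at import time based on the operating system. The same pattern in isolation, with placeholder strings standing in for the real context classes:

```python
import platform

# Select a backend by OS at import time, aliasing it to one common name,
# as the diff above does for GLFW (macOS) vs. EGL (elsewhere). The strings
# here are placeholders for the real renderer context classes.
if platform.system() == 'Darwin':
    renderer_backend = 'GLFW'  # macOS: use a GLFW-based context
else:
    renderer_backend = 'EGL'   # Linux: EGL enables headless rendering

print(renderer_backend)
```

Because the alias (`MeshRendererContext` in the diff) is the same on both branches, downstream code stays platform-agnostic.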


@@ -8,6 +8,7 @@ download_assets()
 download_demo_data()
 def test_env():
+    print("Test env")
     config_filename = os.path.join(gibson2.root_path, '../test/test_house.yaml')
     nav_env = NavigateEnv(config_file=config_filename, mode='headless')
     try:


@@ -6,7 +6,7 @@ from gibson2.external.pybullet_tools.utils import stable_z_on_aabb
 from gibson2.external.pybullet_tools.utils import get_center_extent
 from gibson2.simulator import Simulator
 from gibson2.scenes.scene import EmptyScene
-from gibson2.scenes.scene import save_urdfs_without_floating_joints, round_up
+from gibson2.scenes.scene import save_urdfs_without_floating_joints
 from gibson2.objects.base_object import InteractiveObj
 from gibson2.objects.base_object import VisualMarker
 from gibson2.utils.utils import rotate_vector_3d


@@ -1,4 +1,4 @@
-from gibson2.render.mesh_renderer import MeshRenderer
+from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
 import numpy as np
 import os
 import gibson2
@@ -6,12 +6,15 @@ import GPUtil
 test_dir = os.path.join(gibson2.assets_path, 'test')
 def test_render_loading_cleaning():
+    print('Test1')
     renderer = MeshRenderer(width=800, height=600)
     renderer.release()
 def test_render_rendering():
+    print('Test2')
     renderer = MeshRenderer(width=800, height=600)
     renderer.load_object(os.path.join(test_dir, 'mesh/bed1a77d92d64f5cbbaaae4feed64ec1_new.obj'))
     renderer.add_instance(0)
@@ -27,6 +30,7 @@ def test_render_rendering():
 def test_render_rendering_cleaning():
+    print('Test3')
     for i in range(5):
         renderer = MeshRenderer(width=800, height=600)
         renderer.load_object(os.path.join(test_dir, 'mesh/bed1a77d92d64f5cbbaaae4feed64ec1_new.obj'))


@@ -1,4 +1,4 @@
-from gibson2.render import Viewer
+from gibson2.render.viewer import Viewer
 def test_viewer():