Merge branch 'vr_new' into igdsl2

Chengshu Li 2020-12-14 20:52:26 -08:00
commit bf18531d94
52 changed files with 1516 additions and 2898 deletions

View File

@ -50,7 +50,7 @@ The scenes are annotated with bounding box location and size of different object
) is baked offline with high-performance ray tracing
- Scenes are defined in iGSDF (iGibson Scene Definition Format), an extension of URDF, and shapes are OBJ files with
associated materials
For instructions on installing iGibson and downloading the dataset, visit the [installation guide](http://svl.stanford.edu/igibson/docs/installation.html).
There are other datasets that we link to iGibson. We include support for CubiCasa5K and 3DFront scenes, adding more than 8000 extra interactive scenes to use in iGibson! Check our documentation on how to use them.
@ -121,14 +121,13 @@ Gibson
* Get codebase and assets:
```
$ git clone https://github.com/fxia22/iGibson.git --recursive
$ git clone https://github.com/fxia22/iGibson.git --init --recursive
$ cd iGibson
$ git checkout vr
$ git checkout vr_new
$ git submodule update --recursive
```
Download Gibson assets and copy to iGibson/gibson2/assets/
Download environments (scenes) and copy to iGibson/gibson2/assets/dataset
Follow the instructions on the iGibson website to obtain the iGibson assets and dataset (http://svl.stanford.edu/igibson/docs/).
* Create anaconda env:
@ -137,40 +136,64 @@ $ conda create -n gibsonvr python=3.6
```
Activate conda env:
```
$ source activate gibsonvr
$ conda activate gibsonvr
```
* Install Gibson in anaconda env:
```
$ cd iGibson
```
- If you followed the instructions, iGibson is at the vr branch
- If you followed the instructions, iGibson is at the vr_new branch
```
$ pip install -e .
```
The installation should finish by printing 'Successfully installed gibson2'
Important - VR functionality and where to find it:
You can find all the VR demos in iGibson/examples/demo/vr_demos
Run:
$ python vr_playground_no_pbr (for a scene without PBR)
or
$ python vr_playground_pbr (for the current state-of-the-art Gibson graphics)
Data saving/replay code can be found in vr_demos/data_save_replay.
Run vr_demo_save to save a demo to a log file, and vr_demo_replay to run it again.
Please see the demos and gibson2/utils/vr_logging.py for more details on the data saving/replay system.
To use the VR hand asset, please download and unzip the asset and put it into assets/models under the folder name 'vr_hand'.
The asset is stored in a drive folder and is entitled vr_hand.zip.
Link to VR hand zip: https://drive.google.com/drive/folders/1zm3ZpPc7yHwyALEGfsb0_NybFMvV81Um?usp=sharing
You can find all the VR demos in iGibson/examples/demo/vr_demos, which has the following structure:
-vr_playground.py
--robot_embodiment (folder)
---vr_demo_robot_control.py
--muvr (folder)
---igvr_client.py
---igvr_server.py
---muvr_demo.py
--data_save_replay (folder)
---vr_states_sr.py
---vr_actions_sr.py
---vr_logs (folder containing saved data)
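The data_save_replay demos listed above (vr_states_sr.py and vr_actions_sr.py) are built on the VRLogWriter/VRLogReader classes in gibson2/utils/vr_logging.py. As a rough orientation only, a minimal sketch of that save/replay flow is shown below; it assumes a Simulator `s` and a VrAgent `vr_agent` have already been set up as in the demos, and 'vr_logs/my_demo.h5' is just a placeholder path:
```
from gibson2.utils.vr_logging import VRLogReader, VRLogWriter

# --- Saving (see vr_states_sr.py / vr_actions_sr.py for the full versions) ---
vr_writer = VRLogWriter(frames_before_write=200, log_filepath='vr_logs/my_demo.h5')
vr_writer.set_up_data_storage()               # call once any actions have been registered
for _ in range(2000):                         # record a fixed number of frames
    s.step()
    vr_agent.update()                         # keep the VR agent in sync with the headset/controllers
    vr_writer.process_frame(s)                # store this frame's state in the log
vr_writer.end_log_session()                   # flush and close the log file

# --- Replay ---
vr_reader = VRLogReader(log_filepath='vr_logs/my_demo.h5')
while vr_reader.get_data_left_to_read():
    vr_reader.read_frame(s, fullReplay=True)  # restore every object's saved state for this frame
    s.step()
```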
Additional information:
1) Most VR functions can be found in the gibson2/simulator.py
2) The VrAgent and its associated VR objects can be found in gibson2/objects/vr_objects.py
3) VR utility functions are found in gibson2/utils/vr_utils.py
4) The VR renderer can be found in gibson2/render/mesh_renderer.py
5) The underlying VR C++ code can be found in vr_mesh_render.h and .cpp in gibson2/render/cpp
To get started with the iGibson VR experience, run:
$ python vr_playground.py
By default, the LOAD_PARTIAL boolean is set to True to speed up loading (only the first 10 objects of the scene are loaded, along with some objects to interact with). Set it to False if you wish to load the entire Rs_int scene.
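If it helps to see the overall shape before opening the file, here is a rough, simplified sketch of what vr_playground.py does, based on the VrSettings/VrAgent API that appears elsewhere in this commit; the real demo also builds MeshRendererSettings with HDR textures, and the exact options may differ:
```
from gibson2.render.mesh_renderer.mesh_renderer_vr import VrSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.vr_objects import VrAgent
from gibson2.simulator import Simulator

vr_settings = VrSettings(use_vr=True)        # set use_vr=False to run without a headset
s = Simulator(mode='vr', vr_settings=vr_settings)

scene = InteractiveIndoorScene('Rs_int')
scene._set_first_n_objects(10)               # partial load, what LOAD_PARTIAL=True does
s.import_ig_scene(scene)

vr_agent = VrAgent(s)                        # imports the VR body/hands and handles initialization
s.optimize_vertex_and_texture()

for _ in range(3000):                        # the real demo loops until the window is closed
    s.step()                                 # step physics and render to the HMD
    vr_agent.update()                        # sync the agent with the tracked VR devices
s.disconnect()
```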
To use the VR assets, please access the Google Drive folder at this link:
https://drive.google.com/drive/folders/1zm3ZpPc7yHwyALEGfsb0_NybFMvV81Um?usp=sharing
You will need to download both vr_body and vr_hand and place them into assets/models. The pack_lunch folder, which contains the grocery assets used in the ATUS demos, can also be found there; please put it into your assets/models folder as well.
Have fun in VR!
Helpful tips:
Press ESCAPE to force the fullscreen rendering window to close during program execution.
Before using SRAnipal eye tracking, you may want to re-calibrate the eye tracker. Please go to the Vive system settings to perform this calibration.
1) Press ESCAPE to force the fullscreen rendering window to close during program execution (although fullscreen is disabled by default)
2) Before using SRAnipal eye tracking, you may want to re-calibrate the eye tracker. Please go to the Vive system settings to perform this calibration.

View File

@ -1,151 +0,0 @@
""" Lunch packing demo - initial conditions - Kent """
import numpy as np
import os
import pybullet as p
import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.simulator import Simulator
from gibson2.utils.vr_utils import move_player_no_body
optimize = True
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Beechwood_0_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=optimize,
fullscreen=False,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', rendering_settings=vr_rendering_settings, vr_eye_tracking=False, vr_mode=True)
scene = InteractiveIndoorScene('Beechwood_0_int')
s.import_ig_scene(scene)
# Position that is roughly in the middle of the kitchen - used to help place objects
kitchen_middle = [-4.5, -3.5, 1.5]
# List of object names to filename mapping
lunch_pack_folder = os.path.join(gibson2.assets_path, 'pack_lunch')
lunch_pack_files = {
'chip': os.path.join(lunch_pack_folder, 'food', 'snack', 'chips', 'chips0', 'rigid_body.urdf'),
'fruit': os.path.join(lunch_pack_folder, 'food', 'fruit', 'pear', 'pear00', 'rigid_body.urdf'),
'water': os.path.join(lunch_pack_folder, 'drink', 'soda', 'soda23_mountaindew710mL', 'rigid_body.urdf'),
'eggs': os.path.join(lunch_pack_folder, 'eggs', 'eggs00_eggland', 'rigid_body.urdf'),
'container': os.path.join(lunch_pack_folder, 'dish', 'casserole_dish', 'casserole_dish00', 'rigid_body.urdf')
}
item_scales = {
'chip': 1,
'fruit': 0.9,
'water': 0.8,
'eggs': 0.5,
'container': 0.5
}
# A list of start positions and orientations for the objects - determined by placing objects in VR
item_start_pos_orn = {
'chip': [
[(-5.39, -1.62, 1.42), (-0.14, -0.06, 0.71, 0.69)],
[(-5.39, -1.62, 1.49), (-0.14, -0.06, 0.71, 0.69)],
[(-5.12, -1.62, 1.42), (-0.14, -0.06, 0.71, 0.69)],
[(-5.12, -1.62, 1.49), (-0.14, -0.06, 0.71, 0.69)],
],
'fruit': [
[(-4.8, -3.55, 0.97), (0, 0, 0, 1)],
[(-4.8, -3.7, 0.97), (0, 0, 0, 1)],
[(-4.8, -3.85, 0.97), (0, 0, 0, 1)],
[(-4.8, -4.0, 0.97), (0, 0, 0, 1)],
],
'water': [
[(-5.0, -3.55, 1.03), (0.68, -0.18, -0.18, 0.68)],
[(-5.0, -3.7, 1.03), (0.68, -0.18, -0.18, 0.68)],
[(-5.0, -3.85, 1.03), (0.68, -0.18, -0.18, 0.68)],
[(-5.0, -4.0, 1.03), (0.68, -0.18, -0.18, 0.68)],
],
'eggs': [
[(-4.65, -1.58, 1.40), (0.72, 0, 0, 0.71)],
[(-4.66, -1.58, 1.46), (0.72, 0, 0, 0.71)],
[(-4.89, -1.58, 1.40), (0.72, 0, 0, 0.71)],
[(-4.89, -1.58, 1.46), (0.72, 0, 0, 0.71)],
],
'container': [
[(-4.1, -1.82, 0.87), (0.71, 0, 0, 0.71)],
[(-4.5, -1.82, 0.87), (0.71, 0, 0, 0.71)],
[(-4.9, -1.82, 0.87), (0.71, 0, 0, 0.71)],
[(-5.3, -1.82, 0.87), (0.71, 0, 0, 0.71)],
]
}
# Import all objects and put them in the correct positions
pack_items = list(lunch_pack_files.keys())
for item in pack_items:
    fpath = lunch_pack_files[item]
    start_pos_orn = item_start_pos_orn[item]
    item_scale = item_scales[item]
    for pos, orn in start_pos_orn:
        item_ob = ArticulatedObject(fpath, scale=item_scale)
        s.import_object(item_ob)
        item_ob.set_position(pos)
        item_ob.set_orientation(orn)
vr_body = VrBody()
s.import_object(vr_body, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
vr_body.init_body([kitchen_middle[0], kitchen_middle[1]])
r_hand = VrHand(hand='right')
s.import_object(r_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
r_hand.set_start_state(start_pos=[kitchen_middle[0], kitchen_middle[1], 2])
l_hand = VrHand(hand='left')
s.import_object(l_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
l_hand.set_start_state(start_pos=[kitchen_middle[0], kitchen_middle[1], 2.2])
if optimize:
    s.optimize_vertex_and_texture()
s.set_vr_offset([-4.34, -2.68, -0.5])
time_fps = True
while True:
    start_time = time.time()
    s.step()
    hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
    l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
    r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
    l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
    r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
    if r_is_valid:
        r_hand.move(r_trans, r_rot)
        r_hand.set_close_fraction(r_trig)
        vr_body.move_body(s, r_touch_x, r_touch_y, 0.03, 'hmd')
    if l_is_valid:
        l_hand.move(l_trans, l_rot)
        l_hand.set_close_fraction(l_trig)
    frame_dur = time.time() - start_time
    if time_fps:
        print('Fps: {}'.format(round(1/max(frame_dur, 0.00001), 2)))
s.disconnect()

View File

@ -1,4 +1,14 @@
""" Lunch packing demo - initial conditions - Eric """
""" VR playground containing various objects. This playground operates in the
Rs_int PBR scene.
Important - VR functionality and where to find it:
1) Most VR functions can be found in the gibson2/simulator.py
2) The VrAgent and its associated VR objects can be found in gibson2/objects/vr_objects.py
3) VR utility functions are found in gibson2/utils/vr_utils.py
4) The VR renderer can be found in gibson2/render/mesh_renderer.py
5) The underlying VR C++ code can be found in vr_mesh_render.h and .cpp in gibson2/render/cpp
"""
import numpy as np
import os
@ -7,13 +17,21 @@ import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.render.mesh_renderer.mesh_renderer_vr import VrSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.object_base import Object
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.vr_objects import VrAgent
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
import signal
import sys
optimize = True
# Set to false to load entire Rs_int scene
LOAD_PARTIAL = True
# Set to true to print out render, physics and overall frame FPS
PRINT_FPS = True
# HDR files for PBR rendering
hdr_texture = os.path.join(
@ -21,12 +39,12 @@ hdr_texture = os.path.join(
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Beechwood_0_int', 'layout', 'floor_lighttype_0.png')
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=optimize,
vr_rendering_settings = MeshRendererSettings(optimized=True,
fullscreen=False,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
@ -36,16 +54,36 @@ vr_rendering_settings = MeshRendererSettings(optimized=optimize,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', rendering_settings=vr_rendering_settings, vr_eye_tracking=False, vr_mode=True)
# VR system settings
# Change use_vr to toggle VR mode on/off
vr_settings = VrSettings(use_vr=True)
s = Simulator(mode='vr',
rendering_settings=vr_rendering_settings,
vr_settings=vr_settings)
scene = InteractiveIndoorScene('Beechwood_0_int')
# Turn this on when debugging to speed up loading
if LOAD_PARTIAL:
    scene._set_first_n_objects(10)
# Set gravity to 0 to stop all objects falling on the floor
p.setGravity(0, 0, 0)
s.import_ig_scene(scene)
# Position that is roughly in the middle of the kitchen - used to help place objects
kitchen_middle = [-4.5, -3.5, 1.5]
kitchen_middle = [-3.7, -2.7, 1.8]
if not vr_settings.use_vr:
    camera_pose = np.array(kitchen_middle)
    view_direction = np.array([0, 1, 0])
    s.renderer.set_camera(camera_pose, camera_pose + view_direction, [0, 0, 1])
    s.renderer.set_fov(90)
# Create a VrAgent and it will handle all initialization and importing under-the-hood
vr_agent = VrAgent(s)
# List of object names to filename mapping
lunch_pack_folder = os.path.join(gibson2.assets_path, 'pack_lunch')
lunch_pack_folder = os.path.join(gibson2.assets_path, 'models', 'pack_lunch')
lunch_pack_files = {
'sandwich': os.path.join(lunch_pack_folder, 'cereal', 'cereal01', 'rigid_body.urdf'),
'chip': os.path.join(lunch_pack_folder, 'food', 'snack', 'chips', 'chips0', 'rigid_body.urdf'),
@ -120,6 +158,8 @@ item_start_pos_orn = {
]
}
# Import all objects and put them in the correct positions
pack_items = list(lunch_pack_files.keys())
for item in pack_items:
@ -132,44 +172,21 @@ for item in pack_items:
        item_ob.set_position(pos)
        item_ob.set_orientation(orn)
vr_body = VrBody()
s.import_object(vr_body, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
vr_body.init_body([kitchen_middle[0], kitchen_middle[1]])
s.optimize_vertex_and_texture()
r_hand = VrHand(hand='right')
s.import_object(r_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
r_hand.set_start_state(start_pos=[kitchen_middle[0], kitchen_middle[1], 2])
if vr_settings.use_vr:
    # Since vr_height_offset is set, we will use the VR HMD true height plus this offset instead of the third entry of the start pos
    s.set_vr_start_pos(kitchen_middle, vr_height_offset=-0.1)
l_hand = VrHand(hand='left')
s.import_object(l_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
l_hand.set_start_state(start_pos=[kitchen_middle[0], kitchen_middle[1], 2.2])
if optimize:
s.optimize_vertex_and_texture()
s.set_vr_offset([-4.34, -2.68, -0.5])
time_fps = True
# Main simulation loop
while True:
    start_time = time.time()
    s.step()
    s.step(print_time=PRINT_FPS)
    hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
    l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
    r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
    l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
    r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
    if r_is_valid:
        r_hand.move(r_trans, r_rot)
        r_hand.set_close_fraction(r_trig)
        vr_body.move_body(s, r_touch_x, r_touch_y, 0.03, 'hmd')
    if l_is_valid:
        l_hand.move(l_trans, l_rot)
        l_hand.set_close_fraction(l_trig)
    # Don't update VR agents or query events if we are not using VR
    if not vr_settings.use_vr:
        continue
    frame_dur = time.time() - start_time
    if time_fps:
        print('Fps: {}'.format(round(1/max(frame_dur, 0.00001), 2)))
    # Update VR objects
    vr_agent.update()
s.disconnect()

View File

@ -1,175 +0,0 @@
""" Lunch packing demo - initial conditions - Eric """
import numpy as np
import os
import pybullet as p
import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.simulator import Simulator
from gibson2.utils.vr_utils import move_player_no_body
optimize = True
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Beechwood_0_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=optimize,
fullscreen=False,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', rendering_settings=vr_rendering_settings, vr_eye_tracking=False, vr_mode=True)
scene = InteractiveIndoorScene('Beechwood_0_int')
s.import_ig_scene(scene)
# Position that is roughly in the middle of the kitchen - used to help place objects
kitchen_middle = [-4.5, -3.5, 1.5]
# List of object names to filename mapping
lunch_pack_folder = os.path.join(gibson2.assets_path, 'pack_lunch')
lunch_pack_files = {
'sandwich': os.path.join(lunch_pack_folder, 'cereal', 'cereal01', 'rigid_body.urdf'),
'chip': os.path.join(lunch_pack_folder, 'food', 'snack', 'chips', 'chips0', 'rigid_body.urdf'),
'fruit': os.path.join(lunch_pack_folder, 'food', 'fruit', 'pear', 'pear00', 'rigid_body.urdf'),
'bread': os.path.join(lunch_pack_folder, 'granola', 'granola00', 'rigid_body.urdf'),
'yogurt': os.path.join(lunch_pack_folder, 'food', 'dairy', 'yogurt', 'yogurt00_dannonbananacarton', 'rigid_body.urdf'),
'water': os.path.join(lunch_pack_folder, 'drink', 'soda', 'soda23_mountaindew710mL', 'rigid_body.urdf'),
'eggs': os.path.join(lunch_pack_folder, 'eggs', 'eggs00_eggland', 'rigid_body.urdf'),
'container': os.path.join(lunch_pack_folder, 'dish', 'casserole_dish', 'casserole_dish00', 'rigid_body.urdf')
}
item_scales = {
'sandwich': 0.7,
'chip': 1,
'fruit': 0.9,
'bread': 0.7,
'yogurt': 1,
'water': 0.8,
'eggs': 0.5,
'container': 0.3
}
# A list of start positions and orientations for the objects - determined by placing objects in VR
item_start_pos_orn = {
'sandwich': [
[(-5.24, -1.6, 0.97), (0, 0.71, 0.71, 0)],
[(-5.24, -1.7, 0.97), (0, 0.71, 0.71, 0)],
[(-5.24, -1.8, 0.97), (0, 0.71, 0.71, 0)],
[(-5.24, -1.9, 0.97), (0, 0.71, 0.71, 0)],
],
'chip': [
[(-5.39, -1.62, 1.42), (-0.14, -0.06, 0.71, 0.69)],
[(-5.39, -1.62, 1.49), (-0.14, -0.06, 0.71, 0.69)],
[(-5.12, -1.62, 1.42), (-0.14, -0.06, 0.71, 0.69)],
[(-5.12, -1.62, 1.49), (-0.14, -0.06, 0.71, 0.69)],
],
'fruit': [
[(-4.8, -3.55, 0.97), (0, 0, 0, 1)],
[(-4.8, -3.7, 0.97), (0, 0, 0, 1)],
[(-4.8, -3.85, 0.97), (0, 0, 0, 1)],
[(-4.8, -4.0, 0.97), (0, 0, 0, 1)],
],
'bread': [
[(-5.39, -1.6, 0.97), (0, 0.71, 0.71, 0)],
[(-5.39, -1.7, 0.97), (0, 0.71, 0.71, 0)],
[(-5.39, -1.8, 0.97), (0, 0.71, 0.71, 0)],
[(-5.39, -1.9, 0.97), (0, 0.71, 0.71, 0)],
],
'yogurt': [
[(-5.43, -1.64, 1.68), (0.57, 0.42, 0.42, 0.57)],
[(-5.32, -1.64, 1.68), (0.57, 0.42, 0.42, 0.57)],
[(-5.2, -1.64, 1.68), (0.57, 0.42, 0.42, 0.57)],
[(-5.1, -1.64, 1.68), (0.57, 0.42, 0.42, 0.57)],
],
'water': [
[(-4.61, -1.69, 1.73), (0.68, -0.18, -0.18, 0.68)],
[(-4.69, -1.69, 1.73), (0.68, -0.18, -0.18, 0.68)],
[(-4.8, -1.69, 1.73), (0.68, -0.18, -0.18, 0.68)],
[(-4.9, -1.69, 1.73), (0.68, -0.18, -0.18, 0.68)],
],
'eggs': [
[(-4.65, -1.58, 1.40), (0.72, 0, 0, 0.71)],
[(-4.66, -1.58, 1.46), (0.72, 0, 0, 0.71)],
[(-4.89, -1.58, 1.40), (0.72, 0, 0, 0.71)],
[(-4.89, -1.58, 1.46), (0.72, 0, 0, 0.71)],
],
'container': [
[(-4.1, -1.82, 0.87), (0.71, 0, 0, 0.71)],
[(-4.4, -1.82, 0.87), (0.71, 0, 0, 0.71)],
[(-4.7, -1.82, 0.87), (0.71, 0, 0, 0.71)],
[(-5.0, -1.82, 0.87), (0.71, 0, 0, 0.71)],
]
}
# Import all objects and put them in the correct positions
pack_items = list(lunch_pack_files.keys())
for item in pack_items:
    fpath = lunch_pack_files[item]
    start_pos_orn = item_start_pos_orn[item]
    item_scale = item_scales[item]
    for pos, orn in start_pos_orn:
        item_ob = ArticulatedObject(fpath, scale=item_scale)
        s.import_object(item_ob)
        item_ob.set_position(pos)
        item_ob.set_orientation(orn)
vr_body = VrBody()
s.import_object(vr_body, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
vr_body.init_body([kitchen_middle[0], kitchen_middle[1]])
r_hand = VrHand(hand='right')
s.import_object(r_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
r_hand.set_start_state(start_pos=[kitchen_middle[0], kitchen_middle[1], 2])
l_hand = VrHand(hand='left')
s.import_object(l_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
l_hand.set_start_state(start_pos=[kitchen_middle[0], kitchen_middle[1], 2.2])
if optimize:
    s.optimize_vertex_and_texture()
s.set_vr_offset([-4.34, -2.68, -0.5])
time_fps = True
while True:
    start_time = time.time()
    s.step()
    hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
    l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
    r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
    l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
    r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
    if r_is_valid:
        r_hand.move(r_trans, r_rot)
        r_hand.set_close_fraction(r_trig)
        vr_body.move_body(s, r_touch_x, r_touch_y, 0.03, 'hmd')
    if l_is_valid:
        l_hand.move(l_trans, l_rot)
        l_hand.set_close_fraction(l_trig)
    frame_dur = time.time() - start_time
    if time_fps:
        print('Fps: {}'.format(round(1/max(frame_dur, 0.00001), 2)))
s.disconnect()

View File

@ -1,182 +0,0 @@
""" A very simple VR program containing only a single scene.
The user can fly around the scene using the controller, and can
explore whether all the graphics features of iGibson are working as intended.
"""
import numpy as np
import os
import pybullet as p
import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.simulator import Simulator
from gibson2.utils.vr_utils import move_player_no_body
optimize = True
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Beechwood_0_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=optimize,
fullscreen=False,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=False,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', physics_timestep = 1/90.0, render_timestep = 1/90.0, rendering_settings=vr_rendering_settings,
vr_eye_tracking=False, vr_mode=True)
scene = InteractiveIndoorScene('Beechwood_0_int')
scene._set_first_n_objects(10)
p.setGravity(0, 0, 0)
s.import_ig_scene(scene)
# Position that is roughly in the middle of the kitchen - used to help place objects
kitchen_middle = [-4.5, -3.5, 1.5]
# List of object names to filename mapping
lunch_pack_folder = os.path.join(gibson2.assets_path, 'pack_lunch')
lunch_pack_files = {
'sandwich': os.path.join(lunch_pack_folder, 'cereal', 'cereal01', 'rigid_body.urdf'),
'chip': os.path.join(lunch_pack_folder, 'food', 'snack', 'chips', 'chips0', 'rigid_body.urdf'),
'fruit': os.path.join(lunch_pack_folder, 'food', 'fruit', 'pear', 'pear00', 'rigid_body.urdf'),
'bread': os.path.join(lunch_pack_folder, 'granola', 'granola00', 'rigid_body.urdf'),
'yogurt': os.path.join(lunch_pack_folder, 'food', 'dairy', 'yogurt', 'yogurt00_dannonbananacarton', 'rigid_body.urdf'),
'water': os.path.join(lunch_pack_folder, 'drink', 'soda', 'soda23_mountaindew710mL', 'rigid_body.urdf'),
'eggs': os.path.join(lunch_pack_folder, 'eggs', 'eggs00_eggland', 'rigid_body.urdf'),
'container': os.path.join(lunch_pack_folder, 'dish', 'casserole_dish', 'casserole_dish00', 'rigid_body.urdf')
}
item_nums = {
'sandwich': 4,
'chip': 4,
'fruit': 4,
'bread': 4,
'yogurt': 4,
'water': 4,
'eggs': 4,
'container': 4
}
# Objects will be loaded into a grid starting at the kitchen middle
# All have the same starting x coordinate and different y coordinates (offsets from kitchen middle)
item_y_offsets = {
'sandwich': 0.4,
'chip': 0.1,
'fruit': -0.4,
'bread': -0.6,
'yogurt': -0.8,
'water': -1.0,
'eggs': -1.4,
'container': -1.8
}
x_offsets = [0, -0.2, -0.4, -0.6]
item_height = 1.2
# Store object data for body id - name, position and orientation - for use in object placement
body_id_dict = {}
all_items = []
# Import all objects and put them in the correct positions
pack_items = list(lunch_pack_files.keys())
for item in pack_items:
    fpath = lunch_pack_files[item]
    y_offset = item_y_offsets[item]
    num_items = item_nums[item]
    for i in range(num_items):
        x_offset = x_offsets[i]
        if item == 'container':
            x_offset = x_offset * 2
        if item == 'eggs':
            x_offset = x_offset * 1.7
        item_scale = 1
        if item == 'container':
            item_scale = 0.4
        elif item == 'eggs':
            item_scale = 0.5
        elif item == 'water':
            item_scale = 0.8
        item_ob = ArticulatedObject(fpath, scale=item_scale)
        s.import_object(item_ob)  # , use_pbr=False, use_pbr_mapping=False, shadow_caster=False)
        item_ob.set_position([kitchen_middle[0] + x_offset, kitchen_middle[1] + y_offset, item_height])
        all_items.append(item_ob)
        bid = item_ob.body_id
        item_data = [item, item_ob.get_position(), item_ob.get_orientation()]
        body_id_dict[bid] = item_data
r_hand = VrHand(hand='right')
s.import_object(r_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=False)
r_hand.set_start_state(start_pos=[kitchen_middle[0], kitchen_middle[1], 2])
l_hand = VrHand(hand='left')
s.import_object(l_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=False)
l_hand.set_start_state(start_pos=[kitchen_middle[0], kitchen_middle[1], 2.2])
if optimize:
    s.optimize_vertex_and_texture()
s.set_vr_pos([-4.5, -3.5, 0.3])
time_fps = False
while True:
    # Use the right menu controller to change the z offset so we can easily change height when placing objects
    vr_z_offset = 0
    event_list = s.poll_vr_events()
    for event in event_list:
        device_type, event_type = event
        if device_type == 'right_controller':
            if event_type == 'menu_press':
                # Press the menu button to move up
                vr_z_offset = 0.01
            elif event_type == 'grip_press':
                # Press the grip to move down
                vr_z_offset = -0.01
            elif event_type == 'touchpad_press':
                # Print body_id_dict data
                print(body_id_dict)
    curr_offset = s.get_vr_offset()
    s.set_vr_offset([curr_offset[0], curr_offset[1], curr_offset[2] + vr_z_offset])
    start_time = time.time()
    s.step()
    hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
    l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
    r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
    l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
    r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
    if r_is_valid:
        r_hand.move(r_trans, r_rot)
        r_hand.set_close_fraction(r_trig)
        move_player_no_body(s, r_touch_x, r_touch_y, 0.005, 'hmd')
    if l_is_valid:
        l_hand.move(l_trans, l_rot)
        l_hand.set_close_fraction(l_trig)
    frame_dur = time.time() - start_time
    if time_fps:
        print('Fps: {}'.format(round(1/max(frame_dur, 0.00001), 2)))
    # Every frame we update the body_id_dictionary data
    for item in all_items:
        body_id_dict[item.body_id][1] = item.get_position()
        body_id_dict[item.body_id][2] = item.get_orientation()
s.disconnect()

View File

@ -1,151 +0,0 @@
""" Lunch packing demo - initial conditions - Kent """
import numpy as np
import os
import pybullet as p
import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.simulator import Simulator
from gibson2.utils.vr_utils import move_player_no_body
optimize = True
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Beechwood_0_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=optimize,
fullscreen=False,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', rendering_settings=vr_rendering_settings, vr_eye_tracking=False, vr_mode=True)
scene = InteractiveIndoorScene('Beechwood_0_int')
s.import_ig_scene(scene)
# Position that is roughly in the middle of the kitchen - used to help place objects
kitchen_middle = [-4.5, -3.5, 1.5]
# List of object names to filename mapping
lunch_pack_folder = os.path.join(gibson2.assets_path, 'pack_lunch')
lunch_pack_files = {
'chip': os.path.join(lunch_pack_folder, 'food', 'snack', 'chips', 'chips0', 'rigid_body.urdf'),
'fruit': os.path.join(lunch_pack_folder, 'food', 'fruit', 'pear', 'pear00', 'rigid_body.urdf'),
'water': os.path.join(lunch_pack_folder, 'drink', 'soda', 'soda23_mountaindew710mL', 'rigid_body.urdf'),
'eggs': os.path.join(lunch_pack_folder, 'eggs', 'eggs00_eggland', 'rigid_body.urdf'),
'container': os.path.join(lunch_pack_folder, 'dish', 'casserole_dish', 'casserole_dish00', 'rigid_body.urdf')
}
item_scales = {
'chip': 1,
'fruit': 0.9,
'water': 0.8,
'eggs': 0.5,
'container': 0.5
}
# A list of start positions and orientations for the objects - determined by placing objects in VR
item_start_pos_orn = {
'chip': [
[(-5.39, -1.62, 1.42), (-0.14, -0.06, 0.71, 0.69)],
[(-5.39, -1.62, 1.49), (-0.14, -0.06, 0.71, 0.69)],
[(-5.12, -1.62, 1.42), (-0.14, -0.06, 0.71, 0.69)],
[(-5.12, -1.62, 1.49), (-0.14, -0.06, 0.71, 0.69)],
],
'fruit': [
[(-4.8, -3.55, 0.97), (0, 0, 0, 1)],
[(-4.8, -3.7, 0.97), (0, 0, 0, 1)],
[(-4.8, -3.85, 0.97), (0, 0, 0, 1)],
[(-4.8, -4.0, 0.97), (0, 0, 0, 1)],
],
'water': [
[(-5.0, -3.55, 1.03), (0.68, -0.18, -0.18, 0.68)],
[(-5.0, -3.7, 1.03), (0.68, -0.18, -0.18, 0.68)],
[(-5.0, -3.85, 1.03), (0.68, -0.18, -0.18, 0.68)],
[(-5.0, -4.0, 1.03), (0.68, -0.18, -0.18, 0.68)],
],
'eggs': [
[(-4.65, -1.58, 1.40), (0.72, 0, 0, 0.71)],
[(-4.66, -1.58, 1.46), (0.72, 0, 0, 0.71)],
[(-4.89, -1.58, 1.40), (0.72, 0, 0, 0.71)],
[(-4.89, -1.58, 1.46), (0.72, 0, 0, 0.71)],
],
'container': [
[(-4.1, -1.82, 0.87), (0.71, 0, 0, 0.71)],
[(-4.5, -1.82, 0.87), (0.71, 0, 0, 0.71)],
[(-4.9, -1.82, 0.87), (0.71, 0, 0, 0.71)],
[(-5.3, -1.82, 0.87), (0.71, 0, 0, 0.71)],
]
}
# Import all objects and put them in the correct positions
pack_items = list(lunch_pack_files.keys())
for item in pack_items:
    fpath = lunch_pack_files[item]
    start_pos_orn = item_start_pos_orn[item]
    item_scale = item_scales[item]
    for pos, orn in start_pos_orn:
        item_ob = ArticulatedObject(fpath, scale=item_scale)
        s.import_object(item_ob)
        item_ob.set_position(pos)
        item_ob.set_orientation(orn)
vr_body = VrBody()
s.import_object(vr_body, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
vr_body.init_body([kitchen_middle[0], kitchen_middle[1]])
r_hand = VrHand(hand='right')
s.import_object(r_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
r_hand.set_start_state(start_pos=[kitchen_middle[0], kitchen_middle[1], 2])
l_hand = VrHand(hand='left')
s.import_object(l_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
l_hand.set_start_state(start_pos=[kitchen_middle[0], kitchen_middle[1], 2.2])
if optimize:
    s.optimize_vertex_and_texture()
s.set_vr_offset([-4.34, -2.68, -0.5])
time_fps = True
while True:
    start_time = time.time()
    s.step()
    hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
    l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
    r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
    l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
    r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
    if r_is_valid:
        r_hand.move(r_trans, r_rot)
        r_hand.set_close_fraction(r_trig)
        vr_body.move_body(s, r_touch_x, r_touch_y, 0.03, 'hmd')
    if l_is_valid:
        l_hand.move(l_trans, l_rot)
        l_hand.set_close_fraction(l_trig)
    frame_dur = time.time() - start_time
    if time_fps:
        print('Fps: {}'.format(round(1/max(frame_dur, 0.00001), 2)))
s.disconnect()

View File

@ -0,0 +1,164 @@
""" VR saving/replay demo.
This demo saves the actions of certain objects as well as states. Either can
be used to playback later in the replay demo.
In this demo, we save some "mock" VR actions that are already saved by default,
but can be saved separately as actions to demonstrate the action-saving system.
During replay, we use a combination of these actions and saved values to get data
that can be used to control the physics simulation without setting every object's
transform each frame.
Usage:
python vr_actions_sr.py --mode=[save/replay]
This demo saves to vr_logs/vr_actions_sr.h5
Run this demo (and also change the filename) if you would like to save your own data."""
import argparse
import numpy as np
import os
import pybullet as p
import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.render.mesh_renderer.mesh_renderer_vr import VrSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.object_base import Object
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrAgent
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_logging import VRLogReader, VRLogWriter
from gibson2 import assets_path
# Number of seconds to run the data saving for
DATA_SAVE_RUNTIME = 30
# Set to false to load entire Rs_int scene
LOAD_PARTIAL = True
# Set to true to print out render, physics and overall frame FPS
PRINT_FPS = False
def run_state_sr(mode):
"""
Runs state save/replay. Mode can either be save or replay.
"""
assert mode in ['save', 'replay']
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=True,
fullscreen=False,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
# VR system settings
# Change use_vr to toggle VR mode on/off
vr_settings = VrSettings(use_vr=(mode == 'save'))
s = Simulator(mode='vr',
rendering_settings=vr_rendering_settings,
vr_settings=vr_settings)
scene = InteractiveIndoorScene('Rs_int')
# Turn this on when debugging to speed up loading
if LOAD_PARTIAL:
scene._set_first_n_objects(10)
s.import_ig_scene(scene)
# Create a VrAgent and it will handle all initialization and importing under-the-hood
# Data replay uses constraints during both save and replay modes
vr_agent = VrAgent(s, use_constraints=True)
# Objects to interact with
mass_list = [5, 10, 100, 500]
mustard_start = [-1, 1.55, 1.2]
for i in range(len(mass_list)):
mustard = YCBObject('006_mustard_bottle')
s.import_object(mustard, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
mustard.set_position([mustard_start[0] + i * 0.2, mustard_start[1], mustard_start[2]])
p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
s.optimize_vertex_and_texture()
if vr_settings.use_vr:
# Since vr_height_offset is set, we will use the VR HMD true height plus this offset instead of the third entry of the start pos
s.set_vr_start_pos([0, 0, 0], vr_height_offset=-0.1)
# Note: Modify this path to save to different files
vr_log_path = 'vr_logs/vr_actions_sr.h5'
mock_vr_action_path = 'mock_vr_action'
if mode == 'save':
# Saves every 2 seconds or so (200 / 90fps is approx 2 seconds)
vr_writer = VRLogWriter(frames_before_write=200, log_filepath=vr_log_path, profiling_mode=True)
# Save a single button press as a mock action that demonstrates action-saving capabilities.
vr_writer.register_action(mock_vr_action_path, (1,))
# Call set_up_data_storage once all actions have been registered (in this demo we only save states so there are none)
# Despite having no actions, we need to call this function
vr_writer.set_up_data_storage()
else:
vr_reader = VRLogReader(log_filepath=vr_log_path)
if mode == 'save':
start_time = time.time()
# Main simulation loop - run for as long as the user specified
while (time.time() - start_time < DATA_SAVE_RUNTIME):
s.step(print_time=PRINT_FPS)
# Example of querying VR events to hide object
# We will store this as a mock action, even though it is saved by default
if s.query_vr_event('right_controller', 'touchpad_press'):
s.set_hidden_state(mustard, hide=not s.get_hidden_state(mustard))
vr_writer.save_action(mock_vr_action_path, np.array([1]))
# Update VR objects
vr_agent.update()
# Record this frame's data in the VRLogWriter
vr_writer.process_frame(s)
# Note: always call this after the simulation is over to close the log file
# and clean up resources used.
vr_writer.end_log_session()
else:
# The VR reader automatically shuts itself down and performs cleanup once the while loop has finished running
while vr_reader.get_data_left_to_read():
s.step()
# Note that fullReplay is set to False for action replay
vr_reader.read_frame(s, fullReplay=False)
# Read our mock action and hide/unhide the mustard based on its value
mock_action = int(vr_reader.read_action(mock_vr_action_path)[0])
if mock_action == 1:
s.set_hidden_state(mustard, hide=not s.get_hidden_state(mustard))
# Get relevant VR action data and update VR agent
vr_action_data = vr_reader.get_vr_action_data()
vr_agent.update(vr_action_data)
s.disconnect()
if __name__ == "__main__":
    parser = argparse.ArgumentParser(description='VR state saving and replay demo')
    parser.add_argument('--mode', default='save', help='Mode to run in: either save or replay')
    args = parser.parse_args()
    run_state_sr(mode=args.mode)

View File

@ -1,157 +0,0 @@
""" VR saving demo using simplified VR playground code.
This demo replays the actions of certain objects in the scene.
Note: This demo does not use PBR so it can be supported on a wide range of devices, including Mac OS.
This demo saves to vr_logs/vr_demo_save_states.h5
If you would like to replay the data, please run
vr_demo_replay using this file path as an input.
Run this demo if you would like to save your own data."""
import numpy as np
import os
import pybullet as p
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_logging import VRLogReader
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
# Playground configuration: edit this to change functionality
optimize = True
# Toggles fullscreen companion window
fullscreen = False
# Toggles SRAnipal eye tracking
use_eye_tracking = True
# Enables the VR collision body
enable_vr_body = True
# Toggles movement with the touchpad (to move outside of play area)
touchpad_movement = True
# Set to one of hmd, right_controller or left_controller to move relative to that device
relative_movement_device = 'hmd'
# Movement speed for touchpad-based movement
movement_speed = 0.03
# Whether we should hide a mustard bottle when the menu button is presed
hide_mustard_on_press = True
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', physics_timestep = 1/90.0, render_timestep = 1/90.0,
rendering_settings=MeshRendererSettings(optimized=optimize, fullscreen=fullscreen, enable_pbr=False),
vr_eye_tracking=use_eye_tracking, vr_mode=False)
scene = StaticIndoorScene('Placida')
s.import_scene(scene)
# Player body is represented by a translucent blue cylinder
if enable_vr_body:
    vr_body = VrBody()
    s.import_object(vr_body)
    # Note: we don't call init_body since we will be controlling the body directly through pos/orientation actions
# The hand can either be 'right' or 'left'
# It has enough friction to pick up the basket and the mustard bottles
r_hand = VrHand(hand='right')
s.import_object(r_hand)
# This sets the hand constraints so it can move with the VR controller
r_hand.set_start_state(start_pos=[0, 0, 1.5])
l_hand = VrHand(hand='left')
s.import_object(l_hand)
# This sets the hand constraints so it can move with the VR controller
l_hand.set_start_state(start_pos=[0, 0.5, 1.5])
if use_eye_tracking:
    # Eye tracking visual marker - a red marker appears in the scene to indicate gaze direction
    gaze_marker = VisualMarker(radius=0.03)
    s.import_object(gaze_marker)
    gaze_marker.set_position([0,0,1.5])
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path)
s.import_object(basket)
basket.set_position([1, 0.2, 1])
p.changeDynamics(basket.body_id, -1, mass=5)
mass_list = [5, 10, 100, 500]
mustard_start = [1, -0.2, 1]
mustard_list = []
for i in range(len(mass_list)):
    mustard = YCBObject('006_mustard_bottle')
    mustard_list.append(mustard)
    s.import_object(mustard)
    mustard.set_position([mustard_start[0], mustard_start[1] - i * 0.2, mustard_start[2]])
    p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
if optimize:
    s.optimize_vertex_and_texture()
# Start user close to counter for interaction
s.set_vr_offset([-0.5, 0.0, -0.5])
# State of can hiding, toggled by a menu press
hide_mustard = False
# Modify this path to save to different files
vr_log_path = 'vr_logs/vr_demo_save_actions.h5'
vr_right_hand_action_path = 'vr_hand/right'
vr_left_hand_action_path = 'vr_hand/left'
vr_menu_button_action_path = 'vr_menu_button'
vr_body_action_path = 'vr_body'
vr_reader = VRLogReader(log_filepath=vr_log_path)
# In this demo, we feed actions into the simulator and simulate
# everything else.
while vr_reader.get_data_left_to_read():
    # We need to read frame before step when doing replay
    vr_reader.read_frame(s, fullReplay=False)
    # We set fullReplay to false so we only simulate using actions
    s.step()
    # Contains validity [0], trans [1-3], orn [4-7], trig_frac [8], touch coordinates (x and y) [9-10]
    vr_rh_actions = vr_reader.read_action(vr_right_hand_action_path)
    vr_lh_actions = vr_reader.read_action(vr_left_hand_action_path)
    vr_menu_state = vr_reader.read_action(vr_menu_button_action_path)
    vr_body_actions = vr_reader.read_action(vr_body_action_path)
    # Set mustard hidden state based on recorded button action
    if vr_menu_state == 1:
        s.set_hidden_state(mustard_list[2], hide=True)
    elif vr_menu_state == 0:
        s.set_hidden_state(mustard_list[2], hide=False)
    # Move VR hands
    if vr_rh_actions[0] == 1.0:
        r_hand.move(vr_rh_actions[1:4], vr_rh_actions[4:8])
        r_hand.set_close_fraction(vr_rh_actions[8])
    if vr_lh_actions[0] == 1.0:
        l_hand.move(vr_lh_actions[1:4], vr_lh_actions[4:8])
        l_hand.set_close_fraction(vr_lh_actions[8])
    # Move VR body
    vr_body.set_position_orientation(vr_body_actions[0:3], vr_body_actions[3:7])
    # Get stored eye tracking data - this is an example of how to read values that are not actions from the VRLogReader
    eye_data = vr_reader.read_value('vr/vr_eye_tracking_data')
    is_eye_data_valid = eye_data[0]
    origin = eye_data[1:4]
    direction = eye_data[4:7]
    left_pupil_diameter = eye_data[7]
    right_pupil_diameter = eye_data[8]
    if is_eye_data_valid:
        # Move gaze marker based on eye tracking data
        updated_marker_pos = [origin[0] + direction[0], origin[1] + direction[1], origin[2] + direction[2]]
        gaze_marker.set_position(updated_marker_pos)
# We always need to call end_log_session() at the end of a VRLogReader session
vr_reader.end_log_session()

View File

@ -1,168 +0,0 @@
""" VR saving demo using simplified VR playground code.
This demo replays the actions of certain objects in the scene.
Note: This demo does not use PBR so it can be supported on a wide range of devices, including Mac OS.
This demo saves to vr_logs/vr_demo_save_states.h5
If you would like to replay the data, please run
vr_demo_replay using this file path as an input.
Run this demo if you would like to save your own data."""
import numpy as np
import os
import pybullet as p
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_logging import VRLogReader
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
# Playground configuration: edit this to change functionality
optimize = True
# Toggles fullscreen companion window
fullscreen = False
# Toggles SRAnipal eye tracking
use_eye_tracking = True
# Enables the VR collision body
enable_vr_body = True
# Toggles movement with the touchpad (to move outside of play area)
touchpad_movement = True
# Set to one of hmd, right_controller or left_controller to move relative to that device
relative_movement_device = 'hmd'
# Movement speed for touchpad-based movement
movement_speed = 0.03
# Whether we should hide a mustard bottle when the menu button is presed
hide_mustard_on_press = True
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', physics_timestep = 1/90.0, render_timestep = 1/90.0,
rendering_settings=MeshRendererSettings(optimized=optimize, fullscreen=fullscreen, enable_pbr=False),
vr_eye_tracking=use_eye_tracking, vr_mode=False)
scene = StaticIndoorScene('Placida')
s.import_scene(scene)
# Player body is represented by a translucent blue cylinder
if enable_vr_body:
    vr_body = VrBody()
    s.import_object(vr_body)
    # Note: we don't call init_body since we will be controlling the body directly through pos/orientation actions
# The hand can either be 'right' or 'left'
# It has enough friction to pick up the basket and the mustard bottles
r_hand = VrHand(hand='right')
s.import_object(r_hand)
# This sets the hand constraints so it can move with the VR controller
r_hand.set_start_state(start_pos=[0, 0, 1.5])
l_hand = VrHand(hand='left')
s.import_object(l_hand)
# This sets the hand constraints so it can move with the VR controller
l_hand.set_start_state(start_pos=[0, 0.5, 1.5])
if use_eye_tracking:
    # Eye tracking visual marker - a red marker appears in the scene to indicate gaze direction
    gaze_marker = VisualMarker(radius=0.03)
    s.import_object(gaze_marker)
    gaze_marker.set_position([0,0,1.5])
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path)
s.import_object(basket)
basket.set_position([1, 0.2, 1])
p.changeDynamics(basket.body_id, -1, mass=5)
mass_list = [5, 10, 100, 500]
mustard_start = [1, -0.2, 1]
mustard_list = []
for i in range(len(mass_list)):
    mustard = YCBObject('006_mustard_bottle')
    mustard_list.append(mustard)
    s.import_object(mustard)
    mustard.set_position([mustard_start[0], mustard_start[1] - i * 0.2, mustard_start[2]])
    p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
if optimize:
    s.optimize_vertex_and_texture()
# Start user close to counter for interaction
s.set_vr_offset([-0.5, 0.0, -0.5])
# State of can hiding, toggled by a menu press
hide_mustard = False
# Modify this path to save to different files
vr_log_path = 'vr_logs/vr_demo_save_actions.h5'
vr_right_hand_action_path = 'vr_hand/right'
vr_left_hand_action_path = 'vr_hand/left'
vr_menu_button_action_path = 'vr_menu_button'
vr_body_action_path = 'vr_body'
vr_reader = VRLogReader(log_filepath=vr_log_path)
# Record mustard positions/orientations and save to a text file to test determinism
mustard_data = []
# In this demo, we feed actions into the simulator and simulate
# everything else.
while vr_reader.get_data_left_to_read():
    # We set fullReplay to false so we only simulate using actions
    vr_reader.read_frame(s, fullReplay=False)
    s.step()
    # Save the mustard positions each frame to a text file
    mustard_pos = mustard_list[0].get_position()
    mustard_orn = mustard_list[0].get_orientation()
    mustard_data.append(np.array(mustard_pos + mustard_orn))
    # Contains validity [0], trans [1-3], orn [4-7], trig_frac [8], touch coordinates (x and y) [9-10]
    vr_rh_actions = vr_reader.read_action(vr_right_hand_action_path)
    vr_lh_actions = vr_reader.read_action(vr_left_hand_action_path)
    vr_menu_state = vr_reader.read_action(vr_menu_button_action_path)
    vr_body_actions = vr_reader.read_action(vr_body_action_path)
    # Set mustard hidden state based on recorded button action
    if vr_menu_state == 1:
        s.set_hidden_state(mustard_list[2], hide=True)
    elif vr_menu_state == 0:
        s.set_hidden_state(mustard_list[2], hide=False)
    # Move VR hands
    if vr_rh_actions[0] == 1.0:
        r_hand.move(vr_rh_actions[1:4], vr_rh_actions[4:8])
        r_hand.set_close_fraction(vr_rh_actions[8])
    if vr_lh_actions[0] == 1.0:
        l_hand.move(vr_lh_actions[1:4], vr_lh_actions[4:8])
        l_hand.set_close_fraction(vr_lh_actions[8])
    # Move VR body
    vr_body.set_position_orientation(vr_body_actions[0:3], vr_body_actions[3:7])
    # Get stored eye tracking data - this is an example of how to read values that are not actions from the VRLogReader
    eye_data = vr_reader.read_value('vr/vr_eye_tracking_data')
    is_eye_data_valid = eye_data[0]
    origin = eye_data[1:4]
    direction = eye_data[4:7]
    left_pupil_diameter = eye_data[7]
    right_pupil_diameter = eye_data[8]
    if is_eye_data_valid:
        # Move gaze marker based on eye tracking data
        updated_marker_pos = [origin[0] + direction[0], origin[1] + direction[1], origin[2] + direction[2]]
        gaze_marker.set_position(updated_marker_pos)
print('Mustard data information:')
print('Length of array: {}'.format(len(mustard_data)))
print('First element: {}'.format(mustard_data[0]))
# We always need to call end_log_session() at the end of a VRLogReader session
vr_reader.end_log_session()

View File

@ -1,103 +0,0 @@
""" VR replay demo using simplified VR playground code.
This demo replay the states of all objects in their entirety, and does
not involve any meaningful physical simulation.
Note: This demo does not use PBR so it can be supported on a wide range of devices, including Mac OS.
This demo reads logs from to vr_logs/vr_demo_save_states.h5
If you would like to replay your own data, please run
vr_demo_save_states and change the file path where data is recoded."""
import numpy as np
import os
import pybullet as p
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_logging import VRLogReader
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
# Playground configuration: edit this to change functionality
optimize = True
# Toggles fullscreen companion window
fullscreen = False
# Toggles SRAnipal eye tracking
use_eye_tracking = True
# Enables the VR collision body
enable_vr_body = True
# Toggles movement with the touchpad (to move outside of play area)
touchpad_movement = True
# Set to one of hmd, right_controller or left_controller to move relative to that device
relative_movement_device = 'hmd'
# Movement speed for touchpad-based movement
movement_speed = 0.03
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', physics_timestep = 1/90.0, render_timestep = 1/90.0,
rendering_settings=MeshRendererSettings(optimized=optimize, fullscreen=fullscreen, enable_pbr=False),
vr_eye_tracking=use_eye_tracking, vr_mode=False)
scene = StaticIndoorScene('Placida')
s.import_scene(scene)
# Player body is represented by a translucent blue cylinder
if enable_vr_body:
    vr_body = VrBody()
    s.import_object(vr_body)
    # Note: we don't call init_body for the VR body to avoid constraints interfering with the replay
# The hand can either be 'right' or 'left'
# It has enough friction to pick up the basket and the mustard bottles
r_hand = VrHand(hand='right')
s.import_object(r_hand)
# Note: we don't call set start state for the VR hands to avoid constraints interfering with the replay
l_hand = VrHand(hand='left')
s.import_object(l_hand)
# Note: we don't call set start state for the VR hands to avoid constraints interfering with the replay
if use_eye_tracking:
    # Eye tracking visual marker - a red marker appears in the scene to indicate gaze direction
    gaze_marker = VisualMarker(radius=0.03)
    s.import_object(gaze_marker)
    gaze_marker.set_position([0,0,1.5])
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path)
s.import_object(basket)
basket.set_position([1, 0.2, 1])
p.changeDynamics(basket.body_id, -1, mass=5)
mass_list = [5, 10, 100, 500]
mustard_start = [1, -0.2, 1]
mustard_list = []
for i in range(len(mass_list)):
    mustard = YCBObject('006_mustard_bottle')
    mustard_list.append(mustard)
    s.import_object(mustard)
    mustard.set_position([mustard_start[0], mustard_start[1] - i * 0.2, mustard_start[2]])
    p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
if optimize:
    s.optimize_vertex_and_texture()
# Start user close to counter for interaction
s.set_vr_offset([-0.5, 0.0, -0.5])
# Note: the VRLogReader plays back the demo at the recorded fps, so there is not need to set this
vr_log_path = 'vr_logs/vr_demo_save_states.h5'
vr_reader = VRLogReader(log_filepath=vr_log_path)
# The VR reader automatically shuts itself down and performs cleanup once the while loop has finished running
while vr_reader.get_data_left_to_read():
    # We need to read frame before step for various reasons - one of them is that we need to set the camera
    # matrix for this frame before rendering in step
    vr_reader.read_frame(s, fullReplay=True)
    s.step()

View File

@ -1,99 +0,0 @@
""" VR replay demo using simplified VR playground code.
This demo replay the states of all objects in their entirety, and does
not involve any meaningful physical simulation.
Note: This demo does not use PBR so it can be supported on a wide range of devices, including Mac OS.
This demo reads logs from to vr_logs/vr_demo_save_states.h5
If you would like to replay your own data, please run
vr_demo_save_states and change the file path where data is recoded."""
import numpy as np
import os
import pybullet as p
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_logging import VRLogReader
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
# Playground configuration: edit this to change functionality
optimize = True
# Toggles fullscreen companion window
fullscreen = False
# Toggles SRAnipal eye tracking
use_eye_tracking = True
# Enables the VR collision body
enable_vr_body = True
# Toggles movement with the touchpad (to move outside of play area)
touchpad_movement = True
# Set to one of hmd, right_controller or left_controller to move relative to that device
relative_movement_device = 'hmd'
# Movement speed for touchpad-based movement
movement_speed = 0.03
# Initialize simulator with specific rendering settings
s = Simulator(mode='simple', image_width=504, image_height=560,
rendering_settings=MeshRendererSettings(optimized=optimize, fullscreen=fullscreen, enable_pbr=False))
scene = StaticIndoorScene('Placida')
s.import_scene(scene)
# Player body is represented by a translucent blue cylinder
if enable_vr_body:
vr_body = VrBody()
s.import_object(vr_body)
# Note: we don't call init_body for the VR body to avoid constraints interfering with the replay
# The hand can either be 'right' or 'left'
# It has enough friction to pick up the basket and the mustard bottles
r_hand = VrHand(hand='right')
s.import_object(r_hand)
# Note: we don't call set start state for the VR hands to avoid constraints interfering with the replay
l_hand = VrHand(hand='left')
s.import_object(l_hand)
# Note: we don't call set start state for the VR hands to avoid constraints interfering with the replay
if use_eye_tracking:
# Eye tracking visual marker - a red marker appears in the scene to indicate gaze direction
gaze_marker = VisualMarker(radius=0.03)
s.import_object(gaze_marker)
gaze_marker.set_position([0,0,1.5])
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path)
s.import_object(basket)
basket.set_position([1, 0.2, 1])
p.changeDynamics(basket.body_id, -1, mass=5)
mass_list = [5, 10, 100, 500]
mustard_start = [1, -0.2, 1]
mustard_list = []
for i in range(len(mass_list)):
mustard = YCBObject('006_mustard_bottle')
mustard_list.append(mustard)
s.import_object(mustard)
mustard.set_position([mustard_start[0], mustard_start[1] - i * 0.2, mustard_start[2]])
p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
if optimize:
s.optimize_vertex_and_texture()
# Note: the VRLogReader plays back the demo at the recorded fps, so there is no need to set it here
vr_log_path = 'vr_logs/vr_demo_save_states.h5'
vr_reader = VRLogReader(log_filepath=vr_log_path)
# The VR reader automatically shuts itself down and performs cleanup once the while loop has finished running
while vr_reader.get_data_left_to_read():
# We need to read the frame before stepping - one reason is that the camera
# matrix for this frame must be set before rendering in step
vr_reader.read_frame(s, fullReplay=True)
s.step()

View File

@ -1,220 +0,0 @@
""" VR saving demo using simplified VR playground code.
This demo saves the actions of certain objects as well as states. Either can
be used for playback later in the replay demo.
Note: This demo does not use PBR so it can be supported on a wide range of devices, including Mac OS.
This demo saves to vr_logs/vr_demo_save_actions.h5
If you would like to replay the data, please run
vr_demo_replay using this file path as an input.
Run this demo if you would like to save your own data."""
import numpy as np
import os
import pybullet as p
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_logging import VRLogWriter
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
# Playground configuration: edit this to change functionality
optimize = True
# Toggles fullscreen companion window
fullscreen = False
# Toggles SRAnipal eye tracking
use_eye_tracking = True
# Enables the VR collision body
enable_vr_body = True
# Toggles movement with the touchpad (to move outside of play area)
touchpad_movement = True
# Set to one of hmd, right_controller or left_controller to move relative to that device
relative_movement_device = 'hmd'
# Movement speed for touchpad-based movement
movement_speed = 0.03
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', physics_timestep = 1/90.0, render_timestep = 1/90.0,
rendering_settings=MeshRendererSettings(optimized=optimize, fullscreen=fullscreen, enable_pbr=False),
vr_eye_tracking=use_eye_tracking, vr_mode=True)
scene = StaticIndoorScene('Placida')
s.import_scene(scene)
# Player body is represented by a translucent blue cylinder
if enable_vr_body:
vr_body = VrBody()
s.import_object(vr_body)
vr_body.init_body([0,0])
# The hand can either be 'right' or 'left'
# It has enough friction to pick up the basket and the mustard bottles
r_hand = VrHand(hand='right')
s.import_object(r_hand)
# This sets the hand constraints so it can move with the VR controller
r_hand.set_start_state(start_pos=[0, 0, 1.5])
l_hand = VrHand(hand='left')
s.import_object(l_hand)
# This sets the hand constraints so it can move with the VR controller
l_hand.set_start_state(start_pos=[0, 0.5, 1.5])
if use_eye_tracking:
# Eye tracking visual marker - a red marker appears in the scene to indicate gaze direction
gaze_marker = VisualMarker(radius=0.03)
s.import_object(gaze_marker)
gaze_marker.set_position([0,0,1.5])
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path)
s.import_object(basket)
basket.set_position([1, 0.2, 1])
p.changeDynamics(basket.body_id, -1, mass=5)
mass_list = [5, 10, 100, 500]
mustard_start = [1, -0.2, 1]
mustard_list = []
for i in range(len(mass_list)):
mustard = YCBObject('006_mustard_bottle')
mustard_list.append(mustard)
s.import_object(mustard)
mustard.set_position([mustard_start[0], mustard_start[1] - i * 0.2, mustard_start[2]])
p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
if optimize:
s.optimize_vertex_and_texture()
# Start user close to counter for interaction
s.set_vr_offset([-0.5, 0.0, -0.5])
# Modify this path to save to different files
vr_log_path = 'vr_logs/vr_demo_save_actions.h5'
# Saves every 2 seconds or so (200 / 90fps is approx 2 seconds)
vr_writer = VRLogWriter(frames_before_write=200, log_filepath=vr_log_path, profiling_mode=True)
# Register all actions. In this demo we register the following actions:
# Save Vr hand transform, validity and trigger fraction for each hand
# action->vr_hand->right/left (dataset)
# Total size of numpy array: 1 (validity) + 3 (pos) + 4 (orn) + 1 (trig_frac) + 2 (touch coordinates) = (11,)
vr_right_hand_action_path = 'vr_hand/right'
vr_writer.register_action(vr_right_hand_action_path, (11,))
vr_left_hand_action_path = 'vr_hand/left'
vr_writer.register_action(vr_left_hand_action_path, (11,))
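# Illustrative layout of a single hand action frame in the (11,) format above (values are hypothetical):
# [1.0,                  validity (1 = valid, 0 = invalid)
#  0.2, -0.1, 1.4,       position (x, y, z)
#  0.0, 0.0, 0.0, 1.0,   orientation quaternion (x, y, z, w)
#  0.8,                  trigger close fraction
#  0.25, -0.4]           touchpad coordinates (x, y)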
# Save the menu button state so we can replay hiding the mustard bottle
vr_menu_button_action_path = 'vr_menu_button'
# We will save the state - 1 is pressed, 0 is not pressed (-1 indicates no data for the given frame)
vr_writer.register_action(vr_menu_button_action_path, (1,))
# Save body position and orientation as an action - replaying the VR body from raw VR data is
# quite involved, so we record its position and orientation directly as an action
vr_body_action_path = 'vr_body'
# Total size of numpy array: 3 (pos) + 4 (orn) = (7,)
vr_writer.register_action(vr_body_action_path, (7,))
# Call set_up_data_storage once all actions have been registered - it must be called
# before the main loop even if no actions are registered
vr_writer.set_up_data_storage()
# Main simulation loop
for i in range(3000):
# We save the right controller menu press that hides/unhides the mustard - this can be replayed
# VR button data is saved by default, so we don't need to make it an action
# Please see utils/vr_logging.py for more details on what is saved by default for the VR system
# In this example, the mustard is visible until the user presses the menu button, and then is toggled
# on/off depending on whether the menu is pressed or unpressed
event_list = s.poll_vr_events()
for event in event_list:
device_type, event_type = event
if device_type == 'right_controller':
if event_type == 'menu_press':
# Toggle mustard hidden state
s.set_hidden_state(mustard_list[2], hide=True)
vr_writer.save_action(vr_menu_button_action_path, np.array([1]))
elif event_type == 'menu_unpress':
s.set_hidden_state(mustard_list[2], hide=False)
vr_writer.save_action(vr_menu_button_action_path, np.array([0]))
# Step the simulator - this needs to be done every frame to actually run the simulation
s.step()
# VR device data
hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
# VR button data
l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
# Create actions and save them
vr_right_hand_data = [1.0 if r_is_valid else 0.0]
vr_right_hand_data.extend(r_trans)
vr_right_hand_data.extend(r_rot)
vr_right_hand_data.append(r_trig)
vr_right_hand_data.extend([r_touch_x, r_touch_y])
vr_right_hand_data = np.array(vr_right_hand_data)
vr_writer.save_action(vr_right_hand_action_path, vr_right_hand_data)
vr_left_hand_data = [1.0 if l_is_valid else 0.0]
vr_left_hand_data.extend(l_trans)
vr_left_hand_data.extend(l_rot)
vr_left_hand_data.append(l_trig)
vr_left_hand_data.extend([l_touch_x, l_touch_y])
vr_left_hand_data = np.array(vr_left_hand_data)
vr_writer.save_action(vr_left_hand_action_path, vr_left_hand_data)
vr_body_data = list(vr_body.get_position())
vr_body_data.extend(vr_body.get_orientation())
vr_body_data = np.array(vr_body_data)
vr_writer.save_action(vr_body_action_path, vr_body_data)
# VR eye tracking data
if use_eye_tracking:
is_eye_data_valid, origin, dir, left_pupil_diameter, right_pupil_diameter = s.get_eye_tracking_data()
if is_eye_data_valid:
# Move gaze marker based on eye tracking data
updated_marker_pos = [origin[0] + dir[0], origin[1] + dir[1], origin[2] + dir[2]]
gaze_marker.set_position(updated_marker_pos)
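# (This places the marker one unit along the gaze direction from the eye origin, assuming dir is a unit vector.)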
if r_is_valid:
r_hand.move(r_trans, r_rot)
r_hand.set_close_fraction(r_trig)
if enable_vr_body:
# See VrBody class for more details on this method
vr_body.move_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device)
else:
# Right hand used to control movement
# Move VR system based on device coordinate system and touchpad press location
move_player_no_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device)
# Trigger haptic pulse on right touchpad, modulated by trigger close fraction
# Close the trigger to create a stronger pulse
# Note: the trigger has a close fraction of 0.05 when open, so cut off haptic input under 0.1
# to avoid constant rumbling
s.trigger_haptic_pulse('right_controller', r_trig if r_trig > 0.1 else 0)
if l_is_valid:
l_hand.move(l_trans, l_rot)
l_hand.set_close_fraction(l_trig)
s.trigger_haptic_pulse('left_controller', l_trig if l_trig > 0.1 else 0)
# Record this frame's data in the VRLogWriter
vr_writer.process_frame(s)
# Note: always call this after the simulation is over to close the log file
# and clean up resources used.
vr_writer.end_log_session()
s.disconnect()

View File

@ -1,159 +0,0 @@
""" VR saving demo using simplified VR playground code.
This demo saves the states of all objects in their entirety. The replay
resulting from this is completely controlled by the saved state data, and does
not involve any meaningful physical simulation.
Note: This demo does not use PBR so it can be supported on a wide range of devices, including Mac OS.
This demo saves to vr_logs/vr_demo_save_states.h5
If you would like to replay the data, please run
vr_demo_replay using this file path as an input.
Run this demo if you would like to save your own data."""
import numpy as np
import os
import pybullet as p
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_logging import VRLogWriter
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
# Playground configuration: edit this to change functionality
optimize = True
# Toggles fullscreen companion window
fullscreen = False
# Toggles SRAnipal eye tracking
use_eye_tracking = True
# Enables the VR collision body
enable_vr_body = True
# Toggles movement with the touchpad (to move outside of play area)
touchpad_movement = True
# Set to one of hmd, right_controller or left_controller to move relative to that device
relative_movement_device = 'hmd'
# Movement speed for touchpad-based movement
movement_speed = 0.03
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', physics_timestep = 1/90.0, render_timestep = 1/90.0,
rendering_settings=MeshRendererSettings(optimized=optimize, fullscreen=fullscreen, enable_pbr=False),
vr_eye_tracking=use_eye_tracking, vr_mode=True)
scene = StaticIndoorScene('Placida')
s.import_scene(scene)
# Player body is represented by a translucent blue cylinder
if enable_vr_body:
vr_body = VrBody()
s.import_object(vr_body)
vr_body.init_body([0,0])
# The hand can either be 'right' or 'left'
# It has enough friction to pick up the basket and the mustard bottles
r_hand = VrHand(hand='right')
s.import_object(r_hand)
# This sets the hand constraints so it can move with the VR controller
r_hand.set_start_state(start_pos=[0, 0, 1.5])
l_hand = VrHand(hand='left')
s.import_object(l_hand)
# This sets the hand constraints so it can move with the VR controller
l_hand.set_start_state(start_pos=[0, 0.5, 1.5])
if use_eye_tracking:
# Eye tracking visual marker - a red marker appears in the scene to indicate gaze direction
gaze_marker = VisualMarker(radius=0.03)
s.import_object(gaze_marker)
gaze_marker.set_position([0,0,1.5])
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path)
s.import_object(basket)
basket.set_position([1, 0.2, 1])
p.changeDynamics(basket.body_id, -1, mass=5)
mass_list = [5, 10, 100, 500]
mustard_start = [1, -0.2, 1]
mustard_list = []
for i in range(len(mass_list)):
mustard = YCBObject('006_mustard_bottle')
mustard_list.append(mustard)
s.import_object(mustard)
mustard.set_position([mustard_start[0], mustard_start[1] - i * 0.2, mustard_start[2]])
p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
if optimize:
s.optimize_vertex_and_texture()
# Start user close to counter for interaction
s.set_vr_offset([-0.5, 0.0, -0.5])
# Modify this path to save to different files
vr_log_path = 'vr_logs/vr_demo_save_states.h5'
# Saves every 2 seconds or so (200 / 90fps is approx 2 seconds)
vr_writer = VRLogWriter(frames_before_write=200, log_filepath=vr_log_path, profiling_mode=True)
# Call set_up_data_storage once all actions have been registered (in this demo we only save states so there are none)
# Despite having no actions, we need to call this function
vr_writer.set_up_data_storage()
# Main simulation loop
for i in range(3000):
# Step the simulator - this needs to be done every frame to actually run the simulation
s.step()
# VR device data
hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
# VR button data
l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
# VR eye tracking data
if use_eye_tracking:
is_eye_data_valid, origin, dir, left_pupil_diameter, right_pupil_diameter = s.get_eye_tracking_data()
if is_eye_data_valid:
# Move gaze marker based on eye tracking data
updated_marker_pos = [origin[0] + dir[0], origin[1] + dir[1], origin[2] + dir[2]]
gaze_marker.set_position(updated_marker_pos)
if r_is_valid:
r_hand.move(r_trans, r_rot)
r_hand.set_close_fraction(r_trig)
if enable_vr_body:
# See VrBody class for more details on this method
vr_body.move_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device)
else:
# Right hand used to control movement
# Move VR system based on device coordinate system and touchpad press location
move_player_no_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device)
# Trigger haptic pulse on right touchpad, modulated by trigger close fraction
# Close the trigger to create a stronger pulse
# Note: the trigger has a close fraction of 0.05 when open, so cut off haptic input under 0.1
# to avoid constant rumbling
s.trigger_haptic_pulse('right_controller', r_trig if r_trig > 0.1 else 0)
if l_is_valid:
l_hand.move(l_trans, l_rot)
l_hand.set_close_fraction(l_trig)
s.trigger_haptic_pulse('left_controller', l_trig if l_trig > 0.1 else 0)
# Record this frame's data in the VRLogWriter
vr_writer.process_frame(s)
# Note: always call this after the simulation is over to close the log file
# and clean up resources used.
vr_writer.end_log_session()
s.disconnect()

View File

@ -0,0 +1,141 @@
""" VR saving/replay demo.
This demo saves the states of all objects in their entirety. The replay
resulting from this is completely controlled by the saved state data, and does
not involve any meaningful physical simulation.
Usage:
python vr_states_sr.py --mode=[save/replay]
This demo saves to vr_logs/vr_states_sr.h5
Run this demo (and also change the filename) if you would like to save your own data."""
import argparse
import numpy as np
import os
import pybullet as p
import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.render.mesh_renderer.mesh_renderer_vr import VrSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.object_base import Object
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrAgent
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_logging import VRLogReader, VRLogWriter
from gibson2 import assets_path
# Number of seconds to run the data saving for
DATA_SAVE_RUNTIME = 30
# Set to false to load entire Rs_int scene
LOAD_PARTIAL = True
# Set to true to print out render, physics and overall frame FPS
PRINT_FPS = False
def run_state_sr(mode):
"""
Runs state save/replay. Mode can either be save or replay.
"""
assert mode in ['save', 'replay']
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=True,
fullscreen=False,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
# VR system settings
# Change use_vr to toggle VR mode on/off
vr_settings = VrSettings(use_vr=(mode == 'save'))
s = Simulator(mode='vr',
rendering_settings=vr_rendering_settings,
vr_settings=vr_settings)
scene = InteractiveIndoorScene('Rs_int')
# Turn this on when debugging to speed up loading
if LOAD_PARTIAL:
scene._set_first_n_objects(10)
s.import_ig_scene(scene)
# Create a VrAgent - it will handle all initialization and importing under the hood
vr_agent = VrAgent(s, use_constraints=(mode == 'save'))
# Objects to interact with
mass_list = [5, 10, 100, 500]
mustard_start = [-1, 1.55, 1.2]
for i in range(len(mass_list)):
mustard = YCBObject('006_mustard_bottle')
s.import_object(mustard, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
mustard.set_position([mustard_start[0] + i * 0.2, mustard_start[1], mustard_start[2]])
p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
s.optimize_vertex_and_texture()
if vr_settings.use_vr:
# Since vr_height_offset is set, we will use the VR HMD true height plus this offset instead of the third entry of the start pos
s.set_vr_start_pos([0, 0, 0], vr_height_offset=-0.1)
# Note: Modify this path to save to different files
vr_log_path = 'vr_logs/vr_states_sr.h5'
if mode == 'save':
# Saves every 2 seconds or so (200 / 90fps is approx 2 seconds)
vr_writer = VRLogWriter(frames_before_write=200, log_filepath=vr_log_path, profiling_mode=True)
# Call set_up_data_storage once all actions have been registered (in this demo we only save states so there are none)
# Despite having no actions, we need to call this function
vr_writer.set_up_data_storage()
else:
vr_reader = VRLogReader(log_filepath=vr_log_path)
if mode == 'save':
start_time = time.time()
# Main simulation loop - run for as long as the user specified
while (time.time() - start_time < DATA_SAVE_RUNTIME):
s.step(print_time=PRINT_FPS)
# Example of querying VR events to hide object
if s.query_vr_event('right_controller', 'touchpad_press'):
s.set_hidden_state(mustard, hide=not s.get_hidden_state(mustard))
# Update VR objects
vr_agent.update()
# Record this frame's data in the VRLogWriter
vr_writer.process_frame(s)
# Note: always call this after the simulation is over to close the log file
# and clean up resources used.
vr_writer.end_log_session()
else:
# The VR reader automatically shuts itself down and performs cleanup once the while loop has finished running
while vr_reader.get_data_left_to_read():
s.step()
vr_reader.read_frame(s, fullReplay=True)
s.disconnect()
if __name__ == "__main__":
parser = argparse.ArgumentParser(description='VR state saving and replay demo')
parser.add_argument('--mode', default='save', help='Mode to run in: either save or replay')
args = parser.parse_args()
run_state_sr(mode=args.mode)

View File

@ -1,9 +1,12 @@
"""Client code that connects to server, syncs iGibson data and renders to VR."""
from collections import defaultdict
import numpy as np
import time
from gibson2.render.mesh_renderer.mesh_renderer_cpu import Instance, InstanceGroup
from gibson2.utils.vr_utils import calc_offset
from PodSixNet.Connection import connection, ConnectionListener
@ -17,16 +20,20 @@ class IGVRClient(ConnectionListener):
self.is_connected = False
self.Connect((host, port))
self.is_connected = True
# Client stores its offset that will be used in server-based calculations
self.vr_offset = [0, 0, 0]
print("IGVRClient started")
def register_sim_renderer(self, sim):
def register_data(self, sim, client_agent):
"""
Register the simulator and renderer which the clients need to render
:param renderer: the renderer from which we extract visual data
Register the simulator and renderer from which the server will collect frame data.
Also stores client_agent for VrAgent computations.
"""
self.s = sim
self.renderer = sim.renderer
self.client_agent = client_agent
self.vr_device = '{}_controller'.format(self.s.vr_settings.movement_controller)
self.devices = ['left_controller', 'right_controller', 'hmd']
# Custom server callbacks
def Network_syncframe(self, data):
@ -61,9 +68,16 @@ class IGVRClient(ConnectionListener):
instance.poses_trans = poses_trans
instance.poses_rot = poses_rot
# Then render the frame
# Render the frame in VR
self.s.viewer.update()
if self.s.can_access_vr_context:
self.s.poll_vr_events()
# Sets the VR starting position if one has been specified by the user
self.s.perform_vr_start_pos_move()
# Update VR offset so updated value can be used in server
self.client_agent.update_frame_offset()
# Standard methods for networking diagnostics
def Network_connected(self, data):
print("Connected to the server")
@ -77,20 +91,66 @@ class IGVRClient(ConnectionListener):
def Network_disconnected(self, data):
print("Server disconnected")
exit()
# Methods for handling VR data
def generate_vr_data(self):
"""
Generates all the VR data that the server needs to operate:
Controller/HMD: valid, trans, rot, right, up, forward coordinate directions
Controller: + trig_frac, touch_x, touch_y
Eye tracking: valid, origin, dir, l_pupil_diameter, r_pupil_diameter
Events: list of all events from simulator (each event is a tuple of device type, event type)
Current vr position
Vr settings
"""
if not self.s.can_access_vr_context:
return []
# Store all data in a dictionary to be sent to the server
vr_data_dict = defaultdict(list)
for device in self.devices:
device_data = []
is_valid, trans, rot = self.s.get_data_for_vr_device(device)
device_data.extend([is_valid, trans.tolist(), rot.tolist()])
device_data.extend(self.s.get_device_coordinate_system(device))
if device in ['left_controller', 'right_controller']:
device_data.extend(self.s.get_button_data_for_controller(device))
vr_data_dict[device] = device_data
vr_data_dict['eye_data'] = self.s.get_eye_tracking_data()
vr_data_dict['event_data'] = self.s.poll_vr_events()
vr_data_dict['vr_pos'] = self.s.get_vr_pos().tolist()
f_vr_offset = [float(self.vr_offset[0]), float(self.vr_offset[1]), float(self.vr_offset[2])]
vr_data_dict['vr_offset'] = f_vr_offset
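# Note: the offset is converted to plain Python floats above, presumably so it serializes cleanly when sent over the network.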
# Note: eye tracking is enabled by default
vr_data_dict['vr_settings'] = [
self.s.vr_settings.touchpad_movement,
self.s.vr_settings.movement_controller,
self.s.vr_settings.relative_movement_device,
self.s.vr_settings.movement_speed
]
return dict(vr_data_dict)
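# Rough shape of the dictionary returned above (field values abbreviated/hypothetical):
# {
#   'hmd':              [is_valid, [x, y, z], [qx, qy, qz, qw], right, up, forward],
#   'left_controller':  [is_valid, trans, rot, right, up, forward, trig_frac, touch_x, touch_y],
#   'right_controller': [is_valid, trans, rot, right, up, forward, trig_frac, touch_x, touch_y],
#   'eye_data':         [is_valid, origin, dir, left_pupil_diameter, right_pupil_diameter],
#   'event_data':       [(device_type, event_type), ...],
#   'vr_pos':           [x, y, z],
#   'vr_offset':        [x, y, z],
#   'vr_settings':      [touchpad_movement, movement_controller, relative_movement_device, movement_speed]
# }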
# Methods for interacting with the server
def refresh_frame_data(self):
"""
Refreshes frame data that was sent from the server.
"""
# TODO: Is double-pumping causing an issue?
#print("Refresh time: {}".format(time.time()))
if self.is_connected:
connection.Pump()
self.Pump()
def send_vr_data(self, vr_data):
def send_vr_data(self):
"""
Sends vr data over to the server.
Generates and sends vr data over to the server.
"""
#print("Send time: {}".format(time.time()))
# First generate VR data
vr_data = self.generate_vr_data()
# Send to a server if connected
if self.is_connected:
self.Send({"action":"vrdata", "vr_data":vr_data})
connection.Pump()

View File

@ -2,9 +2,10 @@
import numpy as np
from time import sleep
import time
from gibson2.render.mesh_renderer.mesh_renderer_cpu import Instance, InstanceGroup
from gibson2.utils.vr_utils import VrData
from PodSixNet.Channel import Channel
from PodSixNet.Server import Server
@ -55,31 +56,36 @@ class IGVRServer(Server):
print('IGVR server launched!')
# This server manages a single vr client
self.vr_client = None
# Single VrData object that gets refreshed every frame
# This is used to update the client agent's server-side VR data
self.vr_data_persistent = None
self.last_comm_time = time.time()
def register_sim_renderer(self, sim):
def has_client(self):
"""
Register the simulator and renderer from which the server will collect frame data
Returns whether the server has a client connected.
"""
return self.vr_client is not None
:param renderer: the renderer from which we extract visual data
def register_data(self, sim, client_agent):
"""
Register the simulator and renderer and VrAgent objects from which the server will collect frame data
"""
self.s = sim
self.renderer = sim.renderer
self.client_agent = client_agent
def register_vr_objects(self, vr_objects):
"""
Register the list of vr objects whose transform data the client will send over each frame.
"""
self.vr_objects = vr_objects
def update_vr_objects(self, vr_data):
def update_client_vr_data(self, vr_data):
"""
Updates VR objects based on data sent by the client. This function is called asynchronously
from Network_vrdata, which is first invoked by the client channel.
"""
# TODO: Extend this to work with all the other VR objects, including left hand, body and gaze marker
right_hand_pos = vr_data['right_hand'][0]
right_hand_orn = vr_data['right_hand'][1]
self.vr_objects['right_hand'].move(right_hand_pos, right_hand_orn)
time_since_last_comm = time.time() - self.last_comm_time
self.last_comm_time = time.time()
print("Time since last comm: {}".format(time_since_last_comm))
print("Comm fps: {}".format(1/max(0.0001, time_since_last_comm)))
# Set new VR data object - this is used each frame to update the client agent
self.vr_data_persistent = vr_data
def Connected(self, channel, addr):
"""
@ -87,7 +93,7 @@ class IGVRServer(Server):
"""
print("New connection:", channel)
self.vr_client = channel
self.vr_client.set_vr_data_callback(self.update_vr_objects)
self.vr_client.set_vr_data_callback(self.update_client_vr_data)
def generate_frame_data(self):
"""
@ -121,7 +127,7 @@ class IGVRServer(Server):
Pumps the server to refresh incoming/outgoing connections.
"""
self.Pump()
frame_data = self.generate_frame_data()
if self.vr_client:
frame_data = self.generate_frame_data()
self.vr_client.send_frame_data(frame_data)

View File

@ -1,8 +1,5 @@
""" Multi-user VR demo. Always start server running before the client.
TODO: Add more detail in description!
TODO: Upgrade to use PBR scenes in future!
Usage: python muvr_demo.py --mode=[server or client] --host=[localhost or ip address] --port=[valid port number]
"""
@ -12,11 +9,12 @@ import os
import pybullet as p
import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.render.mesh_renderer.mesh_renderer_vr import VrSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.vr_objects import VrAgent
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2 import assets_path
@ -25,80 +23,111 @@ from gibson2 import assets_path
from igvr_server import IGVRServer
from igvr_client import IGVRClient
# TODO: Add functions to set up a simple scene here!
# TODO: Then add the data transfer using the IGVR libraries
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
def run_muvr(mode='server', host='localhost', port='8887'):
# Only load in first few objects in Rs to decrease load times
LOAD_PARTIAL = True
# Whether to print FPS each frame
PRINT_FPS = False
# Note: This is where the VR configuration for the MUVR experience can be changed.
RUN_SETTINGS = {
'client': VrSettings(use_vr=False),
'server': VrSettings(use_vr=True)
}
def run_muvr(mode='server', host='localhost', port='8885'):
"""
Sets up the iGibson environment that will be used by both server and client
TODO: Add descriptions for arguments
"""
print('INFO: Running MUVR {} at {}:{}'.format(mode, host, port))
# This function only runs if mode is one of server or client, so setting this bool is safe
is_server = mode == 'server'
vr_mode = False
print_fps = False
vr_rendering_settings = MeshRendererSettings(optimized=True, fullscreen=False, enable_pbr=False)
s = Simulator(mode='vr',
rendering_settings=vr_rendering_settings,
vr_eye_tracking=True,
vr_mode=vr_mode)
vr_settings = RUN_SETTINGS[mode]
# Load scene
scene = StaticIndoorScene('Placida')
s.import_scene(scene)
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
if not vr_mode:
camera_pose = np.array([0, 0, 1.2])
view_direction = np.array([1, 0, 0])
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=True,
fullscreen=False,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
s = Simulator(mode='vr',
rendering_settings=vr_rendering_settings,
vr_settings=vr_settings)
scene = InteractiveIndoorScene('Rs_int')
if LOAD_PARTIAL:
scene._set_first_n_objects(10)
s.import_ig_scene(scene)
# Default camera for non-VR MUVR users
if not vr_settings.use_vr:
camera_pose = np.array([0, -3, 1.2])
view_direction = np.array([0, 1, 0])
s.renderer.set_camera(camera_pose, camera_pose + view_direction, [0, 0, 1])
s.renderer.set_fov(90)
r_hand = VrHand(hand='right')
s.import_object(r_hand)
# This sets the hand constraints so it can move with the VR controller
r_hand.set_start_state(start_pos=[0.6, 0, 1])
# Spawn two agents - one for client and one for the server
# The client loads the agents in with MUVR set to true - this allows the VrAgent to
# be set up just for rendering, with no physics or constraints
client_agent = VrAgent(s, agent_num=1)
server_agent = VrAgent(s, agent_num=2)
# Import 4 mustard bottles
# Objects to interact with
mass_list = [5, 10, 100, 500]
mustard_start = [1, -0.2, 1]
m = None
mustard_start = [-1, 1.55, 1.2]
for i in range(len(mass_list)):
m = mustard = YCBObject('006_mustard_bottle')
s.import_object(mustard)
mustard.set_position([mustard_start[0], mustard_start[1] - i * 0.2, mustard_start[2]])
mustard = YCBObject('006_mustard_bottle')
s.import_object(mustard, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
mustard.set_position([mustard_start[0] + i * 0.2, mustard_start[1], mustard_start[2]])
p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
# Optimize data before rendering
s.optimize_vertex_and_texture()
# Store vr objects in a structure that can be accessed by IGVRServer
vr_objects = {
'right_hand': r_hand
}
# Start the two agents at different points so they don't collide upon entering the scene
if vr_settings.use_vr:
s.set_vr_start_pos([0.5, 0 if is_server else -1.5, 0], vr_height_offset=-0.1)
# Setup client/server
if is_server:
vr_server = IGVRServer(localaddr=(host, port))
vr_server.register_sim_renderer(s)
vr_server.register_vr_objects(vr_objects)
vr_server.register_data(s, client_agent)
else:
vr_client = IGVRClient(host, port)
vr_client.register_sim_renderer(s)
# Disconnect pybullet since client only renders
vr_client.register_data(s, client_agent)
# Disconnect pybullet since the client only renders
s.disconnect_pybullet()
# Run main networking/rendering/physics loop
sin_accumulator = 0
run_start_time = time.time()
while True:
start_time = time.time()
if is_server:
# Server is the one that steps the physics simulation, not the client
s.step()
# Only step the server if a client has been connected
if vr_server.has_client():
# Server is the one that steps the physics simulation, not the client
s.step(print_time=PRINT_FPS)
# TODO: Remove jittery mustard
m.set_position([1, -0.8 + float(np.sin(sin_accumulator)) / 2.0, 1])
# Update VR agent on server-side
if s.vr_settings.use_vr:
server_agent.update()
# Need to update client agent every frame, even if VR data is stale
if vr_server.vr_data_persistent:
client_agent.update(vr_server.vr_data_persistent)
# Send the current frame to be rendered by the client,
# and also ingest new client data
@ -109,20 +138,8 @@ def run_muvr(mode='server', host='localhost', port='8887'):
# Note: the rendering happens asynchronously when a callback inside the vr_client is triggered (after being sent a frame)
vr_client.refresh_frame_data()
# 2) Query VR data
# TODO: Actually query the VR system for data here
# This mock data will move the hand around its center position
mock_vr_data = {
'right_hand': [[0.6, 0 + float(np.sin(sin_accumulator)) / 2.0, 1], [0, 0, 0, 1]]
}
# 3) Send VR data over to the server
vr_client.send_vr_data(mock_vr_data)
if print_fps:
# Display a max of 500 fps if delta time gets too close to 0
print('Fps: {}'.format(round(1/max(time.time() - start_time, 1/500.0), 2)))
sin_accumulator += 0.00005
# 2) Generate VR data and send over to the server
vr_client.send_vr_data()
# Disconnect at end of server session
if is_server:
@ -133,7 +150,7 @@ if __name__ == "__main__":
parser = argparse.ArgumentParser(description='Multi-user VR demo that can be run in server and client mode.')
parser.add_argument('--mode', default='server', help='Mode to run in: either server or client')
parser.add_argument('--host', default='localhost', help='Host to connect to - eg. localhost or an IP address')
parser.add_argument('--port', default='8887', help='Port to connect to - eg. 8887')
parser.add_argument('--port', default='8885', help='Port to connect to - eg. 8885')
args = parser.parse_args()
try:
port = int(args.port)

View File

@ -4,183 +4,88 @@ import numpy as np
import os
import pybullet as p
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.render.mesh_renderer.mesh_renderer_vr import VrSettings
from gibson2.robots.fetch_vr_robot import FetchVR
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.vr_objects import VrGazeMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.utils import parse_config
from gibson2.utils.vr_utils import move_player_no_body
from gibson2.utils.vr_utils import move_player
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
fetch_config = parse_config(os.path.join('..', '..', '..', 'configs', 'fetch_p2p_nav.yaml'))
# Playground configuration: edit this to change functionality
optimize = True
# Toggles SRAnipal eye tracking
use_eye_tracking = True
fetch_config = parse_config(os.path.join('..', '..', '..', 'configs', 'fetch_reaching.yaml'))
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', physics_timestep = 1/90.0, render_timestep = 1/90.0,
rendering_settings=MeshRendererSettings(optimized=optimize, fullscreen=False, enable_pbr=False),
vr_eye_tracking=use_eye_tracking, vr_mode=True)
scene = StaticIndoorScene('Placida')
s.import_scene(scene)
# Set to false to load entire Rs_int scene
LOAD_PARTIAL = True
# Set to true to print out render, physics and overall frame FPS
PRINT_FPS = False
# Set to false to just use FetchVR in non-VR mode
VR_MODE = False
# TODO: Change this to VR fetch!
fvr = FetchVR(fetch_config)
s.import_robot(fvr)
# Set differential drive to control wheels
fvr.set_position([0,-1.5,0])
fvr.robot_specific_reset()
fvr.keep_still()
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# Load robot end-effector-tracker
effector_marker = VisualMarker(rgba_color = [1, 0, 1, 0.2], radius=0.05)
s.import_object(effector_marker)
# Hide marker upon initialization
effector_marker.set_position([0,0,-5])
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=True,
fullscreen=False,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
if VR_MODE:
s = Simulator(mode='vr',
rendering_settings=vr_rendering_settings,
vr_settings=VrSettings())
else:
s = Simulator(mode='iggui', image_width=960,
image_height=720, device_idx=0, rendering_settings=vr_rendering_settings)
s.viewer.min_cam_z = 1.0
if use_eye_tracking:
# Eye tracking visual marker - a red marker appears in the scene to indicate gaze direction
gaze_marker = VisualMarker(radius=0.03)
s.import_object(gaze_marker)
gaze_marker.set_position([0,0,1.5])
scene = InteractiveIndoorScene('Rs_int')
# Turn this on when debugging to speed up loading
if LOAD_PARTIAL:
scene._set_first_n_objects(10)
s.import_ig_scene(scene)
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path)
s.import_object(basket)
basket.set_position([1, 0.2, 1])
p.changeDynamics(basket.body_id, -1, mass=5)
# Import FetchVR robot - the class handles importing and setup itself
fvr = FetchVR(fetch_config, s, [0.5, -1.5, 0], update_freq=1)
# Gaze marker to visualize where the user is looking
gm = VrGazeMarker(s)
# Objects to interact with
mass_list = [5, 10, 100, 500]
mustard_start = [1, -0.2, 1]
mustard_list = []
mustard_start = [-1, 1.55, 1.2]
for i in range(len(mass_list)):
mustard = YCBObject('006_mustard_bottle')
mustard_list.append(mustard)
s.import_object(mustard)
mustard.set_position([mustard_start[0], mustard_start[1] - i * 0.2, mustard_start[2]])
s.import_object(mustard, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
mustard.set_position([mustard_start[0] + i * 0.2, mustard_start[1], mustard_start[2]])
p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
if optimize:
s.optimize_vertex_and_texture()
s.optimize_vertex_and_texture()
fetch_height = 1.2
wheel_axle_half = 0.18738 # half of the distance between the wheels
wheel_radius = 0.054 # radius of the wheels themselves
r_wheel_joint = fvr.ordered_joints[0]
l_wheel_joint = fvr.ordered_joints[1]
fetch_lin_vel_multiplier = 100
# Variables used in IK to move end effector
fetch_body_id = fvr.get_fetch_body_id()
fetch_joint_num = p.getNumJoints(fetch_body_id)
effector_link_id = 19
# Setting to determine whether IK should also solve for end effector orientation
# based on the VR controller
solve_effector_orn = True
# Update frequency - number of frames before update
# TODO: Play around with this
update_freq = 1
frame_num = 0
# Main simulation loop
while True:
s.step()
hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
# Fetch only has one arm which is entirely controlled by the right hand
# TODO: Use left arm for movement?
r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
# Set fetch orientation directly from HMD to avoid lag when turning and resultant motion sickness
fvr.set_z_rotation(hmd_rot)
# Get world position and fetch position
hmd_world_pos = s.get_hmd_world_pos()
fetch_pos = fvr.get_position()
# Calculate x and y offset to get to fetch position
# z offset is to the desired hmd height, corresponding to fetch head height
offset_to_fetch = [fetch_pos[0] - hmd_world_pos[0],
fetch_pos[1] - hmd_world_pos[1],
fetch_height - hmd_world_pos[2]]
s.set_vr_offset(offset_to_fetch)
# TODO: Consolidate this functionality into the FetchVR class
# Update fetch arm at user-defined frequency
if r_is_valid and frame_num % 10 == 0:
effector_marker.set_position(r_trans)
effector_marker.set_orientation(r_rot)
# Linear velocity is relative to the current direction Fetch is pointing,
# so we only need to know how fast to travel in that direction (the touchpad Y axis is used for this)
lin_vel = fetch_lin_vel_multiplier * r_touch_y
ang_vel = 0
left_wheel_ang_vel = (lin_vel - ang_vel * wheel_axle_half) / wheel_radius
right_wheel_ang_vel = (lin_vel + ang_vel * wheel_axle_half) / wheel_radius
l_wheel_joint.set_motor_velocity(left_wheel_ang_vel)
r_wheel_joint.set_motor_velocity(right_wheel_ang_vel)
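# Illustrative arithmetic (hypothetical input): with r_touch_y = 0.5, lin_vel = 100 * 0.5 = 50.0 and,
# since ang_vel = 0, both wheels are commanded to 50.0 / 0.054, i.e. approx. 926 rad/s.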
# Ignore the sideways rolling dimension of the controller (x axis) since Fetch can't "roll" its arm
r_euler_rot = p.getEulerFromQuaternion(r_rot)
r_rot_no_x = p.getQuaternionFromEuler([0, r_euler_rot[1], r_euler_rot[2]])
# Iteration and residual threshold values are based on recommendations from PyBullet
ik_joint_poses = None
if solve_effector_orn:
ik_joint_poses = p.calculateInverseKinematics(fetch_body_id,
effector_link_id,
r_trans,
r_rot_no_x,
solver=0,
maxNumIterations=100,
residualThreshold=.01)
else:
ik_joint_poses = p.calculateInverseKinematics(fetch_body_id,
effector_link_id,
r_trans,
solver=0)
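# Note: calculateInverseKinematics returns one target position per movable (non-fixed) joint of the
# robot in joint order, which is why the loop below indexes into fvr.ordered_joints.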
# Set joints to the results of the IK
if ik_joint_poses is not None:
for i in range(len(ik_joint_poses)):
next_pose = ik_joint_poses[i]
next_joint = fvr.ordered_joints[i]
# Set wheel joint back to original position so IK calculation does not affect movement
# Note: PyBullet does not currently expose the root of the IK calculation
if next_joint.joint_name == 'r_wheel_joint' or next_joint.joint_name == 'l_wheel_joint':
next_pose, _, _ = next_joint.get_state()
p.resetJointState(fetch_body_id, next_joint.joint_index, next_pose)
# TODO: Arm is not moving with this function - debug!
# TODO: This could be causing some problems with movement
#p.setJointMotorControl2(bodyIndex=fetch_body_id,
# jointIndex=next_joint.joint_index,
# controlMode=p.POSITION_CONTROL,
# targetPosition=next_pose,
# force=500)
# TODO: Implement opening/closing the end effectors
# Something like this: fetch.set_fetch_gripper_fraction(rTrig)
# TODO: Implement previous rest pose
frame_num += 1
if VR_MODE:
# FetchVR class handles all update logic
fvr.update()
# Update visual gaze marker
gm.update()
s.disconnect()

View File

@ -1,161 +0,0 @@
""" VR playground containing various objects and VR options that can be toggled
to experiment with the VR experience in iGibson. This playground operates in a
PBR scene. Please see vr_playground_no_pbr.py for a non-PBR experience.
Important - VR functionality and where to find it:
1) Most VR functions can be found in the gibson2/simulator.py
2) VR utility functions are found in gibson2/utils/vr_utils.py
3) The VR renderer can be found in gibson2/render/mesh_renderer.py
4) The underlying VR C++ code can be found in vr_mesh_render.h and .cpp in gibson2/render/cpp
"""
import numpy as np
import os
import pybullet as p
import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.object_base import Object
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
groceries_folder = os.path.join(assets_path, 'models', 'groceries')
# Playground configuration: edit this to change functionality
optimize = False
# Toggles fullscreen companion window
fullscreen = False
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=optimize,
fullscreen=fullscreen,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', physics_timestep = 1/90.0, render_timestep = 1/90.0, rendering_settings=vr_rendering_settings,
vr_eye_tracking=False, vr_mode=True)
scene = InteractiveIndoorScene('Rs_int')
# Turn this on when debugging to speed up loading
scene._set_first_n_objects(5)
s.import_ig_scene(scene)
# TODO: Remove later
p.setGravity(0, 0, 0)
# Player body is represented by a translucent blue cylinder
""" if enable_vr_body:
vr_body = VrBody()
s.import_object(vr_body, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
vr_body.init_body([0,0]) """
vr_body_fpath = os.path.join(assets_path, 'models', 'vr_body', 'vr_body.urdf')
vrb = ArticulatedObject(vr_body_fpath, scale=0.1)
s.import_object(vrb, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
#vrb.set_position([0, 0, 1])
vrb_cid = p.createConstraint(vrb.body_id, -1, -1, -1, p.JOINT_FIXED,
[0, 0, 0], [0, 0, 0], [0, 0, 1.2])
# The hand can either be 'right' or 'left'
# It has enough friction to pick up the basket and the mustard bottles
r_hand = VrHand(hand='right')
s.import_object(r_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
# This sets the hand constraints so it can move with the VR controller
r_hand.set_start_state(start_pos=[0, 0, 1.5])
l_hand = VrHand(hand='left')
s.import_object(l_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
# This sets the hand constraints so it can move with the VR controller
l_hand.set_start_state(start_pos=[0, 0.5, 1.5])
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path, scale=0.8)
s.import_object(basket)
basket.set_position([-1, 1.55, 1.2])
p.changeDynamics(basket.body_id, -1, mass=5)
can_1_path = os.path.join(groceries_folder, 'canned_food', '1', 'rigid_body.urdf')
can_pos = [[-0.8, 1.55, 1.2], [-0.6, 1.55, 1.2], [-0.4, 1.55, 1.2]]
cans = []
for i in range (len(can_pos)):
can_1 = ArticulatedObject(can_1_path, scale=0.6)
cans.append(can_1)
s.import_object(can_1)
can_1.set_position(can_pos[i])
# TODO: Remove this test
#r_hand.set_hand_no_collision(can_1.body_id)
#r_hand.set_hand_no_collision(basket.body_id)
#r_hand.set_hand_no_collision(vr_body.body_id)
#p.setCollisionFilterPair(can_1.body_id, basket.body_id, -1, -1, 0) # the last argument is 0 for disabling collision, 1 for enabling collision
#p.setCollisionFilterPair(can_1.body_id, r_hand.body_id, -1, -1, 0)
#p.setCollisionFilterPair(can_1.body_id, l_hand.body_id, -1, -1, 0)
if optimize:
s.optimize_vertex_and_texture()
# Set VR starting position in the scene
s.set_vr_offset([0, 0, 0])
while True:
s.step()
hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
if hmd_is_valid:
p.changeConstraint(vrb_cid, hmd_trans, vrb.get_orientation(), maxForce=2000)
"""if enable_vr_body:
if not r_is_valid:
# See VrBody class for more details on this method
vr_body.move_body(s, 0, 0, movement_speed, relative_movement_device)
else:
vr_body.move_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device) """
if r_is_valid:
r_hand.move(r_trans, r_rot)
r_hand.set_close_fraction(r_trig)
# Right hand used to control movement
# Move VR system based on device coordinate system and touchpad press location
move_player_no_body(s, r_touch_x, r_touch_y, 0.03, 'hmd')
# Trigger haptic pulse on right touchpad, modulated by trigger close fraction
# Close the trigger to create a stronger pulse
# Note: the trigger has a close fraction of 0.05 when open, so cut off haptic input under 0.1
# to avoid constant rumbling
s.trigger_haptic_pulse('right_controller', r_trig if r_trig > 0.1 else 0)
if l_is_valid:
l_hand.move(l_trans, l_rot)
l_hand.set_close_fraction(l_trig)
s.trigger_haptic_pulse('left_controller', l_trig if l_trig > 0.1 else 0)
s.disconnect()

View File

@ -1,145 +0,0 @@
""" VR demo for tuning physics parameters. Has mustard bottles, pears and a basket that can be grasped. """
import numpy as np
import os
import pybullet as p
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
lunch_pack_folder = os.path.join(assets_path, 'pack_lunch')
small_fruit_path = os.path.join(lunch_pack_folder, 'food', 'fruit', 'pear', 'pear00', 'rigid_body.urdf')
# Playground configuration: edit this to change functionality
optimize = True
# Toggles fullscreen companion window
fullscreen = False
# Toggles SRAnipal eye tracking
use_eye_tracking = True
# Enables the VR collision body
enable_vr_body = True
# Toggles movement with the touchpad (to move outside of play area)
touchpad_movement = True
# Set to one of hmd, right_controller or left_controller to move relative to that device
relative_movement_device = 'hmd'
# Movement speed for touchpad-based movement
movement_speed = 0.03
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', rendering_settings=MeshRendererSettings(optimized=optimize, fullscreen=fullscreen, enable_pbr=False),
vr_eye_tracking=use_eye_tracking, vr_mode=True)
scene = StaticIndoorScene('Placida')
s.import_scene(scene)
# Player body is represented by a translucent blue cylinder
if enable_vr_body:
vr_body = VrBody()
s.import_object(vr_body)
vr_body.init_body([0,0])
# The hand can either be 'right' or 'left'
# It has enough friction to pick up the basket and the mustard bottles
r_hand = VrHand(hand='right')
s.import_object(r_hand)
# This sets the hand constraints so it can move with the VR controller
r_hand.set_start_state(start_pos=[0, 0, 1.5])
l_hand = VrHand(hand='left')
s.import_object(l_hand)
# This sets the hand constraints so it can move with the VR controller
l_hand.set_start_state(start_pos=[0, 0.5, 1.5])
if use_eye_tracking:
# Eye tracking visual marker - a red marker appears in the scene to indicate gaze direction
gaze_marker = VisualMarker(radius=0.03)
s.import_object(gaze_marker)
gaze_marker.set_position([0,0,1.5])
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path)
s.import_object(basket)
basket.set_position([1, 0.2, 1])
p.changeDynamics(basket.body_id, -1, mass=1)
# Experiment with heavier mustard bottles
mass_list = [0.5, 1, 2, 5]
mustard_start = [1, -0.2, 1]
mustard_list = []
for i in range(len(mass_list)):
mustard = YCBObject('006_mustard_bottle')
mustard_list.append(mustard)
s.import_object(mustard)
mustard.set_position([mustard_start[0], mustard_start[1] - i * 0.2, mustard_start[2]])
p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
fruit_start = [1, -1, 1]
for i in range(3):
fruit = ArticulatedObject(small_fruit_path, scale=0.9)
s.import_object(fruit)
fruit.set_position([fruit_start[0], fruit_start[1] - i * 0.2, fruit_start[2]])
# Normal-sized pears weigh around 200 grams
p.changeDynamics(fruit.body_id, -1, mass=0.2)
if optimize:
s.optimize_vertex_and_texture()
# Start user close to counter for interaction
# Small negative offset to account for lighthouses not being set up entirely correctly
s.set_vr_offset([-0.5, 0.0, -0.1])
# Main simulation loop
while True:
# Step the simulator - this needs to be done every frame to actually run the simulation
s.step()
# VR device data
hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
# VR button data
l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
# VR eye tracking data
if use_eye_tracking:
is_eye_data_valid, origin, dir, left_pupil_diameter, right_pupil_diameter = s.get_eye_tracking_data()
if is_eye_data_valid:
# Move gaze marker based on eye tracking data
updated_marker_pos = [origin[0] + dir[0], origin[1] + dir[1], origin[2] + dir[2]]
gaze_marker.set_position(updated_marker_pos)
if enable_vr_body:
if not r_is_valid:
# See VrBody class for more details on this method
vr_body.move_body(s, 0, 0, movement_speed, relative_movement_device)
else:
vr_body.move_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device)
if r_is_valid:
r_hand.move(r_trans, r_rot)
r_hand.set_close_fraction(r_trig)
# Right hand used to control movement
# Move VR system based on device coordinate system and touchpad press location
move_player_no_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device)
# Trigger haptic pulse on right touchpad, modulated by trigger close fraction
# Close the trigger to create a stronger pulse
# Note: the trigger reports a close fraction of about 0.05 even when fully open, so cut off haptic input below 0.1
# to avoid constant rumbling
s.trigger_haptic_pulse('right_controller', r_trig if r_trig > 0.1 else 0)
if l_is_valid:
l_hand.move(l_trans, l_rot)
l_hand.set_close_fraction(l_trig)
s.trigger_haptic_pulse('left_controller', l_trig if l_trig > 0.1 else 0)
s.disconnect()
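The trigger-to-haptics cutoff above is repeated for each hand in every demo; as a minimal sketch (a hypothetical helper, assuming only the Simulator.trigger_haptic_pulse call shown above), it could be factored out:

```
def pulse_from_trigger(sim, device, trig_frac, deadzone=0.1):
    # The trigger reports roughly 0.05 even when fully open, so anything below
    # the deadzone is treated as zero to avoid constant rumbling
    sim.trigger_haptic_pulse(device, trig_frac if trig_frac > deadzone else 0)

# Usage inside the main loop, e.g.:
# pulse_from_trigger(s, 'right_controller', r_trig)
# pulse_from_trigger(s, 'left_controller', l_trig)
```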

View File

@ -1,152 +0,0 @@
""" VR playground containing various objects and VR options that can be toggled
to experiment with the VR experience in iGibson. This playground operates in a
PBR scene. Please see vr_playground_no_pbr.py for a non-PBR experience.
Important - VR functionality and where to find it:
1) Most VR functions can be found in the gibson2/simulator.py
2) VR utility functions are found in gibson2/utils/vr_utils.py
3) The VR renderer can be found in gibson2/render/mesh_renderer.py
4) The underlying VR C++ code can be found in vr_mesh_render.h and .cpp in gibson2/render/cpp
"""
import numpy as np
import os
import pybullet as p
import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.object_base import Object
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
groceries_folder = os.path.join(assets_path, 'models', 'groceries')
# Playground configuration: edit this to change functionality
optimize = True
# Toggles fullscreen companion window
fullscreen = False
# Toggles SRAnipal eye tracking
use_eye_tracking = True
# Toggles movement with the touchpad (to move outside of play area)
touchpad_movement = True
# Set to one of hmd, right_controller or left_controller to move relative to that device
relative_movement_device = 'hmd'
# Movement speed for touchpad-based movement
movement_speed = 0.03
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=optimize,
fullscreen=fullscreen,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', rendering_settings=vr_rendering_settings,
vr_eye_tracking=use_eye_tracking, vr_mode=True)
scene = InteractiveIndoorScene('Beechwood_0_int')
# Turn this on when debugging to speed up loading
scene._set_first_n_objects(10)
s.import_ig_scene(scene)
# The hand can either be 'right' or 'left'
# It has enough friction to pick up the basket and the mustard bottles
r_hand = VrHand(hand='right')
s.import_object(r_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
# This sets the hand constraints so it can move with the VR controller
r_hand.set_start_state(start_pos=[0, 0, 1.5])
l_hand = VrHand(hand='left')
s.import_object(l_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
# This sets the hand constraints so it can move with the VR controller
l_hand.set_start_state(start_pos=[0, 0.5, 1.5])
if use_eye_tracking:
# Eye tracking visual marker - a red marker appears in the scene to indicate gaze direction
gaze_marker = VisualMarker(radius=0.03)
s.import_object(gaze_marker, use_pbr=False, use_pbr_mapping=False, shadow_caster=False)
gaze_marker.set_position([0,0,1.5])
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path, scale=0.8)
s.import_object(basket)
basket.set_position([-1, 1.55, 1.2])
p.changeDynamics(basket.body_id, -1, mass=5)
can_1_path = os.path.join(groceries_folder, 'canned_food', '1', 'rigid_body.urdf')
can_pos = [[-0.8, 1.55, 1.2], [-0.6, 1.55, 1.2], [-0.4, 1.55, 1.2]]
cans = []
for i in range(len(can_pos)):
can_1 = ArticulatedObject(can_1_path, scale=0.6)
cans.append(can_1)
s.import_object(can_1)
can_1.set_position(can_pos[i])
if optimize:
s.optimize_vertex_and_texture()
# Set VR starting position in the scene
s.set_vr_offset([0, 0, -0.1])
while True:
s.step(print_time=True)
# VR device data
hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
# VR button data
l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
# VR eye tracking data
if use_eye_tracking:
is_eye_data_valid, origin, dir, left_pupil_diameter, right_pupil_diameter = s.get_eye_tracking_data()
if is_eye_data_valid:
# Move gaze marker based on eye tracking data
updated_marker_pos = [origin[0] + dir[0], origin[1] + dir[1], origin[2] + dir[2]]
gaze_marker.set_position(updated_marker_pos)
if r_is_valid:
r_hand.move(r_trans, r_rot)
r_hand.set_close_fraction(r_trig)
# Right hand used to control movement
# Move VR system based on device coordinate system and touchpad press location
move_player_no_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device)
# Trigger haptic pulse on right touchpad, modulated by trigger close fraction
# Close the trigger to create a stronger pulse
# Note: the trigger reports a close fraction of about 0.05 even when fully open, so cut off haptic input below 0.1
# to avoid constant rumbling
s.trigger_haptic_pulse('right_controller', r_trig if r_trig > 0.1 else 0)
if l_is_valid:
l_hand.move(l_trans, l_rot)
l_hand.set_close_fraction(l_trig)
s.trigger_haptic_pulse('left_controller', l_trig if l_trig > 0.1 else 0)
s.disconnect()

View File

@ -0,0 +1,106 @@
""" VR playground containing various objects. This playground operates in the
Rs_int PBR scene.
Important - VR functionality and where to find it:
1) Most VR functions can be found in the gibson2/simulator.py
2) The VrAgent and its associated VR objects can be found in gibson2/objects/vr_objects.py
3) VR utility functions are found in gibson2/utils/vr_utils.py
4) The VR renderer can be found in gibson2/render/mesh_renderer.py
5) The underlying VR C++ code can be found in vr_mesh_render.h and .cpp in gibson2/render/cpp
"""
import numpy as np
import os
import pybullet as p
import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.render.mesh_renderer.mesh_renderer_vr import VrSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.object_base import Object
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrAgent
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2 import assets_path
# Set to false to load entire Rs_int scene
LOAD_PARTIAL = False
# Set to true to print out render, physics and overall frame FPS
PRINT_FPS = True
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=True,
fullscreen=False,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
# VR system settings
# Change use_vr to toggle VR mode on/off
vr_settings = VrSettings(use_vr=True)
s = Simulator(mode='vr',
rendering_settings=vr_rendering_settings,
vr_settings=vr_settings)
scene = InteractiveIndoorScene('Rs_int')
# Turn this on when debugging to speed up loading
if LOAD_PARTIAL:
scene._set_first_n_objects(10)
s.import_ig_scene(scene)
if not vr_settings.use_vr:
camera_pose = np.array([0, -3, 1.2])
view_direction = np.array([0, 1, 0])
s.renderer.set_camera(camera_pose, camera_pose + view_direction, [0, 0, 1])
s.renderer.set_fov(90)
# Create a VrAgent - it will handle all initialization and importing under the hood
vr_agent = VrAgent(s)
# Objects to interact with
mass_list = [5, 10, 100, 500]
mustard_start = [-1, 1.55, 1.2]
for i in range(len(mass_list)):
mustard = YCBObject('006_mustard_bottle')
s.import_object(mustard, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
mustard.set_position([mustard_start[0] + i * 0.2, mustard_start[1], mustard_start[2]])
p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
s.optimize_vertex_and_texture()
if vr_settings.use_vr:
# Since vr_height_offset is set, we will use the VR HMD true height plus this offset instead of the third entry of the start pos
s.set_vr_start_pos([0, 0, 0], vr_height_offset=-0.1)
# Main simulation loop
while True:
s.step(print_time=PRINT_FPS)
# Don't update VR agents or query events if we are not using VR
if not vr_settings.use_vr:
continue
# Example of querying VR events to hide object
if s.query_vr_event('right_controller', 'touchpad_press'):
s.set_hidden_state(mustard, hide=not s.get_hidden_state(mustard))
# Update VR objects
vr_agent.update()
s.disconnect()
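The touchpad_press query above generalizes to any device/event pair; a hypothetical extension of the loop (names as defined in this demo, event types as used elsewhere in this commit):

```
# Toggle the last mustard bottle from either controller, and also react to a grip press
for device in ['left_controller', 'right_controller']:
    if s.query_vr_event(device, 'touchpad_press'):
        s.set_hidden_state(mustard, hide=not s.get_hidden_state(mustard))
    if s.query_vr_event(device, 'grip_press'):
        pass  # e.g. reset an object pose here
```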

View File

@ -1,160 +0,0 @@
""" VR playground containing various objects and VR options that can be toggled
to experiment with the VR experience in iGibson. This playground operates in
the Placida scene, which is from the set of old iGibson environments and does not use PBR.
Important: VR functionality and where to find it:
1) Most VR functions can be found in the gibson2/simulator.py
2) VR utility functions are found in gibson2/utils/vr_utils.py
3) The VR renderer can be found in gibson2/render/mesh_renderer.py
4) The underlying VR C++ code can be found in vr_mesh_render.h and .cpp in gibson2/render/cpp
"""
import numpy as np
import os
import pybullet as p
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.gibson_indoor_scene import StaticIndoorScene
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
# Playground configuration: edit this to change functionality
optimize = True
# Toggles fullscreen companion window
fullscreen = False
# Toggles SRAnipal eye tracking
use_eye_tracking = True
# Enables the VR collision body
enable_vr_body = True
# Toggles movement with the touchpad (to move outside of play area)
touchpad_movement = True
# Set to one of hmd, right_controller or left_controller to move relative to that device
relative_movement_device = 'hmd'
# Movement speed for touchpad-based movement
movement_speed = 0.03
# Whether we should hide a mustard bottle when the menu button is pressed
hide_mustard_on_press = True
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', physics_timestep = 1/90.0, render_timestep = 1/90.0,
rendering_settings=MeshRendererSettings(optimized=optimize, fullscreen=fullscreen, enable_pbr=False),
vr_eye_tracking=use_eye_tracking, vr_mode=True)
scene = StaticIndoorScene('Placida')
s.import_scene(scene)
# Player body is represented by a translucent blue cylinder
if enable_vr_body:
vr_body = VrBody()
s.import_object(vr_body)
vr_body.init_body([0,0])
# The hand can either be 'right' or 'left'
# It has enough friction to pick up the basket and the mustard bottles
r_hand = VrHand(hand='right')
s.import_object(r_hand)
# This sets the hand constraints so it can move with the VR controller
r_hand.set_start_state(start_pos=[0, 0, 1.5])
l_hand = VrHand(hand='left')
s.import_object(l_hand)
# This sets the hand constraints so it can move with the VR controller
l_hand.set_start_state(start_pos=[0, 0.5, 1.5])
if use_eye_tracking:
# Eye tracking visual marker - a red marker appears in the scene to indicate gaze direction
gaze_marker = VisualMarker(radius=0.03)
s.import_object(gaze_marker)
gaze_marker.set_position([0,0,1.5])
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path)
s.import_object(basket)
basket.set_position([1, 0.2, 1])
p.changeDynamics(basket.body_id, -1, mass=5)
mass_list = [5, 10, 100, 500]
mustard_start = [1, -0.2, 1]
mustard_list = []
for i in range(len(mass_list)):
mustard = YCBObject('006_mustard_bottle')
mustard_list.append(mustard)
s.import_object(mustard)
mustard.set_position([mustard_start[0], mustard_start[1] - i * 0.2, mustard_start[2]])
p.changeDynamics(mustard.body_id, -1, mass=mass_list[i])
if optimize:
s.optimize_vertex_and_texture()
# Start user close to counter for interaction
s.set_vr_offset([-0.5, 0.0, 0])
# State of mustard hiding, toggled by a menu press
hide_mustard = False
# Main simulation loop
while True:
# Demonstrates how to poll and handle VR events - replace the example logic below with custom logic
# See pollVREvents description in simulator for full list of events
event_list = s.poll_vr_events()
for event in event_list:
device_type, event_type = event
if device_type == 'right_controller':
if event_type == 'menu_press' and hide_mustard_on_press:
# Toggle mustard hidden state
hide_mustard = not hide_mustard
s.set_hidden_state(mustard_list[2], hide=hide_mustard)
# Step the simulator - this needs to be done every frame to actually run the simulation
s.step()
# VR device data
hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
# VR button data
l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
# VR eye tracking data
if use_eye_tracking:
is_eye_data_valid, origin, dir, left_pupil_diameter, right_pupil_diameter = s.get_eye_tracking_data()
if is_eye_data_valid:
# Move gaze marker based on eye tracking data
updated_marker_pos = [origin[0] + dir[0], origin[1] + dir[1], origin[2] + dir[2]]
gaze_marker.set_position(updated_marker_pos)
if enable_vr_body:
if not r_is_valid:
# See VrBody class for more details on this method
vr_body.move_body(s, 0, 0, movement_speed, relative_movement_device)
else:
vr_body.move_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device)
if r_is_valid:
r_hand.move(r_trans, r_rot)
r_hand.set_close_fraction(r_trig)
# Right hand used to control movement
# Move VR system based on device coordinate system and touchpad press location
move_player_no_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device)
# Trigger haptic pulse on right touchpad, modulated by trigger close fraction
# Close the trigger to create a stronger pulse
# Note: the trigger reports a close fraction of about 0.05 even when fully open, so cut off haptic input below 0.1
# to avoid constant rumbling
s.trigger_haptic_pulse('right_controller', r_trig if r_trig > 0.1 else 0)
if l_is_valid:
l_hand.move(l_trans, l_rot)
l_hand.set_close_fraction(l_trig)
s.trigger_haptic_pulse('left_controller', l_trig if l_trig > 0.1 else 0)
s.disconnect()

View File

@ -1,183 +0,0 @@
""" VR playground containing various objects and VR options that can be toggled
to experiment with the VR experience in iGibson. This playground operates in a
PBR scene. Please see vr_playground_no_pbr.py for a non-PBR experience.
Important - VR functionality and where to find it:
1) Most VR functions can be found in the gibson2/simulator.py
2) VR utility functions are found in gibson2/utils/vr_utils.py
3) The VR renderer can be found in gibson2/render/mesh_renderer.py
4) The underlying VR C++ code can be found in vr_mesh_render.h and .cpp in gibson2/render/cpp
"""
import numpy as np
import os
import pybullet as p
import time
import gibson2
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRendererSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.object_base import Object
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.vr_objects import VrBody, VrHand
from gibson2.objects.visual_marker import VisualMarker
from gibson2.objects.ycb_object import YCBObject
from gibson2.simulator import Simulator
from gibson2.utils.vr_utils import move_player_no_body
from gibson2 import assets_path
sample_urdf_folder = os.path.join(assets_path, 'models', 'sample_urdfs')
groceries_folder = os.path.join(assets_path, 'models', 'groceries')
# Playground configuration: edit this to change functionality
optimize = True
# Toggles fullscreen companion window
fullscreen = False
# Toggles SRAnipal eye tracking
use_eye_tracking = True
# Enables the VR collision body
enable_vr_body = True
# Toggles movement with the touchpad (to move outside of play area)
touchpad_movement = True
# Set to one of hmd, right_controller or left_controller to move relative to that device
relative_movement_device = 'hmd'
# Movement speed for touchpad-based movement
movement_speed = 0.03
# Whether we should hide a can when the menu button is pressed
hide_can_on_press = True
# HDR files for PBR rendering
hdr_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_02.hdr')
hdr_texture2 = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'probe_03.hdr')
light_modulation_map_filename = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'Rs_int', 'layout', 'floor_lighttype_0.png')
background_texture = os.path.join(
gibson2.ig_dataset_path, 'scenes', 'background', 'urban_street_01.jpg')
# VR rendering settings
vr_rendering_settings = MeshRendererSettings(optimized=optimize,
fullscreen=fullscreen,
env_texture_filename=hdr_texture,
env_texture_filename2=hdr_texture2,
env_texture_filename3=background_texture,
light_modulation_map_filename=light_modulation_map_filename,
enable_shadow=True,
enable_pbr=True,
msaa=True,
light_dimming_factor=1.0)
# Initialize simulator with specific rendering settings
s = Simulator(mode='vr', physics_timestep = 1/90.0, render_timestep = 1/90.0, rendering_settings=vr_rendering_settings,
vr_eye_tracking=use_eye_tracking, vr_mode=True)
scene = InteractiveIndoorScene('Rs_int')
# Turn this on when debugging to speed up loading
# scene._set_first_n_objects(10)
s.import_ig_scene(scene)
# Player body is represented by a translucent blue cylinder
if enable_vr_body:
vr_body = VrBody()
s.import_object(vr_body, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
vr_body.init_body([0,0])
# The hand can either be 'right' or 'left'
# It has enough friction to pick up the basket and the mustard bottles
r_hand = VrHand(hand='right')
s.import_object(r_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
# This sets the hand constraints so it can move with the VR controller
r_hand.set_start_state(start_pos=[0, 0, 1.5])
l_hand = VrHand(hand='left')
s.import_object(l_hand, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
# This sets the hand constraints so it can move with the VR controller
l_hand.set_start_state(start_pos=[0, 0.5, 1.5])
if use_eye_tracking:
# Eye tracking visual marker - a red marker appears in the scene to indicate gaze direction
gaze_marker = VisualMarker(radius=0.03)
s.import_object(gaze_marker, use_pbr=False, use_pbr_mapping=False, shadow_caster=False)
gaze_marker.set_position([0,0,1.5])
basket_path = os.path.join(sample_urdf_folder, 'object_ZU6u5fvE8Z1.urdf')
basket = ArticulatedObject(basket_path, scale=0.8)
s.import_object(basket)
basket.set_position([-1, 1.55, 1.2])
p.changeDynamics(basket.body_id, -1, mass=5)
can_1_path = os.path.join(groceries_folder, 'canned_food', '1', 'rigid_body.urdf')
can_pos = [[-0.8, 1.55, 1.2], [-0.6, 1.55, 1.2], [-0.4, 1.55, 1.2]]
cans = []
for i in range(len(can_pos)):
can_1 = ArticulatedObject(can_1_path, scale=0.6)
cans.append(can_1)
s.import_object(can_1)
can_1.set_position(can_pos[i])
if optimize:
s.optimize_vertex_and_texture()
# Set VR starting position in the scene
s.set_vr_offset([0, 0, -0.1])
# State of can hiding, toggled by a menu press
hide_can = False
while True:
# Demonstrates how to poll and handle VR events - replace the example logic below with custom logic
# See pollVREvents description in simulator for full list of events
event_list = s.poll_vr_events()
for event in event_list:
device_type, event_type = event
if device_type == 'right_controller':
if event_type == 'menu_press' and hide_can_on_press:
# Toggle can hidden state
hide_can = not hide_can
s.set_hidden_state(cans[2], hide=hide_can)
s.step()
# VR device data
hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
l_is_valid, l_trans, l_rot = s.get_data_for_vr_device('left_controller')
r_is_valid, r_trans, r_rot = s.get_data_for_vr_device('right_controller')
# VR button data
l_trig, l_touch_x, l_touch_y = s.get_button_data_for_controller('left_controller')
r_trig, r_touch_x, r_touch_y = s.get_button_data_for_controller('right_controller')
# VR eye tracking data
if use_eye_tracking:
is_eye_data_valid, origin, dir, left_pupil_diameter, right_pupil_diameter = s.get_eye_tracking_data()
if is_eye_data_valid:
# Move gaze marker based on eye tracking data
updated_marker_pos = [origin[0] + dir[0], origin[1] + dir[1], origin[2] + dir[2]]
gaze_marker.set_position(updated_marker_pos)
if enable_vr_body:
if not r_is_valid:
# See VrBody class for more details on this method
vr_body.move_body(s, 0, 0, movement_speed, relative_movement_device)
else:
vr_body.move_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device)
if r_is_valid:
r_hand.move(r_trans, r_rot)
r_hand.set_close_fraction(r_trig)
# Right hand used to control movement
# Move VR system based on device coordinate system and touchpad press location
move_player_no_body(s, r_touch_x, r_touch_y, movement_speed, relative_movement_device)
# Trigger haptic pulse on right touchpad, modulated by trigger close fraction
# Close the trigger to create a stronger pulse
# Note: the trigger reports a close fraction of about 0.05 even when fully open, so cut off haptic input below 0.1
# to avoid constant rumbling
s.trigger_haptic_pulse('right_controller', r_trig if r_trig > 0.1 else 0)
if l_is_valid:
l_hand.move(l_trans, l_rot)
l_hand.set_close_fraction(l_trig)
s.trigger_haptic_pulse('left_controller', l_trig if l_trig > 0.1 else 0)
s.disconnect()

View File

@ -32,6 +32,8 @@ class ArticulatedObject(Object):
"""
body_id = p.loadURDF(self.filename, globalScaling=self.scale,
flags=p.URDF_USE_MATERIAL_COLORS_FROM_MTL)
# Enable sleeping for all objects that are loaded in
p.changeDynamics(body_id, -1, activationState=p.ACTIVATION_STATE_ENABLE_SLEEPING)
self.mass = p.getDynamicsInfo(body_id, -1)[0]
return body_id
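Enabling sleeping lets PyBullet skip resting bodies until they are disturbed. For bodies that must always simulate (for example constraint-driven ones), the state can be switched back or a body can be woken explicitly; a small sketch using standard PyBullet activation-state flags (not part of this change, body_id stands in for any loaded body):

```
import pybullet as p

# Keep a body permanently active (never let it fall asleep)
p.changeDynamics(body_id, -1, activationState=p.ACTIVATION_STATE_DISABLE_SLEEPING)

# Or wake a sleeping body once, e.g. right before applying an external force to it
p.changeDynamics(body_id, -1, activationState=p.ACTIVATION_STATE_WAKE_UP)
```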

View File

@ -4,148 +4,209 @@ import pybullet as p
from gibson2 import assets_path
from gibson2.objects.articulated_object import ArticulatedObject
from gibson2.objects.visual_marker import VisualMarker
from gibson2.utils.utils import multQuatLists
from gibson2.utils.vr_utils import translate_vr_position_by_vecs
from gibson2.utils.vr_utils import move_player, calc_offset, translate_vr_position_by_vecs, calc_z_dropoff
class VrAgent(object):
"""
A class representing all the VR objects comprising a single agent.
The individual parts of an agent can be used separately, however
use of this class is recommended for most VR applications, especially if you
just want to get a VR scene up and running quickly.
"""
def __init__(self, sim, agent_num=1, use_constraints=True, hands=['left', 'right'], use_body=True, use_gaze_marker=True):
"""
Initializes the VR agent:
sim - iGibson simulator object
agent_num - the number of the agent - used in multi-user VR
use_constraints - whether to use constraints to move agent (normally set to True - set to false in state replay mode)
hands - list containing left, right or no hands
use_body - true if using VrBody
use_gaze_marker - true if we want to visualize gaze point
muvr - whether the VrAgent is a multi-user VR agent which is only to be rendered
"""
self.sim = sim
self.agent_num = agent_num
# Start z coordinate for all VR objects belonging to this agent (they are spaced out along the x axis at a given height value)
self.z_coord = 50 * agent_num
self.use_constraints = use_constraints
self.hands = hands
self.use_body = use_body
self.use_gaze_marker = use_gaze_marker
# Dictionary of vr object names to objects
self.vr_dict = dict()
if 'left' in self.hands:
self.vr_dict['left_hand'] = VrHand(self.sim, hand='left', use_constraints=self.use_constraints)
self.vr_dict['left_hand'].hand_setup(self.z_coord)
if 'right' in self.hands:
self.vr_dict['right_hand'] = VrHand(self.sim, hand='right', use_constraints=self.use_constraints)
self.vr_dict['right_hand'].hand_setup(self.z_coord)
if self.use_body:
self.vr_dict['body'] = VrBody(self.sim, self.z_coord, use_constraints=self.use_constraints)
if self.use_gaze_marker:
self.vr_dict['gaze_marker'] = VrGazeMarker(self.sim, self.z_coord)
def update(self, vr_data=None):
"""
Updates VR agent - transforms of all objects managed by this class.
If vr_data is set to a non-None value (a VrData object), we use this data and overwrite all data from the simulator.
"""
for vr_obj in self.vr_dict.values():
vr_obj.update(vr_data=vr_data)
def update_frame_offset(self):
"""
Calculates the new VR offset after a single frame of VR interaction.
"""
new_offset = self.sim.get_vr_offset()
for hand in ['left', 'right']:
vr_device = '{}_controller'.format(hand)
is_valid, trans, rot = self.sim.get_data_for_vr_device(vr_device)
if not is_valid:
continue
trig_frac, touch_x, touch_y = self.sim.get_button_data_for_controller(vr_device)
if hand == self.sim.vr_settings.movement_controller and self.sim.vr_settings.touchpad_movement:
new_offset = calc_offset(self.sim, touch_x, touch_y, self.sim.vr_settings.movement_speed, self.sim.vr_settings.relative_movement_device)
# Offset z coordinate using menu press
if self.sim.query_vr_event(vr_device, 'menu_press'):
vr_z_offset = 0.01 if hand == 'right' else -0.01
new_offset = [new_offset[0], new_offset[1], new_offset[2] + vr_z_offset]
self.sim.set_vr_offset(new_offset)
class VrBody(ArticulatedObject):
"""
A simple cylinder representing a VR user's body. This stops
A simple ellipsoid representing a VR user's body. This stops
them from moving through physical objects and walls, as well
as other VR users.
"""
def __init__(self):
def __init__(self, s, z_coord, use_constraints=True):
self.vr_body_fpath = os.path.join(assets_path, 'models', 'vr_body', 'vr_body.urdf')
self.sim = s
self.use_constraints = use_constraints
super(VrBody, self).__init__(filename=self.vr_body_fpath, scale=1)
# Height of VR body - this is relatively tall since we have disabled collision with the floor
# TODO: Fine tune this height variable!
self.height = 0.8
# Distance between shoulders
self.shoulder_width = 0.1
# Width of body from front to back
self.body_width = 0.01
# This is the start height that the center of the body will float at
# We give it 0.2m of room off the floor to avoid any collisions
self.start_height = self.height/2 + 0.2
# This is the distance of the top of the body below the HMD, so as to not obscure vision
self.dist_below_hmd = 0.4
# Body needs to keep track of first frame so it can set itself to the player's
# coordinates on that first frame
self.first_frame = True
# Keep track of previous hmd world position for movement calculations
self.prev_hmd_wp = None
# Keep track of start x and y rotation so we can lock object to these values
self.start_x_rot = 0.0
self.start_y_rot = 0.0
# Need this extra factor to amplify HMD movement vector, since body doesn't reach HMD each frame (since constraints don't set position)
self.hmd_vec_amp = 2
# This is the multiplication factor for the backwards direction vector - effectively the distance in m that the torso sits behind the HMD
# TODO: Change this back after experimenting
self.back_disp_factor = 0.2
# Start body far above the scene so it doesn't interfere with physics
self.start_pos = [30, 0, z_coord]
# Number of degrees of forward axis away from +/- z axis at which HMD stops rotating body
self.min_z = 20.0
self.max_z = 45.0
self.sim.import_object(self, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
self.init_body()
# TIMELINE: Call this after loading the VR body into the simulator
def init_body(self, start_pos):
def _load(self):
"""
Initialize VR body to start in a specific location.
Start pos should just contain an x and y value
Overridden load that keeps VrBody awake upon initialization.
"""
# TODO: Change this constraint to add rotation from the hmd!
x, y = start_pos
self.movement_cid = p.createConstraint(self.body_id, -1, -1, -1, p.JOINT_FIXED,
[0, 0, 0], [0, 0, 0], [x, y, self.start_height])
self.start_rot = self.get_orientation()
body_id = p.loadURDF(self.filename, globalScaling=self.scale,
flags=p.URDF_USE_MATERIAL_COLORS_FROM_MTL)
self.mass = p.getDynamicsInfo(body_id, -1)[0]
def rotate_offset_vec(self, offset_vec, theta):
return body_id
def init_body(self):
"""
Rotate offset vector by an angle theta in the xy plane (z axis rotation). This offset vector has a z component of 0.
Initializes VR body to start in a specific location.
use_constraints specifies whether we want to move the VR body with
constraints. This is True by default, but we set it to false
when doing state replay, so constraints do not interfere with the replay.
"""
x = offset_vec[0]
y = offset_vec[1]
x_new = x * np.cos(theta) - y * np.sin(theta)
y_new = y * np.cos(theta) + x * np.sin(theta)
return np.array([x_new, y_new, 0])
self.set_position(self.start_pos)
if self.use_constraints:
self.movement_cid = p.createConstraint(self.body_id, -1, -1, -1, p.JOINT_FIXED,
[0, 0, 0], [0, 0, 0], self.start_pos)
self.set_body_collision_filters()
def set_body_collision_filters(self):
"""
Sets VrBody's collision filters.
"""
# Get body ids of the floor
floor_ids = self.sim.get_floor_ids()
body_link_idxs = [-1] + [i for i in range(p.getNumJoints(self.body_id))]
for f_id in floor_ids:
floor_link_idxs = [-1] + [i for i in range(p.getNumJoints(f_id))]
for body_link_idx in body_link_idxs:
for floor_link_idx in floor_link_idxs:
p.setCollisionFilterPair(self.body_id, f_id, body_link_idx, floor_link_idx, 0)
def move_body(self, s, rTouchX, rTouchY, movement_speed, relative_device):
def update(self, vr_data=None):
"""
Moves VrBody to new position, via constraints. Takes in the simulator, s, so that
it can obtain the VR data needed to perform the movement calculation. Also takes
in right touchpad information, movement speed and the device relative to which movement
is calculated.
Updates VrBody to new position and rotation, via constraints.
If vr_data is passed in, uses this data to update the VrBody instead of the simulator's data.
"""
# Calculate right and forward vectors relative to input device
right, _, forward = s.get_device_coordinate_system(relative_device)
# Backwards HMD direction
back_dir = np.array(forward) * -1
# Project backwards direction onto horizontal plane to get body direction - just remove z component
back_dir[2] = 0.0
# Normalize back dir
back_dir = back_dir / np.linalg.norm(back_dir)
back_dir = back_dir * self.back_disp_factor
# Get HMD data
hmd_is_valid, hmd_trans, hmd_rot = s.get_data_for_vr_device('hmd')
# Set the body to the HMD position on the first frame that it is valid, to aid calculation accuracy
if self.first_frame and hmd_is_valid:
body_pos = hmd_trans + back_dir
# TODO: Need to do the rotation here as well
self.set_position(body_pos)
if vr_data:
hmd_is_valid, _, hmd_rot, right, _, forward = vr_data.query('hmd')
hmd_pos, _ = vr_data.query('vr_positions')
else:
hmd_is_valid, _, hmd_rot = self.sim.get_data_for_vr_device('hmd')
right, _, forward = self.sim.get_device_coordinate_system('hmd')
hmd_pos = self.sim.get_vr_pos()
# Set collision filter between body and floor so we can bend down without any obstruction
# This is an alternative solution to scaling the body height as the player bends down
#self.floor_ids = s.get_floor_ids()
#for f_id in self.floor_ids:
# p.setCollisionFilterPair(f_id, self.body_id, -1, -1, 0) # the last argument is 0 for disabling collision, 1 for enabling collision
#for obj_id in s.objects:
# p.setCollisionFilterPair(obj_id, self.body_id, -1, -1, 0) # the last argument is 0 for disabling collision, 1 for enabling collision
# TODO: Disable collision with VR hands as well
self.first_frame = False
# First frame will not register HMD offset, since no previous hmd position has been recorded
if self.prev_hmd_wp is None:
self.prev_hmd_wp = s.get_hmd_world_pos()
# Get offset to VR body
# offset_to_body = self.get_position() - self.prev_hmd_wp - back_dir
# Move the HMD to be aligned with the VR body
# Set x and y coordinate offsets, but keep current system height (otherwise we teleport into the VR body)
# s.set_vr_offset([offset_to_body[0], offset_to_body[1], s.get_vr_offset()[2]])
# Get current HMD world position and VR offset
hmd_wp = s.get_hmd_world_pos()
# curr_offset = s.get_vr_offset()
# Translate VR offset using controller information
# translated_offset = translate_vr_position_by_vecs(rTouchX, rTouchY, right, forward, curr_offset, movement_speed)
# New player position calculated - amplify delta in HMD position to account for constraints not moving body exactly to new position each frame
# new_player_pos = (hmd_wp - self.prev_hmd_wp) * self.hmd_vec_amp + translated_offset + self.prev_hmd_wp + back_dir
new_body_pos = hmd_wp + back_dir
# Attempt to set the VR body to this new position (it will stop if it collides with a wall, for example)
# This involves setting the translation and rotation constraint
x, y, z = new_body_pos
new_center = z - self.dist_below_hmd - self.height/2
# Extract only z rotation from HMD so we can spin the body on the vertical axis
_, _, old_body_z = p.getEulerFromQuaternion(self.get_orientation())
delta_hmd_z = 0
# Only update the body if the HMD data is valid - this also only teleports the body to the player
# once the HMD has started tracking when they first load into a scene
if hmd_is_valid:
_, _, hmd_z = p.getEulerFromQuaternion(hmd_rot)
delta_hmd_z = hmd_z - old_body_z
# Get hmd and current body rotations for use in calculations
hmd_x, hmd_y, hmd_z = p.getEulerFromQuaternion(hmd_rot)
_, _, curr_z = p.getEulerFromQuaternion(self.get_orientation())
# Use starting x and y rotation so our body does not get knocked over when we collide with low objects
new_rot = p.getQuaternionFromEuler([self.start_x_rot, self.start_y_rot, old_body_z + delta_hmd_z])
# Finally move the body based on the rotation - it pivots around the HMD in a circle whose radius
# is defined by self.back_disp_factor. We can calculate this translation vector by drawing a vector triangle
# where the two radii are back_dir and the angle is delta_hmd_z. Some 2D trigonometry gets us the final result
self.rot_trans_vec = self.rotate_offset_vec(back_dir, -1 * delta_hmd_z) - back_dir
# Add translated vector to current offset value
x += self.rot_trans_vec[0]
y += self.rot_trans_vec[1]
p.changeConstraint(self.movement_cid, [x, y, new_center], new_rot, maxForce=2000)
# Reset the body position to the HMD if either of the controller reset buttons are pressed
if vr_data:
grip_press = (['left_controller', 'grip_press'] in vr_data.query('event_data')
or ['right_controller', 'grip_press'] in vr_data.query('event_data'))
else:
grip_press = (self.sim.query_vr_event('left_controller', 'grip_press') or self.sim.query_vr_event('right_controller', 'grip_press'))
if grip_press:
self.set_position(hmd_pos)
self.set_orientation(p.getQuaternionFromEuler([0, 0, hmd_z]))
# Update previous HMD world position at end of frame
self.prev_hmd_wp = hmd_wp
# If VR body is more than 2 meters away from the HMD, don't update its constraint
curr_pos = np.array(self.get_position())
dest = np.array(hmd_pos)
dist_to_dest = np.linalg.norm(curr_pos - dest)
if dist_to_dest < 2.0:
# Modulate the rotation update by the angle between the forward vector and the pos/neg z direction (using self.min_z and self.max_z)
# - this stops large body angle swings when the HMD is pointed up/down
n_forward = np.array(forward)
# Normalized forward direction and z direction
n_forward = n_forward / np.linalg.norm(n_forward)
n_z = np.array([0.0, 0.0, 1.0])
# Calculate angle and convert to degrees
theta_z = np.arccos(np.dot(n_forward, n_z)) / np.pi * 180
# Move theta into range 0 to max_z
if theta_z > (180.0 - self.max_z):
theta_z = 180.0 - theta_z
# Calculate z multiplication coefficient based on how much we are looking in up/down direction
z_mult = calc_z_dropoff(theta_z, self.min_z, self.max_z)
delta_z = hmd_z - curr_z
# Modulate rotation fraction by z_mult
new_z = curr_z + delta_z * z_mult
new_body_rot = p.getQuaternionFromEuler([0, 0, new_z])
# Update body transform constraint
p.changeConstraint(self.movement_cid, hmd_pos, new_body_rot, maxForce=2000)
# Use 100% strength haptic pulse in both controllers for vr body collisions - this should notify the user immediately
# Note: haptics can't be used in networking situations like MUVR (due to network latency)
# or in action replay, since no VR device is connected
if not vr_data:
if len(p.getContactPoints(self.body_id)) > 0:
for controller in ['left_controller', 'right_controller']:
is_valid, _, _ = self.sim.get_data_for_vr_device(controller)
if is_valid:
self.sim.trigger_haptic_pulse(controller, 1.0)
class VrHand(ArticulatedObject):
@ -173,9 +234,14 @@ class VrHand(ArticulatedObject):
"""
# VR hand can be one of three types - no_pbr (diffuse white/grey color), skin or metal
def __init__(self, hand='right', tex_type='no_pbr'):
def __init__(self, s, hand='right', tex_type='no_pbr', use_constraints=True):
# We store a reference to the simulator so that VR data can be acquired under the hood
self.sim = s
self.vr_settings = self.sim.vr_settings
self.vr_hand_folder = os.path.join(assets_path, 'models', 'vr_hand')
self.hand = hand
self.tex_type = tex_type
self.use_constraints = use_constraints
if self.hand not in ['left', 'right']:
print('ERROR: hand parameter must either be left or right!')
return
@ -187,6 +253,7 @@ class VrHand(ArticulatedObject):
self.base_rot = p.getQuaternionFromEuler([0, 160, -80])
else:
self.base_rot = p.getQuaternionFromEuler([0, 160, 80])
self.vr_device = '{}_controller'.format(self.hand)
# Lists of joint indices for hand part
self.base_idxs = [0]
# Proximal indices for non-thumb fingers
@ -204,39 +271,114 @@ class VrHand(ArticulatedObject):
# Closed positions for all joints
self.close_pos = [0, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 0.8, 1.2, 0.5, 0.5, 0.5, 0.8, 0.8, 0.8]
def set_start_state(self, start_pos):
"""Call after importing the hand."""
# Import hand and setup
if tex_type == 'no_pbr':
self.sim.import_object(self, use_pbr=False, use_pbr_mapping=False, shadow_caster=True)
else:
self.sim.import_object(self, use_pbr=True, use_pbr_mapping=True, shadow_caster=True)
def _load(self):
"""
Overridden load that keeps VrHand awake upon initialization.
"""
body_id = p.loadURDF(self.filename, globalScaling=self.scale,
flags=p.URDF_USE_MATERIAL_COLORS_FROM_MTL)
self.mass = p.getDynamicsInfo(body_id, -1)[0]
return body_id
def hand_setup(self, z_coord):
"""
Called after hand is imported. This sets the hand constraints and starting position.
The z coordinate of the hand can be specified - this is used by the VrAgent class.
"""
# Start the hand off to the side and at the agent's z coordinate so it won't interfere with physics upon loading
x_coord = 10 if self.hand == 'right' else 20
start_pos = [x_coord, 0, z_coord]
self.set_position(start_pos)
for jointIndex in range(p.getNumJoints(self.body_id)):
# Make masses larger for greater stability
# Mass is in kg, friction is coefficient
p.changeDynamics(self.body_id, jointIndex, mass=0.2, lateralFriction=4)
p.changeDynamics(self.body_id, jointIndex, mass=0.2, lateralFriction=2)
open_pos = self.open_pos[jointIndex]
p.resetJointState(self.body_id, jointIndex, open_pos)
p.setJointMotorControl2(self.body_id, jointIndex, p.POSITION_CONTROL, targetPosition=open_pos, force=500)
p.changeDynamics(self.body_id, -1, mass=0.2, lateralFriction=2)
# Create constraint that can be used to move the hand
self.movement_cid = p.createConstraint(self.body_id, -1, -1, -1, p.JOINT_FIXED, [0, 0, 0], [0, 0, 0], start_pos)
if self.use_constraints:
self.movement_cid = p.createConstraint(self.body_id, -1, -1, -1, p.JOINT_FIXED, [0, 0, 0], [0, 0, 0], start_pos)
# TODO: Get this working!
def set_hand_no_collision(self, no_col_id):
# TIMELINE: Call after step in main while loop
def update(self, vr_data=None):
"""
Sets VrHand to not collide with the body specified by no_col_id.
Updates position and close fraction of hand, and also moves player.
If vr_data is passed in, uses this data to update the hand instead of the simulator's data.
"""
p.setCollisionFilterPair(self.body_id, no_col_id, -1, -1, 0)
hand_joint_num = p.getNumJoints(self.body_id)
no_col_joint_num = p.getNumJoints(no_col_id)
# Set all links to ignore collision, if no_col_id has joints
if no_col_joint_num == 0:
return
if vr_data:
transform_data = vr_data.query(self.vr_device)[:3]
touch_data = vr_data.query('{}_button'.format(self.vr_device))
else:
transform_data = self.sim.get_data_for_vr_device(self.vr_device)
touch_data = self.sim.get_button_data_for_controller(self.vr_device)
for i in range(hand_joint_num):
for j in range(no_col_joint_num):
p.setCollisionFilterPair(self.body_id, no_col_id, i, j, 0)
# Unpack transform and touch data
is_valid, trans, rot = transform_data
trig_frac, touch_x, touch_y = touch_data
# Close frac of 1 indicates fully closed joint, and close frac of 0 indicates fully open joint
# Joints move smoothly between their values in self.open_pos and self.close_pos
if is_valid:
# Detect hand-relevant VR events
if vr_data:
grip_press = [self.vr_device, 'grip_press'] in vr_data.query('event_data')
else:
grip_press = self.sim.query_vr_event(self.vr_device, 'grip_press')
# Reset the hand if the grip has been pressed
if grip_press:
self.set_position(trans)
# Apply base rotation first so the virtual controller is properly aligned with the real controller
final_rot = multQuatLists(rot, self.base_rot)
self.set_orientation(final_rot)
# Note: adjusting the player height can only be done in VR
if not vr_data:
# Move the vr offset up/down if menu button is pressed - this can be used
# to adjust user height in the VR experience
if self.sim.query_vr_event(self.vr_device, 'menu_press'):
# Right menu button moves up, left menu button moves down
vr_z_offset = 0.01 if self.hand == 'right' else -0.01
curr_offset = self.sim.get_vr_offset()
self.sim.set_vr_offset([curr_offset[0], curr_offset[1], curr_offset[2] + vr_z_offset])
self.move(trans, rot)
self.set_close_fraction(trig_frac)
if not vr_data:
if self.vr_settings.touchpad_movement and self.hand == self.vr_settings.movement_controller:
move_player(self.sim, touch_x, touch_y, self.vr_settings.movement_speed, self.vr_settings.relative_movement_device)
# Use 30% strength haptic pulse for general collisions with controller
if len(p.getContactPoints(self.body_id)) > 0:
self.sim.trigger_haptic_pulse(self.vr_device, 0.3)
# Note: This function can be called manually during data replay
def move(self, trans, rot):
# If the hand is more than 2 meters away from the target, it will not move
# We have a reset button to deal with this case, and we don't want to disturb the physics by trying to reconnect
# the hand to the body when it might be stuck behind a wall/in an object
curr_pos = np.array(self.get_position())
dest = np.array(trans)
dist_to_dest = np.linalg.norm(curr_pos - dest)
if dist_to_dest < 2.0:
final_rot = multQuatLists(rot, self.base_rot)
p.changeConstraint(self.movement_cid, trans, final_rot, maxForce=2000)
# Note: This function can be called manually during data replay
def set_close_fraction(self, close_frac):
"""
Sets close fraction of hands. Close frac of 1 indicates fully closed joint,
and close frac of 0 indicates fully open joint. Joints move smoothly between
their values in self.open_pos and self.close_pos.
"""
for jointIndex in range(p.getNumJoints(self.body_id)):
open_pos = self.open_pos[jointIndex]
close_pos = self.close_pos[jointIndex]
@ -244,6 +386,32 @@ class VrHand(ArticulatedObject):
target_pos = open_pos + interp_frac
p.setJointMotorControl2(self.body_id, jointIndex, p.POSITION_CONTROL, targetPosition=target_pos, force=2000)
def move(self, trans, rot):
final_rot = multQuatLists(rot, self.base_rot)
p.changeConstraint(self.movement_cid, trans, final_rot, maxForce=2000)
class VrGazeMarker(VisualMarker):
"""
Represents the marker used for VR gaze tracking
"""
def __init__(self, s, z_coord=100):
# We store a reference to the simulator so that VR data can be acquired under the hood
self.sim = s
super(VrGazeMarker, self).__init__(visual_shape=p.GEOM_SPHERE, radius=0.02)
s.import_object(self, use_pbr=False, use_pbr_mapping=False, shadow_caster=False)
# Set high above scene initially
self.set_position([0, 0, z_coord])
def update(self, vr_data=None):
"""
Updates the gaze marker using simulator data - if vr_data is not None, we use this data instead.
"""
if vr_data:
eye_data = vr_data.query('eye_data')
else:
eye_data = self.sim.get_eye_tracking_data()
# Unpack eye tracking data
is_eye_data_valid, origin, dir, left_pupil_diameter, right_pupil_diameter = eye_data
if is_eye_data_valid:
updated_marker_pos = [origin[0] + dir[0], origin[1] + dir[1], origin[2] + dir[2]]
self.set_position(updated_marker_pos)
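Taken together, these classes are what the new vr_playground.py instantiates; a minimal usage sketch mirroring that demo (scene and rendering settings reduced to defaults):

```
from gibson2.render.mesh_renderer.mesh_renderer_vr import VrSettings
from gibson2.scenes.igibson_indoor_scene import InteractiveIndoorScene
from gibson2.objects.vr_objects import VrAgent
from gibson2.simulator import Simulator

s = Simulator(mode='vr', vr_settings=VrSettings(use_vr=True))
s.import_ig_scene(InteractiveIndoorScene('Rs_int'))
vr_agent = VrAgent(s)  # imports hands, body and gaze marker under the hood
while True:
    s.step()
    vr_agent.update()  # pulls HMD, controller and eye data from the simulator each frame
```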

View File

@ -2,16 +2,47 @@ from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer, MeshRen
from gibson2.utils.mesh_util import lookat
import numpy as np
class VrSettings(object):
"""
Class containing VR settings pertaining to both the VR renderer
and VR functionality in the simulator and VR objects
"""
def __init__(self,
use_vr = True,
eye_tracking = True,
touchpad_movement = True,
movement_controller = 'right',
relative_movement_device = 'hmd',
movement_speed = 0.01):
"""
Initializes VR settings:
1) use_vr - whether to render to the HMD and use VR system or just render to screen (used for debugging)
2) eye_tracking - whether to use eye tracking
3) touchpad_movement - whether to enable use of touchpad to move
4) movement_controller - device to control movement - can be right or left (representing the corresponding controllers)
5) relative_movement_device - which device to use to control touchpad movement direction (can be any VR device)
6) movement_speed - touchpad movement speed
"""
assert movement_controller in ['left', 'right']
self.use_vr = use_vr
self.eye_tracking = eye_tracking
self.touchpad_movement = touchpad_movement
self.movement_controller = movement_controller
self.relative_movement_device = relative_movement_device
self.movement_speed = movement_speed
class MeshRendererVR(MeshRenderer):
"""
MeshRendererVR is iGibson's VR rendering class. It handles rendering to the VR headset and provides
a link to the underlying VRRendererContext, on which various functions can be called.
"""
def __init__(self, rendering_settings=MeshRendererSettings(), use_eye_tracking=False, vr_mode=True):
def __init__(self, rendering_settings=MeshRendererSettings(), vr_settings=VrSettings()):
self.vr_rendering_settings = rendering_settings
self.use_eye_tracking = use_eye_tracking
self.vr_mode = vr_mode
self.vr_settings = vr_settings
self.base_width = 1080
self.base_height = 1200
self.scale_factor = 1.4
@ -21,12 +52,12 @@ class MeshRendererVR(MeshRenderer):
# Rename self.r to self.vrsys
self.vrsys = self.r
if self.vr_mode:
self.vrsys.initVR(self.use_eye_tracking)
if self.vr_settings.use_vr:
self.vrsys.initVR(self.vr_settings.eye_tracking)
# Renders VR scenes and returns the left eye frame
def render(self):
if self.vr_mode:
if self.vr_settings.use_vr:
left_proj, left_view, left_cam_pos, right_proj, right_view, right_cam_pos = self.vrsys.preRenderVR()
# Render and submit left eye
@ -55,4 +86,5 @@ class MeshRendererVR(MeshRenderer):
# Releases VR system and renderer
def release(self):
super().release()
self.vrsys.releaseVR()
if self.vr_settings.use_vr:
self.vrsys.releaseVR()
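Since VrSettings now carries what used to be separate constructor arguments, VR behavior is configured in one place and handed to the Simulator; a short sketch using only the options listed in the docstring above:

```
from gibson2.render.mesh_renderer.mesh_renderer_vr import VrSettings
from gibson2.simulator import Simulator

# Desktop debugging: render to the screen instead of the HMD
settings = VrSettings(use_vr=False, eye_tracking=False)

# VR session with slower touchpad movement driven by the left controller
# settings = VrSettings(movement_controller='left', movement_speed=0.005)

s = Simulator(mode='vr', vr_settings=settings)
```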

View File

@ -3,6 +3,7 @@ import numpy as np
import pybullet as p
from gibson2.external.pybullet_tools.utils import joints_from_names, set_joint_positions
from gibson2.objects.visual_marker import VisualMarker
from gibson2.robots.robot_locomotor import LocomotorRobot
@ -10,14 +11,23 @@ class FetchVR(LocomotorRobot):
"""
Fetch robot used in VR embodiment demos.
"""
def __init__(self, config):
def __init__(self, config, s, start_pos, update_freq=1, control_hand='right'):
self.config = config
self.sim = s
self.update_freq = update_freq
# The hand to use to control FetchVR - this can be set to left or right based on the user's preferences
self.control_hand = control_hand
self.control_device = '{}_controller'.format(self.control_hand)
self.wheel_velocity = config.get('wheel_velocity', 1.0)
self.torso_lift_velocity = config.get('torso_lift_velocity', 1.0)
self.arm_velocity = config.get('arm_velocity', 1.0)
self.wheel_dim = 2
self.torso_lift_dim = 1
self.arm_dim = 7
self.height = 1.2
self.wheel_axle_half = 0.18738 # half of the distance between the wheels
self.wheel_radius = 0.054 # radius of the wheels themselves
LocomotorRobot.__init__(self,
"fetch/fetch_vr.urdf",
action_dim=self.wheel_dim + self.torso_lift_dim + self.arm_dim,
@ -25,6 +35,30 @@ class FetchVR(LocomotorRobot):
is_discrete=config.get("is_discrete", False),
control="velocity",
self_collision=True)
self.sim.import_robot(self)
# Position setup
self.set_position(start_pos)
self.robot_specific_reset()
self.keep_still()
self.r_wheel_joint = self.ordered_joints[0]
self.l_wheel_joint = self.ordered_joints[1]
self.wheel_speed_multiplier = 1000
# Variables used in IK to move end effector
self.bid = self.robot_body.bodies[self.robot_body.body_index]
self.joint_num = p.getNumJoints(self.bid)
self.effector_link_id = 19
# Frame counter used to throttle IK updates
self.frame_count = 0
# Load end effector
self.effector_marker = VisualMarker(rgba_color = [1, 0, 1, 0.2], radius=0.05)
self.sim.import_object(self.effector_marker, use_pbr=False, use_pbr_mapping=False, shadow_caster=False)
# Hide marker upon initialization
self.effector_marker.set_position([0,0,-5])
def set_up_continuous_action_space(self):
self.action_high = np.array([self.wheel_velocity] * self.wheel_dim +
@ -55,19 +89,95 @@ class FetchVR(LocomotorRobot):
rest_position = (0.02, np.pi / 2.0 - 0.4, np.pi / 2.0 - 0.1, -0.4, np.pi / 2.0 + 0.1, 0.0, np.pi / 2.0, 0.0)
# might be a better pose to initiate manipulation
# rest_position = (0.30322468280792236, -1.414019864768982,
#rest_position = (0.30322468280792236, -1.414019864768982,
# 1.5178184935241699, 0.8189625336474915,
# 2.200358942909668, 2.9631312579803466,
# -1.2862852996643066, 0.0008453550418615341)
set_joint_positions(robot_id, arm_joints, rest_position)
def get_end_effector_position(self):
return self.parts['gripper_link'].get_position()
# Return body id of fetch robot
def get_fetch_body_id(self):
return self.robot_body.bodies[self.robot_body.body_index]
def update(self):
"""
Updates FetchVR robot using VR data.
"""
hmd_is_valid, hmd_trans, hmd_rot = self.sim.get_data_for_vr_device('hmd')
is_valid, trans, rot = self.sim.get_data_for_vr_device(self.control_device)
trig_frac, touch_x, touch_y = self.sim.get_button_data_for_controller(self.control_device)
if hmd_is_valid:
# Set fetch orientation directly from HMD to avoid lag when turning and resultant motion sickness
self.set_z_rotation(hmd_rot)
# Get world position and fetch position
hmd_world_pos = self.sim.get_hmd_world_pos()
fetch_pos = self.get_position()
# Calculate x and y offset to get to fetch position
# z offset is to the desired hmd height, corresponding to fetch head height
offset_to_fetch = [fetch_pos[0] - hmd_world_pos[0],
fetch_pos[1] - hmd_world_pos[1],
self.height - hmd_world_pos[2]]
self.sim.set_vr_offset(offset_to_fetch)
if is_valid:
# Update effector marker to desired end-effector transform
self.effector_marker.set_position(trans)
self.effector_marker.set_orientation(rot)
# Linear velocity is relative to the current direction Fetch is pointing,
# so we only need to know how fast to travel in that direction (the Y touchpad direction is used for this)
lin_vel = self.wheel_speed_multiplier * touch_y
ang_vel = 0
left_wheel_ang_vel = (lin_vel - ang_vel * self.wheel_axle_half) / self.wheel_radius
right_wheel_ang_vel = (lin_vel + ang_vel * self.wheel_axle_half) / self.wheel_radius
print("L and R wheel ang vel: {}, {}".format(left_wheel_ang_vel, right_wheel_ang_vel))
self.l_wheel_joint.set_motor_velocity(left_wheel_ang_vel)
self.r_wheel_joint.set_motor_velocity(right_wheel_ang_vel)
# Ignore the sideways roll dimension of the controller (x axis) since Fetch can't "roll" its final arm link
euler_rot = p.getEulerFromQuaternion(rot)
rot_no_x = p.getQuaternionFromEuler([0, euler_rot[1], euler_rot[2]])
# Iteration and residual threshold values are based on recommendations from PyBullet
# TODO: Use rest poses here!
if self.frame_count % self.update_freq == 0:
ik_joint_poses = None
ik_joint_poses = p.calculateInverseKinematics(self.bid,
self.effector_link_id,
trans,
rot_no_x,
solver=0,
maxNumIterations=100,
residualThreshold=.01)
# Set joints to the results of the IK
if ik_joint_poses is not None:
for i in range(len(ik_joint_poses)):
next_pose = ik_joint_poses[i]
next_joint = self.ordered_joints[i]
# Set wheel joint back to original position so IK calculation does not affect movement
# Note: PyBullet does not currently expose the root of the IK calculation
if next_joint.joint_name == 'r_wheel_joint' or next_joint.joint_name == 'l_wheel_joint':
next_pose, _, _ = next_joint.get_state()
p.resetJointState(self.bid, next_joint.joint_index, next_pose)
# TODO: Arm is not moving with this function - debug!
# TODO: This could be causing some problems with movement
#p.setJointMotorControl2(bodyIndex=fetch_body_id,
# jointIndex=next_joint.joint_index,
# controlMode=p.POSITION_CONTROL,
# targetPosition=next_pose,
# force=500)
# TODO: Implement opening/closing the end effectors
# Something like this: fetch.set_fetch_gripper_fraction(rTrig)
self.frame_count += 1
def set_z_rotation(self, hmd_rot):
"""
@ -75,9 +185,8 @@ class FetchVR(LocomotorRobot):
"""
# Get z component of hmd rotation
_, _, hmd_z = p.getEulerFromQuaternion(hmd_rot)
prev_x, prev_y, _ = p.getEulerFromQuaternion(self.get_orientation())
# Preserve pre-existing x and y rotations, just force z rotation to be same as HMD
fetch_rot = p.getQuaternionFromEuler([prev_x, prev_y, hmd_z])
fetch_rot = p.getQuaternionFromEuler([0, 0, hmd_z])
self.set_orientation(fetch_rot)
# Set open/close fraction of the end grippers
@ -101,10 +210,8 @@ class FetchVR(LocomotorRobot):
targetPosition=target_pos,
force=maxForce)
def get_end_effector_position(self):
return self.parts['gripper_link'].get_position()
def load(self):
print("DID LOAD SELF-----------------------------------------")
ids = super(FetchVR, self).load()
robot_id = self.robot_ids[0]

View File

@ -2,6 +2,7 @@ from gibson2.utils.mesh_util import quat2rotmat, xyzw2wxyz, xyz2mat
from gibson2.utils.semantics_utils import get_class_name_to_class_id
from gibson2.utils.constants import SemanticClass, PyBulletSleepState
from gibson2.render.mesh_renderer.mesh_renderer_cpu import MeshRenderer
from gibson2.render.mesh_renderer.mesh_renderer_vr import MeshRendererVR, VrSettings
from gibson2.render.mesh_renderer.mesh_renderer_settings import MeshRendererSettings
from gibson2.render.mesh_renderer.instances import InstanceGroup, Instance, Robot
from gibson2.render.mesh_renderer.mesh_renderer_tensor import MeshRendererG2G
@ -19,6 +20,7 @@ import os
import numpy as np
import platform
import logging
import time
class Simulator:
@ -38,8 +40,7 @@ class Simulator:
device_idx=0,
render_to_tensor=False,
rendering_settings=MeshRendererSettings(),
vr_eye_tracking=False,
vr_mode=True):
vr_settings=VrSettings()):
"""
:param gravity: gravity on z direction.
:param physics_timestep: timestep of physical simulation, p.stepSimulation()
@ -52,8 +53,7 @@ class Simulator:
:param render_to_tensor: Render to GPU tensors
disable it when you want to run multiple physics step but don't need to visualize each frame
:param rendering_settings: settings to use for mesh renderer
:param vr_eye_tracking: whether to use eye tracking in VR
:param vr_mode: whether to render to the VR headset as well as the screen
:param vr_settings: settings to use for VR in simulator and MeshRendererVR
"""
# physics simulator
self.gravity = gravity
@ -87,9 +87,6 @@ class Simulator:
if self.mode in ['simple']:
self.use_simple_viewer = True
# renderer + VR
self.vr_eye_tracking = vr_eye_tracking
self.vr_mode = vr_mode
# Starting position for the VR (default set to None if no starting position is specified by the user)
self.vr_start_pos = None
self.max_haptic_duration = 4000
@ -102,6 +99,9 @@ class Simulator:
self.optimized_renderer = rendering_settings.optimized
self.rendering_settings = rendering_settings
self.viewer = None
self.vr_settings = vr_settings
# We must be using the Simulator's vr mode and have use_vr set to true in the settings to access the VR context
self.can_access_vr_context = self.use_vr_renderer and self.vr_settings.use_vr
# Settings for adjusting physics and render timestep in vr
# Fraction to multiply the previous render timestep by in the low-pass filter
@ -166,9 +166,8 @@ class Simulator:
device_idx=self.device_idx,
rendering_settings=self.rendering_settings)
elif self.use_vr_renderer:
self.renderer = MeshRendererVR(rendering_settings=self.rendering_settings,
use_eye_tracking=self.vr_eye_tracking,
vr_mode=self.vr_mode)
self.renderer = MeshRendererVR(
rendering_settings=self.rendering_settings, vr_settings=self.vr_settings)
else:
self.renderer = MeshRenderer(width=self.image_width,
height=self.image_height,
@ -189,7 +188,6 @@ class Simulator:
self.visual_objects = {}
self.robots = []
self.scene = None
if (self.use_ig_renderer or self.use_vr_renderer or self.use_simple_viewer) and not self.render_to_tensor:
self.add_viewer()
@ -626,6 +624,12 @@ class Simulator:
"""
Step the simulation at self.render_timestep and update positions in renderer
"""
# First poll VR events and store them
if self.can_access_vr_context:
# Note: this should only be called once per frame - use get_vr_events to read the event data list in
# subsequent read operations
self.poll_vr_events()
physics_start_time = time.time()
physics_timestep_num = int(
self.render_timestep / self.physics_timestep)
@ -683,39 +687,54 @@ class Simulator:
if hmd_is_valid:
offset_to_start = np.array(
self.vr_start_pos) - self.get_hmd_world_pos()
if self.vr_height_offset:
if self.vr_height_offset is not None:
offset_to_start[2] = self.vr_height_offset
self.set_vr_offset(offset_to_start)
self.vr_start_pos = None
# Returns event data as list of lists. Each sub-list contains deviceType and eventType. List is empty is all
# events are invalid.
# Returns VR event data as list of lists. Each sub-list contains deviceType and eventType.
# List is empty if all events are invalid.
# deviceType: left_controller, right_controller
# eventType: grip_press, grip_unpress, trigger_press, trigger_unpress, touchpad_press, touchpad_unpress,
# touchpad_touch, touchpad_untouch, menu_press, menu_unpress (menu is the application button)
def poll_vr_events(self):
if not self.use_vr_renderer:
return []
if not self.can_access_vr_context:
raise RuntimeError(
'ERROR: Trying to access VR context without enabling vr mode and use_vr in vr settings!')
eventData = self.renderer.vrsys.pollVREvents()
return eventData
self.vr_event_data = self.renderer.vrsys.pollVREvents()
return self.vr_event_data
# Returns the VR events processed by the simulator
def get_vr_events(self):
return self.vr_event_data
# Queries system for a VR event, and returns true if that event happened this frame
def query_vr_event(self, device, event):
for ev_data in self.vr_event_data:
if device == ev_data[0] and event == ev_data[1]:
return True
return False
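A minimal usage sketch of the event API above. The Simulator import path and constructor arguments are assumptions here; see the VR demos for the real setup:

```python
from gibson2.simulator import Simulator  # import path assumed
from gibson2.render.mesh_renderer.mesh_renderer_vr import VrSettings

s = Simulator(mode='vr', vr_settings=VrSettings())  # constructor arguments assumed
while True:
    s.step()  # polls VR events once per frame internally
    # Either read the raw [deviceType, eventType] list...
    for device, event in s.get_vr_events():
        print(device, event)
    # ...or ask about a specific device/event pair
    if s.query_vr_event('right_controller', 'trigger_press'):
        print('Right trigger pressed this frame')
```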
# Call this after step - returns all VR device data for a specific device
# Device can be hmd, left_controller or right_controller
# Returns isValid (indicating validity of data), translation and rotation in Gibson world space
def get_data_for_vr_device(self, deviceName):
if not self.use_vr_renderer:
return [None, None, None]
if not self.can_access_vr_context:
raise RuntimeError(
'ERROR: Trying to access VR context without enabling vr mode and use_vr in vr settings!')
# Use fourth variable in list to get actual hmd position in space
isValid, translation, rotation, _ = self.renderer.vrsys.getDataForVRDevice(
is_valid, translation, rotation, _ = self.renderer.vrsys.getDataForVRDevice(
deviceName)
return [isValid, translation, rotation]
return [is_valid, translation, rotation]
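Continuing the sketch above, reading the right controller pose after each step:

```python
is_valid, trans, rot = s.get_data_for_vr_device('right_controller')
if is_valid:
    # trans is an xyz position and rot a quaternion, both in Gibson world space
    print('Controller at', trans, 'with orientation', rot)
```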
# Get world position of HMD without offset
def get_hmd_world_pos(self):
if not self.use_vr_renderer:
return None
if not self.can_access_vr_context:
raise RuntimeError(
'ERROR: Trying to access VR context without enabling vr mode and use_vr in vr settings!')
_, _, _, hmd_world_pos = self.renderer.vrsys.getDataForVRDevice('hmd')
return hmd_world_pos
@ -727,8 +746,9 @@ class Simulator:
# Trigger data: 1 (closed) <------> 0 (open)
# Analog data: X: -1 (left) <-----> 1 (right) and Y: -1 (bottom) <------> 1 (top)
def get_button_data_for_controller(self, controllerName):
if not self.use_vr_renderer:
return [None, None, None]
if not self.can_access_vr_context:
raise RuntimeError(
'ERROR: Trying to access VR context without enabling vr mode and use_vr in vr settings!')
trigger_fraction, touch_x, touch_y = self.renderer.vrsys.getButtonDataForController(
controllerName)
@ -737,16 +757,18 @@ class Simulator:
# Returns eye tracking data as a list. Order: is_valid, gaze origin, gaze direction, left pupil diameter, right pupil diameter (both diameters in millimeters)
# Call after getDataForVRDevice, to guarantee that latest HMD transform has been acquired
def get_eye_tracking_data(self):
if not self.use_vr_renderer or not self.vr_eye_tracking:
return [None, None, None, None, None]
if not self.can_access_vr_context:
raise RuntimeError(
'ERROR: Trying to access VR context without enabling vr mode and use_vr in vr settings!')
is_valid, origin, dir, left_pupil_diameter, right_pupil_diameter = self.renderer.vrsys.getEyeTrackingData()
return [is_valid, origin, dir, left_pupil_diameter, right_pupil_diameter]
# Sets the starting position of the VR system in iGibson space
def set_vr_start_pos(self, start_pos=None, vr_height_offset=None):
if not self.use_vr_renderer or not start_pos:
return
if not self.can_access_vr_context:
raise RuntimeError(
'ERROR: Trying to access VR context without enabling vr mode and use_vr in vr settings!')
# The VR headset will actually be set to this position during the first frame.
# This is because we need to know where the headset is in space when it is first picked
@ -754,14 +776,15 @@ class Simulator:
self.vr_start_pos = start_pos
# This value can be set to specify a height offset instead of an absolute height.
# We might want to adjust the height of the camera based on the height of the person using VR,
# but still offset this height. When this option is non-zero it offsets the height by the amount
# but still offset this height. When this option is not None it offsets the height by the amount
# specified instead of overwriting the VR system height output.
self.vr_height_offset = vr_height_offset
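For example (positions in meters, values illustrative):

```python
# Force the headset to an absolute height of 1.2 m at the origin
s.set_vr_start_pos(start_pos=[0, 0, 1.2])

# Or keep the user's tracked height at the origin, shifted up by 0.2 m
s.set_vr_start_pos(start_pos=[0, 0, 0], vr_height_offset=0.2)
```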
# Sets the world position of the VR system in iGibson space
def set_vr_pos(self, pos=None):
if not self.use_vr_renderer or not pos:
return
if not self.can_access_vr_context:
raise RuntimeError(
'ERROR: Trying to access VR context without enabling vr mode and use_vr in vr settings!')
offset_to_pos = np.array(pos) - self.get_hmd_world_pos()
self.set_vr_offset(offset_to_pos)
@ -774,15 +797,17 @@ class Simulator:
# Can be used for many things, including adjusting height and teleportation-based movement
# Input must be a list of three floats, corresponding to x, y, z in Gibson coordinate space
def set_vr_offset(self, pos=None):
if not self.use_vr_renderer:
return
if not self.can_access_vr_context:
raise RuntimeError(
'ERROR: Trying to access VR context without enabling vr mode and use_vr in vr settings!')
self.renderer.vrsys.setVROffset(-pos[1], pos[2], -pos[0])
# Gets the current VR offset vector in list form: x, y, z (in Gibson coordinates)
def get_vr_offset(self):
if not self.use_vr_renderer:
return [None, None, None]
if not self.can_access_vr_context:
raise RuntimeError(
'ERROR: Trying to access VR context without enabling vr mode and use_vr in vr settings!')
x, y, z = self.renderer.vrsys.getVROffset()
return [x, y, z]
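As a small illustration of teleportation-based movement with the offset API (the 0.5 m shift is arbitrary):

```python
# Shift the whole VR coordinate system 0.5 m along +x in Gibson coordinates
x, y, z = s.get_vr_offset()
s.set_vr_offset([x + 0.5, y, z])
```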
@ -791,8 +816,9 @@ class Simulator:
# List contains "right", "up" and "forward" vectors in that order
# Device can be one of "hmd", "left_controller" or "right_controller"
def get_device_coordinate_system(self, device):
if not self.use_vr_renderer:
return [None, None, None]
if not self.can_access_vr_context:
raise RuntimeError(
'ERROR: Trying to access VR context without enabling vr mode and use_vr in vr settings!')
vec_list = []
@ -805,11 +831,12 @@ class Simulator:
# Triggers a haptic pulse of the specified strength (0 is weakest, 1 is strongest)
# Device can be one of "hmd", "left_controller" or "right_controller"
def trigger_haptic_pulse(self, device, strength):
if not self.use_vr_renderer:
print("Error: can't use haptics without VR system!")
else:
self.renderer.vrsys.triggerHapticPulseForDevice(
device, int(self.max_haptic_duration * strength))
if not self.can_access_vr_context:
raise RuntimeError(
'ERROR: Trying to access VR context without enabling vr mode and use_vr in vr settings!')
self.renderer.vrsys.triggerHapticPulseForDevice(
device, int(self.max_haptic_duration * strength))
# Note: this function must be called after optimize_vertex_and_texture is called
# Note: this function currently only works with the optimized renderer - please use the renderer hidden list
@ -826,11 +853,21 @@ class Simulator:
self.renderer.update_hidden_state([instance])
return
def get_hidden_state(self, obj):
"""
Returns the current hidden state of the object - hidden (True) or not hidden (False).
"""
for instance in self.renderer.instances:
if obj.body_id == instance.pybullet_uuid:
return instance.hidden
def get_floor_ids(self):
"""
Gets the body ids for all floor objects in the scene. This is used internally
by the VrBody class to disable collisions with the floor.
"""
if not hasattr(self.scene, 'objects_by_id'):
return []
floor_ids = []
for body_id in self.objects:
if body_id in self.scene.objects_by_id.keys() and self.scene.objects_by_id[body_id].category == 'floors':
@ -841,7 +878,6 @@ class Simulator:
def update_position(instance):
"""
Update position for an object or a robot in renderer.
:param instance: Instance in the renderer
"""
body_links_awake = 0

View File

@ -32,11 +32,13 @@ the computer's display when the VR is running
------ vr_device_data (group)
--------- hmd (dataset)
------------ DATA: [is_valid, trans, rot] (len 8)
------------ DATA: [is_valid, trans, rot, right, up, forward] (len 17)
--------- left_controller (dataset)
------------ DATA: [is_valid, trans, rot] (len 8)
------------ DATA: [is_valid, trans, rot, right, up, forward] (len 17)
--------- right_controller (dataset)
------------ DATA: [is_valid, trans, rot] (len 8)
------------ DATA: [is_valid, trans, rot, right, up, forward] (len 17)
--------- vr_position_data (dataset)
------------ DATA: [vr_world_pos, vr_offset] (len 6)
------ vr_button_data (group)
@ -47,13 +49,24 @@ the computer's display when the VR is running
------ vr_eye_tracking_data (dataset)
--------- DATA: [is_valid, origin, dir, left_pupil_diameter, right_pupil_diameter] (len 9)
------ vr_event_data (group)
--------- left_controller (dataset)
------------ DATA: [grip press/unpress, trigger press/unpress, touchpad press/unpress, touchpad touch/untouch, menu press/unpress] (len 10)
--------- right_controller (dataset)
------------ DATA: [grip press/unpress, trigger press/unpress, touchpad press/unpress, touchpad touch/untouch, menu press/unpress] (len 10)
"""
# TODO: Implement this new binary event system!
import h5py
import numpy as np
import pybullet as p
import time
from gibson2.utils.vr_utils import VrData, convert_events_to_binary
class VRLogWriter():
"""Class that handles saving of VR data, physics data and user-defined actions.
@ -111,9 +124,12 @@ class VRLogWriter():
['vr', 'vr_device_data', 'hmd'],
['vr', 'vr_device_data', 'left_controller'],
['vr', 'vr_device_data', 'right_controller'],
['vr', 'vr_device_data', 'vr_position_data'],
['vr', 'vr_button_data', 'left_controller'],
['vr', 'vr_button_data', 'right_controller'],
['vr', 'vr_eye_tracking_data'],
['vr', 'vr_event_data', 'left_controller'],
['vr', 'vr_event_data', 'right_controller']
])
def create_data_map(self):
@ -137,15 +153,20 @@ class VRLogWriter():
'right_camera_pos': np.full((self.frames_before_write, 3), self.default_fill_sentinel)
},
'vr_device_data': {
'hmd': np.full((self.frames_before_write, 8), self.default_fill_sentinel),
'left_controller': np.full((self.frames_before_write, 8), self.default_fill_sentinel),
'right_controller': np.full((self.frames_before_write, 8), self.default_fill_sentinel)
'hmd': np.full((self.frames_before_write, 17), self.default_fill_sentinel),
'left_controller': np.full((self.frames_before_write, 17), self.default_fill_sentinel),
'right_controller': np.full((self.frames_before_write, 17), self.default_fill_sentinel),
'vr_position_data': np.full((self.frames_before_write, 6), self.default_fill_sentinel)
},
'vr_button_data': {
'left_controller': np.full((self.frames_before_write, 3), self.default_fill_sentinel),
'right_controller': np.full((self.frames_before_write, 3), self.default_fill_sentinel)
},
'vr_eye_tracking_data': np.full((self.frames_before_write, 9), self.default_fill_sentinel)
'vr_eye_tracking_data': np.full((self.frames_before_write, 9), self.default_fill_sentinel),
'vr_event_data': {
'left_controller': np.full((self.frames_before_write, 10), self.default_fill_sentinel),
'right_controller': np.full((self.frames_before_write, 10), self.default_fill_sentinel)
}
}
# TIMELINE: Register all actions immediately after calling init
@ -256,10 +277,14 @@ class VRLogWriter():
for device in ['hmd', 'left_controller', 'right_controller']:
is_valid, trans, rot = s.get_data_for_vr_device(device)
right, up, forward = s.get_device_coordinate_system(device)
if is_valid is not None:
data_list = [is_valid]
data_list.extend(trans)
data_list.extend(rot)
data_list.extend(list(right))
data_list.extend(list(up))
data_list.extend(list(forward))
self.data_map['vr']['vr_device_data'][device][self.frame_counter, ...] = np.array(data_list)
if device == 'left_controller' or device == 'right_controller':
@ -267,6 +292,11 @@ class VRLogWriter():
if button_data_list[0] is not None:
self.data_map['vr']['vr_button_data'][device][self.frame_counter, ...] = np.array(button_data_list)
vr_pos_data = []
vr_pos_data.extend(list(s.get_vr_pos()))
vr_pos_data.extend(list(s.get_vr_offset()))
self.data_map['vr']['vr_device_data']['vr_position_data'][self.frame_counter, ...] = np.array(vr_pos_data)
is_valid, origin, dir, left_pupil_diameter, right_pupil_diameter = s.get_eye_tracking_data()
if is_valid is not None:
eye_data_list = [is_valid]
@ -276,6 +306,16 @@ class VRLogWriter():
eye_data_list.append(right_pupil_diameter)
self.data_map['vr']['vr_eye_tracking_data'][self.frame_counter, ...] = np.array(eye_data_list)
controller_events = {
'left_controller': [],
'right_controller': []
}
for device, event in s.get_vr_events():
controller_events[device].append(event)
for controller in controller_events.keys():
bin_events = convert_events_to_binary(controller_events[controller])
self.data_map['vr']['vr_event_data'][controller][self.frame_counter, ...] = np.array(bin_events)
def write_pybullet_data_to_map(self):
"""Write all pybullet data to the class' internal map."""
for pb_id in self.pb_ids:
@ -349,6 +389,8 @@ class VRLogReader():
self.total_frame_num = self.hf['vr/vr_device_data/hmd'].shape[0]
# Boolean indicating if we still have data left to read
self.data_left_to_read = True
# Placeholder VrData object, which will be filled every frame if we are performing action replay
self.vr_data = VrData()
print('----- VRLogReader initialized -----')
print('Preparing to read {0} frames'.format(self.total_frame_num))
@ -410,6 +452,14 @@ class VRLogReader():
if read_duration < frame_duration:
time.sleep(frame_duration - read_duration)
def get_vr_action_data(self):
"""
Returns all vr action data as a VrData object.
"""
# Update VrData with new HF data
self.vr_data.refresh_action_replay_data(self.hf, self.frame_counter)
return self.vr_data
def read_value(self, value_path):
"""Reads any saved value at value_path for the current frame.
@ -428,10 +478,6 @@ class VRLogReader():
an action that was previously registered with the VRLogWriter during data saving
"""
full_action_path = 'action/' + action_path
if self.frame_counter == 0:
print('Printing first frame actions:')
print('Reading action at path {} for frame {}'.format(full_action_path, self.frame_counter))
print(self.hf[full_action_path][self.frame_counter])
return self.hf[full_action_path][self.frame_counter]
# TIMELINE: Use this as the while loop condition to keep reading frames!

View File

@ -1,16 +1,160 @@
"""This module contains vr utility functions."""
"""This module contains vr utility functions and classes."""
import numpy as np
from gibson2.utils.utils import normalizeListVec
def move_player_no_body(s, rTouchX, rTouchY, movement_speed, relative_device):
"""Moves the VR player when they are not using a VR body. Takes in the simulator,
# List of all VR events
VR_EVENT_LIST = [
'grip_press',
'grip_unpress',
'trigger_press',
'trigger_unpress',
'touchpad_press',
'touchpad_unpress',
'touchpad_touch',
'touchpad_untouch',
'menu_press',
'menu_unpress'
]
# ----- Utility classes ------
class VrData(object):
"""
A class that holds VR data for a given frame. This is a clean way to pass
around VR data that has been produced/saved, either in MUVR or in data replay.
The class contains a dictionary with the following key/value pairs:
Key: hmd, left_controller, right_controller
Values: is_valid, trans, rot, right, up, forward
Key: left_controller_button, right_controller_button
Values: trig_frac, touch_x, touch_y
Key: eye_data
Values: is_valid, origin, direction, left_pupil_diameter, right_pupil_diameter
Key: event_data
Values: list of lists, where each sublist is a device, event_type pair
Key: vr_positions
Values: vr_pos (world position of VR in iGibson), vr_offset (offset of VR system from origin)
Key: vr_settings
Values: touchpad_movement, movement_controller, movement_speed, relative_movement_device
"""
def __init__(self):
# All internal data is stored in a dictionary
self.vr_data_dict = dict()
self.controllers = ['left_controller', 'right_controller']
self.devices = ['hmd'] + self.controllers
def query(self, q):
"""
Queries VrData object and returns values. Please see class description for
possible values that can be queried.
q is the input query and must be a string corresponding to one of the keys of the self.vr_data_dict object
"""
if q not in self.vr_data_dict.keys():
raise RuntimeError('ERROR: Key {} does not exist in VR data dictionary!'.format(q))
return self.vr_data_dict[q]
def refresh_action_replay_data(self, ar_data, frame_num):
"""
Updates the vr dictionary with data from action replay. Needs a frame number
to get the correct slice of the saved data.
"""
for device in self.devices:
device_data = ar_data['vr/vr_device_data/{}'.format(device)][frame_num].tolist()
self.vr_data_dict[device] = [device_data[0], device_data[1:4], device_data[4:8], device_data[8:11], device_data[11:14], device_data[14:]]
if device in self.controllers:
self.vr_data_dict['{}_button'.format(device)] = ar_data['vr/vr_button_data/{}'.format(device)][frame_num].tolist()
eye_data = ar_data['vr/vr_eye_tracking_data'][frame_num].tolist()
self.vr_data_dict['eye_data'] = [eye_data[0], eye_data[1:4], eye_data[4:7], eye_data[7], eye_data[8]]
events = []
for controller in self.controllers:
for event in convert_binary_to_events(ar_data['vr/vr_event_data/{}'.format(controller)][frame_num]):
events.append([controller, event])
self.vr_data_dict['event_data'] = events
pos_data = ar_data['vr/vr_device_data/vr_position_data'][frame_num].tolist()
self.vr_data_dict['vr_positions'] = [pos_data[:3], pos_data[3:]]
# Action replay does not use VR settings, so we leave this as an empty list
self.vr_data_dict['vr_settings'] = []
def refresh_muvr_data(self, muvr_data):
"""
Updates the vr dictionary with data from MUVR.
"""
for device in self.devices:
device_data = muvr_data[device]
self.vr_data_dict[device] = device_data[:6]
if device in self.controllers:
self.vr_data_dict['{}_button'.format(device)] = device_data[6:]
self.vr_data_dict['eye_data'] = muvr_data['eye_data']
self.vr_data_dict['event_data'] = muvr_data['event_data']
self.vr_data_dict['vr_positions'] = [muvr_data['vr_pos'], muvr_data['vr_offset']]
self.vr_data_dict['vr_settings'] = muvr_data['vr_settings']
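A sketch of how replay code is expected to consume this class, reading one frame of a saved log straight into a VrData object and querying it. The log file name is a placeholder:

```python
import h5py
from gibson2.utils.vr_utils import VrData

vr_data = VrData()
with h5py.File('vr_logs/demo.h5', 'r') as hf:  # placeholder filename
    vr_data.refresh_action_replay_data(hf, 0)  # frame 0
    is_valid, trans, rot, right, up, forward = vr_data.query('hmd')
    trig_frac, touch_x, touch_y = vr_data.query('right_controller_button')
    for device, event in vr_data.query('event_data'):
        print(device, event)
```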
# ----- Utility functions ------
def calc_z_dropoff(theta, t_min, t_max):
"""
Calculates and returns the dropoff coefficient for a z rotation (used in both VR body and Fetch VR).
The dropoff is 1 if theta > t_max, falls off quadratically between t_max and t_min, and is clamped to 0 below t_min.
"""
z_mult = 1.0
if t_min < theta and theta < t_max:
# Apply the following quadratic to get faster falloff closer to the poles:
# y = -1/(min_z - max_z)^2 * x^2 + 2*max_z/(min_z - max_z)^2 * x + (min_z^2 - 2*min_z*max_z)/(min_z - max_z)^2
d = (t_min - t_max) ** 2
z_mult = -1/d * theta ** 2 + 2*t_max/d * theta + (t_min ** 2 - 2*t_min*t_max)/d
elif theta < t_min:
z_mult = 0.0
return z_mult
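A quick numeric check of the behaviour described above, using arbitrary angle bounds:

```python
from gibson2.utils.vr_utils import calc_z_dropoff

t_min, t_max = 0.0, 1.0
print(calc_z_dropoff(1.5, t_min, t_max))   # 1.0  - above t_max, no attenuation
print(calc_z_dropoff(0.5, t_min, t_max))   # 0.75 - quadratic falloff inside (t_min, t_max)
print(calc_z_dropoff(-0.2, t_min, t_max))  # 0.0  - below t_min, fully clamped
```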
def convert_events_to_binary(events):
"""
Converts a list of vr events to binary form, resulting in the following list:
[grip press/unpress, trigger press/unpress, touchpad press/unpress, touchpad touch/untouch, menu press/unpress]
"""
bin_events = [0] * 10
for event in events:
event_idx = VR_EVENT_LIST.index(event)
bin_events[event_idx] = 1
return bin_events
def convert_binary_to_events(bin_events):
"""
Converts a list of binary vr events to string names, from the following list:
[grip press/unpress, trigger press/unpress, touchpad press/unpress, touchpad touch/untouch, menu press/unpress]
"""
str_events = []
for i in range(10):
if bin_events[i]:
str_events.append(VR_EVENT_LIST[i])
return str_events
def move_player(s, touch_x, touch_y, movement_speed, relative_device):
"""Moves the VR player. Takes in the simulator,
information from the right touchpad, player movement speed and the device relative to which
we would like to move."""
s.set_vr_offset(calc_offset(s, touch_x, touch_y, movement_speed, relative_device))
def calc_offset(s, touch_x, touch_y, movement_speed, relative_device):
curr_offset = s.get_vr_offset()
right, _, forward = s.get_device_coordinate_system(relative_device)
new_offset = translate_vr_position_by_vecs(rTouchX, rTouchY, right, forward, curr_offset, movement_speed)
s.set_vr_offset(new_offset)
return translate_vr_position_by_vecs(touch_x, touch_y, right, forward, curr_offset, movement_speed)
def get_normalized_translation_vec(right_frac, forward_frac, right, forward):
"""Generates a normalized translation vector that is a linear combination of forward and right."""
@ -23,3 +167,12 @@ def translate_vr_position_by_vecs(right_frac, forward_frac, right, forward, curr
"""direction vectors of the chosen VR device (HMD/controller), and adds this vector to the current offset."""
vr_offset_vec = get_normalized_translation_vec(right_frac, forward_frac, right, forward)
return [curr_offset[i] + vr_offset_vec[i] * movement_speed for i in range(3)]
if __name__ == "__main__":
print('Running VR utils tests...')
example_events = ['grip_press', 'touchpad_touch', 'menu_unpress']
bin_events = convert_events_to_binary(example_events)
print(bin_events)
recovered_events = convert_binary_to_events(bin_events)
print(recovered_events)

View File

@ -131,6 +131,7 @@ class PostInstallCommand(install):
check_call("bash realenv/envs/build.sh".split())
install.run(self)
'''
with open("README.md", "r", encoding="utf-8") as fh:
long_description = fh.read()