Merge branch 'og-develop' into better-install

Cem Gökmen 2024-10-01 01:09:50 -07:00 committed by GitHub
commit a34866aee3
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
17 changed files with 188 additions and 6 deletions

.github/workflows/publish-pypi.yml  (vendored, new file, +38 lines)

@@ -0,0 +1,38 @@
# This workflow will upload a Python Package to PyPI when a release is created
# For more information see: https://docs.github.com/en/actions/automating-builds-and-tests/building-and-testing-python#publishing-to-package-registries
# This workflow uses actions that are not certified by GitHub.
# They are provided by a third-party and are governed by
# separate terms of service, privacy policy, and support
# documentation.
name: Upload Python Package
on:
release:
types: [published]
jobs:
pypi-publish:
name: Upload release to PyPI
runs-on: ubuntu-latest
environment:
name: pypi
url: https://pypi.org/p/omnigibson
permissions:
id-token: write # IMPORTANT: this permission is mandatory for trusted publishing
contents: read
steps:
- uses: actions/checkout@v4
- name: Set up Python
uses: actions/setup-python@v3
with:
python-version: '3.x'
- name: Install dependencies
run: |
python -m pip install --upgrade pip
pip install setuptools wheel twine
- name: Build package
run: python setup.py sdist
- name: Publish package distributions to PyPI
uses: pypa/gh-action-pypi-publish@v1.10.2

.gitignore  (vendored, +1 line)

@@ -72,6 +72,7 @@ gibson/assets
notebook
build
dist
omnigibson.egg-info
# Directories used for QC pipeline
omnigibson/utils/data_utils/mesh_decimation/collision


@@ -13,6 +13,8 @@
-------
### Latest Updates
- [10/01/24] **v1.1.0**: Major improvements, stability fixes, pip installation, and much more! [[release notes]](https://github.com/StanfordVL/OmniGibson/releases/tag/v1.1.0)
- [03/17/24] **v1.0.0**: First full release with 1,004 pre-sampled tasks, all 50 scenes, and many new objects! [[release notes]](https://github.com/StanfordVL/OmniGibson/releases/tag/v1.0.0)
- [08/04/23] **v0.2.0**: More assets! 600 pre-sampled tasks, 7 new scenes, and many new objects 📈 [[release notes]](https://github.com/StanfordVL/OmniGibson/releases/tag/v0.2.0)


@@ -1,4 +1,4 @@
FROM nvcr.io/nvidia/isaac-sim:4.0.0
FROM nvcr.io/nvidia/isaac-sim:4.1.0
# Set up all the prerequisites.
RUN apt-get update && DEBIAN_FRONTEND=noninteractive apt-get install -y \

docs/assets/robots/A1.png  (new binary file, 46 KiB)

docs/assets/robots/R1.png  (new binary file, 53 KiB)

(new binary file, name not shown, 37 KiB)

(modified binary file, name not shown, 700 KiB → 1.8 MiB)


@@ -1,3 +1,7 @@
---
icon: material/email
---
# **Contact**
If you have any questions, comments, or concerns, please feel free to reach out to us by joining our Discord server:


@@ -1,3 +1,7 @@
---
icon: material/arm-flex
---
# **Contribution Guidelines**
We sincerely welcome contributions of any form to OmniGibson; our aim is to make it a more robust and useful resource for the community. We have always believed that a collective community effort is essential to tailor BEHAVIOR/OmniGibson to diverse needs and unlock its full potential.
@@ -49,3 +53,17 @@ The BEHAVIOR suite has continuous integration running via GitHub Actions in cont
* When GitHub releases are created, a source distribution is built and published to PyPI by a hosted runner
For more information about the workflows and runners, please reach out on our Discord channel.
## **Release Process**
At the time of each release, we follow the process below:
1. Update the version of OmniGibson in the pyproject.toml and __init__.py files.
2. Add a release note to the README.md file.
3. Push to `og-develop`.
4. Wait for all tests to finish, confirm they pass, and confirm the docs build on behavior-website.
5. Push `og-develop` to `main`.
6. Click Create Release on GitHub and tag the version, prefixed with the letter `v`.
7. Write the release notes. You can start from the automated release notes, but edit them to include the important information.
8. Create the release.
9. Wait for the Docker and PyPI releases to finish, and confirm they succeed.
10. Announce on Discord and other user channels.
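Step 1 of the process above (keeping the two version strings in sync) can be sanity-checked before pushing. The sketch below is illustrative only; the regexes and the `extract_versions` helper are assumptions for this example, not part of the repository:

```python
import re

def extract_versions(pyproject_text: str, init_text: str):
    """Pull the version strings out of pyproject.toml and __init__.py contents.

    Hypothetical helper; assumes a `version = "X.Y.Z"` line in pyproject.toml
    and a `__version__ = "X.Y.Z"` line in __init__.py.
    """
    pyproject_version = re.search(
        r'^version\s*=\s*"([^"]+)"', pyproject_text, re.MULTILINE
    ).group(1)
    init_version = re.search(
        r'^__version__\s*=\s*"([^"]+)"', init_text, re.MULTILINE
    ).group(1)
    return pyproject_version, init_version

# The two files should agree before tagging a release.
pyproject = 'name = "omnigibson"\nversion = "1.1.0"\n'
init = '__version__ = "1.1.0"\n'
assert extract_versions(pyproject, init) == ("1.1.0", "1.1.0")
```

A check like this can be run locally or as a CI step so a mismatched version bump fails before the tag is created.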


@@ -1,3 +1,7 @@
---
icon: material/file-question
---
# **Frequently Asked Questions**
## **What is the relationship between BEHAVIOR-1K and OmniGibson?**


@@ -1,3 +1,7 @@
---
icon: octicons/question-16
---
# **Known Issues & Troubleshooting**
## 🤔 **Known Issues**


@@ -80,7 +80,7 @@ Controllers and sensors can be accessed directly via the `controllers` and `sens
## Types
**`OmniGibson`** currently supports 9 robots, consisting of 4 mobile robots, 2 manipulation robots, 2 mobile manipulation robots, and 1 anthropomorphic "robot" (a bimanual agent proxy used for VR teleoperation). Below, we provide a brief overview of each model:
**`OmniGibson`** currently supports 12 robots, consisting of 4 mobile robots, 3 manipulation robots, 4 mobile manipulation robots, and 1 anthropomorphic "robot" (a bimanual agent proxy used for VR teleoperation). Below, we provide a brief overview of each model:
### Mobile Robots
These are navigation-only robots (an instance of [`LocomotionRobot`](../reference/robots/locomotion_robot.md)) that solely consist of a base that can move.
@@ -170,6 +170,19 @@ These are manipulation-only robots (an instance of [`ManipulationRobot`](../refe
<img src="../assets/robots/VX300S.png" alt="rgb">
</td>
</tr>
<tr>
<td valign="top" width="60%">
[**`A1`**](../reference/robots/A1.md)<br><br>
The 6-DOF A1 model equipped with an Inspire-Robots Dexterous Hand.<br><br>
<ul>
<li>_Controllers_: Arm, Gripper</li>
<li>_Sensors_: Wrist Camera</li>
</ul>
</td>
<td>
<img src="../assets/robots/A1.png" alt="rgb">
</td>
</tr>
</table>
@@ -203,6 +216,32 @@ These are robots that can both navigate and manipulate (and inherit from both [`
<img src="../assets/robots/Tiago.png" alt="rgb">
</td>
</tr>
<tr>
<td valign="top" width="60%">
[**`Stretch`**](../reference/robots/stretch.md)<br><br>
The <a href="https://hello-robot.com/stretch-3-product">Stretch</a> model from Hello Robot, composed of a two-wheeled base, 2-DOF head, 5-DOF arm, and 1-DOF gripper.<br><br>
<ul>
<li>_Controllers_: Base, Head, Arm, Gripper</li>
<li>_Sensors_: Head Camera</li>
</ul>
</td>
<td>
<img src="../assets/robots/Stretch.png" alt="rgb">
</td>
</tr>
<tr>
<td valign="top" width="60%">
[**`R1`**](../reference/robots/R1.md)<br><br>
The bimanual R1 model, composed of a holonomic base (which we model as a 3-DOF (x, y, rz) set of joints), a 4-DOF torso, two 6-DOF arms, and two 2-DOF parallel-jaw grippers.<br><br>
<ul>
<li>_Controllers_: Base, Left Arm, Right Arm, Left Gripper, Right Gripper</li>
<li>_Sensors_: Head Camera</li>
</ul>
</td>
<td>
<img src="../assets/robots/R1.png" alt="rgb">
</td>
</tr>
</table>
### Additional Robots


@@ -34,3 +34,75 @@ action = teleop_sys.get_action(teleop_sys.get_obs())
```
to get the action based on the user's teleoperation input, and pass it to the `env.step` function.
## Data Collection and Playback
OmniGibson provides tools for collecting demonstration data and playing it back for further analysis, training, or evaluation. This is implemented via two environment wrapper classes: `DataCollectionWrapper` and `DataPlaybackWrapper`.
### DataCollectionWrapper
The `DataCollectionWrapper` is used to collect data during environment interactions. It wraps around an existing OmniGibson environment and records relevant information at each step.
Key features:
- Records actions, states, rewards, and termination conditions
- Optimizes the simulator for data collection
- Tracks object and system transitions within the environment
Example usage:
```python
import omnigibson as og
from omnigibson.envs import DataCollectionWrapper
# Create your OmniGibson environment
env = og.Environment(configs=your_config)
# Wrap it with DataCollectionWrapper
wrapped_env = DataCollectionWrapper(
env=env,
output_path="path/to/save/data.hdf5",
only_successes=False, # Set to True to only save successful episodes
)
# Use the wrapped environment as you would normally
obs, info = wrapped_env.reset()
for _ in range(num_steps):
action = your_policy(obs)
obs, reward, terminated, truncated, info = wrapped_env.step(action)
# Save the collected data
wrapped_env.save_data()
```
### DataPlaybackWrapper
The `DataPlaybackWrapper` is used to replay collected data and optionally record additional observations. This is particularly useful for gathering visual data or other sensor information that wasn't collected during the initial demonstration.
Key features:
- Replays episodes from collected data
- Can record additional observation modalities during playback
- Supports custom robot sensor configurations and external sensors
Example usage:
```python
from omnigibson.envs import DataPlaybackWrapper
# Create a playback environment
playback_env = DataPlaybackWrapper.create_from_hdf5(
input_path="path/to/collected/data.hdf5",
output_path="path/to/save/playback/data.hdf5",
robot_obs_modalities=["proprio", "rgb", "depth_linear"],
robot_sensor_config=your_robot_sensor_config,
external_sensors_config=your_external_sensors_config,
n_render_iterations=5,
only_successes=False,
)
# Playback the entire dataset and record observations
playback_env.playback_dataset(record=True)
# Save the recorded playback data
playback_env.save_data()
```


@@ -131,9 +131,9 @@ nav:
- FAQ: miscellaneous/faq.md
- Known Issues & Troubleshooting: miscellaneous/known_issues.md
- Contributing: miscellaneous/contributing.md
- Changelog: https://github.com/StanfordVL/OmniGibson/releases
- Contact Us: miscellaneous/contact.md
- API Reference: reference/*
- Changelog: https://github.com/StanfordVL/OmniGibson/releases
extra:
analytics:


@@ -28,7 +28,7 @@ import nest_asyncio
nest_asyncio.apply()
__version__ = "1.0.0"
__version__ = "1.1.0"
root_path = os.path.dirname(os.path.realpath(__file__))


@@ -13,7 +13,7 @@ long_description = "".join(lines)
setup(
name="omnigibson",
version="1.0.0",
version="1.1.0",
author="Stanford University",
long_description_content_type="text/markdown",
long_description=long_description,