Merge branch 'og-develop' into rl-multiple-envs
@@ -12,10 +12,25 @@ concurrency:
   cancel-in-progress: true
 
 jobs:
-  test:
+  run_test:
     name: Run Tests
     runs-on: [self-hosted, linux, gpu, dataset-enabled]
 
+    strategy:
+      matrix:
+        test_file:
+          - test_dump_load_states
+          - test_envs
+          - test_object_removal
+          - test_object_states
+          - test_primitives
+          - test_robot_states
+          - test_robot_teleoperation
+          - test_sensors
+          - test_symbolic_primitives
+          - test_systems
+          - test_transition_rules
+
     defaults:
       run:
        shell: micromamba run -n omnigibson /bin/bash -leo pipefail {0}
@@ -40,14 +55,37 @@ jobs:
 
       - name: Run tests
         working-directory: omnigibson-src
-        run: pytest --junitxml=results.xml
+        run: pytest tests/${{ matrix.test_file }}.py --junitxml=${{ matrix.test_file }}.xml && cp ${{ matrix.test_file }}.xml ${GITHUB_WORKSPACE}/
 
-      - name: Test Report
+      - name: Deploy artifact
+        uses: actions/upload-artifact@v3
+        with:
+          name: ${{ github.run_id }}-tests-${{ matrix.test_file }}
+          path: ${{ matrix.test_file }}.xml
+
+  upload_report:
+    name: Compile Report
+    runs-on: [self-hosted, linux]
+    defaults:
+      run:
+        shell: micromamba run -n omnigibson /bin/bash -leo pipefail {0}
+    needs: [run_test]
+    steps:
+      - name: Checkout source
+        uses: actions/checkout@v2
+        with:
+          submodules: true
+          path: omnigibson-src
+      - name: Pull reports
+        uses: actions/download-artifact@v3
+        with:
+          path: omnigibson-src
+      - name: Test Report0
         uses: dorny/test-reporter@v1
         with:
           name: Test Results
           working-directory: omnigibson-src
-          path: results.xml
+          path: ${{ github.run_id }}-tests-*/test_*.xml
           reporter: java-junit
           fail-on-error: 'true'
           fail-on-empty: 'true'
@@ -4,8 +4,8 @@ repos:
     hooks:
       - id: black
         language_version: python3.10
-  # - repo: https://github.com/pycqa/isort
-  #   rev: 5.8.0
-  #   hooks:
-  #     - id: isort
-  #       name: isort (python)
+  - repo: https://github.com/pycqa/isort
+    rev: 5.13.2
+    hooks:
+      - id: isort
+        name: isort (python)
@@ -1,7 +1,7 @@
 # <h1><img height="40" src="./docs/assets/OmniGibson_logo.png" style="float:left;padding-right:10px"> OmniGibson</h1>
-[](https://github.com/StanfordVL/OmniGibson/actions/workflows/tests.yml)
+[](https://github.com/StanfordVL/OmniGibson/actions/workflows/tests.yml)
 [](https://hub.docker.com/r/stanfordvl/omnigibson)
 [](https://stanfordvl.github.io/OmniGibson/profiling/)
(Binary image changes: 23 image files added or updated, ranging from 700 KiB to 2.2 MiB.)
@@ -8,7 +8,7 @@ icon: material/knob
 In **`OmniGibson`**, `Controller`s convert high-level actions into low-level joint motor (position, velocity, or effort) controls for a subset of an individual [`Robot`](./robots.md)'s joints.
 
-In an [`Environment`](./environment.md) instance, actions are passed to controllers via the `env.step(action)` call, resulting in the following behavior:
+In an [`Environment`](./environments.md) instance, actions are passed to controllers via the `env.step(action)` call, resulting in the following behavior:
 
 <div class="annotate" markdown>
 - When `env.step(action)` is called, actions are parsed and passed to the respective robot owned by the environment (`env.robots`) via `robot.apply_action(action)`
@@ -1,42 +0,0 @@
----
-icon: material/earth
----
-
-# 🌎 **Environment**
-
-The OpenAI Gym Environment serves as a top-level simulation object, offering a suite of common interfaces. These include methods such as `step`, `reset`, `render`, and properties like `observation_space` and `action_space`. The OmniGibson Environment builds upon this foundation by also supporting the loading of scenes, robots, and tasks. Following the OpenAI Gym interface, the OmniGibson environment further provides access to both the action space and observation space of the robots and external sensors.
-
-Creating a minimal environment requires the definition of a config dictionary. This dictionary should contain details about the scene, objects, robots, and specific characteristics of the environment:
-
-<details>
-<summary>Click to see code!</summary>
-<pre><code>
-import omnigibson as og
-
-cfg = {
-    "env": {
-        "action_frequency": 10,
-        "physics_frequency": 120,
-    },
-    "scene": {
-        "type": "Scene",
-    },
-    "objects": [],
-    "robots": [
-        {
-            "type": "Fetch",
-            "obs_modalities": 'all',
-            "controller_config": {
-                "arm_0": {
-                    "name": "NullJointController",
-                    "motor_type": "position",
-                },
-            },
-        }
-    ]
-}
-
-env = og.Environment(configs=cfg)
-</code></pre>
-</details>
@@ -0,0 +1,57 @@
+---
+icon: material/earth
+---
+
+# 🌎 **Environment**
+
+## Description
+
+**`OmniGibson`**'s Environment class is an [OpenAI gym-compatible](https://gymnasium.farama.org/content/gym_compatibility/) interface and is the main entry point for interacting with the underlying simulation. A single environment loads a user-specified scene, object(s), robot(s), and task combination, and steps the resulting simulation while deploying user-specified actions to the loaded robot(s) and returning observations and task information from the simulator.
+
+## Usage
+
+### Creating
+
+Creating a minimal environment requires the definition of a config dictionary. This dictionary should contain details about the [scene](./scenes.md), [objects](./objects.md), [robots](./robots.md), and specific characteristics of the environment:
+
+??? code "env_simple.py"
+
+    ``` python linenums="1"
+    import omnigibson as og
+
+    cfg = {
+        "env": {
+            "action_frequency": 30,
+            "physics_frequency": 60,
+        },
+        "scene": {
+            "type": "Scene",
+        },
+        "objects": [],
+        "robots": [
+            {
+                "type": "Fetch",
+                "obs_modalities": 'all',
+                "controller_config": {
+                    "arm_0": {
+                        "name": "NullJointController",
+                        "motor_type": "position",
+                    },
+                },
+            }
+        ]
+    }
+
+    env = og.Environment(configs=cfg)
+    action = ...
+    obs, reward, done, info = env.step(action)
+    ```
+
+### Runtime
+
+Once created, the environment can be interfaced with in roughly the same way as an OpenAI gym environment, and includes common methods such as `step`, `reset`, and `render`, as well as properties such as `observation_space` and `action_space`. Stepping the environment is done via `obs, reward, done, info = env.step(action)`, and a reset can be manually executed via `obs = env.reset()`. Robots are tracked explicitly via `env.robots`, and the underlying scene and all corresponding objects within the scene can be accessed via `env.scene`.
+
+## Types
+
+**`OmniGibson`** provides the main [`Environment`](../reference/environments/env_base.html) class, which should offer most of the essential functionality necessary for running robot experiments and interacting with the underlying simulator.
+
+However, for more niche use-cases (such as demonstration collection, or batched environments), **`OmniGibson`** offers the [`EnvironmentWrapper`](../reference/environments/env_wrapper.html) class to easily extend the core environment functionality.
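The step/reset contract documented in the added environments.md page can be sketched without a simulator. Everything below (the `ToyEnv` class and its internals) is a hypothetical illustration of the gym-style interface, not OmniGibson code:

```python
# Hypothetical sketch of the gym-style interface contract described above.
# ToyEnv is NOT OmniGibson code; it only mirrors the step/reset signatures.
import random


class ToyEnv:
    def __init__(self, max_steps=5):
        self.max_steps = max_steps
        self._t = 0

    def reset(self):
        """Return the initial observation, mirroring `obs = env.reset()`."""
        self._t = 0
        return {"proprio": [0.0]}

    def step(self, action):
        """Mirror `obs, reward, done, info = env.step(action)`."""
        self._t += 1
        obs = {"proprio": [float(self._t)]}
        reward = 0.0
        done = self._t >= self.max_steps
        info = {"t": self._t}
        return obs, reward, done, info


# Standard rollout loop, identical in shape to the documented usage.
env = ToyEnv()
obs = env.reset()
done = False
while not done:
    action = random.random()  # stand-in for a real policy
    obs, reward, done, info = env.step(action)
```

The real `og.Environment` additionally owns a scene, robots, and a task, but the control flow of a rollout is the same.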
@@ -1,54 +0,0 @@
----
-icon: material/food-apple-outline
----
-
-# 🍎 **Object**
-
-Objects, such as furniture, are essential to building manipulation environments. We designed the MujocoObject interfaces to standardize and simplify the procedure for importing 3D models into the scene or procedurally generating new objects. MuJoCo defines models via the MJCF XML format. These MJCF files can either be stored as XML files on disk and loaded into the simulator, or be created on-the-fly by code prior to simulation.
-
-## Usage
-
-### Importing Objects
-
-Objects can be added to a given `Environment` instance by specifying them in the config that is passed to the environment constructor via the `objects` key. This is expected to be a list of dictionaries, each of which specifies the desired configuration for a single object to be created. For each dict, the `type` key is required and specifies the desired object class, and global `position` and `orientation` (in (x,y,z,w) quaternion form) can also be specified. Additional keys can be specified and will be passed directly to the specific robot class constructor. An example of a robot configuration is shown below in `.yaml` form:
-
-??? code "single_object_config_example.yaml"
-    ``` yaml linenums="1"
-    robots:
-      - type: USDObject
-        name: some_usd_object
-        usd_path: your_path_to_model.usd
-        visual_only: False
-        position: [0, 0, 0]
-        orientation: [0, 0, 0, 1]
-        scale: [0.5, 0.6, 0.7]
-    ```
-
-`OmniGibson` supports 6 types of objects, shown as follows:
-
-- `ControllableObject`: This class represents objects that can be controlled through joint controllers. It is used as the parent class of the robot classes and provides functionality to apply control actions to the objects. In general, users should not create objects of this class, but rather directly spawn the desired robot type in the `robots` section of the config.
-
-- `StatefulObject`: This class represents objects that come with object states. For more information regarding object states please take a look at `object_states`. This is also meant to be a parent class, and should generally not be instantiated directly.
-
-- `PrimitiveObject`: This class represents primitive shape objects (Cubes, Spheres, Cones, etc.) These are usually used as visual objects in the scene. For example, users can instantiate a sphere object to visualize the target location of a robot reaching task, and set its property `visual_only` to true to disable its kinematics and collision with other objects.
-
-- `LightObject`: This class specifically represents lights in the scene, and provides functionality to modify the properties of lights. There are several types of lights users can instantiate in OmniGibson: cylinder light, disk light, distant light, dome light, geometry light, rectangle light, and sphere light. Users can choose whichever type of light works best, and set the `intensity` property to control its brightness.
-
-- `USDObject`: This class represents objects loaded through a USD file. This is useful when users want to load a custom USD asset into the simulator. Users should specify the `usd_path` parameter of the `USDObject` in order to load the desired file of their choice.
-
-- `DatasetObject`: This class inherits from `USDObject` and represents objects from the OmniGibson dataset. Users should specify the category of objects they want to load, as well as the model id, which is a 6 character string unique to each dataset object. For the possible categories and models, please refer to our [Knowledgebase Dashboard](https://behavior.stanford.edu/knowledgebase/)
-
-### Runtime
-
-Usually, objects are instantiated upon startup. We can modify certain properties of the object while the simulator is running. For example, one might desire to teleport the object from one place to another; simply calling `object.set_position_orientation(new_pos, new_orn)` will do the job. Another example might be to highlight an object by setting `object.highlighted = True`; the object will then be highlighted in pink in the scene.
-
-To access the objects from the environment, one can call `env.scene.object_registry`. Here are a couple of examples:
-
-- `env.scene.object_registry("name", OBJECT_NAME)`: get the object by its name
-
-- `env.scene.object_registry("category", CATEGORY)`: get the object by its category
-
-- `env.scene.object_registry("prim_path", PRIM_PATH)`: get the object by its prim path
@@ -55,7 +55,7 @@ It is important to note that object states are usually queried / computed _on de
 { .annotate }
 
-## Object State Types
+## Types
 **`OmniGibson`** currently supports 34 object states, consisting of 19 `AbsoluteObjectState`s, 11 `RelativeObjectState`s, and 4 `IntrinsicObjectState`s. Below, we provide a brief overview of each type:
 
 ### `AbsoluteObjectState`
@@ -0,0 +1,91 @@
+---
+icon: material/food-apple-outline
+---
+
+# 🍎 **Object**
+
+## Description
+
+In **`OmniGibson`**, `Object`s define entities that can be placed arbitrarily within a given [`Scene`](./scenes.md). These entities can range from arbitrarily imported USD assets, to BEHAVIOR-specific dataset assets, to procedurally generated lights. `Object`s serve as the main building block of a given `Scene` instance, and provide a unified interface for quickly prototyping scenes.
+
+## Usage
+
+### Importing Objects
+
+Objects can be added to a given `Environment` instance by specifying them in the config that is passed to the environment constructor via the `objects` key. This is expected to be a list of dictionaries, each of which specifies the desired configuration for a single object to be created. For each dict, the `type` key is required and specifies the desired object class, and global `position` and `orientation` (in (x,y,z,w) quaternion form) can also be specified. Additional keys can be specified and will be passed directly to the specific object class constructor. An example of an object configuration is shown below in `.yaml` form:
+
+??? code "object_config_example.yaml"
+    ``` yaml linenums="1"
+    objects:
+      - type: USDObject # For your custom imported object
+        name: my_object
+        usd_path: your_path_to_model.usd
+        visual_only: False
+        position: [0, 0, 0]
+        orientation: [0, 0, 0, 1]
+        scale: [0.5, 0.6, 0.7]
+      - type: DatasetObject # For a pre-existing BEHAVIOR-1K object
+        name: apple0
+        category: apple
+        model: agveuv
+        position: [0, 0, 0.5]
+        orientation: [0, 0, 0, 1]
+        scale: [0.4, 0.4, 0.4]
+    ```
+
+Alternatively, an object can be directly imported at runtime by first creating the object class instance (e.g.: `obj = DatasetObject(...)`) and then importing it via `og.sim.import_object(obj)`. This can be useful for iteratively prototyping a desired scene configuration.
+
+### Runtime
+
+Once an object is imported into the simulator / environment, we can directly query and set various properties. For example, to teleport the object, simply call `object.set_position_orientation(new_pos, new_orn)`. Setting a desired joint configuration can be done via `obj.set_joint_positions(new_joint_pos)`.
+
+??? warning annotate "Some attributes require sim cycling"
+
+    For properties that fundamentally alter an object's physical behavior (such as scale, enabled collisions, or collision filter pairs), values set at runtime will **not** propagate until the simulator is stopped (`og.sim.stop()`) and re-started (`og.sim.play()`).
+
+All objects are tracked and organized by the underlying scene, and can quickly be [queried by relevant properties](./scenes.md#runtime).
+
+## Types
+
+**`OmniGibson`** directly supports multiple `Object` classes, which are intended to encapsulate different types of objects with varying functionalities. The most basic is [`BaseObject`](../reference/objects/object_base.html), which can capture any arbitrary object and thinly wraps an [`EntityPrim`](../reference/objects/entity_prim.html). The more specific classes are shown below:
+
+<table markdown="span">
+    <tr>
+        <td valign="top">
+            [**`StatefulObject`**](../reference/objects/stateful_object.html)<br><br>
+            Encapsulates an object that owns a set of [object states](./object_states.html). In general, this is intended to be a parent class, and not meant to be instantiated directly.<br><br>
+        </td>
+    </tr>
+    <tr>
+        <td valign="top">
+            [**`USDObject`**](../reference/objects/usd_object.html)<br><br>
+            Encapsulates an object imported from a USD file. Useful when loading custom USD assets into **`OmniGibson`**. Users should specify the absolute `usd_path` to the desired file to import.<br><br>
+        </td>
+    </tr>
+    <tr>
+        <td valign="top">
+            [**`DatasetObject`**](../reference/objects/dataset_object.html)<br><br>
+            This inherits from `USDObject` and encapsulates an object from the BEHAVIOR-1K dataset. Users should specify the `category` and `model` of object to load, where `model` is a 6 character string unique to each dataset object. For an overview of all possible categories and models, please refer to our [Knowledgebase Dashboard](https://behavior.stanford.edu/knowledgebase/)<br><br>
+        </td>
+    </tr>
+    <tr>
+        <td valign="top">
+            [**`PrimitiveObject`**](../reference/objects/primitive_object.html)<br><br>
+            Encapsulates an object defined by a single primitive geom, such as a sphere, cube, or cylinder. These are often used as visual objects (via `visual_only=True`) in the scene, e.g., for visualizing the target location of a robot reaching task.<br><br>
+        </td>
+    </tr>
+    <tr>
+        <td valign="top">
+            [**`LightObject`**](../reference/objects/light_object.html)<br><br>
+            Encapsulates a virtual light source, where the shape (sphere, disk, dome, etc.), size, and intensity can all be specified.<br><br>
+        </td>
+    </tr>
+    <tr>
+        <td valign="top">
+            [**`ControllableObject`**](../reference/objects/controllable_object.html)<br><br>
+            Encapsulates an object that is motorized, for example, a conveyor belt, and provides functionality to apply actions and deploy control signals to the motors. However, currently this class is used exclusively as a parent class of `BaseRobot`, and should not be instantiated directly by users.<br><br>
+        </td>
+    </tr>
+</table>
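The per-object config convention described in the added objects.md page (a required `type` key, plus optional `position` in (x, y, z) and `orientation` in (x, y, z, w) quaternion form) can be made concrete with a small validation helper. `validate_object_cfg` and `KNOWN_TYPES` are hypothetical, shown only for illustration; they are not part of OmniGibson's API:

```python
# Illustrative validator for object config dicts as described above.
# `validate_object_cfg` is a hypothetical helper, not an OmniGibson function.
# A subset of the documented object classes, for illustration only:
KNOWN_TYPES = {"USDObject", "DatasetObject", "PrimitiveObject", "LightObject"}


def validate_object_cfg(cfg):
    """Check the documented convention: required `type`, optional global pose."""
    if "type" not in cfg:
        raise ValueError("object config requires a 'type' key")
    if cfg["type"] not in KNOWN_TYPES:
        raise ValueError(f"unknown object type: {cfg['type']}")
    if "position" in cfg and len(cfg["position"]) != 3:
        raise ValueError("position must be (x, y, z)")
    if "orientation" in cfg and len(cfg["orientation"]) != 4:
        raise ValueError("orientation must be an (x, y, z, w) quaternion")
    return cfg


# Mirrors the DatasetObject entry from the yaml example above.
objects_cfg = [
    validate_object_cfg({
        "type": "DatasetObject",
        "name": "apple0",
        "category": "apple",
        "model": "agveuv",
        "position": [0, 0, 0.5],
        "orientation": [0, 0, 0, 1],
    }),
]
```

Any extra keys (such as `category` and `model` here) pass through untouched, matching the documented behavior of forwarding them to the object class constructor.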
@@ -14,10 +14,10 @@ icon: material/graph-outline
 We build upon IsaacSim's `Simulator` interface to construct our `Environment` class, which is an [OpenAI gym-compatible](https://gymnasium.farama.org/content/gym_compatibility/) interface and the main entry point into **`OmniGibson`**. An `Environment` instance generally consists of the following:
 
-- A [`Scene`](./scene.md) instance, which by default is a "dummy" (empty) or a full-populated (`InteractiveTraversableScene`) instance,
+- A [`Scene`](./scenes.md) instance, which by default is a "dummy" (empty) or a fully-populated (`InteractiveTraversableScene`) instance,
 - A [`BaseTask`](./task.md) instance, which can range from a complex `BehaviorTask`, navigation `PointNavigationTask`, or no-op `DummyTask`,
 - Optionally, one or more [`BaseRobot`](./robots.md)s, which define the action space for the given environment instance,
-- Optionally, one or more additional [`BaseObject`](./object.md)s, which are additional object models not explicitly defined in the environment's scene
+- Optionally, one or more additional [`BaseObject`](./objects.md)s, which are additional object models not explicitly defined in the environment's scene
 
 The above figure describes **`OmniGibson`**'s simulation loop:
@@ -1,12 +0,0 @@
----
-icon: material/cube-outline
----
-
-# 🧱 **Prim**
-
-A Prim, short for "primitive," is a fundamental building block of a scene, representing an individual object or entity within the scene's hierarchy. It is essentially a container that encapsulates data, attributes, and relationships, allowing it to represent various scene components like models, cameras, lights, or groups of prims. These prims are systematically organized into a hierarchical framework, creating a scene graph that depicts the relationships and transformations between objects.
-
-Every prim is uniquely identified by a path, which serves as a locator within the scene graph. This path includes the names of all parent prims leading up to it. For example, a prim's path might be `/World/robot0/gripper_link`, indicating that the `gripper_link` is a child of `robot0`.
-
-Additionally, prims carry a range of attributes, including position, rotation, scale, and material properties. These attributes define the properties and characteristics of the objects they represent.
@@ -0,0 +1,78 @@
+---
+icon: material/cube-outline
+---
+
+# 🧱 **Prim**
+
+## Description
+
+A Prim, short for "primitive," is a fundamental building block of Omniverse's underlying scene representation (called a "stage"), and represents a single scene component, such as a rigid body, joint, light, camera, or material. **`OmniGibson`** implements `Prim` classes which directly encapsulate the underlying Omniverse `UsdPrim` instances, and provides direct access to Omniverse's low-level prim APIs.
+
+Each `Prim` instance uniquely wraps a corresponding prim in the current scene stage, and is defined by its corresponding `prim_path`. This filepath-like string defines the prim's name, as well as all of its preceding parent prim names. For example, a `RigidPrim` capturing a robot's gripper link may have a prim path of `/World/robot0/gripper_link`, indicating that the `gripper_link` is a child of the `robot0` prim, which in turn is a child of the `World` prim.
+
+Additionally, prims carry a range of attributes, including position, rotation, scale, and material properties. These attributes define the properties and characteristics of the objects they represent.
+
+## Usage
+
+### Loading a Prim
+
+Generally, you should not have to directly instantiate any `Prim` class instance, as all of the main entry-point level classes within **`OmniGibson`** either do not require `Prim` instances or create them directly themselves. However, you can always create a `Prim` instance directly. This requires two arguments at the minimum: a unique `name`, and a corresponding `prim_path` (which can either point to a pre-existing prim on the Omniverse scene stage or to a novel location where a new prim will be created).
+
+If a prim already exists at `prim_path`, the created `Prim` instance will automatically point to it. However, if it does _not_ exist, you must call `prim.load()` explicitly to load the prim to the Omniverse stage at the desired `prim_path`. Note that not all prim classes can be loaded from scratch -- for example, `GeomPrim`s require a pre-existing `prim_path` when created!
+
+After the prim has been created, it may additionally require further initialization via `prim.initialize()`, which _must_ occur at least 1 simulation step after the prim has been loaded. (1)
+{ .annotate }
+
+1. This is a fundamental quirk of omniverse and unfortunately cannot be changed ):
+
+### Runtime
+
+Once initialized, a `Prim` instance can be used as a direct interface with the corresponding low-level prim on the Omniverse stage. The low-level attributes of the underlying prim can be queried / set via `prim.get_attribute(name)` / `prim.set_attribute(name, val)`. In addition, some `Prim` classes implement higher-level functionality to more easily manipulate the underlying prim, such as `MaterialPrim`'s `bind(prim_path)`, which binds its owned material to the desired prim located at `prim_path`.
+
+## Types
+
+**`OmniGibson`** directly supports multiple `Prim` classes, which are intended to encapsulate different types of prims from the Omniverse scene stage. The most basic is [`BasePrim`](../reference/prims/prim_base.html), which can capture any arbitrary prim. The more specific classes are shown below:
+
+<table markdown="span">
+    <tr>
+        <td valign="top">
+            [**`XFormPrim`**](../reference/prims/xform_prim.html)<br><br>
+            Encapsulates a transformable prim. This prim can get and set its local or global pose, as well as its own scale.<br><br>
+        </td>
+    </tr>
+    <tr>
+        <td valign="top">
+            [**`GeomPrim`**](../reference/prims/geom_prim.html#prims.geom_prim.GeomPrim)<br><br>
+            Encapsulates a prim defined by a geom (shape or mesh). It is an `XFormPrim` that additionally owns geometry defined by its set of `points`. Its subclasses [`VisualGeomPrim`](../reference/prims/geom_prim.html) and [`CollisionGeomPrim`](../reference/prims/geom_prim.html#prims.geom_prim.CollisionGeomPrim) implement additional utility for dealing with those respective types of geometries (e.g.: `CollisionGeomPrim.set_collision_approximation(...)`).<br><br>
+        </td>
+    </tr>
+    <tr>
+        <td valign="top">
+            [**`ClothPrim`**](../reference/prims/cloth_prim.html)<br><br>
+            Encapsulates a prim defined by a mesh geom that is to be converted into cloth. It is a `GeomPrim` that dynamically transforms its owned (rigid) mesh into a (compliant, particle-based) cloth. Its methods can be used to query and set its individual particles' state, as well as track a subset of keypoints / keyfaces.<br><br>
+        </td>
+    </tr>
+    <tr>
+        <td valign="top">
+            [**`RigidPrim`**](../reference/prims/rigid_prim.html)<br><br>
+            Encapsulates a prim defined by a rigid body. It is an `XFormPrim` that is subject to physics and gravity, and may belong to an `EntityPrim`. It additionally has attributes to control its own mass, density, and other physics-related behavior.<br><br>
+        </td>
+    </tr>
+    <tr>
+        <td valign="top">
+            [**`JointPrim`**](../reference/prims/joint_prim.html)<br><br>
+            Encapsulates a prim defined by a joint. It belongs to an `EntityPrim` and has attributes to control its own joint state (position, velocity, effort).<br><br>
+        </td>
+    </tr>
+    <tr>
+        <td valign="top">
+            [**`EntityPrim`**](../reference/prims/entity_prim.html)<br><br>
+            Encapsulates the top-level prim of an imported object. Since the underlying object consists of a set of links and joints, this class owns its corresponding set of `RigidPrim`s and `JointPrim`s, and provides high-level functionality for controlling the object's pose, joint state, and physics-related behavior.<br><br>
+        </td>
+    </tr>
+    <tr>
+        <td valign="top">
+            [**`MaterialPrim`**](../reference/prims/material_prim.html)<br><br>
+            Encapsulates a prim defining a material specification. It provides high-level functionality for directly controlling the underlying material's properties and behavior.<br><br>
+        </td>
+    </tr>
+</table>
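The filepath-like `prim_path` convention described in the added prims.md page can be illustrated with a tiny parsing sketch. `split_prim_path` is a hypothetical helper written for this example; it is not part of OmniGibson or the Omniverse USD API:

```python
# Sketch of the prim-path convention described above
# (e.g. /World/robot0/gripper_link: gripper_link is a child of robot0,
# which is a child of World).
# `split_prim_path` is a hypothetical helper, not OmniGibson/Omniverse code.
def split_prim_path(prim_path):
    """Return (parent_path, prim_name) for a filepath-like prim path."""
    if not prim_path.startswith("/"):
        raise ValueError("prim paths are absolute, e.g. /World/robot0")
    parent, _, name = prim_path.rpartition("/")
    # A top-level prim such as /World has the stage root "/" as its parent.
    return parent or "/", name


parent, name = split_prim_path("/World/robot0/gripper_link")
# parent == "/World/robot0", name == "gripper_link"
```

This mirrors how a `RigidPrim` for a gripper link relates to its owning robot prim in the page's example.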
@@ -4,10 +4,12 @@ icon: material/robot-outline
 # 🤖 **Robots**
 
+## Description
+
 In **`OmniGibson`**, `Robot`s define agents that can interact with other objects in a given environment. Each robot can _interact_ by deploying joint
-commands via its set of [`Controller`](./controllers.md)s, and can _perceive_ its surroundings via its set of [`Sensor`](./sensor.md)s.
+commands via its set of [`Controller`](./controllers.md)s, and can _perceive_ its surroundings via its set of [`Sensor`](./sensors.md)s.
 
 **`OmniGibson`** supports both navigation and manipulation robots, and allows for modular specification of individual controllers for controlling the different components of a given robot. For example, the `Fetch` robot is a mobile manipulator composed of a mobile (two-wheeled) base, two head joints, a trunk, seven arm joints, and two gripper finger joints. `Fetch` owns 4 controllers, one each for controlling the base, the head, the trunk + arm, and the gripper. There are multiple options for each controller depending on the desired action space. For more information, check out our [robot examples](../getting_started/examples.md#robots).
@@ -71,13 +73,13 @@ Usually, actions are passed to robots and observations retrieved via the `obs, i
 </div>
 
 1. `action` is a 1D-numpy array. For more information, please see the [Controller](./controllers.md) section!
-2. `obs` is a dict mapping observation name to observation data, and `info` is a dict of relevant metadata about the observations. For more information, please see the [Sensor](./sensor.md) section!
+2. `obs` is a dict mapping observation name to observation data, and `info` is a dict of relevant metadata about the observations. For more information, please see the [Sensor](./sensors.md) section!
 
 Controllers and sensors can be accessed directly via the `controllers` and `sensors` properties, respectively. And, like all objects in **`OmniGibson`**, common information such as joint data and object states can also be directly accessed from the `robot` class.
 
-## Models
+## Types
 **`OmniGibson`** currently supports 9 robots, consisting of 4 mobile robots, 2 manipulation robots, 2 mobile manipulation robots, and 1 anthropomorphic "robot" (a bimanual agent proxy used for VR teleoperation). Below, we provide a brief overview of each model:
 
 ### Mobile Robots
@ -4,12 +4,53 @@ icon: material/home-outline
|
|||
|
||||
# 🏠 **Scene**
|
||||
|
||||
Scene are one level higher than objects. A scene consists of multiple objects that interacts with each other. OmniGibson currently supports two types of scenes:
|
||||
## Description
|
||||
|
||||
- `EmptyScene`: This is an empty scene that can be used to create custom scenes. It does not contain any pre-defined objects.
|
||||
- `InteractiveTraversableScene`: This type of scene are interactive and traversible. It comes with traversable maps that enables robots to perform navigation tasks. Users can choose from the predefined 51 scenes in the OmniGibson dataset.
|
||||
In **`OmniGibson`**, `Scene`s represent a collection of [`Object`](./objects.md)s and global [`System`](./systems.md)s, potentially defined with a pre-configured state. A scene can be constructed iteratively and interactively, or generated from a pre-cached file.
|
||||
|
||||
Here's a list of all the `InteractiveTraversableScene` scenes available in OmniGibson:
|
||||
## Usage
|
||||
|
||||
### Importing
|
||||
|
||||
Every `Environment` instance includes a scene, defined by its config that is passed to the environment constructor via the `scene` key. This is expected to be a dictionary of relevant keyword arguments, specifying the desired scene configuration to be created. The `type` key is required and specifies the desired scene class. Additional keys can be specified and will be passed directly to the specific scene class constructor. An example of a scene configuration is shown below in `.yaml` form:
|
||||
|
||||
??? code "rs_int_example.yaml"
|
||||
``` yaml linenums="1"
|
||||
scene:
|
||||
type: InteractiveTraversableScene
|
||||
scene_model: Rs_int
|
||||
trav_map_resolution: 0.1
|
||||
default_erosion_radius: 0.0
|
||||
trav_map_with_objects: true
|
||||
num_waypoints: 1
|
||||
waypoint_resolution: 0.2
|
||||
not_load_object_categories: null
|
||||
load_room_types: null
|
||||
load_room_instances: null
|
||||
seg_map_resolution: 0.1
|
||||
```
|
||||
|
||||
Alternatively, a scene can be directly imported at runtime by first creating the scene class instance (e.g.: `scene = InteractiveTraversableScene(...)`) and then importing it via `og.sim.import_scene(scene)`. This can be useful for iteratively prototyping a desired scene configuration. Note that a scene _must_ be imported before any additional objects are imported!
|
||||
|
||||
### Runtime
|
||||
|
||||
The scene keeps track of and organizes all imported objects via its owned `scene.object_registry`. Objects can quickly be queried by relevant property keys (1), such as `name`, `prim_path`, and `category`, from `env.scene.object_registry` as follows:
|
||||
{ .annotate }
|
||||
|
||||
1. `scene.object_registry_unique_keys` and `scene.object_registry_group_keys` define the valid possible key queries
|
||||
|
||||
- `env.scene.object_registry("name", OBJECT_NAME)`: get the object by its name
|
||||
|
||||
- `env.scene.object_registry("prim_path", PRIM_PATH)`: get the object by its prim path
|
||||
|
||||
- `env.scene.object_registry("category", CATEGORY)`: get all objects with category `CATEGORY`
|
||||
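The query pattern above can be sketched in plain Python. This is a minimal illustration only -- the real `scene.object_registry` is implemented inside **`OmniGibson`** and supports additional keys; the `ObjectRegistry` and `Obj` classes below are illustrative stand-ins:

``` python
# Minimal sketch of the object-registry query pattern: unique keys map to a
# single object, group keys map to a set of objects.
class ObjectRegistry:
    def __init__(self, unique_keys=("name", "prim_path"), group_keys=("category",)):
        self.unique_keys = set(unique_keys)
        self.group_keys = set(group_keys)
        self._objects = []

    def add(self, obj):
        self._objects.append(obj)

    def __call__(self, key, value):
        if key in self.unique_keys:
            # Unique key: at most one object matches
            return next((o for o in self._objects if getattr(o, key) == value), None)
        if key in self.group_keys:
            # Group key: return all matching objects
            return {o for o in self._objects if getattr(o, key) == value}
        raise KeyError(f"Unknown registry key: {key}")


class Obj:
    def __init__(self, name, prim_path, category):
        self.name, self.prim_path, self.category = name, prim_path, category


registry = ObjectRegistry()
registry.add(Obj("apple_0", "/World/apple_0", "apple"))
registry.add(Obj("apple_1", "/World/apple_1", "apple"))
registry.add(Obj("table_0", "/World/table_0", "table"))

apple = registry("name", "apple_0")      # single object (unique key)
apples = registry("category", "apple")   # set of objects (group key)
```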
|
||||
Similarly, systems can be queried via `scene.system_registry`.
|
||||
|
||||
In addition, a scene can always be reset by calling `reset()`. The scene's initial state is cached when the scene is first imported, but can manually be updated by calling `scene.update_initial_state(state)`, where `state` can either be a desired state (output of `og.sim.dump_state()`) or `None`, corresponding to the current sim state.
|
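The caching behavior described above can be sketched as follows (a minimal illustration under the assumption that the sim state is a nested dict, as returned by `og.sim.dump_state()`; `SceneSketch` is a hypothetical stand-in for the real `Scene` class):

``` python
import copy

class SceneSketch:
    def __init__(self, sim_state):
        self._sim_state = sim_state                     # stands in for the live sim state
        self._initial_state = copy.deepcopy(sim_state)  # cached when the scene is imported

    def update_initial_state(self, state=None):
        # None means "snapshot the current sim state"
        self._initial_state = copy.deepcopy(self._sim_state if state is None else state)

    def reset(self):
        # Restore the cached initial state
        self._sim_state.clear()
        self._sim_state.update(copy.deepcopy(self._initial_state))

state = {"robot": {"pos": [0.0, 0.0, 0.0]}}
scene = SceneSketch(state)
state["robot"]["pos"] = [1.0, 2.0, 0.0]  # the sim moves the robot
scene.reset()                            # back to the cached initial state
```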
||||
|
||||
## Types
|
||||
**`OmniGibson`** currently supports two types of scenes. The basic scene class `Scene` implements a minimal scene setup, which can optionally include a skybox and / or ground plane. The second scene class `InteractiveTraversableScene` represents a pre-cached, curated scene exclusively populated with fully-interactive objects from the BEHAVIOR-1K dataset. This scene type additionally includes traversability and semantic maps of the scene floorplan. For a breakdown of all the available scenes and the corresponding objects included in each scene, please refer to our [Knowledgebase Dashboard](https://behavior.stanford.edu/knowledgebase/). Below, we provide brief snapshots of each of our 50 BEHAVIOR-1K scenes:
|
||||
|
||||
<table markdown="span">
|
||||
<tr>
|
|
@ -31,7 +31,7 @@ Besides the actual data, `get_obs()` also returns a secondary dictionary contain
|
|||
For instance, calling `get_obs()` on an environment with a single robot, which has all modalities enabled, might produce results similar to this:
|
||||
|
||||
<details>
|
||||
<summary>Click to see code!</summary>
|
||||
<summary>Example observations</summary>
|
||||
<pre><code>
|
||||
data:
|
||||
{
|
||||
|
@ -75,9 +75,12 @@ info:
|
|||
</code></pre>
|
||||
</details>
|
||||
|
||||
## Observations
|
||||
## Types
|
||||
**`OmniGibson`** currently supports two types of sensors (`VisionSensor`, `ScanSensor`) and three types of observations (vision, scan, low-dimensional). Below, we describe each type of observation:
|
||||
|
||||
### Vision Sensor
|
||||
### Vision
|
||||
|
||||
Vision observations are captured by the [`VisionSensor`](../reference/sensors/vision_sensor.html) class, which encapsulates a virtual pinhole camera sensor equipped with various modalities, including RGB, depth, normals, three types of segmentation, optical flow, and 2D and 3D bounding boxes, as shown below:
|
||||
|
||||
<table markdown="span">
|
||||
<tr>
|
||||
|
@ -165,7 +168,7 @@ info:
|
|||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
<strong>2D Bounding Box Tight</strong><br><br>
|
||||
2D bounding boxes wrapping individual objects, excluding any parts that are occluded.<br><br>
|
||||
2D bounding boxes wrapping individual objects, excluding occluded parts.<br><br>
|
||||
Size: a list of <br>
|
||||
semanticID, numpy.uint32;<br>
|
||||
x_min, numpy.int32;<br>
|
||||
|
@ -215,7 +218,8 @@ info:
|
|||
</tr>
|
||||
</table>
|
||||
|
||||
### Range Sensor
|
||||
### Range
|
||||
Range observations are captured by the [`ScanSensor`](../reference/sensors/scan_sensor.html) class, which encapsulates a virtual 2D LiDAR range sensor with the following observations:
|
||||
|
||||
<table markdown="span">
|
||||
<tr>
|
||||
|
@ -240,7 +244,12 @@ info:
|
|||
</tr>
|
||||
</table>
|
||||
|
||||
### Proprioception
|
||||
### Low-Dimensional
|
||||
Low-dimensional observations are not captured by any specific sensor; they are simply an aggregation of the underlying simulator state. There are two main types: proprioceptive and task-relevant observations:
|
||||
|
||||
|
||||
#### Proprioception
|
||||
The following proprioceptive observations are supported off-the-shelf in **`OmniGibson`** (though additional ones may arbitrarily be added):
|
||||
|
||||
<table markdown="span">
|
||||
<tr>
|
||||
|
@ -315,7 +324,8 @@ info:
|
|||
</tr>
|
||||
</table>
|
||||
|
||||
### Task Observation
|
||||
#### Task-Relevant
|
||||
Each task implements its own set of relevant observations:
|
||||
|
||||
<table markdown="span" style="width: 100%;">
|
||||
<tr>
|
|
@ -0,0 +1,43 @@
|
|||
---
|
||||
icon: material/repeat
|
||||
---
|
||||
|
||||
# 🔁 **Simulator**
|
||||
|
||||
## Description
|
||||
|
||||
**`OmniGibson`**'s [Simulator](../reference/simulator.html) class is the global singleton that serves as the interface with Omniverse's low-level PhysX (physics) backend. It provides utility functions for modulating the ongoing simulation as well as the low-level interface for importing scenes and objects. For standard use-cases, interfacing with the simulation exclusively through a created [environment](./environments.md) should be sufficient, though for more advanced or prototyping use-cases it may be common to interface via this simulator class.
|
||||
|
||||
## Usage
|
||||
|
||||
### Creating
|
||||
|
||||
Because this `Simulator` is a global singleton, it is instantiated exactly once, always at the _very beginning_ of **`OmniGibson`**'s launch. This occurs either when an environment instance is created (`env = Environment(...)`), or when `og.launch()` is explicitly called.
|
||||
|
||||
### Runtime
|
||||
|
||||
After **`OmniGibson`** is launched, the simulator interface can be accessed globally via `og.sim`. Below, we briefly describe multiple common usages of the simulation interface:
|
||||
|
||||
#### Importing and Removing Scenes / Objects
|
||||
The simulator can directly import a scene via `sim.import_scene(scene)` or object via `sim.import_object(object)`. The imported scene and its corresponding objects can be directly accessed via `sim.scene`. To remove a desired object, call `sim.remove_object(object)`. The simulator can also clear the entire scene via `sim.clear()`.
|
||||
|
||||
#### Propagating Physics
|
||||
The simulator can be manually stepped, with or without physics / rendering (`sim.step()`, `sim.step_physics()`, `sim.render()`), and can be stopped (`sim.stop()`), paused (`sim.pause()`), or played (`sim.play()`). Note that physics only runs when the simulator is playing! The current sim mode can be checked via `sim.is_stopped()`, `sim.is_paused()`, and `sim.is_playing()`.
|
||||
|
||||
??? warning annotate "Cycling the sim can cause unexpected behavior"
|
||||
|
||||
If the simulator is playing and then suddenly stopped, all objects are immediately teleported back to their "canonical poses" -- i.e.: the corresponding global poses that were set _before_ the sim started playing. Thus, if `og.sim.play()` is immediately called after, the objects will _not_ teleport back to their respective pre-stopped poses. You can think of the canonical poses as a sort of "initial state", and we recommend explicitly setting a desired initial sim state configuration via `scene.update_initial_state()` and then calling `scene.reset()` after `og.sim.play()`.
|
||||
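The teleport-on-stop behavior can be sketched as a tiny state machine (illustrative only -- `SimSketch` is a hypothetical stand-in for **`OmniGibson`**'s real `Simulator`):

``` python
class SimSketch:
    def __init__(self):
        self.playing = False
        self.canonical_poses = {}  # poses recorded just before play
        self.poses = {}            # live poses while the sim runs

    def play(self):
        if not self.playing:
            # Snapshot the canonical poses at the moment play begins
            self.canonical_poses = dict(self.poses)
            self.playing = True

    def stop(self):
        # Stopping teleports every object back to its canonical pose
        self.poses = dict(self.canonical_poses)
        self.playing = False

sim = SimSketch()
sim.poses["mug"] = (0.0, 0.0, 0.0)
sim.play()
sim.poses["mug"] = (0.5, 0.2, 0.0)  # object moves while playing
sim.stop()                          # pose snaps back to canonical
```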
|
||||
#### Modifying Physics
|
||||
If necessary, low-level physics behavior can also be set as well, via the physics interface (`sim.pi`), physics simulation interface (`sim.psi`), physics scene query interface (`sim.psqi`), and physics context (`sim.get_physics_context()`). The simulation timesteps can also be directly set via `sim.set_simulation_dt(...)`.
|
||||
|
||||
#### Callbacks
|
||||
It may be useful to have callbacks trigger when certain simulation events occur. We provide utility functions to add / remove callbacks when `sim.play()`, `sim.stop()`, `sim.import_object()`, and `sim.remove_object()` are called.
|
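The add / remove callback mechanism can be sketched as follows (a minimal illustration; `CallbackSim` and its method names are hypothetical stand-ins, not the actual Simulator API):

``` python
class CallbackSim:
    def __init__(self):
        self._on_play = {}  # name -> callable

    def add_callback_on_play(self, name, fn):
        self._on_play[name] = fn

    def remove_callback_on_play(self, name):
        self._on_play.pop(name, None)

    def play(self):
        # Fire every registered play callback
        for fn in self._on_play.values():
            fn()

events = []
sim = CallbackSim()
sim.add_callback_on_play("log", lambda: events.append("played"))
sim.play()                             # callback fires
sim.remove_callback_on_play("log")
sim.play()                             # no callback registered anymore
```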
||||
|
||||
#### Viewport Camera
|
||||
The simulator owns a global [`VisionSensor`](./sensors.md) camera which can be controlled by the user and accessed via `sim.viewer_camera`. To enable keyboard teleoperation of the camera, call `sim.enable_viewer_camera_teleoperation()`.
|
||||
|
||||
#### Saving / Loading State
|
||||
To record the current global simulation state (which includes the state of all tracked objects within the simulator), call `state = sim.dump_state(serialized)`, where `serialized` can be `True` or `False`, specifying whether the returned state should be in a nested dictionary (more interpretable) format or a more compressed, 1D numpy array format. Both representations are equivalent, with the former being more useful for prototyping and the latter being useful when recording large amounts of data (e.g.: when collecting demonstrations). To load a sim state, call `sim.load_state(state, serialized)`.
|
||||
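The equivalence between the two representations can be sketched as a simple flatten / unflatten round trip (illustrative only -- `dump_state` and `load_state` below are hypothetical stand-ins with a fixed key layout, not the actual Simulator methods):

``` python
import numpy as np

def dump_state(state_dict, serialized):
    if not serialized:
        return state_dict
    # Flatten values in a fixed (sorted) key order into one 1D array
    return np.concatenate(
        [np.asarray(state_dict[k], dtype=float) for k in sorted(state_dict)]
    )

def load_state(flat_state, keys_and_sizes):
    # Rebuild the nested dict by slicing the flat array back apart
    out, i = {}, 0
    for key, size in keys_and_sizes:
        out[key] = flat_state[i:i + size].tolist()
        i += size
    return out

state = {"robot_pos": [1.0, 2.0, 0.5], "robot_vel": [0.0, 0.0, 0.0]}
flat = dump_state(state, serialized=True)
restored = load_state(flat, [("robot_pos", 3), ("robot_vel", 3)])
```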
|
||||
Note, however, that the above paradigm generally assumes a consistent configuration of objects -- `sim.load_state(...)` will not import / remove objects if there is a mismatch between the desired state and current simulator object set. Instead, we provide `sim.save(fpath)` and `sim.restore(fpath)` functionality for arbitrarily saving and restoring a scene / object configuration from scratch, where `fpath` defines the absolute filepath to the simulator state + metadata.
|
|
@ -0,0 +1,85 @@
|
|||
---
|
||||
icon: material/water-outline
|
||||
---
|
||||
|
||||
# 💧 **Systems**
|
||||
|
||||
## Description
|
||||
|
||||
**`OmniGibson`**'s [`System`](../reference/systems/base_system.html) class represents global singletons that encapsulate a single particle type. These system classes provide functionality for generating, tracking, and removing any number of particles arbitrarily located throughout the current scene.
|
||||
|
||||
## Usage
|
||||
|
||||
### Creating
|
||||
For efficiency reasons, systems are created dynamically on an as-needed basis. A system can be dynamically created (or referenced, if it already exists) via `get_system(name)`, where `name` defines the name of the system. For a list of all possible system names, see `REGISTERED_SYSTEMS`. Both `get_system` and `REGISTERED_SYSTEMS` can be directly imported from `omnigibson.systems`.
|
||||
|
||||
### Runtime
|
||||
A given system can be accessed globally at any time via `get_system(...)`. Systems can generate particles via `system.generate_particles(...)`, track their states via `system.get_particles_position_orientation()`, and remove them via `system.remove_particles(...)`. Please refer to the [`System`'s API Reference](../reference/systems/base_system.html) for specific information regarding arguments. Moreover, specific subclasses may implement more complex generation behavior, such as `VisualParticleSystem`'s `generate_group_particles(...)`, which spawns visual (non-collidable) particles that are attached to a specific object.
|
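The lazy, on-demand creation plus basic particle operations can be sketched as follows (a minimal illustration; `ParticleSystemSketch` and the simplified `get_system` / `REGISTERED_SYSTEMS` below are stand-ins for the real `omnigibson.systems` machinery):

``` python
import numpy as np

REGISTERED_SYSTEMS = {}  # name -> system class (stand-in for the real registry)
_ACTIVE_SYSTEMS = {}     # name -> created singleton

class ParticleSystemSketch:
    def __init__(self, name):
        self.name = name
        self.positions = np.zeros((0, 3))

    def generate_particles(self, positions):
        self.positions = np.vstack([self.positions, np.asarray(positions, dtype=float)])

    def get_particles_position(self):
        return self.positions

    def remove_particles(self, idxs):
        self.positions = np.delete(self.positions, idxs, axis=0)

REGISTERED_SYSTEMS["water"] = ParticleSystemSketch

def get_system(name):
    # Create the system only the first time it is requested
    if name not in _ACTIVE_SYSTEMS:
        _ACTIVE_SYSTEMS[name] = REGISTERED_SYSTEMS[name](name)
    return _ACTIVE_SYSTEMS[name]

water = get_system("water")
water.generate_particles([[0.0, 0.0, 1.0], [0.1, 0.0, 1.0]])
water.remove_particles([0])  # remove the first particle
```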
||||
|
||||
## Types
|
||||
|
||||
**`OmniGibson`** currently supports 4 types of systems, each representing a different particle concept:
|
||||
|
||||
<table markdown="span">
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`GranularSystem`**](../reference/systems/micro_particle_system.html#systems.micro_particle_system.GranularSystem)<br><br>
|
||||
Represents particles that are fine-grained and are generally less than a centimeter in size, such as brown rice, black pepper, and chia seeds. These are particles subject to physics.<br><br>**Collides with...**
|
||||
<ul>
|
||||
<li>_Rigid bodies_: Yes</li>
|
||||
<li>_Cloth_: No</li>
|
||||
<li>_Other system particles_: No</li>
|
||||
<li>_Own system particles_: No (for stability reasons)</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/systems/granular.png" alt="rgb">
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`FluidSystem`**](../reference/systems/micro_particle_system.html#systems.micro_particle_system.FluidSystem)<br><br>
|
||||
Represents particles that are relatively homogeneous and liquid (though potentially viscous) in nature, such as water, baby oil, and hummus. These are particles subject to physics.<br><br>**Collides with...**
|
||||
<ul>
|
||||
<li>_Rigid bodies_: Yes</li>
|
||||
<li>_Cloth_: No</li>
|
||||
<li>_Other system particles_: No</li>
|
||||
<li>_Own system particles_: Yes</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/systems/fluid.png" alt="rgb">
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`MacroPhysicalParticleSystem`**](../reference/systems/macro_particle_system.html#systems.macro_particle_system.MacroPhysicalParticleSystem)<br><br>
|
||||
Represents particles that are small but replicable, such as pills, diced fruit, and hair. These are particles subject to physics.<br><br>**Collides with...**
|
||||
<ul>
|
||||
<li>_Rigid bodies_: Yes</li>
|
||||
<li>_Cloth_: Yes</li>
|
||||
<li>_Other system particles_: Yes</li>
|
||||
<li>_Own system particles_: Yes</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/systems/macro_physical.png" alt="rgb">
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`MacroVisualParticleSystem`**](../reference/systems/macro_particle_system.html#systems.macro_particle_system.MacroVisualParticleSystem)<br><br>
|
||||
Represents particles that are usually flat and varied, such as stains, lint, and moss. These are particles not subject to physics, and are attached rigidly to specific objects in the scene.<br><br>**Collides with...**
|
||||
<ul>
|
||||
<li>_Rigid bodies_: No</li>
|
||||
<li>_Cloth_: No</li>
|
||||
<li>_Other system particles_: No</li>
|
||||
<li>_Own system particles_: No</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/systems/macro_visual.png" alt="rgb">
|
||||
</td>
|
||||
</tr>
|
||||
</table>
|
||||
|
|
@ -0,0 +1,223 @@
|
|||
---
|
||||
icon: material/magic-staff
|
||||
---
|
||||
|
||||
# 🪄 **Transition Rules**
|
||||
|
||||
## Description
|
||||
|
||||
Transition rules are **`OmniGibson`**'s method for simulating complex physical phenomena not directly supported by the underlying omniverse physics engine, such as slicing, blending, and cooking. A given [`TransitionRule`](../reference/transition_rules.html#transition_rules.BaseTransitionRule) dynamically checks for its internal sets of conditions, and, if validated, executes its corresponding `transition`.
|
||||
|
||||
!!! info annotate "Transition Rules must be enabled before usage!"
|
||||
|
||||
To enable usage of transition rules, `gm.ENABLE_TRANSITION_RULES` (1) must be set!
|
||||
|
||||
1. Access global macros via `from omnigibson.macros import gm`
|
||||
|
||||
## Usage
|
||||
|
||||
### Creating
|
||||
Because `TransitionRule`s are monolithic classes, they should be defined _before_ **`OmniGibson`** is launched. A new rule can easily be added by subclassing the `BaseTransitionRule` class and implementing the necessary functions. For a simple example, please see the [`SlicingRule`](../reference/transition_rules.html#transition_rules.SlicingRule) class.
|
||||
|
||||
### Runtime
|
||||
At runtime, the monolithic [`TransitionRuleAPI`](../reference/transition_rules.html#transition_rules.TransitionRuleAPI) automatically handles the stepping and processing of all defined transition rule classes. For efficiency reasons, rules are dynamically loaded and checked based on the object / system set currently active in the simulator. A rule will only be checked if there is at least one valid candidate combination amongst the current object / system set. For example, if there is no sliceable object present in the simulator, then `SlicingRule` will not be active. Every time an object / system is added / removed from the simulator, all rules are refreshed so that the current active transition rule set is always accurate.
|
||||
|
||||
In general, you should not need to interface with the `TransitionRuleAPI` class at all -- if your rule implementation is correct, then the API will automatically handle the transition when the appropriate conditions are met!
|
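The candidate-filtering / condition / transition loop described above can be sketched in plain Python (illustrative only -- the real classes are `BaseTransitionRule` and `TransitionRuleAPI`; the dict-based objects and `SlicingRuleSketch` below are hypothetical stand-ins):

``` python
class SlicingRuleSketch:
    @staticmethod
    def get_candidates(objects):
        sliceables = [o for o in objects if "sliceable" in o["abilities"]]
        slicers = [o for o in objects if "slicer" in o["abilities"]]
        return sliceables, slicers

    @staticmethod
    def condition(sliceable, slicer):
        # Slicer must be active and touching the sliceable object
        return slicer["active"] and slicer["touching"] == sliceable["name"]

    @staticmethod
    def transition(objects, sliceable):
        # Remove the original object and spawn two halves in its place
        objects.remove(sliceable)
        for half in ("half_0", "half_1"):
            objects.append({"name": f"{sliceable['name']}_{half}", "abilities": set()})

def step_rules(objects, rules):
    for rule in rules:
        sliceables, slicers = rule.get_candidates(objects)
        if not (sliceables and slicers):
            continue  # rule inactive: no valid candidate combination in the scene
        for sliceable in list(sliceables):
            if any(rule.condition(sliceable, s) for s in slicers):
                rule.transition(objects, sliceable)

objects = [
    {"name": "apple", "abilities": {"sliceable"}},
    {"name": "knife", "abilities": {"slicer"}, "active": True, "touching": "apple"},
]
step_rules(objects, [SlicingRuleSketch])
names = [o["name"] for o in objects]
```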
||||
|
||||
## Types
|
||||
|
||||
**`OmniGibson`** currently supports 9 diverse types of transition rules, each representing a different complex physical phenomenon:
|
||||
|
||||
<table markdown="span">
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`SlicingRule`**](../reference/transition_rules.html#transition_rules.SlicingRule)<br><br>
|
||||
Encapsulates slicing an object into halves (e.g.: slicing an apple).<br><br>**Required Candidates**
|
||||
<ul>
|
||||
<li>1+ sliceable objects</li>
|
||||
<li>1+ slicer objects</li>
|
||||
</ul><br><br>**Conditions**
|
||||
<ul>
|
||||
<li>slicer is touching sliceable object</li>
|
||||
<li>slicer is active</li>
|
||||
</ul><br><br>**Transition**
|
||||
<ul>
|
||||
<li>sliceable object is removed</li>
|
||||
<li>two sliced-half objects are spawned where the original object was</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/transition_rules/slicing_rule_before.png" alt="slicing_rule_before">
|
||||
<img src="../assets/transition_rules/slicing_rule_after.png" alt="slicing_rule_after">
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`DicingRule`**](../reference/transition_rules.html#transition_rules.DicingRule)<br><br>
|
||||
Encapsulates dicing a diceable object into small chunks (e.g.: dicing an apple).<br><br>**Required Candidates**
|
||||
<ul>
|
||||
<li>1+ diceable objects</li>
|
||||
<li>1+ slicer objects</li>
|
||||
</ul><br><br>**Conditions**
|
||||
<ul>
|
||||
<li>slicer is touching diceable object</li>
|
||||
<li>slicer is active</li>
|
||||
</ul><br><br>**Transition**
|
||||
<ul>
|
||||
<li>diceable object is removed</li>
|
||||
<li>diceable physical particles are spawned where the original object was</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/transition_rules/dicing_rule_before.png" alt="dicing_rule_before">
|
||||
<img src="../assets/transition_rules/dicing_rule_after.png" alt="dicing_rule_after">
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`MeltingRule`**](../reference/transition_rules.html#transition_rules.MeltingRule)<br><br>
|
||||
Encapsulates melting an object into liquid (e.g.: melting chocolate).<br><br>**Required Candidates**
|
||||
<ul>
|
||||
<li>1+ meltable objects</li>
|
||||
</ul><br><br>**Conditions**
|
||||
<ul>
|
||||
<li>meltable object's max temperature > melting temperature</li>
|
||||
</ul><br><br>**Transition**
|
||||
<ul>
|
||||
<li>meltable object is removed</li>
|
||||
<li>`melted__<category>` fluid particles are spawned where the original object was</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/transition_rules/melting_rule_before.png" alt="melting_rule_before">
|
||||
<img src="../assets/transition_rules/melting_rule_after.png" alt="melting_rule_after">
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`CookingPhysicalParticleRule`**](../reference/transition_rules.html#transition_rules.CookingPhysicalParticleRule)<br><br>
|
||||
Encapsulates cooking physical particles (e.g.: boiling water).<br><br>**Required Candidates**
|
||||
<ul>
|
||||
<li>1+ fillable and heatable objects</li>
|
||||
</ul><br><br>**Conditions**
|
||||
<ul>
|
||||
<li>fillable object is heated</li>
|
||||
</ul><br><br>**Transition**
|
||||
<ul>
|
||||
<li>particles within the fillable object are removed</li>
|
||||
<li>`cooked__<category>` particles are spawned where the original particles were</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/transition_rules/cooking_physical_particle_rule_before.png" alt="cooking_physical_particle_rule_before">
|
||||
<img src="../assets/transition_rules/cooking_physical_particle_rule_after.png" alt="cooking_physical_particle_rule_after">
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`ToggleableMachineRule`**](../reference/transition_rules.html#transition_rules.ToggleableMachineRule)<br><br>
|
||||
Encapsulates transformative changes when a button is pressed (e.g.: blending a smoothie). Valid transitions are defined by a pre-defined set of "recipes" (input / output combinations).<br><br>**Required Candidates**
|
||||
<ul>
|
||||
<li>1+ fillable and toggleable objects</li>
|
||||
</ul><br><br>**Conditions**
|
||||
<ul>
|
||||
<li>fillable object has just been toggled on</li>
|
||||
</ul><br><br>**Transition**
|
||||
<ul>
|
||||
<li>all objects and particles within the fillable object are removed</li>
|
||||
<li>if relevant recipe is found given inputs, relevant output is spawned in the fillable object, otherwise "sludge" is spawned instead</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/transition_rules/toggleable_machine_rule_before.png" alt="toggleable_machine_rule_before">
|
||||
<img src="../assets/transition_rules/toggleable_machine_rule_after.png" alt="toggleable_machine_rule_after">
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`MixingToolRule`**](../reference/transition_rules.html#transition_rules.MixingToolRule)<br><br>
|
||||
Encapsulates transformative changes during tool-driven mixing (e.g.: mixing a drink with a stirrer). Valid transitions are defined by a pre-defined set of "recipes" (input / output combinations).<br><br>**Required Candidates**
|
||||
<ul>
|
||||
<li>1+ fillable objects</li>
|
||||
<li>1+ mixingTool objects</li>
|
||||
</ul><br><br>**Conditions**
|
||||
<ul>
|
||||
<li>mixingTool object has just touched fillable object</li>
|
||||
<li>valid recipe is found</li>
|
||||
</ul><br><br>**Transition**
|
||||
<ul>
|
||||
<li>recipe-relevant objects and particles within the fillable object are removed</li>
|
||||
<li>relevant recipe output is spawned in the fillable object</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/transition_rules/mixing_rule_before.png" alt="mixing_rule_before">
|
||||
<img src="../assets/transition_rules/mixing_rule_after.png" alt="mixing_rule_after">
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`CookingRule`**](../reference/transition_rules.html#transition_rules.CookingRule)<br><br>
|
||||
Encapsulates transformative changes during cooking (e.g.: baking a cake). Valid transitions are defined by a pre-defined set of "recipes" (input / output combinations).<br><br>**Required Candidates**
|
||||
<ul>
|
||||
<li>1+ fillable objects</li>
|
||||
<li>1+ heatSource objects</li>
|
||||
</ul><br><br>**Conditions**
|
||||
<ul>
|
||||
<li>heatSource object is on and affecting fillable object</li>
|
||||
<li>a certain amount of time has passed</li>
|
||||
</ul><br><br>**Transition**
|
||||
<ul>
|
||||
<li>recipe-relevant objects and particles within the fillable object are removed</li>
|
||||
<li>relevant recipe output is spawned in the fillable object</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/transition_rules/cooking_rule_before.png" alt="cooking_rule_before">
|
||||
<img src="../assets/transition_rules/cooking_rule_after.png" alt="cooking_rule_after">
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`WasherRule`**](../reference/transition_rules.html#transition_rules.WasherRule)<br><br>
|
||||
Encapsulates the washing mechanism (e.g.: cleaning clothes in a washing machine with detergent). Washing behavior (i.e.: what types of particles are removed from clothes during washing) is predefined.<br><br>**Required Candidates**
|
||||
<ul>
|
||||
<li>1+ washer objects</li>
|
||||
</ul><br><br>**Conditions**
|
||||
<ul>
|
||||
<li>washer object is closed</li>
|
||||
<li>washer object has just been toggled on</li>
|
||||
</ul><br><br>**Transition**
|
||||
<ul>
|
||||
<li>all "stain"-type particles within the washer object are removed</li>
|
||||
<li>all objects within the washer object are covered and saturated with water</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/transition_rules/washer_rule_before.png" alt="washer_rule_before">
|
||||
<img src="../assets/transition_rules/washer_rule_after.png" alt="washer_rule_after">
|
||||
</td>
|
||||
</tr>
|
||||
<tr>
|
||||
<td valign="top" width="60%">
|
||||
[**`DryerRule`**](../reference/transition_rules.html#transition_rules.DryerRule)<br><br>
|
||||
Encapsulates the drying mechanism (e.g.: drying clothes in a drying machine).<br><br>**Required Candidates**
|
||||
<ul>
|
||||
<li>1+ clothes_dryer objects</li>
|
||||
</ul><br><br>**Conditions**
|
||||
<ul>
|
||||
<li>dryer object is closed</li>
|
||||
<li>dryer object has just been toggled on</li>
|
||||
</ul><br><br>**Transition**
|
||||
<ul>
|
||||
<li>all water particles within the dryer object are removed</li>
|
||||
<li>all objects within the dryer object are no longer saturated with water</li>
|
||||
</ul>
|
||||
</td>
|
||||
<td>
|
||||
<img src="../assets/transition_rules/dryer_rule_before.png" alt="dryer_rule_before">
|
||||
<img src="../assets/transition_rules/dryer_rule_after.png" alt="dryer_rule_after">
|
||||
</td>
|
||||
</tr>
|
||||
</table>
|
||||
|
13
mkdocs.yml
|
@ -89,14 +89,17 @@ nav:
|
|||
- Running on SLURM: getting_started/slurm.md
|
||||
- Modules:
|
||||
- Overview: modules/overview.md
|
||||
- Prim: modules/prim.md
|
||||
- Object: modules/object.md
|
||||
- Prims: modules/prims.md
|
||||
- Objects: modules/objects.md
|
||||
- Object States: modules/object_states.md
|
||||
- Robots: modules/robots.md
|
||||
- Controllers: modules/controllers.md
|
||||
- Sensor: modules/sensor.md
|
||||
- Scene: modules/scene.md
|
||||
- Environment: modules/environment.md
|
||||
- Sensors: modules/sensors.md
|
||||
- Systems: modules/systems.md
|
||||
- Scenes: modules/scenes.md
|
||||
- Transition Rules: modules/transition_rules.md
|
||||
- Simulator: modules/simulator.md
|
||||
- Environments: modules/environments.md
|
||||
- Tutorials:
|
||||
- Demo Collection: tutorials/demo_collection.md
|
||||
- API Reference: reference/*
|
||||
|
|
|
@ -7,8 +7,6 @@ import tempfile
|
|||
|
||||
from omnigibson.controllers import REGISTERED_CONTROLLERS
|
||||
from omnigibson.envs import Environment, VectorEnvironment
|
||||
|
||||
# TODO: Need to fix somehow -- omnigibson gets imported first BEFORE we can actually modify the macros
|
||||
from omnigibson.macros import gm
|
||||
from omnigibson.objects import REGISTERED_OBJECTS
|
||||
from omnigibson.robots import REGISTERED_ROBOTS
|
||||
|
|
|
@ -5,6 +5,7 @@ from typing import List
|
|||
|
||||
from future.utils import with_metaclass
|
||||
|
||||
from omnigibson import Environment
|
||||
from omnigibson.robots import BaseRobot
|
||||
from omnigibson.scenes.interactive_traversable_scene import InteractiveTraversableScene
|
||||
from omnigibson.tasks.task_base import BaseTask
|
||||
|
|
|
@ -779,8 +779,8 @@ class Environment(gym.Env, GymObservable, Recreatable):
|
|||
return {
|
||||
# Environment kwargs
|
||||
"env": {
|
||||
"action_frequency": 30,
|
||||
"physics_frequency": 120,
|
||||
"action_frequency": gm.DEFAULT_RENDERING_FREQ,
|
||||
"physics_frequency": gm.DEFAULT_PHYSICS_FREQ,
|
||||
"device": None,
|
||||
"automatic_reset": False,
|
||||
"flatten_action_space": False,
|
||||
|
|
|
@ -106,6 +106,10 @@ gm.ENABLE_TRANSITION_RULES = False
|
|||
gm.DEFAULT_VIEWER_WIDTH = 1280
|
||||
gm.DEFAULT_VIEWER_HEIGHT = 720
|
||||
|
||||
# Default physics / rendering frequencies (Hz)
|
||||
gm.DEFAULT_RENDERING_FREQ = 30
|
||||
gm.DEFAULT_PHYSICS_FREQ = 120
|
||||
|
||||
# (Demo-purpose) Whether to activate Assistive Grasping mode for Cloth (it's handled differently from RigidBody)
|
||||
gm.AG_CLOTH = False
|
||||
|
||||
|
|
|
@ -188,6 +188,10 @@ class XFormPrim(BasePrim):
|
|||
parent_world_transform = PoseAPI.get_world_pose_with_scale(parent_path)
|
||||
|
||||
local_transform = np.linalg.inv(parent_world_transform) @ my_world_transform
|
||||
product = local_transform[:3, :3] @ local_transform[:3, :3].T
|
||||
assert np.allclose(
|
||||
product, np.diag(np.diag(product)), atol=1e-3
|
||||
), f"{self.prim_path} local transform is not diagonal."
|
||||
self.set_local_pose(*T.mat2pose(local_transform))
|
||||
|
||||
def get_position_orientation(self):
|
||||
|
@ -360,6 +364,7 @@ class XFormPrim(BasePrim):
|
|||
Defaults to None, which means left unchanged.
|
||||
"""
|
||||
scale = np.array(scale, dtype=float) if isinstance(scale, Iterable) else np.ones(3) * scale
|
||||
assert np.all(scale > 0), f"Scale {scale} must consist of positive numbers."
|
||||
scale = lazy.pxr.Gf.Vec3d(*scale)
|
||||
properties = self.prim.GetPropertyNames()
|
||||
if "xformOp:scale" not in properties:
|
||||
|
|
|
@ -4,6 +4,7 @@ from omnigibson.robots.fetch import Fetch
|
|||
from omnigibson.robots.franka import FrankaPanda
|
||||
from omnigibson.robots.franka_allegro import FrankaAllegro
|
||||
from omnigibson.robots.franka_leap import FrankaLeap
|
||||
from omnigibson.robots.franka_mounted import FrankaMounted
|
||||
from omnigibson.robots.freight import Freight
|
||||
from omnigibson.robots.husky import Husky
|
||||
from omnigibson.robots.locobot import Locobot
|
||||
|
|
|
@ -15,6 +15,7 @@ from omnigibson.objects.usd_object import USDObject
|
|||
from omnigibson.robots.active_camera_robot import ActiveCameraRobot
|
||||
from omnigibson.robots.locomotion_robot import LocomotionRobot
|
||||
from omnigibson.robots.manipulation_robot import GraspingPoint, ManipulationRobot
|
||||
from omnigibson.utils.python_utils import classproperty
|
||||
|
||||
m = create_module_macros(module_path=__file__)
|
||||
# component suffixes for the 6-DOF arm joint names
|
||||
|
@ -140,12 +141,12 @@ class BehaviorRobot(ManipulationRobot, LocomotionRobot, ActiveCameraRobot):
|
|||
def model_name(self):
|
||||
return "BehaviorRobot"
|
||||
|
||||
@property
|
||||
def n_arms(self):
|
||||
@classproperty
|
||||
def n_arms(cls):
|
||||
return 2
|
||||
|
||||
@property
|
||||
def arm_names(self):
|
||||
@classproperty
|
||||
def arm_names(cls):
|
||||
return ["left", "right"]
|
||||
|
||||
@property
|
||||
|
|
|
@@ -265,12 +265,6 @@ class Fetch(ManipulationRobot, TwoWheelRobot, ActiveCameraRobot):
         # Run super method first
         super()._initialize()

-        # Set the joint friction for EEF to be higher
-        for arm in self.arm_names:
-            for joint in self.finger_joints[arm]:
-                if joint.joint_type != JointType.JOINT_FIXED:
-                    joint.friction = 500
-
     def _postprocess_control(self, control, control_type):
         # Run super method first
         u_vec, u_type_vec = super()._postprocess_control(control=control, control_type=control_type)
@@ -0,0 +1,66 @@
+import os
+
+import numpy as np
+
+from omnigibson.macros import gm
+from omnigibson.robots.franka import FrankaPanda
+from omnigibson.robots.manipulation_robot import GraspingPoint, ManipulationRobot
+from omnigibson.utils.transform_utils import euler2quat
+
+
+class FrankaMounted(FrankaPanda):
+    """
+    The Franka Emika Panda robot mounted on a custom chassis with a custom gripper
+    """
+
+    @property
+    def model_name(self):
+        return "FrankaMounted"
+
+    @property
+    def controller_order(self):
+        return ["arm_{}".format(self.default_arm), "gripper_{}".format(self.default_arm)]
+
+    @property
+    def _default_controllers(self):
+        controllers = super()._default_controllers
+        controllers["arm_{}".format(self.default_arm)] = "InverseKinematicsController"
+        controllers["gripper_{}".format(self.default_arm)] = "MultiFingerGripperController"
+        return controllers
+
+    @property
+    def finger_lengths(self):
+        return {self.default_arm: 0.15}
+
+    @property
+    def usd_path(self):
+        return os.path.join(gm.ASSET_PATH, "models/franka/franka_mounted.usd")
+
+    @property
+    def robot_arm_descriptor_yamls(self):
+        return {self.default_arm: os.path.join(gm.ASSET_PATH, "models/franka/franka_mounted_description.yaml")}
+
+    @property
+    def urdf_path(self):
+        return os.path.join(gm.ASSET_PATH, "models/franka/franka_mounted.urdf")
+
+    @property
+    def eef_usd_path(self):
+        # TODO: Update!
+        return {self.default_arm: os.path.join(gm.ASSET_PATH, "models/franka/franka_panda_eef.usd")}
+
+    @property
+    def assisted_grasp_start_points(self):
+        return {
+            self.default_arm: [
+                GraspingPoint(link_name="panda_rightfinger", position=[0.0, 0.001, 0.045]),
+            ]
+        }
+
+    @property
+    def assisted_grasp_end_points(self):
+        return {
+            self.default_arm: [
+                GraspingPoint(link_name="panda_leftfinger", position=[0.0, 0.001, 0.045]),
+            ]
+        }
@@ -15,6 +15,14 @@ class Husky(LocomotionRobot):
     def _create_discrete_action_space(self):
        raise ValueError("Husky does not support discrete actions!")

+    @property
+    def wheel_radius(self):
+        return 0.165
+
+    @property
+    def wheel_axle_length(self):
+        return 0.670
+
     @property
     def base_control_idx(self):
         return np.array([0, 1, 2, 3])
@ -506,23 +506,23 @@ class ManipulationRobot(BaseRobot):
|
|||
|
||||
return controllers
|
||||
|
||||
@property
|
||||
def n_arms(self):
|
||||
@classproperty
|
||||
def n_arms(cls):
|
||||
"""
|
||||
Returns:
|
||||
int: Number of arms this robot has. Returns 1 by default
|
||||
"""
|
||||
return 1
|
||||
|
||||
@property
|
||||
def arm_names(self):
|
||||
@classproperty
|
||||
def arm_names(cls):
|
||||
"""
|
||||
Returns:
|
||||
list of str: List of arm names for this robot. Should correspond to the keys used to index into
|
||||
arm- and gripper-related dictionaries, e.g.: eef_link_names, finger_link_names, etc.
|
||||
Default is string enumeration based on @self.n_arms.
|
||||
"""
|
||||
return [str(i) for i in range(self.n_arms)]
|
||||
return [str(i) for i in range(cls.n_arms)]
|
||||
|
||||
@property
|
||||
def default_arm(self):
|
||||
|
|
|
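The `@property` → `@classproperty` change lets `n_arms` and `arm_names` be queried on the robot class itself, without instantiating it. A minimal sketch of a descriptor with that behavior (OmniGibson ships its own `classproperty` in `omnigibson.utils.python_utils`; this one is illustrative):

```python
class classproperty:
    """Descriptor that evaluates a property on the class rather than on an instance."""

    def __init__(self, fget):
        self.fget = fget

    def __get__(self, obj, owner=None):
        # Dispatch on the class, so Robot.n_arms works without an instance
        return self.fget(owner if owner is not None else type(obj))


class ManipulationRobot:
    @classproperty
    def n_arms(cls):
        return 1

    @classproperty
    def arm_names(cls):
        # Default is string enumeration based on cls.n_arms
        return [str(i) for i in range(cls.n_arms)]


class BimanualRobot(ManipulationRobot):
    @classproperty
    def n_arms(cls):
        return 2
```

Here `BimanualRobot.arm_names` resolves `cls.n_arms` against the subclass, returning `["0", "1"]` with no instance ever created.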
@@ -186,12 +186,12 @@ class Tiago(ManipulationRobot, LocomotionRobot, ActiveCameraRobot):
     def model_name(self):
         return "Tiago"

-    @property
-    def n_arms(self):
+    @classproperty
+    def n_arms(cls):
         return 2

-    @property
-    def arm_names(self):
+    @classproperty
+    def arm_names(cls):
         return ["left", "right"]

     @property
@@ -142,10 +142,9 @@ class InteractiveTraversableScene(TraversableScene):
                 load_room_instances = [load_room_instances]
             load_room_instances_filtered = []
             for room_instance in load_room_instances:
-                if room_instance in self._seg_map.room_ins_name_to_ins_id:
-                    load_room_instances_filtered.append(room_instance)
-                else:
+                if room_instance not in self._seg_map.room_ins_name_to_ins_id:
                     log.warning("room_instance [{}] does not exist.".format(room_instance))
+                load_room_instances_filtered.append(room_instance)
             self.load_room_instances = load_room_instances_filtered
         elif load_room_types is not None:
             if isinstance(load_room_types, str):
@@ -1,3 +1,4 @@
+import math
 import time

 import gymnasium as gym
@@ -679,16 +680,18 @@ class VisionSensor(BaseSensor):
             n-array: (3, 3) camera intrinsic matrix. Transforming point p (x,y,z) in the camera frame via K * p will
                 produce p' (x', y', w) - the point in the image plane. To get pixel coordinates, divide x' and y' by w
         """
-        projection_matrix = self.camera_parameters["cameraProjection"]
-        projection_matrix = np.array(projection_matrix).reshape(4, 4)
+        focal_length = self.camera_parameters["cameraFocalLength"]
         width, height = self.camera_parameters["renderProductResolution"]
+        horizontal_aperture = self.camera_parameters["cameraAperture"][0]
+        horizontal_fov = 2 * math.atan(horizontal_aperture / (2 * focal_length))
+        vertical_fov = horizontal_fov * height / width

-        fx = projection_matrix[0, 0]
-        fy = projection_matrix[1, 1]
-        cx = projection_matrix[0, 2]
-        cy = projection_matrix[1, 2]
-        s = projection_matrix[0, 1]  # Skew factor
+        fx = (width / 2.0) / np.tan(horizontal_fov / 2.0)
+        fy = (height / 2.0) / np.tan(vertical_fov / 2.0)
+        cx = width / 2
+        cy = height / 2

-        intrinsic_matrix = np.array([[fx, s, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
+        intrinsic_matrix = np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])
         return intrinsic_matrix

     @property
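The new code derives fx/fy from the aperture-based field of view instead of reading the projection matrix. The same math as a standalone sketch (the parameter values below are made up for illustration, not OmniGibson defaults):

```python
import math

import numpy as np


def intrinsics_from_fov(width, height, focal_length, horizontal_aperture):
    # Horizontal FOV from the pinhole model; vertical FOV scales with the aspect ratio
    horizontal_fov = 2 * math.atan(horizontal_aperture / (2 * focal_length))
    vertical_fov = horizontal_fov * height / width

    fx = (width / 2.0) / np.tan(horizontal_fov / 2.0)
    fy = (height / 2.0) / np.tan(vertical_fov / 2.0)
    cx, cy = width / 2, height / 2  # principal point assumed at the image center
    return np.array([[fx, 0.0, cx], [0.0, fy, cy], [0.0, 0.0, 1.0]])


K = intrinsics_from_fov(width=640, height=480, focal_length=24.0, horizontal_aperture=20.955)
# Project a camera-frame point; dividing by w yields pixel coordinates
u, v, w = K @ np.array([0.1, -0.2, 1.0])
print(u / w, v / w)
```

Note the skew term is dropped (set to 0.0), matching the diff's assumption of square, axis-aligned pixels.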
@@ -754,6 +757,9 @@ class VisionSensor(BaseSensor):
         # Render to update
         render()

         cls.SEMANTIC_REMAPPER = Remapper()
+        cls.INSTANCE_REMAPPER = Remapper()
+        cls.INSTANCE_ID_REMAPPER = Remapper()
         cls.SENSORS = dict()
+        cls.KNOWN_SEMANTIC_IDS = set()
         cls.KEY_ARRAY = None
@@ -223,10 +223,12 @@ def launch_simulator(*args, **kwargs):

         Args:
             gravity (float): gravity on z direction.
-            physics_dt (float): dt between physics steps. Defaults to 1.0 / 120.0.
-            rendering_dt (float): dt between rendering steps. Note: rendering means rendering a frame of the current
-                application and not only rendering a frame to the viewports/ cameras. So UI elements of Isaac Sim will
-                be refreshed with this dt as well if running non-headless. Defaults to 1.0 / 30.0.
+            physics_dt (None or float): dt between physics steps. If None, will use default value
+                1 / gm.DEFAULT_PHYSICS_FREQ
+            rendering_dt (None or float): dt between rendering steps. Note: rendering means rendering a frame of the
+                current application and not only rendering a frame to the viewports/ cameras. So UI elements of
+                Isaac Sim will be refreshed with this dt as well if running non-headless. If None, will use default
+                value 1 / gm.DEFAULT_RENDERING_FREQ
             stage_units_in_meters (float): The metric units of assets. This will affect gravity value..etc.
                 Defaults to 0.01.
             viewer_width (int): width of the camera image, in pixels
@@ -239,8 +241,8 @@ def launch_simulator(*args, **kwargs):
         def __init__(
             self,
             gravity=9.81,
-            physics_dt=1.0 / 120.0,
-            rendering_dt=1.0 / 30.0,
+            physics_dt=None,
+            rendering_dt=None,
             stage_units_in_meters=1.0,
             viewer_width=gm.DEFAULT_VIEWER_WIDTH,
             viewer_height=gm.DEFAULT_VIEWER_HEIGHT,
@@ -264,8 +266,8 @@ def launch_simulator(*args, **kwargs):

             # Run super init
             super().__init__(
-                physics_dt=physics_dt,
-                rendering_dt=rendering_dt,
+                physics_dt=1.0 / gm.DEFAULT_PHYSICS_FREQ if physics_dt is None else physics_dt,
+                rendering_dt=1.0 / gm.DEFAULT_RENDERING_FREQ if rendering_dt is None else rendering_dt,
                 stage_units_in_meters=stage_units_in_meters,
                 device=device,
             )
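Replacing the hard-coded `physics_dt=1.0 / 120.0` defaults with `None` defers the choice to the global macros at construction time, so callers that never pass a dt automatically track changes to `gm.DEFAULT_PHYSICS_FREQ`. The pattern in miniature (the `Simulator` class and frequency constants here are illustrative stand-ins, not the OmniGibson objects):

```python
DEFAULT_PHYSICS_FREQ = 120    # Hz; stand-in for gm.DEFAULT_PHYSICS_FREQ
DEFAULT_RENDERING_FREQ = 30   # Hz; stand-in for gm.DEFAULT_RENDERING_FREQ


class Simulator:
    def __init__(self, physics_dt=None, rendering_dt=None):
        # None means "use the globally configured frequency"; an explicit dt wins
        self.physics_dt = 1.0 / DEFAULT_PHYSICS_FREQ if physics_dt is None else physics_dt
        self.rendering_dt = 1.0 / DEFAULT_RENDERING_FREQ if rendering_dt is None else rendering_dt
```

Using `None` as the sentinel (rather than baking in `1.0 / 120.0`) also keeps the signature honest: the default documents *where* the value comes from, not a number that may drift out of sync with the macro.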
@@ -1285,13 +1287,16 @@ def launch_simulator(*args, **kwargs):

             return

-        def save(self, json_path):
+        def save(self, json_path=None):
             """
             Saves the current simulation environment to @json_path.

             Args:
-                json_path (str): Full path of JSON file to save (should end with .json), which contains information
-                    to recreate the current scene.
+                json_path (None or str): Full path of JSON file to save (should end with .json), which contains information
+                    to recreate the current scene, if specified. If None, will return the JSON string instead
+
+            Returns:
+                None or str: If @json_path is None, returns dumped json string. Else, None
             """
             # Make sure the sim is not stopped, since we need to grab joint states
             assert not self.is_stopped(), "Simulator cannot be stopped when saving to USD!"
@@ -1320,11 +1325,15 @@ def launch_simulator(*args, **kwargs):
             }

             # Write this to the json file
-            Path(os.path.dirname(json_path)).mkdir(parents=True, exist_ok=True)
-            with open(json_path, "w+") as f:
-                json.dump(scene_info, f, cls=NumpyEncoder, indent=4)
+            if json_path is None:
+                return json.dumps(scene_info, cls=NumpyEncoder, indent=4)
+
+            else:
+                Path(os.path.dirname(json_path)).mkdir(parents=True, exist_ok=True)
+                with open(json_path, "w+") as f:
+                    json.dump(scene_info, f, cls=NumpyEncoder, indent=4)

-            log.info("The current simulation environment saved.")
+                log.info("The current simulation environment saved.")

         def _open_new_stage(self):
             """
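With `json_path=None`, `save()` now returns the serialized scene instead of writing a file. The dump-vs-dumps branching can be sketched standalone (this `NumpyEncoder` is a minimal stand-in for OmniGibson's encoder, and `save` here is a free function for illustration):

```python
import json
import os
from pathlib import Path

import numpy as np


class NumpyEncoder(json.JSONEncoder):
    # Minimal stand-in: serialize numpy arrays as plain lists
    def default(self, obj):
        if isinstance(obj, np.ndarray):
            return obj.tolist()
        return super().default(obj)


def save(scene_info, json_path=None):
    if json_path is None:
        # No path given: hand back the JSON string instead of touching disk
        return json.dumps(scene_info, cls=NumpyEncoder, indent=4)
    # Path given: create parent directories and write the file, returning None
    Path(os.path.dirname(json_path)).mkdir(parents=True, exist_ok=True)
    with open(json_path, "w+") as f:
        json.dump(scene_info, f, cls=NumpyEncoder, indent=4)
```

The string-returning branch is what lets callers (e.g. tests dumping and reloading state in memory) avoid filesystem round-trips.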
@@ -486,7 +486,7 @@ class CameraMover:
             pan_angle = np.arctan2(-xy_direction[0], xy_direction[1])
             tilt_angle = np.arcsin(z)
             # Infer global quat orientation from these angles
-            quat = T.euler2quat([np.pi / 2 - tilt_angle, 0.0, pan_angle])
+            quat = T.euler2quat([np.pi / 2 + tilt_angle, 0.0, pan_angle])
             poses.append([positions[j], quat])

         # Record the generated trajectory
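The sign fix above concerns recovering pan/tilt angles from a viewing direction before converting to a quaternion. The angle-extraction step on its own, without the quaternion conversion (which needs `omnigibson.utils.transform_utils`), looks like this:

```python
import numpy as np


def pan_tilt_from_direction(direction):
    # direction: unit 3-vector the camera should look along
    x, y, z = direction
    pan_angle = np.arctan2(-x, y)   # rotation about the world z-axis
    tilt_angle = np.arcsin(z)       # elevation above the horizontal plane
    return pan_angle, tilt_angle


# Looking straight along +y: zero pan, zero tilt
pan, tilt = pan_tilt_from_direction([0.0, 1.0, 0.0])
```

The diff's fix flips the tilt contribution to the roll component (`np.pi / 2 + tilt_angle`), so a camera looking upward (positive z) now pitches in the correct direction.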
@@ -6,7 +6,7 @@ force-exclude = 'omnigibson/(data|external)'
 [tool.isort]
 profile = "black"
 line_length = 120
-py_version = 'all'
+py_version = '310'
 filter_files = true
 extend_skip_glob = [
     'omnigibson/data/*',
setup.py
@@ -48,6 +48,5 @@ setup(
     ],
-    tests_require=[],
     python_requires=">=3",
     package_data={"": ["omnigibson/global_config.yaml"]},
     include_package_data=True,
 ) # yapf: disable
@@ -57,7 +57,7 @@ def main():
     cfg["robots"].append(
         {
             "type": "Fetch",
-            "obs_modalities": "all",
+            "obs_modalities": ["rgb"],
             "position": [-1.3 + 0.75 * i + SCENE_OFFSET[args.scene][0], 0.5 + SCENE_OFFSET[args.scene][1], 0],
             "orientation": [0.0, 0.0, 0.7071, -0.7071],
         }
@@ -79,7 +79,8 @@ def test_rs_int_full_load():
     }

     # Make sure sim is stopped
-    og.sim.stop()
+    if og.sim:
+        og.sim.stop()

     # Make sure GPU dynamics are enabled (GPU dynamics needed for cloth)
     gm.ENABLE_OBJECT_STATES = True
@@ -10,13 +10,11 @@ from omnigibson.action_primitives.starter_semantic_action_primitives import (
 from omnigibson.macros import gm
 from omnigibson.objects.dataset_object import DatasetObject


-def execute_controller(ctrl_gen, env):
-    for action in ctrl_gen:
-        env.step(action)
+# Make sure that Omniverse is launched before setting up the tests.
+og.launch()


-def primitive_tester(load_object_categories, objects, primitives, primitives_args):
+def setup_environment(load_object_categories):
     cfg = {
         "scene": {
             "type": "InteractiveTraversableScene",
@@ -67,12 +65,18 @@ def primitive_tester(load_object_categories, objects, primitives, primitives_args):
     gm.ENABLE_OBJECT_STATES = True
     gm.USE_GPU_DYNAMICS = False
     gm.ENABLE_FLATCACHE = False

     # Create the environment
     env = og.Environment(configs=cfg)
-    robot = env.robots[0]
     env.reset()
+    return env
+
+
+def execute_controller(ctrl_gen, env):
+    for action in ctrl_gen:
+        env.step(action)


-def primitive_tester(load_object_categories, objects, primitives, primitives_args):
+def primitive_tester(env, objects, primitives, primitives_args):
     for obj in objects:
         env.scene.add_object(obj["object"])
         obj["object"].set_position_orientation(obj["position"], obj["orientation"])
@@ -95,6 +99,7 @@ def primitive_tester(load_object_categories, objects, primitives, primitives_args):

 def test_navigate():
     categories = ["floors", "ceilings", "walls"]
+    env = setup_environment(categories)

     objects = []
     obj_1 = {
@@ -107,11 +112,12 @@ def test_navigate():
     primitives = [StarterSemanticActionPrimitiveSet.NAVIGATE_TO]
     primitives_args = [(obj_1["object"],)]

-    assert primitive_tester(categories, objects, primitives, primitives_args)
+    assert primitive_tester(env, objects, primitives, primitives_args)


 def test_grasp():
     categories = ["floors", "ceilings", "walls", "coffee_table"]
+    env = setup_environment(categories)

     objects = []
     obj_1 = {
@@ -124,11 +130,12 @@ def test_grasp():
     primitives = [StarterSemanticActionPrimitiveSet.GRASP]
     primitives_args = [(obj_1["object"],)]

-    assert primitive_tester(categories, objects, primitives, primitives_args)
+    assert primitive_tester(env, objects, primitives, primitives_args)


 def test_place():
     categories = ["floors", "ceilings", "walls", "coffee_table"]
+    env = setup_environment(categories)

     objects = []
     obj_1 = {
@@ -147,12 +154,13 @@ def test_place():
     primitives = [StarterSemanticActionPrimitiveSet.GRASP, StarterSemanticActionPrimitiveSet.PLACE_ON_TOP]
     primitives_args = [(obj_2["object"],), (obj_1["object"],)]

-    assert primitive_tester(categories, objects, primitives, primitives_args)
+    assert primitive_tester(env, objects, primitives, primitives_args)


 @pytest.mark.skip(reason="primitives are broken")
 def test_open_prismatic():
     categories = ["floors"]
+    env = setup_environment(categories)

     objects = []
     obj_1 = {
@@ -167,12 +175,13 @@ def test_open_prismatic():
     primitives = [StarterSemanticActionPrimitiveSet.OPEN]
     primitives_args = [(obj_1["object"],)]

-    assert primitive_tester(categories, objects, primitives, primitives_args)
+    assert primitive_tester(env, objects, primitives, primitives_args)


 @pytest.mark.skip(reason="primitives are broken")
 def test_open_revolute():
     categories = ["floors"]
+    env = setup_environment(categories)

     objects = []
     obj_1 = {
@@ -185,4 +194,4 @@ def test_open_revolute():
     primitives = [StarterSemanticActionPrimitiveSet.OPEN]
     primitives_args = [(obj_1["object"],)]

-    assert primitive_tester(categories, objects, primitives, primitives_args)
+    assert primitive_tester(env, objects, primitives, primitives_args)
@@ -18,7 +18,8 @@ from omnigibson.systems import get_system


 def start_env():
-    og.sim.stop()
+    if og.sim:
+        og.sim.stop()
     config = {
         "env": {"initial_pos_z_offset": 0.1},
         "render": {"viewer_width": 1280, "viewer_height": 720},
@@ -21,7 +21,6 @@ from omnigibson.utils.constants import PrimType
 from omnigibson.utils.physx_utils import apply_force_at_pos, apply_torque


-@pytest.mark.skip(reason="dryer is not fillable yet.")
 @og_test
 def test_dryer_rule(env):
     assert len(REGISTERED_RULES) > 0, "No rules registered!"
@@ -34,8 +33,8 @@ def test_dryer_rule(env):
     og.sim.step()

     # Place the two objects inside the dryer
-    remover_dishtowel.set_position_orientation([0.0, 0.0, 0.4], [0, 0, 0, 1])
-    bowl.set_position_orientation([0.0, 0.0, 0.5], [0, 0, 0, 1])
+    remover_dishtowel.set_position_orientation([0.06, 0, 0.2], [0.0311883, -0.23199339, -0.06849886, 0.96980107])
+    bowl.set_position_orientation([0.0, 0.0, 0.2], [0, 0, 0, 1])
     og.sim.step()

     assert remover_dishtowel.states[Saturated].set_value(water, True)
@@ -47,15 +46,23 @@ def test_dryer_rule(env):

     # The rule will not execute if Open is True
     clothes_dryer.states[Open].set_value(True)
+    clothes_dryer.states[ToggledOn].set_value(True)
     og.sim.step()

     assert remover_dishtowel.states[Saturated].get_value(water)
     assert clothes_dryer.states[Contains].get_value(water)

+    # The rule will not execute if ToggledOn is False
+    clothes_dryer.states[Open].set_value(False)
+    clothes_dryer.states[ToggledOn].set_value(False)
+    og.sim.step()
+
+    assert remover_dishtowel.states[Saturated].get_value(water)
+    assert clothes_dryer.states[Contains].get_value(water)
+
+    # The rule will execute if Open is False and ToggledOn is True
     clothes_dryer.states[Open].set_value(False)
+    clothes_dryer.states[ToggledOn].set_value(True)

-    # The rule will execute when Open is False and ToggledOn is True
     og.sim.step()

     # Need to take one more step for the state setters to take effect
@@ -791,12 +798,12 @@ def test_cooking_object_rule_failure_wrong_container(env):
     og.sim.step()

     # This fails the recipe because it requires the baking sheet to be inside the oven, not the stockpot
-    stockpot.set_position_orientation([0, 0, 0.47], [0, 0, 0, 1])
+    stockpot.set_position_orientation([0, 0, 0.487], [0, 0, 0, 1])
     og.sim.step()
     assert stockpot.states[Inside].get_value(oven)

-    bagel_dough.set_position_orientation([0, 0, 0.45], [0, 0, 0, 1])
-    raw_egg.set_position_orientation([0.02, 0, 0.50], [0, 0, 0, 1])
+    bagel_dough.set_position_orientation([0, 0, 0.464], [0, 0, 0, 1])
+    raw_egg.set_position_orientation([0.02, 0, 0.506], [0, 0, 0, 1])
     og.sim.step()
     assert bagel_dough.states[Inside].get_value(stockpot)
     assert raw_egg.states[OnTop].get_value(bagel_dough)
@@ -833,7 +840,7 @@ def test_cooking_object_rule_failure_recipe_objects(env):
     place_obj_on_floor_plane(oven)
     og.sim.step()

-    baking_sheet.set_position_orientation([0, 0, 0.455], [0, 0, 0, 1])
+    baking_sheet.set_position_orientation([0.0, 0.05, 0.455], [0, 0, 0, 1])
     og.sim.step()
     assert baking_sheet.states[Inside].get_value(oven)
@@ -875,12 +882,12 @@ def test_cooking_object_rule_failure_unary_states(env):
     place_obj_on_floor_plane(oven)
     og.sim.step()

-    baking_sheet.set_position_orientation([0, 0, 0.455], [0, 0, 0, 1])
+    baking_sheet.set_position_orientation([0.0, 0.05, 0.455], [0, 0, 0, 1])
     og.sim.step()
     assert baking_sheet.states[Inside].get_value(oven)

-    bagel_dough.set_position_orientation([0, 0, 0.5], [0, 0, 0, 1])
-    raw_egg.set_position_orientation([0.02, 0, 0.55], [0, 0, 0, 1])
+    bagel_dough.set_position_orientation([0, 0, 0.492], [0, 0, 0, 1])
+    raw_egg.set_position_orientation([0.02, 0, 0.534], [0, 0, 0, 1])
     og.sim.step()
     assert bagel_dough.states[OnTop].get_value(baking_sheet)
     assert raw_egg.states[OnTop].get_value(bagel_dough)
@@ -918,12 +925,12 @@ def test_cooking_object_rule_failure_binary_system_states(env):
     place_obj_on_floor_plane(oven)
     og.sim.step()

-    baking_sheet.set_position_orientation([0, 0, 0.455], [0, 0, 0, 1])
+    baking_sheet.set_position_orientation([0.0, 0.05, 0.455], [0, 0, 0, 1])
     og.sim.step()
     assert baking_sheet.states[Inside].get_value(oven)

-    bagel_dough.set_position_orientation([0, 0, 0.5], [0, 0, 0, 1])
-    raw_egg.set_position_orientation([0.02, 0, 0.55], [0, 0, 0, 1])
+    bagel_dough.set_position_orientation([0, 0, 0.492], [0, 0, 0, 1])
+    raw_egg.set_position_orientation([0.02, 0, 0.534], [0, 0, 0, 1])
     og.sim.step()
     assert bagel_dough.states[OnTop].get_value(baking_sheet)
     assert raw_egg.states[OnTop].get_value(bagel_dough)
@@ -961,11 +968,11 @@ def test_cooking_object_rule_failure_binary_object_states(env):
     place_obj_on_floor_plane(oven)
     og.sim.step()

-    baking_sheet.set_position_orientation([0, 0, 0.455], [0, 0, 0, 1])
+    baking_sheet.set_position_orientation([0.0, 0.05, 0.455], [0, 0, 0, 1])
     og.sim.step()
     assert baking_sheet.states[Inside].get_value(oven)

-    bagel_dough.set_position_orientation([0, 0, 0.5], [0, 0, 0, 1])
+    bagel_dough.set_position_orientation([0, 0, 0.492], [0, 0, 0, 1])
     raw_egg.set_position_orientation([0.12, 0.15, 0.47], [0, 0, 0, 1])
     og.sim.step()
     assert bagel_dough.states[OnTop].get_value(baking_sheet)
@@ -1053,12 +1060,12 @@ def test_cooking_object_rule_success(env):
     place_obj_on_floor_plane(oven)
     og.sim.step()

-    baking_sheet.set_position_orientation([0, 0, 0.455], [0, 0, 0, 1])
+    baking_sheet.set_position_orientation([0.0, 0.05, 0.455], [0, 0, 0, 1])
     og.sim.step()
     assert baking_sheet.states[Inside].get_value(oven)

-    bagel_dough.set_position_orientation([0, 0, 0.5], [0, 0, 0, 1])
-    raw_egg.set_position_orientation([0.02, 0, 0.55], [0, 0, 0, 1])
+    bagel_dough.set_position_orientation([0, 0, 0.492], [0, 0, 0, 1])
+    raw_egg.set_position_orientation([0.02, 0, 0.534], [0, 0, 0, 1])
     og.sim.step()
     assert bagel_dough.states[OnTop].get_value(baking_sheet)
     assert raw_egg.states[OnTop].get_value(bagel_dough)
@@ -165,6 +165,7 @@ def assert_test_env():
         get_obj_cfg("half_apple", "half_apple", "sguztn"),
         get_obj_cfg("washer", "washer", "dobgmu"),
         get_obj_cfg("carpet_sweeper", "carpet_sweeper", "xboreo"),
+        get_obj_cfg("clothes_dryer", "clothes_dryer", "smcyys"),
     ],
     "robots": [
         {