Merge branch 'og-develop' into feat/doc-update-josiah

@@ -0,0 +1,111 @@
---
icon: material/lightbulb
---

# **Important Concepts**

In this document, we discuss and disambiguate a number of concepts that are central to working with OmniGibson and BEHAVIOR-1K.

## **BEHAVIOR concepts**

At a high level, the BEHAVIOR dataset consists of tasks, synsets, categories, objects and substances. These are all interconnected and are used to define and simulate household robotics activities.

### Tasks

Tasks in BEHAVIOR are first-order logic formalizations of 1,000+ long-horizon household activities that survey participants indicated they would benefit from robot help with. Each task is defined in a single BDDL file that includes the list of objects needed for the task (the *object scope*), their *initial conditions* (i.e. what a scene should look like when the task begins), and their *goal conditions* (i.e. what needs to be true for the task to be considered completed). Task definitions are symbolic - they can be grounded in a particular scene with particular objects, which is called a *task instance*. Task instances are created through a process called *sampling*, which finds scenes and rooms that match the task's requirements and places non-scene objects into configurations that satisfy the task's initial conditions.
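
For illustration, grounding such a symbolic definition in OmniGibson typically goes through the environment's task config. The keyword names below are assumptions based on typical `BehaviorTask` configurations and may differ in your OmniGibson version; they are shown only to make the task / task-instance distinction concrete:

```python
# Hedged sketch: loading a pre-sampled (cached) BEHAVIOR task instance.
# Keyword names are assumptions; check your OmniGibson version.
task_cfg = {
    "type": "BehaviorTask",
    "activity_name": "laying_wood_floors",   # one of the 1,000+ activities
    "activity_definition_id": 0,             # which BDDL definition to use
    "activity_instance_id": 0,               # which pre-sampled instance to load
    "online_object_sampling": False,         # False = load the cached instance
}
```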

### Synsets

Synsets are the nouns used in BDDL object scopes, expanded from the WordNet hierarchy with additional synsets to suit BEHAVIOR's needs. Synsets are laid out as a directed acyclic graph, so each synset can have parents/ancestors and children/descendants. When a task's object scope requires a synset (e.g. "grocery.n.01"), instantiations of the task may use objects belonging to any descendant of that synset (e.g. an apple, assigned to "apple.n.01"), allowing a high degree of flexibility in task definitions. Each synset is annotated with abilities and parameters that define the kinds of behavior expected from objects of that synset (e.g. a faucet is a source of water, a door is openable, a stove is a heat source, etc.).

### Categories

Categories act as a bridge between synsets and OmniGibson's objects. Each category is mapped to one leaf synset and can contain multiple objects. The purpose of a category is to disambiguate between objects that are semantically the same but functionally and physically different; e.g. both a wall-mounted sink and a standing sink are `sink.n.01` semantically (i.e. they have the same functions and can be used for the same purposes), but they should not be swapped for one another during object randomization, for the sake of physical and visual realism. As a result, `wall_mounted_sink` and `standing_sink` are different categories, but they are mapped to the same synset and thus can be used for the same task-relevant purposes.

### Objects

Objects denote specific 3D object models in the dataset. Each object belongs to one category and has a unique 6-character ID that identifies it in the dataset. Objects can be annotated with articulations and metadata, which OmniGibson uses to simulate the abilities expected of the object's assigned synset. For example, a faucet is a fluid source, so it needs an annotation for the position the water will come out of.

### Scenes

Scenes are specific configurations of objects. By default, a scene file contains the information needed to lay out all the objects that form the scene. BEHAVIOR-1K ships with 50 base scenes that cover a variety of environments like houses, offices, restaurants, etc. These scenes can be varied through object randomization, which replaces objects with other objects from the same category within the existing objects' bounding boxes. During task sampling, additional objects requested in the object scope can be added, and these scene/task combinations (*task instances*) can be saved separately. BEHAVIOR-1K ships with at least one instantiation of each task.

### Substances / Systems

Some synsets, such as water, are marked as substances. For substance synsets, no categories or objects are provided; instead, these synsets are mapped to *particle systems* inside OmniGibson. Particle systems can act in a variety of ways: some, like water, act and are rendered as fluids; others, like stains, are simply visual particles with custom meshes. Substances are implemented as singletons at the scene level, e.g. there is only one *water* particle system in a scene, and its particles may be arbitrarily placed in the scene. At a symbolic level, other objects can be filled with, covered in, or simply contain particles of a particle system.
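
A minimal sketch of what this looks like in code, assuming a scene-level `get_system` accessor and the `Covered` object state; the object name is hypothetical and the exact entry points may differ between OmniGibson versions:

```python
# Hedged sketch: interacting with a scene-level particle system.
from omnigibson import object_states

# `env` is an existing og.Environment
water = env.scene.get_system("water")              # the singleton water system
rag = env.scene.object_registry("name", "rag_0")   # hypothetical object name
is_covered = rag.states[object_states.Covered].get_value(water)
```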

### Transition Rules

Transition rules define complex physical or chemical interactions between objects and substances that are not natively supported by Omniverse. They specify input and output synsets, the conditions under which a transition occurs, and include rules for washing, drying, slicing, dicing, melting, and recipe-based transformations. Each rule type has specific input and output requirements and conditions. When the input requirements are satisfied, the rule is applied, causing the removal of some objects/substances and the addition of others to the scene.

## **Components of the BEHAVIOR ecosystem**

The BEHAVIOR ecosystem consists of four components: BDDL (the symbolic knowledgebase), OmniGibson (the simulator), the BEHAVIOR dataset (the scene and object assets), and the OmniGibson assets (robots, etc.).

### OmniGibson

OmniGibson is the main software component of the BEHAVIOR ecosystem. It is a robotics simulator built on NVIDIA Isaac Sim and is the successor of the BEHAVIOR team's previous, well-known simulator, iGibson. OmniGibson is designed to meet the needs of the BEHAVIOR project, including realistic rendering, high-fidelity physics, and the ability to simulate soft bodies and fluids.

OmniGibson is a Python package, and it requires Isaac Sim to be available locally to function. It can also be used independently from the BEHAVIOR ecosystem to perform robot learning on different robots, assets, and tasks. The OmniGibson stack is discussed further in the "OmniGibson, Omniverse, Isaac Sim and PhysX" section below.

### OmniGibson Assets

The OmniGibson assets are a collection of robots and other simple graphical assets that are downloaded into the omnigibson/data directory. These assets are necessary to run OmniGibson for any purpose (e.g. no robot simulation without robots!), and as such are shipped separately from the BEHAVIOR dataset, which contains the items needed to simulate BEHAVIOR tasks. These assets are not encrypted.

### The BEHAVIOR dataset

The BEHAVIOR dataset consists of the scene, object and particle system assets that are used to simulate the BEHAVIOR-1K tasks. Most of the assets were procured through ShapeNet and TurboSquid, and the dataset is encrypted to comply with their licenses.

* Objects are represented as USD files that contain the geometry, materials, and physics properties of the objects. Materials are provided separately.
* Scene assets are represented as JSON files containing OmniGibson state dumps that describe a particular configuration of the USD objects in a scene. Scene directories also include additional information such as traversability maps of the scene with various subsets of objects included. *In the currently shipped versions of OmniGibson scenes, "clutter" objects that are not task-relevant (e.g. the products for sale at the supermarket) are not included, to reduce the complexity of the scenes and improve simulation performance.*
* The particle system assets are represented as JSON files describing the parameters of the particle system. Some particle systems also contain USD assets that are used as particles of that system. Other systems are rendered directly using isosurfaces, etc.

### BDDL

The BEHAVIOR Domain Definition Language (BDDL) library contains the symbolic knowledgebase for the BEHAVIOR ecosystem and the tools for interacting with it. The BDDL library contains the following main components:

* The BEHAVIOR Object Taxonomy, which contains a tree of nouns ("synsets") derived from WordNet and enriched with annotations and relationships that are useful for robotics and AI. The Object Taxonomy also includes mappings of BEHAVIOR dataset categories and systems to synsets. It can be accessed using the `bddl.object_taxonomy` module (see the sketch after this list).
* The BEHAVIOR Domain Definition Language (BDDL) standard, parsers, and implementations of all of the first-order logic predicates and functions defined in the standard.
* The definitions of the 1,000 tasks that are part of the BEHAVIOR-1K dataset. These are defined with initial and goal conditions expressed as first-order logic predicates in BDDL.
* The backend abstract base class, which needs to be implemented by a simulator (e.g. OmniGibson) to provide the functionality necessary to sample tasks' initial conditions and check the predicates in their goal conditions.
* Transition rule definitions, which define recipes, like cooking, that result in the removal and addition of nouns in the environment state at a given time. Some of these transitions are critical to the completion of a task, e.g. blending lemons and water in a blender needs to produce the lemonade substance for a `making_lemonade` task to be feasible. These need to be implemented by the simulator.
* The knowledgebase module (`bddl.knowledge_base`), which contains an ORM representation of all of the BDDL + BEHAVIOR dataset concepts. This can be used to investigate the relationships between objects, synsets, categories, substances, systems, and tasks. The [BEHAVIOR knowledgebase website](https://behavior.stanford.edu/knowledgebase) is a web interface to this module.
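
As a minimal sketch, the Object Taxonomy can be queried directly from the `bddl` package. The class and method names below mirror the `OBJECT_TAXONOMY` helper used elsewhere in these docs, but treat them as assumptions for your installed version:

```python
# Hedged sketch: querying the BEHAVIOR Object Taxonomy from bddl.
from bddl.object_taxonomy import ObjectTaxonomy

taxonomy = ObjectTaxonomy()
print(taxonomy.get_ancestors("apple.n.01"))         # e.g. includes fruit.n.01
print(taxonomy.get_abilities("coffee_maker.n.01"))  # e.g. heatSource, toggleable, ...
```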

## **OmniGibson, Omniverse, Isaac Sim and PhysX**

OmniGibson is an open-source project that is built on top of NVIDIA's Isaac Sim and Omniverse. Here we discuss the relationship between these components.

### Omniverse

Omniverse is a platform developed by NVIDIA that provides a set of tools and services for creating, sharing, and rendering 3D content.

Omniverse on its own is an SDK containing a UI, a photorealistic renderer (RTX/Hydra), a scene representation (USD), a physics engine (PhysX) and a number of other features. Its components, and other custom code, can be used in different combinations to create "Omniverse apps".

An Omniverse app usually involves rendering, but does not have to involve physics simulation. NVIDIA develops a number of such apps in-house, e.g. Omniverse Create, which can be used as a CAD design tool, and Isaac Sim, which is an application for robotics simulation.

### PhysX

PhysX is a physics engine owned and developed by NVIDIA and used in a variety of games and platforms like Unity. It is integrated into Omniverse and thus can be used to apply physics updates to the state of the scene in an Omniverse app.

PhysX supports important features that are necessary for robotics simulation, such as articulated bodies, joints, motors, controllers, etc.

### Isaac Sim

Isaac Sim is an Omniverse app developed by NVIDIA that is designed for robotics simulation. It is built on top of Omniverse and uses PhysX for physics simulation. As an Omniverse app, it is defined as a list of Omniverse components that need to be enabled to comprise the application, plus a thin layer of custom logic that supports launching the application as a library and stepping the simulation programmatically, rather than launching it as an asynchronous, standalone desktop application.

It is important to note that the Omniverse SDK is generally meant as a CAD / collaboration / rendering platform and is monetized as such. Isaac Sim is a bit of a special case in that its main purpose is robotics simulation, which usually involves starting from a fixed state and simulating forward through physics, rather than manually editing a CAD file or making animations using keyframes. The application also runs as an MDP, where the viewport updates on each step rather than asynchronously like a typical interactive desktop app. As a result, a lot of Omniverse features are not used in Isaac Sim, and some features (e.g. timestamps, live windows, etc.) do not quite work as expected.

### OmniGibson

OmniGibson is a Python package built by the BEHAVIOR team at the Stanford Vision and Learning Group on top of Isaac Sim, and it provides a number of features that are necessary for simulating BEHAVIOR tasks. OmniGibson:

* completely abstracts away the Isaac Sim interface (i.e. users do not interact with NVIDIA code / interfaces / abstractions at all), instead providing a familiar scene/object/robot/task interface similar to the one introduced in iGibson
* provides a number of fast, high-level APIs for interacting with the simulator, such as loading scenes, setting up tasks, and controlling robots
* implements samplers and checkers for all of the predicates and functions defined in the BDDL standard to allow instantiation and simulation of BEHAVIOR-1K tasks
* includes utilities for working with the BEHAVIOR dataset, including decryption, saving / loading scene states, etc.
* supports very simple vectorization across multiple copies of the scene to aid with training reinforcement learning agents
* provides easily configurable controllers (direct joint control, inverse kinematics, operational space, differential drive, etc.) that can be used to control robots in the simulator

OmniGibson is shipped as a Python package through pip or GitHub; however, it requires Isaac Sim to be installed locally to function. It can also be used independently from the BEHAVIOR ecosystem to perform robot learning on different robots, assets, and tasks.
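
As a quick illustration of the kind of interface OmniGibson exposes, a minimal environment loop might look roughly like the following. Config keys, type names, and return signatures vary between OmniGibson versions, so treat this as a sketch rather than a canonical example:

```python
# Hedged sketch: creating and stepping an OmniGibson environment.
import omnigibson as og

cfg = {
    "scene": {"type": "InteractiveTraversableScene", "scene_model": "Rs_int"},
    "robots": [{"type": "Fetch", "obs_modalities": ["rgb", "proprio"]}],
    "task": {"type": "DummyTask"},
}

env = og.Environment(configs=cfg)
reset_result = env.reset()
for _ in range(10):
    action = env.action_space.sample()
    step_result = env.step(action)   # observations, reward, termination flags, info, ...
og.shutdown()
```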

@@ -1,5 +1,5 @@
# **Contact**

If you have any questions, comments, or concerns, please feel free to reach out to us by joining our Discord server:

<a href="https://discord.gg/bccR5vGFEx"><img src="https://discordapp.com/api/guilds/1166422812160966707/widget.png?style=banner3"></a>

@@ -8,12 +8,44 @@ If you encounter any bugs or have feature requests that could enhance the platfo
When reporting a bug, please kindly provide detailed information about the issue, including steps to reproduce it, any error messages, and relevant system details. For feature requests, we appreciate a clear description of the desired functionality and its potential benefits for the OmniGibson community.

You can also ask questions about issues on our Discord channel.

## **Branch Structure**

The OmniGibson repository uses the following branching structure:

* *main* is the branch that contains the latest released version of OmniGibson. No code should be pushed directly to *main* and no pull requests should be merged directly into *main*. It is updated at release time by OmniGibson core team members. External users are expected to be on this branch.
* *og-develop* is the development branch that contains the latest, stable development version. Internal users and developers are expected to be on, or branching from, this branch. Pull requests for new features should be made into this branch. **It is our expectation that og-develop is always stable, i.e. all tests need to be passing and all PRs need to be complete features to be merged.**

## **How to contribute**

We are always open to pull requests that address bugs, add new features, or improve the platform in any way. If you are considering submitting a pull request, we recommend opening an issue first to discuss the changes you would like to make. This will help us ensure that your proposed changes align with the goals of the project and that we can provide guidance on the best way to implement them.

**Before starting a pull request, understand our expectations. Your PR must:**

1. Contain clean code with properly written English comments
2. Contain all of the changes (no follow-up PRs), and **only** the changes (no huge PRs containing a bunch of things), necessary for **one** feature
3. Leave og-develop in the same fully stable state you found it in

You can follow the steps below to develop a feature:

1. **Branch off of og-develop.** Start by checking out og-develop and branching from it. If you are an OmniGibson team member, you can push your branches onto the OmniGibson repo directly. Otherwise, you can fork the repository.
2. **Implement your feature.** Implement your feature as discussed with the OmniGibson team on your feature request or otherwise. Some things to pay attention to:
    - **Examples:** If you are creating any new major features, create an example that a user can run to try out your feature. See the existing examples in the examples directory for inspiration, and follow the same structure that allows examples to be run headlessly as integration tests.
    - **Code style:** We follow the [PEP 8](https://www.python.org/dev/peps/pep-0008/) style guide for Python code. Please ensure that your code is formatted according to these guidelines. We recommend installing our pre-commit hooks, which fix style issues and sort imports; these are also applied automatically on pull requests.
    - **Inline documentation:** We request that all new APIs be documented via docstrings, and that functions be reasonably commented.
3. **Write user documentation**: If your changes affect the public API or introduce new features, please update the relevant documentation to reflect these changes. If you are creating new features, consider writing a tutorial.
4. **Testing**: Please include tests to ensure that the new functionality works as expected and that existing functionality remains unaffected. This both confirms that your feature works and protects it against regressions caused by unrelated PRs from others. Unit tests are run on each pull request, and failures will prevent PRs from being merged.
5. **Create a PR**: After you are done with all of the above steps, create a pull request on the OmniGibson repo. **Make sure you pick og-develop as the base branch.** A friendly bot will complain if you don't. In the pull request description, explain the feature and the need for the changes, link to any discussions with developers, and assign the PR for review by one of the core developers.
6. **Go through the review process**: Your reviewers may leave comments on things to be changed, or ask you questions. Even if you fix things or answer questions, do **NOT** mark conversations as resolved; let the reviewer do so in their next pass. After you are done responding, click the button to request another round of review. Repeat until there are no open conversations left.
7. **Merged!** Once the reviewers are satisfied, they will go ahead and merge your PR. The PR will be merged into og-develop for immediate developer use and included in the next release for public use. Public releases happen every few months. Thanks a lot for your contribution, and congratulations on becoming a contributor to what we hope will be the world's leading robotics benchmark!

## **Continuous Integration**

The BEHAVIOR suite has continuous integration running via GitHub Actions in containers on our compute cluster. To keep our cluster safe, CI is only run on external contributions after one of our team members approves it.

* Tests and profiling are run directly on PRs and merges on the OmniGibson repo using our hosted runners.
* Docker image builds are performed using GitHub-owned runners.
* Docs builds are run on the behavior-website repo along with the rest of the website.
* When GitHub releases are created, a source distribution is packed and shipped to PyPI by a hosted runner.

For more information about the workflows and runners, please reach out on our Discord channel.

@@ -32,7 +32,7 @@ Object states have a unified API interface: a getter `state.get_value(...)`, and
Object states are intended to be added when an object is instantiated, during its constructor call, via the `abilities` kwarg. This is expected to be a dictionary mapping ability name to a dictionary of keyword arguments that dictate the instantiated object state's behavior. Normally, this is simply the set of keyword arguments to pass to the specific `ObjectState` constructor, but it can differ. Concretely, the raw values in the `abilities` value dictionary are postprocessed via the specific object state's `postprocess_ability_params` classmethod. This allows `abilities` to be fully exportable in .json format, without requiring complex datatypes (which may be required as part of an object state's actual constructor) to be stored.

By default, `abilities=None` results in an object's abilities directly being inferred from its `category` kwarg. **`OmniGibson`** leverages the crowdsourced [BEHAVIOR Knowledgebase](https://behavior.stanford.edu/knowledgebase/categories/index.html) to determine what abilities (or "properties" in the knowledgebase) a given entity (called a "synset" in the knowledgebase) can have. Every category in **`OmniGibson`**'s asset dataset directly corresponds to a specific synset. By going to the knowledgebase and clicking on the corresponding synset, one can see the annotated abilities (properties) for that given synset, which will be applied to the object being created.

Alternatively, you can programmatically observe which abilities, with the exact default kwargs, correspond to a given category via:

@@ -43,6 +43,9 @@ synset = OBJECT_TAXONOMY.get_synset_from_category(category)
abilities = OBJECT_TAXONOMY.get_abilities(synset)
```

!!! info annotate "Follow our tutorial on the BEHAVIOR knowledgebase!"

    To better understand how to use / visualize / modify the BEHAVIOR knowledgebase, please read our [tutorial](../tutorials/behavior_knowledgebase.html)!

??? warning annotate "Not all object states are guaranteed to be created!"

    Some object states (such as `ParticleApplier` or `ToggledOn`) potentially require specific metadata to be defined for a given object model before the object state can be created. For example, `ToggledOn` represents a pressable virtual button, and requires this button to be defined a priori in the raw object asset before it is imported. When parsing the `abilities` dictionary, each object state runs a compatibility check via `state.is_compatible(obj, **kwargs)` before it is created, where `**kwargs` defines any relevant keyword arguments that would be passed to the object state constructor. If the check fails, then the object state is **_not_** created!

@@ -51,8 +54,7 @@ abilities = OBJECT_TAXONOMY.get_abilities(synset)

As mentioned earlier, object states can potentially be read via `get_value(...)` or written via `set_value(...)`. Whether reading / writing is possible, as well as the expected arguments and return values, depends on the specific object state class. For example, object states that inherit the `BooleanStateMixin` class expect `get_value(...)` to return and `set_value(...)` to receive a boolean. `AbsoluteObjectState`s are agnostic to any other object in the scene, so `get_value()` takes no arguments. In contrast, `RelativeObjectState`s are computed with respect to another object, and so require `other_obj` to be passed into the getter and setter, e.g., `get_value(other_obj)` and `set_value(other_obj, ...)`. A `ValueError` will be raised if `get_value(...)` or `set_value(...)` is called on an object state that does not support that functionality. If `set_value()` is called and succeeds, it returns `True`; otherwise, it returns `False`. For more information on specific object state types' behaviors, please see [Object State Types](#object-state-types).
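
A minimal illustration of this getter / setter pattern. The object names are hypothetical (retrieved from an existing scene), and `object_states` is assumed to be importable from the top-level package as in OmniGibson's examples:

```python
# Hedged sketch of reading / writing object states.
from omnigibson import object_states

# cabinet, apple, and table are objects retrieved from an existing scene

# AbsoluteObjectState: no other object involved
is_open = cabinet.states[object_states.Open].get_value()

# RelativeObjectState: computed with respect to another object
on_top = apple.states[object_states.OnTop].get_value(table)

# Setters return True on success, False otherwise
success = apple.states[object_states.OnTop].set_value(table, True)
```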

It is important to note that object states are usually queried / computed _on demand_ and immediately cached until their value becomes stale (usually at the immediately following simulation step). This is done for efficiency reasons, and also means that object states are usually not automatically updated per-step unless absolutely necessary. Calling `state.clear_cache()` forces a clearing of an object state's internal cache.

## Types

@@ -147,7 +147,7 @@ These are manipulation-only robots (an instance of [`ManipulationRobot`](../refe
<tr>
<td valign="top" width="60%">
[**`Franka`**](../reference/robots/franka.html)<br><br>
The popular 7-DOF <a href="https://franka.de/">Franka Research 3</a> model equipped with a parallel jaw gripper. Note that OmniGibson also includes three alternative versions of Franka with dexterous hands: FrankaAllegro (equipped with an Allegro hand), FrankaLeap (equipped with a Leap hand) and FrankaInspire (equipped with an Inspire hand).<br><br>
<ul>
<li>_Controllers_: Arm, Gripper</li>
<li>_Sensors_: Wrist Camera</li>

@@ -36,7 +36,7 @@ Alternatively, a scene can be directly imported at runtime by first creating the
To import an object into a scene, call `scene.add_object(obj)`.

The scene keeps track of and organizes all imported objects via its owned `scene.object_registry`. Objects can quickly be queried by relevant property keys, such as `name`, `prim_path`, and `category`, from `env.scene.object_registry` as follows (`scene.object_registry_unique_keys` and `scene.object_registry_group_keys` define the valid possible key queries):

@@ -0,0 +1,231 @@
---
icon: material/list-box
---

# 📑 **Tasks**

## Description

`Task`s define the high-level objectives that an agent must complete in a given `Environment`, subject to certain constraints (e.g. not flipping over).

`Task`s have two important internal variables:

- `_termination_conditions`: a dict of {`str`: `TerminationCondition`} that defines when an episode should be terminated. For each termination condition, `termination_condition.step(...)` returns a tuple of `(done [bool], success [bool])`. If any of the termination conditions returns `done = True`, the episode is terminated. If any returns `success = True`, the episode is considered successful.
- `_reward_functions`: a dict of {`str`: `RewardFunction`} that defines how the agent is rewarded. Each reward function has a `reward_function.step(...)` method that returns a tuple of `(reward [float], info [dict])`. The `reward` is a scalar value that is added to the agent's total reward for the current step. The `info` is a dictionary that can contain additional information about the reward.

`Task`s usually specify task-relevant observations (e.g. the goal location for a navigation task) via the `_get_obs` method, which returns a tuple of `(low_dim_obs [dict], obs [dict])`, where the first element is a dict of low-dimensional observations that will be automatically flattened into a 1D array, and the second element is everything else that shouldn't be flattened. Different types of tasks should overwrite the `_get_obs` method to return the appropriate observations.

`Task`s also define the reset behavior (in between episodes) of the environment via the `_reset_scene`, `_reset_agent`, and `_reset_variables` methods.

- `_reset_scene`: resets the scene for the next episode; the default is `scene.reset()`.
- `_reset_agent`: resets the agent for the next episode; the default is to do nothing.
- `_reset_variables`: resets any internal variables as needed; the default is to do nothing.

Different types of tasks should overwrite these methods for the appropriate reset behavior, e.g. a navigation task might want to randomize the initial pose of the agent and the goal location.
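
Putting these pieces together, a custom task is essentially a subclass that fills in these hooks. The sketch below is illustrative only: the base-class module path, hook names, and constructor arguments are assumptions about the current OmniGibson API and may differ in your version.

```python
# Hedged sketch of a custom task following the structure described above.
import torch as th

from omnigibson.tasks.task_base import BaseTask
from omnigibson.termination_conditions.timeout import Timeout
from omnigibson.reward_functions.collision_reward import CollisionReward


class MyNavigationTask(BaseTask):
    def _create_termination_conditions(self):
        # dict of {str: TerminationCondition}
        return {"timeout": Timeout(max_steps=500)}

    def _create_reward_functions(self):
        # dict of {str: RewardFunction}
        return {"collision": CollisionReward(r_collision=0.1)}

    def _get_obs(self, env):
        # (low_dim_obs, obs): low-dim entries are flattened automatically
        low_dim_obs = {"goal_xy": th.zeros(2)}  # placeholder goal observation
        return low_dim_obs, dict()

    def _reset_agent(self, env):
        # e.g. randomize the robot's initial pose between episodes
        pass
```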

## Usage

### Specifying

Every `Environment` instance includes a task, defined by the config that is passed to the environment constructor via the `task` key.
This is expected to be a dictionary of relevant keyword arguments specifying the desired task configuration to be created (e.g. reward type and weights, hyperparameters for reset behavior, etc.).
The `type` key is required and specifies the desired task class. Additional keys can be specified and will be passed directly to the specific task class constructor.
An example of a task configuration is shown below in `.yaml` form:

??? code "point_nav_example.yaml"
    ``` yaml linenums="1"
    task:
      type: PointNavigationTask
      robot_idn: 0
      floor: 0
      initial_pos: null
      initial_quat: null
      goal_pos: null
      goal_tolerance: 0.36 # turtlebot bodywidth
      goal_in_polar: false
      path_range: [1.0, 10.0]
      visualize_goal: true
      visualize_path: false
      n_vis_waypoints: 25
      reward_type: geodesic
      termination_config:
        max_collisions: 500
        max_steps: 500
        fall_height: 0.03
      reward_config:
        r_potential: 1.0
        r_collision: 0.1
        r_pointgoal: 10.0
    ```

### Runtime

The `Environment` instance has a `task` attribute that is an instance of the specified task class.
Internally, the `Environment`'s `reset` method calls the task's `reset` method, its `step` method calls the task's `step` method, and its `get_obs` method calls the task's `get_obs` method.
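
A brief illustrative sketch of this wiring, assuming `cfg` is an environment config dict that includes the `task` section shown above (exact kwarg names may differ by version):

```python
# Hedged sketch: the environment owns and drives the task.
import omnigibson as og

env = og.Environment(configs=cfg)
print(type(env.task).__name__)                      # e.g. "PointNavigationTask"

reset_result = env.reset()                          # internally calls env.task.reset(...)
step_result = env.step(env.action_space.sample())   # internally calls env.task.step(...)
```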

## Types

**`OmniGibson`** currently supports 5 types of tasks, 7 types of termination conditions, and 5 types of reward functions.

### `Task`

<table markdown="span">
<tr>
<td valign="top">
[**`DummyTask`**](../reference/tasks/dummy_task.html)<br><br>
Dummy task with trivial implementations.
<ul>
<li>`termination_conditions`: empty dict.</li>
<li>`reward_functions`: empty dict.</li>
<li>`_get_obs()`: empty dict.</li>
<li>`_reset_scene()`: default.</li>
<li>`_reset_agent()`: default.</li>
</ul>
</td>
</tr>
<tr>
<td valign="top">
[**`PointNavigationTask`**](../reference/tasks/point_navigation_task.html)<br><br>
PointGoal navigation task with fixed / randomized initial pose and goal location.
<ul>
<li>`termination_conditions`: `MaxCollision`, `Timeout`, `PointGoal`.</li>
<li>`reward_functions`: `PotentialReward`, `CollisionReward`, `PointGoalReward`.</li>
<li>`_get_obs()`: returns the relative xy position to the goal, and the agent's current linear and angular velocities.</li>
<li>`_reset_scene()`: default.</li>
<li>`_reset_agent()`: samples the initial pose and goal location.</li>
</ul>
</td>
</tr>
<tr>
<td valign="top">
[**`PointReachingTask`**](../reference/tasks/point_reaching_task.html)<br><br>
Similar to PointNavigationTask, except the goal is specified with respect to the robot's end effector.
<ul>
<li>`termination_conditions`: `MaxCollision`, `Timeout`, `PointGoal`.</li>
<li>`reward_functions`: `PotentialReward`, `CollisionReward`, `PointGoalReward`.</li>
<li>`_get_obs()`: returns the goal position and the end effector's position in the robot's frame, and the agent's current linear and angular velocities.</li>
<li>`_reset_scene()`: default.</li>
<li>`_reset_agent()`: samples the initial pose and goal location.</li>
</ul>
</td>
</tr>
<tr>
<td valign="top">
[**`GraspTask`**](../reference/tasks/grasp_task.html)<br><br>
Grasp task for a single object.
<ul>
<li>`termination_conditions`: `Timeout`.</li>
<li>`reward_functions`: `GraspReward`.</li>
<li>`_get_obs()`: returns the object's pose in the robot's frame.</li>
<li>`_reset_scene()`: resets the pose of the objects in `_objects_config`.</li>
<li>`_reset_agent()`: randomizes the robot's pose and joint configurations.</li>
</ul>
</td>
</tr>
<tr>
<td valign="top">
[**`BehaviorTask`**](../reference/tasks/behavior_task.html)<br><br>
BEHAVIOR task of a long-horizon household activity.
<ul>
<li>`termination_conditions`: `Timeout`, `PredicateGoal`.</li>
<li>`reward_functions`: `PotentialReward`.</li>
<li>`_get_obs()`: returns the existence, pose, and in-gripper information of all task-relevant objects.</li>
<li>`_reset_scene()`: default.</li>
<li>`_reset_agent()`: default.</li>
</ul>
</td>
</tr>
</table>

!!! info annotate "Follow our tutorial on BEHAVIOR tasks!"

    To better understand how to use / sample / load / customize BEHAVIOR tasks, please read our [tutorial](../tutorials/behavior_tasks.html)!

### `TerminationCondition`

<table markdown="span">
<tr>
<td valign="top">
[**`Timeout`**](../reference/termination_conditions/timeout.html)<br><br>
`FailureCondition`: the episode terminates if `max_step` steps have passed.
</td>
</tr>
<tr>
<td valign="top">
[**`Falling`**](../reference/termination_conditions/falling.html)<br><br>
`FailureCondition`: the episode terminates if the robot can no longer function (i.e. it falls below the floor height by at least `fall_height` or tilts too much, by at least `tilt_tolerance`).
</td>
</tr>
<tr>
<td valign="top">
[**`MaxCollision`**](../reference/termination_conditions/max_collision.html)<br><br>
`FailureCondition`: the episode terminates if the robot has collided more than `max_collisions` times.
</td>
</tr>
<tr>
<td valign="top">
[**`PointGoal`**](../reference/termination_conditions/point_goal.html)<br><br>
`SuccessCondition`: the episode terminates if the point goal is reached within `distance_tol` by the robot's base.
</td>
</tr>
<tr>
<td valign="top">
[**`ReachingGoal`**](../reference/termination_conditions/reaching_goal.html)<br><br>
`SuccessCondition`: the episode terminates if the reaching goal is reached within `distance_tol` by the robot's end effector.
</td>
</tr>
<tr>
<td valign="top">
[**`GraspGoal`**](../reference/termination_conditions/grasp_goal.html)<br><br>
`SuccessCondition`: the episode terminates if the target object has been grasped (via assistive grasping).
</td>
</tr>
<tr>
<td valign="top">
[**`PredicateGoal`**](../reference/termination_conditions/predicate_goal.html)<br><br>
`SuccessCondition`: the episode terminates if all the goal predicates of `BehaviorTask` are satisfied.
</td>
</tr>
</table>

### `RewardFunction`

<table markdown="span">
<tr>
<td valign="top">
[**`CollisionReward`**](../reference/reward_functions/collision_reward.html)<br><br>
Penalization of robot collisions with non-floor objects, with a negative weight `r_collision`.
</td>
</tr>
<tr>
<td valign="top">
[**`PointGoalReward`**](../reference/reward_functions/point_goal_reward.html)<br><br>
Reward for reaching the goal with the robot's base, with a positive weight `r_pointgoal`.
</td>
</tr>
<tr>
<td valign="top">
[**`ReachingGoalReward`**](../reference/reward_functions/reaching_goal_reward.html)<br><br>
Reward for reaching the goal with the robot's end effector, with a positive weight `r_reach`.
</td>
</tr>
<tr>
<td valign="top">
[**`PotentialReward`**](../reference/reward_functions/potential_reward.html)<br><br>
Reward for decreasing some arbitrary potential function value, with a positive weight `r_potential`.
It assumes the task already has `get_potential` implemented.
Generally, low potential is preferred (e.g. a common potential for a goal-directed task is the distance to the goal).
</td>
</tr>
<tr>
<td valign="top">
[**`GraspReward`**](../reference/reward_functions/grasp_reward.html)<br><br>
Reward for grasping an object. It not only evaluates the success of the grasp but also considers various penalties and efficiencies.
The reward is calculated based on several factors:
<ul>
<li>Grasping reward: A positive reward is given if the robot is currently grasping the specified object.</li>
<li>Distance reward: A reward based on the inverse exponential distance between the end effector and the object.</li>
<li>Regularization penalty: Penalizes large-magnitude actions to encourage smoother and more energy-efficient movements.</li>
<li>Position and orientation penalties: Discourage excessive movement of the end effector.</li>
<li>Collision penalty: Penalizes collisions with the environment or other objects.</li>
</ul>
</td>
</tr>
</table>

@@ -0,0 +1,53 @@
---
icon: material/car-wrench
---

# Under the Hood: Isaac Sim Details

On this page, we discuss the particulars of certain Isaac Sim features and behaviors.

## Playing and Stopping

TODO

## CPU and GPU dynamics and pipelines

TODO

## Sources of Truth: USD, PhysX and Fabric

In Isaac Sim, there are three competing representations of the current state of the scene: USD, PhysX, and Fabric. These are used in different contexts: USD is the main source of truth for loading and representing the scene, PhysX is only used opaquely during physics simulation, and Fabric provides a faster source of truth for the renderer during physics simulation.

### USD

USD is the scene graph representation of the scene, as directly loaded from the USD files. This is the main scene / stage representation used by Omniverse apps.

* This representation involves maintaining the full USD tree in memory and mutating it as the scene changes.
* It is a complete, flexible representation containing all scene meshes and hierarchy, and it works really well for representing static scenes (i.e. no realtime physics simulation), e.g. for usual CAD workflows.
* During physics simulation, we need to repeatedly update the transforms of the objects in the scene so that they will be rendered in their new poses. USD is not optimized for this, especially due to specific USD features like transforms being defined locally (so to compute a world transform, you need to traverse the tree). Queries and reads/writes using the Pixar USD library are also relatively slow overall.

### PhysX

PhysX maintains an internal, physics-only representation of the scene that it uses to perform computations during physics simulation.

* The PhysX representation is only available while simulation is playing (i.e. when it is stopped, all PhysX internal storage is freed, and when it is played again, the scene is reloaded from USD).
* This representation is the fastest source for everything it provides (e.g. transforms, joint states, etc.) since it only contains physics-relevant information and provides methods to access it in a tensorized manner (the tensor APIs), which are used in a number of places in OmniGibson.
* However, it does not contain any rendering information and is not available when simulation is stopped. As such, it cannot serve as the renderer's source of truth.
* Therefore, by default, PhysX explicitly updates the USD state after every step so that the renderer and the representation of the scene in the viewport are updated. This is a really slow operation for large scenes, causing frame rates to drop below 10 fps even for our smallest scenes.

### Fabric

Fabric (formerly Flatcache) is a flattened version of the USD scene graph that is optimized for fast access to transforms and for rendering.

* It can be enabled using the ENABLE_FLATCACHE global macro in OmniGibson, which causes the renderer to use Fabric to get object transforms instead of USD, and causes PhysX to stop updating the USD state after every step and update the Fabric state instead.
* The Fabric state exists alongside the USD and captures much of the same information, although it is not as complete as USD. It is optimized for fast reads and writes of object transforms and is used by the renderer to render the scene.
* The information it contains is usually fresher than the USD, so when Fabric is enabled, special attention needs to be paid in order to not accidentally access stale information from USD instead of Fabric.
* Fabric stores world transforms directly, i.e. changes to the transform of an object's parent will not be reflected in the child's position, because the child separately stores its world transform. One main advantage of this setup is that it is not necessary to traverse the tree to compute world transforms.
* A new library called `usdrt` provides an interface for accessing Fabric state that is similar to the Pixar USD library. This is used in a number of places in OmniGibson to access Fabric state.

To conclude, with ENABLE_FLATCACHE enabled, there are three concurrent representations of the scene state in OmniGibson. USD is the source of truth for the meshes and the hierarchy. While physics simulation is playing, PhysX is the source of truth for the physics state of the scene, and we use it for fast accesses to compute controls, etc. Finally, on every render step, PhysX updates Fabric, which is then the source of truth for the renderer and for the OmniGibson pose APIs.

Enabling the ENABLE_FLATCACHE macro is recommended, since large scenes will be unplayable without it; but it can be disabled for small scenes, in which case the Fabric representation is not used, PhysX updates the USD's local transforms on every step, and the renderer uses USD directly.
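
A small sketch of toggling this flag, assuming it is exposed through OmniGibson's global macros object (`omnigibson.macros.gm`) like other global settings; it would need to be set before the simulator is launched:

```python
# Hedged sketch: enabling Fabric / Flatcache via the global macro.
from omnigibson.macros import gm

gm.ENABLE_FLATCACHE = True  # must be set before creating the Environment
```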

## Lazy Imports

Almost all of OmniGibson's simulation functionality uses Isaac Sim code, objects, and components. These Python components often need to be imported, e.g. via an `import omni.isaac.core.utils.prims` statement. However, such imports of Omniverse libraries can only be performed if the Isaac Sim application has already been launched. Launching the application takes up to 10 minutes on the first try due to shader compilation, and about 20 seconds every time after that, and it requires the presence of a compatible GPU and the right permissions. However, certain OmniGibson functionality (e.g. downloading datasets, running linters, etc.) does not require the actual _execution_ of any Isaac Sim code, and should not be blocked by the need to import Isaac Sim libraries.

To solve this problem, OmniGibson uses a lazy import system. The `omnigibson.lazy` module, often imported as `import omnigibson.lazy as lazy`, provides an interface that only imports modules when they are first used.

Thus, there are two important requirements enforced in OmniGibson with respect to lazy imports:

1. All imports of omni, pxr, etc. libraries should happen through the `omnigibson.lazy` module. Classes and functions can then be accessed using their fully qualified names. For example, instead of `from omni.isaac.core.utils.prims import get_prim_at_path` and then calling `get_prim_at_path(...)`, you should first import the lazy import module via `import omnigibson.lazy as lazy` and then call your function using its full name, `lazy.omni.isaac.core.utils.prims.get_prim_at_path(...)` (see the sketch after this list).
2. No module except `omnigibson/utils/deprecated_utils.py` should import any Isaac Sim modules at load time (that module is ignored by docs, linters, etc.). This ensures that the OmniGibson package can be imported and used without the need to launch Isaac Sim. Instead, Isaac Sim modules should be imported only when they are needed, and only in the functions that use them. If a class needs to inherit from a class in an Isaac Sim module, the class can be placed in the deprecated_utils.py file, or the import can be wrapped in a function to delay it, as in the case of simulator.py.
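
The pattern from item 1, as a minimal sketch. Note that the call itself still only works after OmniGibson has launched Isaac Sim, and the prim path is illustrative:

```python
# Hedged sketch of the lazy-import pattern described above.
import omnigibson.lazy as lazy

def lookup_world_prim():
    # omni.* is only actually imported when this function is first called
    return lazy.omni.isaac.core.utils.prims.get_prim_at_path("/World")
```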

@@ -0,0 +1,283 @@
---
icon: material/bookshelf
---

# 🍴 **BEHAVIOR Knowledgebase**

## Overview

BEHAVIOR is short for Benchmark for Everyday Household Activities in Virtual, Interactive, and ecOlogical enviRonments.

The [**BEHAVIOR Knowledgebase**](https://behavior.stanford.edu/knowledgebase/) contains information about which synsets are valid, their relationships with each other, their abilities (or properties), the hyperparameters of those abilities, and the hand-specified [transition rules](../modules/transition_rules.md).

Here are the important conceptual components of the BEHAVIOR Knowledgebase:

### [**Tasks**](https://behavior.stanford.edu/knowledgebase/tasks)

A family of 1,000 long-horizon household activities.

- As illustrated in the [**BEHAVIOR Tasks tutorial**](behavior_tasks.html), each task definition contains a list of task-relevant objects, and their initial and goal conditions.
- The knowledgebase page also shows
    - Which scenes this task is compatible with.
    - (Experimental) The transition paths that help achieve the goal conditions.

### [**Synsets**](https://behavior.stanford.edu/knowledgebase/synsets)

The basic building block of the knowledgebase.

- We follow the [**WordNet**](https://wordnet.princeton.edu/) hierarchy while expanding it with additional ("custom") synsets to suit the needs of BEHAVIOR.
- Each synset has at least one parent synset, and can have many child synsets (no children means it is a leaf synset).
- Each synset can have many abilities (or properties).
    - Some properties define the physical attributes of the object and how OmniGibson simulates them, e.g. `liquid`, `cloth`, `visualSubstance`, etc.
    - Some properties define the semantic attributes (or affordances) of the object, e.g. `fillable`, `openable`, `cookable`, etc.
    - Each property might contain additional hyperparameters that define the exact behavior of the property, e.g. `heatSource` has the hyperparameters `requires_toggled_on` (bool), `requires_closed` (bool), `requires_inside` (bool), `temperature` (float), and `heating_rate` (float).
- The knowledgebase page also shows
    - The predicates that are used for the synset in the task definitions.
    - The tasks that involve the synset.
    - The object categories and models that belong to the synset.
    - The transition rules that involve the synset.
    - The synset's position in the WordNet hierarchy (e.g. ancestors, descendants, etc.).

### [**Categories**](https://behavior.stanford.edu/knowledgebase/categories)

The bridge between the WordNet(-like) synsets and OmniGibson's object and substance categories.

- Each category is mapped to **exactly one leaf synset**, e.g. `apple` is mapped to `apple.n.01`.
- Multiple categories can be mapped to the same synset, e.g. `drop_in_sink` and `pedestal_sink` both map to `sink.n.01`, and share the exact same properties (because properties are annotated at the synset level, not the category level).
- All objects belonging to the same category should share similar mass and size, i.e. they should be interchangeable if object randomization is performed.
- The knowledgebase page also shows
    - The objects that belong to the category, as well as their images.
    - The corresponding synset's position in the WordNet hierarchy (e.g. ancestors, descendants, etc.).

### [**Objects**](https://behavior.stanford.edu/knowledgebase/objects)

A one-to-one mapping to a specific 3D object model in our dataset.

- Each object belongs to **exactly one category**, e.g. `coffee_maker-fwlabx` belongs to `coffee_maker`, corresponding to the object model residing at `<gm.DATASET_PATH>/objects/coffee_maker/fwlabx`.
- Each object can have multiple meta links that serve the relevant object states in OmniGibson. For example, the [`coffee_maker-fwlabx`](https://behavior.stanford.edu/knowledgebase/objects/coffee_maker-fwlabx/index.html) object is annotated with `connectedpart` for the `AttachedTo` state, `heatsource` for the `HeatSourceOrSink` state, and `toggleButton` for the `ToggledOn` state.
- The knowledgebase page also shows
    - The object's image.
    - The scenes / rooms the object appears in.
    - The corresponding synset's position in the WordNet hierarchy (e.g. ancestors, descendants, etc.).

### [**Scenes**](https://behavior.stanford.edu/knowledgebase/scenes)

A one-to-one mapping to a specific 3D scene model in our dataset.

- Each scene consists of multiple rooms with the following naming convention: `<room_type>_<room_id>`, e.g. `living_room_0`, `kitchen_1`, etc.
- Each room contains a list of objects, e.g. in the [`Beechwood_0_int`](https://behavior.stanford.edu/knowledgebase/scenes/Beechwood_0_int/index.html) scene, `countertop-tpuwys: 6` means the `kitchen_0` room has 6 copies of the `countertop-tpuwys` object.

### [**Transition Rules**](https://behavior.stanford.edu/knowledgebase/transitions/index.html)

Hand-specified rules that define complex physical or chemical interactions between objects and substances that are not natively supported by Omniverse.

- Each transition rule specifies a list of input synsets and a list of output synsets, as well as the conditions that need to be satisfied for the transition to occur.
    - For instance, in the [`beef_stew`](https://behavior.stanford.edu/knowledgebase/transitions/beef_stew) rule, the input synsets are `ground_beef.n.01`, `beef_broth.n.01`, `pea.n.01`, `diced__carrot.n.01` and `diced__vidalia_onion.n.01`, and the output synset is `beef_stew.n.01`.
- The conditions are not yet visualized on the website, but you can manually inspect them in the [JSON files](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/transition_map/tm_jsons).

We have 6 different types of transition rules:

- `WasherRule`: removes "dirty" substances from the washer if the necessary solvent is present, and wets the objects inside by making them either `Saturated` with or `Covered` by `water`.
- `DryerRule`: dries the objects inside by making them not `Saturated` with `water`, and removes all `water` from the dryer.
- `SlicingRule`: when an object with the `slicer` ability exerts a sufficient force on another object with the `sliceable` ability, it slices the latter object into two halves.
- `DicingRule`: when an object with the `slicer` ability exerts a sufficient force on another object with the `diceable` ability, it dices the latter object into the corresponding diced substance.
- `MeltingRule`: when an object with the `meltable` ability reaches a certain temperature, it melts into the corresponding melted substance.
- `RecipeRule`: a general framework for recipe-based transitions that involve multiple objects and substances, and custom-defined conditions.
    - `input_objects`: input objects and their counts that are required
    - `input_systems`: input systems that are required
    - `output_objects`: output objects and their counts that are produced
    - `output_systems`: output systems that are produced (the quantity depends on the collective volume of the input objects and systems)
    - `input_states`: the states that the input objects and systems should satisfy, e.g. an ingredient should not be `cooked` already
    - `output_states`: the states that the output objects and systems should satisfy, e.g. the dish should be `cooked` after the recipe is done
    - `fillable_categories`: `fillable` object categories needed for the recipe, e.g. pots and pans for cooking, and coffee makers for brewing coffee

We have 5 different types of `RecipeRule`s:
|
||||
|
||||
- `CookingPhysicalParticleRule`: "cook" physical particles. It might or might not require water, depending on the synset's property `waterCook`.
|
||||
- Requires water: `rice` + `cooked__water` -> `cooked__rice`.
|
||||
- Doesn't require water: `diced__chicken` -> `cooked__diced__chicken`.
|
||||
- `ToggleableMachineRule`: leverages a `toggleable` ability machine (e.g. electric mixer, coffee machine, blender) that needs to be `ToggledOn`.
|
||||
- Output is a single object: `flour` + `butter` + `sugar` -> `dough`; the machine is `electric_mixer`.
|
||||
- Output is a single system: `strawberry` + `milk` -> `strawberry_smoothie`; the machine is `blender`.
|
||||
- `MixingToolRule`: leverages an object with the `mixingTool` ability that comes into contact with an object with the `fillable` ability.
|
||||
- Output is a single system: `water` + `lemon_juice` + `sugar` -> `lemonade`; the mixing tool is `spoon`.
|
||||
- `CookingRule`: leverages an object with the `heatSource` ability and an object with the `fillable` ability for general cooking.
|
||||
- `CookingObjectRule`: Output is one or more objects: `bagel_dough` + `egg` + `sesame_seed` -> `bagel`; the heat source is `oven`; the container is `baking_sheet`.
|
||||
- `CookingSystemRule`: Output is a single system: `beef` + `tomato` + `chicken_stock` -> `stew`; the heat source is `stove`; the container is `stockpot`.
|
||||
|
||||
|
||||
## Usage
|
||||
|
||||
OmniGibson interfaces with the BEHAVIOR Knowledgebase via a single interface: the [`ObjectTaxonomy`](https://github.com/StanfordVL/bddl/blob/master/bddl/object_taxonomy.py) class.
|
||||
|
||||
Here is an example of how to use the `ObjectTaxonomy` class to query the BEHAVIOR Knowledgebase.
|
||||
|
||||
```{.python .annotate}
|
||||
from omnigibson.utils.bddl_utils import OBJECT_TAXONOMY
|
||||
|
||||
# Get parents / children / ancestors / descendants / leaf descendants of a synset
|
||||
parents = OBJECT_TAXONOMY.get_parents("fruit.n.01")
|
||||
children = OBJECT_TAXONOMY.get_children("fruit.n.01")
|
||||
ancestors = OBJECT_TAXONOMY.get_ancestors("fruit.n.01")
|
||||
descendants = OBJECT_TAXONOMY.get_descendants("fruit.n.01")
|
||||
leaf_descendants = OBJECT_TAXONOMY.get_leaf_descendants("fruit.n.01")
|
||||
|
||||
# Checker functions for synsets
|
||||
is_leaf = OBJECT_TAXONOMY.is_leaf("fruit.n.01")
|
||||
is_ancestor = OBJECT_TAXONOMY.is_ancestor("fruit.n.01", "apple.n.01")
|
||||
is_descendant = OBJECT_TAXONOMY.is_descendant("apple.n.01", "fruit.n.01")
|
||||
is_valid = OBJECT_TAXONOMY.is_valid_synset("fruit.n.01")
|
||||
|
||||
# Get the abilities of a synset, e.g. "coffee_maker.n.01" -> {'rigidBody': {...}, 'heatSource': {...}, 'toggleable': {...}, ...}
|
||||
abilities = OBJECT_TAXONOMY.get_abilities("coffee_maker.n.01")
|
||||
|
||||
# Check if a synset has a specific ability, e.g. "coffee_maker.n.01" has "heatSource"
|
||||
has_ability = OBJECT_TAXONOMY.has_ability("coffee_maker.n.01", "heatSource")
|
||||
|
||||
# Get the synset of an object category, e.g. "apple" -> "apple.n.01"
|
||||
object_synset = OBJECT_TAXONOMY.get_synset_from_category("apple")
|
||||
|
||||
# Get the object categories of a synset, e.g. "apple.n.01" -> ["apple"]
|
||||
object_categories = OBJECT_TAXONOMY.get_categories("apple.n.01")
|
||||
|
||||
# Get the object categories of all the leaf descendants of a synset, e.g. "fruit.n.01" -> ["apple", "banana", "orange", ...]
|
||||
leaf_descendant_categories = OBJECT_TAXONOMY.get_subtree_categories("fruit.n.01")
|
||||
|
||||
# Get the synset of a substance category, e.g. "water" -> "water.n.06"
|
||||
substance_synset = OBJECT_TAXONOMY.get_synset_from_substance("water")
|
||||
|
||||
# Get the substance categories of a synset, e.g. "water.n.06" -> ["water"]
|
||||
substance_categories = OBJECT_TAXONOMY.get_substances("water.n.06")
|
||||
|
||||
# Get the substance categories of all the leaf descendants of a synset, e.g. "liquid.n.01" -> ["water", "milk", "juice", ...]
|
||||
leaf_descendant_substances = OBJECT_TAXONOMY.get_subtree_substances("liquid.n.01")
|
||||
```
|
||||
|
||||
## (Advanced) Customize BEHAVIOR Knowledgebase
|
||||
|
||||
To customize the BEHAVIOR Knowledgebase, you can modify the source CSV files in the [bddl](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data) repository and then rebuild the knowledgebase.
|
||||
|
||||
### Modify Source CSV Files
|
||||
|
||||
You can use Excel, Google Sheets or any other spreadsheet software to modify the source CSV files below.
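If you prefer to inspect or edit the files programmatically, here is a minimal sketch using Python's built-in `csv` module; the path assumes a local clone of the `bddl` repo.

```{.python .annotate}
import csv

# Assumed path into a local clone of the bddl repo; adjust to your checkout
with open("bddl/bddl/generated_data/category_mapping.csv", newline="") as f:
    reader = csv.DictReader(f)
    print(reader.fieldnames)          # column names of the mapping file
    for row in list(reader)[:5]:      # first few category -> synset mappings
        print(row)
```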
|
||||
|
||||
[category_mapping.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/category_mapping.csv)
|
||||
|
||||
- **Information**: map an object category to a synset.
|
||||
- **When to modify**: add a new object category.
|
||||
- **Caveat**: you also need to add the canonical density of the object category to `<gm.DATASET_PATH>/metadata/avg_category_specs.json`.
|
||||
|
||||
[substance_hyperparams.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/substance_hyperparams.csv)
|
||||
|
||||
- **Information**: map a substance category to a synset, and also specify the substance's type (e.g. `fluid`, `macro_physical_particle`), physical attributes (e.g. `is_viscous`, `particle_density`) and visual appearance (e.g. `material_mtl_name`, `diffuse_reflection_color`).
|
||||
- **When to modify**: add a new substance category.
|
||||
- **Caveat**: you also need to add the metadata (in a JSON file) and (optionally) particle prototypes to the `<gm.DATASET_PATH>/systems/<substance_category>`.
|
||||
- `fluid`: only metadata is needed, e.g. `<gm.DATASET_PATH>/systems/water/metadata.json`.
|
||||
- `granular`: both metadata and particle prototypes are needed, e.g. `<gm.DATASET_PATH>/systems/salt/metadata.json` and `<gm.DATASET_PATH>/systems/sugar/iheusv`.
|
||||
- `macro_physical_particle`: both metadata and particle prototypes are needed, e.g. `<gm.DATASET_PATH>/systems/cashew/metadata.json` and `<gm.DATASET_PATH>/systems/cashew/qyglnm`.
|
||||
- `macro_visual_particle`: both metadata and particle prototypes are needed, e.g. `<gm.DATASET_PATH>/systems/stain/metadata.json` and `<gm.DATASET_PATH>/systems/stain/ahkjul`.
|
||||
|
||||
[synsets.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/synsets.csv)
|
||||
|
||||
- **Information**: specify the parent and abilities of a synset.
|
||||
- **When to modify**: add a new synset.
|
||||
- **Caveat**: feel free to create custom synsets if you can't find existing ones from WordNet; you also need to update the property parameter annotations in the `prop_param_annots` folder accordingly (see below).
|
||||
|
||||
[prop_param_annots/*](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots)
|
||||
|
||||
- **Information**: specify the hyperparameters of the abilities (or properties) of a synset.
|
||||
- **When to modify**: add a new synset that has the ability, or modify the hyperparameters of the ability.
|
||||
- **Caveat**: if a new object or substance synset is involved, you also need to modify `synsets.csv`, `category_mapping.csv` and `substance_hyperparams.csv` accordingly (see above).
|
||||
|
||||
[prop_param_annots/heatSource.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots/heatSource.csv)
|
||||
|
||||
- **Information**: specify the hyperparameters of the `heatSource` ability, e.g. whether the object needs to be toggled on or have its doors closed, whether it requires other objects to be inside it, and the heating temperature and rate.
|
||||
- **When to modify**: add a new synset that has the `heatSource` ability.
|
||||
|
||||
[prop_param_annots/coldSource.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots/coldSource.csv)
|
||||
|
||||
- **Information**: specify the hyperparameters of the `coldSource` ability, e.g. whether the object needs to be toggled on or have its doors closed, whether it requires other objects to be inside it, and the heating temperature and rate.
|
||||
- **When to modify**: add a new synset that has the `coldSource` ability.
|
||||
|
||||
[prop_param_annots/cookable.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots/cookable.csv)
|
||||
|
||||
- **Information**: specify the hyperparameters of the `cookable` ability, e.g. the temperature threshold, and the cooked version of the substance synset (if applicable).
|
||||
- **When to modify**: add a new synset that has the `cookable` ability.
|
||||
|
||||
[prop_param_annots/flammable.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots/flammable.csv)
|
||||
|
||||
- **Information**: specify the hyperparameters of the `flammable` ability, e.g. the ignition and fire temperature, the heating rate and distance threshold.
|
||||
- **When to modify**: add a new synset that has the `flammable` ability.
|
||||
|
||||
[prop_param_annots/particleApplier.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots/particleApplier.csv)
|
||||
|
||||
- **Information**: specify the hyperparameters of the `particleApplier` ability, e.g. modification method, conditions, and substance synset to be applied.
|
||||
- **When to modify**: add a new synset that has the `particleApplier` ability.
|
||||
|
||||
[prop_param_annots/particleSource.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots/particleSource.csv)
|
||||
|
||||
- **Information**: specify the hyperparameters of the `particleSource` ability, e.g. conditions, and substance synset to be applied.
|
||||
- **When to modify**: add a new synset that has the `particleSource` ability.
|
||||
|
||||
[prop_param_annots/particleRemover.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots/particleRemover.csv)
|
||||
|
||||
- **Information**: specify the hyperparameters of the `particleRemover` ability, e.g. conditions to remove white-listed substance synsets, and conditions to remove everything else.
|
||||
- **When to modify**: add a new synset that has the `particleRemover` ability.
|
||||
|
||||
[prop_param_annots/particleSink.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots/particleSink.csv)
|
||||
|
||||
- **Information**: specify the hyperparameters of the `particleSink` ability (deprecated).
|
||||
- **When to modify**: add a new synset that has the `particleSink` ability.
|
||||
|
||||
[prop_param_annots/diceable.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots/diceable.csv)
|
||||
|
||||
- **Information**: specify the hyperparameters of the `diceable` ability, e.g. the uncooked and cooked diced substance synsets.
|
||||
- **When to modify**: add a new synset that has the `diceable` ability.
|
||||
|
||||
[prop_param_annots/sliceable.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots/sliceable.csv)
|
||||
|
||||
- **Information**: specify the hyperparameters of the `sliceable` ability, e.g. the sliced halves' synset.
|
||||
- **When to modify**: add a new synset that has the `sliceable` ability.
|
||||
|
||||
[prop_param_annots/meltable.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/prop_param_annots/meltable.csv)
|
||||
|
||||
- **Information**: specify the hyperparameters of the `meltable` ability, e.g. the melted substance synset.
|
||||
- **When to modify**: add a new synset that has the `meltable` ability.
|
||||
|
||||
[transition_map/tm_raw_data/*](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/transition_map/tm_raw_data)
|
||||
|
||||
- **Information**: specify the transition rules for different types of transitions.
|
||||
- **Caveat**: if a new object or substance synset is involved, you also need to modify `synsets.csv`, `category_mapping.csv` and `substance_hyperparams.csv` accordingly (see above).
|
||||
|
||||
[transition_map/tm_raw_data/heat_cook.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/transition_map/tm_raw_data/heat_cook.csv)
|
||||
|
||||
- **Information**: specify the transition rules for `CookingObjectRule` and `CookingSystemRule`, i.e. the input synsets / states, the output synsets / states, the heat source, the container, and the timesteps to cook.
|
||||
- **When to modify**: add a new transition rule for cooking objects or systems.
|
||||
|
||||
[transition_map/tm_raw_data/mixing_stick.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/transition_map/tm_raw_data/mixing_stick.csv)
|
||||
|
||||
- **Information**: specify the transition rules for `MixingToolRule`, i.e. the input synsets, and the output synsets.
|
||||
- **When to modify**: add a new transition rule for mixing systems.
|
||||
|
||||
[transition_map/tm_raw_data/single_toggleable_machine.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/transition_map/tm_raw_data/single_toggleable_machine.csv)
|
||||
|
||||
- **Information**: specify the transition rules for `ToggleableMachineRule`, i.e. the input synsets / states, the output synsets / states, and the machine.
|
||||
- **When to modify**: add a new transition rule for toggleable machines.
|
||||
|
||||
[transition_map/tm_raw_data/washer.csv](https://github.com/StanfordVL/bddl/tree/master/bddl/generated_data/transition_map/tm_raw_data/washer.csv)
|
||||
|
||||
- **Information**: specify the transition rules for `WasherRule`, similar to `prop_param_annots/particleRemover.csv`, i.e. solvents required to remove white-listed substance synsets, and conditions to remove everything else.
|
||||
- **When to modify**: add a new transition rule for washing machines.
|
||||
|
||||
### Rebuild Knowledgebase
|
||||
|
||||
To rebuild the knowledgebase, you need to run the following command:
|
||||
|
||||
```bash
|
||||
cd bddl
|
||||
python data_generation/generate_datafiles.py
|
||||
```
|
||||
|
||||
To make sure the new knowledgebase is consistent with the task definitions, you should also run the following commands:
|
||||
|
||||
```bash
|
||||
python tests/bddl_tests.py batch_verify
|
||||
python tests/tm_tests.py
|
||||
```
|
||||
|
||||
If you encounter any errors during the rebuilding process, please read the error messages carefully and try to fix the issues accordingly.
|
|
@ -0,0 +1,217 @@
|
|||
---
|
||||
icon: material/silverware-fork-knife
|
||||
---
|
||||
|
||||
# 🍴 **BEHAVIOR Tasks**
|
||||
|
||||
## Overview
|
||||
|
||||
BEHAVIOR is short for Benchmark for Everyday Household Activities in Virtual, Interactive, and ecOlogical enviRonments.
|
||||
|
||||
[**`BehaviorTask`**](../reference/tasks/behavior_task.html) represents a family of 1000 long-horizon household activities that, based on our survey results, humans would benefit the most from robots' help with.
|
||||
|
||||
To browse and modify the definition of BEHAVIOR tasks, you might find it helpful to download a local editable copy of our `bddl` repo.
|
||||
```bash
|
||||
git clone https://github.com/StanfordVL/bddl.git
|
||||
```
|
||||
|
||||
Then you can install it in the same conda environment as OmniGibson.
|
||||
```bash
|
||||
conda activate omnigibson
|
||||
cd bddl
|
||||
pip install -e .
|
||||
```
|
||||
|
||||
You can verify the installation by running the following in a Python interpreter. The printed path should now point to the local `bddl` repo, instead of the PyPI one.
|
||||
```{.python .annotate}
|
||||
>>> import bddl; print(bddl)
|
||||
<module 'bddl' from '/path/to/BDDL/bddl/__init__.py'>
|
||||
```
|
||||
|
||||
## Browse 1000 BEHAVIOR Tasks
|
||||
All 1000 activities are defined in BDDL, a domain-specific language designed for BEHAVIOR.
|
||||
|
||||
You can find them in the [`bddl/activity_definitions`](https://github.com/StanfordVL/bddl/tree/master/bddl/activity_definitions) folder.
|
||||
|
||||
Alternatively, you can browse them on the [BEHAVIOR Knowledgebase](https://behavior.stanford.edu/knowledgebase/tasks).
|
||||
|
||||
Here is an example of a BEHAVIOR task definition, which consists of several components:
|
||||
|
||||
- **:objects**: task-relevant objects, where each line represents a [**WordNet**](https://wordnet.princeton.edu/) synset of the object. For example, `candle.n.01_1 candle.n.01_2 candle.n.01_3 candle.n.01_4 - candle.n.01` indicates that four objects that belong to the `candle.n.01` synset are needed for this task.
|
||||
- **:init**: initial conditions of the task, where each line represents a ground predicate that holds at the beginning of the task. For example, `(ontop candle.n.01_1 table.n.02_1)` indicates that the first candle is on top of the first table when the task begins.
|
||||
- **:goal**: goal conditions of the task, where each line represents a ground predicate and each block represents a non-ground predicate (e.g. `forall`, `forpairs`, `and`, `or`, etc) that should hold for the task to be considered solved. For example, `(inside ?candle.n.01 ?wicker_basket.n.01)` indicates that the candle should be inside the wicker basket at the end of the task.
|
||||
|
||||
??? code "assembling_gift_baskets.bddl"
|
||||
``` yaml linenums="1"
|
||||
(define (problem assembling_gift_baskets-0)
|
||||
(:domain omnigibson)
|
||||
|
||||
(:objects
|
||||
wicker_basket.n.01_1 wicker_basket.n.01_2 wicker_basket.n.01_3 wicker_basket.n.01_4 - wicker_basket.n.01
|
||||
floor.n.01_1 - floor.n.01
|
||||
candle.n.01_1 candle.n.01_2 candle.n.01_3 candle.n.01_4 - candle.n.01
|
||||
butter_cookie.n.01_1 butter_cookie.n.01_2 butter_cookie.n.01_3 butter_cookie.n.01_4 - butter_cookie.n.01
|
||||
swiss_cheese.n.01_1 swiss_cheese.n.01_2 swiss_cheese.n.01_3 swiss_cheese.n.01_4 - swiss_cheese.n.01
|
||||
bow.n.08_1 bow.n.08_2 bow.n.08_3 bow.n.08_4 - bow.n.08
|
||||
table.n.02_1 table.n.02_2 - table.n.02
|
||||
agent.n.01_1 - agent.n.01
|
||||
)
|
||||
|
||||
(:init
|
||||
(ontop wicker_basket.n.01_1 floor.n.01_1)
|
||||
(ontop wicker_basket.n.01_2 floor.n.01_1)
|
||||
(ontop wicker_basket.n.01_3 floor.n.01_1)
|
||||
(ontop wicker_basket.n.01_4 floor.n.01_1)
|
||||
(ontop candle.n.01_1 table.n.02_1)
|
||||
(ontop candle.n.01_2 table.n.02_1)
|
||||
(ontop candle.n.01_3 table.n.02_1)
|
||||
(ontop candle.n.01_4 table.n.02_1)
|
||||
(ontop butter_cookie.n.01_1 table.n.02_1)
|
||||
(ontop butter_cookie.n.01_2 table.n.02_1)
|
||||
(ontop butter_cookie.n.01_3 table.n.02_1)
|
||||
(ontop butter_cookie.n.01_4 table.n.02_1)
|
||||
(ontop swiss_cheese.n.01_1 table.n.02_2)
|
||||
(ontop swiss_cheese.n.01_2 table.n.02_2)
|
||||
(ontop swiss_cheese.n.01_3 table.n.02_2)
|
||||
(ontop swiss_cheese.n.01_4 table.n.02_2)
|
||||
(ontop bow.n.08_1 table.n.02_2)
|
||||
(ontop bow.n.08_2 table.n.02_2)
|
||||
(ontop bow.n.08_3 table.n.02_2)
|
||||
(ontop bow.n.08_4 table.n.02_2)
|
||||
(inroom floor.n.01_1 living_room)
|
||||
(inroom table.n.02_1 living_room)
|
||||
(inroom table.n.02_2 living_room)
|
||||
(ontop agent.n.01_1 floor.n.01_1)
|
||||
)
|
||||
|
||||
(:goal
|
||||
(and
|
||||
(forpairs
|
||||
(?wicker_basket.n.01 - wicker_basket.n.01)
|
||||
(?candle.n.01 - candle.n.01)
|
||||
(inside ?candle.n.01 ?wicker_basket.n.01)
|
||||
)
|
||||
(forpairs
|
||||
(?wicker_basket.n.01 - wicker_basket.n.01)
|
||||
(?swiss_cheese.n.01 - swiss_cheese.n.01)
|
||||
(inside ?swiss_cheese.n.01 ?wicker_basket.n.01)
|
||||
)
|
||||
(forpairs
|
||||
(?wicker_basket.n.01 - wicker_basket.n.01)
|
||||
(?butter_cookie.n.01 - butter_cookie.n.01)
|
||||
(inside ?butter_cookie.n.01 ?wicker_basket.n.01)
|
||||
)
|
||||
(forpairs
|
||||
(?wicker_basket.n.01 - wicker_basket.n.01)
|
||||
(?bow.n.08 - bow.n.08)
|
||||
(inside ?bow.n.08 ?wicker_basket.n.01)
|
||||
)
|
||||
)
|
||||
)
|
||||
)
|
||||
```
|
||||
|
||||
## Sample BEHAVIOR Tasks
|
||||
|
||||
Given a BEHAVIOR task definition, you can sample an instance of the task in OmniGibson by specifying the `activity_name` and `activity_definition_id` in the task configuration, which corresponds to `bddl/activity_definitions/<activity_name>/problem<activity_definition_id>.bddl`.
|
||||
|
||||
Here is an example of sampling a BEHAVIOR task in OmniGibson for [laying_wood_floors](https://github.com/StanfordVL/bddl/blob/master/bddl/activity_definitions/laying_wood_floors/problem0.bddl).
|
||||
```{.python .annotate}
|
||||
import omnigibson as og
|
||||
cfg = {
|
||||
"scene": {
|
||||
"type": "InteractiveTraversableScene",
|
||||
"scene_model": "Rs_int",
|
||||
},
|
||||
"robots": [
|
||||
{
|
||||
"type": "Fetch",
|
||||
"obs_modalities": ["rgb"],
|
||||
"default_arm_pose": "diagonal30",
|
||||
"default_reset_mode": "tuck",
|
||||
},
|
||||
],
|
||||
"task": {
|
||||
"type": "BehaviorTask",
|
||||
"activity_name": "laying_wood_floors",
|
||||
"activity_definition_id": 0,
|
||||
"activity_instance_id": 0,
|
||||
"online_object_sampling": True,
|
||||
},
|
||||
}
|
||||
env = og.Environment(configs=cfg)
|
||||
```
|
||||
|
||||
Each time you run the code above, a different instance of the task will be generated:
|
||||
|
||||
- A different object category might be sampled. For example, for a high-level synset like `fruit.n.01`, different types of fruits like apple, banana, and orange might be sampled.
|
||||
- A different object model might be sampled. For example, different models of the same category (e.g. apple) might be sampled.
|
||||
- A different object pose might be sampled. For example, the apple might be placed at a different location in the scene.
|
||||
|
||||
Sampling can also fail for a wide variety of reasons:
|
||||
|
||||
- Missing room types: a required room type doesn't exist in the current scene
|
||||
- No valid scene objects: cannot find appropriate scene objects (objects with the `inroom` predicate in the task definition), e.g. category mismatch.
|
||||
- Cannot sample initial conditions: cannot find an appropriate physical configuration that satisfies all the initial conditions in the task definition, e.g. size mismatch.
|
||||
- Many more...
|
||||
|
||||
Once a task is successfully sampled, you can save it to disk.
|
||||
```{.python .annotate}
|
||||
env.task.save_task()
|
||||
```
|
||||
The default path for saving the task is:
|
||||
```
|
||||
<gm.DATASET_PATH>/scenes/<scene_model>/json/<scene_model>_task_<activity_name>_<activity_definition_id>_<activity_instance_id>_template.json
|
||||
```
|
||||
|
||||
## Load Pre-sampled BEHAVIOR Tasks
|
||||
|
||||
Here is an example of loading a pre-sampled BEHAVIOR task instance in OmniGibson that you just saved.
|
||||
|
||||
```{.python .annotate}
|
||||
import omnigibson as og
|
||||
cfg = {
|
||||
"scene": {
|
||||
"type": "InteractiveTraversableScene",
|
||||
"scene_model": "Rs_int",
|
||||
},
|
||||
"robots": [
|
||||
{
|
||||
"type": "Fetch",
|
||||
"obs_modalities": ["rgb"],
|
||||
"default_arm_pose": "diagonal30",
|
||||
"default_reset_mode": "tuck",
|
||||
},
|
||||
],
|
||||
"task": {
|
||||
"type": "BehaviorTask",
|
||||
"activity_name": "laying_wood_floors",
|
||||
"activity_definition_id": 0,
|
||||
"activity_instance_id": 0,
|
||||
"online_object_sampling": False,
|
||||
},
|
||||
}
|
||||
env = og.Environment(configs=cfg)
|
||||
```
|
||||
|
||||
Currently, in our publicly available dataset, we have pre-sampled exactly **1** instance of each of the 1000 BEHAVIOR tasks.
|
||||
We recommend setting `online_object_sampling` to `False` to load the pre-sampled task instances from the dataset.
|
||||
You can run the following command to find out the path to the pre-sampled task instances.
|
||||
```bash
|
||||
ls -l <gm.DATASET_PATH>/scenes/*/json/*task*
|
||||
```
|
||||
|
||||
## (Advanced) Customize BEHAVIOR Tasks
|
||||
|
||||
The easiest way to create custom BEHAVIOR tasks is to add new task definitions to the `bddl` repo.
|
||||
|
||||
For instance, you can emulate the existing task definitions and create a new task definition at `bddl/activity_definitions/<my_new_task>/problem0.bddl`.
|
||||
|
||||
Then you can run the following tests to ensure that your new task is compatible with the rest of the BEHAVIOR Knowledgebase (e.g. you are using valid synsets with valid states).
|
||||
```bash
|
||||
cd bddl
|
||||
python tests/bddl_tests.py batch_verify
|
||||
python tests/tm_tests.py
|
||||
```
|
||||
|
||||
Finally, you can sample and load your custom BEHAVIOR tasks in OmniGibson as shown above.
|
|
@ -0,0 +1,404 @@
|
|||
---
|
||||
icon: material/robot-industrial
|
||||
---
|
||||
|
||||
# 🕹️ **Importing a Custom Robot**
|
||||
|
||||
While the OmniGibson assets include a set of commonly used robots, users might still want to import a robot model of their own. This tutorial walks through that process end to end.
|
||||
|
||||
## Preparation
|
||||
|
||||
In order to import a custom robot, you will need to first prepare your robot model file. For the next section, we will assume you have the URDF file for the robot ready, with all the corresponding meshes and textures. If your robot file is in another format (e.g. MJCF), please convert it to URDF format. If you already have the robot model USD file, feel free to skip the next section and move onto [Create the Robot Class](#create-the-robot-class).
|
||||
|
||||
Below, we will walk through each step for importing a new custom robot into **OmniGibson**. We use [Hello Robot](https://hello-robot.com/)'s [Stretch](https://hello-robot.com/stretch-3-product) robot as an example, taken directly from their [official repo](https://github.com/hello-robot/stretch_urdf).
|
||||
|
||||
## Convert from URDF to USD
|
||||
|
||||
In this section, we will be using the URDF Importer in native Isaac Sim to convert our robot URDF model into USD format. Before we get started, it is strongly recommended that you read through the official [URDF Importer Tutorial](https://docs.omniverse.nvidia.com/isaacsim/latest/features/environment_setup/ext_omni_isaac_urdf.html).
|
||||
|
||||
1. Create a directory with the name of the new robot under `<PATH_TO_OG_ASSET_DIR>/models`. This is where all of our robot models live. In our case, we created a directory named `stretch`.
|
||||
|
||||
2. Put your URDF file under this directory. Additional asset files such as STL, obj, mtl, and texture files should be placed under a `meshes` directory (see our `stretch` directory as an example).
|
||||
|
||||
3. Launch Isaac Sim from the Omniverse Launcher. In an empty stage, open the URDF Importer via `Isaac Utils` -> `Workflows` -> `URDF Importer`.
|
||||
|
||||
4. In the `Import Options`, uncheck `Fix Base Link` (we will have a parameter for this in OmniGibson). We also recommend that you check the `Self Collision` flag. You can leave the rest unchanged.
|
||||
|
||||
5. In the `Import` section, choose the URDF file that you placed in Step 2. You can leave the Output Directory as it is (same as source). Press `Import` to finish the conversion. If all goes well, you should see the imported robot model in the current stage. In our case, the Stretch robot model looks like the following:
|
||||
|
||||
|
||||
![Stretch Robot Import 0](../assets/tutorials/stretch-import-0.png)
|
||||
|
||||
|
||||
## Process USD Model
|
||||
|
||||
Now that we have the USD model, let's open it up in Isaac Sim and inspect it.
|
||||
|
||||
1. In Isaac Sim, begin by opening a new stage. Then, open the newly imported robot model USD file.
|
||||
|
||||
2. Make sure the default prim or root link of the robot has the `Articulation Root` property.
|
||||
|
||||
Select the default prim in the `Stage` panel at the top right, go to the `Property` section at the bottom right, and scroll down to the `Physics` section; you should see the `Articulation Root` section. Make sure `Articulation Enabled` is checked. If you don't see the section, scroll to the top of the `Property` section and select `Add` -> `Physics` -> `Articulation Root`.
|
||||
|
||||
![Stretch Robot Import 2](../assets/tutorials/stretch-import-2.png)
|
||||
|
||||
3. Make sure every link has a visual mesh and a collision mesh of the correct shape. You can visually inspect this by clicking on every link in the `Stage` panel and viewing the highlighted visual mesh in orange. To visualize all collision meshes, click on the Eye Icon at the top and select `Show By Type` -> `Physics` -> `Colliders` -> `All`. This will outline all the collision meshes in green. If any collision meshes do not look as expected, please inspect the original collision mesh referenced in the URDF. Note that Isaac Sim cannot import a pre-convex-decomposed collision mesh, and so such a collision mesh must be manually split and explicitly defined as individual sub-meshes in the URDF before importing. In our case, the Stretch robot model already comes with rough cubic approximations of its meshes.
|
||||
|
||||
![Stretch Robot Import 3](../assets/tutorials/stretch-import-3.png)
|
||||
|
||||
4. Make sure the physics is stable:
|
||||
|
||||
- Create a fixed joint in the base: select the base link of the robot, then right click -> `Create` -> `Physics` -> `Joint` -> `Fixed Joint`
|
||||
|
||||
- Click on the play button on the left toolbar; you should see the robot either standing still or falling down due to gravity, but there should be no abrupt movements.
|
||||
|
||||
- If you observe the robot moving strangely, this suggests that there is something wrong with the robot physics. Some common issues we've observed are:
|
||||
|
||||
- Self-collision is enabled, but the collision meshes are badly modeled and there are collisions between robot links.
|
||||
|
||||
- Some joints have bad damping/stiffness, max effort, friction, etc.
|
||||
|
||||
- One or more of the robot links have off-the-scale mass values.
|
||||
|
||||
At this point, there is unfortunately no better way than to manually go through each of the individual links and joints in the Stage and examine / tune the parameters to determine which aspect of the model is causing physics problems. If you experience significant difficulties, please post on our [Discord channel](https://discord.gg/bccR5vGFEx).
|
||||
|
||||
5. The robot additionally needs to be equipped with sensors, such as cameras and / or LIDARs. To add a sensor to the robot, select the link under which the sensor should be attached, and select the appropriate sensor:
|
||||
|
||||
- **LIDAR**: From the top taskbar, select `Create` -> `Isaac` -> `Sensors` -> `PhysX Lidar` -> `Rotating`
|
||||
- **Camera**: From the top taskbar, select `Create` -> `Camera`
|
||||
|
||||
You can rename the generated sensors as needed. Note that it may be necessary to rotate / offset the sensors so that the pose is unobstructed and the orientation is correct. This can be achieved by modifying the `Translate` and `Rotate` properties in the `Lidar` sensor, or the `Translate` and `Orient` properties in the `Camera` sensor. Note that the local camera convention is z-backwards, y-up. Additional default values can be specified in each sensor's respective properties, such as `Clipping Range` and `Focal Length` in the `Camera` sensor.
|
||||
|
||||
In our case, we created a LIDAR at the `laser` link (offset by 0.01m in the z direction), and cameras at the `camera_link` link (offset by 0.005m in the x direction and -90 degrees about the y-axis) and `gripper_camera_link` link (offset by 0.01m in the x direction and 90 / -90 degrees about the x-axis / y-axis).
|
||||
|
||||
![Stretch Robot Import 5a](../assets/tutorials/stretch-import-5a.png)
|
||||
![Stretch Robot Import 5b](../assets/tutorials/stretch-import-5b.png)
|
||||
![Stretch Robot Import 5c](../assets/tutorials/stretch-import-5c.png)
|
||||
|
||||
6. Finally, save your USD! Note that you need to remove the fixed joint created in Step 4 before saving.
|
||||
|
||||
## Create the Robot Class
|
||||
Now that we have the USD file for the robot, let's write our own robot class. For more information please refer to the [Robot module](../modules/robots.md).
|
||||
|
||||
1. Create a new python file named after your robot. In our case, our file exists under `omnigibson/robots` and is named `stretch.py`.
|
||||
|
||||
2. Determine which robot interfaces it should inherit. We currently support three modular interfaces that can be used together: [`LocomotionRobot`](../reference/robots/locomotion_robot.html) for robots whose bases can move (and a more specific [`TwoWheelRobot`](../reference/robots/two_wheel_robot.html) for locomotive robots that only have two wheels), [`ManipulationRobot`](../reference/robots/manipulation_robot.html) for robots equipped with one or more arms and grippers, and [`ActiveCameraRobot`](../reference/robots/active_camera_robot.html) for robots with a controllable head or camera mount. In our case, our robot is a mobile manipulator with a moveable camera mount, so our Python class inherits all three interfaces.
|
||||
|
||||
3. You must implement all required abstract properties defined by each respective inherited robot interface. In the simplest case, this usually just means defining relevant metadata from the original robot source files, such as relevant joint / link names and absolute paths to the corresponding robot URDF and USD files. Please see our annotated `stretch.py` module below, which serves as a good starting point that you can modify.
|
||||
|
||||
??? note "Optional properties"
|
||||
|
||||
We offer a more in-depth description of a couple of more advanced properties for ManipulationRobots below:
|
||||
|
||||
- `eef_usd_path`: if you want to teleoperate the robot using I/O devices other than the keyboard, this USD path is needed to load the visualizer for the robot end effector, which is used as a visual aid when teleoperating. To get such a file, duplicate the robot USD file and remove every prim except the robot end effector. You can then put the file path in the `eef_usd_path` attribute. Here is an example of the Franka Panda end effector USD:
|
||||
|
||||
![Franka Panda EEF](../assets/tutorials/franka_panda_eef.png)
|
||||
|
||||
- `assisted_grasp_start_points`, `assisted_grasp_end_points`: you need to implement this if you want to use sticky grasp/assisted grasp on the new robot.
|
||||
|
||||
These points are `omnigibson.robots.manipulation_robot.GraspingPoint` instances, each defined by an end effector link name and the position of the point relative to the pose of that link. When the gripper receives a close command and OmniGibson tries to perform assisted grasping, it casts rays from every start point to every end point; if any ray hits an object, that object is considered grasped by the robot.
|
||||
|
||||
In practice, for parallel grippers, the start and end points should naturally be uniformly sampled on the inner surfaces of the two fingers. You can refer to the Fetch class for an example of this case. For more complicated end effectors like dexterous hands, it's usually best practice to place start points at the palm center, the lower palm, and the thumb tip, and end points at the tips of every other finger. You can refer to the Franka class for examples of this case.
|
||||
|
||||
A good way to set these points is to load the robot into Isaac Sim and create a small sphere under the target link of the end effector. Then drag the sphere to the desired location (which should be just outside the mesh of the link), or set its position in the `Property` tab. Once you have the desired pose relative to the link, record the link name and position in the robot class.
|
||||
|
||||
4. If your robot is a manipulation robot, you must additionally define a description .yaml file in order to use our inverse kinematics solver for end-effector control. Our example description file is shown below for our Stretch robot, which you can modify as needed. Place the descriptor file under `<PATH_TO_OG_ASSET_DIR>/models/<YOUR_MODEL>`.
|
||||
|
||||
5. In order for **OmniGibson** to register your new robot class internally, you must import the robot class before running the simulation. If your python module exists under `omnigibson/robots`, you can simply add an additional import line in `omnigibson/robots/__init__.py`. Otherwise, in any end use-case script, you can simply import your robot class from your python module at the top of the file.
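As a quick sanity check, you can try instantiating your new robot class in an empty scene before moving on. The sketch below assumes the `Stretch` class shown in `stretch.py` below lives under `omnigibson/robots`; if it lives elsewhere, import it from your own module instead so that it is registered before the environment is created.

```{.python .annotate}
import omnigibson as og
from omnigibson.robots.stretch import Stretch  # importing the class registers it with OmniGibson

cfg = {
    "scene": {"type": "Scene"},  # empty scene
    "robots": [
        {
            "type": "Stretch",
            "obs_modalities": ["rgb"],
        },
    ],
}
env = og.Environment(configs=cfg)
env.step(env.action_space.sample())
```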
|
||||
|
||||
|
||||
??? code "stretch.py"
|
||||
``` python linenums="1"
|
||||
import os
|
||||
import numpy as np
|
||||
from omnigibson.macros import gm
|
||||
from omnigibson.robots.active_camera_robot import ActiveCameraRobot
|
||||
from omnigibson.robots.manipulation_robot import GraspingPoint, ManipulationRobot
|
||||
from omnigibson.robots.two_wheel_robot import TwoWheelRobot
|
||||
|
||||
|
||||
class Stretch(ManipulationRobot, TwoWheelRobot, ActiveCameraRobot):
|
||||
"""
|
||||
Stretch Robot from Hello Robotics
|
||||
Reference: https://hello-robot.com/stretch-3-product
|
||||
"""
|
||||
|
||||
@property
|
||||
def discrete_action_list(self):
|
||||
# Only need to define if supporting a discrete set of high-level actions
|
||||
raise NotImplementedError()
|
||||
|
||||
def _create_discrete_action_space(self):
|
||||
# Only need to define if @discrete_action_list is defined
|
||||
raise ValueError("Stretch does not support discrete actions!")
|
||||
|
||||
@property
|
||||
def controller_order(self):
|
||||
# Controller ordering. Usually determined by general robot kinematics chain
|
||||
# You can usually simply take a subset of these based on the type of robot interfaces inherited for your robot class
|
||||
return ["base", "camera", f"arm_{self.default_arm}", f"gripper_{self.default_arm}"]
|
||||
|
||||
@property
|
||||
def _default_controllers(self):
|
||||
# Define the default controllers that should be used if no explicit configuration is specified when your robot class is created
|
||||
|
||||
# Always call super first
|
||||
controllers = super()._default_controllers
|
||||
|
||||
# We use multi finger gripper, differential drive, and IK controllers as default
|
||||
controllers["base"] = "DifferentialDriveController"
|
||||
controllers["camera"] = "JointController"
|
||||
controllers[f"arm_{self.default_arm}"] = "JointController"
|
||||
controllers[f"gripper_{self.default_arm}"] = "MultiFingerGripperController"
|
||||
|
||||
return controllers
|
||||
|
||||
@property
|
||||
def _default_joint_pos(self):
|
||||
# Define the default joint positions for your robot
|
||||
|
||||
return np.array([0, 0, 0.5, 0, 0, 0, 0, 0, 0, 0.0, 0, 0, np.pi / 8, np.pi / 8])
|
||||
|
||||
@property
|
||||
def wheel_radius(self):
|
||||
# Only relevant for TwoWheelRobots. Radius of each wheel
|
||||
return 0.050
|
||||
|
||||
@property
|
||||
def wheel_axle_length(self):
|
||||
# Only relevant for TwoWheelRobots. Distance between the two wheels
|
||||
return 0.330
|
||||
|
||||
@property
|
||||
def finger_lengths(self):
|
||||
# Only relevant for ManipulationRobots. Length of fingers
|
||||
return {self.default_arm: 0.04}
|
||||
|
||||
@property
|
||||
def assisted_grasp_start_points(self):
|
||||
# Only relevant for ManipulationRobots. The start points for grasping if using assisted grasping
|
||||
return {
|
||||
self.default_arm: [
|
||||
GraspingPoint(link_name="r_gripper_finger_link", position=[0.025, -0.012, 0.0]),
|
||||
GraspingPoint(link_name="r_gripper_finger_link", position=[-0.025, -0.012, 0.0]),
|
||||
]
|
||||
}
|
||||
|
||||
@property
|
||||
def assisted_grasp_end_points(self):
|
||||
# Only relevant for ManipulationRobots. The end points for grasping if using assisted grasping
|
||||
return {
|
||||
self.default_arm: [
|
||||
GraspingPoint(link_name="l_gripper_finger_link", position=[0.025, 0.012, 0.0]),
|
||||
GraspingPoint(link_name="l_gripper_finger_link", position=[-0.025, 0.012, 0.0]),
|
||||
]
|
||||
}
|
||||
|
||||
@property
|
||||
def disabled_collision_pairs(self):
|
||||
# Pairs of robot links whose pairwise collisions should be ignored.
|
||||
# Useful for filtering out bad collision modeling in the native robot meshes
|
||||
return [
|
||||
["base_link", "caster_link"],
|
||||
["base_link", "link_aruco_left_base"],
|
||||
["base_link", "link_aruco_right_base"],
|
||||
["base_link", "base_imu"],
|
||||
["base_link", "laser"],
|
||||
["base_link", "link_left_wheel"],
|
||||
["base_link", "link_right_wheel"],
|
||||
["base_link", "link_mast"],
|
||||
["link_mast", "link_head"],
|
||||
["link_head", "link_head_pan"],
|
||||
["link_head_pan", "link_head_tilt"],
|
||||
["camera_link", "link_head_tilt"],
|
||||
["camera_link", "link_head_pan"],
|
||||
["link_head_nav_cam", "link_head_tilt"],
|
||||
["link_head_nav_cam", "link_head_pan"],
|
||||
["link_mast", "link_lift"],
|
||||
["link_lift", "link_aruco_shoulder"],
|
||||
["link_lift", "link_arm_l4"],
|
||||
["link_lift", "link_arm_l3"],
|
||||
["link_lift", "link_arm_l2"],
|
||||
["link_lift", "link_arm_l1"],
|
||||
["link_arm_l4", "link_arm_l3"],
|
||||
["link_arm_l4", "link_arm_l2"],
|
||||
["link_arm_l4", "link_arm_l1"],
|
||||
["link_arm_l3", "link_arm_l2"],
|
||||
["link_arm_l3", "link_arm_l1"],
|
||||
["link_arm_l2", "link_arm_l1"],
|
||||
["link_arm_l0", "link_arm_l1"],
|
||||
["link_arm_l0", "link_arm_l2"],
|
||||
["link_arm_l0", "link_arm_l3"],
|
||||
["link_arm_l0", "link_arm_l4"],
|
||||
["link_arm_l0", "link_arm_l1"],
|
||||
["link_arm_l0", "link_aruco_inner_wrist"],
|
||||
["link_arm_l0", "link_aruco_top_wrist"],
|
||||
["link_arm_l0", "link_wrist_yaw"],
|
||||
["link_arm_l0", "link_wrist_yaw_bottom"],
|
||||
["link_arm_l0", "link_wrist_pitch"],
|
||||
["link_wrist_yaw_bottom", "link_wrist_pitch"],
|
||||
["gripper_camera_link", "link_gripper_s3_body"],
|
||||
["link_gripper_s3_body", "link_aruco_d405"],
|
||||
["link_gripper_s3_body", "link_gripper_finger_left"],
|
||||
["link_gripper_finger_left", "link_aruco_fingertip_left"],
|
||||
["link_gripper_finger_left", "link_gripper_fingertip_left"],
|
||||
["link_gripper_s3_body", "link_gripper_finger_right"],
|
||||
["link_gripper_finger_right", "link_aruco_fingertip_right"],
|
||||
["link_gripper_finger_right", "link_gripper_fingertip_right"],
|
||||
["respeaker_base", "link_head"],
|
||||
["respeaker_base", "link_mast"],
|
||||
]
|
||||
|
||||
@property
|
||||
def base_joint_names(self):
|
||||
# Names of the joints that control the robot's base
|
||||
return ["joint_left_wheel", "joint_right_wheel"]
|
||||
|
||||
@property
|
||||
def camera_joint_names(self):
|
||||
# Names of the joints that control the robot's camera / head
|
||||
return ["joint_head_pan", "joint_head_tilt"]
|
||||
|
||||
@property
|
||||
def arm_link_names(self):
|
||||
# Names of the links that compose the robot's arm(s) (not including gripper(s))
|
||||
return {
|
||||
self.default_arm: [
|
||||
"link_mast",
|
||||
"link_lift",
|
||||
"link_arm_l4",
|
||||
"link_arm_l3",
|
||||
"link_arm_l2",
|
||||
"link_arm_l1",
|
||||
"link_arm_l0",
|
||||
"link_aruco_inner_wrist",
|
||||
"link_aruco_top_wrist",
|
||||
"link_wrist_yaw",
|
||||
"link_wrist_yaw_bottom",
|
||||
"link_wrist_pitch",
|
||||
"link_wrist_roll",
|
||||
]
|
||||
}
|
||||
|
||||
@property
|
||||
def arm_joint_names(self):
|
||||
# Names of the joints that control the robot's arm(s) (not including gripper(s))
|
||||
return {
|
||||
self.default_arm: [
|
||||
"joint_lift",
|
||||
"joint_arm_l3",
|
||||
"joint_arm_l2",
|
||||
"joint_arm_l1",
|
||||
"joint_arm_l0",
|
||||
"joint_wrist_yaw",
|
||||
"joint_wrist_pitch",
|
||||
"joint_wrist_roll",
|
||||
]
|
||||
}
|
||||
|
||||
@property
|
||||
def eef_link_names(self):
|
||||
# Name of the link that defines the per-arm end-effector frame
|
||||
return {self.default_arm: "link_grasp_center"}
|
||||
|
||||
@property
|
||||
def finger_link_names(self):
|
||||
# Names of the links that compose the robot's gripper(s)
|
||||
return {self.default_arm: ["link_gripper_finger_left", "link_gripper_finger_right", "link_gripper_fingertip_left", "link_gripper_fingertip_right"]}
|
||||
|
||||
@property
|
||||
def finger_joint_names(self):
|
||||
# Names of the joints that control the robot's gripper(s)
|
||||
return {self.default_arm: ["joint_gripper_finger_right", "joint_gripper_finger_left"]}
|
||||
|
||||
@property
|
||||
def usd_path(self):
|
||||
# Absolute path to the native robot USD file
|
||||
return os.path.join(gm.ASSET_PATH, "models/stretch/stretch/stretch.usd")
|
||||
|
||||
@property
|
||||
def urdf_path(self):
|
||||
# Absolute path to the native robot URDF file
|
||||
return os.path.join(gm.ASSET_PATH, "models/stretch/stretch.urdf")
|
||||
|
||||
@property
|
||||
def robot_arm_descriptor_yamls(self):
|
||||
# Only relevant for ManipulationRobots. Absolute path(s) to the per-arm descriptor files (see Step 4 below)
|
||||
return {self.default_arm: os.path.join(gm.ASSET_PATH, "models/stretch/stretch_descriptor.yaml")}
|
||||
```
|
||||
|
||||
??? code "stretch_descriptor.yaml"
|
||||
``` yaml linenums="1"
|
||||
|
||||
# The robot descriptor defines the generalized coordinates and how to map those
|
||||
# to the underlying URDF dofs.
|
||||
|
||||
api_version: 1.0
|
||||
|
||||
# Defines the generalized coordinates. Each generalized coordinate is assumed
|
||||
# to have an entry in the URDF, except when otherwise specified below under
|
||||
# cspace_urdf_bridge
|
||||
cspace:
|
||||
- joint_lift
|
||||
- joint_arm_l3
|
||||
- joint_arm_l2
|
||||
- joint_arm_l1
|
||||
- joint_arm_l0
|
||||
- joint_wrist_yaw
|
||||
- joint_wrist_pitch
|
||||
- joint_wrist_roll
|
||||
|
||||
root_link: base_link
|
||||
subtree_root_link: base_link
|
||||
|
||||
default_q: [
|
||||
# Original version
|
||||
# 0.00, 0.00, 0.00, -1.57, 0.00, 1.50, 0.75
|
||||
|
||||
# New config
|
||||
0, 0, 0, 0, 0, 0, 0, 0
|
||||
]
|
||||
|
||||
# Most dimensions of the cspace have a direct corresponding element
|
||||
# in the URDF. This list of rules defines how unspecified coordinates
|
||||
# should be extracted.
|
||||
cspace_to_urdf_rules: []
|
||||
|
||||
active_task_spaces:
|
||||
- base_link
|
||||
- lift_link
|
||||
- link_mast
|
||||
- link_lift
|
||||
- link_arm_l4
|
||||
- link_arm_l3
|
||||
- link_arm_l2
|
||||
- link_arm_l1
|
||||
- link_arm_l0
|
||||
- link_aruco_inner_wrist
|
||||
- link_aruco_top_wrist
|
||||
- link_wrist_yaw
|
||||
- link_wrist_yaw_bottom
|
||||
- link_wrist_pitch
|
||||
- link_wrist_roll
|
||||
- link_gripper_s3_body
|
||||
- gripper_camera_link
|
||||
- link_aruco_d405
|
||||
- link_gripper_finger_left
|
||||
- link_aruco_fingertip_left
|
||||
- link_gripper_fingertip_left
|
||||
- link_gripper_finger_right
|
||||
- link_aruco_fingertip_right
|
||||
- link_gripper_fingertip_right
|
||||
- link_grasp_center
|
||||
|
||||
composite_task_spaces: []
|
||||
```
|
||||
|
||||
|
||||
## Deploy Your Robot!
|
||||
|
||||
You can now try testing your custom robot! Import and control the robot by launching `python omnigibson/examples/robot/robot_control_examples.py`! Try different controller options and teleoperate the robot with your keyboard. If you observe poor joint behavior, you can inspect and tune relevant joint parameters as needed. This test also exposes other bugs that may have occurred along the way, such as missing / bad joint limits, collisions, etc. Please refer to the Franka or Fetch robots as a baseline for a common set of joint parameters that work well. This is what our newly imported Stretch robot looks like in action:
|
||||
|
||||
![Stretch Import Test](../assets/tutorials/stretch-import-test.png)
|
||||
|
||||
|
|
@ -1,8 +1,8 @@
|
|||
---
|
||||
icon: octicons/rocket-16
|
||||
icon: material/controller
|
||||
---
|
||||
|
||||
# 🕹️ **Collecting Demonstrations**
|
||||
# 🎮 **Collecting Demonstrations**
|
||||
|
||||
|
||||
## Devices
|
||||
|
@ -16,36 +16,21 @@ We assume that we already have the scene and task setup. To initialize a teleope
|
|||
|
||||
After creating the config, simply instantiate the teleoperation system.
|
||||
|
||||
```
|
||||
```{.python .annotate}
|
||||
teleop_sys = TeleopSystem(config=teleop_config, robot=robot, show_control_marker=True)
|
||||
```
|
||||
|
||||
`TeleopSystem` takes in the config dictionary we just created, as well as the robot instance we want to teleoperate. It also takes `show_control_marker`, which, if set to `True`, will create a green visual marker indicating the desired pose that the user wants the robot end effector to reach.
|
||||
|
||||
After the `TeleopSystem` is created, start by calling
|
||||
```
|
||||
```{.python .annotate}
|
||||
teleop_sys.start()
|
||||
```
|
||||
|
||||
Then, within the simulation loop, simply call
|
||||
|
||||
```
|
||||
```{.python .annotate}
|
||||
action = teleop_sys.get_action(teleop_sys.get_obs())
|
||||
```
|
||||
|
||||
to get the action based on the user teleoperation input, and pass the action to the `env.step` function.
|
||||
|
||||
## (Optional) Saving and Loading Simulation State
|
||||
You can save the current state of the simulator to a json file by calling `save`:
|
||||
|
||||
```
|
||||
og.sim.save([JSON_PATH])
|
||||
```
|
||||
|
||||
To restore any saved state, simply call `restore`
|
||||
|
||||
```
|
||||
og.sim.restore([JSON_PATH])
|
||||
```
|
||||
|
||||
Alternatively, if you just want to save all the scene and object info at the current timestep, you can also call `self.scene.dump_state(serialized=True)`, which will return a numpy array containing all the relevant information. You can then stack the arrays together to get the full trajectory of states.
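For example, here is a minimal sketch of recording such a trajectory during teleoperation, assuming `env` and `teleop_sys` have been set up as above and the scene is accessible via `env.scene`.

```{.python .annotate}
import numpy as np

states = []
for _ in range(1000):  # number of recorded steps is arbitrary here
    action = teleop_sys.get_action(teleop_sys.get_obs())
    env.step(action)
    # dump_state(serialized=True) returns a flat numpy array describing the scene state
    states.append(env.scene.dump_state(serialized=True))

# Stack into a (num_steps, state_dim) array representing the full state trajectory
trajectory = np.stack(states)
```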
|
||||
|
|
|
@ -0,0 +1,52 @@
|
|||
---
|
||||
icon: material/zip-disk
|
||||
---
|
||||
|
||||
# 💾 **Saving and Loading Simulation State**
|
||||
|
||||
## Memory
|
||||
|
||||
You can dump the current simulation state to memory by calling `dump_state`:
|
||||
|
||||
```{.python .annotate}
|
||||
state_dict = og.sim.dump_state(serialized=False)
|
||||
```
|
||||
This will return a dictionary containing all the information about the current state of the simulator.
|
||||
|
||||
If you want to save the state to a flat array, you can call `dump_state` with `serialized=True`:
|
||||
```{.python .annotate}
|
||||
state_flat_array = og.sim.dump_state(serialized=True)
|
||||
```
|
||||
|
||||
You can then load the state back into the simulator by calling `load_state`:
|
||||
```{.python .annotate}
|
||||
og.sim.load_state(state_dict, serialized=False)
|
||||
```
|
||||
Or
|
||||
```{.python .annotate}
|
||||
og.sim.load_state(state_flat_array, serialized=True)
|
||||
```
|
||||
|
||||
??? warning annotate "`load_state` assumes object permanence!"
|
||||
`load_state` assumes that the objects in the state match the objects in the current simulator. Only the state of the objects will be restored, not the objects themselves, i.e. no objects will be added or removed.
|
||||
If there is an object in the state that is not in the simulator, it will be ignored. If there is an object in the simulator that is not in the state, it will be left unchanged.
|
||||
|
||||
## Disk
|
||||
Alternatively, you can save the state to disk by calling `save`:
|
||||
|
||||
```{.python .annotate}
|
||||
og.sim.save(["path/to/scene_0.json"])
|
||||
```
|
||||
|
||||
The number of json files should match the number of scenes in the simulator (by default, 1).
|
||||
|
||||
You can then load the state back into the simulator by calling `og.clear()` first and then `restore`:
|
||||
|
||||
```{.python .annotate}
|
||||
og.clear()
|
||||
og.sim.restore(["path/to/scene_0.json"])
|
||||
```
|
||||
|
||||
??? warning annotate "`restore` assumes an empty simulator!"
|
||||
Always remember to call `og.clear()`, which clears the entire simulator, before calling `restore`.
|
||||
Otherwise, the saved scenes will be appended to the existing scenes of the current simulator, which may lead to unexpected behavior.
|
|
@ -0,0 +1,29 @@
|
|||
---
|
||||
icon: octicons/rocket-16
|
||||
---
|
||||
|
||||
# 🕹️ **Speed Optimization**
|
||||
|
||||
This is an ad hoc page of tips & tricks for speed optimization. The page is currently under construction.
|
||||
|
||||
A lot of factors could affect the speed of OmniGibson. Here are a few of them:
|
||||
|
||||
|
||||
## Macros
|
||||
|
||||
`macros.py` contains some macros that affect the overall configuration of OmniGibson. Some of them have an effect on the performance of OmniGibson, as listed below (see the sketch after this list for how they are typically set):
|
||||
|
||||
1. `ENABLE_HQ_RENDERING`: While it is set to False by default, setting it to True will give better rendering quality as well as other more advanced rendering features (e.g. isosurface for fluids), but at the cost of lower performance.
|
||||
|
||||
2. `USE_GPU_DYNAMICS`: setting it to True is required for more advanced features like particles and fluids, but it will lower the performance of OmniGibson.
|
||||
|
||||
3. `RENDER_VIEWER_CAMERA`: the viewer camera is the camera that renders the default viewport in OmniGibson. If you don't need to view the entire scene (e.g. during training), you can set this to False and it will boost OmniGibson's performance.
|
||||
|
||||
4. `ENABLE_FLATCACHE`: setting it to True will boost OmniGibson performance.
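Here is a minimal sketch of how these macros are typically set, before any environment is created; the values below simply favor speed and are only illustrative.

```{.python .annotate}
import omnigibson as og
from omnigibson.macros import gm

# Set macros before creating the environment so they take effect
gm.ENABLE_HQ_RENDERING = False   # keep the cheaper default rendering path
gm.USE_GPU_DYNAMICS = False      # only enable if you need particles / fluids
gm.RENDER_VIEWER_CAMERA = False  # skip rendering the default viewport
gm.ENABLE_FLATCACHE = True       # boosts performance

env = og.Environment(configs=cfg)  # cfg defined as in the other tutorials
```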
|
||||
|
||||
|
||||
## Miscellaneous
|
||||
|
||||
1. Assisted and sticky grasping are slower than physical grasping because they need to perform ray casting.
|
||||
|
||||
2. Setting a high `physics_frequency` relative to `action_frequency` will drag down OmniGibson's performance, since more physics sub-steps are taken per environment step (see the sketch below for where these frequencies are typically set).
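Here is a minimal sketch of where these frequencies are typically specified; the `env` config keys are assumptions based on the standard OmniGibson environment config, and the values are only illustrative.

```{.python .annotate}
cfg = {
    # Assumed keys of the standard `env` config section
    "env": {
        "action_frequency": 30,    # environment (action) steps per second
        "physics_frequency": 120,  # physics steps per second -> 4 physics sub-steps per action
    },
    "scene": {"type": "Scene"},
    "robots": [{"type": "Fetch", "obs_modalities": ["rgb"]}],
}
```

Keeping `physics_frequency` a small multiple of `action_frequency` avoids spending many physics sub-steps per environment step.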
|
17
mkdocs.yml
|
@ -2,6 +2,7 @@ yaml-language-server: $schema=https://squidfunk.github.io/mkdocs-material/schema
|
|||
|
||||
site_name: OmniGibson Documentation
|
||||
repo_name: StanfordVL/OmniGibson
|
||||
site_url: https://behavior.stanford.edu/omnigibson
|
||||
repo_url: https://github.com/StanfordVL/OmniGibson
|
||||
theme:
|
||||
name: material
|
||||
|
@ -12,7 +13,9 @@ theme:
|
|||
|
||||
features:
|
||||
- navigation.tracking
|
||||
- navigation.tabs
|
||||
- navigation.instant
|
||||
- navigation.expand
|
||||
- toc.integrate
|
||||
- content.code.copy
|
||||
|
||||
extra:
|
||||
|
@ -85,8 +88,9 @@ nav:
|
|||
- Getting Started:
|
||||
- Installation: getting_started/installation.md
|
||||
- Quickstart: getting_started/quickstart.md
|
||||
- Important Concepts: getting_started/important_concepts.md
|
||||
- Examples: getting_started/examples.md
|
||||
- Modules:
|
||||
- OmniGibson Modules:
|
||||
- Overview: modules/overview.md
|
||||
- Prims: modules/prims.md
|
||||
- Objects: modules/objects.md
|
||||
|
@ -98,20 +102,27 @@ nav:
|
|||
- Scenes: modules/scenes.md
|
||||
- Transition Rules: modules/transition_rules.md
|
||||
- Simulator: modules/simulator.md
|
||||
- Tasks: modules/tasks.md
|
||||
- Environments: modules/environments.md
|
||||
- Vector Environments: modules/vector_environments.md
|
||||
- Under the Hood - Isaac Sim: modules/under_the_hood.md
|
||||
- Tutorials:
|
||||
- Demo Collection: tutorials/demo_collection.md
|
||||
- Using Macros: tutorials/using_macros.md
|
||||
- Customizing Robots: tutorials/customizing_robots.md
|
||||
- Running on a Server: tutorials/running_on_a_server.md
|
||||
- API Reference: reference/*
|
||||
- Saving and Loading Simulation State: tutorials/save_load.md
|
||||
- BEHAVIOR Tasks: tutorials/behavior_tasks.md
|
||||
- BEHAVIOR Knowledgebase: tutorials/behavior_knowledgebase.md
|
||||
- Custom Robot Import: tutorials/custom_robot_import.md
|
||||
- Speed Optimization: tutorials/speed_optimization.md
|
||||
- Miscellaneous:
|
||||
- FAQ: miscellaneous/faq.md
|
||||
- Known Issues & Troubleshooting: miscellaneous/known_issues.md
|
||||
- Contributing: miscellaneous/contributing.md
|
||||
- Changelog: https://github.com/StanfordVL/OmniGibson/releases
|
||||
- Contact Us: miscellaneous/contact.md
|
||||
- API Reference: reference/*
|
||||
|
||||
extra:
|
||||
analytics:
|
||||
|
|
|
@ -412,7 +412,10 @@ class Environment(gym.Env, GymObservable, Recreatable):
|
|||
|
||||
def post_play_load(self):
|
||||
"""Complete loading tasks that require the simulator to be playing."""
|
||||
# Save the state
|
||||
# Reset the scene first to potentially recover the state after load_task (e.g. BehaviorTask sampling)
|
||||
self.scene.reset()
|
||||
|
||||
# Save the state for objects from load_robots / load_objects / load_task
|
||||
self.scene.update_initial_state()
|
||||
|
||||
# Load the obs / action spaces
|
||||
|
|
|
@ -63,8 +63,12 @@ def main(random_selection=False, headless=False, short_exec=False):
|
|||
og.sim.save(json_paths=[save_path])
|
||||
|
||||
print("Re-loading scene...")
|
||||
og.clear()
|
||||
og.sim.restore(json_paths=[save_path])
|
||||
|
||||
# env is no longer valid after og.clear()
|
||||
del env
|
||||
|
||||
# Take a sim step and play
|
||||
og.sim.step()
|
||||
og.sim.play()
|
||||
|
@ -79,7 +83,7 @@ def main(random_selection=False, headless=False, short_exec=False):
|
|||
# Register callback so user knows to press space once they're done manipulating the scene
|
||||
KeyboardEventHandler.add_keyboard_callback(lazy.carb.input.KeyboardInput.Z, complete_loop)
|
||||
while not completed:
|
||||
env.step(np.zeros(env.robots[0].action_dim))
|
||||
og.sim.step()
|
||||
|
||||
# Shutdown omnigibson at the end
|
||||
og.shutdown()
|
||||
|
|
|
@@ -89,8 +89,6 @@ class DatasetObject(USDObject):
            kwargs (dict): Additional keyword arguments that are used for other super() calls from subclasses, allowing
                for flexible compositions of various object subclasses (e.g.: Robot is USDObject + ControllableObject).
        """
        # TODO(parallel-hang): Pass _xform_props_pre_loaded = True to object_base to entityprim, make sure entityprim passes it into rigidprim and jointprim initializers

        # Store variables
        if isinstance(in_rooms, str):
            assert "," not in in_rooms

@@ -103,6 +101,8 @@ class DatasetObject(USDObject):
        # Add info to load config
        load_config = dict() if load_config is None else load_config
        load_config["bounding_box"] = bounding_box
        # All DatasetObjects should have xform properties pre-loaded
        load_config["xform_props_pre_loaded"] = True

        # Infer the correct usd path to use
        if model is None:
@@ -249,6 +249,7 @@ class EntityPrim(XFormPrim):
                ),
                "belongs_to_articulation": self._articulation_view is not None and link_name != self._root_link_name,
                "remesh": self._load_config.get("remesh", True),
                "xform_props_pre_loaded": self._load_config.get("xform_props_pre_loaded", False),
            }
            self._links[link_name] = link_cls(
                relative_prim_path=absolute_prim_path_to_scene_relative(self.scene, prim.GetPrimPath().__str__()),
@@ -52,10 +52,7 @@ class BasePrim(Serializable, Recreatable, ABC):
        # that get created during the _load phase of this class, but sometimes we create prims using
        # alternative methods and then create this class - in that case too we need to make sure we
        # add the right xform properties, so callers will just pass in the created manually flag.
        self._xform_props_pre_loaded = (
            "xform_props_pre_loaded" in self._load_config and self._load_config["xform_props_pre_loaded"]
        )

        self._xform_props_pre_loaded = self._load_config.get("xform_props_pre_loaded", False)
        # Run super init
        super().__init__()
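Several of the hunks here (DatasetObject, EntityPrim, BasePrim, and BaseRobot below) thread a single `xform_props_pre_loaded` flag through `load_config`. The sketch below isolates that pattern; `MyPrim` is a hypothetical stand-in, and only the flag name and the `dict.get()` idiom come from the diff.

```python
# Hypothetical stand-in class illustrating the load_config flag pattern;
# only the "xform_props_pre_loaded" key and the dict.get() idiom are from the diff.
class MyPrim:
    def __init__(self, load_config=None):
        self._load_config = dict() if load_config is None else load_config
        # Default to False unless the caller marks the xform properties as pre-loaded
        self._xform_props_pre_loaded = self._load_config.get("xform_props_pre_loaded", False)


# Callers that create the underlying USD prim themselves opt in via load_config:
prim = MyPrim(load_config={"xform_props_pre_loaded": True})
assert prim._xform_props_pre_loaded
```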
@@ -164,17 +164,16 @@ class FrankaPanda(ManipulationRobot):
                ([0.86, -0.27, -0.68, -1.52, -0.18, 1.29, 1.72], np.zeros(12))
            )
            self._teleop_rotation_offset = np.array([0, 0, 0.707, 0.707])
            # TODO: add ag support for inspire hand
            self._ag_start_points = [
                # GraspingPoint(link_name=f"base_link", position=[0, -0.025, 0.035]),
                # GraspingPoint(link_name=f"base_link", position=[0, 0.03, 0.035]),
                # GraspingPoint(link_name=f"link14", position=[-0.0115, -0.07, -0.015]),
                GraspingPoint(link_name=f"base_link", position=[-0.025, -0.07, 0.012]),
                GraspingPoint(link_name=f"base_link", position=[-0.015, -0.11, 0.012]),
                GraspingPoint(link_name=f"link14", position=[-0.01, 0.015, 0.004]),
            ]
            self._ag_end_points = [
                # GraspingPoint(link_name=f"link22", position=[-0.0115, -0.06, 0.015]),
                # GraspingPoint(link_name=f"link32", position=[-0.0115, -0.06, 0.015]),
                # GraspingPoint(link_name=f"link42", position=[-0.0115, -0.06, 0.015]),
                # GraspingPoint(link_name=f"link52", position=[-0.0115, -0.06, 0.015]),
                GraspingPoint(link_name=f"link22", position=[0.006, 0.04, 0.003]),
                GraspingPoint(link_name=f"link32", position=[0.006, 0.045, 0.003]),
                GraspingPoint(link_name=f"link42", position=[0.006, 0.04, 0.003]),
                GraspingPoint(link_name=f"link52", position=[0.006, 0.04, 0.003]),
            ]
        else:
            raise ValueError(f"End effector {end_effector} not supported for FrankaPanda")
@@ -141,6 +141,10 @@ class BaseRobot(USDObject, ControllableObject, GymObservable):
            scale is None or isinstance(scale, int) or isinstance(scale, float) or np.all(scale == scale[0])
        ), f"Robot scale must be uniform! Got: {scale}"

        # All BaseRobots should have xform properties pre-loaded
        load_config = {} if load_config is None else load_config
        load_config["xform_props_pre_loaded"] = True

        # Run super init
        super().__init__(
            relative_prim_path=relative_prim_path,

@@ -182,7 +186,6 @@ class BaseRobot(USDObject, ControllableObject, GymObservable):
            visible=False,
            fixed_base=True,
            visual_only=True,
            load_config={"created_manually": True},
        )
        self._dummy.load(self.scene)
@@ -91,30 +91,26 @@ class Scene(Serializable, Registerable, Recreatable, ABC):
        super().__init__()

        # Prepare the initialization dicts
        self._init_info = {}
        self._init_objs = {}
        self._init_state = {}
        self._init_systems = []
        self._task_metadata = None
        self._init_objs = {}

        # If we have any scene file specified, use it to create the objects within it
        if self.scene_file is not None:
            # Grab objects info from the scene file
            with open(self.scene_file, "r") as f:
                scene_info = json.load(f)
            self._init_info = scene_info["objects_info"]["init_info"]
            init_info = scene_info["objects_info"]["init_info"]
            self._init_state = scene_info["state"]["object_registry"]
            self._init_systems = list(scene_info["state"]["system_registry"].keys())
            self._task_metadata = {}
            if "metadata" in scene_info and "task" in scene_info["metadata"]:
                self._task_metadata = scene_info["metadata"]["task"]
            task_metadata = (
                scene_info["metadata"]["task"] if "metadata" in scene_info and "task" in scene_info["metadata"] else {}
            )

            # Iterate over all scene info, and instantiate object classes linked to the objects found on the stage
            # accordingly
            self._init_objs = {}
            for obj_name, obj_info in self._init_info.items():
            # Iterate over all scene info, and instantiate object classes linked to the objects found on the stage accordingly
            for obj_name, obj_info in init_info.items():
                # Check whether we should load the object or not
                if not self._should_load_object(obj_info=obj_info, task_metadata=self._task_metadata):
                if not self._should_load_object(obj_info=obj_info, task_metadata=task_metadata):
                    continue
                # Create object class instance
                obj = create_object_from_init_info(obj_info)

@@ -290,7 +286,6 @@ class Scene(Serializable, Registerable, Recreatable, ABC):
        self._scene_prim = XFormPrim(
            relative_prim_path=scene_relative_path,
            name=f"scene_{self.idx}",
            load_config={"created_manually": True},
        )
        self._scene_prim.load(None)
        if self.scene_file is not None:
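The Scene.__init__ hunk above keeps the scene-file parsing logic but moves `init_info` and `task_metadata` into local variables instead of instance attributes. A standalone sketch of that parsing step is below, using the same JSON keys as the hunk; the file path is an illustrative placeholder.

```python
import json

# Sketch of the scene-file parsing performed in the refactored Scene.__init__;
# the JSON keys mirror the hunk above, the path is an illustrative placeholder.
with open("/path/to/scene_file.json", "r") as f:
    scene_info = json.load(f)

init_info = scene_info["objects_info"]["init_info"]
init_state = scene_info["state"]["object_registry"]
init_systems = list(scene_info["state"]["system_registry"].keys())
task_metadata = (
    scene_info["metadata"]["task"] if "metadata" in scene_info and "task" in scene_info["metadata"] else {}
)

# Each entry of init_info describes one object to instantiate on the stage
for obj_name, obj_info in init_info.items():
    print(obj_name)
```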
@@ -546,7 +546,6 @@ def launch_simulator(*args, **kwargs):
        self._floor_plane = XFormPrim(
            relative_prim_path=ground_plane_relative_path,
            name=plane.name,
            load_config={"created_manually": True},
        )
        self._floor_plane.load(None)

@@ -1340,6 +1339,10 @@ def launch_simulator(*args, **kwargs):
            json_paths (List[str]): Full paths of JSON file to load, which contains information
                to recreate a scene.
        """
        if len(self.scenes) > 0:
            log.error("There are already scenes loaded. Please call og.clear() to relaunch the simulator first.")
            return

        for json_path in json_paths:
            if not json_path.endswith(".json"):
                log.error(f"You have to define the full json_path to load from. Got: {json_path}")
@@ -1440,10 +1440,7 @@ class FluidSystem(MicroPhysicalParticleSystem):
        prototype = lazy.pxr.UsdGeom.Sphere.Define(og.sim.stage, prototype_prim_path)
        prototype.CreateRadiusAttr().Set(self.particle_radius)
        relative_prototype_prim_path = absolute_prim_path_to_scene_relative(self._scene, prototype_prim_path)
        load_config = {"created_manually": True}
        prototype = VisualGeomPrim(
            relative_prim_path=relative_prototype_prim_path, name=f"{self.name}_prototype0", load_config=load_config
        )
        prototype = VisualGeomPrim(relative_prim_path=relative_prototype_prim_path, name=f"{self.name}_prototype0")
        prototype.load(self._scene)
        prototype.visible = False
        lazy.omni.isaac.core.utils.semantics.add_update_semantics(

@@ -1565,10 +1562,7 @@ class GranularSystem(MicroPhysicalParticleSystem):

        # Wrap it with VisualGeomPrim with the correct scale
        relative_prototype_path = absolute_prim_path_to_scene_relative(self._scene, prototype_path)
        load_config = {"created_manually": True}
        prototype = VisualGeomPrim(
            relative_prim_path=relative_prototype_path, name=prototype_path, load_config=load_config
        )
        prototype = VisualGeomPrim(relative_prim_path=relative_prototype_path, name=prototype_path)
        prototype.load(self._scene)
        prototype.scale *= self.max_scale
        prototype.visible = False
@@ -523,7 +523,7 @@ class BehaviorTask(BaseTask):

        Args:
            path (None or str): If specified, absolute fpath to the desired path to write the .json. Default is
                <gm.DATASET_PATH/scenes/<SCENE_MODEL>/json/...>
                <gm.DATASET_PATH>/scenes/<SCENE_MODEL>/json/...>
            override (bool): Whether to override any files already found at the path to write the task .json
        """
        if path is None:
@@ -5,7 +5,7 @@ from omnigibson.termination_conditions.termination_condition_base import SuccessCondition

class ReachingGoal(SuccessCondition):
    """
    ReachingGoal (success condition) used for reaching-type tasks
    Episode terminates if reaching goal is reached within @distance_tol by the @robot_idn robot's base
    Episode terminates if reaching goal is reached within @distance_tol by the @robot_idn robot's end effector

    Args:
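The docstring fix above clarifies that success is measured at the robot's end effector rather than its base. Below is a minimal sketch of that check, under the assumption that the goal position and tolerance are supplied by the task; the helper is hypothetical, not the class's actual implementation.

```python
import numpy as np

def reaching_goal_reached(eef_position, goal_position, distance_tol=0.1):
    """Hypothetical helper: True if the end effector is within distance_tol of the goal."""
    return np.linalg.norm(np.asarray(eef_position) - np.asarray(goal_position)) < distance_tol

# Example: end effector 5 cm from the goal, within a 10 cm tolerance
print(reaching_goal_reached([0.50, 0.00, 0.80], [0.55, 0.00, 0.80]))  # True
```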