Replace HTML links with MD links

Cem Gökmen 2024-07-24 13:36:15 -07:00
parent 6926084db6
commit d12e8477ad
16 changed files with 114 additions and 114 deletions

View File

@ -8,7 +8,7 @@ icon: material/silverware-fork-knife
BEHAVIOR is short for Benchmark for Everyday Household Activities in Virtual, Interactive, and ecOlogical enviRonments.
[**`BehaviorTask`**](../reference/tasks/behavior_task.html) represents a family of 1000 long-horizon household activities for which, according to our survey results, humans would benefit the most from robots' help.
[**`BehaviorTask`**](../reference/tasks/behavior_task.md) represents a family of 1000 long-horizon household activities for which, according to our survey results, humans would benefit the most from robots' help.
To browse and modify the definition of BEHAVIOR tasks, you might find it helpful to download a local editable copy of our `bddl` repo.
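For orientation, here is a minimal, hedged sketch of loading a `BehaviorTask` through the standard environment config; the task kwargs (`activity_name`, `online_object_sampling`) and the example activity name are assumptions drawn from typical OmniGibson examples, so verify them against your installed version.
```python
# A minimal sketch, assuming typical OmniGibson config keys (not verified
# against every version).
import omnigibson as og

cfg = {
    "scene": {"type": "InteractiveTraversableScene", "scene_model": "Rs_int"},
    "robots": [{"type": "Fetch", "obs_modalities": ["rgb"]}],
    "task": {
        "type": "BehaviorTask",
        "activity_name": "laying_wood_floors",  # one of the 1000 activities
        "online_object_sampling": False,        # load a pre-sampled instance
    },
}
env = og.Environment(configs=cfg)
```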
```{.python .annotate}

View File

@ -67,7 +67,7 @@ These are general-purpose controllers that are agnostic to a robot's morphology,
<table markdown="span">
<tr>
<td valign="top">
[**`JointController`**](../reference/controllers/joint_controller.html)<br><br>
[**`JointController`**](../reference/controllers/joint_controller.md)<br><br>
Directly controls individual joints. Outputs low-level joint position or velocity controls if `use_impedance=False`; otherwise, it internally compensates the desired gains with the robot's mass matrix and outputs joint effort controls.<br><br>
<ul>
<li>_Command Dim_: n_joints</li>
@ -79,7 +79,7 @@ These are general-purpose controllers that are agnostic to a robot's morphology,
</tr>
<tr>
<td valign="top">
[**`NullJointController`**](../reference/controllers/null_joint_controller.html)<br><br>
[**`NullJointController`**](../reference/controllers/null_joint_controller.md)<br><br>
Directly controls individual joints via an internally stored `default_command`. Inputted commands will be ignored unless `default_command` is updated.<br><br>
<ul>
<li>_Command Dim_: n_joints</li>
@ -97,7 +97,7 @@ These are controllers specifically meant for robots with navigation capabilities
<table markdown="span" width="100%">
<tr>
<td valign="top" width="100%">
[**`DifferentialDriveController`**](../reference/controllers/dd_controller.html)<br><br>
[**`DifferentialDriveController`**](../reference/controllers/dd_controller.md)<br><br>
Commands 2-wheeled robots by setting linear / angular velocity setpoints and converting them into per-joint velocity control.<br><br>
<ul>
<li>_Command Dim_: n_joints</li>
@ -116,7 +116,7 @@ These are controllers specifically meant for robots with manipulation capabiliti
<table markdown="span">
<tr>
<td valign="top">
[**`InverseKinematicsController`**](../reference/controllers/ik_controller.html)<br><br>
[**`InverseKinematicsController`**](../reference/controllers/ik_controller.md)<br><br>
Controls a robot's end-effector by iteratively solving inverse kinematics to output a desired joint configuration to reach the desired end effector pose, and then runs an underlying `JointController` to reach the target joint configuration. Multiple modes are available, and dictate both the command dimension and behavior of the controller. `condition_on_current_position` can be set to seed the IK solver with the robot's current joint state, and `use_impedance` can be set if the robot's per-joint inertia should be taken into account when attempting to reach the target joint configuration.<br><br>
Note: Orientation convention is axis-angle `[ax,ay,az]` representation, and commands are expressed in the robot base frame unless otherwise noted.<br><br>
<ul>
@ -135,7 +135,7 @@ These are controllers specifically meant for robots with manipulation capabiliti
</tr>
<tr>
<td valign="top">
[**`OperationalSpaceController`**](../reference/controllers/osc_controller.html)<br><br>
[**`OperationalSpaceController`**](../reference/controllers/osc_controller.md)<br><br>
Controls a robot's end-effector by applying the [operational space control](https://khatib.stanford.edu/publications/pdfs/Khatib_1987_RA.pdf) algorithm to apply per-joint efforts to perturb the robot's end effector with impedances ("force") along all six (x,y,z,ax,ay,az) axes. Unlike `InverseKinematicsController`, this controller is inherently compliant and especially useful for contact-rich tasks or settings where fine-grained forces are required. For robots with >6 arm joints, an additional null command is used as a secondary objective and is defined as joint state `reset_joint_pos`.<br><br>
Note: Orientation convention is axis-angle `[ax,ay,az]` representation, and commands are expressed in the robot base frame unless otherwise noted.<br><br>
<ul>
@ -161,7 +161,7 @@ These are controllers specifically meant for robots with manipulation capabiliti
<table markdown="span" width="100%">
<tr>
<td valign="top" width="100%">
[**`MultiFingerGripperController`**](../reference/controllers/multi_finger_gripper_controller.html)<br><br>
[**`MultiFingerGripperController`**](../reference/controllers/multi_finger_gripper_controller.md)<br><br>
Commands a robot's gripper joints, with behavior defined via `mode`. By default, &lt;closed, open&gt; is assumed to correspond to &lt;q_lower_limit, q_upper_limit&gt; for each joint, though this can be manually set via the `closed_qpos` and `open_qpos` arguments.<br><br>
<ul>
<li>_Command Dim_: 1 / n_gripper_joints</li>
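As a rough illustration of how these controllers are selected in practice, the sketch below overrides a robot's defaults through a `controller_config` dict; the control-group names (`base`, `arm_0`, `gripper_0`) and mode strings are assumptions to check against your version.
```python
# Hedged sketch: choosing controllers per control group in a robot config.
robot_cfg = {
    "type": "Fetch",
    "controller_config": {
        "base": {"name": "DifferentialDriveController"},
        "arm_0": {"name": "InverseKinematicsController", "mode": "pose_delta_ori"},
        "gripper_0": {"name": "MultiFingerGripperController", "mode": "binary"},
    },
}
```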

View File

@ -52,6 +52,6 @@ Once created, the environment can be interfaced roughly in the same way as an Op
## Types
**`OmniGibson`** provides the main [`Environment`](../reference/environments/env_base.html) class, which should offer most of the essential functionality necessary for running robot experiments and interacting with the underlying simulator.
**`OmniGibson`** provides the main [`Environment`](../reference/environments/env_base.md) class, which should offer most of the essential functionality necessary for running robot experiments and interacting with the underlying simulator.
However, for more niche use-cases (such as demonstration collection, or batched environments), **`OmniGibson`** offers the [`EnvironmentWrapper`](../reference/environments/env_wrapper.html) class to easily extend the core environment functionality.
However, for more niche use-cases (such as demonstration collection, or batched environments), **`OmniGibson`** offers the [`EnvironmentWrapper`](../reference/environments/env_wrapper.md) class to easily extend the core environment functionality.
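A hedged sketch of the Gym-style interaction loop is shown below; the 5-tuple `step` return follows the Gymnasium convention used by recent OmniGibson versions, while older releases may return a 4-tuple and a bare `obs` from `reset`.
```python
import omnigibson as og

cfg = {"scene": {"type": "Scene"}, "robots": [{"type": "Turtlebot"}]}
env = og.Environment(configs=cfg)

obs, info = env.reset()                 # reset() return shape may vary by version
for _ in range(10):
    action = env.action_space.sample()  # random action, as in Gym
    obs, reward, terminated, truncated, info = env.step(action)
env.close()
```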

View File

@ -44,7 +44,7 @@ abilities = OBJECT_TAXONOMY.get_abilities(synset)
```
!!! info annotate "Follow our tutorial on BEHAVIOR knowledgebase!"
To better understand how to use / visualize / modify BEHAVIOR knowledgebase, please read our [tutorial](../tutorials/behavior_knowledgebase.html)!
To better understand how to use / visualize / modify BEHAVIOR knowledgebase, please read our [tutorial](../tutorials/behavior_knowledgebase.md)!
??? warning annotate "Not all object states are guaranteed to be created!"
@ -66,7 +66,7 @@ These are object states that are agnostic to other objects in a given scene.
<table markdown="span">
<tr>
<td valign="top" width="60%">
[**`AABB`**](../reference/object_states/aabb.html)<br><br>
[**`AABB`**](../reference/object_states/aabb.md)<br><br>
The axis-aligned bounding box (AABB) of the object in the world frame.<br><br>
<ul>
<li>`get_value()`: returns `aabb_min`, `aabb_max`</li>
@ -79,7 +79,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`VerticalAdjacency`** / **`HorizontalAdjacency`**](../reference/object_states/adjacency.html)<br><br>
[**`VerticalAdjacency`** / **`HorizontalAdjacency`**](../reference/object_states/adjacency.md)<br><br>
The nearby objects that are considered adjacent to the object, either in the +/- global Z axis or +/- global XY plane.<br><br>
<ul>
<li>`get_value()`: returns `AxisAdjacencyList`, a namedtuple with `positive_neighbors` and `negative_neighbors` each of which are lists of nearby objects</li>
@ -92,7 +92,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`Burnt`**](../reference/object_states/burnt.html)<br><br>
[**`Burnt`**](../reference/object_states/burnt.md)<br><br>
Whether the object is considered burnt or not. Note that if `True`, this object's visual appearance will also change accordingly. This corresponds to an object hitting some `MaxTemperature` threshold over the course of its lifetime.<br><br>
<ul>
<li>`get_value()`: returns `True / False`</li>
@ -105,7 +105,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`ContactBodies`**](../reference/object_states/contact_bodies.html)<br><br>
[**`ContactBodies`**](../reference/object_states/contact_bodies.md)<br><br>
The nearby rigid bodies that this object is currently in contact with.<br><br>
<ul>
<li>`get_value(ignore_objs=None)`: returns `rigid_prims`, a set of `RigidPrim`s the object is in contact with, optionally with `ignore_objs` filtered from the set</li>
@ -118,7 +118,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`Cooked`**](../reference/object_states/cooked.html)<br><br>
[**`Cooked`**](../reference/object_states/cooked.md)<br><br>
Whether the object is considered cooked or not. Note that if `True`, this object's visual appearance will also change accordingly. This corresponds to an object hitting some `MaxTemperature` threshold over the course of its lifetime.<br><br>
<ul>
<li>`get_value()`: returns `True / False`</li>
@ -131,7 +131,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`Folded`** / **`Unfolded`**](../reference/object_states/folded.html)<br><br>
[**`Folded`** / **`Unfolded`**](../reference/object_states/folded.md)<br><br>
A cloth-specific state. Determines whether a cloth object is sufficiently folded / unfolded. This is inferred as a function of its overall smoothness, total area to current area ratio, and total diagonal to current diagonal ratio.<br><br>
<ul>
<li>`get_value()`: returns `True / False`</li>
@ -144,7 +144,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`Frozen`**](../reference/object_states/frozen.html)<br><br>
[**`Frozen`**](../reference/object_states/frozen.md)<br><br>
Whether the object is considered frozen or not. Note that if `True`, this object's visual appearance will also change accordingly. This corresponds to an object's `Temperature` value being under some threshold at the current timestep.<br><br>
<ul>
<li>`get_value()`: returns `True / False`</li>
@ -157,7 +157,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`HeatSourceOrSink`**](../reference/object_states/heat_source_or_sink.html)<br><br>
[**`HeatSourceOrSink`**](../reference/object_states/heat_source_or_sink.md)<br><br>
Defines a heat source or sink which raises / lowers the temperature of nearby objects, if enabled. Use `state.affects_obj(obj)` to check whether the given heat source / sink is currently impacting `obj`'s temperature.<br><br>
<ul>
<li>`get_value()`: returns `True / False` (whether the source / sink is enabled or not)</li>
@ -170,7 +170,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`Heated`**](../reference/object_states/heated.html)<br><br>
[**`Heated`**](../reference/object_states/heated.md)<br><br>
Whether the object is considered heated or not. Note that if `True`, this object's visual appearance will also change accordingly with steam actively coming off of the object. This corresponds to an object's `Temperature` value being above some threshold at the current timestep.<br><br>
<ul>
<li>`get_value()`: returns `True / False`</li>
@ -183,7 +183,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`MaxTemperature`**](../reference/object_states/max_temperature.html)<br><br>
[**`MaxTemperature`**](../reference/object_states/max_temperature.md)<br><br>
The object's max temperature over the course of its lifetime. This value gets automatically updated every simulation step and can be affected by nearby `HeatSourceOrSink`-enabled objects.<br><br>
<ul>
<li>`get_value()`: returns `float`</li>
@ -196,7 +196,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`OnFire`**](../reference/object_states/on_fire.html)<br><br>
[**`OnFire`**](../reference/object_states/on_fire.md)<br><br>
Whether the object is lit on fire or not. Note that if `True`, this object's visual appearance will also change accordingly with fire actively coming off of the object. This corresponds to an object's `Temperature` value being above some threshold at the current timestep. Note that if `True`, this object becomes an active `HeatSourceOrSink`-enabled object that will raise the temperature of nearby objects.<br><br>
<ul>
<li>`get_value()`: returns `True / False`</li>
@ -209,7 +209,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`ObjectsInFOVOfRobot`**](../reference/object_states/objects_in_fov_of_robot.html)<br><br>
[**`ObjectsInFOVOfRobot`**](../reference/object_states/objects_in_fov_of_robot.md)<br><br>
A robot-specific state. Computes the list of objects that are currently in the robot's field of view.<br><br>
<ul>
<li>`get_value()`: returns `obj_list`, the list of `BaseObject`s</li>
@ -222,7 +222,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`Open`**](../reference/object_states/open.html)<br><br>
[**`Open`**](../reference/object_states/open.md)<br><br>
Whether the object's joint is considered open or not. This corresponds to at least one joint being above some threshold from its pre-defined annotated closed state.<br><br>
<ul>
<li>`get_value()`: returns `True / False`</li>
@ -235,7 +235,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`Pose`**](../reference/object_states/pose.html)<br><br>
[**`Pose`**](../reference/object_states/pose.md)<br><br>
The object's current (position, orientation) expressed in (cartesian, quaternion) form in the global frame.<br><br>
<ul>
<li>`get_value()`: returns (`pos`, `quat`), with quat in (x,y,z,w) form</li>
@ -248,7 +248,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`Temperature`**](../reference/object_states/temperature.html)<br><br>
[**`Temperature`**](../reference/object_states/temperature.md)<br><br>
The object's current temperature. This value gets automatically updated every simulation step and can be affected by nearby `HeatSourceOrSink`-enabled objects.<br><br>
<ul>
<li>`get_value()`: returns `float`</li>
@ -261,7 +261,7 @@ These are object states that are agnostic to other objects in a given scene.
</tr>
<tr>
<td valign="top" width="60%">
[**`ToggledOn`**](../reference/object_states/toggled_on.html)<br><br>
[**`ToggledOn`**](../reference/object_states/toggled_on.md)<br><br>
A virtual button that can be "pressed" by a robot's end-effector. Doing so will result in the state being toggled between `True` and `False`, and also corresponds to a visual change in the virtual button's appearance.<br><br>
<ul>
<li>`get_value()`: returns `True / False`</li>
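Putting the absolute states together, a minimal hedged sketch of querying and setting them follows; the `obj.states` dict keyed by state class matches the pattern used in OmniGibson examples, and whether a given state exists on `obj` (some `StatefulObject`) depends on its abilities.
```python
# Hedged sketch: absolute states are accessed via the object's `states` dict.
from omnigibson.object_states import Cooked, Temperature

if Cooked in obj.states:
    print(obj.states[Cooked].get_value())      # True / False
if Temperature in obj.states:
    obj.states[Temperature].set_value(100.0)   # force the current temperature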
@ -280,7 +280,7 @@ These are object states that are computed with respect to other entities in the
<table markdown=span>
<tr>
<td valign="top" width="60%">
[**`AttachedTo`**](../reference/object_states/attached_to.html)<br><br>
[**`AttachedTo`**](../reference/object_states/attached_to.md)<br><br>
Defines a rigid or flexible connection between this object and another object (parent). At any given moment, this object can only be attached to at most one parent, but the reverse is not true. That is,
a parent can have multiple children, but a child can only have one parent. An attachment is triggered and created when this object makes contact with a compatible parent and is aligned correctly.<br><br>
<ul>
@ -294,7 +294,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`Contains`**](../reference/object_states/contains.html)<br><br>
[**`Contains`**](../reference/object_states/contains.md)<br><br>
Defines whether this object currently contains any quantity of a specific particle system. Note that this state requires that a container virtual volume be pre-annotated in the underlying object asset for it to be created. Particles are considered contained if their position lies within the annotated volume.<br><br>
<ul>
<li>`get_value(system)`: returns `True / False`</li>
@ -307,7 +307,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`Covered`**](../reference/object_states/covered.html)<br><br>
[**`Covered`**](../reference/object_states/covered.md)<br><br>
Defines whether this object is currently covered by a specific particle system. This corresponds to checking whether the number of particles either touching or attached to this object surpasses some minimum threshold.<br><br>
<ul>
<li>`get_value(system)`: returns `True / False`</li>
@ -320,7 +320,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`Draped`**](../reference/object_states/draped.html)<br><br>
[**`Draped`**](../reference/object_states/draped.md)<br><br>
A cloth-specific state. Defines whether this cloth object is fully covering `other`, e.g., a tablecloth draped over a table. This object is considered draped if it is touching `other` and its center of mass is below the average position of the contact points.<br><br>
<ul>
<li>`get_value(other)`: returns `True / False`</li>
@ -333,7 +333,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`Filled`**](../reference/object_states/filled.html)<br><br>
[**`Filled`**](../reference/object_states/filled.md)<br><br>
Defines whether this object is currently filled with a specific particle system. Note that this state requires that a container virtual volume be pre-annotated in the underlying object asset for it to be created. This state corresponds to checking whether the total volume of contained particles surpasses some minimum relative ratio with respect to its total annotated container volume.<br><br>
<ul>
<li>`get_value(system)`: returns `True / False`</li>
@ -346,7 +346,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`Inside`**](../reference/object_states/inside.html)<br><br>
[**`Inside`**](../reference/object_states/inside.md)<br><br>
Defines whether this object is considered inside of `other`. This does raycasting in all axes (x,y,z), and checks to make sure that rays shot in at least two of these axes hit `other`.<br><br>
<ul>
<li>`get_value(other)`: returns `True / False`</li>
@ -359,7 +359,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`IsGrasping`**](../reference/object_states/robot_related_states.html)<br><br>
[**`IsGrasping`**](../reference/object_states/robot_related_states.md)<br><br>
A robot-specific state. Determines whether this robot is currently grasping `other`.<br><br>
<ul>
<li>`get_value(other)`: returns `True / False`</li>
@ -372,7 +372,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`NextTo`**](../reference/object_states/next_to.html)<br><br>
[**`NextTo`**](../reference/object_states/next_to.md)<br><br>
Defines whether this object is considered next to `other`. This checks to make sure this object is relatively close to `other` and that `other` is in either of this object's `HorizontalAdjacency` neighbor lists.<br><br>
<ul>
<li>`get_value(other)`: returns `True / False`</li>
@ -385,7 +385,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`OnTop`**](../reference/object_states/on_top.html)<br><br>
[**`OnTop`**](../reference/object_states/on_top.md)<br><br>
Defines whether this object is considered on top of `other`. This checks to make sure that this object is touching `other` and that `other` is in this object's `VerticalAdjacency` `negative_neighbors` list.<br><br>
<ul>
<li>`get_value(other)`: returns `True / False`</li>
@ -398,7 +398,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`Overlaid`**](../reference/object_states/overlaid.html)<br><br>
[**`Overlaid`**](../reference/object_states/overlaid.md)<br><br>
A cloth-specific state. Defines whether this object is overlaid over `other`, e.g., a t-shirt overlaid over a table. This checks whether the ratio of the XY-projected area of this cloth object's convex hull to the XY area of `other`'s bounding box surpasses some threshold.<br><br>
<ul>
<li>`get_value(other)`: returns `True / False`</li>
@ -411,7 +411,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`Saturated`**](../reference/object_states/saturated.html)<br><br>
[**`Saturated`**](../reference/object_states/saturated.md)<br><br>
Defines whether this object has reached its limit with respect to a specific particle system, e.g., a sponge fully absorbed with water, or a spray bottle fully emptied of cleaner fluid. This keeps a reference to this object's modified particle count for `system`, and checks whether the current value surpasses a desired limit. Specific limits can be queried via `get_limit(system)` and set via `set_limit(system, limit)`. Note that if `True`, this object's visual appearance will also change accordingly.<br><br>
<ul>
<li>`get_value(system)`: returns `True / False`</li>
@ -424,7 +424,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`Touching`**](../reference/object_states/touching.html)<br><br>
[**`Touching`**](../reference/object_states/touching.md)<br><br>
Defines whether this object is in contact with `other`.<br><br>
<ul>
<li>`get_value(other)`: returns `True / False`</li>
@ -437,7 +437,7 @@ These are object states that are computed with respect to other entities in the
</tr>
<tr>
<td valign="top" width="60%">
[**`Under`**](../reference/object_states/under.html)<br><br>
[**`Under`**](../reference/object_states/under.md)<br><br>
Defines whether this object is considered under `other`. This checks to make sure that this object is touching `other` and that `other` is in this object's `VerticalAdjacency` `positive_neighbors` list.<br><br>
<ul>
<li>`get_value(other)`: returns `True / False`</li>
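The relative states above all take the other entity as an argument; a short hedged sketch follows, where `apple`, `table`, and `scene` are placeholder handles rather than real variables.
```python
# Hedged sketch: relative states take the other entity (object or system).
from omnigibson.object_states import OnTop, Covered

print(apple.states[OnTop].get_value(table))    # is the apple on the table?
water = scene.get_system("water")              # system-based states take a system
apple.states[Covered].set_value(water, True)   # cover the apple with water
```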
@ -456,7 +456,7 @@ These are object states that define intrinsic properties of the object and
<table markdown="span">
<tr>
<td valign="top" width="60%">
[**`ParticleApplier` / `ParticleRemover`**](../reference/object_states/particle_modifier.html)<br><br>
[**`ParticleApplier` / `ParticleRemover`**](../reference/object_states/particle_modifier.md)<br><br>
Defines an object that has the ability to apply (spawn) or remove (absorb) particles from specific particle systems. This state's `conditions` property defines the per-particle system requirements in order for the applier / remover to be active for that specific system. For example, a spray bottle that is a `ParticleApplier` may require `toggled_on.get_value()` to be `True` in order to allow `cleaning_fluid` particles to be sprayed, simulating a "press" of the nozzle trigger. The `method` flag in the constructor determines the applier / removal behavior, which is triggered **_only_** by direct contact with the object (`ParticleModifyMethod.ADJACENCY`) or contact with a virtual volume (`ParticleModifyMethod.PROJECTION`). The former captures objects such as sponges, while the latter captures objects such as vacuum cleaners or spray bottles. This object state is updated at each simulation step such that particles are automatically added / removed as needed.<br><br>
<ul>
<li>`get_value()`: Not supported.</li>
@ -469,7 +469,7 @@ These are object states that define intrinsic properties of the object and
</tr>
<tr>
<td valign="top" width="60%">
[**`ParticleSource` / `ParticleSink`**](../reference/object_states/particle_source_or_sink.html)<br><br>
[**`ParticleSource` / `ParticleSink`**](../reference/object_states/particle_source_or_sink.md)<br><br>
Defines an object that has the ability to apply (spawn) or remove (absorb) particles from specific particle systems. The behavior is nearly identical to **`ParticleApplier` / `ParticleRemover`**, with the exception that contact is not strictly necessary to add / remove particles. This is to provide the distinction between, e.g., a particle _source_ such as a sink, which always spawns water every timestep regardless of whether its faucet volume is in contact with a surface, vs. a particle _applier_ such as a spray bottle, which (for efficiency reasons) only spawns water if its virtual spray cone is overlapping with a surface.<br><br>
<ul>
<li>`get_value()`: Not supported.</li>

View File

@ -49,42 +49,42 @@ All objects are tracked and organized by the underlying scene, and can quickly b
## Types
**`OmniGibson`** directly supports multiple `Object` classes, which are intended to encapsulate different types of objects with varying functionalities. The most basic is [`BaseObject`](../reference/objects/object_base.html), which can capture any arbitrary object and thinly wraps an [`EntityPrim`](../reference/objects/entity_prim.html). The more specific classes are shown below:
**`OmniGibson`** directly supports multiple `Object` classes, which are intended to encapsulate different types of objects with varying functionalities. The most basic is [`BaseObject`](../reference/objects/object_base.md), which can capture any arbitrary object and thinly wraps an [`EntityPrim`](../reference/prims/entity_prim.md). The more specific classes are shown below:
<table markdown="span">
<tr>
<td valign="top">
[**`StatefulObject`**](../reference/objects/stateful_object.html)<br><br>
[**`StatefulObject`**](../reference/objects/stateful_object.md)<br><br>
Encapsulates an object that owns a set of [object states](./object_states.md). In general, this is intended to be a parent class, and not meant to be instantiated directly.<br><br>
</td>
</tr>
<tr>
<td valign="top">
[**`USDObject`**](../reference/objects/usd_object.html)<br><br>
[**`USDObject`**](../reference/objects/usd_object.md)<br><br>
Encapsulates an object imported from a usd file. Useful when loading custom USD assets into **`OmniGibson`**. Users should specify the absolute `usd_path` to the desired file to import.<br><br>
</td>
</tr>
<tr>
<td valign="top">
[**`DatasetObject`**](../reference/objects/dataset_object.html)<br><br>
[**`DatasetObject`**](../reference/objects/dataset_object.md)<br><br>
This inherits from `USDObject` and encapsulates an object from the BEHAVIOR-1K dataset. Users should specify the `category` and `model` of object to load, where `model` is a 6 character string unique to each dataset object. For an overview of all possible categories and models, please refer to our [Knowledgebase Dashboard](https://behavior.stanford.edu/knowledgebase/)<br><br>
</td>
</tr>
<tr>
<td valign="top">
[**`PrimitiveObject`**](../reference/objects/primitive_object.html)<br><br>
[**`PrimitiveObject`**](../reference/objects/primitive_object.md)<br><br>
Encapsulates an object defined by a single primitive geom, such as a sphere, cube, or cylinder. These are often used as visual objects (via `visual_only=True`) in the scene, e.g., for visualizing the target location of a robot reaching task.<br><br>
</td>
</tr>
<tr>
<td valign="top">
[**`LightObject`**](../reference/objects/light_object.html)<br><br>
[**`LightObject`**](../reference/objects/light_object.md)<br><br>
Encapsulates a virtual light source, where the shape (sphere, disk, dome, etc.), size, and intensity can all be specified.<br><br>
</td>
</tr>
<tr>
<td valign="top">
[**`ControllableObject`**](../reference/objects/controllable_object.html)<br><br>
[**`ControllableObject`**](../reference/objects/controllable_object.md)<br><br>
Encapsulates an object that is motorized, for example, a conveyor belt, and provides functionality to apply actions and deploy control signals to the motors. However, this class is currently used exclusively as a parent class of `BaseRobot`, and should not be instantiated directly by users.<br><br>
</td>
</tr>
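A hedged sketch of constructing some of these object types follows; the constructor kwargs mirror the descriptions above, the `model` string is a placeholder (not a real dataset ID), and `scene.add_object(...)` is an assumed runtime-import API.
```python
# Hedged sketch: instantiating objects of different types.
from omnigibson.objects import DatasetObject, PrimitiveObject

apple = DatasetObject(name="apple_0", category="apple", model="xxxxxx")
marker = PrimitiveObject(
    name="goal_marker",
    primitive_type="Sphere",
    visual_only=True,   # no collisions; purely a visualization aid
    radius=0.05,
)
scene.add_object(apple)  # assumed API for adding objects at runtime
```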

View File

@ -28,48 +28,48 @@ After the prim has been created, it may additionally require further initializat
Once initialized, a `Prim` instance can be used as a direct interface with the corresponding low-level prim on the omniverse stage. The low-level attributes of the underlying prim can be queried / set via `prim.get_attribute(name)` / `prim.set_attribute(name, val)`. In addition, some `Prim` classes implement higher-level functionality to more easily manipulate the underlying prim, such as `MaterialPrim`'s `bind(prim_path)`, which binds its owned material to the desired prim located at `prim_path`.
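A short hedged sketch of this low-level attribute interface follows; attribute names are raw USD tokens, so while `"radius"` is valid for a USD Sphere prim, legal names depend entirely on the underlying prim's schema.
```python
# Hedged sketch: reading and writing a raw USD attribute on a `prim` handle.
r = prim.get_attribute("radius")       # read a raw USD attribute
prim.set_attribute("radius", r * 2.0)  # write it back, doubled
```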
## Types
**`OmniGibson`** directly supports multiple `Prim` classes, which are intended to encapsulate different types of prims from the omniverse scene stage. The most basic is [`BasePrim`](../reference/prims/prim_base.html), which can capture any arbitrary prim. The more specific classes are shown below:
**`OmniGibson`** directly supports multiple `Prim` classes, which are intended to encapsulate different types of prims from the omniverse scene stage. The most basic is [`BasePrim`](../reference/prims/prim_base.md), which can capture any arbitrary prim. The more specific classes are shown below:
<table markdown="span">
<tr>
<td valign="top">
[**`XFormPrim`**](../reference/prims/xform_prim.html)<br><br>
[**`XFormPrim`**](../reference/prims/xform_prim.md)<br><br>
Encapsulates a transformable prim. This prim can get and set its local or global pose, as well as its own scale.<br><br>
</td>
</tr>
<tr>
<td valign="top">
[**`GeomPrim`**](../reference/prims/geom_prim.html#prims.geom_prim.GeomPrim)<br><br>
Encapsulates a prim defined by a geom (shape or mesh). It is an `XFormPrim` that additionally owns geometry defined by its set of `points`. Its subclasses [`VisualGeomPrim`](../reference/prims/geom_prim.html) and [`CollisionGeomPrim`](../reference/prims/geom_prim.html#prims.geom_prim.CollisionGeomPrim) implement additional utility for dealing with those respective types of geometries (e.g.: `CollisionGeomPrim.set_collision_approximation(...)`).<br><br>
[**`GeomPrim`**](../reference/prims/geom_prim.md#prims.geom_prim.GeomPrim)<br><br>
Encapsulates a prim defined by a geom (shape or mesh). It is an `XFormPrim` that additionally owns geometry defined by its set of `points`. Its subclasses [`VisualGeomPrim`](../reference/prims/geom_prim.md) and [`CollisionGeomPrim`](../reference/prims/geom_prim.md#prims.geom_prim.CollisionGeomPrim) implement additional utility for dealing with those respective types of geometries (e.g.: `CollisionGeomPrim.set_collision_approximation(...)`).<br><br>
</td>
</tr>
<tr>
<td valign="top">
[**`ClothPrim`**](../reference/prims/cloth_prim.html)<br><br>
[**`ClothPrim`**](../reference/prims/cloth_prim.md)<br><br>
Encapsulates a prim defined by a mesh geom that is to be converted into cloth. It is a `GeomPrim` that dynamically transforms its owned (rigid) mesh into a (compliant, particle-based) cloth. Its methods can be used to query and set its individual particles' state, as well as track a subset of keypoints / keyfaces.<br><br>
</td>
</tr>
<tr>
<td valign="top">
[**`RigidPrim`**](../reference/prims/rigid_prim.html)<br><br>
[**`RigidPrim`**](../reference/prims/rigid_prim.md)<br><br>
Encapsulates a prim defined by a rigid body. It is an `XFormPrim` that is subject to physics and gravity, and may belong to an `EntityPrim`. It additionally has attributes to control its own mass, density, and other physics-related behavior.<br><br>
</td>
</tr>
<tr>
<td valign="top">
[**`JointPrim`**](../reference/prims/joint_prim.html)<br><br>
[**`JointPrim`**](../reference/prims/joint_prim.md)<br><br>
Encapsulates a prim defined by a joint. It belongs to an `EntityPrim` and has attributes to control its own joint state (position, velocity, effort).<br><br>
</td>
</tr>
<tr>
<td valign="top">
[**`EntityPrim`**](../reference/prims/entity_prim.html)<br><br>
[**`EntityPrim`**](../reference/prims/entity_prim.md)<br><br>
Encapsulates the top-level prim of an imported object. Since the underlying object consists of a set of links and joints, this class owns its corresponding set of `RigidPrim`s and `JointPrim`s, and provides high-level functionality for controlling the object's pose, joint state, and physics-related behavior.<br><br>
</td>
</tr>
<tr>
<td valign="top">
[**`MaterialPrim`**](../reference/prims/material_prim.html)<br><br>
[**`MaterialPrim`**](../reference/prims/material_prim.md)<br><br>
Encapsulates a prim defining a material specification. It provides high-level functionality for directly controlling the underlying material's properties and behavior.<br><br>
</td>
</tr>

View File

@ -83,12 +83,12 @@ Controllers and sensors can be accessed directly via the `controllers` and `sens
**`OmniGibson`** currently supports 9 robots, consisting of 4 mobile robots, 2 manipulation robots, 2 mobile manipulation robots, and 1 anthropomorphic "robot" (a bimanual agent proxy used for VR teleoperation). Below, we provide a brief overview of each model:
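As a brief hedged sketch of the `controllers` and `sensors` properties mentioned above, the snippet below inspects a robot pulled from an existing `Environment`:
```python
# Hedged sketch: inspecting a robot's controllers and sensors at runtime.
robot = env.robots[0]
for name, controller in robot.controllers.items():
    print(name, type(controller).__name__)  # e.g. "arm_0 JointController"
for name, sensor in robot.sensors.items():
    print(name, type(sensor).__name__)      # e.g. VisionSensor / ScanSensor
```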
### Mobile Robots
These are navigation-only robots (an instance of [`LocomotionRobot`](../reference/robots/locomotion_robot.html)) that solely consist of a base that can move.
These are navigation-only robots (an instance of [`LocomotionRobot`](../reference/robots/locomotion_robot.md)) that solely consist of a base that can move.
<table markdown="span">
<tr>
<td valign="top" width="60%">
[**`Turtlebot`**](../reference/robots/turtlebot.html)<br><br>
[**`Turtlebot`**](../reference/robots/turtlebot.md)<br><br>
The two-wheeled <a href="https://www.turtlebot.com/turtlebot2/">Turtlebot 2</a> model with the Kobuki base.<br><br>
<ul>
<li>_Controllers_: Base</li>
@ -101,7 +101,7 @@ These are navigation-only robots (an instance of [`LocomotionRobot`](../referenc
</tr>
<tr>
<td valign="top" width="60%">
[**`Locobot`**](../reference/robots/locobot.html)<br><br>
[**`Locobot`**](../reference/robots/locobot.md)<br><br>
The two-wheeled, open-source <a href="http://www.locobot.org/">LoCoBot</a> model.<br><br> Note that in our model the arm is disabled and is fixed to the base.<br><br>
<ul>
<li>_Controllers_: Base</li>
@ -114,7 +114,7 @@ These are navigation-only robots (an instance of [`LocomotionRobot`](../referenc
</tr>
<tr>
<td valign="top" width="60%">
[**`Husky`**](../reference/robots/husky.html)<br><br>
[**`Husky`**](../reference/robots/husky.md)<br><br>
The four-wheeled <a href="https://clearpathrobotics.com/husky-unmanned-ground-vehicle-robot/">Husky UGV</a> model from Clearpath Robotics.<br><br>
<ul>
<li>_Controllers_: Base</li>
@ -127,7 +127,7 @@ These are navigation-only robots (an instance of [`LocomotionRobot`](../referenc
</tr>
<tr>
<td valign="top" width="60%">
[**`Freight`**](../reference/robots/freight.html)<br><br>
[**`Freight`**](../reference/robots/freight.md)<br><br>
The two-wheeled <a href="https://docs.fetchrobotics.com/">Freight</a> model, which serves as the base for the Fetch robot.<br><br>
<ul>
<li>_Controllers_: Base</li>
@ -141,12 +141,12 @@ These are navigation-only robots (an instance of [`LocomotionRobot`](../referenc
</table>
### Manipulation Robots
These are manipulation-only robots (an instance of [`ManipulationRobot`](../reference/robots/manipulation_robot.html)) that cannot move and solely consist of an actuated arm with a gripper attached to its end effector.
These are manipulation-only robots (an instance of [`ManipulationRobot`](../reference/robots/manipulation_robot.md)) that cannot move and solely consist of an actuated arm with a gripper attached to its end effector.
<table markdown="span">
<tr>
<td valign="top" width="60%">
[**`Franka`**](../reference/robots/franka.html)<br><br>
[**`Franka`**](../reference/robots/franka.md)<br><br>
The popular 7-DOF <a href="https://franka.de/">Franka Research 3</a> model equipped with a parallel jaw gripper. Note that OmniGibson also includes three alternative versions of Franka with dexterous hands: FrankaAllegro (equipped with an Allegro hand), FrankaLeap (equipped with a Leap hand), and FrankaInspire (equipped with an Inspire hand).<br><br>
<ul>
<li>_Controllers_: Arm, Gripper</li>
@ -159,7 +159,7 @@ These are manipulation-only robots (an instance of [`ManipulationRobot`](../refe
</tr>
<tr>
<td valign="top" width="60%">
[**`VX300S`**](../reference/robots/vx300s.html)<br><br>
[**`VX300S`**](../reference/robots/vx300s.md)<br><br>
The 6-DOF <a href="https://www.trossenrobotics.com/viperx-300-robot-arm-6dof.aspx">ViperX 300 6DOF</a> model from Trossen Robotics equipped with a parallel jaw gripper.<br><br>
<ul>
<li>_Controllers_: Arm, Gripper</li>
@ -174,12 +174,12 @@ These are manipulation-only robots (an instance of [`ManipulationRobot`](../refe
### Mobile Manipulation Robots
These are robots that can both navigate and manipulate (and inherit from both [`LocomotionRobot`](../reference/robots/locomotion_robot.html) and [`ManipulationRobot`](../reference/robots/manipulation_robot.html)), and are equipped with both a base that can move as well as one or more gripper-equipped arms that can actuate.
These are robots that can both navigate and manipulate (and inherit from both [`LocomotionRobot`](../reference/robots/locomotion_robot.md) and [`ManipulationRobot`](../reference/robots/manipulation_robot.md)), and are equipped with both a base that can move as well as one or more gripper-equipped arms that can actuate.
<table markdown="span">
<tr>
<td valign="top" width="60%">
[**`Fetch`**](../reference/robots/fetch.html)<br><br>
[**`Fetch`**](../reference/robots/fetch.md)<br><br>
The <a href="https://docs.fetchrobotics.com/">Fetch</a> model, composed of a two-wheeled base, linear trunk, 2-DOF head, 7-DOF arm, and 2-DOF parallel jaw gripper.<br><br>
<ul>
<li>_Controllers_: Base, Head, Arm, Gripper</li>
@ -192,7 +192,7 @@ These are robots that can both navigate and manipulate (and inherit from both [`
</tr>
<tr>
<td valign="top" width="60%">
[**`Tiago`**](../reference/robots/tiago.html)<br><br>
[**`Tiago`**](../reference/robots/tiago.md)<br><br>
The bimanual <a href="https://pal-robotics.com/robots/tiago/">Tiago</a> model from PAL Robotics, composed of a holonomic base (which we model as a 3-DOF (x,y,rz) set of joints), linear trunk, 2-DOF head, two 7-DOF arms, and two 2-DOF parallel jaw grippers.<br><br>
<ul>
<li>_Controllers_: Base, Head, Left Arm, Right Arm, Left Gripper, Right Gripper</li>
@ -209,7 +209,7 @@ These are robots that can both navigate and manipulate (and inherit from both [`
<table markdown="span">
<tr>
<td valign="top" width="60%">
[**`BehaviorRobot`**](../reference/robots/behavior_robot.html#robots.behavior_robot.BehaviorRobot)<br><br>
[**`BehaviorRobot`**](../reference/robots/behavior_robot.md#robots.behavior_robot.BehaviorRobot)<br><br>
A hand-designed model intended to be used exclusively for VR teleoperation.<br><br>
<ul>
<li>_Controllers_: Base, Head, Left Arm, Right Arm, Left Gripper, Right Gripper</li>

View File

@ -255,7 +255,7 @@ In addition, a scene can always be reset by calling `reset()`. The scene's initi
</tr>
<tr>
<td valign="top" width="30%">
[**`Wainscott_0_int`**](../reference/scene/Wainscott_0_int.html)<br><br>
[**`Wainscott_0_int`**](../reference/scene/Wainscott_0_int.md)<br><br>
</td>
<td>
<img src="../assets/scenes/birds-eye-views/Wainscott_0_int.png" alt="Wainscott_0_int">
@ -266,7 +266,7 @@ In addition, a scene can always be reset by calling `reset()`. The scene's initi
</tr>
<tr>
<td valign="top" width="30%">
[**`Wainscott_1_int`**](../reference/scene/Wainscott_1_int.html)<br><br>
[**`Wainscott_1_int`**](../reference/scene/Wainscott_1_int.md)<br><br>
</td>
<td>
<img src="../assets/scenes/birds-eye-views/Wainscott_1_int.png" alt="Wainscott_1_int">

View File

@ -80,7 +80,7 @@ info:
### Vision
Vision observations are captured by the [`VisionSensor`](../reference/sensors/vision_sensor.html) class, which encapsulates a virtual pinhole camera sensor equipped with various modalities, including RGB, depth, normals, three types of segmentation, optical flow, and 2D and 3D bounding boxes, as shown below:
Vision observations are captured by the [`VisionSensor`](../reference/sensors/vision_sensor.md) class, which encapsulates a virtual pinhole camera sensor equipped with various modalities, including RGB, depth, normals, three types of segmentation, optical flow, and 2D and 3D bounding boxes, as shown below:
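A hedged sketch of requesting modalities follows; the modality names below follow OmniGibson's conventions, but the exact set of supported strings should be verified against your version.
```python
# Hedged sketch: modalities are requested per robot via `obs_modalities`.
robot_cfg = {
    "type": "Turtlebot",
    "obs_modalities": ["rgb", "depth", "seg_semantic"],
}
# After building the environment with this robot config:
obs, info = env.reset()
print(obs.keys())  # per-robot dicts, nested by sensor and then modality
```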
<table markdown="span">
<tr>
@ -219,7 +219,7 @@ Vision observations are captured by the [`VisionSensor`](../reference/sensors/vi
</table>
### Range
Range observations are captured by the [`ScanSensor`](../reference/sensors/scan_sensor.html) class, which encapsulates a virtual 2D LiDAR range sensor with the following observations:
Range observations are captured by the [`ScanSensor`](../reference/sensors/scan_sensor.md) class, which encapsulates a virtual 2D LiDAR range sensor with the following observations:
<table markdown="span">
<tr>

View File

@ -6,7 +6,7 @@ icon: octicons/gear-24
Macros are a global set of hard-coded, "magic" numbers that are used as default values across **OmniGibson**. These values can have significant implications that broadly impact **OmniGibson**'s runtime (such as setting `HEADLESS` or `DEFAULT_PHYSICS_FREQ`), or can have a much more narrow scope that impacts only a specific module within **OmniGibson** (such as `FIRE_EMITTER_HEIGHT_RATIO`).
All macros within **OmniGibson** can be directly accessed and set via the [`omnigibson/macros.py`](../reference/macros.html) module. There are two sets of macros:
All macros within **OmniGibson** can be directly accessed and set via the [`omnigibson/macros.py`](../reference/macros.md) module. There are two sets of macros:
1. **Global Macros**: Accessed via the `gm` module variable, these are fundamental settings that generally impact all parts of **OmniGibson** runtime, and include values such as `gm.HEADLESS`, `gm.DEFAULT_PHYSICS_FREQ`, and `gm.ENABLE_HQ_RENDERING`. Descriptions of each global macro can be seen directly in the `omnigibson/macros.py` file.
2. **Module Macros**: Accessed via the `macros` module variable, these are module-level settings used by individual modules throughout **OmniGibson**. These tend to only impact the module they are defined in, though they can be referenced by other modules as well. Examples include values such as `macros.objects.stateful_object.FIRE_EMITTER_HEIGHT_RATIO` and `macros.robots.manipulation_robot.ASSIST_GRASP_MASS_THRESHOLD`. Descriptions of each module-level macro can be seen directly at the top of the module that it is defined in.
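A minimal sketch of both macro flavors follows; note that macros generally must be set before OmniGibson launches to take full effect, and the module-macro path below mirrors the example named in the text.
```python
# Hedged sketch: setting global and module-level macros before launch.
from omnigibson.macros import gm, macros

gm.HEADLESS = True              # run without the viewer window
gm.ENABLE_HQ_RENDERING = False  # trade render quality for speed
macros.robots.manipulation_robot.ASSIST_GRASP_MASS_THRESHOLD = 10.0
```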

View File

@ -6,7 +6,7 @@ icon: material/repeat
## Description
**`OmniGibson`**'s [Simulator](../reference/simulator.html) class is the global singleton that serves as the interface with omniverse's low-level physx (physics) backend. It provides utility functions for modulating the ongoing simulation as well as the low-level interface for importing scenes and objects. For standard use-cases, interfacing with the simulation exclusively through a created [environment](./environments.md) or [vector environment](./vector_environments.md) should be sufficient, though for more advanced or prototyping use-cases you may need to interface directly with this simulator class.
**`OmniGibson`**'s [Simulator](../reference/simulator.md) class is the global singleton that serves as the interface with omniverse's low-level physx (physics) backend. It provides utility functions for modulating the ongoing simulation as well as the low-level interface for importing scenes and objects. For standard use-cases, interfacing with the simulation exclusively through a created [environment](./environments.md) or [vector environment](./vector_environments.md) should be sufficient, though for more advanced or prototyping use-cases you may need to interface directly with this simulator class.
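A hedged sketch of accessing the singleton follows; once OmniGibson is launched, the simulator is available as `og.sim`, and the `stop` / `play` / `step` methods below match the Simulator API described here (check exact behavior per version).
```python
import omnigibson as og

env = og.Environment(configs=cfg)  # creating an env launches the simulator
og.sim.stop()   # stop physics; the stage can now be edited safely
og.sim.play()   # resume physics
og.sim.step()   # advance the simulation by one step
```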
## Usage

View File

@ -6,7 +6,7 @@ icon: material/water-outline
## Description
**`OmniGibson`**'s [`System`](../reference/systems/base_system.html)s represent scene singletons that each encapsulate a single particle type. These systems provide functionality for generating, tracking, and removing any number of particles arbitrarily located throughout the current scene.
**`OmniGibson`**'s [`System`](../reference/systems/base_system.md)s represent scene singletons that each encapsulate a single particle type. These systems provide functionality for generating, tracking, and removing any number of particles arbitrarily located throughout the current scene.
## Usage
@ -14,7 +14,7 @@ icon: material/water-outline
For efficiency reasons, systems are created dynamically on an as-needed basis. A system can be dynamically created (or referenced, if it already exists) via `scene.get_system(name)`, where `name` defines the name of the system. If you do not wish to initialize a system when referencing it, e.g. for performance reasons, use the `force_init` flag: `scene.get_system(name, force_init=False)`. For a list of all possible system names, see `scene.system_registry.objects`.
### Runtime
A given system can be accessed at any time via `scene.get_system(...)`. Systems can generate particles via `system.generate_particles(...)`, track their states via `system.get_particles_position_orientation()`, and remove them via `system.remove_particles(...)`. Please refer to the [`System`'s API Reference](../reference/systems/base_system.html) for specific information regarding arguments. Moreover, specific subclasses may implement more complex generation behavior, such as `VisualParticleSystem`'s `generate_group_particles(...)`, which spawns visual (non-collidable) particles that are attached to a specific object.
A given system can be accessed at any time via `scene.get_system(...)`. Systems can generate particles via `system.generate_particles(...)`, track their states via `system.get_particles_position_orientation()`, and remove them via `system.remove_particles(...)`. Please refer to the [`System`'s API Reference](../reference/systems/base_system.md) for specific information regarding arguments. Moreover, specific subclasses may implement more complex generation behavior, such as `VisualParticleSystem`'s `generate_group_particles(...)`, which spawns visual (non-collidable) particles that are attached to a specific object.
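A hedged sketch of that lifecycle follows; passing an `(N, 3)` `positions` array matches the documented usage of `generate_particles`, while the `idxs` kwarg of `remove_particles` is an assumption to verify against the API reference.
```python
# Hedged sketch: generate, query, and remove particles via a system handle.
import numpy as np

water = scene.get_system("water")
water.generate_particles(positions=np.array([[0.0, 0.0, 1.0],
                                             [0.0, 0.1, 1.0]]))
pos, orn = water.get_particles_position_orientation()
water.remove_particles(idxs=np.arange(len(pos)))  # clear all current particles
```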
## Types
@ -23,7 +23,7 @@ A given system can be accessed at any time via `scene.get_system(...)`. Systems
<table markdown="span">
<tr>
<td valign="top" width="60%">
[**`GranularSystem`**](../reference/systems/micro_particle_system.html#systems.micro_particle_system.GranularSystem)<br><br>
[**`GranularSystem`**](../reference/systems/micro_particle_system.md#systems.micro_particle_system.GranularSystem)<br><br>
Represents particles that are fine-grained and are generally less than a centimeter in size, such as brown rice, black pepper, and chia seeds. These are particles subject to physics.<br><br>**Collides with...**
<ul>
<li>_Rigid bodies_: Yes</li>
@ -38,7 +38,7 @@ A given system can be accessed at any time via `scene.get_system(...)`. Systems
</tr>
<tr>
<td valign="top" width="60%">
[**`FluidSystem`**](../reference/systems/micro_particle_system.html#systems.micro_particle_system.FluidSystem)<br><br>
[**`FluidSystem`**](../reference/systems/micro_particle_system.md#systems.micro_particle_system.FluidSystem)<br><br>
Represents particles that are relatively homogeneous and liquid (though potentially viscous) in nature, such as water, baby oil, and hummus. These are particles subject to physics.<br><br>**Collides with...**
<ul>
<li>_Rigid bodies_: Yes</li>
@ -53,7 +53,7 @@ A given system can be accessed at any time via `scene.get_system(...)`. Systems
</tr>
<tr>
<td valign="top" width="60%">
[**`MacroPhysicalParticleSystem`**](../reference/systems/macro_particle_system.html#systems.macro_particle_system.MacroPhysicalParticleSystem)<br><br>
[**`MacroPhysicalParticleSystem`**](../reference/systems/macro_particle_system.md#systems.macro_particle_system.MacroPhysicalParticleSystem)<br><br>
Represents particles that are small but replicable, such as pills, diced fruit, and hair. These are particles subject to physics.<br><br>**Collides with...**
<ul>
<li>_Rigid bodies_: Yes</li>
@ -68,7 +68,7 @@ A given system can be accessed at any time via `scene.get_system(...)`. Systems
</tr>
<tr>
<td valign="top" width="60%">
[**`MacroVisualParticleSystem`**](../reference/systems/macro_particle_system.html#systems.macro_particle_system.MacroVisualParticleSystem)<br><br>
[**`MacroVisualParticleSystem`**](../reference/systems/macro_particle_system.md#systems.macro_particle_system.MacroVisualParticleSystem)<br><br>
Represents particles that are usually flat and varied, such as stains, lint, and moss. These are particles not subject to physics, and are attached rigidly to specific objects in the scene.<br><br>**Collides with...**
<ul>
<li>_Rigid bodies_: No</li>

View File

@ -70,7 +70,7 @@ Internally, `Environment`'s `reset` method will call the task's `reset` method,
<table markdown="span">
<tr>
<td valign="top">
[**`DummyTask`**](../reference/tasks/dummy_task.html)<br><br>
[**`DummyTask`**](../reference/tasks/dummy_task.md)<br><br>
Dummy task with trivial implementations.
<ul>
<li>`termination_conditions`: empty dict.</li>
@ -83,7 +83,7 @@ Internally, `Environment`'s `reset` method will call the task's `reset` method,
</tr>
<tr>
<td valign="top">
[**`PointNavigationTask`**](../reference/tasks/point_navigation_task.html)<br><br>
[**`PointNavigationTask`**](../reference/tasks/point_navigation_task.md)<br><br>
PointGoal navigation task with fixed / randomized initial pose and goal location.
<ul>
<li>`termination_conditions`: `MaxCollision`, `Timeout`, `PointGoal`.</li>
@ -96,7 +96,7 @@ Internally, `Environment`'s `reset` method will call the task's `reset` method,
</tr>
<tr>
<td valign="top">
[**`PointReachingTask`**](../reference/tasks/point_reaching_task.html)<br><br>
[**`PointReachingTask`**](../reference/tasks/point_reaching_task.md)<br><br>
Similar to PointNavigationTask, except the goal is specified with respect to the robot's end effector.
<ul>
<li>`termination_conditions`: `MaxCollision`, `Timeout`, `PointGoal`.</li>
@ -109,7 +109,7 @@ Internally, `Environment`'s `reset` method will call the task's `reset` method,
</tr>
<tr>
<td valign="top">
[**`GraspTask`**](../reference/tasks/grasp_task.html)<br><br>
[**`GraspTask`**](../reference/tasks/grasp_task.md)<br><br>
Grasp task for a single object.
<ul>
<li>`termination_conditions`: `Timeout`.</li>
@ -122,7 +122,7 @@ Internally, `Environment`'s `reset` method will call the task's `reset` method,
</tr>
<tr>
<td valign="top">
[**`BehaviorTask`**](../reference/tasks/behavior_task.html)<br><br>
[**`BehaviorTask`**](../reference/tasks/behavior_task.md)<br><br>
BEHAVIOR task of long-horizon household activity.
<ul>
<li>`termination_conditions`: `Timeout`, `PredicateGoal`.</li>
@ -136,50 +136,50 @@ Internally, `Environment`'s `reset` method will call the task's `reset` method,
</table>
!!! info annotate "Follow our tutorial on BEHAVIOR tasks!"
To better understand how to use / sample / load / customize BEHAVIOR tasks, please read our [tutorial](../tutorials/behavior_tasks.html)!
To better understand how to use / sample / load / customize BEHAVIOR tasks, please read our [tutorial](../tutorials/behavior_tasks.md)!
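As a hedged sketch, tasks are selected via the `"task"` block of the environment config, with kwargs forwarded to the task constructor; the `PointNavigationTask` key names below are assumptions rather than verified signatures.
```python
cfg["task"] = {
    "type": "PointNavigationTask",
    "robot_idn": 0,               # which robot the task evaluates (assumed)
    "goal_pos": [1.0, 1.0, 0.0],  # fixed goal location (assumed key name)
}
env = og.Environment(configs=cfg)
```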
### `TerminationCondition`
<table markdown="span">
<tr>
<td valign="top">
[**`Timeout`**](../reference/termination_conditions/timeout.html)<br><br>
[**`Timeout`**](../reference/termination_conditions/timeout.md)<br><br>
`FailureCondition`: episode terminates if `max_step` steps have passed.
</td>
</tr>
<tr>
<td valign="top">
[**`Falling`**](../reference/termination_conditions/falling.html)<br><br>
[**`Falling`**](../reference/termination_conditions/falling.md)<br><br>
`FailureCondition`: episode terminates if the robot can no longer function (i.e.: it falls below the floor height by at least `fall_height`, or tilts too much, by at least `tilt_tolerance`).
</td>
</tr>
<tr>
<td valign="top">
[**`MaxCollision`**](../reference/termination_conditions/max_collision.html)<br><br>
[**`MaxCollision`**](../reference/termination_conditions/max_collision.md)<br><br>
`FailureCondition`: episode terminates if the robot has collided more than `max_collisions` times.
</td>
</tr>
<tr>
<td valign="top">
[**`PointGoal`**](../reference/termination_conditions/point_goal.html)<br><br>
[**`PointGoal`**](../reference/termination_conditions/point_goal.md)<br><br>
`SuccessCondition`: episode terminates if the point goal is reached within `distance_tol` by the robot's base.
</td>
</tr>
<tr>
<td valign="top">
[**`ReachingGoal`**](../reference/termination_conditions/reaching_goal.html)<br><br>
[**`ReachingGoal`**](../reference/termination_conditions/reaching_goal.md)<br><br>
`SuccessCondition`: episode terminates if the reaching goal is reached within `distance_tol` by the robot's end effector.
</td>
</tr>
<tr>
<td valign="top">
[**`GraspGoal`**](../reference/termination_conditions/grasp_goal.html)<br><br>
[**`GraspGoal`**](../reference/termination_conditions/grasp_goal.md)<br><br>
`SuccessCondition`: episode terminates if the target object has been grasped (via assistive grasping).
</td>
</tr>
<tr>
<td valign="top">
[**`PredicateGoal`**](../reference/termination_conditions/predicate_goal.html)<br><br>
[**`PredicateGoal`**](../reference/termination_conditions/predicate_goal.md)<br><br>
`SuccessCondition`: episode terminates if all the goal predicates of `BehaviorTask` are satisfied.
</td>
</tr>
@ -190,25 +190,25 @@ Internally, `Environment`'s `reset` method will call the task's `reset` method,
<table markdown="span">
<tr>
<td valign="top">
[**`CollisionReward`**](../reference/reward_functions/collision_reward.html)<br><br>
[**`CollisionReward`**](../reference/reward_functions/collision_reward.md)<br><br>
Penalization of robot collision with non-floor objects, with a negative weight `r_collision`.
</td>
</tr>
<tr>
<td valign="top">
[**`PointGoalReward`**](../reference/reward_functions/point_goal_reward.html)<br><br>
[**`PointGoalReward`**](../reference/reward_functions/point_goal_reward.md)<br><br>
Reward for reaching the goal with the robot's base, with a positive weight `r_pointgoal`.
</td>
</tr>
<tr>
<td valign="top">
[**`ReachingGoalReward`**](../reference/reward_functions/reaching_goal_reward.html)<br><br>
[**`ReachingGoalReward`**](../reference/reward_functions/reaching_goal_reward.md)<br><br>
Reward for reaching the goal with the robot's end-effector, with a positive weight `r_reach`.
</td>
</tr>
<tr>
<td valign="top">
[**`PotentialReward`**](../reference/reward_functions/potential_reward.html)<br><br>
[**`PotentialReward`**](../reference/reward_functions/potential_reward.md)<br><br>
Reward for decreasing some arbitrary potential function value, with a positive weight `r_potential`.
It assumes the task already has `get_potential` implemented.
Generally low potential is preferred (e.g. a common potential for a goal-directed task is the distance to the goal).
@ -216,7 +216,7 @@ Internally, `Environment`'s `reset` method will call the task's `reset` method,
</tr>
<tr>
<td valign="top">
[**`GraspReward`**](../reference/reward_functions/grasp_reward.html)<br><br>
[**`GraspReward`**](../reference/reward_functions/grasp_reward.md)<br><br>
Reward for grasping an object. It not only evaluates the success of object grasping but also considers various penalties and efficiencies.
The reward is calculated based on several factors:
<ul>

View File

@ -6,7 +6,7 @@ icon: material/magic-staff
## Description
Transition rules are **`OmniGibson`**'s method for simulating complex physical phenomena not directly supported by the underlying omniverse physics engine, such as slicing, blending, and cooking. A given [`TransitionRule`](../reference/transition_rules.html#transition_rules.BaseTransitionRule) dynamically checks for its internal sets of conditions, and, if validated, executes its corresponding `transition`.
Transition rules are **`OmniGibson`**'s method for simulating complex physical phenomena not directly supported by the underlying omniverse physics engine, such as slicing, blending, and cooking. A given [`TransitionRule`](../reference/transition_rules.md#transition_rules.BaseTransitionRule) dynamically checks for its internal sets of conditions, and, if validated, executes its corresponding `transition`.
!!! info annotate "Transition Rules must be enabled before usage!"
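Enabling is, as far as we can tell, a one-line global macro toggle that must be set before the simulator is launched; treat the macro name below as an assumption and verify it against `omnigibson.macros`.

```python
# Assumed macro name -- set this before creating the first og.Environment.
from omnigibson.macros import gm

gm.ENABLE_TRANSITION_RULES = True
```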
@@ -17,10 +17,10 @@ Transition rules are **`OmniGibson`**'s method for simulating complex physical p
## Usage
### Creating
Because `TransitionRule`s are monolithic classes, these should be defined _before_ **`OmniGibson`** is launched. A rule can be easily extended by subclassing the `BaseTransitionRule` class and implementing the necessary functions. For a simple example, please see the [`SlicingRule`](../reference/transition_rules.html#transition_rules.SlicingRule) class.
Because `TransitionRule`s are monolithic classes, these should be defined _before_ **`OmniGibson`** is launched. A rule can be easily extended by subclassing the `BaseTransitionRule` class and implementing the necessary functions. For a simple example, please see the [`SlicingRule`](../reference/transition_rules.md#transition_rules.SlicingRule) class.
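As a skeletal example, a custom rule might look like the sketch below. The hook names and signatures are assumptions patterned after what `SlicingRule` appears to implement; copy the actual method set from that class.

```python
# Hedged sketch of a custom rule -- hook names and signatures are assumptions
# patterned after SlicingRule; consult transition_rules.py for the real ones.
from omnigibson.transition_rules import BaseTransitionRule

class ShatteringRule(BaseTransitionRule):  # hypothetical rule
    @classmethod
    def candidate_filters(cls):
        # Declare which object / system categories this rule cares about, so
        # the TransitionRuleAPI only activates it when candidates exist.
        raise NotImplementedError

    @classmethod
    def transition(cls, object_candidates):
        # Validate rule-specific conditions on the candidates and return the
        # objects / systems to add to and remove from the scene.
        raise NotImplementedError
```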
### Runtime
At runtime, each scene owns a [`TransitionRuleAPI`](../reference/transition_rules.html#transition_rules.TransitionRuleAPI) instance, which automatically handles the stepping and processing of all defined transition rule classes. For efficiency reasons, rules are dynamically loaded and checked based on the object / system set currently active in the scene. A rule will only be checked if there is at least one valid candidate combination amongst the current object / system set. For example, if there is no sliceable object present in this scene, then `SlicingRule` will not be active. Every time an object / system is added / removed from the scene, all rules are refreshed so that the current active transition rule set is always accurate.
At runtime, each scene owns a [`TransitionRuleAPI`](../reference/transition_rules.md#transition_rules.TransitionRuleAPI) instance, which automatically handles the stepping and processing of all defined transition rule classes. For efficiency reasons, rules are dynamically loaded and checked based on the object / system set currently active in the scene. A rule will only be checked if there is at least one valid candidate combination amongst the current object / system set. For example, if there is no sliceable object present in this scene, then `SlicingRule` will not be active. Every time an object / system is added / removed from the scene, all rules are refreshed so that the current active transition rule set is always accurate.
In general, you should not need to interface with the `TransitionRuleAPI` class at all -- if your rule implementation is correct, then the API will automatically handle the transition when the appropriate conditions are met!
@@ -31,7 +31,7 @@ In general, you should not need to interface with the `TransitionRuleAPI` class
<table markdown="span">
<tr>
<td valign="top" width="60%">
[**`SlicingRule`**](../reference/transition_rules.html#transition_rules.SlicingRule)<br><br>
[**`SlicingRule`**](../reference/transition_rules.md#transition_rules.SlicingRule)<br><br>
Encapsulates slicing an object into halves (e.g.: slicing an apple).<br><br>**Required Candidates**
<ul>
<li>1+ sliceable objects</li>
@@ -53,7 +53,7 @@ In general, you should not need to interface with the `TransitionRuleAPI` class
</tr>
<tr>
<td valign="top" width="60%">
[**`DicingRule`**](../reference/transition_rules.html#transition_rules.DicingRule)<br><br>
[**`DicingRule`**](../reference/transition_rules.md#transition_rules.DicingRule)<br><br>
Encapsulates dicing a diceable object into small chunks (e.g.: dicing an apple).<br><br>**Required Candidates**
<ul>
<li>1+ diceable objects</li>
@@ -75,7 +75,7 @@ In general, you should not need to interface with the `TransitionRuleAPI` class
</tr>
<tr>
<td valign="top" width="60%">
[**`MeltingRule`**](../reference/transition_rules.html#transition_rules.MeltingRule)<br><br>
[**`MeltingRule`**](../reference/transition_rules.md#transition_rules.MeltingRule)<br><br>
Encapsulates melting an object into liquid (e.g.: melting chocolate).<br><br>**Required Candidates**
<ul>
<li>1+ meltable objects</li>
@@ -95,7 +95,7 @@ In general, you should not need to interface with the `TransitionRuleAPI` class
</tr>
<tr>
<td valign="top" width="60%">
[**`CookingPhysicalParticleRule`**](../reference/transition_rules.html#transition_rules.CookingPhysicalParticleRule)<br><br>
[**`CookingPhysicalParticleRule`**](../reference/transition_rules.md#transition_rules.CookingPhysicalParticleRule)<br><br>
Encapsulates cooking physical particles (e.g.: boiling water).<br><br>**Required Candidates**
<ul>
<li>1+ fillable and heatable objects</li>
@@ -115,7 +115,7 @@ In general, you should not need to interface with the `TransitionRuleAPI` class
</tr>
<tr>
<td valign="top" width="60%">
[**`ToggleableMachineRule`**](../reference/transition_rules.html#transition_rules.ToggleableMachineRule)<br><br>
[**`ToggleableMachineRule`**](../reference/transition_rules.md#transition_rules.ToggleableMachineRule)<br><br>
Encapsulates transformative changes when a button is pressed (e.g.: blending a smoothie). Valid transitions are defined by a pre-defined set of "recipes" (input / output combinations).<br><br>**Required Candidates**
<ul>
<li>1+ fillable and toggleable objects</li>
@@ -135,7 +135,7 @@ In general, you should not need to interface with the `TransitionRuleAPI` class
</tr>
<tr>
<td valign="top" width="60%">
[**`MixingToolRule`**](../reference/transition_rules.html#transition_rules.MixingToolRule)<br><br>
[**`MixingToolRule`**](../reference/transition_rules.md#transition_rules.MixingToolRule)<br><br>
Encapsulates transformative changes during tool-driven mixing (e.g.: mixing a drink with a stirrer). Valid transitions are defined by a pre-defined set of "recipes" (input / output combinations).<br><br>**Required Candidates**
<ul>
<li>1+ fillable objects</li>
@@ -157,7 +157,7 @@ In general, you should not need to interface with the `TransitionRuleAPI` class
</tr>
<tr>
<td valign="top" width="60%">
[**`CookingRule`**](../reference/transition_rules.html#transition_rules.CookingRule)<br><br>
[**`CookingRule`**](../reference/transition_rules.md#transition_rules.CookingRule)<br><br>
Encapsulates transformative changes during cooking (e.g.: baking a cake). Valid transitions are defined by a pre-defined set of "recipes" (input / output combinations).<br><br>**Required Candidates**
<ul>
<li>1+ fillable objects</li>
@@ -179,7 +179,7 @@ In general, you should not need to interface with the `TransitionRuleAPI` class
</tr>
<tr>
<td valign="top" width="60%">
[**`WasherRule`**](../reference/transition_rules.html#transition_rules.WasherRule)<br><br>
[**`WasherRule`**](../reference/transition_rules.md#transition_rules.WasherRule)<br><br>
Encapsulates the washing mechanism (e.g.: cleaning clothes in the washing machine with detergent). Washing behavior (i.e.: what types of particles are removed from clothes during washing) is predefined.<br><br>**Required Candidates**
<ul>
<li>1+ washer objects</li>
@@ -200,7 +200,7 @@ In general, you should not need to interface with the `TransitionRuleAPI` class
</tr>
<tr>
<td valign="top" width="60%">
[**`DryerRule`**](../reference/transition_rules.html#transition_rules.DryerRule)<br><br>
[**`DryerRule`**](../reference/transition_rules.md#transition_rules.DryerRule)<br><br>
Encapsulates the drying mechanism (e.g.: drying clothes in the drying machine).<br><br>**Required Candidates**
<ul>
<li>1+ clothes_dryer objects</li>

View File

@@ -82,7 +82,7 @@ Now that we have the USD file for the robot, let's write our own robot class. Fo
1. Create a new Python file named after your robot. In our case, our file exists under `omnigibson/robots` and is named `stretch.py`.
2. Determine which robot interfaces it should inherit. We currently support three modular interfaces that can be used together: [`LocomotionRobot`](../reference/robots/locomotion_robot.html) for robots whose bases can move (and a more specific [`TwoWheelRobot`](../reference/robots/two_wheel_robot.html) for locomotive robots that only have two wheels), [`ManipulationRobot`](../reference/robots/manipulation_robot.html) for robots equipped with one or more arms and grippers, and [`ActiveCameraRobot`](../reference/robots/active_camera_robot.html) for robots with a controllable head or camera mount. In our case, our robot is a mobile manipulator with a moveable camera mount, so our Python class inherits all three interfaces.
2. Determine which robot interfaces it should inherit. We currently support three modular interfaces that can be used together: [`LocomotionRobot`](../reference/robots/locomotion_robot.html) for robots whose bases can move (and a more specific [`TwoWheelRobot`](../reference/robots/two_wheel_robot.html) for locomotive robots that only have two wheels), [`ManipulationRobot`](../reference/robots/manipulation_robot.html) for robots equipped with one or more arms and grippers, and [`ActiveCameraRobot`](../reference/robots/active_camera_robot.md) for robots with a controllable head or camera mount. In our case, our robot is a mobile manipulator with a moveable camera mount, so our Python class inherits all three interfaces.
3. You must implement all required abstract properties defined by each respective inherited robot interface. In the simplest case, this just means defining relevant metadata from the original robot source files, such as the relevant joint / link names and absolute paths to the corresponding robot URDF and USD files. A rough skeleton is sketched below; please also see our annotated `stretch.py` module, which serves as a good starting point that you can modify.
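The skeleton below illustrates the shape of such a class. The property names are illustrative assumptions -- the authoritative list is whatever each inherited interface marks as abstract, and the annotated `stretch.py` module is the ground truth.

```python
# Hedged skeleton of a custom mobile manipulator -- property names here are
# assumptions; implement whatever your inherited interfaces mark as abstract.
from omnigibson.robots.active_camera_robot import ActiveCameraRobot
from omnigibson.robots.locomotion_robot import LocomotionRobot
from omnigibson.robots.manipulation_robot import ManipulationRobot

class MyRobot(ManipulationRobot, LocomotionRobot, ActiveCameraRobot):
    @property
    def usd_path(self):  # assumed property name
        return "/absolute/path/to/my_robot/my_robot.usd"

    @property
    def urdf_path(self):  # assumed property name
        return "/absolute/path/to/my_robot/my_robot.urdf"

    # ...plus the joint / link name metadata each interface requires,
    # e.g. wheel, arm, gripper, and camera joint names.
```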

View File

@@ -8,7 +8,7 @@ icon: material/wrench-outline
## Customizing Action Spaces
A robot is equipped with multiple controllers, each of which controls a subset of the robot's low-level joint motors. Together, these controllers' inputs form the robot's corresponding action space. For example, a [Fetch](../reference/robots/fetch.html) robot consists of (a) a base controller controlling its two wheels, (b) a head controller controlling its two head joints, (c) an arm controller controlling its seven arm joints, and (d) a gripper controller controlling its two gripper joints (resulting in 13 DOF being controlled). An example set of controllers would be using a [DifferentialDriveController](../reference/controllers/dd_controller.html) for the base, [JointController](../reference/controllers/joint_controller.html)s for the head and arm, and a binary [MultiFingerGripperController](../reference/controllers/multi_finger_gripper_controller.html) for the gripper. In this case, the action space size would be 2 + 2 + 7 + 1 = 12. If we were to use an [InverseKinematicsController](../reference/controllers/ik_controller.html) commanding the 6DOF end-effector pose instead of the JointController for the arm, the action space size would be 2 + 2 + 6 + 1 = 11. Each of these controllers can be individually configured and swapped out for each robot.
A robot is equipped with multiple controllers, each of which controls a subset of the robot's low-level joint motors. Together, these controllers' inputs form the robot's corresponding action space. For example, a [Fetch](../reference/robots/fetch.html) robot consists of (a) a base controller controlling its two wheels, (b) a head controller controlling its two head joints, (c) an arm controller controlling its seven arm joints, and (d) a gripper controller controlling its two gripper joints (resulting in 13 DOF being controlled). An example set of controllers would be using a [DifferentialDriveController](../reference/controllers/dd_controller.html) for the base, [JointController](../reference/controllers/joint_controller.html)s for the head and arm, and a binary [MultiFingerGripperController](../reference/controllers/multi_finger_gripper_controller.html) for the gripper. In this case, the action space size would be 2 + 2 + 7 + 1 = 12. If we were to use an [InverseKinematicsController](../reference/controllers/ik_controller.md) commanding the 6DOF end-effector pose instead of the JointController for the arm, the action space size would be 2 + 2 + 6 + 1 = 11. Each of these controllers can be individually configured and swapped out for each robot.
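One quick way to sanity-check this arithmetic is to sum the per-controller command dimensions at runtime. The attribute names below (`controllers`, `command_dim`, `action_dim`) are assumptions about the robot / controller API, not verified signatures.

```python
# Hedged sketch -- attribute names are assumed, not verified.
# Assumes an existing og.Environment instance bound to `env`.
robot = env.robots[0]
for name, controller in robot.controllers.items():
    print(f"{name}: command dim = {controller.command_dim}")
# For the Fetch setup above: base=2, head=2, arm=7, gripper=1 -> total 12.
print(f"total action dim: {robot.action_dim}")
```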
### Modifying Via Config
@@ -114,7 +114,7 @@ Robots' action spaces can also be modified at runtime after a robot has been imp
## Customizing Observation Spaces
A robot is equipped with multiple onboard sensors, each of which can be configured to return a unique set of observations. Together, these observation modalities form the robot's observation space. For example, a [Turtlebot](../reference/robots/turtlebot.html) robot consists of (a) a LIDAR ([ScanSensor](../reference/sensors/scan_sensor.html)) at its base, (b) an RGB-D camera ([VisionSensor](../reference/sensors/vision_sensor.html)) at its head, and (c) onboard proprioception. An example set of observations would be using modalities `["rgb", "normal", "proprio", "scan"]`, which would return RGB and surface normal maps, proprioception, and 2D radial LIDAR distances. Each of these modalities can be swapped out depending on the robot's set of equipped onboard sensors, and each sensor can be individually configured for each robot. Please see the individual sensor classes for specific supported modalities.
A robot is equipped with multiple onboard sensors, each of which can be configured to return a unique set of observations. Together, these observation modalities form the robot's observation space. For example, a [Turtlebot](../reference/robots/turtlebot.html) robot consists of (a) a LIDAR ([ScanSensor](../reference/sensors/scan_sensor.html)) at its base, (b) an RGB-D camera ([VisionSensor](../reference/sensors/vision_sensor.md)) at its head, and (c) onboard proprioception. An example set of observations would be using modalities `["rgb", "normal", "proprio", "scan"]`, which would return RGB and surface normal maps, proprioception, and 2D radial LIDAR distances. Each of these modalities can be swapped out depending on the robot's set of equipped onboard sensors, and each sensor can be individually configured for each robot. Please see the individual sensor classes for specific supported modalities.
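To inspect which modalities actually populate the observation dict for a given robot, something like the sketch below may help. The gymnasium-style `(obs, info)` return signature, the `"robot0"` key, and the dict nesting are all assumptions for illustration.

```python
# Hedged sketch -- return signature and dict nesting are assumptions.
obs, info = env.reset()        # assuming a gymnasium-style (obs, info) pair
robot_obs = obs["robot0"]      # "robot0" is an assumed default robot name
print(list(robot_obs.keys()))  # e.g. per-sensor or per-modality entries
```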
### Modifying Via Config