Move dataset downloader into OmniGibson so it gets shipped via PyPI

Cem Gökmen 2024-07-24 11:50:37 -07:00
parent 36c1a057f6
commit f6e5fa5cd4
4 changed files with 7 additions and 7 deletions

View File

@@ -59,7 +59,7 @@ RUN micromamba run -n omnigibson /bin/bash --login -c 'source /isaac-sim/setup_c
 RUN micromamba run -n omnigibson python -c "from ompl import base"
 # Add setup to be executed on bash launch
-RUN echo "OMNIGIBSON_NO_OMNIVERSE=1 python scripts/download_datasets.py" >> /root/.bashrc
+RUN echo "OMNIGIBSON_NO_OMNIVERSE=1 python omnigibson/download_datasets.py" >> /root/.bashrc
 # Copy over omnigibson source
 ADD . /omnigibson-src

View File

@@ -82,7 +82,7 @@ There are three ways to setup **`OmniGibson`**, all built upon different ways of
 5. Download **`OmniGibson`** dataset and assets:
 ```shell
-python scripts/download_datasets.py
+python omnigibson/download_datasets.py
 ```
 </div>
@@ -184,7 +184,7 @@ There are three ways to setup **`OmniGibson`**, all built upon different ways of
 4. Download **`OmniGibson`** dataset (within the conda env):
 ```shell
-python scripts/download_datasets.py
+python omnigibson/download_datasets.py
 ```
 </div>
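
After running the downloader in either setup path, the install locations come from the `gm` macros referenced by `download_datasets.py` (see the last file in this diff). A minimal sketch for checking where the data landed, assuming `gm` is importable from `omnigibson.macros`:

```python
# Minimal sketch: print where OmniGibson expects the downloaded data.
# Assumes `gm` (global macros) is importable from omnigibson.macros,
# as it is in download_datasets.py shown later in this diff.
from omnigibson.macros import gm

print(f"dataset path: {gm.DATASET_PATH}")
print(f"assets path:  {gm.ASSET_PATH}")
```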

View File

@@ -14,9 +14,9 @@ We assume the SLURM cluster using the _enroot_ container software, which is a re
 With enroot installed, you can follow the below steps to run OmniGibson on SLURM:
-1. Download the dataset to a location that is accessible by cluster nodes. To do this, you can use the download_dataset.py script inside OmniGibson's scripts directory, and move it to the right spot later. In the below example, /cvgl/ is a networked drive that is accessible by the cluster nodes. **For Stanford users, this step is already done for SVL and Viscam nodes**
+1. Download the dataset to a location that is accessible by cluster nodes. To do this, you can run the download_datasets.py script inside the omnigibson module, then move the downloaded data to the right spot later. In the below example, /cvgl/ is a networked drive that is accessible by the cluster nodes. **For Stanford users, this step is already done for SVL and Viscam nodes**
 ```{.shell .annotate}
-OMNIGIBSON_NO_OMNIVERSE=1 python scripts/download_dataset.py
+OMNIGIBSON_NO_OMNIVERSE=1 python omnigibson/download_datasets.py
 mv omnigibson/data /cvgl/group/Gibson/og-data-0-2-1
 ```
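
Once the data sits on the networked drive, OmniGibson still needs to be told where it lives. One hedged option is to override the `gm` paths at runtime before anything else reads them; the subdirectory names below are assumptions and should be matched to the actual layout under `/cvgl/group/Gibson/og-data-0-2-1`:

```python
# Hedged sketch: point OmniGibson at the dataset copied to the networked
# drive. The og_dataset/assets subdirectory names are assumptions; check
# the actual layout under /cvgl/group/Gibson/og-data-0-2-1 on your cluster.
from omnigibson.macros import gm

gm.DATASET_PATH = "/cvgl/group/Gibson/og-data-0-2-1/og_dataset"
gm.ASSET_PATH = "/cvgl/group/Gibson/og-data-0-2-1/assets"
```

If the tutorial's enroot setup already mounts this data into the container at the default location, the override above is unnecessary.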

View File

@@ -21,7 +21,7 @@ def main():
     print(f" dataset (~25GB): {gm.DATASET_PATH}")
     print(f" assets (~2.5GB): {gm.ASSET_PATH}")
     print(
-        f"If you want to install data under a different path, please change the DATA_PATH variable in omnigibson/macros.py and rerun scripts/download_dataset.py."
+        f"If you want to install data under a different path, please change the DATA_PATH variable in omnigibson/macros.py and rerun omnigibson/download_datasets.py."
     )
     if click.confirm("Do you want to continue?"):
         # Only download if the dataset path doesn't exist
@@ -37,7 +37,7 @@ def main():
         print("\nOmniGibson setup completed!\n")
     else:
         print(
-            "You chose not to install dataset for now. You can install it later by running python scripts/download_dataset.py."
+            "You chose not to install dataset for now. You can install it later by running python omnigibson/download_datasets.py."
         )
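
For context, the fragments above fit into a small click-based entry point. Below is a hedged reconstruction of the relocated `omnigibson/download_datasets.py` flow; the helper names `download_og_dataset()` and `download_assets()` from `omnigibson.utils.asset_utils`, and any wording outside the diff fragments, are assumptions rather than verbatim source.

```python
# Hedged sketch of the relocated downloader's flow, assembled from the
# fragments shown in this diff. Helper names and glue code are assumptions.
import os

# The Dockerfile invokes this with OMNIGIBSON_NO_OMNIVERSE=1 so that importing
# omnigibson does not try to launch Omniverse during the download.
os.environ.setdefault("OMNIGIBSON_NO_OMNIVERSE", "1")

import click

from omnigibson.macros import gm
# Assumed location of the download helpers; not shown in the diff.
from omnigibson.utils.asset_utils import download_assets, download_og_dataset


def main():
    print(f" dataset (~25GB): {gm.DATASET_PATH}")
    print(f" assets (~2.5GB): {gm.ASSET_PATH}")
    print(
        "If you want to install data under a different path, please change the "
        "DATA_PATH variable in omnigibson/macros.py and rerun "
        "omnigibson/download_datasets.py."
    )
    if click.confirm("Do you want to continue?"):
        # Only download if the dataset path doesn't exist yet.
        if not os.path.exists(gm.DATASET_PATH):
            download_og_dataset()
        if not os.path.exists(gm.ASSET_PATH):
            download_assets()
        print("\nOmniGibson setup completed!\n")
    else:
        print(
            "You chose not to install dataset for now. You can install it later "
            "by running python omnigibson/download_datasets.py."
        )


if __name__ == "__main__":
    main()
```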