# Real environment for semantic planning project

## Note

This is a 0.0.1 alpha release, for use in Stanford SVL only.

## Demo

Here is a demo of a human-controlled agent navigating through a virtual environment:

*(demo animation)*

## Setup

### Server side

- The server uses Xvnc4 as its VNC server. First, `git clone` this repository and go into its root directory, then create a VNC password with `vncpasswd pw`.
- You will also need a PyTorch model file and a dataset to render the views; contact feixia@stanford.edu to obtain them. Replace the paths in `init.sh` with the paths to the model and the data.
- Build the renderer with `./build.sh`.
- Run `init.sh`; this starts the rendering engine and the VNC server.
- Connect with a client to port 5901. The port can also be configured in `init.sh`.
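The steps above can be sketched as a shell session. This is a minimal sketch only: the repository URL, model/data filenames, and the variable names inside `init.sh` are placeholders, not confirmed by this README.

```shell
# Setup sketch -- the URL and paths below are hypothetical placeholders.
git clone <repository-url> realenv    # clone this repository
cd realenv

vncpasswd pw                          # create the VNC password file

# Edit init.sh so the model and data paths point at the files
# obtained from feixia@stanford.edu.

./build.sh                            # build the renderer
./init.sh                             # start the rendering engine and VNC server (port 5901)
```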

As a demo, a server is running at capri19.stanford.edu:5901; contact feixia@stanford.edu to obtain the password.
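Any standard VNC client should be able to connect to the demo server. For example, with a TigerVNC-style `vncviewer` (an assumption; the README does not prescribe a specific client):

```shell
# Connect to the demo server; 5901 is the port from init.sh (display :1).
vncviewer capri19.stanford.edu:5901
```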

### Client side

TBA