Merge pull request #11 from StanfordVL/docs

Docs
Fei Xia 2018-05-12 01:18:02 -07:00 committed by GitHub
commit 807b50ee78
8 changed files with 136 additions and 41 deletions


@ -27,7 +27,7 @@ Beta
**This is the 0.2.0 beta release. Bug reports and suggestions for improvement are appreciated.** See the [change log file](https://github.com/StanfordVL/GibsonEnv/blob/master/misc/CHANGELOG.md).
**Dataset**: To make the beta release lighter for the users, we are including a small subset (9 spaces) of the dataset in it.
The [full dataset](gibson/data/README.md) includes 572 spaces and 1440 floors. It will be made available if we don't get a major bug report during the brief beta release.
Table of contents
=================


@ -1,8 +1,27 @@
Gibson ROS Binding
============
This is a ROS package that contains examples of using Gibson Env with the ROS navigation stack.
* [Introduction](#introduction)
* [Environment Setup](#environment-setup)
* [Running](#running)
* [Topics](#topics)
Introduction
============
[ROS](http://www.ros.org) is a set of well-engineered software libraries for building robotics applications. It includes a wide variety of packages, from low-level drivers to efficient implementations of state-of-the-art algorithms. As we strive to build intelligent agents and transfer them to the real world (onto a real robot), we need to take advantage of ROS packages to complete the robot application pipeline.
As a starter, we provide an example of integrating Gibson with ROS. This ROS package integrates Gibson Env with the ROS navigation stack. It follows the same node topology and topics as the `turtlebot_navigation` package (shown below), so after a policy is trained in Gibson, it requires minimal changes to deploy onto a TurtleBot.
![](misc/node_topo.jpg)
Environment Setup
============
Here are all the steps you need to perform to install Gibson and ROS. Note that you will need to install Gibson __from source__ and use __python2.7__. If you did this differently when installing Gibson, you will need to redo it, as Python 3 is known not to work with ROS.
## Preparation
1. Install ROS: this package uses the navigation stack from ROS Kinetic. Please follow the [instructions](http://wiki.ros.org/kinetic/Installation/Ubuntu).
@ -30,14 +49,17 @@ which python #should give /usr/bin/python
python -c 'import gibson, rospy, rospkg' #you should be able to import these without errors.
```
Running
===========
To run the Gibson+ROS examples, you will need to perform the following steps:
1. Prepare ROS environment
```bash
source /opt/ros/kinetic/setup.bash
source <catkin-workspace-root>/catkin_ws/devel/setup.bash
```
2. Repeat step 3 from Preparation and sanitize `PATH` and `PYTHONPATH`
3. Here are some of the examples that you can run, including gmapping, Hector mapping, and navigation:
```bash
roslaunch gibson-ros turtlebot_gmapping.launch #Run gmapping
roslaunch gibson-ros turtlebot_hector_mapping.launch #Run hector mapping
@ -47,3 +69,47 @@ roslaunch gibson-ros turtlebot_navigation.launch #Run the navigation stack, we h
The following screenshot was captured while running the gmapping example.
![](misc/slam.png)
Topics
========
Here are all the topics that `turtlebot_rgbd.py` and `simulation_clock.py` publish and subscribe to.
- `simulation_clock.py`
Publishes:
| Topic name | Type | Usage|
|:------------------:|:---------------------------:|:---:|
|`/gibson_ros/sim_clock`|`std_msgs/Int64`|Controls the simulation clock; every time `turtlebot_rgbd.py` receives this message, it ticks the simulation.|
Subscribes: None
- `turtlebot_rgbd.py`
Publishes:
| Topic name | Type | Usage|
|:------------------:|:---------------------------:|:---:|
|`/gibson_ros/camera/depth/camera_info`|`sensor_msgs/CameraInfo`| Camera parameters used in Gibson, the same for depth and RGB|
|`/gibson_ros/camera/rgb/image`|`sensor_msgs/Image`| RGB image captured in Gibson|
|`/gibson_ros/camera/rgb/depth`|`sensor_msgs/Image`| Depth image captured in Gibson, in meters, with dtype `float32`|
|`/gibson_ros/camera/rgb/depth_raw`|`sensor_msgs/Image`| Depth image captured in Gibson; mimics raw depth data captured with OpenNI cameras, with dtype `uint16` (see [REP 118](http://www.ros.org/reps/rep-0118.html))|
|`/odom`|`nav_msgs/Odometry` |Odometry from the `odom` frame to `base_footprint`, generated from the ground-truth pose in Gibson|
Subscribes:
| Topic name | Type | Usage|
|:------------------:|:---------------------------:|:---:|
|`/gibson_ros/sim_clock`|`std_msgs/Int64`|Controls the simulation clock; every time `turtlebot_rgbd.py` receives this message, it ticks the simulation.|
|`/mobile_base/commands/velocity`|`geometry_msgs/Twist` |Velocity command for the TurtleBot; `msg.linear.x` is the forward velocity and `msg.angular.z` is the angular velocity|
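For reference, the sketch below (not part of the package; the node name and rates are made up) shows how an external node could drive the simulated TurtleBot through these topics, publishing a clock tick and a velocity command on each cycle:
```python
#!/usr/bin/env python
# Hypothetical example node: tick the Gibson simulation and drive the
# TurtleBot forward using the topics listed above.
import rospy
from std_msgs.msg import Int64
from geometry_msgs.msg import Twist

def drive():
    rospy.init_node('gibson_ros_drive_example')  # made-up node name
    clock_pub = rospy.Publisher('/gibson_ros/sim_clock', Int64, queue_size=10)
    vel_pub = rospy.Publisher('/mobile_base/commands/velocity', Twist, queue_size=10)
    rate = rospy.Rate(10)  # assumed control rate
    cmd = Twist()
    cmd.linear.x = 0.2   # forward velocity (m/s)
    cmd.angular.z = 0.1  # angular velocity (rad/s)
    while not rospy.is_shutdown():
        clock_pub.publish(Int64(int(rospy.get_time())))  # each tick steps the simulation
        vel_pub.publish(cmd)
        rate.sleep()

if __name__ == '__main__':
    try:
        drive()
    except rospy.ROSInterruptException:
        pass
```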
### References
- [Turtlebot Navigation stack](http://wiki.ros.org/turtlebot_navigation/Tutorials/Setup%20the%20Navigation%20Stack%20for%20TurtleBot)
- [`move_base` package](http://wiki.ros.org/move_base)

Binary file not shown (445 KiB).


@ -5,7 +5,7 @@ from std_msgs.msg import Int64
def talker():
pub = rospy.Publisher('/gibson_ros/sim_clock', Int64, queue_size=10)
rospy.init_node('gibson_ros_clock')
rate = rospy.Rate(1000) # 1000hz
while not rospy.is_shutdown():
pub.publish(rospy.get_time())
rate.sleep()
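For context, a consumer of this clock could look like the following minimal sketch (hypothetical node name), stepping the simulation once per message as `turtlebot_rgbd.py` is described to do:
```python
# Hypothetical listener sketch: step the simulation once per clock message.
import rospy
from std_msgs.msg import Int64

def on_tick(msg):
    # env.step(action) would go here in the real node
    rospy.loginfo("sim clock tick: %d", msg.data)

rospy.init_node('gibson_ros_clock_listener')  # made-up node name
rospy.Subscriber('/gibson_ros/sim_clock', Int64, on_tick)
rospy.spin()
```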


@ -2,7 +2,7 @@
#include <stdio.h>
#include <string>
#include <cstring>
#include <iostream>
#include <glm/glm.hpp>
#include "objloader.hpp"
@ -75,16 +75,22 @@ bool loadOBJ(
int matches = sscanf(stringBuffer, "%u/%u/%u %u/%u/%u %u/%u/%u\n", &vertexIndex[0], &uvIndex[0], &normalIndex[0], &vertexIndex[1], &uvIndex[1], &normalIndex[1], &vertexIndex[2], &uvIndex[2], &normalIndex[2] );
bool f_3_format = (matches == 9);
bool f_2_format = true;
bool f_2_format_normal = true;
if (! f_3_format) {
    // .obj file has `f v1/uv1 v2/uv2 v3/uv3` format
    int matches = sscanf(stringBuffer, " %u/%u %u/%u %u/%u\n", &vertexIndex[0], &uvIndex[0], &vertexIndex[1], &uvIndex[1], &vertexIndex[2], &uvIndex[2] );
    f_2_format = (matches == 6);
    if (! f_2_format) {
        // try `f v1//n1 v2//n2 v3//n3` format (vertex + normal, no uv)
        int matches = sscanf(stringBuffer, " %u//%u %u//%u %u//%u\n", &vertexIndex[0], &normalIndex[0], &vertexIndex[1], &normalIndex[1], &vertexIndex[2], &normalIndex[2] );
        f_2_format_normal = (matches == 6);
        if (! f_2_format_normal) {
            // fall back to `f v1 v2 v3` format (vertex indices only)
            int matches = sscanf(stringBuffer, " %u %u %u\n", &vertexIndex[0], &vertexIndex[1], &vertexIndex[2]);
            if (matches != 3){
                printf("File %s can't be read by our simple parser :-( Try exporting with other options\n", path);
                fclose(file);
                return false;
            }
        }
    }
}
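For illustration, here is a Python sketch (not from the codebase) of the same cascade of `f`-line formats the parser tries, in the same order: `v/vt/vn`, then `v/vt`, then `v//vn`, then bare `v`:
```python
import re

# Patterns tried in the same order as the C parser above.
FACE_PATTERNS = [
    (re.compile(r'f\s+(\d+)/(\d+)/(\d+)\s+(\d+)/(\d+)/(\d+)\s+(\d+)/(\d+)/(\d+)\s*'), 'v/vt/vn'),
    (re.compile(r'f\s+(\d+)/(\d+)\s+(\d+)/(\d+)\s+(\d+)/(\d+)\s*'), 'v/vt'),
    (re.compile(r'f\s+(\d+)//(\d+)\s+(\d+)//(\d+)\s+(\d+)//(\d+)\s*'), 'v//vn'),
    (re.compile(r'f\s+(\d+)\s+(\d+)\s+(\d+)\s*'), 'v'),
]

def parse_face(line):
    for pattern, kind in FACE_PATTERNS:
        m = pattern.fullmatch(line.strip())
        if m:
            return kind, [int(g) for g in m.groups()]
    raise ValueError("unsupported face format: %r" % line)

print(parse_face("f 1//4 2//4 3//4"))  # -> ('v//vn', [1, 4, 2, 4, 3, 4])
```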
@ -96,7 +102,7 @@ bool loadOBJ(
uvIndices .push_back(uvIndex[1]);
uvIndices .push_back(uvIndex[2]);
}
if (f_3_format || f_2_format_normal) {
normalIndices.push_back(normalIndex[0]);
normalIndices.push_back(normalIndex[1]);
normalIndices.push_back(normalIndex[2]);
@ -150,28 +156,41 @@ bool loadOBJ(
out_normals.push_back(glm::vec3(0.0));
}
std::vector<unsigned int> vertexFaces(temp_vertices.size());
std::fill(vertexFaces.begin(), vertexFaces.end(), 0);
for ( unsigned int i=0; i<vertexIndices.size(); i++ ) {
    vertexFaces[vertexIndices[i]] += 1;
}
for ( unsigned int i=0; i<vertexIndices.size(); i++ ){
    // make sure vertices are arranged in right-hand order
    unsigned int v1 = i;
    unsigned int v2 = ((v1+1)%3==0) ? (v1-2) : (v1+1);
    unsigned int v3 = ((v2+1)%3==0) ? (v2-2) : (v2+1);
    glm::vec3 edge1 = out_vertices[v2] - out_vertices[v1];
    glm::vec3 edge2 = out_vertices[v3] - out_vertices[v1];
    // set the normal as the cross product, weighted by the number of
    // faces sharing the vertex
    unsigned int vertexIndex = vertexIndices[i];
    glm::vec3 normal = glm::normalize(glm::cross(edge1, edge2));
    out_normals[i] += normal / float(vertexFaces[vertexIndex-1]);
}
// Renormalize all the normal vectors
for (unsigned int i=0; i<out_normals.size(); i++) {
    out_normals[i] = glm::normalize(out_normals[i]);
}
}
// TODO: (hzyjerry) this is a dummy place holder
if ( out_uvs.size() == 0 ) {
for ( unsigned int i=0; i<out_vertices.size(); i++ ){
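The normal-smoothing logic above can be summarized in a short numpy sketch (illustrative only, not part of this commit): accumulate each face normal into its three vertices, weighted by the number of faces sharing each vertex, then renormalize.
```python
import numpy as np

def vertex_normals(V, F):
    """V: (n, 3) float vertices; F: (m, 3) int triangle indices (0-based)."""
    e1 = V[F[:, 1]] - V[F[:, 0]]
    e2 = V[F[:, 2]] - V[F[:, 0]]
    face_n = np.cross(e1, e2)
    face_n /= np.linalg.norm(face_n, axis=1, keepdims=True)
    # number of faces touching each vertex (clamped to avoid division by zero)
    counts = np.maximum(np.bincount(F.ravel(), minlength=len(V)), 1).astype(float)
    normals = np.zeros_like(V)
    for i in range(3):  # scatter-add each corner's weighted face normal
        np.add.at(normals, F[:, i], face_n / counts[F[:, i], None])
    # renormalize, as the loop at the end of loadOBJ does
    return normals / np.linalg.norm(normals, axis=1, keepdims=True)
```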


@ -1,25 +1,30 @@
# Full Gibson Environment Dataset
The full Gibson Environment Dataset consists of 572 models and 1440 floors. We cover a diverse set of models, including households, offices, hotels, venues, museums, hospitals, construction sites, etc. We have included a [spec sheet](https://docs.google.com/spreadsheets/d/1hhjAtgASv8MBkXa7aXH5obyf7v6d-oDm1KRR0oiSEzk/edit?usp=sharing) of the full Gibson Dataset, where you can look up individual models and their information.
<img src=../../misc/spaces.png width="800">
## Dataset Metrics
**Floor Number** Total number of floors in each model.
We calculate floor numbers using camera-sweep locations. We use `sklearn.cluster.DBSCAN` to cluster these locations by height, with a minimum cluster size of `5`, meaning that areas with at least `5` sweeps are treated as a single floor. This helps us capture small building spaces such as backyards, attics, and basements.
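A minimal sketch of this clustering step, on synthetic sweep heights and with a made-up `eps` (the exact parameters are not given here):
```python
import numpy as np
from sklearn.cluster import DBSCAN

# synthetic sweep heights for a hypothetical two-story model, in meters
sweep_z = np.concatenate([np.random.normal(0.0, 0.05, 30),
                          np.random.normal(3.0, 0.05, 30)]).reshape(-1, 1)

# min_samples=5: a height cluster needs at least 5 sweeps to count as a floor
labels = DBSCAN(eps=0.25, min_samples=5).fit(sweep_z).labels_
num_floors = len(set(labels) - {-1})  # label -1 marks noise
print(num_floors)  # -> 2
```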
**Area** Total floor area of each model.
We calculate the total floor area by summing up the area of each floor. This is done by sampling point-cloud locations based on floor height and fitting a `scipy.spatial.ConvexHull` to the sampled locations.
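For example, on synthetic points (note that for a 2-D hull, `ConvexHull.volume` is the enclosed area, while `.area` is the perimeter):
```python
import numpy as np
from scipy.spatial import ConvexHull

# hypothetical (x, y) locations sampled from one floor's point cloud
points = np.random.rand(500, 2) * [10.0, 8.0]  # roughly a 10 m x 8 m floor

hull = ConvexHull(points)
floor_area = hull.volume  # 2-D hull: .volume is the area, .area is the perimeter
```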
**SSA** Specific surface area.
The ratio of the inner mesh surface area to the volume of the mesh's convex hull. This is a measure of clutter in the models: if the inner space is filled with a large number of furniture items, objects, etc., the model will have a high SSA.
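As a sketch, using the `trimesh` library (an assumption; the original tooling is not specified):
```python
import trimesh

mesh = trimesh.load('model.obj')           # hypothetical mesh path
ssa = mesh.area / mesh.convex_hull.volume  # inner surface area / hull volume
```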
**Navigation Complexity** The highest complexity of navigating between arbitrary points within the model.
We sample arbitrary point pairs inside the model and calculate the `A*` navigation distance between them. `Navigation Complexity` is the `A*` distance divided by the straight-line distance between the two points. We compute the highest navigation complexity for every model. Note that all point pairs are sampled within the *same floor*.
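A sketch of this ratio on a navigation graph, using `networkx` as an assumed stand-in for the actual planner:
```python
import itertools, math
import networkx as nx

def navigation_complexity(G):
    """G: graph whose nodes carry 'pos' = (x, y) and whose edges carry
    a metric 'weight'; returns the worst A*-to-straight-line ratio."""
    worst = 1.0
    for a, b in itertools.combinations(G.nodes, 2):
        straight = math.dist(G.nodes[a]['pos'], G.nodes[b]['pos'])
        if straight == 0 or not nx.has_path(G, a, b):
            continue
        nav = nx.astar_path_length(G, a, b, weight='weight')
        worst = max(worst, nav / straight)
    return worst
```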
**Subjective Attributes**
We examine each model manually and note its subjective attributes, including furnishing style, house shape, whether it has long stairs, etc.
## Train/Test Set
We divide the 572 models into 515 training models (90%) and 57 test models (10%). We sort all 572 models by a linear combination of `floor number`, `area`, `SSA`, and `navigation complexity`, and select a diverse test set by taking the 10% of models with the highest combined scores.
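A sketch of the selection procedure (the combination weights are not published here, so a random stand-in score is used):
```python
import numpy as np

models = ["model_%03d" % i for i in range(572)]
scores = np.random.rand(572)  # stand-in for the weighted metric combination

order = np.argsort(scores)[::-1]          # highest combined score first
test = [models[i] for i in order[:57]]    # top 10% -> 57 test models
train = [models[i] for i in order[57:]]   # remaining 90% -> 515 training models
```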


@ -6,6 +6,11 @@ and this project adheres to [Semantic Versioning](http://semver.org/spec/v2.0.0.
https://github.com/StanfordVL/GibsonEnv/blob/master/README.md
## 0.3.0
### Added
- Full dataset
- ROS integration
## 0.2.1 - 2018-04-18
Bug fixes
### Fixed

misc/spaces.png (new binary file, 376 KiB)