
InfiniTensor

Chinese Project Introduction | Documentation | Chinese Documentation


InfiniTensor is a high-performance inference engine tailored for GPUs and other AI accelerators. Its design focuses on efficient deployment and rapid academic validation.

Get started

Make Commands

  • make / make build: builds the project;
  • make install-python: builds the project, then installs the Python frontend;
  • make test-cpp: builds the project, then runs the C++ unit tests;
  • make test-onnx: runs the Python unit tests;

  • Set the environment variable TEST=OFF to skip building tests and speed up compilation.
  • Set CUDA=ON to enable CUDA support.
  • Set BANG=ON to enable BANG support.
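For example, a typical CUDA build-and-test session might look like the following (a sketch; it assumes a machine with CUDA installed, and the environment variables are the ones listed above):

```shell
# Build with CUDA support, skipping tests to speed up compilation
CUDA=ON TEST=OFF make build

# Build and install the Python frontend, then run the Python unit tests
CUDA=ON make install-python
CUDA=ON make test-onnx
```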

CMake Options

There are several configurable CMake options; see the CMakeLists.txt file.

  • If USE_BACKTRACE is ON, libdw-dev has to be installed. See the README of backward-cpp for details.
  • If USE_PROTOBUF is ON, protobuf has to be installed. See the README of protobuf for details.
  • If USE_CUDA is ON, CUDA has to be installed.
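When configuring manually instead of going through make, these options can be passed to cmake as -D flags (a sketch; the build directory layout and option values are illustrative):

```shell
# Configure an out-of-source build with CUDA on and backtrace support off
mkdir -p build && cd build
cmake -DUSE_CUDA=ON -DUSE_BACKTRACE=OFF -DCMAKE_BUILD_TYPE=Release ..
make -j"$(nproc)"
```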

Roadmap

  • EinNet is going to be merged into the main branch.
  • Integration of PET, a tensor program optimizer supporting partially equivalent transformations.
  • Supported hardware
    • ✔ NVIDIA GPU
    • ✔ Cambricon MLU
    • Ascend NPU
    • Kunlunxin XPU

Contributor Guide

InfiniTensor development is based on GitHub pull requests. Before requesting a merge, a PR should satisfy the following requirements:

  1. Pass all tests.
    1. The CI on GitHub tests everything that can be tested in the CI environment, including code format. The script test/script/clang_format_inplace.sh formats all code.
    2. Contributors should run ctest manually and copy its output into the PR. Use fenced code blocks (triple backquotes, i.e., ```) so that GitHub does not interpret # in the output as an issue reference. For the same reason, do not paste the raw ctest output into commit messages either.
  2. Receive at least one approval from reviewers.
  3. The PR title should be concise, since it becomes the commit message on the main branch after squash-merging.
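One convenient way to capture the test output for the PR is to tee it to a file while running the tests (a sketch; it assumes an already-configured build directory, and the file name is illustrative):

```shell
# Run the C++ unit tests from the build directory, saving a copy of the output
cd build
ctest | tee ctest_output.txt
```

The contents of ctest_output.txt can then be pasted into the PR inside a fenced code block.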

Reference

Please cite EinNet or PET in your publications if it helps your research:

@inproceedings{zheng2023einnet,
  title={EINNET: Optimizing Tensor Programs with Derivation-Based Transformations},
  author={Zheng, Liyan and Wang, Haojie and Zhai, Jidong and Hu, Muyan and Ma, Zixuan and Wang, Tuowei and Huang, Shuhong and Miao, Xupeng and Tang, Shizhi and Huang, Kezhao and Jia, Zhihao},
  booktitle={17th USENIX Symposium on Operating Systems Design and Implementation (OSDI 23)},
  pages={739--755},
  year={2023}
}

@inproceedings{wang2021pet,
  title={PET: Optimizing tensor programs with partially equivalent transformations and automated corrections},
  author={Wang, Haojie and Zhai, Jidong and Gao, Mingyu and Ma, Zixuan and Tang, Shizhi and Zheng, Liyan and Li, Yuanzhi and Rong, Kaiyuan and Chen, Yuanyong and Jia, Zhihao},
  booktitle={15th USENIX Symposium on Operating Systems Design and Implementation (OSDI 21)},
  pages={37--54},
  year={2021}
}