InfiniTensor

Compilation on Lotus

Compilation for CUDA

# Enter the root of InfiniTensor
source test/script/env_lotus.sh
make CUDA=ON

Compilation for Intel CPU

# Enter the root of InfiniTensor
source test/script/env_lotus.sh intelcpu
mkdir build && cd build
cmake -DUSE_INTELCPU=ON -DCMAKE_CXX_COMPILER=dpcpp .. && make -j 12

Make Commands

  • make / make build: builds the project;
  • make install-python: builds the project, then installs the Python frontend;
  • make test-cpp: builds the project, then runs the C++ unit tests;
  • make test-onnx: runs the Python unit tests;

  • Set the environment variable TEST=OFF to speed up compilation.
  • Set the environment variable CUDA=ON to enable CUDA.
  • Set the environment variable BANG=ON to enable BANG (see the example invocation after this list).
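
For example, these variables can be combined in a single make invocation; the following is only a sketch, assuming they are passed on the make command line as in the compilation steps above:

# Build with CUDA enabled and tests disabled
make CUDA=ON TEST=OFF
# Build the project and install the Python frontend with CUDA support
make install-python CUDA=ON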

CMake Options

There are several configurable CMake options; see the CMakeLists.txt file for the full list. An example manual configuration is sketched after the list below.

  • If USE_BACKTRACE is ON, libdw-dev has to be installed. See the README of backward-cpp for details.
  • If USE_PROTOBUF is ON, protobuf has to be installed. See the README of protobuf for details.
  • If USE_CUDA is ON, CUDA has to be installed.
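
For illustration only, a manual configuration enabling some of these options (option names taken from the list above; combine them as needed) might look like:

# Enter the root of InfiniTensor
mkdir build && cd build
# USE_BACKTRACE requires libdw-dev; USE_PROTOBUF requires protobuf; USE_CUDA requires CUDA
cmake -DUSE_CUDA=ON -DUSE_BACKTRACE=ON -DUSE_PROTOBUF=ON .. && make -j 12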

Contributor Guide

InfiniTensor development is based on pull requests on GitHub. Before being merged, a PR should satisfy the following requirements:

  1. Pass all tests.
    1. CI on GitHub tests everything that can be tested in the CI environment, including code format. Run the script test/script/clang_format_inplace.sh to format all code before pushing.
    2. Contributors should run ctest manually and copy its output into the PR (a typical check sequence is sketched after this list). Wrap the output in fenced code blocks (triple backquotes, i.e., ```); otherwise, # in the output is interpreted as a GitHub reference. For the same reason, do not paste ctest output directly into commit messages either.
  2. Receive at least one approval from reviewers.
  3. The PR title should be concise, since it becomes the commit message on the main branch after squashing and merging.
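
A minimal pre-PR check sequence, sketched under the assumption that the build directory layout matches the compilation steps above, could be:

# Format all code
bash test/script/clang_format_inplace.sh
# Build the project and run the C++ unit tests
make test-cpp
# Run ctest manually and copy its output into the PR (inside ``` fences)
cd build && ctest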

Dependencies