InfiniTensor/include
constroy Li f60767a770
impl distributed launch with NCCL (#106)
* add cmake bits about NCCL

* move example to examples/NNmodel

* impl NCCL communicator

* add comm-related functions to Runtime

* export runtime interface

* add launch.py

* use a unique name to distinguish the NCCL ID file (see the communicator init sketch after the commit notes)

* add timeout to communicator init

* expose communicator obj from runtime obj, add unit test for nccl communicator

* reformat files

* Add allReduce operator and cuda nccl allReduce kernel (sketch after the commit notes)

* impl model parallel for resnet

* add allGather nccl kernel and operator

* Add allReduce/allGather operator tests; change the allGather kernel to output a list of tensors; fix shape inference; handle nullptr outputs (see the allGather sketch after the commit notes)

* fix format of onnx.py

* use concat following AllGather

* get tensor parallelism working for resnet

* fix format of graph_handler.cc

* change BUILD_DIST default to OFF

* polish code of communicator

* update .gitignore

* Add broadcast operator and cuda kernel (sketch after the commit notes)

* Add comments for operators

* remove const of class member

* move communicator to CudaRuntimeObj

* Add an empty line at EOF.

---------

Co-authored-by: panzezhong <panzezhong@qiyuanlab.com>
Co-authored-by: Haojie Wang <haojie0429@gmail.com>
2023-09-05 09:47:35 +08:00
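
The bullets above describe bootstrapping the communicator through a uniquely named NCCL ID file with an init timeout. A minimal sketch of that flow, assuming a hypothetical `/tmp/nccl_id_<task>` file name and a 30-second timeout; the project's actual communicator code will differ:

```cpp
// Sketch only: rank 0 writes the ncclUniqueId to a per-launch file, the
// other ranks poll for it with a timeout, then everyone joins the clique.
// NCCL error checking is omitted; a real implementation would also write
// the file atomically (temp file + rename) to avoid short reads.
#include <nccl.h>
#include <chrono>
#include <fstream>
#include <stdexcept>
#include <string>
#include <thread>

ncclComm_t initComm(const std::string &taskName, int worldSize, int rank) {
    const std::string idFile = "/tmp/nccl_id_" + taskName; // unique per launch
    ncclUniqueId id;
    if (rank == 0) {
        ncclGetUniqueId(&id);
        std::ofstream(idFile, std::ios::binary)
            .write(reinterpret_cast<const char *>(&id), sizeof(id));
    } else {
        auto deadline =
            std::chrono::steady_clock::now() + std::chrono::seconds(30);
        std::ifstream in;
        while (!in.is_open()) { // poll until rank 0 has written the ID
            in.open(idFile, std::ios::binary);
            if (!in.is_open()) {
                if (std::chrono::steady_clock::now() > deadline)
                    throw std::runtime_error("timeout waiting for NCCL ID file");
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            }
        }
        in.read(reinterpret_cast<char *>(&id), sizeof(id));
    }
    ncclComm_t comm;
    ncclCommInitRank(&comm, worldSize, id, rank); // commId passed by value
    return comm;
}
```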
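The allReduce bullet maps almost one-to-one onto a single NCCL call; a hedged sketch, with the buffer pointers, element count, communicator, and stream assumed to come from the runtime:

```cpp
// Sketch: sum a same-shaped float tensor across all ranks on the GPU.
#include <cuda_runtime.h>
#include <nccl.h>

void allReduceSum(const float *sendBuf, float *recvBuf, size_t count,
                  ncclComm_t comm, cudaStream_t stream) {
    ncclAllReduce(sendBuf, recvBuf, count, ncclFloat, ncclSum, comm, stream);
    cudaStreamSynchronize(stream); // the real kernel may stay asynchronous
}
```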
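For the allGather-then-concat pair: ncclAllGather writes each rank's sendCount elements contiguously into the receive buffer, so the gathered buffer is already the concatenation of the per-rank tensors in rank order; the Concat operator after it restores the logical shape. A sketch under those assumptions:

```cpp
// Sketch: recvBuf must hold worldSize * sendCount elements; rank i's chunk
// lands at offset i * sendCount, i.e. the buffers come out concatenated.
#include <cuda_runtime.h>
#include <nccl.h>

void allGather(const float *sendBuf, float *recvBuf, size_t sendCount,
               ncclComm_t comm, cudaStream_t stream) {
    ncclAllGather(sendBuf, recvBuf, sendCount, ncclFloat, comm, stream);
}
```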
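The broadcast operator likewise reduces to one collective; a sketch assuming an in-place float broadcast from a given root rank:

```cpp
// Sketch: NCCL allows sendbuff == recvbuff for an in-place broadcast.
#include <cuda_runtime.h>
#include <nccl.h>

void broadcast(float *buf, size_t count, int root, ncclComm_t comm,
               cudaStream_t stream) {
    ncclBroadcast(buf, buf, count, ncclFloat, root, comm, stream);
}
```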
| Name | Last commit | Date |
| --- | --- | --- |
| bang | Dev for 202303ddl (#66) | 2023-04-18 15:10:33 +08:00 |
| core | impl distributed launch with NCCL (#106) | 2023-09-05 09:47:35 +08:00 |
| cuda | impl distributed launch with NCCL (#106) | 2023-09-05 09:47:35 +08:00 |
| ffi | Add TVM codegen for MemboundOp (#35) | 2022-09-22 18:06:45 +08:00 |
| intelcpu | Cpu backend2 (#77) | 2023-04-17 12:15:23 +08:00 |
| nnet | Dev for 202303ddl (#66) | 2023-04-18 15:10:33 +08:00 |
| operators | impl distributed launch with NCCL (#106) | 2023-09-05 09:47:35 +08:00 |
| utils | Add cuda transpose kernel (#115) | 2023-08-22 14:22:15 +08:00 |
| test.h | Add python interface for CUDA operator evaluation (#42) | 2022-09-27 10:41:12 +08:00 |