forked from jiuyuan/InfiniTensor
Latest commit: f60767a770
* add cmake bits about NCCL
* move example to examples/NNmodel
* impl NCCL communicator
* add comm related function to Runtime
* export runtime interface
* add launch.py
* use unique name to distinguish the NCCL ID file
* add timeout to communicator init
* expose communicator obj from runtime obj, add unit test for nccl communicator
* reformat files
* Add allReduce operator and cuda nccl allReduce kernel
* impl model parallel for resnet
* add allGather nccl kernel and operator
* Add allreduce allgather operator tests, change allgather kernel to output list of tensor, fix shape infer, handle nullptr output
* fix format of onnx.py
* use concat following AllGather
* get tensor parallel for resnet
* fix format of graph_handler.cc
* change BUILD_DIST default to OFF
* polish code of communicator
* update .gitignore
* Add broadcast operator and cuda kernel
* Add comments for operators
* remove const of class member
* move communicator to CudaRuntimeObj
* Add an empty line at EOF.

---------

Co-authored-by: panzezhong <panzezhong@qiyuanlab.com>
Co-authored-by: Haojie Wang <haojie0429@gmail.com>
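The commit above adds an NCCL communicator to the CUDA runtime and NCCL-backed allReduce, allGather, and broadcast kernels, sharing a unique NCCL ID between ranks via a uniquely named ID file. As context only, below is a minimal sketch of the raw NCCL calls such an allReduce kernel sits on; it does not use InfiniTensor's Runtime or communicator API, and the function name `allReduceSketch` and parameters `rank`, `nRanks`, `id`, and `count` are assumptions made for the example.

```cpp
// Illustrative sketch only: raw NCCL calls, not InfiniTensor's Runtime/communicator API.
// Assumes one process per GPU, with the ncclUniqueId distributed out of band
// (e.g. via a shared ID file, as the commit message describes).
#include <cuda_runtime.h>
#include <nccl.h>

void allReduceSketch(int rank, int nRanks, ncclUniqueId id, size_t count) {
    cudaSetDevice(rank);                        // one GPU per rank
    float *sendbuf = nullptr, *recvbuf = nullptr;
    cudaMalloc(&sendbuf, count * sizeof(float));
    cudaMalloc(&recvbuf, count * sizeof(float));

    ncclComm_t comm;
    ncclCommInitRank(&comm, nRanks, id, rank);  // every rank passes the same ncclUniqueId

    cudaStream_t stream;
    cudaStreamCreate(&stream);
    // Sum-reduce `count` floats across all ranks; every rank receives the full result.
    ncclAllReduce(sendbuf, recvbuf, count, ncclFloat, ncclSum, comm, stream);
    cudaStreamSynchronize(stream);

    ncclCommDestroy(comm);
    cudaStreamDestroy(stream);
    cudaFree(sendbuf);
    cudaFree(recvbuf);
}
```

The allGather path works analogously with ncclAllGather, where each rank contributes its shard; emitting the gathered shards as a list of tensors and applying Concat afterwards, as the commit message notes, reassembles the full tensor at the graph level.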
test_all_gather.cc
test_all_reduce.cc
test_batch_norm.cc
test_broadcast.cc
test_clip.cc
test_concat.cc
test_conv.cc
test_conv_transposed_2d.cc
test_element_wise.cc
test_expand.cc
test_extend.cc
test_gather.cc
test_matmul.cc
test_pad.cc
test_pooling.cc
test_reduce_mean.cc
test_reshape.cc
test_resize.cc
test_slice.cc
test_split.cc
test_transpose.cc
test_where.cc