InfiniTensor/include/operators
Latest commit 8e4d88fb9f by Haojie Wang: add transpose, concat and split for native cpu (#158), 2023-10-12 10:14:28 +08:00
G2BMM.h Add documentation for operators. 2023-02-13 22:51:15 +08:00
GBMM.h Add documentation for operators. 2023-02-13 22:51:15 +08:00
activation_backward.h Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
all_gather.h impl distributed launch with NCCL (#106) 2023-09-05 09:47:35 +08:00
all_reduce.h impl distributed launch with NCCL (#106) 2023-09-05 09:47:35 +08:00
batch_norm.h Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
broadcast.h impl distributed launch with NCCL (#106) 2023-09-05 09:47:35 +08:00
concat.h Add documentation for operators. 2023-02-13 22:51:15 +08:00
conv.h Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
det.h Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
dropout.h Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
element_wise.h refactor(core): add new `OpType` definitions (#99) 2023-08-07 11:17:05 +08:00
expand.h Framework supports graph construction for bert/gpt2 models (#94) 2023-08-29 16:06:52 +08:00
extend.h Add documentation for operators. 2023-02-13 22:51:15 +08:00
gather.h Add GatherElements op and cuda kernel (#149) 2023-10-12 09:18:12 +08:00
matmul.h Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
membound.h NNET supports TVM backend and kernels (#78) 2023-04-18 00:26:36 +08:00
pad.h feat: frontend support for pad with unit tests 2023-02-15 11:41:06 +08:00
pooling.h Pooling ceil mode (#155) 2023-10-09 20:51:39 +08:00
reduce_mean.h feat: export ReduceMean to onnx 2023-03-15 15:09:12 +08:00
reshape.h Support fp16 dtype (#96) 2023-08-02 16:38:16 +08:00
resize.h Cpu backend2 (#77) 2023-04-17 12:15:23 +08:00
slice.h Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
softmax.h Cpu backend2 (#77) 2023-04-17 12:15:23 +08:00
split.h ADD: sub graph replacement. (#56) 2023-04-17 13:09:07 +08:00
transpose.h add transpose, concat and split for native cpu (#158) 2023-10-12 10:14:28 +08:00
unary.h Add HardSigmoid and HardSwish (#156) 2023-10-10 22:41:06 +08:00
where.h Framework supports graph construction for bert/gpt2 models (#94) 2023-08-29 16:06:52 +08:00