InfiniTensor/include/operators
xiaonans d000f9750c add shape information to the kvcache attention operator 2024-04-11 14:52:39 +08:00
G2BMM.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
GBMM.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
activation_backward.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
all_gather.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
all_reduce.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
attention_kvcache.h add shape information to the kvcache attention operator 2024-04-11 14:52:39 +08:00
batch_norm.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
broadcast.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
concat.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
conv.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
det.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
dropout.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
element_wise.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
expand.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
extend.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
gather.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
layer_norm.h Add layer normalization (#181) 2023-11-24 15:15:14 +08:00
lrn.h Fix bang (#198) 2023-12-28 13:44:10 +08:00
matmul.h feature: add parameter to config matmul compute type (#218) 2024-03-26 09:00:45 +08:00
membound.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
pad.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
pooling.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
recv.h Add send and recv operators based on NCCL (#182) 2023-12-14 16:38:03 +08:00
reduce.h Add ReduceSum op and kernel (#160) 2023-11-24 09:29:58 +08:00
reshape.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
resize.h add frontend resize kernel (#194) 2023-12-29 13:32:56 +08:00
rms_norm.h Accelerate llama (#219) 2024-04-01 08:46:05 +08:00
rope.h rope and attention ops support multiple batches/sequences. 2024-04-09 09:16:42 +08:00
send.h Add send and recv operators based on NCCL (#182) 2023-12-14 16:38:03 +08:00
slice.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
softmax.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
split.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
squeeze.h Remove the frontend's dependency on ONNX infershape (#206) 2024-01-12 14:54:27 +08:00
transpose.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
unary.h XCCL support (#171) 2024-02-29 11:48:35 +08:00
unsqueeze.h Remove the frontend's dependency on ONNX infershape (#206) 2024-01-12 14:54:27 +08:00
where.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00