InfiniTensor/include/cuda
PanZezhong1725 7f6aec6c17
Optimizations for distributed inference of BERT and GPT-2 models (#221)
* fix(dist): improve the distributed scripts to print only the absolute error

* feat(dist): add a PyTorch run script that can export ONNX

* feat(front): add a graph optimization for the Where operator when Y is -inf

* feat(kernel): special-case optimization for the Pow and Div operators when b is a constant

* fix(front): remove the frontend's dependency on global output shape information; drop the unnecessary shape inference from the distributed scripts

* feat(kernel): specialized optimization of the Expand operation when the matmul bias is a row vector

* fix(kernel): remove an unnecessary synchronization in div pow const

* Update expand.cu

* fix: fix comments
---------

Co-authored-by: Haojie Wang <haojie0429@gmail.com>
Co-authored-by: Derui Yang <ydrml@hotmail.com>
2024-04-01 14:04:28 +08:00
cuda_attention_kvcache.h use workspace to optimize kvcache attention 2024-01-25 10:33:01 +08:00
cuda_clip.h Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
cuda_common.h [feature] add cudagraph support (#215) 2024-02-21 14:00:25 +08:00
cuda_element_wise.h Optimizations for distributed inference of BERT and GPT-2 models (#221) 2024-04-01 14:04:28 +08:00
cuda_expand.h Optimizations for distributed inference of BERT and GPT-2 models (#221) 2024-04-01 14:04:28 +08:00
cuda_kernel_wihtout_config.h ADD: batch norm operator and cuda kernel. (#44) 2022-10-15 16:29:28 +08:00
cuda_layernorm.h Modify kernel registration & support fp16 (#205) 2024-01-15 11:02:13 +08:00
cuda_pad_slice.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
cuda_rmsnorm.h Accelerate llama (#219) 2024-04-01 08:46:05 +08:00
cuda_rope.h add test for rotary embedding cuda kernel 2024-02-04 10:24:20 +08:00
cuda_runtime.h [feature] add cudagraph support (#215) 2024-02-21 14:00:25 +08:00
cuda_softmax.h Modify kernel registration & support fp16 (#205) 2024-01-15 11:02:13 +08:00
cuda_split_concat.h Modify kernel registration & support fp16 (#205) 2024-01-15 11:02:13 +08:00
cuda_transpose.h Modify kernel registration & support fp16 (#205) 2024-01-15 11:02:13 +08:00
cuda_unary.h add rope and silu support 2024-01-26 10:01:27 +08:00
cuda_utility.h Modify kernel registration & support fp16 (#205) 2024-01-15 11:02:13 +08:00
cuda_where.h Modify kernel registration & support fp16 (#205) 2024-01-15 11:02:13 +08:00
gather.h Modify kernel registration & support fp16 (#205) 2024-01-15 11:02:13 +08:00
gbmm_g2bmm.cuh Fix CMake USE_CUDA (#36) 2022-09-21 12:28:00 +08:00
gbmm_g2bmm.h Fix CMake USE_CUDA (#36) 2022-09-21 12:28:00 +08:00
nccl_communicator.h impl distributed launch with NCCL (#106) 2023-09-05 09:47:35 +08:00
operator_timer.h ADD: batch norm operator and cuda kernel. (#44) 2022-10-15 16:29:28 +08:00
resize.cuh ADD: reconfig ResizeObj, support "tf_crop_and_resize " and cubic coeff kernel. (#59) 2022-12-24 04:02:21 +08:00