InfiniTensor/include
PanZezhong1725 7f6aec6c17
Optimizations for distributed inference of BERT and GPT-2 models (#221)
* fix(dist): improve the distributed scripts to print only the absolute error

* feat(dist): add a PyTorch run script that can export ONNX

* feat(front): add a graph optimization for Where operators whose Y value is -inf

* feat(kernel): add a special-case optimization for Pow and Div operators whose b operand is a constant (see the sketch after this commit)

* fix(front): remove the frontend's dependency on global output shape information; drop unnecessary shape inference from the distributed scripts

* feat(kernel): specialize the expand operation for the case where the matmul bias is a row vector

* fix(kernel): remove the unnecessary synchronization in div pow const

* Update expand.cu

* fix: fix comments

---------

Co-authored-by: Haojie Wang <haojie0429@gmail.com>
Co-authored-by: Derui Yang <ydrml@hotmail.com>
2024-04-01 14:04:28 +08:00
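
The "b is a constant" special case mentioned in the Pow/Div bullet above can be pictured with a minimal CUDA sketch. This is an illustrative assumption, not InfiniTensor's actual kernel: the name `divConstKernel` and the launch configuration are hypothetical. The idea is that when the divisor is known to be a single scalar, it can be passed by value, so the kernel skips the per-element index and broadcast arithmetic a general binary element-wise kernel needs.

```cuda
// Hypothetical sketch of a Div kernel specialized for a constant scalar
// divisor; names and parameters are illustrative, not the repository's code.
#include <cstdio>
#include <cuda_runtime.h>

__global__ void divConstKernel(const float *x, float *y, float b, size_t n) {
    size_t i = blockIdx.x * (size_t)blockDim.x + threadIdx.x;
    if (i < n)
        y[i] = x[i] / b; // no broadcast lookup for b: it is a compile-time-known scalar argument
}

int main() {
    const size_t n = 1 << 20;
    float *x, *y;
    cudaMallocManaged(&x, n * sizeof(float));
    cudaMallocManaged(&y, n * sizeof(float));
    for (size_t i = 0; i < n; ++i)
        x[i] = float(i);

    const float b = 8.0f; // the constant operand
    const int threads = 256;
    const int blocks = int((n + threads - 1) / threads);
    divConstKernel<<<blocks, threads>>>(x, y, b, n);
    cudaDeviceSynchronize(); // synchronize only because the host reads y next

    printf("y[1000] = %f\n", y[1000]); // expect 125.0
    cudaFree(x);
    cudaFree(y);
    return 0;
}
```
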
bang fix mlu some kernel registration & gather op (#210) 2024-02-01 15:02:02 +08:00
core Accelerate llama (#219) 2024-04-01 08:46:05 +08:00
cuda Optimizations for distributed inference of BERT and GPT-2 models (#221) 2024-04-01 14:04:28 +08:00
ffi Add TVM codegen for MemboundOp (#35) 2022-09-22 18:06:45 +08:00
intelcpu Cpu backend2 (#77) 2023-04-17 12:15:23 +08:00
kunlun XCCL support (#171) 2024-02-29 11:48:35 +08:00
nnet test: support building the einnet unit tests, though not all of them pass (#174) 2023-11-03 13:21:49 +08:00
operators Accelerate llama (#219) 2024-04-01 08:46:05 +08:00
utils XCCL support (#171) 2024-02-29 11:48:35 +08:00
test.h Add python interface for CUDA operator evaluation (#42) 2022-09-27 10:41:12 +08:00