InfiniTensor/include
zhangyunze 9b10a74788
Support fp16 dtype (#96)
* add conv_half kernel

* Conv Kernel FP16

* dcj: replace "DataType::Float32" with "op->getDType()" to support more data types (see the sketch after the commit message)

* feat: support Float16 dtype

* fix: set the default clang-format version to 14

* fix: revise according to review comments

* fix: add data conversion to the convfp16 kernel test

* test: add conv_fp16 kernel test

---------

Co-authored-by: zhangyue207 <zhangyue@qiyuanlab.com>
Co-authored-by: kilinchange <kilinchange@163.com>
2023-08-02 16:38:16 +08:00
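
The dcj item above swaps a hardcoded DataType::Float32 for op->getDType(), so the same dispatch path can serve fp16 operators. Below is a minimal sketch of that idea, assuming a simplified Operator type; only getDType() comes from the commit message, while the DataType values shown, launchConv, and the kernel names are hypothetical stand-ins rather than InfiniTensor's actual API.

// Sketch: query the operator for its dtype instead of assuming Float32.
#include <cstdio>

enum class DataType { Float32, Float16 };   // hypothetical, reduced enum

struct Operator {
    DataType dtype;
    DataType getDType() const { return dtype; }   // the accessor named in the commit
};

// Before the change a kernel path would be chosen assuming DataType::Float32;
// querying the operator lets fp16 operators reach an fp16 (conv_half-style) path.
void launchConv(const Operator *op) {
    if (op->getDType() == DataType::Float16)
        std::puts("dispatch fp16 conv kernel");   // new half-precision path
    else
        std::puts("dispatch fp32 conv kernel");   // original float path
}

int main() {
    Operator fp16Conv{DataType::Float16};
    launchConv(&fp16Conv);   // prints the fp16 dispatch line
    return 0;
}

In the real kernels the same query replaces each hardcoded DataType::Float32 check, which is what lets the new conv_fp16 test exercise the half-precision path.
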
bang Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
core Support fp16 dtype (#96) 2023-08-02 16:38:16 +08:00
cuda Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
ffi Add TVM codegen for MemboundOp (#35) 2022-09-22 18:06:45 +08:00
intelcpu Cpu backend2 (#77) 2023-04-17 12:15:23 +08:00
nnet Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
operators Support fp16 dtype (#96) 2023-08-02 16:38:16 +08:00
utils Support fp16 dtype (#96) 2023-08-02 16:38:16 +08:00
test.h Add python interface for CUDA operator evaluation (#42) 2022-09-27 10:41:12 +08:00