InfiniTensor/test/kernels/intelcpu
Chenjie Duan 51086d2b8d
Modify kernel registration & support fp16 (#205)
* Remove dataType from the kernel registration (see the sketch after this commit log).

* Support fp16 for conv

* cpu kernel: adapt to the new registration mechanism

* Modify all kernel registrations

* Add fp16 support for where

* Add fp16 support for layernorm

* Add fp16 support for split_concat

* element_wise: support fp16

* feat: support fp16 for transpose

* feat: support fp16 for sliceOp

* unary: support fp16

* feat: support fp16 for reduceOp

* feat: support fp16 for matmulOp/expandOp

* feat: support int8 for powOp

* Add cuda cast & support half precision for gather

* style: fix style

* feat: support int8 for gather

* style: fix style

* Modify test_cuda_conv_transposed

* fix: fix dist code to support fp16

* fix(graph.cc): fix topo_sort

* fix: fix recv and send kernel registration

* feat: add field tensors for stub

* refactor(frontend): sort first, then build the graph

Signed-off-by: YdrMaster <ydrml@hotmail.com>

* fix: provide a tensor-to-node mapping for intermediate results

* fix(slice): add a guard for out-of-range areas

* fix: fix matmul fp16

* fix: fix re-dataMalloc for weight tensors and the use of the naive allocator

* feat: add a dataType filter for cuda kernels

* feat: bang kernels adapt to the new registration mechanism

* fix: fix some errors on mlu

* feat: intelcpu kernels adapt to the new registration mechanism

* feat: modify kernel registration on kunlun

* Fix an intelcpu compiler bug

* feat: bang reshape supports all dataTypes

* fix: fix bang reduce

* fix(all_reduce.cc): fix as the reviewer suggested

* fix: fix style and restore unary test code

---------

Signed-off-by: YdrMaster <ydrml@hotmail.com>
Co-authored-by: xgqdut2016 <kenan_gewei@163.com>
Co-authored-by: xgqdut2016 <140036308+xgqdut2016@users.noreply.github.com>
Co-authored-by: zhangyunze <z13785159769@163.com>
Co-authored-by: OdinaryWord <sx-hz@163.com>
Co-authored-by: YdrMaster <ydrml@hotmail.com>
Co-authored-by: panzezhong <panzezhong@qiyuanlab.com>
2024-01-15 11:02:13 +08:00
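
The common thread in the commit log above is the new registration scheme: a kernel is registered once per (device, op) rather than once per data type, and the data type is filtered at dispatch time (the "dataType filter" entry). Below is a minimal C++ sketch of that pattern under those assumptions; every name in it (registerKernel, dispatch, Device, OpType, DataType) is hypothetical and for illustration only, not InfiniTensor's actual API.

#include <cstdio>
#include <functional>
#include <map>
#include <utility>

// Hypothetical stand-ins for the runtime's enums; not InfiniTensor's API.
enum class Device { Cpu, Cuda };
enum class OpType { Conv, Matmul };
enum class DataType { F32, F16 };

// A kernel receives the data type at execution time instead of being
// registered separately for each one.
using Kernel = std::function<void(DataType)>;

// Before the change the key would have been {device, opType, dataType};
// after it, dataType is no longer part of the registration key.
static std::map<std::pair<Device, OpType>, Kernel> registry;

void registerKernel(Device dev, OpType op, Kernel k) {
    registry[{dev, op}] = std::move(k);
}

void dispatch(Device dev, OpType op, DataType dt) {
    auto it = registry.find({dev, op});
    if (it == registry.end()) {
        std::puts("no kernel registered");
        return;
    }
    it->second(dt); // the kernel filters/switches on dataType internally
}

int main() {
    // One registration now covers both precisions; fp16 is the new path.
    registerKernel(Device::Cuda, OpType::Conv, [](DataType dt) {
        std::puts(dt == DataType::F16 ? "conv fp16" : "conv fp32");
    });
    dispatch(Device::Cuda, OpType::Conv, DataType::F16); // prints "conv fp16"
}

The trade-off in such a scheme: one registry entry per op keeps the registration tables small and lets a single kernel share logic across precisions, at the cost of moving unsupported-type errors from registration time to run time.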
File                         Last commit                                       Date
test_mkl_batch_norm.cc       NNET supports TVM backend and kernels (#78)       2023-04-18 00:26:36 +08:00
test_mkl_concat.cc           Cpu backend2 (#77)                                2023-04-17 12:15:23 +08:00
test_mkl_conv.cc             Modify kernel registration & support fp16 (#205)  2024-01-15 11:02:13 +08:00
test_mkl_conv_transposed.cc  Modify kernel registration & support fp16 (#205)  2024-01-15 11:02:13 +08:00
test_mkl_element_wise.cc     Cpu backend2 (#77)                                2023-04-17 12:15:23 +08:00
test_mkl_extend.cc           Cpu backend2 (#77)                                2023-04-17 12:15:23 +08:00
test_mkl_gather.cc           memory_allocator (#103)                           2023-08-13 13:39:35 +08:00
test_mkl_matmul.cc           memory_allocator (#103)                           2023-08-13 13:39:35 +08:00
test_mkl_pad.cc              Cpu backend2 (#77)                                2023-04-17 12:15:23 +08:00
test_mkl_pooling.cc          Modify kernel registration & support fp16 (#205)  2024-01-15 11:02:13 +08:00
test_mkl_reduce.cc           Modify kernel registration & support fp16 (#205)  2024-01-15 11:02:13 +08:00
test_mkl_reshape.cc          Cpu backend2 (#77)                                2023-04-17 12:15:23 +08:00
test_mkl_resize.cc           memory_allocator (#103)                           2023-08-13 13:39:35 +08:00
test_mkl_slice.cc            Dev for 202303ddl (#66)                           2023-04-18 15:10:33 +08:00
test_mkl_softmax.cc          Cpu backend2 (#77)                                2023-04-17 12:15:23 +08:00
test_mkl_split.cc            Cpu backend2 (#77)                                2023-04-17 12:15:23 +08:00