File | Last commit | Date
test_cuda_G2BMM.cc | ADD: add mkl runtime for intel cpu, and add mkl kernel for matmul/conv/convtransposed. (#61) | 2023-03-27 21:28:49 +08:00
test_cuda_GBMM.cc | ADD: add mkl runtime for intel cpu, and add mkl kernel for matmul/conv/convtransposed. (#61) | 2023-03-27 21:28:49 +08:00
test_cuda_all_gather.cc | impl distributed launch with NCCL (#106) | 2023-09-05 09:47:35 +08:00
test_cuda_all_reduce.cc | impl distributed launch with NCCL (#106) | 2023-09-05 09:47:35 +08:00
test_cuda_attention.cc | [feature] add fused attention_kvcache operator support (#179) | 2023-11-14 23:44:22 +08:00
test_cuda_batch_norm.cc | ADD: add mkl runtime for intel cpu, and add mkl kernel for matmul/conv/convtransposed. (#61) | 2023-03-27 21:28:49 +08:00
test_cuda_broadcast.cc | impl distributed launch with NCCL (#106) | 2023-09-05 09:47:35 +08:00
test_cuda_clip.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_concat.cc | support 8D tensor, add test example (#170) | 2023-10-31 10:47:36 +08:00
test_cuda_conv.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_conv_fp16.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_conv_transposed_2d.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_element_wise.cc | add CUDNN impl for Min and Max (#118) | 2023-08-22 16:19:29 +08:00
test_cuda_expand.cc | Framework support for bert/gpt2 model graph construction (#94) | 2023-08-29 16:06:52 +08:00
test_cuda_extend.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_gather.cc | Framework support for bert/gpt2 model graph construction (#94) | 2023-08-29 16:06:52 +08:00
test_cuda_gather_elements.cc | Add GatherElements op and cuda kernel (#149) | 2023-10-12 09:18:12 +08:00
test_cuda_inception.cc | Pooling ceil mode (#155) | 2023-10-09 20:51:39 +08:00
test_cuda_matmul.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_pad.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_pooling.cc | Pooling ceil mode (#155) | 2023-10-09 20:51:39 +08:00
test_cuda_reduce_mean.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_reshape.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_resize.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_slice.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_softmax.cc | memory_allocator (#103) | 2023-08-13 13:39:35 +08:00
test_cuda_split.cc | support 8D tensor, add test example (#170) | 2023-10-31 10:47:36 +08:00
test_cuda_transpose.cc | Add cuda transpose kernel (#115) | 2023-08-22 14:22:15 +08:00
test_cuda_unary.cc | Add HardSigmoid and HardSwish (#156) | 2023-10-10 22:41:06 +08:00
test_cuda_where.cc | Cuda softmax (#129) | 2023-11-06 08:56:23 +08:00
test_perfengine.cc | ADD: add mkl runtime for intel cpu, and add mkl kernel for matmul/conv/convtransposed. (#61) | 2023-03-27 21:28:49 +08:00
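Each file above is a per-operator CUDA kernel test. For orientation only, the sketch below shows the general shape of such a test using GoogleTest and the plain CUDA runtime API; the kernel, test names, and sizes are hypothetical, and the repository's actual tests go through the project's own runtime and graph APIs rather than raw CUDA calls.

```cpp
// Illustrative sketch only; not the repository's actual test harness.
// Link against gtest_main (no main() needed).
#include <gtest/gtest.h>
#include <cuda_runtime.h>
#include <vector>

// Hypothetical unary kernel under test: elementwise ReLU.
__global__ void reluKernel(const float *in, float *out, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        out[i] = in[i] > 0.f ? in[i] : 0.f;
}

TEST(cuda_unary_sketch, relu) {
    const int n = 1024;
    std::vector<float> host(n), result(n);
    for (int i = 0; i < n; ++i)
        host[i] = static_cast<float>(i - n / 2);

    // Allocate device buffers and copy the input over.
    float *dIn = nullptr, *dOut = nullptr;
    ASSERT_EQ(cudaMalloc(&dIn, n * sizeof(float)), cudaSuccess);
    ASSERT_EQ(cudaMalloc(&dOut, n * sizeof(float)), cudaSuccess);
    ASSERT_EQ(cudaMemcpy(dIn, host.data(), n * sizeof(float),
                         cudaMemcpyHostToDevice), cudaSuccess);

    // Launch the kernel and wait for it to finish.
    reluKernel<<<(n + 255) / 256, 256>>>(dIn, dOut, n);
    ASSERT_EQ(cudaDeviceSynchronize(), cudaSuccess);

    // Copy the result back and compare against a host-side reference.
    ASSERT_EQ(cudaMemcpy(result.data(), dOut, n * sizeof(float),
                         cudaMemcpyDeviceToHost), cudaSuccess);
    for (int i = 0; i < n; ++i)
        EXPECT_FLOAT_EQ(result[i], host[i] > 0.f ? host[i] : 0.f);

    cudaFree(dIn);
    cudaFree(dOut);
}
```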