InfiniTensor/include/cuda
Latest commit: 8b2e3b8e19 "add where fp16" by xgqdut2016 (2023-12-08 16:57:49 +08:00)
cuda_attention_kvcache.h [feature] add fused attention_kvcache operator support (#179) 2023-11-14 23:44:22 +08:00
cuda_clip.h Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
cuda_common.h tensor parallel for transformer (#125) 2023-09-14 14:19:45 +08:00
cuda_element_wise.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
cuda_expand.h "modified where" (#131) 2023-09-14 10:45:57 +08:00
cuda_kernel_wihtout_config.h ADD: batch norm operator and cuda kernel. (#44) 2022-10-15 16:29:28 +08:00
cuda_layernorm.h Add layer normalization (#181) 2023-11-24 15:15:14 +08:00
cuda_pad_slice.h support Dynamic tensor infer shape and fix memory pool (#176) 2023-11-23 13:11:50 +08:00
cuda_runtime.h impl distributed launch with NCCL (#106) 2023-09-05 09:47:35 +08:00
cuda_softmax.h modified all register kernel 2023-12-07 17:53:28 +08:00
cuda_split_concat.h support 8D tensor, add test example (#170) 2023-10-31 10:47:36 +08:00
cuda_transpose.h Add cuda transpose kernel (#115) 2023-08-22 14:22:15 +08:00
cuda_unary.h Add HardSigmoid and HardSwish (#156) 2023-10-10 22:41:06 +08:00
cuda_utility.h - Remove dataType from the kernel registration. 2023-11-30 13:51:24 +08:00
cuda_where.h add where fp16 2023-12-08 16:57:49 +08:00
gather.h Add GatherElements op and cuda kernel (#149) 2023-10-12 09:18:12 +08:00
gbmm_g2bmm.cuh Fix CMake USE_CUDA (#36) 2022-09-21 12:28:00 +08:00
gbmm_g2bmm.h Fix CMake USE_CUDA (#36) 2022-09-21 12:28:00 +08:00
nccl_communicator.h impl distributed launch with NCCL (#106) 2023-09-05 09:47:35 +08:00
operator_timer.h ADD: batch norm operator and cuda kernel. (#44) 2022-10-15 16:29:28 +08:00
resize.cuh ADD: reconfig ResizeObj, support "tf_crop_and_resize " and cubic coeff kernel. (#59) 2022-12-24 04:02:21 +08:00
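The most recent change in this directory ("add where fp16", touching cuda_where.h) adds fp16 support to the Where operator. As a rough illustration of what an elementwise fp16 Where kernel involves, here is a minimal self-contained sketch; the kernel and helper names, the uint8_t condition layout, and the launch configuration are assumptions for illustration and are not taken from InfiniTensor's actual cuda_where.h declarations.

```cuda
#include <cstdint>
#include <cuda_fp16.h>

// Hypothetical elementwise Where kernel over fp16 data: out[i] takes x[i]
// when condition[i] is nonzero, otherwise y[i]. One thread per element.
__global__ void whereKernelFp16(const uint8_t *condition, const half *x,
                                const half *y, half *output, int n) {
    int i = blockIdx.x * blockDim.x + threadIdx.x;
    if (i < n)
        output[i] = condition[i] ? x[i] : y[i];
}

// Hypothetical launch helper: 256 threads per block, grid sized to cover n.
void whereFp16(const uint8_t *condition, const half *x, const half *y,
               half *output, int n) {
    int blockSize = 256;
    int gridSize = (n + blockSize - 1) / blockSize;
    whereKernelFp16<<<gridSize, blockSize>>>(condition, x, y, output, n);
}
```

A production kernel in this codebase would additionally handle broadcasting between the condition, x, and y tensors (compare the "modified where" entry for cuda_expand.h), which this sketch omits.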