InfiniTensor/include/cuda
Latest commit 7f16fa353e by ChengXiang Qi: 【Hackathon No.108】Add Gelu operator, ffi, kernel for cpu and gpu. (#148)
feat: Add Gelu kernel, operator, ffi.
2023-10-10 15:21:13 +08:00
cuda_clip.h Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
cuda_common.h tensor parallel for transformer (#125) 2023-09-14 14:19:45 +08:00
cuda_element_wise.h Dev for 202303ddl (#66) 2023-04-18 15:10:33 +08:00
cuda_expand.h "modified where" (#131) 2023-09-14 10:45:57 +08:00
cuda_kernel_wihtout_config.h ADD: batch norm operator and cuda kernel. (#44) 2022-10-15 16:29:28 +08:00
cuda_pad_slice.h ADD: batch norm operator and cuda kernel. (#44) 2022-10-15 16:29:28 +08:00
cuda_runtime.h impl distributed launch with NCCL (#106) 2023-09-05 09:47:35 +08:00
cuda_split_concat.h ADD: batch norm operator and cuda kernel. (#44) 2022-10-15 16:29:28 +08:00
cuda_transpose.h Add cuda transpose kernel (#115) 2023-08-22 14:22:15 +08:00
cuda_unary.h 【Hackathon No.108】Add Gelu operator, ffi, kernel for cpu and gpu. (#148) 2023-10-10 15:21:13 +08:00
cuda_utility.h Simplify tensor transfer between CPU and CUDA (#10) 2022-08-25 11:29:16 +08:00
cuda_where.h "modified where" (#131) 2023-09-14 10:45:57 +08:00
gather.h Framework support for building bert/gpt2 model graphs (#94) 2023-08-29 16:06:52 +08:00
gbmm_g2bmm.cuh Fix CMake USE_CUDA (#36) 2022-09-21 12:28:00 +08:00
gbmm_g2bmm.h Fix CMake USE_CUDA (#36) 2022-09-21 12:28:00 +08:00
nccl_communicator.h impl distributed launch with NCCL (#106) 2023-09-05 09:47:35 +08:00
operator_timer.h ADD: batch norm operator and cuda kernel. (#44) 2022-10-15 16:29:28 +08:00
resize.cuh ADD: reconfig ResizeObj, support "tf_crop_and_resize " and cubic coeff kernel. (#59) 2022-12-24 04:02:21 +08:00
softmax.h Cpu backend2 (#77) 2023-04-17 12:15:23 +08:00