File                         | Last commit                                                                          | Date
-----------------------------|--------------------------------------------------------------------------------------|---------------------------
cuda_attention_kvcache.h     | [feature] support kvcache with static graph (#209)                                   | 2024-01-25 14:20:43 +08:00
cuda_clip.h                  | Dev for 202303ddl (#66)                                                              | 2023-04-18 15:10:33 +08:00
cuda_common.h                | tensor parallel for transformer (#125)                                               | 2023-09-14 14:19:45 +08:00
cuda_element_wise.h          | Modify kernel registration & support fp16 (#205)                                     | 2024-01-15 11:02:13 +08:00
cuda_expand.h                | Modify kernel registration & support fp16 (#205)                                     | 2024-01-15 11:02:13 +08:00
cuda_kernel_wihtout_config.h | ADD: batch norm operator and cuda kernel. (#44)                                      | 2022-10-15 16:29:28 +08:00
cuda_layernorm.h             | Modify kernel registration & support fp16 (#205)                                     | 2024-01-15 11:02:13 +08:00
cuda_pad_slice.h             | support Dynamic tensor infer shape and fix memory pool (#176)                        | 2023-11-23 13:11:50 +08:00
cuda_runtime.h               | impl distributed launch with NCCL (#106)                                             | 2023-09-05 09:47:35 +08:00
cuda_softmax.h               | Modify kernel registration & support fp16 (#205)                                     | 2024-01-15 11:02:13 +08:00
cuda_split_concat.h          | Modify kernel registration & support fp16 (#205)                                     | 2024-01-15 11:02:13 +08:00
cuda_transpose.h             | Modify kernel registration & support fp16 (#205)                                     | 2024-01-15 11:02:13 +08:00
cuda_unary.h                 | Modify kernel registration & support fp16 (#205)                                     | 2024-01-15 11:02:13 +08:00
cuda_utility.h               | Modify kernel registration & support fp16 (#205)                                     | 2024-01-15 11:02:13 +08:00
cuda_where.h                 | Modify kernel registration & support fp16 (#205)                                     | 2024-01-15 11:02:13 +08:00
gather.h                     | Modify kernel registration & support fp16 (#205)                                     | 2024-01-15 11:02:13 +08:00
gbmm_g2bmm.cuh               | Fix CMake USE_CUDA (#36)                                                             | 2022-09-21 12:28:00 +08:00
gbmm_g2bmm.h                 | Fix CMake USE_CUDA (#36)                                                             | 2022-09-21 12:28:00 +08:00
nccl_communicator.h          | impl distributed launch with NCCL (#106)                                             | 2023-09-05 09:47:35 +08:00
operator_timer.h             | ADD: batch norm operator and cuda kernel. (#44)                                      | 2022-10-15 16:29:28 +08:00
resize.cuh                   | ADD: reconfig ResizeObj, support "tf_crop_and_resize" and cubic coeff kernel. (#59)  | 2022-12-24 04:02:21 +08:00