| File | Last commit | Date |
| --- | --- | --- |
| cuda_attention_kvcache.h | use workspace to optimize kvcache attention | 2024-01-25 10:33:01 +08:00 |
| cuda_clip.h | Dev for 202303ddl (#66) | 2023-04-18 15:10:33 +08:00 |
| cuda_common.h | [feature] add cudagraph support (#215) | 2024-02-21 14:00:25 +08:00 |
| cuda_element_wise.h | Modify kernel registration & support fp16 (#205) | 2024-01-15 11:02:13 +08:00 |
| cuda_expand.h | Modify kernel registration & support fp16 (#205) | 2024-01-15 11:02:13 +08:00 |
| cuda_kernel_wihtout_config.h | ADD: batch norm operator and cuda kernel. (#44) | 2022-10-15 16:29:28 +08:00 |
| cuda_layernorm.h | Modify kernel registration & support fp16 (#205) | 2024-01-15 11:02:13 +08:00 |
| cuda_pad_slice.h | support Dynamic tensor infer shape and fix memory pool (#176) | 2023-11-23 13:11:50 +08:00 |
| cuda_rmsnorm.h | Accelerate llama (#219) | 2024-04-01 08:46:05 +08:00 |
| cuda_rope.h | add test for rotary embedding cuda kernel | 2024-02-04 10:24:20 +08:00 |
| cuda_runtime.h | [feature] add cudagraph support (#215) | 2024-02-21 14:00:25 +08:00 |
| cuda_softmax.h | Modify kernel registration & support fp16 (#205) | 2024-01-15 11:02:13 +08:00 |
| cuda_split_concat.h | Modify kernel registration & support fp16 (#205) | 2024-01-15 11:02:13 +08:00 |
| cuda_transpose.h | Modify kernel registration & support fp16 (#205) | 2024-01-15 11:02:13 +08:00 |
| cuda_unary.h | add rope and silu support | 2024-01-26 10:01:27 +08:00 |
| cuda_utility.h | Modify kernel registration & support fp16 (#205) | 2024-01-15 11:02:13 +08:00 |
| cuda_where.h | Modify kernel registration & support fp16 (#205) | 2024-01-15 11:02:13 +08:00 |
| gather.h | Modify kernel registration & support fp16 (#205) | 2024-01-15 11:02:13 +08:00 |
| gbmm_g2bmm.cuh | Fix CMake USE_CUDA (#36) | 2022-09-21 12:28:00 +08:00 |
| gbmm_g2bmm.h | Fix CMake USE_CUDA (#36) | 2022-09-21 12:28:00 +08:00 |
| nccl_communicator.h | impl distributed launch with NCCL (#106) | 2023-09-05 09:47:35 +08:00 |
| operator_timer.h | ADD: batch norm operator and cuda kernel. (#44) | 2022-10-15 16:29:28 +08:00 |
| resize.cuh | ADD: reconfig ResizeObj, support "tf_crop_and_resize" and cubic coeff kernel. (#59) | 2022-12-24 04:02:21 +08:00 |