text-generation-inference/server/marlin/marlin_kernels

Latest commit: 85c3c5d64f by Daniël de Kok, 2024-09-25 05:27:40 +00:00
Add support for FP8 on compute capability >=8.0, <8.9 (#2213)

Use the FP8 GPTQ-Marlin kernels to enable FP8 support on CUDA GPUs
with compute capability >=8.0 and <8.9.

Co-authored-by: Florian Zimmermeister <flozi00.fz@gmail.com>
sparse Add support for Marlin 2:4 sparsity (#2102) 2024-09-24 03:55:04 +00:00
__init__.pyi Add support for FP8 on compute capability >=8.0, <8.9 (#2213) 2024-09-25 05:27:40 +00:00
ext.cpp Add support for FP8 on compute capability >=8.0, <8.9 (#2213) 2024-09-25 05:27:40 +00:00
ext.hh Add support for FP8 on compute capability >=8.0, <8.9 (#2213) 2024-09-25 05:27:40 +00:00
fp8_marlin.cu Add support for FP8 on compute capability >=8.0, <8.9 (#2213) 2024-09-25 05:27:40 +00:00
gptq_marlin_dtypes.cuh Add support for GPTQ Marlin (#2052) 2024-09-24 03:43:30 +00:00
gptq_marlin_repack.cu Add support for GPTQ Marlin (#2052) 2024-09-24 03:43:30 +00:00
gptq_marlin.cu Add support for GPTQ Marlin (#2052) 2024-09-24 03:43:30 +00:00
gptq_marlin.cuh Add support for GPTQ Marlin (#2052) 2024-09-24 03:43:30 +00:00
marlin_cuda_kernel.cu Add support for GPTQ Marlin (#2052) 2024-09-24 03:43:30 +00:00
py.typed Add support for GPTQ Marlin (#2052) 2024-09-24 03:43:30 +00:00
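The FP8 commit above gates the FP8 GPTQ-Marlin path by CUDA compute capability: it targets devices at or above 8.0 but below 8.9 (Ampere-class GPUs; 8.9 and newer, such as Ada, have hardware FP8 support and take a different path). A minimal sketch of that range check, assuming a hypothetical helper name and illustrative bounds rather than the repository's actual API:

```python
# Hypothetical sketch of the compute-capability gate described in the
# FP8 commit (#2213). The FP8 GPTQ-Marlin kernels target CUDA GPUs with
# compute capability >= 8.0 and < 8.9. The function name and constants
# below are illustrative, not the repository's real identifiers.

FP8_MARLIN_MIN = (8, 0)  # inclusive lower bound (Ampere)
FP8_MARLIN_MAX = (8, 9)  # exclusive upper bound (Ada and newer excluded)


def supports_fp8_marlin(capability: tuple[int, int]) -> bool:
    """Return True if a (major, minor) compute capability falls inside
    the range the FP8 GPTQ-Marlin kernels are compiled for."""
    # Python tuple comparison is lexicographic, so (8, 6) < (8, 9) < (9, 0).
    return FP8_MARLIN_MIN <= capability < FP8_MARLIN_MAX


if __name__ == "__main__":
    print(supports_fp8_marlin((8, 0)))  # A100-class: True
    print(supports_fp8_marlin((8, 6)))  # consumer Ampere: True
    print(supports_fp8_marlin((8, 9)))  # Ada: False
    print(supports_fp8_marlin((7, 5)))  # Turing: False
```

In a real server this tuple would come from something like `torch.cuda.get_device_capability()`; the check itself is just a lexicographic tuple comparison, which is why the half-open range `>=8.0, <8.9` maps directly onto Python's chained comparison.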