text-generation-inference/server/text_generation_server/layers/attention
Mohit Sharma e07acc7f68
Enable FP8 Per-Tensor Scales and Integrate Marlin/MoE Kernels Repo for ROCm (#2825)
* (feat) convert tscales to tensorwise

* (fix) fp8 scaling for cuda

* (kernel) add marlin-kernels

* add moe-kernels

* fix moe kernel commit

* fix scaling

* nm changes
2025-01-15 11:38:58 +05:30
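
The "(feat) convert tscales to tensorwise" bullet above refers to collapsing per-channel FP8 scales into a single per-tensor scale. A minimal sketch of that idea, assuming per-row scales and torch's float8_e4m3fn dtype; the function name and constant below are hypothetical and not taken from the repository:

    import torch

    FP8_E4M3_MAX = 448.0  # largest finite value representable in float8_e4m3fn

    def per_channel_to_per_tensor(weight_fp8: torch.Tensor,
                                  channel_scales: torch.Tensor):
        """Requantize a per-channel-scaled FP8 weight to one tensor-wide scale."""
        # Dequantize back to higher precision using the per-row scales
        # (assumes one scale per output row).
        weight_fp32 = weight_fp8.to(torch.float32) * channel_scales.view(-1, 1)
        # A single scale must cover the largest magnitude in the whole tensor.
        tensor_scale = weight_fp32.abs().max() / FP8_E4M3_MAX
        # Requantize with the per-tensor scale, clamping to the FP8 range.
        requantized = (weight_fp32 / tensor_scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX)
        return requantized.to(torch.float8_e4m3fn), tensor_scale
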
__init__.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
common.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
cuda.py Basic flashinfer 0.2 support (#2862) 2025-01-09 16:25:00 +01:00
flash_attn_triton.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
flashinfer.py Basic flashinfer 0.2 support (#2862) 2025-01-09 16:25:00 +01:00
ipex.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
kv_cache.py Update vllm kernels for ROCM (#2826) 2024-12-18 12:44:42 +01:00
rocm.py Enable FP8 Per-Tensor Scales and Integrate Marlin/MoE Kernels Repo for ROCm (#2825) 2025-01-15 11:38:58 +05:30
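
Several of the files listed above handle FP8 KV-cache scales (__init__.py, ipex.py, kv_cache.py). As a rough illustration only, with names and layout that are hypothetical rather than the repository's API, storing and loading keys/values with per-tensor FP8 scales might look like:

    import torch

    FP8_E4M3_MAX = 448.0  # largest finite float8_e4m3fn value

    def store_kv_fp8(key: torch.Tensor, value: torch.Tensor,
                     k_scale: float, v_scale: float):
        """Quantize new key/value tensors with per-tensor scales before caching."""
        k_q = (key / k_scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
        v_q = (value / v_scale).clamp(-FP8_E4M3_MAX, FP8_E4M3_MAX).to(torch.float8_e4m3fn)
        return k_q, v_q

    def load_kv_fp8(k_q: torch.Tensor, v_q: torch.Tensor,
                    k_scale: float, v_scale: float, dtype=torch.float16):
        """Dequantize cached keys/values back to the compute dtype."""
        return k_q.to(dtype) * k_scale, v_q.to(dtype) * v_scale
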