text-generation-inference/server/text_generation_server/layers/compressed_tensors
Latest commit: e07acc7f68 by Mohit Sharma
Enable FP8 Per-Tensor Scales and Integrate Marlin/MoE Kernels Repo for ROCm (#2825)
* (feat) convert scales to tensorwise

* (fix) fp8 scaling for cuda

* (kernel) add marlin-kernels

* add moe-kernels

* fix moe kernel commit

* fix scaling

* nm changes
2025-01-15 11:38:58 +05:30
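
The headline change, collapsing per-channel FP8 scales into a single per-tensor scale, is a standard collapse-and-requantize step. Below is a minimal sketch of that idea, assuming torch FP8 dtypes and a max-scale collapse rule; the function name and exact rule are illustrative, not the repository's code.

import torch

def per_channel_to_per_tensor_fp8(weight_fp8, channel_scales):
    # weight_fp8: quantized weight in torch.float8_e4m3fn, shape [out, in]
    # channel_scales: one float32 scale per output channel, shape [out]
    # Dequantize with the per-channel scales to recover the full weight.
    weight = weight_fp8.to(torch.float32) * channel_scales.unsqueeze(1)
    # Take the largest per-channel scale as the shared per-tensor scale,
    # so no channel overflows the FP8 range after requantization.
    tensor_scale = channel_scales.max().reshape(1)
    # Requantize every channel with the shared scale.
    fp8_max = torch.finfo(torch.float8_e4m3fn).max
    requantized = (weight / tensor_scale).clamp(-fp8_max, fp8_max)
    return requantized.to(torch.float8_e4m3fn), tensor_scale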
File             Last commit message                                                                   Last updated
__init__.py      Add initial support for compressed-tensors checkpoints (#2732)                       2024-11-10 13:54:07 +01:00
loader.py        Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)                 2024-11-20 18:25:23 +01:00
w8a8_int.py      Add support for compressed-tensors w8a8 int checkpoints (#2745)                      2024-11-18 17:20:31 +01:00
w8an_fp.py       Enable FP8 Per-Tensor Scales and Integrate Marlin/MoE Kernels Repo for ROCm (#2825)  2025-01-15 11:38:58 +05:30
wna16_int_24.py  Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)                 2024-11-20 18:25:23 +01:00
wna16_int.py     Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)                 2024-11-20 18:25:23 +01:00
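
Each file above implements one quantization scheme, with loader.py selecting among them. A hypothetical sketch of that dispatch, keyed on compressed-tensors format names; the mapping and function are illustrative assumptions, not the actual loader API.

# Hypothetical mapping from compressed-tensors quantization formats to the
# scheme modules in this directory; keys and structure are assumptions.
SCHEME_MODULES = {
    "int-quantized": "w8a8_int",     # int8 weights and activations (#2745)
    "float-quantized": "w8an_fp",    # FP8 weights, per-tensor scales (#2825)
    "pack-quantized": "wna16_int",   # packed wNa16 int weights (#2758)
    "marlin-24": "wna16_int_24",     # wNa16 int with 2:4 sparsity (#2758)
}

def scheme_module_for(format_name: str) -> str:
    # Resolve a checkpoint's declared quantization format to the module
    # that knows how to load its weights.
    try:
        return SCHEME_MODULES[format_name]
    except KeyError:
        raise ValueError(f"unsupported compressed-tensors format: {format_name!r}")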