text-generation-inference/server/text_generation_server/layers/gptq
Latest commit: 704a58c807 — Fp8 e4m3_fnuz support for rocm (#2588) by Mohit Sharma, 2024-10-16 09:54:50 +02:00

Commit body:
* (feat) fp8 fnuz support for rocm
* (review comments) Fix compression_config load, type hints
* (bug) update all has_tensor
* (review comments) fix typo and added comments
* (nit) improved comment
__init__.py        — Fp8 e4m3_fnuz support for rocm (#2588) — 2024-10-16 09:54:50 +02:00
custom_autotune.py — Some small fixes for the Torch 2.4.0 update (#2304) — 2024-07-25 13:34:44 +02:00
exllama.py         — Fix GPTQWeight import (#2020) — 2024-06-05 14:49:15 +02:00
exllamav2.py       — Upgrading exl2. (#2415) — 2024-08-14 11:58:08 +02:00
quant_linear.py    — feat: add ruff and resolve issue (#2262) — 2024-07-26 10:29:09 -04:00
quantize.py        — server quantize: store quantizer config in standard format (#2299) — 2024-07-30 15:16:20 +02:00
utils.py           — feat: add ruff and resolve issue (#2262) — 2024-07-26 10:29:09 -04:00