text-generation-inference/server/text_generation_server

Latest commit c29dc89c18 by Daniël de Kok (2024-09-24 13:57:40 +02:00):

Add support for scalar FP8 weight scales (#2550)
* Add support for scalar FP8 weight scales

* Support LLM compressor FP8 checkpoints on H100

On H100, we use fbgemm-gpu, which requires bfloat16 as the input dtype.
However, FP8 quantization was not picked up for models quantized with
LLM compressor. This change adds enough parsing to detect whether a
model's weights are FP8-quantized (see the sketches after this list).

* Remove stray debug print
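As a worked example of the first item, here is a minimal sketch of applying an FP8 weight scale that may be a scalar (per-tensor) value rather than a per-channel vector. It is illustrative only: `dequantize_fp8` is a hypothetical helper, not the actual TGI code, and it assumes a PyTorch build with FP8 tensor support.

```python
import torch


def dequantize_fp8(weight_fp8: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # Hypothetical helper, not the actual TGI implementation.
    if scale.numel() == 1:
        # Scalar (per-tensor) scale: a 0-dim tensor broadcasts over the
        # whole weight matrix.
        scale = scale.reshape(())
    else:
        # Per-channel scale: one entry per output row.
        scale = scale.reshape(-1, 1)
    # fbgemm-gpu on H100 expects bfloat16 inputs, so dequantize to bfloat16.
    return weight_fp8.to(torch.bfloat16) * scale.to(torch.bfloat16)


w = torch.randn(4096, 4096).to(torch.float8_e4m3fn)
print(dequantize_fp8(w, torch.tensor(0.05)).dtype)  # torch.bfloat16
```

With a scalar scale, a single multiplication covers the whole matrix; a per-channel scale simply broadcasts row-wise, so both layouts can share one code path.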
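For the checkpoint detection described in the second item, a sketch of the kind of parsing involved: reading `quantization_config` from a model's `config.json`. The field names follow the compressed-tensors format emitted by LLM compressor; `is_fp8_quantized` is a hypothetical name, and the actual parsing in `text_generation_server` may differ.

```python
import json
from pathlib import Path


def is_fp8_quantized(model_dir: str) -> bool:
    # Hypothetical helper, not the actual TGI function.
    config = json.loads((Path(model_dir) / "config.json").read_text())
    quant = config.get("quantization_config")
    if quant is None:
        return False
    # LLM compressor checkpoints use the compressed-tensors format:
    # weight schemes live under "config_groups", and FP8 weights are
    # described as 8-bit floats.
    if quant.get("quant_method") == "compressed-tensors":
        for group in quant.get("config_groups", {}).values():
            weights = group.get("weights") or {}
            if weights.get("type") == "float" and weights.get("num_bits") == 8:
                return True
        return False
    # Other producers mark FP8 via the quantization method itself.
    return quant.get("quant_method") in ("fbgemm_fp8", "fp8")
```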
Name            Last commit                                          Date
adapters        feat: add ruff and resolve issue (#2262)             2024-07-26 10:29:09 -04:00
layers          Add support for scalar FP8 weight scales (#2550)     2024-09-24 13:57:40 +02:00
models          Add support for scalar FP8 weight scales (#2550)     2024-09-24 13:57:40 +02:00
pb              chore: add pre-commit (#1569)                        2024-02-16 11:58:58 +01:00
utils           Micro cleanup. (#2555)                               2024-09-24 11:19:24 +02:00
__init__.py     feat(clients): Python client (#103)                  2023-03-07 18:52:22 +01:00
cache.py        fix(server): decrease memory fragmentation (#557)    2023-07-06 14:28:33 +02:00
cli.py          feat: add ruff and resolve issue (#2262)             2024-07-26 10:29:09 -04:00
interceptor.py  v2.0.0 (#1736)                                       2024-04-12 18:38:34 +02:00
server.py       Upgrading exl2. (#2415)                              2024-08-14 11:58:08 +02:00
tracing.py      Add OTLP Service Name Environment Variable (#2076)   2024-06-25 09:33:01 +02:00