text-generation-inference/server/text_generation_server
Latest commit: cb150eb295 by Daniël de Kok, 2024-07-11 16:03:26 +02:00

Add support for FP8 on compute capability >=8.0, <8.9 (#2213)

Use FP8 GPTQ-Marlin kernels to enable FP8 support on CUDA GPUs with compute capability >=8.0 and <8.9.

Co-authored-by: Florian Zimmermeister <flozi00.fz@gmail.com>
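The cutoff in the commit title reflects the hardware: GPUs at compute capability 8.9 and above (Ada, Hopper) have native FP8 support, while the 8.0–8.8 range only gains FP8 through the GPTQ-Marlin kernels. A minimal sketch of such a capability gate, assuming PyTorch is available and using a hypothetical helper name rather than the repository's actual API:

```python
import torch

def needs_fp8_marlin() -> bool:
    """Hypothetical gate: True when FP8 must go through the
    GPTQ-Marlin kernels (compute capability >=8.0 and <8.9)."""
    major, minor = torch.cuda.get_device_capability()
    return (8, 0) <= (major, minor) < (8, 9)
```

On devices at (8, 9) or above, a native FP8 path could be taken instead of the Marlin fallback.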
Name            Last commit                                                       Last commit date
adapters        Enable multiple LoRa adapters (#2010)                             2024-06-25 14:46:27 -04:00
layers          Add support for FP8 on compute capability >=8.0, <8.9 (#2213)     2024-07-11 16:03:26 +02:00
models          Move quantized weight handling out of the Weights class (#2194)   2024-07-09 20:04:03 +02:00
pb              chore: add pre-commit (#1569)                                     2024-02-16 11:58:58 +01:00
utils           Move quantized weight handling out of the Weights class (#2194)   2024-07-09 20:04:03 +02:00
__init__.py     feat(clients): Python client (#103)                               2023-03-07 18:52:22 +01:00
cache.py        fix(server): decrease memory fragmentation (#557)                 2023-07-06 14:28:33 +02:00
cli.py          Enable multiple LoRa adapters (#2010)                             2024-06-25 14:46:27 -04:00
interceptor.py  v2.0.0 (#1736)                                                    2024-04-12 18:38:34 +02:00
server.py       Enable multiple LoRa adapters (#2010)                             2024-06-25 14:46:27 -04:00
tracing.py      Add OTLP Service Name Environment Variable (#2076)                2024-06-25 09:33:01 +02:00