text-generation-inference/server/text_generation_server
OlivierDehaene 85f10ec5c9 feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)
* feat(fp8): add support for fbgemm

* allow loading fp8 weights directly

* update outlines

* fix makefile

* build fbgemm

* avoid circular import and fix dockerfile

* add default dtype

* refactored weights loader

* fix auto conversion

* fix quantization config parsing

* force new nccl on install

* missing get_weights implementation

* increase timeout
2024-09-25 05:30:41 +00:00
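
For context, the commit above moves the FP8 path onto fbgemm GEMM kernels and lets checkpoints that already ship fp8 weights be loaded directly instead of being converted on the fly. Below is a minimal, illustrative sketch of the underlying idea only: per-row FP8 (e4m3) quantization of a weight matrix with a scale kept for dequantization. The helper names are hypothetical and the reference matmul is plain PyTorch in bf16, not the fbgemm rowwise kernel path used in the repository.

```python
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn

def quantize_fp8_per_row(weight: torch.Tensor):
    """Hypothetical helper: quantize a [out_features, in_features] weight
    to float8_e4m3fn with one scale per output row."""
    # per-row absolute maximum -> scale so each row fits the fp8 range
    amax = weight.abs().amax(dim=1, keepdim=True).clamp(min=1e-12)
    scale = amax / FP8_MAX
    qweight = (weight / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)
    return qweight, scale.squeeze(1)

def fp8_linear(x: torch.Tensor, qweight: torch.Tensor, scale: torch.Tensor):
    """Reference path: dequantize and run the matmul in bf16. A serving stack
    would instead dispatch to a fused fp8 GEMM kernel (e.g. fbgemm rowwise)."""
    w = qweight.to(torch.bfloat16) * scale.to(torch.bfloat16).unsqueeze(1)
    return x @ w.t()

if __name__ == "__main__":
    w = torch.randn(128, 256, dtype=torch.bfloat16)
    x = torch.randn(4, 256, dtype=torch.bfloat16)
    qw, s = quantize_fp8_per_row(w)
    print(fp8_linear(x, qw, s).shape)  # torch.Size([4, 128])
```

A checkpoint that already stores `qweight` and `scale` in this form can skip the quantization step entirely at load time, which is the "load fp8 weights directly" part of the change.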
adapters Enable multiple LoRa adapters (#2010) 2024-09-24 03:55:04 +00:00
layers feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) 2024-09-25 05:30:41 +00:00
models feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) 2024-09-25 05:30:41 +00:00
pb chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
utils feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) 2024-09-25 05:30:41 +00:00
__init__.py feat(clients): Python client (#103) 2023-03-07 18:52:22 +01:00
cache.py fix(server): decrease memory fragmentation (#557) 2023-07-06 14:28:33 +02:00
cli.py feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) 2024-09-25 05:30:41 +00:00
interceptor.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
server.py Enable multiple LoRa adapters (#2010) 2024-09-24 03:55:04 +00:00
tracing.py Add OTLP Service Name Environment Variable (#2076) 2024-09-24 03:51:26 +00:00