text-generation-inference/server/text_generation_server
Latest commit: db922eb77e by Daniël de Kok, 2025-01-27 11:42:36 +01:00
Update to attention-kernels 0.2.0 (#2950)
This version removes our patches/custom API, which makes it simpler to
pull in changes from upstream. One such change is that we can now enable
the FP8 KV cache for paged attention as well.
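As context for the commit above, here is a minimal sketch of what an FP8 KV cache for paged attention amounts to, in plain PyTorch. The shapes and names (num_blocks, block_size, k_scale, and so on) are illustrative assumptions for this sketch, not the actual attention-kernels 0.2.0 API.

    # Minimal sketch, assuming PyTorch >= 2.1 for float8 dtypes; shapes and
    # names here are illustrative, not the attention-kernels API.
    import torch

    num_blocks, block_size = 1024, 16   # paged cache: blocks of `block_size` tokens
    num_kv_heads, head_dim = 8, 128

    # Storing K/V in FP8 (e4m3) roughly halves cache memory vs. fp16; a scale
    # factor is kept alongside so the kernel can dequantize during attention.
    kv_dtype = torch.float8_e4m3fn
    key_cache = torch.empty(num_blocks, num_kv_heads, head_dim, block_size,
                            dtype=kv_dtype, device="cuda")
    value_cache = torch.empty_like(key_cache)
    k_scale = torch.tensor(1.0, device="cuda")  # calibrated at runtime in practice
    v_scale = torch.tensor(1.0, device="cuda")

    # Writing new fp16 keys into one token slot of the first cache block:
    new_k = torch.randn(num_kv_heads, head_dim, dtype=torch.float16, device="cuda")
    key_cache[0, :, :, 0] = (new_k / k_scale).to(kv_dtype)

The point of the upstream change is that the paged-attention kernels can read this FP8 layout directly, so the cache no longer has to be kept in fp16.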
adapters/        feat: improve star coder to support multi lora layers (#2883)  2025-01-16 16:23:55 -05:00
layers/          Update to attention-kernels 0.2.0 (#2950)                       2025-01-27 11:42:36 +01:00
models/          Transformers backend TP fix (#2945)                             2025-01-23 18:09:57 +01:00
pb/              chore: add pre-commit (#1569)                                   2024-02-16 11:58:58 +01:00
utils/           Flash Transformers modeling backend support (#2913)             2025-01-21 10:01:51 +01:00
__init__.py      feat(clients): Python client (#103)                             2023-03-07 18:52:22 +01:00
cache.py         fix(server): decrease memory fragmentation (#557)               2023-07-06 14:28:33 +02:00
cli.py           Fixing TRTLLM dockerfile. (#2922)                               2025-01-20 11:13:46 +01:00
interceptor.py   feat: prefill chunking (#2600)                                  2024-10-16 12:49:33 +02:00
server.py        Tmp tp transformers (#2942)                                     2025-01-23 18:07:30 +01:00
tracing.py       Add OTLP Service Name Environment Variable (#2076)              2024-06-25 09:33:01 +02:00