text-generation-inference/server/text_generation_server
drbh 56670398f3 fix: handle batches with and without grammars (#1676)
This PR correctly handles batches containing a mixture of constrained and
unconstrained generations.

Currently, if a batch contains mixed generations, generation throws an
error because it incorrectly attempts to constrain a request with an
empty grammar.

We now handle `None` grammars and only apply the mask when needed.

Fixes:
https://github.com/huggingface/text-generation-inference/issues/1643
2024-04-25 14:06:48 +03:00
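The fix described above can be sketched as follows. This is an illustrative Python sketch only, not TGI's actual implementation: the names `apply_grammar_masks` and `allowed_tokens` are hypothetical stand-ins for the server's grammar FSM machinery. The key point is that rows whose request has no grammar (`None` or empty) are passed through untouched, while the token mask is applied only to constrained rows.

```python
from typing import Dict, List, Optional, Set


def apply_grammar_masks(
    logits: List[List[float]],
    grammars: List[Optional[str]],
    allowed_tokens: Dict[str, Set[int]],
) -> List[List[float]]:
    """Mask logits row-by-row, skipping requests without a grammar.

    `allowed_tokens` is a hypothetical lookup from grammar to the set of
    token ids the grammar currently permits (in TGI this would come from
    the grammar FSM state, not a static dict).
    """
    masked = []
    for row, grammar in zip(logits, grammars):
        if not grammar:
            # None or "" means an unconstrained request: no mask applied.
            masked.append(row)
            continue
        allowed = allowed_tokens[grammar]
        # Disallowed tokens are set to -inf so they are never sampled.
        masked.append(
            [v if i in allowed else float("-inf") for i, v in enumerate(row)]
        )
    return masked
```

With this guard in place, a batch mixing grammar and non-grammar requests no longer errors out: only the constrained rows are filtered.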
models feat: cohere (#1660) 2024-04-25 12:39:14 +03:00
pb chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
utils fix: handle batches with and without grammars (#1676) 2024-04-25 14:06:48 +03:00
__init__.py feat(clients): Python client (#103) 2023-03-07 18:52:22 +01:00
cache.py fix(server): decrease memory fragmentation (#557) 2023-07-06 14:28:33 +02:00
cli.py Revamp medusa implementation so that every model can benefit. (#1588) 2024-04-25 09:13:03 +03:00
habana_quantization_env.py Add Habana copyright header (#122) 2024-04-08 18:06:21 +02:00
interceptor.py Add Habana copyright header (#122) 2024-04-08 18:06:21 +02:00
server.py fix: fix gpt-q with groupsize = -1 (#1358) 2024-04-19 15:05:50 +03:00
tgi_service.py Speculative (#1308) 2024-04-18 12:39:39 +00:00
tracing.py feat(clients): Python client (#103) 2023-03-07 18:52:22 +01:00