text-generation-inference/server/text_generation_server/utils
drbh 56670398f3 fix: handle batches with and without grammars (#1676)
This PR correctly handles batches containing a mixture of constrained and
unconstrained generations.

Currently, if a batch contains mixed generations, generation will throw
an error because it incorrectly attempts to constrain a request with an
empty grammar.

We now handle `None` grammars and only apply the mask when needed.

Fixes:
https://github.com/huggingface/text-generation-inference/issues/1643
2024-04-25 14:06:48 +03:00
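The fix described above amounts to checking each request's grammar before masking. A minimal sketch of that idea, in plain Python with illustrative names (not TGI's actual API — the real logic lives in `logits_process.py` and operates on tensors):

```python
# Hypothetical sketch: per-request grammar masking in a mixed batch.
# `logits_rows` is one row of logits per request; `grammars` holds each
# request's grammar, or None for unconstrained requests.

def apply_grammar_masks(logits_rows, grammars, mask_fn):
    """Apply mask_fn only to rows whose grammar is not None."""
    out = []
    for logits, grammar in zip(logits_rows, grammars):
        if grammar is None:
            # Unconstrained request: pass logits through untouched.
            out.append(logits)
        else:
            # Constrained request: mask tokens the grammar disallows.
            out.append(mask_fn(logits, grammar))
    return out
```

The key point is the `None` check: before this fix, every request in the batch was sent through the masking path, so an unconstrained request with an empty grammar raised an error.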
..
awq ROCm AWQ support (#1514) 2024-04-24 09:21:34 +00:00
gptq chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
__init__.py Add Habana copyright header (#122) 2024-04-08 18:06:21 +02:00
convert.py fit for baichuan models (#981) 2023-09-08 16:51:34 +02:00
debug.py Add Habana copyright header (#122) 2024-04-08 18:06:21 +02:00
dist.py Add changes from Optimum Habana's TGI folder 2023-12-05 11:12:16 +01:00
flash_attn.py Fix missing make target platform for local install: 'install-flash-attention-v2' (#1414) 2024-04-22 09:18:00 +03:00
hub.py Revamp medusa implementation so that every model can benefit. (#1588) 2024-04-25 09:13:03 +03:00
import_utils.py Add RoCm support (#1243) 2023-11-27 14:08:12 +01:00
layers.py Revamp medusa implementation so that every model can benefit. (#1588) 2024-04-25 09:13:03 +03:00
log.py v1.3.4 2024-04-22 09:08:34 +03:00
logits_process.py fix: handle batches with and without grammars (#1676) 2024-04-25 14:06:48 +03:00
paged_attention.py chore: formatting 2024-04-18 16:26:00 +03:00
peft.py fix: fix local loading for .bin models (#1419) 2024-04-22 09:17:52 +03:00
speculate.py chore: formatting 2024-04-18 16:26:00 +03:00
tokens.py fix: Handle concurrent grammar requests (#1610) 2024-04-25 10:11:40 +03:00
watermark.py Add changes from Optimum Habana's TGI folder 2023-12-05 11:12:16 +01:00
weights.py feat: experimental support for cuda graphs (#1428) 2024-04-24 13:15:45 +03:00