Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-04-22 15:32:08 +00:00.
This PR correctly handles batches with a mixture of constrained and non-constrained generations. Currently, if a batch contains mixed generations, generation throws an error because it incorrectly attempts to constrain a request with an empty grammar. We now handle `None` grammars and only apply the mask when needed.

Fixes: https://github.com/huggingface/text-generation-inference/issues/1643
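As a rough sketch of the behavior described above (not the actual text-generation-inference implementation), the example below applies a grammar-derived token mask per request and simply skips rows whose grammar produced no mask (`None`), so unconstrained requests in a mixed batch pass through untouched. The function name `apply_grammar_masks` and its signature are invented for illustration.

```python
from typing import List, Optional
import torch


def apply_grammar_masks(
    logits: torch.Tensor,                    # [batch_size, vocab_size]
    allowed_ids: List[Optional[List[int]]],  # per request: allowed token ids, or None if unconstrained
) -> torch.Tensor:
    """Mask logits row by row, skipping requests that have no grammar."""
    for i, allowed in enumerate(allowed_ids):
        if allowed is None:
            # Unconstrained request: leave its logits untouched instead of
            # constraining it with an empty grammar (the old failure mode).
            continue
        # Disallow every token except those permitted by the grammar.
        mask = torch.full_like(logits[i], float("-inf"))
        mask[torch.tensor(allowed)] = 0.0
        logits[i] = logits[i] + mask
    return logits


# Example: a mixed batch where only the first request is grammar-constrained.
logits = torch.randn(2, 8)
out = apply_grammar_masks(logits, allowed_ids=[[1, 3, 5], None])
```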
Directory listing:

- awq
- gptq
- __init__.py
- convert.py
- debug.py
- dist.py
- flash_attn.py
- hub.py
- import_utils.py
- layers.py
- log.py
- logits_process.py
- paged_attention.py
- peft.py
- speculate.py
- tokens.py
- watermark.py
- weights.py