text-generation-inference/server/text_generation_server/utils
drbh d4aebbd10a fix: correctly index into mask when applying grammar (#1618)
This PR fixes how the grammar mask is indexed when generating text and
adds a new test to ensure that grammars work with non-flash models
2024-04-25 10:16:16 +03:00
awq ROCm AWQ support (#1514) 2024-04-24 09:21:34 +00:00
gptq chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
__init__.py Add Habana copyright header (#122) 2024-04-08 18:06:21 +02:00
convert.py fit for baichuan models (#981) 2023-09-08 16:51:34 +02:00
debug.py Add Habana copyright header (#122) 2024-04-08 18:06:21 +02:00
dist.py Add changes from Optimum Habana's TGI folder 2023-12-05 11:12:16 +01:00
flash_attn.py Fix missing make target platform for local install: 'install-flash-attention-v2' (#1414) 2024-04-22 09:18:00 +03:00
hub.py Revamp medusa implementation so that every model can benefit. (#1588) 2024-04-25 09:13:03 +03:00
import_utils.py Add RoCm support (#1243) 2023-11-27 14:08:12 +01:00
layers.py Revamp medusa implementation so that every model can benefit. (#1588) 2024-04-25 09:13:03 +03:00
log.py v1.3.4 2024-04-22 09:08:34 +03:00
logits_process.py fix: correctly index into mask when applying grammar (#1618) 2024-04-25 10:16:16 +03:00
paged_attention.py chore: formatting 2024-04-18 16:26:00 +03:00
peft.py fix: fix local loading for .bin models (#1419) 2024-04-22 09:17:52 +03:00
speculate.py chore: formatting 2024-04-18 16:26:00 +03:00
tokens.py fix: Handle concurrent grammar requests (#1610) 2024-04-25 10:11:40 +03:00
watermark.py Add changes from Optimum Habana's TGI folder 2023-12-05 11:12:16 +01:00
weights.py feat: experimental support for cuda graphs (#1428) 2024-04-24 13:15:45 +03:00