text-generation-inference/server/text_generation_server/utils
Latest commit: a1b65e5919 by OlivierDehaene (2024-04-25 17:51:35 +03:00)
fix: fix CohereForAI/c4ai-command-r-plus (#1707)
@Narsil @drbh this will update flash attention v2 and vllm. You will need to re-install them.
Name                Last updated                Last commit
awq/                2024-04-24 09:21:34 +00:00  ROCm AWQ support (#1514)
gptq/               2024-04-24 15:32:02 +03:00  chore: add pre-commit (#1569)
__init__.py         2024-04-08 18:06:21 +02:00  Add Habana copyright header (#122)
convert.py          2024-04-25 15:10:53 +03:00  Force weights_only (before fully breaking pickle files anyway). (#1710)
debug.py            2024-04-08 18:06:21 +02:00  Add Habana copyright header (#122)
dist.py             2023-12-05 11:12:16 +01:00  Add changes from Optimum Habana's TGI folder
flash_attn.py       2024-04-25 17:51:35 +03:00  fix: fix CohereForAI/c4ai-command-r-plus (#1707)
hub.py              2024-04-25 09:13:03 +03:00  Revamp medusa implementation so that every model can benefit. (#1588)
import_utils.py     2023-11-27 14:08:12 +01:00  Add RoCm support (#1243)
layers.py           2024-04-25 17:51:35 +03:00  fix: fix CohereForAI/c4ai-command-r-plus (#1707)
log.py              2024-04-22 09:08:34 +03:00  v1.3.4
logits_process.py   2024-04-25 14:06:48 +03:00  fix: handle batches with and without grammars (#1676)
paged_attention.py  2024-04-25 17:51:35 +03:00  fix: fix CohereForAI/c4ai-command-r-plus (#1707)
peft.py             2024-04-22 09:17:52 +03:00  fix: fix local loading for .bin models (#1419)
speculate.py        2024-04-18 16:26:00 +03:00  chore: formatting
tokens.py           2024-04-25 10:11:40 +03:00  fix: Handle concurrent grammar requests (#1610)
watermark.py        2023-12-05 11:12:16 +01:00  Add changes from Optimum Habana's TGI folder
weights.py          2024-04-24 13:15:45 +03:00  feat: experimental support for cuda graphs (#1428)