text-generation-inference/server/text_generation_server/utils
Last commit: 2024-07-03 10:57:41 +02:00
Name                Last commit message                                                         Last commit date
------------------  --------------------------------------------------------------------------  --------------------------
awq/                ROCm AWQ support (#1514)                                                    2024-04-24 09:21:34 +00:00
gptq/               chore: add pre-commit (#1569)                                               2024-04-24 15:32:02 +03:00
__init__.py         Pad next token chooser parameters with empty logits processors (#151)       2024-05-29 22:43:56 +02:00
convert.py          Force weights_only (before fully breaking pickle files anyway). (#1710)     2024-04-25 15:10:53 +03:00
debug.py            Add Habana copyright header (#122)                                          2024-04-08 18:06:21 +02:00
dist.py             add intel xpu support for TGI (#1475)                                       2024-06-10 13:16:45 +03:00
flash_attn.py       add intel xpu support for TGI (#1475)                                       2024-06-10 13:16:45 +03:00
hub.py              Revamp medusa implementation so that every model can benefit. (#1588)       2024-04-25 09:13:03 +03:00
import_utils.py     Dummy CI run. (#1817)                                                       2024-06-10 13:57:59 +03:00
layers.py           fix: use get_speculate to the number of layers (#1737)                      2024-06-10 14:02:23 +03:00
log.py              v1.3.4                                                                      2024-04-22 09:08:34 +03:00
logits_process.py   Fix dtype mismatch in HeterogeneousFrequencyPenaltyLogitsProcessor (#163)   2024-07-03 10:57:41 +02:00
paged_attention.py  add intel xpu support for TGI (#1475)                                       2024-06-10 13:16:45 +03:00
peft.py             fix: fix local loading for .bin models (#1419)                              2024-04-22 09:17:52 +03:00
speculate.py        chore: formatting                                                           2024-04-18 16:26:00 +03:00
tokens.py           Use the generation config. (#1808)                                          2024-06-10 09:53:00 +03:00
watermark.py        Add changes from Optimum Habana's TGI folder                                2023-12-05 11:12:16 +01:00
weights.py          Phi3 support (#1797)                                                        2024-06-10 09:27:01 +03:00