text-generation-inference/server/text_generation_server/models
Latest commit 938a7f3c3a by Wang, Yi (2024-09-25 06:13:36 +00:00):
hotfix: fix regression of attention api change in intel platform (#2439)

Fix the regression caused by the attention API change: ipex.varlen_attention does not currently support KV input in the paged-cache format.

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
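For context on the constraint the commit describes: with paged attention, keys and values are scattered into fixed-size cache blocks addressed through a block table, while ipex.llm.functional.varlen_attention expects plain contiguous (total_tokens, num_heads, head_size) tensors. Below is a minimal sketch of a prefill call under that constraint; the prefill_attention helper and its wiring are hypothetical, and only the varlen_attention call follows the signature documented for intel-extension-for-pytorch.

```python
import torch
import intel_extension_for_pytorch as ipex


def prefill_attention(
    query: torch.Tensor,       # (total_tokens, num_heads, head_size)
    key: torch.Tensor,         # contiguous per-token keys, same layout as query
    value: torch.Tensor,       # contiguous per-token values
    cu_seqlens: torch.Tensor,  # (batch_size + 1,) cumulative sequence lengths
    max_seqlen: int,
    softmax_scale: float,
) -> torch.Tensor:
    # `key`/`value` must be the tensors produced by the QKV projection for
    # this batch, taken before they are scattered into paged KV-cache blocks;
    # varlen_attention has no notion of the paged layout.
    out = torch.empty_like(query)
    ipex.llm.functional.varlen_attention(
        query, key, value, out,
        cu_seqlens, cu_seqlens,  # query and key offsets coincide at prefill
        max_seqlen, max_seqlen,
        0.0,                     # dropout probability
        softmax_scale,
        False,                   # zero_tensors
        True,                    # is_causal
        False,                   # return_softmax
        None,                    # random generator
    )
    return out
```

Presumably this is what the commit body means by formatting the KV input: the contiguous tensors are handed to IPEX, while the paged cache is still maintained separately for later decode steps.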
Name                 | Last commit                                                              | Last updated
custom_modeling      | hotfix: fix regression of attention api change in intel platform (#2439) | 2024-09-25 06:13:36 +00:00
__init__.py          | feat: support lora revisions and qkv_proj weights (#2482)                | 2024-09-25 06:13:11 +00:00
bloom.py             | Refactor dead code - Removing all flash_xxx.py files. (#2166)            | 2024-09-25 05:20:28 +00:00
causal_lm.py         | Fixing exl2 and other quanize tests again. (#2419)                       | 2024-09-25 06:08:38 +00:00
flash_causal_lm.py   | Lots of improvements (Still 2 allocators) (#2449)                        | 2024-09-25 06:13:11 +00:00
galactica.py         | feat: add ruff and resolve issue (#2262)                                 | 2024-09-25 05:46:24 +00:00
globals.py           | Lots of improvements (Still 2 allocators) (#2449)                        | 2024-09-25 06:13:11 +00:00
idefics_causal_lm.py | Upgrading exl2. (#2415)                                                  | 2024-09-25 06:07:40 +00:00
idefics.py           | Upgrading exl2. (#2415)                                                  | 2024-09-25 06:07:40 +00:00
mamba.py             | Fixing exl2 and other quanize tests again. (#2419)                       | 2024-09-25 06:08:38 +00:00
model.py             | feat: add ruff and resolve issue (#2262)                                 | 2024-09-25 05:46:24 +00:00
pali_gemma.py        | feat: add ruff and resolve issue (#2262)                                 | 2024-09-25 05:46:24 +00:00
seq2seq_lm.py        | Fixing exl2 and other quanize tests again. (#2419)                       | 2024-09-25 06:08:38 +00:00
types.py             | feat: add ruff and resolve issue (#2262)                                 | 2024-09-25 05:46:24 +00:00
vlm_causal_lm.py     | Lots of improvements (Still 2 allocators) (#2449)                        | 2024-09-25 06:13:11 +00:00