text-generation-inference/server/text_generation_server/models
Latest commit 0b02d45a05 (Wang, Yi A <yi.a.wang@intel.com>, 2024-08-21 22:47:34 -07:00): add gptq and awq int4 support in intel platform
File | Last commit | Date
custom_modeling | fix: prefer hidden_activation over hidden_act in gemma2 (#2381) | 2024-08-08 14:08:56 -04:00
__init__.py | feat: validate template variables before apply and improve sliding wi… (#2403) | 2024-08-12 10:58:40 -04:00
bloom.py | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-07-05 10:29:56 +02:00
causal_lm.py | Fixing exl2 and other quanize tests again. (#2419) | 2024-08-15 11:12:51 +02:00
flash_causal_lm.py | add gptq and awq int4 support in intel platform | 2024-08-21 22:47:34 -07:00
galactica.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
globals.py | Add support for prefix caching to the v3 router (#2392) | 2024-08-12 14:59:17 +02:00
idefics_causal_lm.py | Upgrading exl2. (#2415) | 2024-08-14 11:58:08 +02:00
idefics.py | Upgrading exl2. (#2415) | 2024-08-14 11:58:08 +02:00
mamba.py | Fixing exl2 and other quanize tests again. (#2419) | 2024-08-15 11:12:51 +02:00
model.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
pali_gemma.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
seq2seq_lm.py | Fixing exl2 and other quanize tests again. (#2419) | 2024-08-15 11:12:51 +02:00
types.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
vlm_causal_lm.py | fix crash in multi-modal (#2245) | 2024-07-24 10:39:08 +02:00