text-generation-inference/server/text_generation_server/utils

Latest commit: 0b3e3db043, "xpu 2.6 update (#3051)" by Wang, Yi (2025-03-17 13:48:48 +01:00)

* xpu 2.6 update
* install whl
* update get xpu memory api
* int
* fix awq crash if modules_to_not_convert is None

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
merges feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
__init__.py feat(server): Add native support for PEFT Lora models (#762) 2023-08-03 17:22:45 +02:00
adapter.py feat: improve star coder to support multi lora layers (#2883) 2025-01-16 16:23:55 -05:00
chunks.py server: use chunked inputs 2024-06-07 08:09:04 +02:00
convert.py Force weights_only (before fully breaking pickle files anyway). (#1710) 2024-04-05 19:23:57 +02:00
dist.py Add deepseekv3 (#2968) 2025-01-30 16:40:25 +01:00
hub.py Micro cleanup. (#2555) 2024-09-24 11:19:24 +02:00
import_utils.py xpu 2.6 update (#3051) 2025-03-17 13:48:48 +01:00
kernels.py Update to kernels 0.2.1 (#3084) 2025-03-13 10:36:29 +01:00
log.py feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) 2024-07-20 19:02:04 +02:00
logits_process.py Flash Transformers modeling backend support (#2913) 2025-01-21 10:01:51 +01:00
peft.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
prefill_chunking.py feat: prefill chunking (#2600) 2024-10-16 12:49:33 +02:00
quantization.py xpu 2.6 update (#3051) 2025-03-17 13:48:48 +01:00
segments.py fix: improve find_segments via numpy diff (#2686) 2024-11-18 09:51:06 -05:00
sgmv.py fix: allocate tmp based on sgmv kernel if available (#2345) 2024-08-12 17:24:32 +02:00
speculate.py chore: formatting 2023-12-11 14:49:52 +01:00
tokens.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
watermark.py Fixing watermark. (#851) 2023-08-16 07:17:26 +02:00
weights.py Add support for compressed-tensors w8a8 int checkpoints (#2745) 2024-11-18 17:20:31 +01:00