text-generation-inference/server/text_generation_server/utils
drbh 1057f28128 Pr 2337 ci branch (#2379)

* hotfix: fix xpu crash brought by code refactoring; torch.xpu relies on importing ipex

* re-enable gemma2 on xpu

* fix a regression in ipex flash attention

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>
2024-09-25 05:55:39 +00:00
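The hotfix in the commit above notes that `torch.xpu` relies on ipex being imported first. A minimal sketch of such an import guard, assuming a hypothetical helper name `is_xpu_available` (this is not the repository's actual `import_utils.py` code):

```python
def is_xpu_available() -> bool:
    """Report whether an Intel XPU backend is usable.

    Hypothetical helper: torch.xpu support depends on
    intel_extension_for_pytorch (ipex) having been imported,
    so the import is attempted before touching torch.xpu.
    """
    try:
        import intel_extension_for_pytorch  # noqa: F401  (assumed requirement)
        import torch
    except ImportError:
        # ipex or torch missing: no XPU support in this environment.
        return False
    return hasattr(torch, "xpu") and torch.xpu.is_available()
```

On a machine without ipex installed, the `ImportError` branch makes the helper degrade to `False` instead of crashing at `torch.xpu` access time.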
merges             feat: add ruff and resolve issue (#2262)                                 2024-09-25 05:46:24 +00:00
__init__.py        Align the source code with main branch 2.0.4                             2024-09-24 03:06:55 +00:00
adapter.py         fix: refactor adapter weight loading and mapping (#2193)                 2024-09-25 05:39:58 +00:00
chunks.py          server: use chunked inputs                                               2024-09-24 03:42:29 +00:00
convert.py         Force weights_only (before fully breaking pickle files anyway). (#1710)  2024-04-25 15:10:53 +03:00
dist.py            feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)      2024-09-25 05:30:41 +00:00
hub.py             Enable multiple LoRa adapters (#2010)                                    2024-09-24 03:55:04 +00:00
import_utils.py    Pr 2337 ci branch (#2379)                                                2024-09-25 05:55:39 +00:00
log.py             feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)      2024-09-25 05:30:41 +00:00
logits_process.py  patch-error-on-invalid-grammar (#2282)                                   2024-09-25 05:50:17 +00:00
peft.py            feat: add ruff and resolve issue (#2262)                                 2024-09-25 05:46:24 +00:00
quantization.py    Handle GPTQ-Marlin loading in GPTQMarlinWeightLoader (#2300)             2024-09-25 05:55:39 +00:00
segments.py        Enable multiple LoRa adapters (#2010)                                    2024-09-24 03:55:04 +00:00
sgmv.py            Enable multiple LoRa adapters (#2010)                                    2024-09-24 03:55:04 +00:00
speculate.py       chore: formatting                                                        2024-04-18 16:26:00 +03:00
tokens.py          feat: add ruff and resolve issue (#2262)                                 2024-09-25 05:46:24 +00:00
watermark.py       Align the source code with main branch 2.0.4                             2024-09-24 03:06:55 +00:00
weights.py         fix(server): fix fp8 weight loading (#2268)                              2024-09-25 05:31:08 +00:00