text-generation-inference/server/text_generation_server/layers/attention
Latest commit 2ca5980634 by drbh: Pr 2337 ci branch (#2379)
* hotfix: fix an XPU crash introduced by the code refactor; torch.xpu relies on importing ipex

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* re-enable gemma2 on XPU

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* fix a regression in ipex flash attention

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Wang, Yi A <yi.a.wang@intel.com>
2024-08-08 12:30:29 -04:00
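
The hotfix note above points at a real ordering constraint: torch.xpu is only usable once intel_extension_for_pytorch (ipex) has been imported, since that import is what registers the XPU backend. A minimal sketch of the pattern, assuming ipex is installed (the helper name is illustrative, not the repository's actual code):

    # torch.xpu only becomes usable after ipex is imported, so the import
    # must run before any torch.xpu call.
    import torch
    import intel_extension_for_pytorch as ipex  # noqa: F401  (registers the XPU backend)

    def pick_device() -> torch.device:
        # Guard both the attribute and availability: without ipex,
        # older torch builds have no torch.xpu at all.
        if hasattr(torch, "xpu") and torch.xpu.is_available():
            return torch.device("xpu")
        return torch.device("cpu")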
__init__.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
common.py [Major Change][Undecided yet] Move to FlashDecoding instead of PagedAttention kernel. (#1940) 2024-07-01 23:28:00 +02:00
cuda.py fix: return the out tensor rather than the function's return value (#2361) (see the sketch after this listing) 2024-08-06 13:49:53 +02:00
flash_attn_triton.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
ipex.py Pr 2337 ci branch (#2379) 2024-08-08 12:30:29 -04:00
rocm.py Unify attention output handling (#2343) 2024-08-01 17:03:28 +02:00
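
The cuda.py entry above describes a common out-parameter pattern: the attention kernel writes into a caller-allocated buffer, and the wrapper should return that buffer rather than whatever the kernel call itself returns. A hypothetical sketch (names and signature are illustrative, not TGI's actual API):

    import torch

    def attention(q: torch.Tensor, k: torch.Tensor, v: torch.Tensor,
                  out: torch.Tensor, kernel) -> torch.Tensor:
        # The kernel fills `out` in place; its own return value may be None
        # or an unrelated object, so it must not be propagated to the caller.
        kernel(q, k, v, out)
        return out

Returning the caller-allocated buffer keeps the interface uniform across backends that write in place and backends that allocate their own output.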