text-generation-inference/server/text_generation_server/layers/attention

Latest commit: Wang, Yi 6265956bc4 refine get xpu free memory/enable Qwen2/gemma2/gemma/phi in intel platform (#2132)

* refine get xpu free memory
* enable qwen2 in xpu
* enable gemma/gemma2/phi in intel platform

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2024-09-24 03:57:32 +00:00
File                  Last commit                                                                                                                            Date
__init__.py           Removing IPEX_AVAIL. (#2115)                                                                                                           2024-09-24 03:52:23 +00:00
cuda.py               Purely refactors paged/attention into layers/attention and make hardware differences more obvious with 1 file per hardware. (#1986)    2024-09-24 03:19:39 +00:00
flash_attn_triton.py  Purely refactors paged/attention into layers/attention and make hardware differences more obvious with 1 file per hardware. (#1986)    2024-09-24 03:19:39 +00:00
ipex.py               refine get xpu free memory/enable Qwen2/gemma2/gemma/phi in intel platform (#2132)                                                     2024-09-24 03:57:32 +00:00
rocm.py               ROCm and sliding windows fixes (#2033)                                                                                                 2024-09-24 03:42:29 +00:00