text-generation-inference/server/text_generation_server/layers/attention
Nicolas Patry d626685039 Removing IPEX_AVAIL. (#2115)
* Removing IPEX_AVAIL.

Chose to unify CPU and XPU under `ipex`. Most of the code is identical
except for a few spots.

Most of those spots are in the kv-cache layout and the flash_xxx.py
files. Since those files should soon be factored away and removed, we
should not need them for long (see the sketch after this commit
message).

* Forgot a few places.

* Unrelated change.

* Fixing HF_TOKEN.

* HF_TOKEN
2024-09-24 03:52:23 +00:00
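To make the unification concrete, here is a minimal sketch, assuming only PyTorch plus an optional `intel_extension_for_pytorch` install, of how a single `ipex` backend can replace a separate `IPEX_AVAIL` flag for both CPU and XPU. The names `use_ipex`, `ATTENTION_BACKEND`, and `device` are illustrative, not TGI's actual identifiers.

```python
# Minimal sketch (illustrative, not the actual TGI code) of the idea behind
# removing a separate IPEX_AVAIL flag: CPU and XPU share one "ipex" backend,
# and the few places where they differ are handled by a runtime device check.
# `use_ipex`, `ATTENTION_BACKEND`, and `device` are hypothetical names.
import importlib.util

import torch

# One probe replaces an explicit IPEX_AVAIL constant: the backend is "ipex"
# whenever intel_extension_for_pytorch is importable, on CPU and XPU alike.
use_ipex = importlib.util.find_spec("intel_extension_for_pytorch") is not None

if use_ipex:
    ATTENTION_BACKEND = "ipex"
    # Newer torch builds expose torch.xpu; guard with hasattr for older ones.
    has_xpu = hasattr(torch, "xpu") and torch.xpu.is_available()
    device = torch.device("xpu" if has_xpu else "cpu")
elif torch.cuda.is_available():
    ATTENTION_BACKEND = "cuda"
    device = torch.device("cuda")
else:
    ATTENTION_BACKEND = "cpu"
    device = torch.device("cpu")

print(f"backend={ATTENTION_BACKEND}, device={device}")
```

Under this layout, only the genuinely device-specific spots (such as the kv-cache layout) need a branch on the runtime device, rather than whole duplicated per-hardware files.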
__init__.py Removing IPEX_AVAIL. (#2115) 2024-09-24 03:52:23 +00:00
cuda.py Purely refactors paged/attention into layers/attention and makes hardware differences more obvious with 1 file per hardware. (#1986) 2024-09-24 03:19:39 +00:00
flash_attn_triton.py Purely refactors paged/attention into layers/attention and makes hardware differences more obvious with 1 file per hardware. (#1986) 2024-09-24 03:19:39 +00:00
ipex.py Removing IPEX_AVAIL. (#2115) 2024-09-24 03:52:23 +00:00
rocm.py ROCm and sliding windows fixes (#2033) 2024-09-24 03:42:29 +00:00