text-generation-inference/server/text_generation_server/layers/attention
Mohit Sharma ff905aeff3 Update ROCM libs and improvements (#2579)
* style

* update torch

* fix issues

* fix clone

* revert mkl

* added custom PA

* style

* fix style

* style

* hide env var

* fix mixtral model

* add skinny kernel and merge fixes

* fixed style

* fix issue for sliding window models

* addressed review comments

* fix import

* improved error message

* updated default value

* remove import

* fix imports after rebase

* float16 dep

* improve dockerfile

* cleaned dockerfile
2024-10-25 09:01:04 +00:00
__init__.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
common.py Update ROCM libs and improvements (#2579) 2024-10-25 09:01:04 +00:00
cuda.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_attn_triton.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
flashinfer.py flashinfer: pass window size and dtype (#2574) 2024-10-25 09:01:04 +00:00
ipex.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
rocm.py Update ROCM libs and improvements (#2579) 2024-10-25 09:01:04 +00:00