text-generation-inference/server/text_generation_server/layers/attention
Latest commit: d9e47b651c "add softcap and slidingwindow"
Author: Wang, Yi A <yi.a.wang@intel.com> (Signed-off-by)
Date: 2025-04-07 22:42:19 -07:00
File                   Last commit                                   Date
__init__.py            Add support for FP8 KV cache scales (#2628)   2024-10-24 16:36:18 +02:00
common.py              feat: prefill chunking (#2600)                2024-10-16 12:49:33 +02:00
cuda.py                Bug Fix: Sliding Window Attention (#3112)     2025-03-18 10:37:33 +01:00
flash_attn_triton.py   feat: prefill chunking (#2600)                2024-10-16 12:49:33 +02:00
flashinfer.py          Bug Fix: Sliding Window Attention (#3112)     2025-03-18 10:37:33 +01:00
ipex.py                add softcap and slidingwindow                 2025-04-07 22:42:19 -07:00
kv_cache.py            add kvcache dtype                             2025-04-02 19:29:01 -07:00
rocm.py                Bug Fix: Sliding Window Attention (#3112)     2025-03-18 10:37:33 +01:00
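
The commit messages above refer to two attention features, logit softcapping and sliding-window attention, that the per-backend files (cuda.py, rocm.py, ipex.py, flashinfer.py) each implement. The sketch below is a minimal illustration of what those two mechanisms do, assuming the common formulations (tanh-based softcapping and a fixed-width causal window); it is not TGI's actual backend code, and all names and shapes are illustrative.

```python
import math
import torch

def attention_scores(q, k, softcap=None, sliding_window=None):
    """Illustrative scores for single-head attention.

    q, k: [seq, dim] tensors; returns [seq, seq] masked (and optionally
    softcapped) attention logits, ready for a softmax.
    """
    scores = q @ k.transpose(-1, -2) / math.sqrt(q.shape[-1])
    if softcap is not None:
        # Softcapping squashes logits smoothly into (-softcap, softcap)
        # via tanh, bounding attention scores before the softmax.
        scores = softcap * torch.tanh(scores / softcap)
    seq = scores.shape[-1]
    # Causal mask: position i may only attend to positions j <= i.
    mask = torch.ones(seq, seq, dtype=torch.bool).tril()
    if sliding_window is not None:
        # Sliding window: additionally restrict attention to the last
        # `sliding_window` positions, i.e. keep j > i - sliding_window.
        mask &= torch.ones(seq, seq, dtype=torch.bool).triu(1 - sliding_window)
    return scores.masked_fill(~mask, float("-inf"))

# Example usage: probabilities over a 6-token sequence with a window of 3.
q = torch.randn(6, 64)
k = torch.randn(6, 64)
probs = torch.softmax(attention_scores(q, k, softcap=50.0, sliding_window=3), dim=-1)
```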