text-generation-inference/server/text_generation_server/layers/attention
Nicolas Patry, 568cc9f3d0: Softcapping for gemma2. (#2273), 2024-09-25 05:31:08 +00:00

* Softcapping for gemma2.
* Less clutter.
* No access to the transformers config, only config_dict here.
* 0.0 is the null value in the C++ API.
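The softcapping referenced in this commit is the tanh-based logit cap that Gemma 2 applies to attention scores before the softmax. Below is a minimal sketch of the technique, assuming a single `softcap` float where 0.0 means "disabled" (mirroring the "0.0 is the null value in the C++ API" convention above); the function name is illustrative, not TGI's actual API:

```python
import torch


def softcap_logits(scores: torch.Tensor, softcap: float) -> torch.Tensor:
    """Tanh-based attention-logit softcapping (illustrative helper).

    0.0 disables the cap; otherwise scores are squashed into
    (-softcap, softcap) while staying roughly linear near zero.
    """
    if softcap == 0.0:
        return scores
    return softcap * torch.tanh(scores / softcap)


if __name__ == "__main__":
    # Gemma 2's config sets attn_logit_softcapping to 50.0.
    scores = torch.randn(2, 8, 16, 16) * 100.0  # (batch, heads, q_len, kv_len)
    capped = softcap_logits(scores, softcap=50.0)
    assert capped.abs().max() < 50.0
```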
__init__.py Move to FlashDecoding instead of PagedAttention kernel. (#1940) 2024-09-24 03:58:13 +00:00
common.py Move to FlashDecoding instead of PagedAttention kernel. (#1940) 2024-09-24 03:58:13 +00:00
cuda.py Softcapping for gemma2. (#2273) 2024-09-25 05:31:08 +00:00
flash_attn_triton.py Purely refactors paged/attention into layers/attention and makes hardware differences more obvious with one file per hardware. (#1986) 2024-09-24 03:19:39 +00:00
ipex.py Fix the FlashDecoding change's regression on the Intel platform (#2161) 2024-09-24 03:58:13 +00:00
rocm.py feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) 2024-09-25 05:30:41 +00:00
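The one-file-per-hardware layout above (cuda.py, rocm.py, ipex.py behind a shared __init__.py and common.py) points to a backend-dispatch pattern. The following is a hypothetical sketch of such a dispatch, assuming a `detect_system()` helper and the module names shown; it is not TGI's actual import logic:

```python
# Hypothetical backend dispatch; detect_system() and ATTENTION_BACKENDS are
# illustrative names, not text-generation-inference's actual API.
import importlib

import torch


def detect_system() -> str:
    """Best-effort hardware detection (illustrative)."""
    if torch.cuda.is_available():
        # torch.version.hip is a string on ROCm builds of PyTorch, None on CUDA builds.
        return "rocm" if torch.version.hip is not None else "cuda"
    return "ipex"  # fall back to the Intel (IPEX) backend


ATTENTION_BACKENDS = {
    "cuda": "text_generation_server.layers.attention.cuda",
    "rocm": "text_generation_server.layers.attention.rocm",
    "ipex": "text_generation_server.layers.attention.ipex",
}


def load_attention_backend():
    """Import the attention module that matches the detected hardware."""
    return importlib.import_module(ATTENTION_BACKENDS[detect_system()])
```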