text-generation-inference/server/text_generation_server/layers
Latest commit: Flash Transformers modeling backend support (#2913) by Cyril Vallez (b980848abf)
* add transformers_flash
* inits
* switch version to make it work
* Update Makefile-flash-att-v2
* Update Makefile-flash-att-v2
* Update Makefile-flash-att-v2
* Update Makefile-flash-att-v2
* Update Makefile-flash-att-v2
* Update Makefile-flash-att-v2
* runnable version
* working
* push change
* fix high dim
* init
* default
* latest transformers changes
* revert
* simplify check
* remove flag
* improve type hints + required args
* Update based on transformers PR
* small fix
* Remove Warpers for Processor
* fix compatibility version issue
* raise error if needed
* Simplify with monkey patch
* revert + style + minor improvements
* update comment
* device check
* move the import to avoid device issue
* Update __init__.py
* check for non-native models
* oupsi
---------

Co-authored-by: System administrator <root@ip-10-90-0-159.ec2.internal>
2025-01-21 10:01:51 +01:00
Name | Last commit | Date
attention/ | flashinfer: switch to plan API (#2904) | 2025-01-17 18:18:02 +01:00
awq/ | fix incorrect output of Qwen2-7B-Instruct-GPTQ-Int4 and Qwen2-7B-Inst… (#2717) | 2024-11-04 16:07:51 +01:00
compressed_tensors/ | Do not convert weight scale to e4m3fnuz on CUDA (#2917) | 2025-01-16 13:44:32 +01:00
gptq/ | Flash Transformers modeling backend support (#2913) | 2025-01-21 10:01:51 +01:00
marlin/ | Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758) | 2024-11-20 18:25:23 +01:00
moe/ | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00
__init__.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
bnb.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
conv.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
eetq.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00
exl2.py | Add support for Deepseek V2 (#2224) | 2024-07-19 17:23:20 +02:00
fp8.py | Enable FP8 Per-Tensor Scales and Integrate Marlin/MoE Kernels Repo for ROCm (#2825) | 2025-01-15 11:38:58 +05:30
layernorm.py | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00
linear.py | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00
lora.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
medusa.py | Prefix caching (#2402) | 2024-08-20 11:15:30 +02:00
mlp.py | Tied embeddings in MLP speculator. (#2473) | 2024-08-29 17:44:54 +02:00
rotary.py | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00
speculative.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
tensor_parallel.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00