text-generation-inference/server/text_generation_server/layers
OlivierDehaene 85f10ec5c9 feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)
* feat(fp8): add support for fbgemm
* allow loading fp8 weights directly
* update outlines
* fix makefile
* build fbgemm
* avoid circular import and fix dockerfile
* add default dtype
* refactored weights loader
* fix auto conversion
* fix quantization config parsing
* force new nccl on install
* missing get_weights implementation
* increase timeout
2024-09-25 05:30:41 +00:00
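The commit above lets the server quantize weights to fp8 on load (or consume checkpoints already stored in fp8) and run the matmuls with fbgemm kernels. As a rough illustration of the underlying idea only, here is a minimal per-tensor fp8 quantization sketch in plain PyTorch; the function name `fp8_quantize` and the use of torch's `float8_e4m3fn` dtype are assumptions for this example, not the repository's actual fbgemm code path in `fp8.py`.

```python
import torch


def fp8_quantize(weight: torch.Tensor, qdtype: torch.dtype = torch.float8_e4m3fn):
    """Quantize a weight tensor to fp8 with a single per-tensor scale.

    Illustrative sketch only; the real code path dispatches to fbgemm
    kernels on GPU. Returns the fp8 tensor plus the inverse scale, so
    dequantization is one multiply: qweight.float() * inv_scale.
    """
    finfo = torch.finfo(qdtype)
    # Scale so the largest-magnitude weight maps onto the fp8 max value.
    amax = weight.abs().max().clamp(min=1e-12)
    scale = finfo.max / amax
    qweight = (weight * scale).clamp(min=finfo.min, max=finfo.max).to(qdtype)
    return qweight, scale.reciprocal().float()


if __name__ == "__main__":
    w = torch.randn(4096, 4096, dtype=torch.float16)
    qw, inv_scale = fp8_quantize(w.float())
    w_deq = qw.to(torch.float32) * inv_scale
    print("max abs error:", (w.float() - w_deq).abs().max().item())
```

Loading fp8 weights "directly" then amounts to reading the pre-quantized tensor and its scale from the checkpoint instead of calling a routine like the one above at startup.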
| Name | Last commit | Date |
| --- | --- | --- |
| `attention/` | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| `awq/` | Support AWQ quantization with bias (#2117) | 2024-09-24 03:55:04 +00:00 |
| `gptq/` | Add support for Deepseek V2 (#2224) | 2024-09-25 05:27:40 +00:00 |
| `__init__.py` | Enable multiple LoRa adapters (#2010) | 2024-09-24 03:55:04 +00:00 |
| `bnb.py` | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| `conv.py` | Refactor layers. (#1866) | 2024-07-17 05:36:58 +00:00 |
| `eetq.py` | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| `exl2.py` | Add support for Deepseek V2 (#2224) | 2024-09-25 05:27:40 +00:00 |
| `fp8.py` | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| `layernorm.py` | Removing IPEX_AVAIL. (#2115) | 2024-09-24 03:52:23 +00:00 |
| `linear.py` | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |
| `lora.py` | Enable multiple LoRa adapters (#2010) | 2024-09-24 03:55:04 +00:00 |
| `marlin.py` | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| `medusa.py` | fix: use path inside of speculator config (#1935) | 2024-07-17 05:36:58 +00:00 |
| `mlp.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `rotary.py` | Add support for Deepseek V2 (#2224) | 2024-09-25 05:27:40 +00:00 |
| `speculative.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `tensor_parallel.py` | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |