text-generation-inference/server/text_generation_server/layers
Latest commit: 568cc9f3d0 by Nicolas Patry, 2024-09-25 05:31:08 +00:00
Softcapping for gemma2. (#2273)

* Softcapping for gemma2.
* Less clutter.
* No access to transformers config, only config_dict here.
* 0.0 is the null value in the C++ API.
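For context on the softcapping commit above: Gemma 2 bounds attention and final logits with a tanh squash, logits = cap * tanh(logits / cap). Below is a minimal, illustrative PyTorch sketch; the `soft_cap` helper is hypothetical, and treating a cap of 0.0 as "disabled" simply mirrors the commit's note that 0.0 is the null value in the C++ API, not TGI's actual kernel interface.

```python
import torch


def soft_cap(logits: torch.Tensor, cap: float) -> torch.Tensor:
    """Illustrative logit softcapping: smoothly bound values to (-cap, cap).

    A cap of 0.0 is treated as "capping disabled", mirroring the commit note
    that 0.0 is the null value in the C++ API (hypothetical helper, not TGI code).
    """
    if cap == 0.0:
        return logits
    return cap * torch.tanh(logits / cap)


# Example: raw attention scores squashed before softmax.
scores = torch.tensor([10.0, -3.0, 120.0])
print(soft_cap(scores, 50.0))  # large values are pulled back toward +/-50
print(soft_cap(scores, 0.0))   # unchanged: cap of 0.0 disables softcapping
```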
Name | Last commit | Date
attention | Softcapping for gemma2. (#2273) | 2024-09-25 05:31:08 +00:00
awq | Support AWQ quantization with bias (#2117) | 2024-09-24 03:55:04 +00:00
gptq | Add support for Deepseek V2 (#2224) | 2024-09-25 05:27:40 +00:00
__init__.py | Enable multiple LoRa adapters (#2010) | 2024-09-24 03:55:04 +00:00
bnb.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00
conv.py | Refactor layers. (#1866) | 2024-07-17 05:36:58 +00:00
eetq.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00
exl2.py | Add support for Deepseek V2 (#2224) | 2024-09-25 05:27:40 +00:00
fp8.py | fix(server): fix fp8 weight loading (#2268) | 2024-09-25 05:31:08 +00:00
layernorm.py | Removing IPEX_AVAIL. (#2115) | 2024-09-24 03:52:23 +00:00
linear.py | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00
lora.py | Enable multiple LoRa adapters (#2010) | 2024-09-24 03:55:04 +00:00
marlin.py | fix(server): fix fp8 weight loading (#2268) | 2024-09-25 05:31:08 +00:00
medusa.py | fix: use path inside of speculator config (#1935) | 2024-07-17 05:36:58 +00:00
mlp.py | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00
rotary.py | Add support for Deepseek V2 (#2224) | 2024-09-25 05:27:40 +00:00
speculative.py | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00
tensor_parallel.py | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00