text-generation-inference/server/text_generation_server/layers
Daniël de Kok 14980df2df
Support AWQ quantization with bias (#2117)
When the AWQ quantizer was used with a layer that has a bias,
the bias tensor was not passed through correctly. Instead, the
boolean flag `true` (treated as `1.0`) was added to the output of
the linear transformation.

Correctly pass through the bias when it is not `None`.
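The failure mode can be sketched in plain Python (the function and argument names here are illustrative, not the actual TGI layer code):

```python
# Illustrative sketch, not the actual TGI code: `True` is an int
# subclass in Python, so adding it to a number silently adds 1.0.
def linear_with_bias(x: float, w: float, bias) -> float:
    """Apply y = x * w + bias, guarding against a boolean flag being
    passed where a bias value (or None) is expected."""
    y = x * w
    if bias is not None:
        # Before the fix, the has-bias flag itself could reach this
        # point, turning `y + bias` into `y + 1.0`.
        assert not isinstance(bias, bool), "bias must be a value, not a flag"
        y += bias
    return y

print(linear_with_bias(2.0, 3.0, 0.5))  # bias applied correctly: 6.5
print(2.0 * 3.0 + True)                 # the buggy behaviour: 7.0
```

Passing the bias tensor only when it is not `None` (and never the flag itself) restores the intended `y = xW + b`.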

Fixes #2106.
2024-06-25 21:09:00 +02:00
attention Removing IPEX_AVAIL. (#2115) 2024-06-25 13:20:57 +02:00
awq Support AWQ quantization with bias (#2117) 2024-06-25 21:09:00 +02:00
gptq Fix text-generation-server quantize (#2103) 2024-06-21 15:28:51 +02:00
__init__.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
bnb.py [Bug Fix] Update torch import reference in bnb quantization (#1902) 2024-05-15 21:08:32 +02:00
conv.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
eetq.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
exl2.py Add support for exl2 quantization 2024-05-30 11:28:05 +02:00
fp8.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
layernorm.py Removing IPEX_AVAIL. (#2115) 2024-06-25 13:20:57 +02:00
linear.py Support AWQ quantization with bias (#2117) 2024-06-25 21:09:00 +02:00
lora.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
marlin.py Add support for GPTQ Marlin (#2052) 2024-06-14 09:45:42 +02:00
medusa.py fix: use path inside of speculator config (#1935) 2024-05-22 20:46:29 +02:00
mlp.py MLPSpeculator. (#1865) 2024-05-14 12:33:18 +02:00
rotary.py Removing IPEX_AVAIL. (#2115) 2024-06-25 13:20:57 +02:00
speculative.py MLPSpeculator. (#1865) 2024-05-14 12:33:18 +02:00
tensor_parallel.py Removing IPEX_AVAIL. (#2115) 2024-06-25 13:20:57 +02:00