text-generation-inference/server/text_generation_server/layers
Daniël de Kok 77ac0f364b Add support for Marlin-quantized models
This change adds support for Marlin-quantized models. Marlin is an
FP16xINT4 matmul kernel that provides good speedups when decoding
batches of 16-32 tokens. It supports models with symmetric 4-bit
quantization and a group size of -1 or 128.
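
As a rough sketch of those constraints, the hypothetical helper below checks
whether a quantization config falls within what Marlin can handle; the
`QuantizeConfig` class and `is_marlin_compatible` function are illustrative
assumptions, not TGI's actual API:

```python
from dataclasses import dataclass


@dataclass
class QuantizeConfig:
    """Hypothetical config; fields mirror common GPTQ-style checkpoints."""

    bits: int        # quantization bit width
    group_size: int  # -1 = a single group spanning the full input dimension
    sym: bool        # symmetric (zero-point-free) quantization


def is_marlin_compatible(cfg: QuantizeConfig) -> bool:
    # Marlin requires symmetric 4-bit quantization with group size -1 or 128.
    return cfg.sym and cfg.bits == 4 and cfg.group_size in (-1, 128)


# A typical symmetric 4-bit, group-size-128 checkpoint qualifies:
assert is_marlin_compatible(QuantizeConfig(bits=4, group_size=128, sym=True))
# An 8-bit config does not:
assert not is_marlin_compatible(QuantizeConfig(bits=8, group_size=128, sym=True))
```

A check along these lines would gate whether the Marlin kernel can replace the
regular quantized-matmul path; presumably the new marlin.py and the updated
linear.py listed below wire that selection into layer construction.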

Tested with:

- Llama 2
- Llama 3
- Phi 3
2024-09-24 03:38:05 +00:00
attention Fixing rocm. (#2021) 2024-09-24 03:34:15 +00:00
awq Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
gptq Fix GPTQWeight import (#2020) 2024-09-24 03:34:15 +00:00
__init__.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
bnb.py Update torch import reference in bnb quantization (#1902) 2024-07-17 05:36:58 +00:00
conv.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
eetq.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
exl2.py Add support for exl2 quantization 2024-09-24 03:19:39 +00:00
fp8.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
layernorm.py MI300 compatibility (#1764) 2024-07-17 05:36:58 +00:00
linear.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00
marlin.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00
medusa.py fix: use path inside of speculator config (#1935) 2024-07-17 05:36:58 +00:00
mlp.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
rotary.py reenable xpu for tgi (#1939) 2024-07-17 05:36:58 +00:00
speculative.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
tensor_parallel.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00