text-generation-inference/server/text_generation_server/layers
Daniël de Kok 3c9df21ff8
Add support for compressed-tensors w8a8 int checkpoints (#2745)
* Add support for compressed-tensors w8a8 int checkpoints

This change adds a loader for w8a8 int checkpoints. A major benefit of
int8 support is that the corresponding CUTLASS matmul kernels also work on
compute capability 7.5 (Turing) and newer.
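
A rough sketch of what a W8A8 int8 linear computes, assuming per-token input scales and per-channel weight scales; `torch._int_mm` stands in here for the CUTLASS-backed kernels, and all names are illustrative rather than the actual loader API:

```python
import torch


def supports_w8a8_int() -> bool:
    # int8 CUTLASS GEMMs are usable from compute capability 7.5 (Turing) upwards.
    return torch.cuda.get_device_capability() >= (7, 5)


def w8a8_int_linear(
    x_q: torch.Tensor,      # [M, K] int8, dynamically quantized input
    x_scale: torch.Tensor,  # [M, 1] fp32 per-token input scales
    w_q: torch.Tensor,      # [K, N] int8 weights from the checkpoint
    w_scale: torch.Tensor,  # [1, N] fp32 per-channel weight scales
) -> torch.Tensor:
    # int8 x int8 -> int32 accumulation, then rescale back to floating point.
    acc = torch._int_mm(x_q, w_q)
    return acc.to(torch.float32) * x_scale * w_scale
```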

Evaluation on neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w8a8:

|     Tasks     |Version|     Filter     |n-shot|        Metric         |   |Value |   |Stderr|
|---------------|------:|----------------|-----:|-----------------------|---|-----:|---|------|
|gsm8k_cot_llama|      3|flexible-extract|     8|exact_match            |↑  |0.8431|±  |0.0100|
|               |       |strict-match    |     8|exact_match            |↑  |0.8393|±  |0.0101|
|ifeval         |      4|none            |     0|inst_level_loose_acc   |↑  |0.8597|±  |   N/A|
|               |       |none            |     0|inst_level_strict_acc  |↑  |0.8201|±  |   N/A|
|               |       |none            |     0|prompt_level_loose_acc |↑  |0.7967|±  |0.0173|
|               |       |none            |     0|prompt_level_strict_acc|↑  |0.7468|±  |0.0187|

This is in the same ballpark as vLLM.

As usual, lots of thanks to Neural Magic/vLLM for the kernels.

* Always use dynamic input quantization for w8a8 int

Dynamic quantization is far less flaky than static input scales and gives better output.
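
A minimal sketch of dynamic (per-token) int8 input quantization, i.e. computing the scale from the activations at runtime instead of using a calibrated static scale; the helper name is illustrative, not the actual TGI implementation:

```python
import torch


def quantize_input_dynamic(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    # One symmetric absmax scale per token (row), computed on the fly.
    abs_max = x.abs().amax(dim=-1, keepdim=True).clamp(min=1e-5)
    scale = abs_max / 127.0
    x_q = torch.clamp(torch.round(x / scale), -127, 127).to(torch.int8)
    return x_q, scale.to(torch.float32)
```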

* Use marlin-kernels 0.3.5

* Fix a typo

Co-authored-by: drbh <david.richard.holtz@gmail.com>

* Small fixes

---------

Co-authored-by: drbh <david.richard.holtz@gmail.com>
2024-11-18 17:20:31 +01:00
| Name | Last commit | Date |
|------|-------------|------|
| `attention` | Remove vLLM dependency for CUDA (#2751) | 2024-11-17 17:34:50 +01:00 |
| `awq` | fix incorrect output of Qwen2-7B-Instruct-GPTQ-Int4 and Qwen2-7B-Inst… (#2717) | 2024-11-04 16:07:51 +01:00 |
| `compressed_tensors` | Add support for compressed-tensors w8a8 int checkpoints (#2745) | 2024-11-18 17:20:31 +01:00 |
| `gptq` | fix incorrect output of Qwen2-7B-Instruct-GPTQ-Int4 and Qwen2-7B-Inst… (#2717) | 2024-11-04 16:07:51 +01:00 |
| `marlin` | Add initial support for compressed-tensors checkpoints (#2732) | 2024-11-10 13:54:07 +01:00 |
| `moe` | add ipex moe implementation to support Mixtral and PhiMoe (#2707) | 2024-11-18 17:16:55 +01:00 |
| `__init__.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `bnb.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `conv.py` | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00 |
| `eetq.py` | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| `exl2.py` | Add support for Deepseek V2 (#2224) | 2024-07-19 17:23:20 +02:00 |
| `fp8.py` | Add initial support for compressed-tensors checkpoints (#2732) | 2024-11-10 13:54:07 +01:00 |
| `layernorm.py` | Removing IPEX_AVAIL. (#2115) | 2024-06-25 13:20:57 +02:00 |
| `linear.py` | Update ROCM libs and improvements (#2579) | 2024-09-30 10:54:32 +02:00 |
| `lora.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `medusa.py` | Prefix caching (#2402) | 2024-08-20 11:15:30 +02:00 |
| `mlp.py` | Tied embeddings in MLP speculator. (#2473) | 2024-08-29 17:44:54 +02:00 |
| `rotary.py` | Support qwen2 vl (#2689) | 2024-10-30 12:40:51 -04:00 |
| `speculative.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `tensor_parallel.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |