text-generation-inference/server/text_generation_server/layers
Daniël de Kok eab07f746c
Add support for FP8 KV cache scales (#2628)
* Add support for FP8 KV cache scales

Since FP8 only has limited dynamic range, we can scale keys/values
before storing them into the cache (and unscale them in attention). To
avoid rescaling the cache as the absmax values change, good scales are
usually determined per layer using calibration data and stored
in the checkpoint.
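
A minimal PyTorch sketch of the idea (the helper names and the fixed
example scale are illustrative, not this repository's API): tensors are
divided by a calibrated per-layer scale before being cast to FP8 for the
cache, and multiplied back when read for attention.

```python
import torch

FP8_MAX = torch.finfo(torch.float8_e4m3fn).max  # 448.0 for e4m3fn

def store_fp8(x: torch.Tensor, scale: float) -> torch.Tensor:
    # Scale into the representable FP8 range, then cast for cache storage.
    return (x / scale).clamp(-FP8_MAX, FP8_MAX).to(torch.float8_e4m3fn)

def load_fp8(x_fp8: torch.Tensor, scale: float) -> torch.Tensor:
    # Unscale when reading the cache back for attention.
    return x_fp8.to(torch.float16) * scale

k = torch.randn(4, 8, dtype=torch.float16)
k_cache = store_fp8(k, scale=0.5)          # written to the KV cache
k_for_attn = load_fp8(k_cache, scale=0.5)  # approximately recovers k
```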

This change adds support for using key-value scales and loading them
from checkpoints in the two most common formats:

- Separate per-layer `k_scale` and `v_scale` scalars.
- Per-layer `kv_scale` scalar (older format).
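
A hedged sketch of loading that handles both layouts; the flat `weights`
mapping and the `prefix` argument are assumptions made for illustration,
not the server's actual weight-loading interface:

```python
import torch

def load_kv_scales(weights: dict[str, torch.Tensor], prefix: str) -> tuple[float, float]:
    # Newer checkpoints: separate per-layer key and value scales.
    if f"{prefix}.k_scale" in weights and f"{prefix}.v_scale" in weights:
        return float(weights[f"{prefix}.k_scale"]), float(weights[f"{prefix}.v_scale"])
    # Older checkpoints: one shared scalar for both keys and values.
    if f"{prefix}.kv_scale" in weights:
        kv_scale = float(weights[f"{prefix}.kv_scale"])
        return kv_scale, kv_scale
    # No calibrated scales in the checkpoint: identity scaling.
    return 1.0, 1.0
```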

Currently, scales are only used with a `float8_e4m3fn` cache.

Besides adding support for key/value scales, the `fp8_quantize` function
is also extended to support quantization with a kernel vendored from
vLLM. This is slightly faster than the PyTorch implementation, and it
also computes the scales in FP32, potentially improving accuracy.
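
The FP32 scale arithmetic can be sketched in plain PyTorch (the vendored
kernel itself is a fused CUDA implementation, so this only mirrors the
math, not its performance):

```python
import torch

def fp8_quantize(x: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    finfo = torch.finfo(torch.float8_e4m3fn)
    # Computing the scale in FP32 avoids the precision loss of doing the
    # absmax/division in half precision.
    scale = x.abs().max().to(torch.float32) / finfo.max
    x_q = (x.to(torch.float32) / scale).clamp(finfo.min, finfo.max)
    return x_q.to(torch.float8_e4m3fn), scale
```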

* Update FP8 KV cache test to use checkpoint with scales

* `can_scale`: check that the attention is flashinfer
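
Illustratively, with a hypothetical signature rather than the
repository's exact check, that guard amounts to:

```python
import torch

def can_scale(kv_cache_dtype: torch.dtype, attention_backend: str) -> bool:
    # Scales only apply to an FP8 cache, and only the flashinfer
    # backend consumes them here.
    return kv_cache_dtype == torch.float8_e4m3fn and attention_backend == "flashinfer"
```
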
2024-10-24 16:36:18 +02:00
attention Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
awq CI job. Gpt awq 4 (#2665) 2024-10-18 17:55:53 +02:00
gptq CI job. Gpt awq 4 (#2665) 2024-10-18 17:55:53 +02:00
marlin Fp8 e4m3_fnuz support for rocm (#2588) 2024-10-16 09:54:50 +02:00
moe Add support for fused MoE Marlin for AWQ (#2616) 2024-10-08 11:56:41 +02:00
__init__.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
bnb.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
conv.py Refactor layers. (#1866) 2024-05-13 12:44:30 +02:00
eetq.py feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) 2024-07-20 19:02:04 +02:00
exl2.py Add support for Deepseek V2 (#2224) 2024-07-19 17:23:20 +02:00
fp8.py Add support for FP8 KV cache scales (#2628) 2024-10-24 16:36:18 +02:00
layernorm.py Removing IPEX_AVAIL. (#2115) 2024-06-25 13:20:57 +02:00
linear.py Update ROCM libs and improvements (#2579) 2024-09-30 10:54:32 +02:00
lora.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
medusa.py Prefix caching (#2402) 2024-08-20 11:15:30 +02:00
mlp.py Tied embeddings in MLP speculator. (#2473) 2024-08-29 17:44:54 +02:00
rotary.py feat: support phi3.5 moe (#2479) 2024-09-30 11:15:09 +02:00
speculative.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00
tensor_parallel.py feat: add ruff and resolve issue (#2262) 2024-07-26 10:29:09 -04:00