text-generation-inference/server/text_generation_server/models
Daniël de Kok 2a6c3caf1d Move quantized weight handling out of the Weights class (#2194)
Quantized weights were loaded in the `Weights` class, but this was
getting quite unwieldy: every higher-level method for loading weights
was a long conditional covering all the different quantizers.

This change moves loading of quantized weights out of the `Weights`
class. This is done by defining a simple `WeightsLoader` interface
that is implemented by `Exl2WeightsLoader`, `GPTQWeightsLoader`,
and `MarlinWeightsLoader`. These implementations are in the quantizers'
respective modules. The `Weights` class provides the low-level load
operations (such as loading tensors or sharded tensors), but delegates
loads that need quantizer-specific weight processing to a loader. The
loaders still use the low-level functionality provided by `Weights`.
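
The delegation reads roughly like the sketch below. This is an illustration of the pattern only, not the project's actual API: the method names (`get_weights_col`, `get_tensor`), the toy in-memory `Weights` internals, and the `DefaultWeightsLoader` are assumptions made for the example; a real `GPTQWeightsLoader` or `MarlinWeightsLoader` would do its quantizer-specific unpacking inside its loader methods.

```python
# Sketch of the WeightsLoader delegation pattern (illustrative names only).
from abc import ABC, abstractmethod
from typing import Dict

import torch


class WeightsLoader(ABC):
    """Quantizer-specific weight loading, layered on top of `Weights`."""

    @abstractmethod
    def get_weights_col(self, weights: "Weights", prefix: str) -> torch.Tensor:
        """Load and post-process the column weights stored under `prefix`."""


class DefaultWeightsLoader(WeightsLoader):
    """Unquantized weights: fetch the tensor, no extra processing."""

    def get_weights_col(self, weights: "Weights", prefix: str) -> torch.Tensor:
        return weights.get_tensor(f"{prefix}.weight")


class Weights:
    """Low-level tensor access; quantizer-specific handling is delegated
    to the configured loader instead of living in conditionals here."""

    def __init__(self, tensors: Dict[str, torch.Tensor], loader: WeightsLoader):
        self._tensors = tensors
        self.loader = loader

    def get_tensor(self, name: str) -> torch.Tensor:
        # Low-level load operation used by the loaders.
        return self._tensors[name]

    def get_weights_col(self, prefix: str) -> torch.Tensor:
        # Higher-level load: no quantizer branching, just delegation.
        return self.loader.get_weights_col(self, prefix)


# Usage: a GPTQ/Marlin/Exl2 loader would subclass WeightsLoader and handle
# its own packed tensors, scales, and zero points in get_weights_col.
weights = Weights({"model.lm_head.weight": torch.zeros(4, 4)}, DefaultWeightsLoader())
w = weights.get_weights_col("model.lm_head")
```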

I initially tried making a hierarchy where a class like `GPTQWeights`
would inherit from `Weights`. But that approach is not very flexible (e.g.
it does not work well with the new weight storage mock used in tests), and
the implicit indirections made the code harder to follow.
2024-09-25 05:27:40 +00:00
custom_modeling Move quantized weight handling out of the Weights class (#2194) 2024-09-25 05:27:40 +00:00
__init__.py Falcon/DBRX: get correct number of key-value heads (#2205) 2024-09-25 05:21:34 +00:00
bloom.py Refactor dead code - Removing all flash_xxx.py files. (#2166) 2024-09-25 05:20:28 +00:00
causal_lm.py Move quantized weight handling out of the Weights class (#2194) 2024-09-25 05:27:40 +00:00
flash_causal_lm.py Move quantized weight handling out of the Weights class (#2194) 2024-09-25 05:27:40 +00:00
flash_mistral.py Refactor dead code - Removing all flash_xxx.py files. (#2166) 2024-09-25 05:20:28 +00:00
galactica.py Refactor dead code - Removing all flash_xxx.py files. (#2166) 2024-09-25 05:20:28 +00:00
globals.py Move to FlashDecoding instead of PagedAttention kernel. (#1940) 2024-09-24 03:58:13 +00:00
idefics_causal_lm.py Enable multiple LoRa adapters (#2010) 2024-09-24 03:55:04 +00:00
idefics.py Move quantized weight handling out of the Weights class (#2194) 2024-09-25 05:27:40 +00:00
mamba.py Move quantized weight handling out of the Weights class (#2194) 2024-09-25 05:27:40 +00:00
model.py Hotfixing after refactor. 2024-09-25 05:20:28 +00:00
pali_gemma.py Refactor dead code - Removing all flash_xxx.py files. (#2166) 2024-09-25 05:20:28 +00:00
seq2seq_lm.py Move quantized weight handling out of the Weights class (#2194) 2024-09-25 05:27:40 +00:00
types.py chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
vlm_causal_lm.py Refactor dead code - Removing all flash_xxx.py files. (#2166) 2024-09-25 05:20:28 +00:00