Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-21 23:12:07 +00:00)
Quantized weights were previously loaded inside the `Weights` class, but this had become unwieldy: every higher-level method for loading weights was a long conditional covering all the different quantizers.

This change moves the loading of quantized weights out of the `Weights` class. It defines a simple `WeightsLoader` interface that is implemented by `Exl2WeightsLoader`, `GPTQWeightsLoader`, and `MarlinWeightsLoader`; these implementations live in their respective quantizers' modules. The `Weights` class still provides the low-level load operations (such as loading tensors or sharded tensors), but delegates any load that needs quantizer-specific weight processing to a loader. The loaders in turn use the low-level functionality provided by `Weights`.

I initially tried a hierarchy in which a class like `GPTQWeights` would inherit from `Weights`, but it was not very flexible (e.g., it does not work well with the new weight storage mock used in tests), and the implicit indirection made the code harder to follow.
Files in this directory:

- __init__.py
- bloom_modeling.py
- clip.py
- flash_cohere_modeling.py
- flash_dbrx_modeling.py
- flash_gemma2_modeling.py
- flash_gemma_modeling.py
- flash_gpt2_modeling.py
- flash_llama_modeling.py
- flash_mistral_modeling.py
- flash_mixtral_modeling.py
- flash_neox_modeling.py
- flash_pali_gemma_modeling.py
- flash_phi_modeling.py
- flash_qwen2_modeling.py
- flash_rw_modeling.py
- flash_santacoder_modeling.py
- flash_starcoder2_modeling.py
- idefics2.py
- idefics_config.py
- idefics_image_processing.py
- idefics_modeling.py
- idefics_perceiver.py
- idefics_processing.py
- idefics_vision.py
- llava_next.py
- mamba_modeling.py
- mpt_modeling.py
- neox_modeling.py
- opt_modeling.py
- phi_modeling.py
- siglip.py
- t5_modeling.py
- vlm.py