Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-24 00:12:08 +00:00)
* Improve the handling of quantized weights

Handling of quantized weights was split between two mechanisms:

- For quantized checkpoints, we used the new weight loader infrastructure.
- For quantization while loading (EETQ, FP8, bitsandbytes), we instead relied on conditionals in `get_linear`.

Weight loaders support context managers to selectively load particular layers with different weight loaders, which is useful for models like Idefics2 AWQ, which uses a quantized text model but unquantized vision and connector models. However, the context manager would be overridden by `get_linear`, which string-checks `quantizer`. The context manager also did not work with EETQ, FP8, and bitsandbytes.

This change migrates all quantizers to the weight loader infrastructure, which has several benefits (see the sketch below):

- We can use context managers with all quantizers.
- All implementation details move down to the quantizer layers; `get_linear` does not need to know how to handle quantized linear layers.
- All quantizer weights are strongly typed; we no longer pass around raw tensors.
- We no longer have to pass around the `quantizer` string everywhere.

* Exclude non-MLP layers when using FP8 quantization with Llama
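The description above relies on weight loaders that can be temporarily swapped with a context manager while loading a sub-model. Below is a minimal sketch of that pattern; the names `WeightsLoader`, `DefaultWeightsLoader`, `UnquantizedWeight`, `Weights.use_loader`, and `Weights.get_tensor` are illustrative assumptions, not necessarily the exact identifiers used in text-generation-inference.

```python
# Minimal sketch of a swappable weight-loader design (illustrative names only).
from abc import ABC, abstractmethod
from contextlib import contextmanager
from dataclasses import dataclass

import torch


class WeightsLoader(ABC):
    """Loads the weights of one layer and returns a strongly typed weight object."""

    @abstractmethod
    def get_weights(self, weights: "Weights", prefix: str):
        ...


@dataclass
class UnquantizedWeight:
    """Typed wrapper instead of passing around a raw tensor."""
    weight: torch.Tensor


class DefaultWeightsLoader(WeightsLoader):
    """Loader for unquantized layers; quantizers would provide their own loaders."""

    def get_weights(self, weights: "Weights", prefix: str) -> UnquantizedWeight:
        return UnquantizedWeight(weights.get_tensor(f"{prefix}.weight"))


class Weights:
    def __init__(self, tensors: dict, loader: WeightsLoader):
        self._tensors = tensors
        self._loader = loader

    def get_tensor(self, name: str) -> torch.Tensor:
        return self._tensors[name]

    def get_weights(self, prefix: str):
        # All quantizer-specific logic lives in the currently active loader,
        # so a `get_linear`-style helper never needs to string-check a quantizer.
        return self._loader.get_weights(self, prefix)

    @contextmanager
    def use_loader(self, loader: WeightsLoader):
        # Temporarily swap the loader, e.g. to load an unquantized vision tower
        # inside an otherwise AWQ-quantized checkpoint (the Idefics2 case).
        previous, self._loader = self._loader, loader
        try:
            yield
        finally:
            self._loader = previous


if __name__ == "__main__":
    tensors = {"text_model.q_proj.weight": torch.randn(4, 4)}
    weights = Weights(tensors, loader=DefaultWeightsLoader())
    # A vision/connector sub-model could be loaded under a different loader:
    with weights.use_loader(DefaultWeightsLoader()):
        w = weights.get_weights("text_model.q_proj")
    print(type(w).__name__)  # UnquantizedWeight
```

Because the loader is an object rather than a string flag, each quantization scheme can return its own typed weight class, and callers only ever see the typed result.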
Directory contents:

- __snapshots__
- test_bloom_560m_sharded.py
- test_bloom_560m.py
- test_chat_llama.py
- test_completion_prompts.py
- test_flash_awq_sharded.py
- test_flash_awq.py
- test_flash_falcon.py
- test_flash_gemma_gptq.py
- test_flash_gemma.py
- test_flash_gpt2.py
- test_flash_grammar_llama.py
- test_flash_llama_exl2.py
- test_flash_llama_gptq.py
- test_flash_llama_marlin_24.py
- test_flash_llama_marlin.py
- test_flash_llama.py
- test_flash_medusa.py
- test_flash_mistral.py
- test_flash_neox_sharded.py
- test_flash_neox.py
- test_flash_pali_gemma.py
- test_flash_phi.py
- test_flash_qwen2.py
- test_flash_santacoder.py
- test_flash_starcoder2.py
- test_flash_starcoder_gptq.py
- test_flash_starcoder.py
- test_grammar_llama.py
- test_grammar_response_format_llama.py
- test_idefics2.py
- test_idefics.py
- test_llava_next.py
- test_lora_mistral.py
- test_mamba.py
- test_mpt.py
- test_mt0_base.py
- test_neox_sharded.py
- test_neox.py
- test_t5_sharded.py
- test_tools_llama.py