Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-23 16:02:10 +00:00)
* Add support for compressed-tensors w8a8 int checkpoints

  This change adds a loader for w8a8 int checkpoints. One large benefit of int8 support is that the corresponding CUTLASS matmul kernels also work on compute capability 7.5.

  Evaluation on neuralmagic/Meta-Llama-3.1-8B-Instruct-quantized.w8a8:

  | Tasks         |Version| Filter         |n-shot| Metric                |   |Value |   |Stderr|
  |---------------|------:|----------------|-----:|-----------------------|---|-----:|---|------|
  |gsm8k_cot_llama|      3|flexible-extract|     8|exact_match            |↑  |0.8431|±  |0.0100|
  |               |       |strict-match    |     8|exact_match            |↑  |0.8393|±  |0.0101|
  |ifeval         |      4|none            |     0|inst_level_loose_acc   |↑  |0.8597|±  |   N/A|
  |               |       |none            |     0|inst_level_strict_acc  |↑  |0.8201|±  |   N/A|
  |               |       |none            |     0|prompt_level_loose_acc |↑  |0.7967|±  |0.0173|
  |               |       |none            |     0|prompt_level_strict_acc|↑  |0.7468|±  |0.0187|

  This is in the same ballpark as vLLM. As usual, lots of thanks to Neural Magic/vLLM for the kernels.

* Always use dynamic input quantization for w8a8 int

  It's far less flaky and gives better output.

* Use marlin-kernels 0.3.5

* Fix a typo

  Co-authored-by: drbh <david.richard.holtz@gmail.com>

* Small fixes

---------

Co-authored-by: drbh <david.richard.holtz@gmail.com>
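The "dynamic input quantization" mentioned above means the activation scales are computed per token at runtime (absmax of each row) rather than loaded as static calibration values. A minimal NumPy sketch of the idea follows; the function names are hypothetical and this is not the actual kernel path, which uses CUTLASS int8 GEMMs:

```python
import numpy as np

def dynamic_quant_int8(x):
    """Dynamically quantize activations to int8 with per-row (per-token) absmax scales.

    Hypothetical illustration only; the real implementation runs fused CUDA kernels.
    """
    scale = np.abs(x).max(axis=-1, keepdims=True) / 127.0
    scale = np.maximum(scale, 1e-8)  # guard against all-zero rows
    q = np.clip(np.rint(x / scale), -127, 127).astype(np.int8)
    return q, scale

def w8a8_matmul(x, w_q, w_scale):
    """w8a8 matmul sketch: int8 x int8 with int32 accumulation, then dequantize.

    w_q: int8 weights of shape (out, in); w_scale: per-output-channel scales, shape (out,).
    """
    x_q, x_scale = dynamic_quant_int8(x)
    acc = x_q.astype(np.int32) @ w_q.astype(np.int32).T  # int32 accumulator
    return acc.astype(np.float32) * x_scale * w_scale.reshape(1, -1)
```

Because the scale adapts to each input row, outliers in one token cannot degrade the quantization of every other token, which is consistent with the commit's observation that dynamic quantization is less flaky than static input scales.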
Integration tests:

__snapshots__/
test_bloom_560m_sharded.py
test_bloom_560m.py
test_chat_llama.py
test_completion_prompts.py
test_compressed_tensors_w8a8_int_dynamic_weight.py
test_compressed_tensors_w8a8_int.py
test_compressed_tensors_w8an_fp.py
test_compressed_tensors_wna16_int.py
test_flash_awq_sharded.py
test_flash_awq.py
test_flash_deepseek_v2.py
test_flash_falcon.py
test_flash_gemma2.py
test_flash_gemma_gptq.py
test_flash_gemma.py
test_flash_gpt2.py
test_flash_grammar_llama.py
test_flash_llama_exl2.py
test_flash_llama_fp8_kv_cache.py
test_flash_llama_fp8.py
test_flash_llama_gptq.py
test_flash_llama_marlin_24.py
test_flash_llama_marlin.py
test_flash_llama_prefix_flashdecoding.py
test_flash_llama_prefix.py
test_flash_llama.py
test_flash_medusa.py
test_flash_mistral.py
test_flash_mixtral_awq.py
test_flash_mixtral_gptq.py
test_flash_mixtral.py
test_flash_neox_sharded.py
test_flash_neox.py
test_flash_pali_gemma.py
test_flash_phi35_moe.py
test_flash_phi.py
test_flash_qwen2_vl.py
test_flash_qwen2.py
test_flash_santacoder.py
test_flash_starcoder2.py
test_flash_starcoder_gptq.py
test_flash_starcoder.py
test_grammar_llama.py
test_grammar_response_format_llama.py
test_idefics2.py
test_idefics.py
test_llava_next.py
test_lora_mistral.py
test_mamba.py
test_mllama.py
test_mpt.py
test_mt0_base.py
test_neox_sharded.py
test_neox.py
test_opt.py
test_t5_sharded.py
test_tools_llama.py