Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-24 16:32:12 +00:00)
* wip
* rollback
* refactor to use prefix/postfix naming + fix all_input_ids_tensor
* maybe patching vlms?
* fix filter and concat
* wip, no filter, no concat
* current
* add prepare_for_prefill
* working
* load tested
* re-create slots
* re-create slots
* fix slot_filtering_indices
* feedback loop
* remove log
* fix benchmarker
* fix vlm and seq2seq
* rename to cache and input lengths
* fix prefill logprobs
* fix launcher
* fix logprobs?
* idk at this point
* max input length
* omfg
* remove debugging lines
* fix tests
* fix mllama
* fix cargo tests
* remove support chunking for paged
* Fixing non blocked attentions
* Fixing dtype + AMD, Ipex targets.
* lint fix.
* rename
* Fix prefix_caching variable, remove defaults in server (confusing a lot of the time).
* Add simple resolution when user specifies ATTENTION=paged.
* Put back non default simple tests.
* Fix env name

---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
__snapshots__
test_bloom_560m_sharded.py
test_bloom_560m.py
test_chat_llama.py
test_completion_prompts.py
test_flash_awq_sharded.py
test_flash_awq.py
test_flash_deepseek_v2.py
test_flash_falcon.py
test_flash_gemma2.py
test_flash_gemma_gptq.py
test_flash_gemma.py
test_flash_gpt2.py
test_flash_grammar_llama.py
test_flash_llama_exl2.py
test_flash_llama_fp8_kv_cache.py
test_flash_llama_fp8.py
test_flash_llama_gptq.py
test_flash_llama_marlin_24.py
test_flash_llama_marlin.py
test_flash_llama_prefix_flashdecoding.py
test_flash_llama_prefix.py
test_flash_llama.py
test_flash_medusa.py
test_flash_mistral.py
test_flash_mixtral_awq.py
test_flash_mixtral_gptq.py
test_flash_mixtral.py
test_flash_neox_sharded.py
test_flash_neox.py
test_flash_pali_gemma.py
test_flash_phi35_moe.py
test_flash_phi.py
test_flash_qwen2.py
test_flash_santacoder.py
test_flash_starcoder2.py
test_flash_starcoder_gptq.py
test_flash_starcoder.py
test_grammar_llama.py
test_grammar_response_format_llama.py
test_idefics2.py
test_idefics.py
test_llava_next.py
test_lora_mistral.py
test_mamba.py
test_mllama.py
test_mpt.py
test_mt0_base.py
test_neox_sharded.py
test_neox.py
test_opt.py
test_t5_sharded.py
test_tools_llama.py