Mirror of https://github.com/huggingface/text-generation-inference.git
Synced 2025-04-22 15:32:08 +00:00
* wip
* rollback
* refactor to use prefix/postfix naming + fix all_input_ids_tensor
* maybe patching vlms?
* fix filter and concat
* wip, no filter, no concat
* current
* add prepare_for_prefill
* working
* load tested
* re-create slots
* re-create slots
* fix slot_filtering_indices
* feedback loop
* remove log
* fix benchmarker
* fix vlm and seq2seq
* rename to cache and input lengths
* fix prefill logprobs
* fix launcher
* fix logprobs?
* idk at this point
* max input length
* omfg
* remove debugging lines
* fix tests
* fix mllama
* fix cargo tests
* remove support chunking for paged
* Fixing non blocked attentions
* Fixing dtype + AMD, Ipex targets.
* lint fix.
* rename
* Fix prefix_caching variable, remove defaults in server (confusing a lot of the times).
* Add simple resolution when user specifies ATTENTION=paged.
* Put back non default simple tests.
* Fix env name

---------

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
__init__.py
bloom_modeling.py
clip.py
flash_cohere_modeling.py
flash_dbrx_modeling.py
flash_deepseek_v2_modeling.py
flash_gemma2_modeling.py
flash_gemma_modeling.py
flash_gpt2_modeling.py
flash_gptj_modeling.py
flash_llama_modeling.py
flash_mistral_modeling.py
flash_mixtral_modeling.py
flash_neox_modeling.py
flash_pali_gemma_modeling.py
flash_phi_modeling.py
flash_phi_moe_modeling.py
flash_qwen2_modeling.py
flash_rw_modeling.py
flash_santacoder_modeling.py
flash_starcoder2_modeling.py
idefics2.py
idefics_config.py
idefics_image_processing.py
idefics_modeling.py
idefics_perceiver.py
idefics_processing.py
idefics_vision.py
llava_next.py
mamba_modeling.py
mllama.py
mpt_modeling.py
neox_modeling.py
opt_modeling.py
phi_modeling.py
siglip.py
t5_modeling.py
vlm.py