Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-04-22 23:42:06 +00:00.
* Use Hub kernels for Marlin and cutlass quantization kernels
* Use hub kernels for MoE/GPTQ-Marlin MoE
* Use attention kernels from the Hub
* Cache the kernels in the Docker image
* Update moe kernels
* Support loading local kernels for development
* Support latest moe kernels
* Update to moe 0.1.1
* CI: download locked kernels for server tests
* Fixup some imports
* CI: activate venv
* Fix unused imports
* Nix: add attention/moe/quantization kernels
* Update hf-kernels to 0.1.5
* Update kernels
* Update tgi-nix flake for hf-kernels
* Fix EOF
* Take `load_kernel` out of a frequently-called function (see the sketch after this list)
* Hoist another case of kernel loading out of a somewhat hot function
* marlin-kernels -> quantization
* attention -> paged-attention
* EOF fix
* Update hf-kernels, fixup Docker
* ipex fix
* Remove outdated TODO
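The two hoisting items above describe a small but real optimization: resolving a Hub kernel once per process instead of inside a hot per-token code path. Below is a minimal sketch of that pattern, assuming the hf-kernels `get_kernel` API (the package pinned at 0.1.5 in the commit list); the kernel repo id and the function called on the loaded module are illustrative placeholders, not the repository's actual `load_kernel` helper.

```python
# Minimal sketch, not the repository's exact code: hoist kernel loading
# to module import time so the forward pass never repeats the lookup.
from hf_kernels import get_kernel  # assumes hf-kernels is installed

# Resolved once at import; repeated forward calls reuse this module
# instead of re-triggering kernel resolution on every decode step.
paged_attention = get_kernel("kernels-community/paged-attention")


def forward(query, key_cache, value_cache):
    # Hot path: only references the preloaded kernel module. The function
    # name called here is a placeholder for whatever the kernel exposes.
    return paged_attention.paged_attention(query, key_cache, value_cache)
```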
Files in this directory:

__init__.py
bloom_modeling.py
clip.py
flash_cohere_modeling.py
flash_dbrx_modeling.py
flash_deepseek_v2_modeling.py
flash_deepseek_v3_modeling.py
flash_gemma2_modeling.py
flash_gemma_modeling.py
flash_gpt2_modeling.py
flash_gptj_modeling.py
flash_llama_modeling.py
flash_mistral_modeling.py
flash_mixtral_modeling.py
flash_neox_modeling.py
flash_pali_gemma_modeling.py
flash_phi_modeling.py
flash_phi_moe_modeling.py
flash_qwen2_modeling.py
flash_rw_modeling.py
flash_santacoder_modeling.py
flash_starcoder2_modeling.py
idefics2.py
idefics3.py
idefics_config.py
idefics_image_processing.py
idefics_modeling.py
idefics_perceiver.py
idefics_processing.py
idefics_vision.py
llava_next.py
mamba_modeling.py
mllama.py
mpt_modeling.py
neox_modeling.py
opt_modeling.py
phi_modeling.py
qwen2_vl.py
siglip.py
t5_modeling.py
vlm.py