Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-10-15 09:55:23 +00:00
* Use Hub kernels for Marlin and cutlass quantization kernels
* Use hub kernels for MoE/GPTQ-Marlin MoE
* Use attention kernels from the Hub
* Cache the kernels in the Docker image
* Update moe kernels
* Support loading local kernels for development
* Support latest moe kernels
* Update to moe 0.1.1
* CI: download locked kernels for server tests
* Fixup some imports
* CI: activate venv
* Fix unused imports
* Nix: add attention/moe/quantization kernels
* Update hf-kernels to 0.1.5
* Update kernels
* Update tgi-nix flake for hf-kernels
* Fix EOF
* Take `load_kernel` out of a frequently-called function
* Hoist another case of kernel loading out of a somewhat hot function
* marlin-kernels -> quantization
* attention -> paged-attention
* EOF fix
* Update hf-kernels, fixup Docker
* ipex fix
* Remove outdated TODO
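The two hoisting items in the list above ("Take `load_kernel` out of a frequently-called function") can be illustrated with a minimal Python sketch. The names `load_kernel`, `attention_slow`, and `attention_fast` here are hypothetical stand-ins for illustration, not TGI's actual API; the point is only that an expensive kernel lookup should run once at import time rather than on every forward call:

```python
def load_kernel(name: str) -> str:
    """Stand-in for an expensive kernel resolution/load (hypothetical)."""
    load_kernel.calls += 1  # count loads so the difference is observable
    return f"<kernel:{name}>"

load_kernel.calls = 0

# Before the change: the kernel is resolved on every call of the hot function.
def attention_slow(q):
    kernel = load_kernel("paged-attention")
    return (kernel, q)

# After the change: load once at module import time, then reuse the handle.
_paged_attention = load_kernel("paged-attention")

def attention_fast(q):
    return (_paged_attention, q)
```

Calling `attention_fast` any number of times triggers no further loads, while each `attention_slow` call pays the lookup cost again.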
* merges
* __init__.py
* adapter.py
* chunks.py
* convert.py
* dist.py
* hub.py
* import_utils.py
* kernels.py
* log.py
* logits_process.py
* peft.py
* prefill_chunking.py
* quantization.py
* segments.py
* sgmv.py
* speculate.py
* tokens.py
* watermark.py
* weights.py