Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-10-21 21:05:23 +00:00).
This version removes our patches/custom API, making it simpler to pull in changes from upstream. One such change is that we can now enable the FP8 KV cache for paged attention as well.
Files in this directory:

- `__init__.py`
- `common.py`
- `cuda.py`
- `flash_attn_triton.py`
- `flashinfer.py`
- `ipex.py`
- `kv_cache.py`
- `rocm.py`
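For context on the change above: paged attention stores the KV cache in fixed-size physical blocks addressed through a per-sequence block table, and the FP8 option additionally stores each cache entry as an 8-bit float with a scale factor. Below is a minimal, purely illustrative Python sketch of the block-table side only; the class and method names are hypothetical and are not the text-generation-inference API.

```python
# Illustrative sketch of a paged KV cache with a block table.
# All names here are hypothetical, not the actual TGI implementation.

BLOCK_SIZE = 4  # tokens per physical block (assumed value)


class PagedKVCache:
    def __init__(self, num_blocks):
        # Physical storage: num_blocks blocks of BLOCK_SIZE slots each.
        self.blocks = [[None] * BLOCK_SIZE for _ in range(num_blocks)]
        self.free = list(range(num_blocks))
        # Per-sequence block table: sequence id -> list of physical block ids.
        self.block_tables = {}

    def append(self, seq_id, kv):
        """Store one token's KV entry, allocating a new block when full."""
        table = self.block_tables.setdefault(seq_id, [])
        # Count tokens already stored for this sequence.
        n = sum(1 for b in table for slot in self.blocks[b] if slot is not None)
        if n % BLOCK_SIZE == 0:
            # Current block is full (or no block yet): grab a free one.
            table.append(self.free.pop())
        self.blocks[table[-1]][n % BLOCK_SIZE] = kv

    def gather(self, seq_id):
        """Reassemble the sequence's KV entries in logical order."""
        out = []
        for b in self.block_tables.get(seq_id, []):
            out.extend(slot for slot in self.blocks[b] if slot is not None)
        return out


cache = PagedKVCache(num_blocks=4)
for t in range(5):  # 5 tokens spill across two 4-slot blocks
    cache.append("seq-a", t)
print(cache.gather("seq-a"))  # -> [0, 1, 2, 3, 4]
```

The indirection through `block_tables` is what lets physically scattered blocks serve one logical sequence; an FP8 variant would additionally quantize each stored entry to an 8-bit float and keep a scale for dequantization at read time.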