Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-21 14:52:20 +00:00)
* Using an enum for flash backends (paged/flashdecoding/flashinfer)
* Early exit on server too.
* Clippy.
* Fix clippy and fmt.
__init__.py
common.py
cuda.py
flash_attn_triton.py
flash_infer.py
ipex.py
rocm.py
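The commit above replaces string-based backend selection with an enum over the flash attention backends. A minimal sketch of that idea, assuming hypothetical names (`AttentionBackend`, `select_backend` are illustrative, not TGI's actual API):

```python
from enum import Enum, auto

# Hypothetical sketch of "an enum for flash backends"; the real
# text-generation-inference code may use different names and members.
class AttentionBackend(Enum):
    PAGED = auto()
    FLASHDECODING = auto()
    FLASHINFER = auto()

def select_backend(name: str) -> AttentionBackend:
    """Map a backend name to the enum, failing early on unknown values."""
    try:
        return AttentionBackend[name.upper()]
    except KeyError as exc:
        raise ValueError(f"Unknown attention backend: {name!r}") from exc

print(select_backend("flashinfer").name)
```

An enum lets both the router and the server reject an unsupported backend at startup ("early exit") instead of failing later on an unmatched string.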