Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-21 14:52:20 +00:00)
* Prefix caching WIP
* Fixing prefix attention.
* Fixing flashinfer import.
* Fixing black.
* Fixing medusa (still wrong outputs, but functional).
* Just medusa values now.
* Fixing medusa without prefix caching.
* Fixing prefix caching.
* Medusa requires reshaping.
* Removing the logs.
* Remove router.nix
* Fixup:
  - Remove logs
  - Disable VLMs (they do not work)
  - Disable prefix caching when the user wants prefill logprobs.
* Update flake.lock

Co-authored-by: Daniël de Kok <me@danieldk.eu>
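The last fixup item, disabling prefix caching when the caller asks for prefill logprobs, follows from how prefix caching works: prompt tokens served from the cache skip the prefill forward pass, so their per-token logprobs are never computed. Below is a minimal Python sketch of that guard; the names (`Request`, `should_use_prefix_caching`) are hypothetical and are not TGI's actual API.

```python
from dataclasses import dataclass


@dataclass
class Request:
    # Hypothetical request type for illustration; not TGI's actual class.
    prompt: str
    prefill_logprobs: bool = False


def should_use_prefix_caching(request: Request, prefix_caching_enabled: bool) -> bool:
    """Decide whether a request may reuse cached KV entries for its prompt prefix."""
    if request.prefill_logprobs:
        # Tokens served from the prefix cache never go through the prefill
        # forward pass, so their logprobs would be missing from the response.
        # Fall back to a full prefill instead.
        return False
    return prefix_caching_enabled
```

Under this assumption, a request with `prefill_logprobs=True` always takes the uncached path, matching the behavior described in the fixup.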
attention/
awq/
gptq/
marlin/
__init__.py
bnb.py
conv.py
eetq.py
exl2.py
fp8.py
layernorm.py
linear.py
lora.py
medusa.py
mlp.py
rotary.py
speculative.py
tensor_parallel.py