Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-24 00:12:08 +00:00)
* add transformers_flash
* inits
* switch version to make it work
* Update Makefile-flash-att-v2
* Update Makefile-flash-att-v2
* Update Makefile-flash-att-v2
* Update Makefile-flash-att-v2
* Update Makefile-flash-att-v2
* Update Makefile-flash-att-v2
* runnable version
* working
* push change
* fix high dim
* init
* default
* latest transformers changes
* revert
* simplify check
* remove flag
* improve type hints + required args
* Update based on transformers PR
* small fix
* Remove Warpers for Processor
* fix compatibility version issue
* raise error if needed
* Simplify with monkey patch
* revert + style + minor improvements
* update comment
* device check
* move the import to avoid device issue
* Update __init__.py
* check for non-native models
* oupsi

Co-authored-by: System administrator <root@ip-10-90-0-159.ec2.internal>
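The "Simplify with monkey patch" and "device check" entries in this log suggest the change swaps a model's attention forward for a custom one at import time. Below is a minimal, hypothetical sketch of that general pattern; the `Attention` class and `flash_forward` function are illustrative stand-ins, not TGI's actual code.

```python
# Hypothetical sketch of the monkey-patch pattern named in the commit log.
# Nothing here reflects TGI internals; names are illustrative only.
import torch
import torch.nn as nn


class Attention(nn.Module):
    """Stand-in for a transformers attention class."""

    def forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
        # Placeholder "native" implementation.
        return hidden_states


# Keep a handle to the original forward so the patch can fall back to it.
_original_forward = Attention.forward


def flash_forward(self, hidden_states: torch.Tensor) -> torch.Tensor:
    # Device check (also mentioned in the log): only take the custom path
    # on CUDA, where a flash-attention kernel could actually run.
    if hidden_states.device.type != "cuda":
        return _original_forward(self, hidden_states)
    # A real patch would invoke the flash-attention kernel here; this
    # sketch simply defers to the original implementation.
    return _original_forward(self, hidden_states)


# Apply the patch once, at import time.
Attention.forward = flash_forward
```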
attention/
awq/
compressed_tensors/
gptq/
marlin/
moe/
__init__.py
bnb.py
conv.py
eetq.py
exl2.py
fp8.py
layernorm.py
linear.py
lora.py
medusa.py
mlp.py
rotary.py
speculative.py
tensor_parallel.py