Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-19 22:02:06 +00:00)
Contents of the `server/` directory:

- `custom_kernels/`
- `exllama_kernels/`
- `exllamav2_kernels/`
- `tests/`
- `text_generation_server/`
- `.gitignore`
- `bounds-from-nix.py`
- `Makefile`
- `Makefile-awq`
- `Makefile-eetq`
- `Makefile-exllamav2`
- `Makefile-flash-att`
- `Makefile-flash-att-v2`
- `Makefile-flashinfer`
- `Makefile-lorax-punica`
- `Makefile-selective-scan`
- `Makefile-vllm`
- `pyproject.toml`
- `README.md`
- `requirements_cuda.txt`
- `requirements_intel.txt`
- `requirements_rocm.txt`
- `uv.lock`
# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference.

## Install

```shell
make install
```

## Run

```shell
make run-dev
```
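Once installed, the package also provides a `text-generation-server` command-line entry point. A minimal sketch of a typical workflow is below; the model id is only an example, and the exact CLI flags may differ across versions, so treat this as an illustration rather than a definitive reference:

```shell
# Install the server and its Python dependencies
make install

# Pre-download model weights, then start serving the model
# (model id chosen as an example; substitute your own)
text-generation-server download-weights bigscience/bloom-560m
text-generation-server serve bigscience/bloom-560m
```

In practice this server is usually not launched by hand: the TGI launcher starts it as a shard subprocess and communicates with it over gRPC. `make run-dev` is the convenient entry point for local development.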