Latest commit:

* Upgrade the version number.
* Remove modifications in the lock file.
* Tmp branch to test the transformers backend with 2.5.1 and TP>1.
* Fix the transformers backend: `inference_mode` forces the use of `aten.matmul` instead of `aten.mm`; the former has no sharding support, which crashes the transformers tensor-parallel (TP) support. `lm_head.forward` also crashes because it skips the hook that casts/decasts the DTensor. Torch 2.5.1 is required for sharding support.
* Put back the attention impl.
* Revert the flashinfer change (this will fail).
* Build AOT.
* Use the 2.5 kernels.
* Remove the archlist; it's defined in the Docker image anyway.
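The `inference_mode` point is the subtle one. A minimal sketch of the failure mode and a common workaround, assuming a model whose weights are sharded as DTensors for tensor parallelism (the function and wrapper below are illustrative, not TGI's actual code):

    import torch

    def generate_logits(model, input_ids):
        # Per the commit message: under torch.inference_mode(), matmuls on
        # DTensor-sharded weights dispatch to aten.matmul, which (before
        # torch 2.5.1) had no sharding rule, so tensor-parallel execution
        # crashes. torch.no_grad() disables autograd without switching to
        # inference-only dispatch, so the DTensor cast/decast hooks
        # (e.g. around lm_head.forward) still run.
        with torch.no_grad():
            return model(input_ids).logits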
Directory contents:

custom_kernels/
exllama_kernels/
exllamav2_kernels/
tests/
text_generation_server/
.gitignore
bounds-from-nix.py
Makefile
Makefile-awq
Makefile-eetq
Makefile-exllamav2
Makefile-flash-att
Makefile-flash-att-v2
Makefile-flashinfer
Makefile-lorax-punica
Makefile-selective-scan
Makefile-vllm
pyproject.toml
README.md
requirements_cuda.txt
requirements_intel.txt
requirements_rocm.txt
uv.lock
Text Generation Inference Python gRPC Server
A Python gRPC server for Text Generation Inference
Install:

    make install

Run:

    make run-dev
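Once the server is running, clients talk to it over gRPC. A minimal health-check sketch, assuming the generated stubs live under `text_generation_server.pb` and the server listens on a unix-domain socket; the module path, service name, and socket path here are assumptions based on the repo layout, not documented guarantees:

    import grpc

    from text_generation_server.pb import generate_pb2, generate_pb2_grpc

    # Assumption: the server exposes a TextGenerationService with a Health
    # RPC on a unix-domain socket; adjust the path to your deployment.
    channel = grpc.insecure_channel("unix:///tmp/text-generation-server-0")
    stub = generate_pb2_grpc.TextGenerationServiceStub(channel)
    response = stub.Health(generate_pb2.HealthRequest())
    print(response)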