Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-05-21 09:42:09 +00:00)
Latest commit: IPEX support FP8 kvcache (Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>)

- add kvcache dtype
- add softcap and slidingwindow
- kv scale in pageattn
- remove triton installation, will be installed with torch
- install xelink lib
- softcap default -1.0
Directory contents:

- custom_kernels
- exllama_kernels
- exllamav2_kernels
- tests
- text_generation_server
- .gitignore
- bounds-from-nix.py
- kernels.lock
- Makefile
- Makefile-awq
- Makefile-eetq
- Makefile-exllamav2
- Makefile-flash-att
- Makefile-flash-att-v2
- Makefile-flashinfer
- Makefile-lorax-punica
- Makefile-selective-scan
- Makefile-vllm
- pyproject.toml
- README.md
- req.txt
- requirements_cuda.txt
- requirements_gen.txt
- requirements_intel.txt
- requirements_rocm.txt
- uv.lock
Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference.

Install:

    make install

Run:

    make run-dev
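The two Makefile targets above can be sketched as a minimal development workflow. This is a sketch under one assumption: the commands are run from the directory that contains this Makefile (the repository's server directory), which the page itself does not state.

```shell
# Minimal development workflow sketch, assuming the Makefile targets
# named in this README and a working directory containing that Makefile.
make install    # install the server package and its Python dependencies
make run-dev    # start the gRPC server locally for development
```

In development the server typically stays in the foreground; stop it with Ctrl+C.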