Latest commit: 628d6a13da "Add support for exl2 quantization" by Daniël de Kok, 2024-09-24

Mostly straightforward changes to existing code:

* Wrap the quantizer parameters in a small wrapper to avoid passing
  around untyped tuples and needing to repack them as a dict.
* Move scratch-space computation to warmup, because the maximum input
  sequence length is needed to avoid allocating huge scratch buffers
  that OOM.
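The two changes above can be sketched as follows. This is a minimal illustration, not the commit's actual code: the names (`Exl2Params`, `warmup_scratch_bytes`), the field list, and the sizing formula are all assumptions made for the example.

```python
from dataclasses import dataclass
from typing import Any


@dataclass
class Exl2Params:
    # Hypothetical wrapper: one named field per quantizer tensor, instead of
    # passing an untyped tuple around and repacking it as a dict at call sites.
    q_weight: Any
    q_scale: Any
    q_groups: Any
    q_invperm: Any


def warmup_scratch_bytes(
    max_input_length: int, hidden_size: int, dtype_bytes: int = 2
) -> int:
    # Illustrative sizing: computed at warmup, once the true maximum input
    # sequence length is known, rather than from a pessimistic worst-case
    # bound that can allocate huge scratch buffers and OOM.
    return max_input_length * hidden_size * dtype_bytes
```

For example, `warmup_scratch_bytes(2048, 4096)` yields 16,777,216 bytes (16 MiB) for fp16 activations, under the assumed formula.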
| Path | Last commit | Date |
|------|-------------|------|
| custom_kernels | chore: add pre-commit (#1569) | 2024-04-24 |
| exllama_kernels | MI300 compatibility (#1764) | 2024-07-17 |
| exllamav2_kernels | chore: add pre-commit (#1569) | 2024-04-24 |
| tests | Aligin the source code with main branch 2.0.4 | 2024-09-24 |
| text_generation_server | Add support for exl2 quantization | 2024-09-24 |
| .gitignore | Impl simple mamba model (#1480) | 2024-04-23 |
| Makefile | Aligin the source code with main branch 2.0.4 | 2024-09-24 |
| Makefile-awq | chore: add pre-commit (#1569) | 2024-04-24 |
| Makefile-eetq | Upgrade EETQ (Fixes the cuda graphs). (#1729) | 2024-04-25 |
| Makefile-flash-att | chore: add pre-commit (#1569) | 2024-04-24 |
| Makefile-flash-att-v2 | MI300 compatibility (#1764) | 2024-07-17 |
| Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-04-24 |
| Makefile-vllm | MI300 compatibility (#1764) | 2024-07-17 |
| poetry.lock | Aligin the source code with main branch 2.0.4 | 2024-09-24 |
| pyproject.toml | Aligin the source code with main branch 2.0.4 | 2024-09-24 |
| README.md | chore: add pre-commit (#1569) | 2024-04-24 |
| requirements_cuda.txt | Modifing the version number. | 2024-07-17 |
| requirements_rocm.txt | Modifing the version number. | 2024-07-17 |

# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

## Install

    make install

## Run

    make run-dev