text-generation-inference/server
drbh 56670398f3 fix: handle batches with and without grammars (#1676)
This PR correctly handles batches containing a mixture of constrained and unconstrained generations.

Currently, if a batch contains mixed generations, generation throws an error because it incorrectly attempts to constrain a request that has an empty grammar.

We now handle `None` grammars and only apply the mask when needed.

Fixes:
https://github.com/huggingface/text-generation-inference/issues/1643
2024-04-25 14:06:48 +03:00
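The fix described above can be sketched as a batch-level masking step that skips requests without a grammar. This is a minimal illustration, not the actual TGI implementation; the function name `apply_grammar_masks` and the representation of a grammar as a set of allowed token ids are assumptions made for clarity:

```python
# Minimal sketch (hypothetical, not TGI's real code): apply the grammar
# constraint mask only to requests that actually have a grammar, leaving
# unconstrained (None-grammar) requests untouched.
from typing import List, Optional, Set

def apply_grammar_masks(
    logits_rows: List[List[float]],
    grammars: List[Optional[Set[int]]],  # allowed token ids per request, or None
) -> List[List[float]]:
    out = []
    for row, grammar in zip(logits_rows, grammars):
        if grammar is None:
            # Unconstrained request: pass logits through unchanged.
            out.append(row)
            continue
        # Constrained request: mask out tokens the grammar does not allow.
        out.append([
            logit if tok in grammar else float("-inf")
            for tok, logit in enumerate(row)
        ])
    return out
```

Mixing a `None` grammar with a constrained one in the same batch then works as expected: the first row is returned as-is, while disallowed tokens in the second row are masked to negative infinity.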
custom_kernels chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
exllama_kernels chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
exllamav2_kernels chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
tests feat(server): add frequency penalty (#1541) 2024-04-24 08:43:50 +00:00
text_generation_server fix: handle batches with and without grammars (#1676) 2024-04-25 14:06:48 +03:00
.gitignore Impl simple mamba model (#1480) 2024-04-23 11:45:11 +03:00
Makefile feat: cohere (#1660) 2024-04-25 12:39:14 +03:00
Makefile-awq chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
Makefile-eetq feat: eetq gemv optimization when batch_size <= 4 (#1502) 2024-04-23 09:20:14 +03:00
Makefile-flash-att chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
Makefile-flash-att-v2 make install-flash-attn-v2-cuda should work like make install-flash-attn-v2 used to work. (#1294) 2023-11-28 16:28:40 +01:00
Makefile-selective-scan chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
Makefile-vllm Speculative (#1308) 2024-04-18 12:39:39 +00:00
poetry.lock feat: cohere (#1660) 2024-04-25 12:39:14 +03:00
pyproject.toml v1.4.4 (#1668) 2024-04-25 12:40:30 +03:00
README.md chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
requirements_cuda.txt feat: cohere (#1660) 2024-04-25 12:39:14 +03:00
requirements_rocm.txt feat: cohere (#1660) 2024-04-25 12:39:14 +03:00
requirements.txt Update peft + transformers + accelerate + bnb + safetensors (#1646) 2024-04-25 11:49:44 +03:00

Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

Install

make install

Run

make run-dev