text-generation-inference/server
Latest commit: Auto max prefill (#2797) — Nicolas Patry, 5df8059037, 2024-12-06 05:52:00 +01:00

Commit message:

* Attempt at automatic max batch prefill.
* Taking into account the number of shards.
* Adding more cards.
* Adding A100 + H100.
* Adding a few more cards.
* Logprobs cost too much.
* h100: better name, and keep factor of 2.
* Damn inflated sparse tflops.
* Typo in h100.
* Updated the flops calculation (checked with fvcore).
* Chunking by default.
* Fix prefix caching for chat completion since we removed logprobs.
* More tests.
* Dropping all the prefill logprobs.
* Add a flag that enables users to get logprobs back.
* Repairing prompt token counting.
* Fixing a few tests.
* Remove some scaffolding.
* Attempting to reduce the issues (workarounds for now).
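The commit trail above describes deriving a default max prefill budget from the card's compute throughput and the shard count. A minimal, hypothetical sketch of that idea follows; the TFLOPS table, the `2 * params` FLOPs-per-token estimate, and the one-second time budget are all illustrative assumptions, not the values TGI actually uses.

```python
# Hypothetical sketch: derive a default max prefill token budget from a
# card's compute throughput, in the spirit of "Auto max prefill" (#2797).
# The figures below are illustrative assumptions, not TGI's actual logic.

# Rough dense BF16 TFLOPS per card. Vendor datasheets often quote sparse
# figures that are ~2x higher (the "inflated sparse tflops" commit note).
CARD_DENSE_TFLOPS = {
    "a100": 312.0,
    "h100": 989.0,
}


def max_prefill_tokens(card: str, num_params: float,
                       num_shards: int = 1, budget_s: float = 1.0) -> int:
    """Largest prefill (in tokens) that fits in `budget_s` seconds.

    Uses the classic ~2 * params FLOPs-per-token forward-pass estimate,
    with compute scaled by the number of shards the model is split across.
    """
    flops_budget = CARD_DENSE_TFLOPS[card] * 1e12 * num_shards * budget_s
    return int(flops_budget // (2.0 * num_params))
```

Under these assumptions, a 7B model on one A100 gets roughly 22k prefill tokens per second of budget, and sharding across two cards doubles that.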
| Name | Last commit | Date |
| --- | --- | --- |
| custom_kernels | All integration tests back everywhere (too many failed CI). (#2428) | 2024-08-16 21:19:46 +02:00 |
| exllama_kernels | Update ROCM libs and improvements (#2579) | 2024-09-30 10:54:32 +02:00 |
| exllamav2_kernels | Update ROCM libs and improvements (#2579) | 2024-09-30 10:54:32 +02:00 |
| tests | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| text_generation_server | Auto max prefill (#2797) | 2024-12-06 05:52:00 +01:00 |
| .gitignore | Impl simple mamba model (#1480) | 2024-02-08 10:19:45 +01:00 |
| bounds-from-nix.py | Sync (most) server dependencies with Nix (#2782) | 2024-12-03 04:04:06 +01:00 |
| Makefile | Remove vLLM dependency for CUDA (#2751) | 2024-11-17 17:34:50 +01:00 |
| Makefile-awq | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-eetq | Sync (most) server dependencies with Nix (#2782) | 2024-12-03 04:04:06 +01:00 |
| Makefile-exllamav2 | Upgrading exl2. (#2415) | 2024-08-14 11:58:08 +02:00 |
| Makefile-flash-att | Hotfixing make install. (#2008) | 2024-06-04 23:34:03 +02:00 |
| Makefile-flash-att-v2 | Update ROCM libs and improvements (#2579) | 2024-09-30 10:54:32 +02:00 |
| Makefile-flashinfer | Prefix test - Different kind of load test to trigger prefix test bugs. (#2490) | 2024-09-11 18:10:40 +02:00 |
| Makefile-lorax-punica | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| Makefile-vllm | Remove vLLM dependency for CUDA (#2751) | 2024-11-17 17:34:50 +01:00 |
| poetry.lock | Sync (most) server dependencies with Nix (#2782) | 2024-12-03 04:04:06 +01:00 |
| pyproject.toml | Sync (most) server dependencies with Nix (#2782) | 2024-12-03 04:04:06 +01:00 |
| README.md | chore: add pre-commit (#1569) | 2024-02-16 11:58:58 +01:00 |
| requirements_cuda.txt | Sync (most) server dependencies with Nix (#2782) | 2024-12-03 04:04:06 +01:00 |
| requirements_intel.txt | Sync (most) server dependencies with Nix (#2782) | 2024-12-03 04:04:06 +01:00 |
| requirements_rocm.txt | Sync (most) server dependencies with Nix (#2782) | 2024-12-03 04:04:06 +01:00 |

# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

## Install

```shell
make install
```

## Run

```shell
make run-dev
```