# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

## Install

```shell
make install
```
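
After installing, a quick sanity check is to import the package from Python. A minimal sketch; the module name `text_generation_server` matches this repository's package directory:

```python
# Verify the install by importing the package and printing its location.
# `text_generation_server` is the package directory shipped in this repo.
import text_generation_server

print("installed at:", text_generation_server.__file__)
```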

## Run

```shell
make run-dev
```
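
Once the dev server is running, you can probe it over gRPC. A minimal sketch, assuming the server registers the standard gRPC health-checking service and listens on a local unix socket; the socket path below is an assumption, so adjust it to your configuration:

```python
# Probe the running server using the standard gRPC health-checking protocol.
# Assumptions: grpcio-health-checking is installed, the health service is
# registered by the server, and this unix socket path is where it listens.
import grpc
from grpc_health.v1 import health_pb2, health_pb2_grpc

channel = grpc.insecure_channel("unix:///tmp/text-generation-server-0")
stub = health_pb2_grpc.HealthStub(channel)

response = stub.Check(health_pb2.HealthCheckRequest())
print(health_pb2.HealthCheckResponse.ServingStatus.Name(response.status))
```

If the probe prints `SERVING`, the server is up and accepting connections.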