
# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference.

## Install

```shell
make install
```

## Run

```shell
make run-dev
```
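Taken together, a typical local development loop might look like the sketch below. The `server/` directory name is an assumption about the repository layout and is not stated in this README; the two `make` targets are the ones it documents.

```shell
# Hypothetical dev workflow; assumes a full repository checkout
# with this directory's Makefile available.
cd server          # directory containing the Makefile (assumed layout)
make install       # install the server package and its dependencies
make run-dev       # start the Python gRPC server in development mode
```

Both targets are defined in the `Makefile` listed in this directory, so they must be run from the directory that contains it.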