# Text Generation Inference benchmarking tool


A lightweight benchmarking tool inspired by oha and powered by tui.

## Install

```shell
make install-benchmark
```

## Run

First, start `text-generation-inference`:

```shell
text-generation-launcher --model-id bigscience/bloom-560m
```

Then run the benchmarking tool:

```shell
text-generation-benchmark --tokenizer-name bigscience/bloom-560m
```