Mirror of https://github.com/huggingface/text-generation-inference.git
A lightweight benchmarking tool inspired by oha and powered by tui.
Install

```shell
make install-benchmark
```
Run

First, start text-generation-inference:

```shell
text-generation-launcher --model-id bigscience/bloom-560m
```

Then run the benchmarking tool:

```shell
text-generation-benchmark --tokenizer-name bigscience/bloom-560m
```
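The two steps above can be combined into one script that starts the server in the background and then attaches the benchmark to it. This is only a sketch under assumptions not stated in the README: it assumes both binaries are on your `PATH` and that a fixed `sleep` is long enough for the model to load on your hardware (a real setup would poll the server instead).

```shell
#!/bin/sh
# Sketch only: launch the inference server in the background, wait for the
# model to load (the 60s value is a guess; adjust for your hardware), run the
# TUI benchmark in the foreground, then stop the server when the benchmark exits.
text-generation-launcher --model-id bigscience/bloom-560m &
LAUNCHER_PID=$!

sleep 60

text-generation-benchmark --tokenizer-name bigscience/bloom-560m

kill "$LAUNCHER_PID"
```

Because the benchmark is a full-screen tui application, run it from an interactive terminal rather than piping its output.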
