Text Generation Inference benchmarking tool

A lightweight benchmarking tool inspired by oha and powered by tui.

Install

make install-benchmark
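
The make target is a thin wrapper around Cargo. If you would rather install the binary directly, the equivalent is presumably the following (an assumption; check the repository Makefile for the exact invocation):

cd benchmark && cargo install --path .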

Run

First, start text-generation-inference:

text-generation-launcher --model-id bigscience/bloom-560m
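
The benchmark connects to the running model shard locally, so the launcher should run on the same machine you benchmark from. If you need to control sharding or the HTTP port, the launcher accepts options for both; the flag names below are assumptions, so verify them with text-generation-launcher --help:

text-generation-launcher --model-id bigscience/bloom-560m --num-shard 1 --port 8080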

Then run the benchmarking tool:

text-generation-benchmark --tokenizer-name bigscience/bloom-560m
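
For more controlled runs you can sweep batch sizes and fix the prompt and decode lengths. The sketch below assumes the benchmark's CLI exposes these flags and accepts a repeated --batch-size; confirm the exact names and defaults with text-generation-benchmark --help:

text-generation-benchmark \
    --tokenizer-name bigscience/bloom-560m \
    --batch-size 1 --batch-size 2 --batch-size 4 \
    --sequence-length 512 \
    --decode-length 64 \
    --runs 10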