text-generation-inference/backends/v3

Last commit: 67ee45a270 by yuanwu (2024-10-23 08:28:26 +00:00)

    Pass the max_batch_total_tokens to causal_lm

    refine the warmup

    Signed-off-by: yuanwu <yuan.wu@intel.com>
..
benches     Keeping the benchmark somewhere (#2401)                                2024-09-25 06:05:43 +00:00
src         Pass the max_batch_total_tokens to causal_lm                           2024-10-23 08:28:26 +00:00
build.rs    Rebase TRT-llm (#2331)                                                 2024-09-25 05:55:39 +00:00
Cargo.toml  fix: bump minijinja version and add test for llama 3.1 tools (#2463)   2024-09-25 06:11:21 +00:00