text-generation-inference/backends
Latest commit: c94f415af4 by kaixuanliu, "Change HPU warmup logic: seq length should be with exponential growth" (#3217), 2025-05-10 15:41:18 +02:00
Signed-off-by: Liu, Kaixuan <kaixuan.liu@intel.com>
Co-authored-by: regisss <15324346+regisss@users.noreply.github.com>
Directory      Last commit                                                                     Date
client         Revert "feat: improve qwen2-vl startup" (#2924)                                 2025-01-17 12:09:05 -05:00
gaudi          Change HPU warmup logic: seq length should be with exponential growth (#3217)   2025-05-10 15:41:18 +02:00
grpc-metadata  Upgrading our rustc version. (#2908)                                            2025-01-15 17:04:03 +01:00
llamacpp       Add option to configure prometheus port (#3187)                                 2025-04-23 20:43:25 +05:30
neuron         setuptools <= 70.0 is vulnerable: CVE-2024-6345 (#3171)                         2025-04-15 10:09:37 +02:00
trtllm         Add option to configure prometheus port (#3187)                                 2025-04-23 20:43:25 +05:30
v2             Add option to configure prometheus port (#3187)                                 2025-04-23 20:43:25 +05:30
v3             Warmup gaudi backend (#3172)                                                    2025-04-24 09:57:08 +02:00