text-generation-inference/backends/v3/src
Latest commit: 1d3a4ab851 "Enable mllama (#272)" by Yuan Wu <yuan.wu@intel.com>, 2025-02-27 16:12:15 +01:00
client/             Enable mllama (#272)                        2025-02-27 16:12:15 +01:00
backend.rs          Fix the issues of tgi-gaudi for v.2.3.1     2024-10-27 20:40:36 +00:00
block_allocator.rs  Lots of improvements (Still 2 allocators) (#2449)  2024-09-25 06:13:11 +00:00
lib.rs              Max token capacity metric (#2595)           2024-10-27 04:03:57 +00:00
main.rs             Pr 2352 ci branch (#2382)                   2024-09-25 06:01:59 +00:00
queue.rs            Pass the max_batch_total_tokens to causal_lm  2024-10-23 08:28:26 +00:00
radix.rs            Adding a test for FD. (#2516)               2024-09-25 06:17:09 +00:00