text-generation-inference/backends/v3/src
OlivierDehaene 8e0c161d0a
fix: incomplete generations w/ single-token generations and models that did not support chunking (#2770)
* Incomplete generation stream fix (#2754)

entries.len() can exceed batch.size during prefill, so entries need to be filtered there as well.

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>

* entries were wrongly extended for models that did not support chunking

---------

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Wang, Yi <yi.a.wang@intel.com>
2024-11-21 16:37:55 +00:00
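The fix description above is terse. As a rough illustration of the kind of post-prefill filtering it describes, here is a minimal, self-contained Rust sketch. The names `Entry`, `Batch`, `request_ids`, and `filter_batch_entries` are hypothetical stand-ins for illustration only, not the actual types or functions in backend.rs.

```rust
use std::collections::{HashMap, HashSet};

/// Hypothetical stand-in for the router's per-request state; the real
/// entry type carries much more (the request, its response channel, ...).
struct Entry;

/// Hypothetical stand-in for the batch returned by prefill.
struct Batch {
    /// Ids of the requests that actually made it into the batch.
    request_ids: Vec<u64>,
}

/// Keep only the entries whose requests are present in the returned batch.
/// During prefill, `entries` can hold more requests than the batch (for
/// example when the model does not support chunking), so the map must be
/// filtered rather than assumed to match the batch one-to-one.
fn filter_batch_entries(entries: &mut HashMap<u64, Entry>, batch: &Batch) {
    let kept: HashSet<u64> = batch.request_ids.iter().copied().collect();
    entries.retain(|id, _| kept.contains(id));
}

fn main() {
    let mut entries: HashMap<u64, Entry> = HashMap::new();
    entries.insert(0, Entry);
    entries.insert(1, Entry);
    entries.insert(2, Entry);

    // Suppose prefill only accepted requests 0 and 2.
    let batch = Batch { request_ids: vec![0, 2] };
    filter_batch_entries(&mut entries, &batch);
    assert_eq!(entries.len(), 2);
}
```

The point of the `retain` call is that the map of in-flight entries is reconciled against the batch the model actually accepted, instead of assuming entries.len() == batch.size as the pre-fix code effectively did.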
File                  Last modified               Last commit message
client                2024-10-28 04:59:49 +01:00  Choosing input/total tokens automatically based on available VRAM? (#2673)
backend.rs            2024-11-21 16:37:55 +00:00  fix: incomplete generations w/ single-token generations and models that did not support chunking (#2770)
block_allocator.rs    2024-08-29 16:29:01 +02:00  Lots of improvements (Still 2 allocators) (#2449)
lib.rs                2024-10-28 04:59:49 +01:00  Choosing input/total tokens automatically based on available VRAM? (#2673)
main.rs               2024-10-28 04:59:49 +01:00  Choosing input/total tokens automatically based on available VRAM? (#2673)
queue.rs              2024-10-16 12:49:33 +02:00  feat: prefill chunking (#2600)
radix.rs              2024-09-16 17:00:54 +02:00  Adding a test for FD. (#2516)