text-generation-inference/proto
Nicolas Patry 0c9b6cdd76
Choosing input/total tokens automatically based on available VRAM? ()

* Update doc.

* Remove generated files.

* Trying to fix non-chunking targets.

* Attempt 

* fix.

* QuantLinear is rocm compatible.

* Much simpler logic after the overhead.

* Updating logic + non flash.

* Revert doc text.

* Simple updates.

* Fix integration mt0 (transformers update).
2024-10-28 04:59:49 +01:00
v3/             Choosing input/total tokens automatically based on available VRAM? ()   2024-10-28 04:59:49 +01:00
generate.proto  feat: add SchedulerV3 ()                                                2024-06-04 15:56:56 +02:00