text-generation-inference/integration-tests/models/__snapshots__/test_flash_grammar_llama
Latest commit: 5df8059037 by Nicolas Patry, 2024-12-06 05:52:00 +01:00
Auto max prefill (#2797)
* Attempt at automatic max batch prefill.

* Taking into account the number of shards.

* Adding more cards.

* Adding A100 + H100.

* Adding a few more cards.

* Logprobs cost too much.

* H100: better name, and keep the factor of 2.

* Damn inflated sparse TFLOPS.

* Typo in H100.

* Updated the FLOPs calculation (checked with fvcore; see the sketch after this list).

* Chunking by default.

* Fix prefix caching for chat completion since we removed logprobs.

* More tests.

* Dropping all the prefill logprobs.

* Add a flag that enables users to get logprobs back.

* Repairing prompt token counting.

* Fixing a few tests.

* Remove some scaffolding.

* Attempting to reduce the issues (workarounds for now).
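For reference, a minimal sketch of the fvcore cross-check mentioned in the list above, assuming a PyTorch model. The toy layer and input shape are illustrative assumptions, not taken from the PR's actual code:

```python
# Hedged sketch: cross-checking forward FLOPs with fvcore, as the commit
# message says was done for the updated FLOPs calculation. The toy layer
# and input shape are illustrative assumptions, not the PR's code.
import torch
from fvcore.nn import FlopCountAnalysis

model = torch.nn.Linear(4096, 4096)  # stand-in for one transformer projection
inputs = torch.randn(1, 4096)

flops = FlopCountAnalysis(model, inputs)
print(f"total forward FLOPs: {flops.total():,}")
```

Note that fvcore counts one multiply-accumulate as a single FLOP, so its figures can come out at half of what a 2x-MAC convention reports; that convention difference matters when comparing against a card's advertised TFLOPS.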
File                                                 Last commit                                                    Date
test_flash_llama_grammar_json.json                   Auto max prefill (#2797)                                       2024-12-06 05:52:00 +01:00
test_flash_llama_grammar_load.json                   Auto max prefill (#2797)                                       2024-12-06 05:52:00 +01:00
test_flash_llama_grammar_regex.json                  Auto max prefill (#2797)                                       2024-12-06 05:52:00 +01:00
test_flash_llama_grammar_single_load_instance.json   fix: correctly index into mask when applying grammar (#1618)   2024-03-01 18:22:01 +01:00
test_flash_llama_grammar.json                        Auto max prefill (#2797)                                       2024-12-06 05:52:00 +01:00
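These JSON files are recorded outputs for the grammar integration tests. For orientation, a minimal sketch of the kind of request they exercise, grammar-constrained generation against a running text-generation-inference server; the URL, prompt, and schema below are illustrative assumptions, not the repo's actual test harness:

```python
# Hedged sketch of a grammar-constrained /generate call to a running
# text-generation-inference server. URL, prompt, and schema are
# illustrative assumptions; the real tests use the repo's own fixtures.
import requests

schema = {
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
    "required": ["name", "age"],
}

resp = requests.post(
    "http://localhost:8080/generate",  # assumed local TGI endpoint
    json={
        "inputs": "Tell me about David, who is 25 years old.",
        "parameters": {
            "max_new_tokens": 64,
            # JSON grammar; the regex tests use {"type": "regex", "value": "..."}
            "grammar": {"type": "json", "value": schema},
        },
    },
)
print(resp.json()["generated_text"])
```

The snapshot tests compare the server's generated text and token details against these stored JSON files, so changes like #2797's logprobs handling required regenerating them.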