* Using both values from the config as they might not be correct.
* Fixing max_position_embeddings for falcon.
* Simple attempt to fix the healthcheck block allocation.
* Much simpler solution.
* Default value for Backend start_health
* Attempt at automatic max batch prefill.
* Taking into account the number of shards.
* Adding more cards.
* Adding A100 + H100
* Adding a few more cards.
* Logprobs cost too much.
* H100: better name, and keep the factor of 2.
* Damn inflated sparse tflops.
* Typo in h100.
* Updated the flops calculation (checked with fvcore).
* Chunking by default.
* Fix prefix caching for chat completion since we removed logprobs.
* More tests.
* Dropping all the prefill logprobs.
* Add a flag that enables users to get logprobs back.
* Repairing prompt token counting.
* Fixing a few tests.
* Remove some scaffolding.
* Attempting to reduce the issues (workarounds for now).
* Saving some VRAM.
- 8B on 4xL4, attention=flashdecoding: before 4.28GB left, after 4.32GB left,
so 400MB saved.
- Effect not as visible with attention=flashinfer and n_shard=1; I suspect
it's linked to the torch allocator.
* Adding assertion.
* Sync (most) server dependencies with Nix
Skipped most grpcio packages, because of protobuf version
incompatibility with the opentelemetry packages.
* Add a primitive script to generate Poetry commands to sync with Nix
This is not fully automated, since the Nix versions may not always be
resolvable automatically. However, it does take most of the work out of
doing this manually.
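The script itself lives in the repo; as a rough, hypothetical sketch of the idea (not the actual script), it boils down to turning a set of versions already read out of the Nix environment into Poetry pin commands:

```python
# Hypothetical sketch only, not the script added here: given package versions
# already resolved from the Nix environment (obtaining them is the manual part),
# emit Poetry commands that pin the same versions.
nix_versions = {"some-package": "1.2.3", "another-package": "4.5.6"}  # placeholder data

for package, version in sorted(nix_versions.items()):
    print(f"poetry add {package}=={version}")
```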
* Upgrade eetq?
* Fmt.
---------
Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
Llama 3 has a list of values as eos_token_id:
"['<|end_of_text|>', '<|eom_id|>', '<|eot_id|>']"
This breaks the tokenizer, since it expects a single value. This
commit uses tokenizer.eos_token_id instead in such a case.
Fixes: #2440
Signed-off-by: Dmitry Rogozhkin <dmitry.v.rogozhkin@intel.com>
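As an illustration of that fallback (plain transformers, with an example model id; this is not TGI's actual code):

```python
from transformers import AutoConfig, AutoTokenizer

model_id = "meta-llama/Meta-Llama-3-8B-Instruct"  # illustrative model id
config = AutoConfig.from_pretrained(model_id)
tokenizer = AutoTokenizer.from_pretrained(model_id)

eos_token_id = getattr(config, "eos_token_id", None)
if isinstance(eos_token_id, (list, tuple)):
    # Llama 3 configs can carry several stop tokens; when a single id is
    # required, fall back to the tokenizer's own eos_token_id as described above.
    eos_token_id = tokenizer.eos_token_id
```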
* feat: support continue_final_message param in chat request
* feat: add test for continue final message
* fix: bump openapi docs
* fix: remove continue_final_message chat request param
* fix: remove unneeded launcher args in continue test
* fix: bump test output
* fix: remove accidentally included guideline from rebase
* fix: remove guideline tests
* fix: adjust continuation tests expected text
* fix: replace expected output for continue test
The compressed-tensors configuration can specify the configuration of
the KV cache as well. Use an FP8 KV cache when the configuration tells
us to do so (all other options and types are ignored for now).
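Conceptually the check looks something like the sketch below; the field names of the compressed-tensors quantization config are assumptions here, not the real implementation:

```python
import torch

def kv_cache_dtype(quantization_config: dict | None, default: torch.dtype) -> torch.dtype:
    # Assumed shape of the config: a "kv_cache_scheme" entry describing an
    # 8-bit float quantization of the KV cache (field names are illustrative).
    # Anything else falls back to the model dtype.
    scheme = (quantization_config or {}).get("kv_cache_scheme")
    if scheme and scheme.get("type") == "float" and scheme.get("num_bits") == 8:
        return torch.float8_e4m3fn
    return default
```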
* Move JSON grammar -> regex grammar conversion to the router
This change moves the JSON grammar -> regex grammar conversion to the
router by adding a dependency on the `outlines-core` Rust crate. In
contrast to the Python implementation, the conversions are not LRU-cached
since they seem to be fast enough:
simple schema:
    time:   [5.8293 µs 5.8307 µs 5.8320 µs]
    change: [-13.166% -12.884% -12.641%] (p = 0.00 < 0.05)
    Performance has improved.
complex schema:
    time:   [14.875 µs 14.881 µs 14.887 µs]
    change: [-2.1637% -1.9914% -1.7852%] (p = 0.00 < 0.05)
    Performance has improved.
Using the schemas from:
https://github.com/dottxt-ai/outlines-core/blob/main/benchmarks/bench_json_schema.py
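To make the conversion concrete without leaning on outlines-core's API, here is a hand-rolled toy version of the schema-to-regex idea for a tiny subset of JSON Schema (strings, integers, flat objects); the router delegates the real conversion to the crate, not to code like this:

```python
# Toy illustration of JSON schema -> regex conversion; supports only a tiny
# subset (string, integer, object with string/integer properties).
def schema_to_regex(schema: dict) -> str:
    if schema["type"] == "string":
        return r'"[^"]*"'
    if schema["type"] == "integer":
        return r"-?\d+"
    if schema["type"] == "object":
        parts = [
            rf'"{name}"\s*:\s*{schema_to_regex(prop)}'
            for name, prop in schema["properties"].items()
        ]
        return r"\{\s*" + r"\s*,\s*".join(parts) + r"\s*\}"
    raise ValueError(f"unsupported type: {schema['type']}")

print(schema_to_regex({
    "type": "object",
    "properties": {"name": {"type": "string"}, "age": {"type": "integer"}},
}))
# \{\s*"name"\s*:\s*"[^"]*"\s*,\s*"age"\s*:\s*-?\d+\s*\}
```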
* Incomplete generation stream fix (#2754)
entries.len() can exceed batch.size during prefill, so entries need to be filtered as well (see the sketch below).
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
* entries was wrongly extended for models that do not support chunking.
---------
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Wang, Yi <yi.a.wang@intel.com>
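As a conceptual sketch of the filtering described in the stream-fix bullet above (the real change lives in the Rust router; the names here are illustrative):

```python
def filter_entries(entries: dict[int, object], batch_request_ids: set[int]) -> dict[int, object]:
    # entries can hold more requests than the batch returned by prefill,
    # so keep only the entries whose request ids the batch still contains.
    return {rid: entry for rid, entry in entries.items() if rid in batch_request_ids}
```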