text-generation-inference/server/text_generation_server
Latest commit 289aa48554 by Daniël de Kok: Move JSON grammar -> regex grammar conversion to the router (#2772)
* Move JSON grammar -> regex grammar conversion to the router

This change moves the JSON grammar -> regex grammar conversion into the
router by adding a dependency on the `outlines-core` Rust crate. Unlike
the Python implementation, the conversions are not LRU-cached, since they
appear to be fast enough:

simple schema           time:   [5.8293 µs 5.8307 µs 5.8320 µs]
                        change: [-13.166% -12.884% -12.641%] (p = 0.00 < 0.05)
                        Performance has improved.

complex schema          time:   [14.875 µs 14.881 µs 14.887 µs]
                        change: [-2.1637% -1.9914% -1.7852%] (p = 0.00 < 0.05)
                        Performance has improved.

Using the schemas from:
https://github.com/dottxt-ai/outlines-core/blob/main/benchmarks/bench_json_schema.py
2024-11-25 18:47:34 +01:00
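The caching trade-off described above (skipping an LRU cache because each conversion runs in microseconds) can be sketched in Python. Note that `schema_to_regex` below is a hypothetical stand-in, not the actual outlines-core conversion; only the caching pattern is the point:

```python
import json
from functools import lru_cache


def schema_to_regex(schema_json: str) -> str:
    """Stand-in for the JSON grammar -> regex grammar conversion.

    The real conversion (outlines-core) compiles a JSON schema into a
    regular expression; this toy version only maps property names to a
    trivial pattern, to illustrate where a cache would sit.
    """
    schema = json.loads(schema_json)
    parts = [f'"{name}":.+' for name in schema.get("properties", {})]
    return r"\{" + ",".join(parts) + r"\}"


# If the conversion were slow, memoizing on the schema string would help;
# at a few microseconds per call, the cache buys little.
cached_schema_to_regex = lru_cache(maxsize=128)(schema_to_regex)

schema = json.dumps({"properties": {"name": {"type": "string"}}})
assert schema_to_regex(schema) == cached_schema_to_regex(schema)
```

Keying the cache on the raw schema string (rather than the parsed document) is what an LRU layer in the router would have to do anyway, since request payloads arrive as text.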
Name            Last commit                                                                 Date
adapters        feat: add ruff and resolve issue (#2262)                                    2024-07-26 10:29:09 -04:00
layers          Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)        2024-11-20 18:25:23 +01:00
models          feat: add payload limit (#2726)                                             2024-11-21 18:20:15 +00:00
pb              chore: add pre-commit (#1569)                                               2024-02-16 11:58:58 +01:00
utils           Move JSON grammar -> regex grammar conversion to the router (#2772)         2024-11-25 18:47:34 +01:00
__init__.py     feat(clients): Python client (#103)                                         2023-03-07 18:52:22 +01:00
cache.py        fix(server): decrease memory fragmentation (#557)                           2023-07-06 14:28:33 +02:00
cli.py          Add initial support for compressed-tensors checkpoints (#2732)              2024-11-10 13:54:07 +01:00
interceptor.py  feat: prefill chunking (#2600)                                              2024-10-16 12:49:33 +02:00
server.py       Choosing input/total tokens automatically based on available VRAM? (#2673)  2024-10-28 04:59:49 +01:00
tracing.py      Add OTLP Service Name Environment Variable (#2076)                          2024-06-25 09:33:01 +02:00