text-generation-inference/backends
David Corvoysier 79183d1647
Bump neuron SDK version (#3260)
* chore(neuron): bump version to 0.2.0

* refactor(neuron): use named parameters in inputs helpers

This allows hiding the differences in input parameters
between the two backends.

* refactor(neuron): remove obsolete code paths

* fix(neuron): use neuron_config whenever possible

* fix(neuron): use new cache import path

* fix(neuron): neuron config is not stored in config anymore

* fix(nxd): adapt model retrieval to new APIs

* fix(generator): emulate greedy in sampling parameters

When on-device sampling is enabled, we need to emulate the greedy
behaviour using top-k=1, top-p=1, temperature=1.
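
The idea above can be sketched as follows. This is a hypothetical helper, not the actual TGI neuron generator code: when on-device sampling is always active, a greedy request is emulated by forcing the sampler to keep only the most likely token (top_k=1) while neutralizing top_p and temperature.

```python
from dataclasses import dataclass

@dataclass
class SamplingParams:
    top_k: int
    top_p: float
    temperature: float

def emulate_greedy(do_sample: bool, params: SamplingParams) -> SamplingParams:
    """Map a greedy request onto sampling parameters.

    With top_k=1 the sampler can only pick the argmax token, so
    top_p and temperature have no effect and are set to neutral values.
    """
    if not do_sample:
        return SamplingParams(top_k=1, top_p=1.0, temperature=1.0)
    return params
```

With top_k=1 the candidate set is always the single highest-probability token, so the sampled output is identical to greedy decoding regardless of the other two parameters.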

* test(neuron): update models and expectations

* feat(neuron): support on-device sampling

* fix(neuron): adapt entrypoint

* tests(neuron): remove obsolete models

* fix(neuron): adjust test expectations for llama on nxd
2025-06-10 17:56:25 +02:00
client Revert "feat: improve qwen2-vl startup " (#2924) 2025-01-17 12:09:05 -05:00
gaudi Remove useless packages (#3253) 2025-06-03 13:42:29 +02:00
grpc-metadata Upgrading our rustc version. (#2908) 2025-01-15 17:04:03 +01:00
llamacpp Add option to configure prometheus port (#3187) 2025-04-23 20:43:25 +05:30
neuron Bump neuron SDK version (#3260) 2025-06-10 17:56:25 +02:00
trtllm Add option to configure prometheus port (#3187) 2025-04-23 20:43:25 +05:30
v2 Add option to configure prometheus port (#3187) 2025-04-23 20:43:25 +05:30
v3 fp8 compressed tensors w8a8 support for Gaudi backend (#3242) 2025-05-28 14:54:20 +02:00