Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-21 23:12:07 +00:00)
Mostly straightforward changes to existing code:

* Wrap quantizer parameters in a small wrapper to avoid passing around untyped tuples and needing to repack them as a dict.
* Move scratch space computation to warmup, because we need the maximum input sequence length to avoid allocating huge scratch buffers that OOM.
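The two changes above can be sketched as follows. This is a minimal illustration, not the actual text-generation-inference implementation: the class and field names (`QuantParams`, `QuantLinear`, `warmup`) are assumptions, and a `bytearray` stands in for a device tensor.

```python
from dataclasses import dataclass


# Hypothetical wrapper for quantizer parameters; the real field names in
# text-generation-inference may differ. A typed dataclass replaces the
# untyped tuple that previously had to be repacked as a dict.
@dataclass(frozen=True)
class QuantParams:
    bits: int        # quantization bit width, e.g. 4
    groupsize: int   # number of rows sharing one set of scales/zeros
    desc_act: bool   # whether activation-order quantization is used


class QuantLinear:
    """Sketch of a quantized layer that defers scratch allocation to warmup."""

    def __init__(self, in_features: int, out_features: int, params: QuantParams):
        self.in_features = in_features
        self.out_features = out_features
        self.params = params
        self.scratch = None  # allocated lazily in warmup()

    def warmup(self, max_input_length: int, dtype_size: int = 2) -> int:
        # Size the scratch buffer for the largest input we will ever see,
        # instead of over-allocating a worst-case buffer at load time
        # (which is what caused the OOMs the commit message mentions).
        nbytes = max_input_length * self.out_features * dtype_size
        self.scratch = bytearray(nbytes)  # stand-in for a device tensor
        return nbytes


params = QuantParams(bits=4, groupsize=128, desc_act=False)
layer = QuantLinear(in_features=4096, out_features=4096, params=params)
print(layer.warmup(max_input_length=1024))  # 1024 * 4096 * 2 = 8388608 bytes
```

Passing a single `QuantParams` object through the call chain keeps the parameter set typed end to end, and computing the scratch size inside `warmup` ties the allocation to the server's configured maximum input length rather than a static worst case.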
* consuming_tgi.md
* gated_model_access.md
* launcher.md
* monitoring.md
* non_core_models.md
* preparing_model.md
* safety.md
* train_medusa.md
* using_cli.md
* using_guidance.md
* visual_language_models.md