Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-19 22:02:06 +00:00)
Mostly straightforward changes to existing code:

* Wrap quantizer parameters in a small wrapper to avoid passing around untyped tuples and needing to repack them as a dict.
* Move scratch-space computation to warmup, because we need the maximum input sequence length to avoid allocating huge scratch buffers that OOM.
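The two changes above can be sketched together. This is a minimal illustration, not the actual text-generation-inference code: the class and field names (`QuantizerParams`, `QuantizedLinear`, `warmup`) are hypothetical, and a plain Python list stands in for a GPU scratch buffer.

```python
from dataclasses import dataclass


# Hypothetical wrapper for quantizer parameters. A dataclass replaces an
# untyped tuple, so call sites get named, typed fields instead of having
# to repack positional values into a dict. (Field names are illustrative.)
@dataclass(frozen=True)
class QuantizerParams:
    qweight: list
    scales: list
    group_size: int
    bits: int


class QuantizedLinear:
    """Sketch of a layer that defers scratch allocation to warmup."""

    def __init__(self, params: QuantizerParams):
        self.params = params
        self.scratch = None  # allocated lazily in warmup()

    def warmup(self, max_input_length: int, hidden_size: int) -> None:
        # Size the scratch buffer from the maximum input sequence length
        # known at warmup, rather than a worst-case guess that could OOM.
        self.scratch = [0.0] * (max_input_length * hidden_size)

    def forward(self, tokens: int, hidden_size: int) -> list:
        assert self.scratch is not None, "call warmup() before forward()"
        needed = tokens * hidden_size
        assert needed <= len(self.scratch), "scratch buffer too small"
        return self.scratch[:needed]


params = QuantizerParams(qweight=[1], scales=[0.1], group_size=128, bits=4)
layer = QuantizedLinear(params)
layer.warmup(max_input_length=8, hidden_size=4)
out = layer.forward(tokens=2, hidden_size=4)
```

The design point is that `warmup` runs once, after the server knows its real limits, so every later `forward` call reuses a buffer that is guaranteed large enough without over-allocating.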
Files in this directory:

* basic_tutorials
* conceptual
* _toctree.yml
* index.md
* installation_amd.md
* installation_gaudi.md
* installation_inferentia.md
* installation_nvidia.md
* installation.md
* messages_api.md
* quicktour.md
* supported_models.md