Mirror of https://github.com/huggingface/text-generation-inference.git
Latest commit:

* Choosing input/total tokens automatically based on available VRAM?
* Update doc.
* Remove generated files.
* Trying to fix non chunking targets.
* Attempt #2
* fix.
* QuantLinear is rocm compatible.
* Much simpler logic after the overhead.
* Updating logic + non flash.
* Revert doc text.
* Simple updates.
* Fix integration mt0 (transformers update).
* basic_tutorials/
* conceptual/
* reference/
* _toctree.yml
* architecture.md
* index.md
* installation_amd.md
* installation_gaudi.md
* installation_inferentia.md
* installation_intel.md
* installation_nvidia.md
* installation.md
* quicktour.md
* supported_models.md
* usage_statistics.md