Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-05-23 02:32:09 +00:00)
* Putting back the NCCL forced upgrade.
* .
* ...
* Ignoring conda.
* Dropping conda from the build system + torch 2.6.
* Cache min.
* Rolling back torch version.
* Reverting the EETQ modification.
* Fix flash attention?
* Actually stay on flash v1.
* Patching flash v1.
* Torch 2.6, fork of rotary, eetq updated.
* Put back nccl latest (override torch).
* Slightly more reproducible build and not as scary.
autodocs.yaml
build_documentation.yaml
build_pr_documentation.yaml
build.yaml
ci_build.yaml
client-tests.yaml
integration_tests.yaml
load_test.yaml
nix_cache.yaml
nix_tests.yaml
stale.yaml
tests.yaml
trufflehog.yaml
upload_pr_documentation.yaml