Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-04-19 22:02:06 +00:00
* Putting back the NCCL forced upgrade.
* .
* ...
* Ignoring conda.
* Dropping conda from the build system + torch 2.6
* Cache min.
* Rolling back torch version.
* Reverting the EETQ modification.
* Fix flash attention ?
* Actually stay on flash v1.
* Patching flash v1.
* Torch 2.6, fork of rotary, eetq updated.
* Put back nccl latest (override torch).
* Slightly more reproducible build and not as scary.
7 lines · 163 B · Bash · Executable File
#!/bin/bash

# Refresh the dynamic linker cache so freshly installed shared libraries
# (e.g. CUDA/NCCL) are found; failure here is usually harmless.
ldconfig 2>/dev/null || echo 'unable to refresh ld cache, not a big deal in most cases'

# Activate the virtual environment bundled in the image.
source ./.venv/bin/activate

# Replace this shell with the launcher so it becomes the main process and
# receives signals directly; "$@" forwards all arguments intact.
exec text-generation-launcher "$@"
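The entrypoint above relies on two shell idioms worth noting: quoted `"$@"` forwards every argument as a separate word even when an argument contains spaces (unquoted `$@` would re-split them), and `exec` replaces the wrapper shell with the launched command. A minimal sketch of the argument-forwarding half, using a hypothetical `wrapper` function standing in for the entrypoint:

```shell
#!/bin/bash

# Hypothetical stand-in for the entrypoint: prints each argument it receives
# on its own line, so the forwarding behaviour is visible.
wrapper() {
  # Quoted "$@" preserves each original argument as one word.
  printf '%s\n' "$@"
}

# An argument containing a space stays a single argument.
wrapper --model-id "some model" --port 8080
```

With unquoted `$@`, the call above would split `some model` into two words, so the launched program would see five arguments instead of four.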