The image is smaller and, more importantly, much faster to decompress.
L4 g6.2xlarge (base): 1m53.837s (1m53.837s including waiting); image
size: 5650343354 bytes (~5.65 GB).
L4 g6.2xlarge (zstd): 1m25.92s (1m25.92s including waiting); image
size: 4581485004 bytes (~4.58 GB).
* Putting back the NCCL forced upgrade.
* .
* ...
* Ignoring conda.
* Dropping conda from the build system + torch 2.6
* Cache min.
* Rolling back torch version.
* Reverting the EETQ modification.
* Fix flash attention?
* Actually stay on flash v1.
* Patching flash v1.
* Torch 2.6, fork of rotary, eetq updated.
* Put back nccl latest (override torch).
* Slightly more reproducible build and not as scary.
* Using the "lockfile".
* Revert dummy modifications.
* Lock on python 3.11
* Another attempt.
* ..
* Bad cache hits.
* The good old monkey.
* How in the world...
* We need the launcher still.
* .
* ..
* Attempt #42
* Don't break all other builds.
* Mode max.
* Applying to other builds.
* hotfix: fix trtllm CI build on release
* fix: test release.
* fix: test release.
* fix: test release. env not recognized https://github.com/actions/runner/issues/1661
* fix: test release. Works.
* backend(trtllm): attempt to remove AWS S3 flaky cache for sccache
* backend(trtllm): what if we expose ENV instead of inline?
* backend(trtllm): and with the right env var for gha sccache
* backend(trtllm): relax the way to detect sccache
* backend(trtllm): define sccache manually
* backend(trtllm): ok let's try to define the launchers in build.rs when RUSTC_WRAPPER is present (see the sketch below)
* backend(trtllm): export env variable in run, maybe?
* backend(trtllm): Cache mode max to cache intermediate layers
* backend(trtllm): inject ompi_version build arg in dependent step
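
A sketch of the build.rs launcher idea referenced above, assuming the `cmake` crate and an illustrative source path (this is not the repository's actual build script): when cargo itself is invoked with a compiler wrapper such as sccache, the same wrapper is forwarded to CMake so the C++/CUDA objects get cached too.

```rust
// build.rs — minimal sketch: forward RUSTC_WRAPPER to the CMake build.
use std::env;

fn main() {
    let mut config = cmake::Config::new("csrc"); // path is illustrative

    // When CI sets RUSTC_WRAPPER=sccache, reuse the same wrapper as the
    // C/C++/CUDA compiler launcher; plain local builds stay untouched.
    if let Ok(wrapper) = env::var("RUSTC_WRAPPER") {
        config.define("CMAKE_C_COMPILER_LAUNCHER", &wrapper);
        config.define("CMAKE_CXX_COMPILER_LAUNCHER", &wrapper);
        config.define("CMAKE_CUDA_COMPILER_LAUNCHER", &wrapper);
    }

    config.build();
}
```

Keying the launchers off `RUSTC_WRAPPER` instead of hard-coding sccache is one way to keep the detection relaxed: whatever wrapper the environment provides gets honored.
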
* misc(cmake): update dependencies
* feat(hardware): enable new hardware.hpp and unittests
* test(ctest): enable address sanitizer
* feat(backend): initial rewrite of the backend for simplicity
* feat(backend): remove all the logs from hardware.hpp
* feat(backend): added some logging
* feat(backend): enable compiler warning when RVO is not applied
* feat(backend): missing return statement
* feat(backend): introduce backend_workspace_t to store precomputed information from the engine folder
* feat(backend): delete previous backend impl
* feat(backend): more impl
* feat(backend): use latest trtllm main version to have g++ >= 13 compatibility
* feat(backend): allow overriding which Python to use
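
A possible shape for the Python override, as a sketch only (the environment variable name here is an assumption, not the real interface): hint CMake's standard `FindPython3` module at the chosen interpreter.

```rust
// build.rs — sketch: let the caller pick the Python used by the CMake build
// instead of whatever `python3` happens to be first on PATH.
use std::env;

fn main() {
    let mut config = cmake::Config::new("csrc"); // path is illustrative

    if let Ok(python) = env::var("PYTHON_EXECUTABLE") { // hypothetical variable name
        // Python3_EXECUTABLE is the standard FindPython3 hint in CMake.
        config.define("Python3_EXECUTABLE", &python);
    }

    config.build();
}
```
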
* feat(backend): fix backend_exception_t -> backend_error_t naming
* feat(backend): impl missing generation_step_t as return value of pull_tokens
* feat(backend): make backend_workspace_t::engines_folder constexpr
* feat(backend): fix main.rs retrieving the tokenizer
* feat(backend): add guard to multiple header definitions
* test(backend): add more unittest
* feat(backend): remove constexpr from par
* feat(backend): remove constexpr
* test(backend): more test coverage
* chore(trtllm): update dependency towards 0.15.0
* effectively cancel the request on the executor
* feat(backend): fix moving backend when pulling
* feat(backend): make sure we can easily cancel request on the executor
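
One way to make cancellation hard to forget is to tie it to a guard's lifetime; the following is a minimal sketch of that pattern under stated assumptions (the `Executor` here is a stand-in for the real TensorRT-LLM executor behind the FFI, and the backend may well do this differently):

```rust
// Sketch: dropping the guard (e.g. the client disconnected and the response
// stream was dropped) cancels the in-flight request on the executor.
struct Executor;

impl Executor {
    fn cancel(&self, request_id: u64) {
        println!("cancelling request {request_id} on the executor");
    }
}

struct RequestGuard<'a> {
    executor: &'a Executor,
    request_id: u64,
    finished: bool, // flipped to true once generation completes normally
}

impl Drop for RequestGuard<'_> {
    fn drop(&mut self) {
        if !self.finished {
            self.executor.cancel(self.request_id);
        }
    }
}

fn main() {
    let executor = Executor;
    {
        let _guard = RequestGuard {
            executor: &executor,
            request_id: 42,
            finished: false,
        };
        // ... tokens would be streamed here; the scope ends early,
        // simulating a client disconnect ...
    } // prints: cancelling request 42 on the executor
}
```

Whatever the concrete mechanism, the point of the commit stands: a dropped or failed request must not keep running on the executor.
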
* feat(backend): fix missing "0" field access
* misc(backend): fix reborrowing Pin<&mut T> as described in the doc https://doc.rust-lang.org/stable/std/pin/struct.Pin.html#method.as_mut
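
Since the commit links the relevant documentation, here is a minimal illustration of the reborrow it describes (with `Backend`/`pull_tokens` as stand-ins for the real cxx-generated types):

```rust
use std::pin::Pin;

struct Backend {
    pulled: u32,
}

impl Backend {
    // Stand-in for a cxx-generated method: it takes `self: Pin<&mut Self>`
    // by value, so a plain call moves the pin out of the caller.
    fn pull_tokens(self: Pin<&mut Self>) -> u32 {
        let this = self.get_mut(); // safe here because Backend is Unpin
        this.pulled += 1;
        this.pulled
    }
}

fn main() {
    let mut backend = Backend { pulled: 0 };
    let mut pinned: Pin<&mut Backend> = Pin::new(&mut backend);
    for _ in 0..3 {
        // Pin<&mut T> is not Copy: without as_mut() the first call would
        // consume `pinned` and the loop would not compile. as_mut() reborrows
        // the pin, exactly as the std::pin documentation above describes.
        let n = pinned.as_mut().pull_tokens();
        println!("pulled {n} time(s)");
    }
}
```
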
* chore: Add doc and CI for TRTLLM (#2799)
* doc: Formatting
* misc(backend): indent
---------
Co-authored-by: Hugo Larcher <hugo.larcher@huggingface.co>
* Move to moe-kernels package and switch to common MoE layer
This change introduces the new `moe-kernels` package:
- Add `moe-kernels` as a dependency.
- Introduce a `SparseMoELayer` module that can be used by MoE
models.
- Port over Mixtral and Deepseek.
* Make `cargo check` pass
* Update runner
* All integration tests back everywhere (too many CI failures).
* Upgrade integration tests after 12.4
* Attempt to remove the specified compute cap.
* Common arch list.
* Punica uses raw ASM, which is apparently not valid on compute capability 9.0.
* Add more representative Llama GPTQ test
The Llama GPTQ test is updated to use a model with the commonly-used
quantizer config format and activation sorting. The old test is
kept around (but renamed) since it tests the format produced by
`text-generation-server quantize`.
* Add support for manually triggering a release build
* Add pytest release marker
Annotate a test with `@pytest.mark.release` and it only gets run
with `pytest integration-tests --release`.
* Mark many models as `release` to speed up CI
* Removing IPEX_AVAIL.
Chose to unify CPU and XPU under `ipex`. Most of the code is identical
except in a very few spots, chiefly the kv-cache layout and the
flash_xxx.py files.
Since those files should be removed soon and factored away, we should
not need them.
* Forgot a few places.
* Unrelated change.
* Fixing HF_TOKEN.
* HF_TOKEN
* New runner. Manual squash.
* Network host.
* Put back trufflehog with proper extension.
* No network host?
* Moving buildx install after tailscale?
* 1.79