text-generation-inference/backends/trtllm/cmake
Funtowicz Morgan 856709d5c3
[Backend] Bump TRTLLM to v.0.17.0 (#2991)
* backend(trtllm): bump TRTLLM to v.0.17.0

* backend(trtllm): forgot to bump dockerfile

* backend(trtllm): use arg instead of env

* backend(trtllm): use correct library reference decoder_attention_src

* backend(trtllm): link against decoder_attention_{0|1}

* backend(trtllm): build against gcc-14 with cuda12.8

* backend(trtllm): use return value optimization flag as an error if available

* backend(trtllm): make sure we escalate all warnings as errors on the backend impl in debug mode

* backend(trtllm): link against CUDA 12.8
2025-02-06 16:45:03 +01:00
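
The commit bullets above touch a few CMake mechanics: linking the backend against the split `decoder_attention_0`/`decoder_attention_1` libraries, and promoting warnings to errors in debug builds. A minimal, hypothetical sketch of what such a `trtllm.cmake` fragment could look like (target and library names here are assumptions for illustration, not the repository's actual configuration):

```cmake
# Hypothetical sketch: link the TRTLLM backend target against the split
# decoder attention libraries (names assumed from the commit messages).
target_link_libraries(tgi_trtllm_backend_impl PRIVATE
    decoder_attention_0
    decoder_attention_1
)

# Escalate all warnings to errors for the backend implementation,
# but only in debug builds, so release builds stay permissive.
if(CMAKE_BUILD_TYPE STREQUAL "Debug")
    target_compile_options(tgi_trtllm_backend_impl PRIVATE -Wall -Wextra -Werror)
endif()
```

The per-configuration guard keeps `-Werror` out of release builds, where new compiler versions (such as the gcc-14 bump mentioned above) could otherwise break the build on freshly introduced warnings.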
utils Rebase TRT-llm (#2331) 2024-07-31 10:33:10 +02:00
json.cmake TensorRT-LLM backend bump to latest version + misc fixes (#2791) 2024-12-13 15:50:59 +01:00
spdlog.cmake Give TensorRT-LLM a proper CI/CD 😍 (#2886) 2025-01-21 10:19:16 +01:00
trtllm.cmake [Backend] Bump TRTLLM to v.0.17.0 (#2991) 2025-02-06 16:45:03 +01:00