text-generation-inference/backends
Latest commit: 24ee40d143 by drbh, 2025-11-18 12:29:21 -05:00
feat: support max_image_fetch_size to limit (#3339)

Squashed commit messages:
* feat: support max_image_fetch_size to limit
* fix: update model path for test
* fix: adjust model repo id for test again
* fix: apply clippy lints
* fix: clippy fix
* fix: avoid torch build isolation in docker
* fix: bump repo id in flash llama tests
* fix: temporarily avoid problematic repos in tests
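The commit above adds a `max_image_fetch_size` option. The listing does not show the implementation, but the idea of capping the size of remotely fetched images can be sketched as below; `check_image_size` and its signature are illustrative assumptions, not the actual text-generation-inference API.

```rust
// Hedged sketch: reject an image payload whose size exceeds a configured
// `max_image_fetch_size` before it is downloaded or decoded.
// Names and signatures here are hypothetical, not the real TGI code.

/// Returns Err if `content_length` exceeds the configured limit.
/// A `None` limit means no cap is enforced.
fn check_image_size(
    content_length: usize,
    max_image_fetch_size: Option<usize>,
) -> Result<(), String> {
    match max_image_fetch_size {
        Some(max) if content_length > max => Err(format!(
            "image of {content_length} bytes exceeds max_image_fetch_size of {max} bytes"
        )),
        _ => Ok(()),
    }
}

fn main() {
    // A 2 MiB image against a 1 MiB limit is rejected.
    assert!(check_image_size(2 * 1024 * 1024, Some(1024 * 1024)).is_err());
    // A small image under the limit passes.
    assert!(check_image_size(1024, Some(1024 * 1024)).is_ok());
    // With no limit configured, any size passes.
    assert!(check_image_size(2 * 1024 * 1024, None).is_ok());
}
```

Checking a declared size (e.g. an HTTP `Content-Length`) up front is only a first line of defense; a robust implementation would also cap the bytes actually read from the stream.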
client         Revert "feat: improve qwen2-vl startup " (#2924)     2025-01-17 12:09:05 -05:00
gaudi          chore: prepare version 3.3.5 (#3314)                 2025-09-02 15:35:42 +02:00
grpc-metadata  Upgrading our rustc version. (#2908)                 2025-01-15 17:04:03 +01:00
llamacpp       feat: support max_image_fetch_size to limit (#3339)  2025-11-18 12:29:21 -05:00
neuron         chore: prepare version 3.3.5 (#3314)                 2025-09-02 15:35:42 +02:00
trtllm         feat: support max_image_fetch_size to limit (#3339)  2025-11-18 12:29:21 -05:00
v2             feat: support max_image_fetch_size to limit (#3339)  2025-11-18 12:29:21 -05:00
v3             feat: support max_image_fetch_size to limit (#3339)  2025-11-18 12:29:21 -05:00