text-generation-inference/server/tests/models
Latest commit: 24ee40d143 by drbh (2025-11-18 12:29:21 -05:00)
feat: support max_image_fetch_size to limit (#3339)

* feat: support max_image_fetch_size to limit
* fix: update model path for test
* fix: adjust model repo id for test again
* fix: apply clippy lints
* fix: clippy fix
* fix: avoid torch build isolation in docker
* fix: bump repo id in flash llama tests
* fix: temporarily avoid problematic repos in tests
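The `max_image_fetch_size` feature above caps how many bytes the server will download for a remote image. The actual change lives in this repository's Rust server code; the sketch below is only an illustrative Python analogue of the general technique (stream the body and abort once the cap is exceeded), with all names (`read_capped`, `ImageTooLargeError`) invented for this example.

```python
import io


class ImageTooLargeError(Exception):
    """Raised when a fetched image exceeds the configured byte limit."""


def read_capped(stream, max_image_fetch_size, chunk_size=8192):
    """Read `stream` in chunks, failing fast once more than
    `max_image_fetch_size` bytes have arrived, instead of buffering
    an arbitrarily large response in memory first."""
    buf = bytearray()
    while True:
        chunk = stream.read(chunk_size)
        if not chunk:
            return bytes(buf)
        buf.extend(chunk)
        if len(buf) > max_image_fetch_size:
            raise ImageTooLargeError(
                f"image exceeds max_image_fetch_size={max_image_fetch_size} bytes"
            )


# A 100-byte "image" passes under a 1 KiB cap; a 2 KiB one is rejected.
data = read_capped(io.BytesIO(b"x" * 100), max_image_fetch_size=1024)
```

In a real fetch path the stream would be an HTTP response body, and a `Content-Length` header, when present, lets the server reject oversized images before reading any bytes at all.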
test_bloom.py        Fix tokenization yi (#2507)                                    2024-09-11 22:41:56 +02:00
test_causal_lm.py    Fix tokenization yi (#2507)                                    2024-09-11 22:41:56 +02:00
test_model.py        feat: support max_image_fetch_size to limit (#3339)            2025-11-18 12:29:21 -05:00
test_santacoder.py   Refactor dead code - Removing all flash_xxx.py files. (#2166)  2024-07-05 10:29:56 +02:00
test_seq2seq_lm.py   Fix tokenization yi (#2507)                                    2024-09-11 22:41:56 +02:00