text-generation-inference/integration-tests/models/__snapshots__
drbh c1cf36c0dc
Improve qwen vl impl (#2943)
* feat: refactor model, improve startup and re-enable tests

* fix: improve multimodal rotary embed caching

* fix: limit vision flop calc to qwen2 vl models and update config typing

* fix: include clippy lint

* feat: refactor position ids in warmup and bump tests

* fix: prefer default dtype

* fix: enable all cuda graphs and bump snapshots

* fix: adjust rotary init path

* fix: simplify get position ids and remove unused vision config

* fix: update position ids so first dim is batch, simplify rotary and bump vlm default token limit

* fix: improve position id init during cuda warmup for mrope and simplify rotary forward

* fix: check existence before accessing rope type in cuda warmup

* fix: check key before access

* fix: improve mrope check in cuda graph warmup

* fix: remove check for default rope type

* fix: add more test and improve model generation

* fix: improve and simplify get_cos_sin, refactors and cleanup of get_position_ids

* fix: adjust signatures with types
2025-02-04 12:44:18 -05:00
test_bloom_560m All integration tests back everywhere (too many failed CI). (#2428) 2024-08-16 21:19:46 +02:00
test_bloom_560m_sharded fix: adjust test snapshots and small refactors (#2323) 2024-07-29 11:38:38 -04:00
test_chat_llama Lots of improvements (Still 2 allocators) (#2449) 2024-08-29 16:29:01 +02:00
test_completion_prompts Stream options. (#2533) 2024-09-19 20:50:37 +02:00
test_compressed_tensors_w8a8_int Basic flashinfer 0.2 support (#2862) 2025-01-09 16:25:00 +01:00
test_compressed_tensors_w8a8_int_dynamic_weight Improve qwen vl impl (#2943) 2025-02-04 12:44:18 -05:00
test_compressed_tensors_w8an_fp Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_compressed_tensors_wna16_int Basic flashinfer 0.2 support (#2862) 2025-01-09 16:25:00 +01:00
test_compressed_tensors_wna16_int_24 Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_continue_final_message Support continue final message (#2733) 2024-11-27 19:13:30 -05:00
test_flash_awq Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_awq_sharded Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_deepseek_v2 Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_falcon Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_gemma Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_gemma2 Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_gemma_gptq Basic flashinfer 0.2 support (#2862) 2025-01-09 16:25:00 +01:00
test_flash_gpt2 Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_grammar_llama Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_llama Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_llama_exl2 Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_llama_fp8 Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_llama_fp8_kv_cache Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_llama_gptq Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_llama_marlin Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_llama_marlin_24 Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_llama_prefix Fix truffle (#2514) 2024-09-11 22:45:19 +02:00
test_flash_llama_prefix_flashdecoding Adding a test for FD. (#2516) 2024-09-16 17:00:54 +02:00
test_flash_medusa Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_mistral Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_mixtral Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_mixtral_awq Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_mixtral_gptq Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_neox Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_neox_sharded Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_pali_gemma All integration tests back everywhere (too many failed CI). (#2428) 2024-08-16 21:19:46 +02:00
test_flash_pali_gemma2 Enable paligemma2 (#2807) 2024-12-06 14:41:49 -05:00
test_flash_phi Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_phi35_moe Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_qwen2 Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_qwen2_vl Improve qwen vl impl (#2943) 2025-02-04 12:44:18 -05:00
test_flash_santacoder Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_starcoder Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_flash_starcoder2 Basic flashinfer 0.2 support (#2862) 2025-01-09 16:25:00 +01:00
test_flash_starcoder2_lora feat: improve star coder to support multi lora layers (#2883) 2025-01-16 16:23:55 -05:00
test_flash_starcoder_gptq Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_grammar_llama Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_grammar_response_format_llama Move JSON grammar -> regex grammar conversion to the router (#2772) 2024-11-25 18:47:34 +01:00
test_idefics Support different image sizes in prefill in VLMs (#2065) 2024-06-17 10:49:41 +02:00
test_idefics2 Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_idefics3 Improve vlm support (add idefics3 support) (#2437) 2025-01-09 10:35:32 -05:00
test_llava_next Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_lora_mistral feat: simple mistral lora integration tests (#2180) 2024-07-15 09:16:15 -04:00
test_mamba All integration tests back everywhere (too many failed CI). (#2428) 2024-08-16 21:19:46 +02:00
test_mllama chore: prepare 2.4.1 release (#2773) 2024-11-22 17:26:15 +00:00
test_mpt feat(server): Add Non flash MPT. (#514) 2023-07-03 13:01:46 +02:00
test_mt0_base Fixing mt0 test. (#2692) 2024-10-25 09:46:39 +02:00
test_neox Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_neox_sharded Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_server_gptq_quantized Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00
test_smolvlm Improve vlm support (add idefics3 support) (#2437) 2025-01-09 10:35:32 -05:00
test_t5_sharded feat(server): support fp16 for t5 (#360) 2023-05-23 18:16:48 +02:00
test_tools_llama Move JSON grammar -> regex grammar conversion to the router (#2772) 2024-11-25 18:47:34 +01:00
test.py Auto max prefill (#2797) 2024-12-06 05:52:00 +01:00