| Directory | Last commit | Date |
|---|---|---|
| test_bloom_560m | All integration tests back everywhere (too many failed CI). (#2428) | 2024-09-25 06:10:59 +00:00 |
| test_bloom_560m_sharded | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_chat_llama | Lots of improvements (Still 2 allocators) (#2449) | 2024-09-25 06:13:11 +00:00 |
| test_completion_prompts | Prefix test - Different kind of load test to trigger prefix test bugs. (#2490) | 2024-09-25 06:14:07 +00:00 |
| test_flash_awq | Add AWQ quantization inference support (#1019) (#1054) | 2023-09-25 15:31:27 +02:00 |
| test_flash_awq_sharded | Add AWQ quantization inference support (#1019) (#1054) | 2023-09-25 15:31:27 +02:00 |
| test_flash_deepseek_v2 | Lots of improvements (Still 2 allocators) (#2449) | 2024-09-25 06:13:11 +00:00 |
| test_flash_falcon | feat(server): add retry on download (#384) | 2023-05-31 10:57:53 +02:00 |
| test_flash_gemma | All integration tests back everywhere (too many failed CI). (#2428) | 2024-09-25 06:10:59 +00:00 |
| test_flash_gemma2 | Softcapping for gemma2. (#2273) | 2024-09-25 05:31:08 +00:00 |
| test_flash_gemma_gptq | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_flash_gpt2 | Add GPT-2 with flash attention (#1889) | 2024-07-17 05:36:58 +00:00 |
| test_flash_grammar_llama | fix: correctly index into mask when applying grammar (#1618) | 2024-04-25 10:16:16 +03:00 |
| test_flash_llama | Remove the stripping of the prefix space (and any other mangling that tokenizers might do). (#1065) | 2023-09-27 12:13:45 +02:00 |
| test_flash_llama_exl2 | Add support for exl2 quantization | 2024-09-24 03:19:39 +00:00 |
| test_flash_llama_fp8 | Lots of improvements (Still 2 allocators) (#2449) | 2024-09-25 06:13:11 +00:00 |
| test_flash_llama_gptq | GPTQ CI improvements (#2151) | 2024-09-25 05:21:03 +00:00 |
| test_flash_llama_marlin | Add support for Marlin-quantized models | 2024-09-24 03:38:05 +00:00 |
| test_flash_llama_marlin_24 | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |
| test_flash_llama_prefix | Fix truffle (#2514) | 2024-09-25 06:15:35 +00:00 |
| test_flash_llama_prefix_flashdecoding | Adding a test for FD. (#2516) | 2024-09-25 06:17:09 +00:00 |
| test_flash_medusa | Speculative (#1308) | 2024-04-18 12:39:39 +00:00 |
| test_flash_mistral | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| test_flash_mixtral | Add tests for Mixtral (#2520) | 2024-09-25 06:16:08 +00:00 |
| test_flash_neox | fix(server): fix init for flash causal lm (#352) | 2023-05-22 15:05:32 +02:00 |
| test_flash_neox_sharded | fix(server): fix init for flash causal lm (#352) | 2023-05-22 15:05:32 +02:00 |
| test_flash_pali_gemma | All integration tests back everywhere (too many failed CI). (#2428) | 2024-09-25 06:10:59 +00:00 |
| test_flash_phi | All integration tests back everywhere (too many failed CI). (#2428) | 2024-09-25 06:10:59 +00:00 |
| test_flash_qwen2 | feat: Qwen2 (#1608) | 2024-04-25 09:21:22 +03:00 |
| test_flash_santacoder | feat(integration-tests): improve comparison and health checks (#336) | 2023-05-16 20:22:11 +02:00 |
| test_flash_starcoder | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_flash_starcoder2 | Lots of improvements (Still 2 allocators) (#2449) | 2024-09-25 06:13:11 +00:00 |
| test_flash_starcoder_gptq | Further fixes. (#2426) | 2024-09-25 06:09:22 +00:00 |
| test_grammar_llama | fix: correctly index into mask when applying grammar (#1618) | 2024-04-25 10:16:16 +03:00 |
| test_grammar_response_format_llama | Support chat response format (#2046) | 2024-09-24 03:42:29 +00:00 |
| test_idefics | Support different image sizes in prefill in VLMs (#2065) | 2024-09-24 03:43:31 +00:00 |
| test_idefics2 | Lots of improvements (Still 2 allocators) (#2449) | 2024-09-25 06:13:11 +00:00 |
| test_llava_next | All integration tests back everywhere (too many failed CI). (#2428) | 2024-09-25 06:10:59 +00:00 |
| test_lora_mistral | feat: simple mistral lora integration tests (#2180) | 2024-09-25 05:27:40 +00:00 |
| test_mamba | All integration tests back everywhere (too many failed CI). (#2428) | 2024-09-25 06:10:59 +00:00 |
| test_mpt | feat(server): Add Non flash MPT. (#514) | 2023-07-03 13:01:46 +02:00 |
| test_mt0_base | Upgrading the tests to match the current workings. (#2423) | 2024-09-25 06:08:38 +00:00 |
| test_neox | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| test_neox_sharded | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| test_server_gptq_quantized | GPTQ CI improvements (#2151) | 2024-09-25 05:21:03 +00:00 |
| test_t5_sharded | feat(server): support fp16 for t5 (#360) | 2023-05-23 18:16:48 +02:00 |
| test_tools_llama | v2.0.1 | 2024-06-03 15:39:47 +03:00 |