| Directory | Last commit | Date |
| --- | --- | --- |
| test_bloom_560m | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_bloom_560m_sharded | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_chat_llama | Fix seeded output. (#1949) | 2024-09-24 03:14:53 +00:00 |
| test_completion_prompts | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_flash_awq | Add AWQ quantization inference support (#1019) (#1054) | 2023-09-25 15:31:27 +02:00 |
| test_flash_awq_sharded | Add AWQ quantization inference support (#1019) (#1054) | 2023-09-25 15:31:27 +02:00 |
| test_flash_deepseek_v2 | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_flash_falcon | feat(server): add retry on download (#384) | 2023-05-31 10:57:53 +02:00 |
| test_flash_gemma | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_flash_gemma2 | Softcapping for gemma2. (#2273) | 2024-09-25 05:31:08 +00:00 |
| test_flash_gemma_gptq | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_flash_gpt2 | Add GPT-2 with flash attention (#1889) | 2024-07-17 05:36:58 +00:00 |
| test_flash_grammar_llama | fix: correctly index into mask when applying grammar (#1618) | 2024-04-25 10:16:16 +03:00 |
| test_flash_llama | Remove the stripping of the prefix space (and any other mangling that tokenizers might do). (#1065) | 2023-09-27 12:13:45 +02:00 |
| test_flash_llama_exl2 | Add support for exl2 quantization | 2024-09-24 03:19:39 +00:00 |
| test_flash_llama_fp8 | fix(server): fix fp8 weight loading (#2268) | 2024-09-25 05:31:08 +00:00 |
| test_flash_llama_gptq | GPTQ CI improvements (#2151) | 2024-09-25 05:21:03 +00:00 |
| test_flash_llama_marlin | Add support for Marlin-quantized models | 2024-09-24 03:38:05 +00:00 |
| test_flash_llama_marlin_24 | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |
| test_flash_medusa | Speculative (#1308) | 2024-04-18 12:39:39 +00:00 |
| test_flash_mistral | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| test_flash_neox | fix(server): fix init for flash causal lm (#352) | 2023-05-22 15:05:32 +02:00 |
| test_flash_neox_sharded | fix(server): fix init for flash causal lm (#352) | 2023-05-22 15:05:32 +02:00 |
| test_flash_pali_gemma | Some small fixes for the Torch 2.4.0 update (#2304) | 2024-09-25 05:40:25 +00:00 |
| test_flash_phi | feat: adds phi model (#1442) | 2024-04-22 13:06:38 +03:00 |
| test_flash_qwen2 | feat: Qwen2 (#1608) | 2024-04-25 09:21:22 +03:00 |
| test_flash_santacoder | feat(integration-tests): improve comparison and health checks (#336) | 2023-05-16 20:22:11 +02:00 |
| test_flash_starcoder | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_flash_starcoder2 | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_flash_starcoder_gptq | ROCm AWQ support (#1514) | 2024-04-24 09:21:34 +00:00 |
| test_grammar_llama | fix: correctly index into mask when applying grammar (#1618) | 2024-04-25 10:16:16 +03:00 |
| test_grammar_response_format_llama | Support chat response format (#2046) | 2024-09-24 03:42:29 +00:00 |
| test_idefics | Support different image sizes in prefill in VLMs (#2065) | 2024-09-24 03:43:31 +00:00 |
| test_idefics2 | Fixing idefics on g6 tests. (#2306) | 2024-09-25 05:40:25 +00:00 |
| test_llava_next | Idefics2. (#1756) | 2024-06-10 09:29:08 +03:00 |
| test_lora_mistral | feat: simple mistral lora integration tests (#2180) | 2024-09-25 05:27:40 +00:00 |
| test_mamba | fix: adjust test snapshots and small refactors (#2323) | 2024-09-25 05:50:17 +00:00 |
| test_mpt | feat(server): Add Non flash MPT. (#514) | 2023-07-03 13:01:46 +02:00 |
| test_mt0_base | Adding Llava-Next (Llava 1.6) with full support. (#1709) | 2024-04-25 14:30:55 +00:00 |
| test_neox | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| test_neox_sharded | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| test_server_gptq_quantized | GPTQ CI improvements (#2151) | 2024-09-25 05:21:03 +00:00 |
| test_t5_sharded | feat(server): support fp16 for t5 (#360) | 2023-05-23 18:16:48 +02:00 |
| test_tools_llama | v2.0.1 | 2024-06-03 15:39:47 +03:00 |