| File | Latest commit | Date |
|---|---|---|
| __init__.py | feat(server): flash santacoder (#153) | 2023-04-03 19:06:42 +02:00 |
| bloom_modeling.py | Consistently take prefix in model constructors (#2191) | 2024-09-25 05:21:34 +00:00 |
| clip.py | Consistently take prefix in model constructors (#2191) | 2024-09-25 05:21:34 +00:00 |
| flash_cohere_modeling.py | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |
| flash_dbrx_modeling.py | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |
| flash_deepseek_v2_modeling.py | fix(server): fix deepseekv2 loading (#2266) | 2024-09-25 05:30:41 +00:00 |
| flash_gemma2_modeling.py | Hotfix: fix of use of unquantized weights in Gemma GQA loading (#2255) | 2024-09-25 05:27:40 +00:00 |
| flash_gemma_modeling.py | Hotfix: fix of use of unquantized weights in Gemma GQA loading (#2255) | 2024-09-25 05:27:40 +00:00 |
| flash_gpt2_modeling.py | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |
| flash_llama_modeling.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| flash_mistral_modeling.py | Consistently take prefix in model constructors (#2191) | 2024-09-25 05:21:34 +00:00 |
| flash_mixtral_modeling.py | Hotfix: fix of use of unquantized weights in Mixtral GQA loading (#2269) | 2024-09-25 05:30:41 +00:00 |
| flash_neox_modeling.py | Hotfix: various GPT-based model fixes (#2256) | 2024-09-25 05:27:40 +00:00 |
| flash_pali_gemma_modeling.py | Enable multiple LoRa adapters (#2010) | 2024-09-24 03:55:04 +00:00 |
| flash_phi_modeling.py | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |
| flash_qwen2_modeling.py | Consistently take prefix in model constructors (#2191) | 2024-09-25 05:21:34 +00:00 |
| flash_rw_modeling.py | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |
| flash_santacoder_modeling.py | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |
| flash_starcoder2_modeling.py | Hotfix: various GPT-based model fixes (#2256) | 2024-09-25 05:27:40 +00:00 |
| idefics2.py | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |
| idefics_config.py | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| idefics_image_processing.py | chore: formatting | 2024-04-18 16:26:00 +03:00 |
| idefics_modeling.py | reenable xpu for tgi (#1939) | 2024-07-17 05:36:58 +00:00 |
| idefics_perceiver.py | Refactor layers. (#1866) | 2024-07-17 05:36:58 +00:00 |
| idefics_processing.py | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| idefics_vision.py | Refactor layers. (#1866) | 2024-07-17 05:36:58 +00:00 |
| llava_next.py | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-09-25 05:20:28 +00:00 |
| mamba_modeling.py | Refactor layers. (#1866) | 2024-07-17 05:36:58 +00:00 |
| mpt_modeling.py | Hotfix: fix MPT after recent refactor (#2257) | 2024-09-25 05:27:40 +00:00 |
| neox_modeling.py | Consistently take prefix in model constructors (#2191) | 2024-09-25 05:21:34 +00:00 |
| opt_modeling.py | fix dbrx & opt model prefix bug (#2201) | 2024-09-25 05:21:34 +00:00 |
| phi_modeling.py | Consistently take prefix in model constructors (#2191) | 2024-09-25 05:21:34 +00:00 |
| siglip.py | Removing some unused code. (#1915) | 2024-07-17 05:36:58 +00:00 |
| t5_modeling.py | Refactor layers. (#1866) | 2024-07-17 05:36:58 +00:00 |
| vlm.py | Pali gemma modeling (#1895) | 2024-07-17 05:36:58 +00:00 |