File                          | Last commit                                                                     | Date
------------------------------|---------------------------------------------------------------------------------|---------------------------
__init__.py                   | feat(server): flash santacoder (#153)                                           | 2023-04-03 19:06:42 +02:00
bloom_modeling.py             | Consistently take prefix in model constructors (#2191)                          | 2024-07-05 16:07:48 +02:00
clip.py                       | Consistently take prefix in model constructors (#2191)                          | 2024-07-05 16:07:48 +02:00
flash_cohere_modeling.py      | Improve the handling of quantized weights (#2250)                               | 2024-07-19 09:37:39 +02:00
flash_dbrx_modeling.py        | Improve the handling of quantized weights (#2250)                               | 2024-07-19 09:37:39 +02:00
flash_deepseek_v2_modeling.py | fix(server): fix deepseekv2 loading (#2266)                                     | 2024-07-21 18:48:04 +02:00
flash_gemma2_modeling.py      | Softcapping for gemma2. (#2273)                                                 | 2024-07-22 18:27:10 +02:00
flash_gemma_modeling.py       | Hotfix: fix of use of unquantized weights in Gemma GQA loading (#2255)          | 2024-07-19 12:55:59 +02:00
flash_gpt2_modeling.py        | Improve the handling of quantized weights (#2250)                               | 2024-07-19 09:37:39 +02:00
flash_llama_modeling.py       | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)             | 2024-07-20 19:02:04 +02:00
flash_mistral_modeling.py     | [WIP] Add support for Mistral-Nemo by supporting head_dim through config (#2254)| 2024-07-23 15:00:07 +02:00
flash_mixtral_modeling.py     | Hotfix: fix of use of unquantized weights in Mixtral GQA loading (#2269)        | 2024-07-22 11:31:00 +02:00
flash_neox_modeling.py        | Hotfix: various GPT-based model fixes (#2256)                                   | 2024-07-19 14:42:19 +02:00
flash_pali_gemma_modeling.py  | Enable multiple LoRa adapters (#2010)                                           | 2024-06-25 14:46:27 -04:00
flash_phi_modeling.py         | Improve the handling of quantized weights (#2250)                               | 2024-07-19 09:37:39 +02:00
flash_qwen2_modeling.py       | Consistently take prefix in model constructors (#2191)                          | 2024-07-05 16:07:48 +02:00
flash_rw_modeling.py          | Improve the handling of quantized weights (#2250)                               | 2024-07-19 09:37:39 +02:00
flash_santacoder_modeling.py  | Improve the handling of quantized weights (#2250)                               | 2024-07-19 09:37:39 +02:00
flash_starcoder2_modeling.py  | Hotfix: various GPT-based model fixes (#2256)                                   | 2024-07-19 14:42:19 +02:00
idefics2.py                   | Improve the handling of quantized weights (#2250)                               | 2024-07-19 09:37:39 +02:00
idefics_config.py             | chore: add pre-commit (#1569)                                                   | 2024-02-16 11:58:58 +01:00
idefics_image_processing.py   | chore: formatting                                                               | 2023-12-11 14:49:52 +01:00
idefics_modeling.py           | reenable xpu for tgi (#1939)                                                    | 2024-05-23 14:11:08 +02:00
idefics_perceiver.py          | Refactor layers. (#1866)                                                        | 2024-05-13 12:44:30 +02:00
idefics_processing.py         | chore: add pre-commit (#1569)                                                   | 2024-02-16 11:58:58 +01:00
idefics_vision.py             | Refactor layers. (#1866)                                                        | 2024-05-13 12:44:30 +02:00
llava_next.py                 | Refactor dead code - Removing all flash_xxx.py files. (#2166)                   | 2024-07-05 10:29:56 +02:00
mamba_modeling.py             | Refactor layers. (#1866)                                                        | 2024-05-13 12:44:30 +02:00
mpt_modeling.py               | Hotfix: fix MPT after recent refactor (#2257)                                   | 2024-07-19 14:42:35 +02:00
neox_modeling.py              | Consistently take prefix in model constructors (#2191)                          | 2024-07-05 16:07:48 +02:00
opt_modeling.py               | fix dbrx & opt model prefix bug (#2201)                                         | 2024-07-08 09:01:14 +02:00
phi_modeling.py               | Consistently take prefix in model constructors (#2191)                          | 2024-07-05 16:07:48 +02:00
siglip.py                     | Removing some unused code. (#1915)                                              | 2024-05-17 11:35:49 +02:00
t5_modeling.py                | Refactor layers. (#1866)                                                        | 2024-05-13 12:44:30 +02:00
vlm.py                        | Pali gemma modeling (#1895)                                                     | 2024-05-16 06:58:47 +02:00