text-generation-inference/server/text_generation_server/models
Latest commit: 4e4207224e by icyboy™, 2024-07-22 11:31:00 +02:00
Hotfix: fix use of unquantized weights in Mixtral GQA loading (#2269)

* Update idefics_causal_lm.py to fix syntax issues
* Fix dbrx & opt model prefix bug
* Hotfix: fix use of unquantized weights in Mixtral GQA loading
File                  Last commit                                                            Date
custom_modeling/      Hotfix: fix use of unquantized weights in Mixtral GQA loading (#2269)  2024-07-22 11:31:00 +02:00
__init__.py           feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)    2024-07-20 19:02:04 +02:00
bloom.py              Refactor dead code - Removing all flash_xxx.py files. (#2166)          2024-07-05 10:29:56 +02:00
causal_lm.py          Hotfix: fix MPT after recent refactor (#2257)                          2024-07-19 14:42:35 +02:00
flash_causal_lm.py    feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)    2024-07-20 19:02:04 +02:00
flash_mistral.py      Refactor dead code - Removing all flash_xxx.py files. (#2166)          2024-07-05 10:29:56 +02:00
galactica.py          Refactor dead code - Removing all flash_xxx.py files. (#2166)          2024-07-05 10:29:56 +02:00
globals.py            feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)    2024-07-20 19:02:04 +02:00
idefics_causal_lm.py  Enable multiple LoRa adapters (#2010)                                  2024-06-25 14:46:27 -04:00
idefics.py            Move quantized weight handling out of the Weights class (#2194)        2024-07-09 20:04:03 +02:00
mamba.py              Move quantized weight handling out of the Weights class (#2194)        2024-07-09 20:04:03 +02:00
model.py              feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)    2024-07-20 19:02:04 +02:00
pali_gemma.py         Refactor dead code - Removing all flash_xxx.py files. (#2166)          2024-07-05 10:29:56 +02:00
seq2seq_lm.py         Move quantized weight handling out of the Weights class (#2194)        2024-07-09 20:04:03 +02:00
types.py              chore: add pre-commit (#1569)                                          2024-02-16 11:58:58 +01:00
vlm_causal_lm.py      feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)    2024-07-20 19:02:04 +02:00
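The hotfix at the head of this listing describes a GQA loader that mishandled plain (unquantized) weights. Below is a minimal sketch of that general pattern, assuming hypothetical names (UnquantizedWeight, load_gqa_weight); it is not TGI's actual code, only an illustration of branching on the weight's quantization state before casting.

# Hypothetical sketch of the failure mode named in the commit: a GQA
# loader that assumed quantized weight wrappers and broke on plain
# tensors. All names here are illustrative, not TGI's actual API.
from dataclasses import dataclass

import torch


@dataclass
class UnquantizedWeight:
    """Plain-tensor wrapper; quantized formats would also carry scales/zeros."""
    weight: torch.Tensor


def load_gqa_weight(weight, dtype: torch.dtype, device: torch.device):
    """Normalize a fused Q/K/V weight before building the attention layer.

    Quantized wrappers are returned as-is (their kernels manage dtype);
    plain tensors must be cast and moved explicitly, which is the step a
    quantized-only code path would skip.
    """
    if isinstance(weight, UnquantizedWeight):
        weight.weight = weight.weight.to(dtype=dtype, device=device)
    return weight


if __name__ == "__main__":
    w = UnquantizedWeight(torch.randn(8, 8, dtype=torch.float32))
    out = load_gqa_weight(w, dtype=torch.float16, device=torch.device("cpu"))
    print(out.weight.dtype)  # torch.float16

The point of the branch is that unquantized weights are the only case where the loader itself owns the dtype/device conversion; a code path written only for quantized formats would silently leave plain tensors in their on-disk dtype.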