text-generation-inference/server/text_generation_server/models
OlivierDehaene a7515b8af1 fix(server): fix fp8 weight loading (#2268)
* fix(server): fix fp8 weight loading
* fixed scales loading
* update snap
* revert default dtype
2024-09-25 05:31:08 +00:00
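The commits above (#2248, #2268) center on loading fp8 weights together with their quantization scales. As a rough illustration of what "fp8 weights plus scales" means in general, here is a minimal per-tensor quantize/dequantize sketch in PyTorch. This is not text-generation-inference's actual implementation (which loads pre-quantized weights and uses fbgemm kernels); `quantize_fp8` and `dequantize_fp8` are hypothetical names:

```python
# Illustrative sketch only: per-tensor fp8 (e4m3) quantization with a
# separate scale factor. NOT text-generation-inference's actual code.
# Requires PyTorch >= 2.1 for the torch.float8_e4m3fn dtype.
import torch


def quantize_fp8(weight: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    # Scale so the largest magnitude maps to fp8 e4m3's max finite value (448).
    finfo = torch.finfo(torch.float8_e4m3fn)
    scale = weight.abs().max().clamp(min=1e-12) / finfo.max
    qweight = (weight / scale).clamp(finfo.min, finfo.max).to(torch.float8_e4m3fn)
    return qweight, scale


def dequantize_fp8(qweight: torch.Tensor, scale: torch.Tensor) -> torch.Tensor:
    # fp8 tensors must be upcast before arithmetic on most backends.
    return qweight.to(torch.float32) * scale


if __name__ == "__main__":
    w = torch.randn(16, 16)
    qw, s = quantize_fp8(w)
    print((w - dequantize_fp8(qw, s)).abs().max())  # small reconstruction error
```

Keeping the scale as a separate tensor (rather than baking it into the weights) is what lets a server load fp8 checkpoints directly and hand both pieces to a fused kernel at runtime.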
Name | Last commit | Date
custom_modeling | Hotfix: fix of use of unquantized weights in Mixtral GQA loading (#2269) | 2024-09-25 05:30:41 +00:00
__init__.py | fix(server): fix fp8 weight loading (#2268) | 2024-09-25 05:31:08 +00:00
bloom.py | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-09-25 05:20:28 +00:00
causal_lm.py | Hotfix: fix MPT after recent refactor (#2257) | 2024-09-25 05:27:40 +00:00
flash_causal_lm.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00
flash_mistral.py | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-09-25 05:20:28 +00:00
galactica.py | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-09-25 05:20:28 +00:00
globals.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00
idefics_causal_lm.py | Enable multiple LoRa adapters (#2010) | 2024-09-24 03:55:04 +00:00
idefics.py | Move quantized weight handling out of the Weights class (#2194) | 2024-09-25 05:27:40 +00:00
mamba.py | Move quantized weight handling out of the Weights class (#2194) | 2024-09-25 05:27:40 +00:00
model.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00
pali_gemma.py | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-09-25 05:20:28 +00:00
seq2seq_lm.py | Move quantized weight handling out of the Weights class (#2194) | 2024-09-25 05:27:40 +00:00
types.py | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00
vlm_causal_lm.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00