text-generation-inference/server/text_generation_server/models
Latest commit: drbh 7ebee37641 fix: refactor adapter weight loading and mapping (#2193) (2024-09-25 05:39:58 +00:00)

* fix: refactor adapter weight loading and mapping
* feat: enable lora load from directory
* fix: adjust launcher for local lora adapters
* feat: improve weight loading and add tests
* fix: improve logging and rebase syntax issue
* fix: improve adapter merge comments and remove unused conditional
* fix: improve get_model_with_lora_adapters naming
* fix: comment typo
| Name | Last commit message | Last commit date |
|---|---|---|
| custom_modeling | fix of use of unquantized weights in cohere GQA loading, also enable … (#2291) | 2024-09-25 05:39:58 +00:00 |
| __init__.py | fix: refactor adapter weight loading and mapping (#2193) | 2024-09-25 05:39:58 +00:00 |
| bloom.py | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-09-25 05:20:28 +00:00 |
| causal_lm.py | Hotfix: fix MPT after recent refactor (#2257) | 2024-09-25 05:27:40 +00:00 |
| flash_causal_lm.py | fix: refactor adapter weight loading and mapping (#2193) | 2024-09-25 05:39:58 +00:00 |
| galactica.py | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-09-25 05:20:28 +00:00 |
| globals.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| idefics_causal_lm.py | Enable multiple LoRa adapters (#2010) | 2024-09-24 03:55:04 +00:00 |
| idefics.py | Move quantized weight handling out of the Weights class (#2194) | 2024-09-25 05:27:40 +00:00 |
| mamba.py | Move quantized weight handling out of the Weights class (#2194) | 2024-09-25 05:27:40 +00:00 |
| model.py | fix: refactor adapter weight loading and mapping (#2193) | 2024-09-25 05:39:58 +00:00 |
| pali_gemma.py | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-09-25 05:20:28 +00:00 |
| seq2seq_lm.py | Move quantized weight handling out of the Weights class (#2194) | 2024-09-25 05:27:40 +00:00 |
| types.py | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| vlm_causal_lm.py | fix crash in multi-modal (#2245) | 2024-09-25 05:39:58 +00:00 |