text-generation-inference/server/text_generation_server/utils
drbh 04e1af94d7
Enable multiple LoRa adapters (#2010)
* feat: first draft load multiple lora

* feat: load weights within layer and refactor lora pass

* fix: refactor and reduce lora math

* feat: baseline impl single request multi lora support

* feat: prefer lorax implementation and port loading logic

* fix: prefer adapter_data and refactors

* feat: prefer lorax's custom punica kernels and add mlp loras

* fix: adjust batch for bgmv

* fix: adjust adapter_segments logic when in batch

* fix: refactor and move changes to v3 proto

* fix: pass model_id for all flash causal lms

* fix: pass model_id for all causal and seq2seq lms

* fix: add model_id to model test

* feat: add lora support to mistral and refactors

* feat: prefer model id in request

* fix: include rust code for adapter id

* feat: bump launcher and add new lora docs

* feat: support base model generation and refactors

* fix: rename doc to retry ci build

* feat: support for vlm models

* fix: add adapter_data param and avoid missing layers

* fix: add adapter_data param to phi and neox

* fix: update all models forwards to include adapter_data

* fix: add model_id to IdeficsCausalLM

* Update lora.md

Fixed a typo

* Update lora.md

Fixing spam image

* fix: add lora kernel to dockerfile, support running without kernels and refactors

* fix: avoid dockerfile conflict

* fix: refactors and adjust flash llama lora logic

* fix: skip llama test due to CI issue (temp)

* fix: skip llama test CI (temp) 2

* fix: revert skips and prefer updated ci token for tests

* fix: refactors and helpful comments

* fix: add noop in TensorParallelAdapterRowLinear too

* fix: refactor and move shard_lora_weights logic

* fix: exit early if no adapter_data

---------

Co-authored-by: Derek <datavistics@gmail.com>
2024-06-25 14:46:27 -04:00
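The commit log above outlines the user-facing workflow this PR enables: adapters are loaded at startup and each request may name one of them, falling back to the base model otherwise ("feat: support base model generation"). A minimal sketch of the request side, assuming a local TGI instance with adapters preloaded via the launcher's LoRA setting; the server URL and adapter name are placeholders:

```python
# Sketch: selecting a LoRA adapter per request, per the lora docs added in
# this PR. "myorg/myadapter" must match an adapter loaded at server startup.
import requests

response = requests.post(
    "http://127.0.0.1:8080/generate",
    json={
        "inputs": "What is deep learning?",
        "parameters": {
            "max_new_tokens": 64,
            # Route this request through a specific loaded adapter;
            # omit adapter_id to generate with the base model instead.
            "adapter_id": "myorg/myadapter",
        },
    },
)
print(response.json()["generated_text"])
```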
merges Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
__init__.py feat(server): Add native support for PEFT Lora models (#762) 2023-08-03 17:22:45 +02:00
adapter.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
chunks.py server: use chunked inputs 2024-06-07 08:09:04 +02:00
convert.py Force weights_only (before fully breaking pickle files anyway). (#1710) 2024-04-05 19:23:57 +02:00
dist.py Removing IPEX_AVAIL. (#2115) 2024-06-25 13:20:57 +02:00
hub.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
import_utils.py Removing IPEX_AVAIL. (#2115) 2024-06-25 13:20:57 +02:00
log.py v1.3.4 2023-12-22 15:46:04 +01:00
logits_process.py Fixing frequency penalty (#1811) 2024-04-30 12:13:23 +02:00
peft.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
segments.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
sgmv.py Enable multiple LoRa adapters (#2010) 2024-06-25 14:46:27 -04:00
speculate.py chore: formatting 2023-12-11 14:49:52 +01:00
tokens.py Use the generation config. (#1808) 2024-04-25 19:41:50 +02:00
watermark.py Fixing watermark. (#851) 2023-08-16 07:17:26 +02:00
weights.py Factor out sharding of packed tensors (#2059) 2024-06-20 09:56:04 +02:00
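Several of the commits above ("fix: adjust adapter_segments logic when in batch", "fix: adjust batch for bgmv") concern the bookkeeping in segments.py: grouping a batch's token slots by adapter so the punica-style kernels in sgmv.py can apply each adapter to a contiguous slice. A rough illustration of that idea (illustrative only, not the file's actual code):

```python
# Sketch: given the adapter index assigned to each token slot in a flattened
# batch, find contiguous runs so a grouped kernel can process each adapter's
# slice in one pass.
from typing import List, Tuple

def find_segments(adapter_indices: List[int]) -> Tuple[List[int], List[int]]:
    """Return segment boundaries and the adapter index owning each segment."""
    boundaries: List[int] = [0]
    segment_adapters: List[int] = []
    prev = None
    for i, adapter in enumerate(adapter_indices):
        if adapter != prev:
            if i != 0:
                boundaries.append(i)  # a new run starts at slot i
            segment_adapters.append(adapter)
            prev = adapter
    boundaries.append(len(adapter_indices))
    return boundaries, segment_adapters

# e.g. tokens for three requests using adapters 0, 0, and 1:
# find_segments([0, 0, 0, 1, 1]) -> ([0, 3, 5], [0, 1])
```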