text-generation-inference/server/text_generation_server/models/custom_modeling
Nicolas Patry 51506aa57a Mllama flash version (#2585)
* Working loading state.

* Preprocessing.

* Working state? (Broke idefics1 temporarily).

* Cleaner condition.

* Fix idefics.

* Updating config, removing TODO

* Mllama

* Upgrade transformers 4.45

* Flashing mllama.

* Starting to get there.

* Working state.

* Integration tests for mllama (cutting to 10 tokens because there seems
to be instability after that, meaning the size of the batch matters).

* Updating model link.

* Earlier assert.

* Fix vlm ?

* remove log.

* Force ignore all images but last.

* Default dtype bfloat16.

* Update integration test after switch to bf16.

* Remove dead code.

* Removed dead code.

* Upgrade the flake to latest transformers/tokenizers

* Move to hf tgi-nix

* Upgrade to 0.5.0
2024-10-27 04:03:57 +00:00
__init__.py feat(server): flash santacoder (#153) 2023-04-03 19:06:42 +02:00
bloom_modeling.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
clip.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
flash_cohere_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_dbrx_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_deepseek_v2_modeling.py Update ROCM libs and improvements (#2579) 2024-10-25 09:01:04 +00:00
flash_gemma2_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_gemma_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_gpt2_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_gptj_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_llama_modeling.py Mllama flash version (#2585) 2024-10-27 04:03:57 +00:00
flash_mistral_modeling.py Update ROCM libs and improvements (#2579) 2024-10-25 09:01:04 +00:00
flash_mixtral_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_neox_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_pali_gemma_modeling.py Mllama flash version (#2585) 2024-10-27 04:03:57 +00:00
flash_phi_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_phi_moe_modeling.py feat: support phi3.5 moe (#2479) 2024-10-25 09:12:03 +00:00
flash_qwen2_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_rw_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_santacoder_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
flash_starcoder2_modeling.py Improve support for GPUs with capability < 8 (#2575) 2024-10-25 09:01:04 +00:00
idefics2.py Lots of improvements (Still 2 allocators) (#2449) 2024-09-25 06:13:11 +00:00
idefics_config.py chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
idefics_image_processing.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
idefics_modeling.py enable HuggingFaceM4/idefics-9b in intel gpu (#2338) 2024-09-25 05:55:39 +00:00
idefics_perceiver.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
idefics_processing.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
idefics_vision.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
llava_next.py Make Gaudi adapt to the tgi 2.3.0 2024-09-26 06:04:55 +00:00
mamba_modeling.py Refactor layers. (#1866) 2024-07-17 05:36:58 +00:00
mllama.py Mllama flash version (#2585) 2024-10-27 04:03:57 +00:00
mpt_modeling.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
neox_modeling.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
opt_modeling.py Fix the prefix for OPT model in opt_modelling.py #2370 (CI RUN) (#2371) 2024-09-25 05:55:39 +00:00
phi_modeling.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
siglip.py Fix: don't apply post layernorm in SiglipVisionTransformer (#2459) 2024-09-25 06:10:59 +00:00
t5_modeling.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
vlm.py feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00