text-generation-inference/server/text_generation_server
Nicolas Patry 51506aa57a Mllama flash version (#2585)
* Working loading state.

* Preprocessing.

* Working state? (Broke idefics1 temporarily).

* Cleaner condition.

* Fix idefics.

* Updating config, removing TODO

* Mllama

* Upgrade transformers to 4.45

* Flashing mllama.

* Starting to get there.

* Working state.

* Integration tests for mllama (cutting to 10 tokens because there seems to be instability afterwards, meaning the size of the batch matters).

* Updating model link.

* Earlier assert.

* Fix vlm?

* Remove log.

* Force ignore all images but the last.

* Default dtype bfloat16.

* Update integration test after switch to bf16.

* Remove dead code.

* Removed dead code.

* Upgrade the flake to the latest transformers/tokenizers

* Move to hf tgi-nix

* Upgrade to 0.5.0
2024-10-27 04:03:57 +00:00
adapters feat: add ruff and resolve issue (#2262) 2024-09-25 05:46:24 +00:00
layers MoE Marlin: support desc_act for groupsize != -1 (#2590) 2024-10-25 09:12:03 +00:00
models Mllama flash version (#2585) 2024-10-27 04:03:57 +00:00
pb chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
utils Micro cleanup. (#2555) 2024-10-25 08:53:47 +00:00
__init__.py feat(clients): Python client (#103) 2023-03-07 18:52:22 +01:00
cache.py fix(server): decrease memory fragmentation (#557) 2023-07-06 14:28:33 +02:00
cli.py Pass the max_batch_total_tokens to causal_lm 2024-10-23 08:28:26 +00:00
habana_quantization_env.py Remove all references to habana_quantization_toolkit for 1.18 (#229) 2024-10-18 10:59:59 +02:00
interceptor.py Make Gaudi adapt to the tgi 2.3.0 2024-09-26 06:04:55 +00:00
server.py Mllama flash version (#2585) 2024-10-27 04:03:57 +00:00
tgi_service.py Make Gaudi adapt to the tgi 2.3.0 2024-09-26 06:04:55 +00:00
tracing.py Add OTLP Service Name Environment Variable (#2076) 2024-09-24 03:51:26 +00:00