text-generation-inference/server/text_generation_server/models
Latest commit: 82c24f7420 by Nicolas Patry (2024-12-10 19:37:09 +01:00)
Using both value from config as they might not be correct. (#2817)

* Using both value from config as they might not be correct.
* Fixing max_position_embeddings for falcon.
* Simple attempt to fix the healthcheck block allocation.
* Much simpler solution.
* Default value for Backend start_health
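The headline change in #2817 is about not trusting any single config field for the model's context length, since a config value can be missing or wrong (falcon's max_position_embeddings being the cited case). Below is a minimal sketch of that idea only, not TGI's actual code; the key names and the reconciliation rule are illustrative assumptions:

```python
# Sketch: a model config can report its context length under more than one
# key, and any single key may be absent or wrong. Reading every candidate
# and reconciling them is safer than trusting one. The keys below are
# illustrative, not TGI's exact lookup order.

def resolve_max_position_embeddings(config: dict, default: int = 2048) -> int:
    """Return a context length derived from all limits the config reports."""
    candidates = [
        config.get("max_position_embeddings"),
        config.get("model_max_length"),
    ]
    candidates = [c for c in candidates if isinstance(c, int) and c > 0]
    if not candidates:
        return default  # nothing usable reported; fall back
    # Taking the max assumes an under-reported value is the common failure;
    # a conservative deployment could equally take min() instead.
    return max(candidates)


print(resolve_max_position_embeddings(
    {"max_position_embeddings": 2048, "model_max_length": 8192}))  # -> 8192
```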
File                 | Last commit | Date
custom_modeling/     | Using both value from config as they might not be correct. (#2817) | 2024-12-10 19:37:09 +01:00
__init__.py          | Use FP8 KV cache when specified by compressed-tensors (#2761) | 2024-11-26 08:27:41 +01:00
bloom.py             | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-07-05 10:29:56 +02:00
causal_lm.py         | Sync (most) server dependencies with Nix (#2782) | 2024-12-03 04:04:06 +01:00
flash_causal_lm.py   | Using both value from config as they might not be correct. (#2817) | 2024-12-10 19:37:09 +01:00
galactica.py         | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
globals.py           | Attempt for cleverer auto batch_prefill values (some simplifications). (#2808) | 2024-12-09 19:44:32 +01:00
idefics_causal_lm.py | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00
mamba.py             | Choosing input/total tokens automatically based on available VRAM? (#2673) | 2024-10-28 04:59:49 +01:00
metadata_kernels.py  | feat: add payload limit (#2726) | 2024-11-21 18:20:15 +00:00
mllama_causal_lm.py  | feat: add triton kernels to decrease latency of large batches (#2687) | 2024-10-25 21:10:00 +00:00
model.py             | Removing experimental to prefill chunking. | 2024-12-06 19:09:40 +01:00
pali_gemma.py        | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
seq2seq_lm.py        | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00
types.py             | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00
vlm_causal_lm.py     | fix cuda graphs for qwen2-vl (#2708) | 2024-11-01 03:05:34 +01:00