| Name | Latest commit | Date |
|---|---|---|
| custom_modeling | Revert "feat: improve qwen2-vl startup " (#2924) | 2025-01-17 12:09:05 -05:00 |
| __init__.py | Transformers backend TP fix (#2945) | 2025-01-23 18:09:57 +01:00 |
| bloom.py | Refactor dead code - Removing all flash_xxx.py files. (#2166) | 2024-07-05 10:29:56 +02:00 |
| causal_lm.py | Sync (most) server dependencies with Nix (#2782) | 2024-12-03 04:04:06 +01:00 |
| flash_causal_lm.py | Tmp tp transformers (#2942) | 2025-01-23 18:07:30 +01:00 |
| galactica.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| globals.py | Fixing the oom maybe with 2.5.1 change. (#2958) | 2025-01-28 10:30:28 +01:00 |
| idefics_causal_lm.py | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| mamba.py | Choosing input/total tokens automatically based on available VRAM? (#2673) | 2024-10-28 04:59:49 +01:00 |
| metadata_kernels.py | feat: add payload limit (#2726) | 2024-11-21 18:20:15 +00:00 |
| mllama_causal_lm.py | feat: add triton kernels to decrease latency of large batches (#2687) | 2024-10-25 21:10:00 +00:00 |
| model.py | Flash decoding kernel adding and prefill-chunking and prefix caching enabling in intel cpu/xpu (#2815) | 2025-01-17 12:04:57 +01:00 |
| pali_gemma.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| seq2seq_lm.py | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| transformers_flash_causal_lm.py | Transformers backend TP fix (#2945) | 2025-01-23 18:09:57 +01:00 |
| types.py | feat: prefill chunking (#2600) | 2024-10-16 12:49:33 +02:00 |
| vlm_causal_lm.py | Revert "feat: improve qwen2-vl startup " (#2924) | 2025-01-17 12:09:05 -05:00 |