text-generation-inference/server/text_generation_server/models
Daniël de Kok 748764efb4 Add Phi-3 medium support (#2039)
Add support for Phi-3-medium

The main difference between the medium and mini models is that medium
uses grouped query attention with a packed QKV matrix. This change adds
support for GQA with packed matrices to `Weights.get_weights_col_packed`
and uses it for Phi-3. This also allows us to remove the custom
implementation of GQA from dbrx attention loading.
2024-09-24 03:42:29 +00:00
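For context, the sketch below illustrates why GQA needs special handling when the QKV projection is stored as one packed matrix: the query slice spans all attention heads, while the key and value slices span only the smaller number of KV heads, so the packed tensor cannot be split into three equal parts. The helper name `split_packed_qkv` and the Phi-3-medium-like shapes are illustrative assumptions; this is not the actual `Weights.get_weights_col_packed` implementation, which also has to deal with details such as tensor-parallel sharding.

```python
import torch

def split_packed_qkv(packed: torch.Tensor,
                     num_heads: int,
                     num_kv_heads: int,
                     head_dim: int):
    # With GQA the fused projection stacks [Q | K | V] along the output
    # dimension, but K and V only have num_kv_heads heads, so the three
    # slices have different sizes. (Hypothetical helper for illustration,
    # not the TGI implementation.)
    q_size = num_heads * head_dim
    kv_size = num_kv_heads * head_dim
    return torch.split(packed, [q_size, kv_size, kv_size], dim=0)

# Shapes loosely modelled on Phi-3-medium (40 query heads, 10 KV heads,
# head_dim 128, hidden size 5120); treat these numbers as assumptions.
packed = torch.empty(40 * 128 + 2 * 10 * 128, 5120)
q, k, v = split_packed_qkv(packed, num_heads=40, num_kv_heads=10, head_dim=128)
print(q.shape, k.shape, v.shape)
# torch.Size([5120, 5120]) torch.Size([1280, 5120]) torch.Size([1280, 5120])
```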
custom_modeling Add Phi-3 medium support (#2039) 2024-09-24 03:42:29 +00:00
__init__.py ROCm and sliding windows fixes (#2033) 2024-09-24 03:42:29 +00:00
bloom.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
causal_lm.py server: use chunked inputs 2024-09-24 03:42:29 +00:00
flash_causal_lm.py ROCm and sliding windows fixes (#2033) 2024-09-24 03:42:29 +00:00
flash_cohere.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
flash_dbrx.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
flash_gemma.py Fix (flash) Gemma prefix and enable tests 2024-09-24 03:14:53 +00:00
flash_gpt2.py Add support for Marlin-quantized models 2024-09-24 03:38:05 +00:00
flash_llama.py Add support for exl2 quantization 2024-09-24 03:19:39 +00:00
flash_mistral.py feat: move allocation logic to rust (#1835) 2024-09-24 03:34:15 +00:00
flash_mixtral.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
flash_neox.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
flash_phi.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
flash_qwen2.py feat: move allocation logic to rust (#1835) 2024-09-24 03:34:15 +00:00
flash_rw.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
flash_santacoder.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
flash_starcoder2.py feat: move allocation logic to rust (#1835) 2024-09-24 03:34:15 +00:00
galactica.py server: use chunked inputs 2024-09-24 03:42:29 +00:00
globals.py Purely refactors paged/attention into layers/attention and make hardware differences more obvious with 1 file per hardware. (#1986) 2024-09-24 03:19:39 +00:00
gpt_neox.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
idefics2.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
idefics_causal_lm.py server: use chunked inputs 2024-09-24 03:42:29 +00:00
idefics.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
llava_next.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
mamba.py server: use chunked inputs 2024-09-24 03:42:29 +00:00
model.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
mpt.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
opt.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
pali_gemma.py server: use chunked inputs 2024-09-24 03:42:29 +00:00
phi.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
rw.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
santacoder.py Align the source code with main branch 2.0.4 2024-09-24 03:06:55 +00:00
seq2seq_lm.py server: use chunked inputs 2024-09-24 03:42:29 +00:00
t5.py MLPSpeculator. (#1865) 2024-07-17 05:36:58 +00:00
types.py chore: add pre-commit (#1569) 2024-04-24 15:32:02 +03:00
vlm_causal_lm.py server: use chunked inputs 2024-09-24 03:42:29 +00:00