Deepseek V2 is a MoE model from Deepseek. Relevant variations compared to other models:

- Grouped top-K in expert selection (see the routing sketch after this list).
- The yarn mscale is calculated from the `mscale` and `mscale_all_dim` configuration options.
- `mscale_all_dim` is also used to scale the attention softmax (sketched below).
- The query/key representations are permuted before the rotary embeddings are applied.
- Some projections cannot be sharded (`q_a_proj`, `kv_a_proj_with_mqa`), so we need weight loaders that support quantized weights. To this end, `{Weights,WeightLoader}.get_weight` was added.
- The query/key head dimensionality differs from that of the value, so we need to pad during attention (sketched below).
- A head size of 192 requires an extension to our paged attention fork, and we need to ensure that the KV cache is allocated with the correct size.
- Shared experts (sketched below).
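A minimal sketch of grouped top-K expert selection: router scores are first reduced per group, only the best-scoring groups are kept, and the final top-K runs over the surviving experts. Tensor shapes and the function name are illustrative, not TGI's actual implementation.

```python
import torch

def grouped_topk(scores, n_groups, topk_groups, top_k):
    # scores: [n_tokens, n_experts] router logits
    n_tokens, n_experts = scores.shape
    # Score each group by its best expert.
    group_scores = scores.view(n_tokens, n_groups, -1).max(dim=-1).values
    # Keep only the `topk_groups` highest-scoring groups ...
    group_idx = torch.topk(group_scores, k=topk_groups, dim=-1).indices
    group_mask = torch.zeros_like(group_scores)
    group_mask.scatter_(1, group_idx, 1.0)
    # ... and mask experts in all other groups before the final top-K.
    expert_mask = (
        group_mask.unsqueeze(-1)
        .expand(n_tokens, n_groups, n_experts // n_groups)
        .reshape(n_tokens, n_experts)
    )
    masked_scores = scores.masked_fill(expert_mask == 0, float("-inf"))
    return torch.topk(masked_scores, k=top_k, dim=-1)
```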
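A hedged sketch of how the two configuration options feed into yarn's mscale and the attention softmax scale; the base formula follows the published Deepseek V2 modeling code, but the helper names here are assumptions.

```python
import math

def yarn_get_mscale(scale: float = 1.0, mscale: float = 1.0) -> float:
    # Magnitude correction for yarn; identity when no scaling is applied.
    if scale <= 1.0:
        return 1.0
    return 0.1 * mscale * math.log(scale) + 1.0

def rotary_mscale(scaling_factor, mscale, mscale_all_dim):
    # Rotary embedding scaling uses the ratio of both options.
    return yarn_get_mscale(scaling_factor, mscale) / yarn_get_mscale(
        scaling_factor, mscale_all_dim
    )

def softmax_scale(qk_head_dim, scaling_factor, mscale_all_dim):
    # The attention softmax scale folds in `mscale_all_dim` squared.
    m = yarn_get_mscale(scaling_factor, mscale_all_dim)
    return qk_head_dim ** -0.5 * m * m
```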
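A minimal sketch of the value padding, assuming an attention kernel that requires matching query/key/value head sizes; zero-padding the values is safe because the padded output channels come out as zeros and can simply be sliced off. `attention_fn` stands in for the real kernel.

```python
import torch.nn.functional as F

def attention_with_padded_values(query, key, value, attention_fn):
    # query/key: [n_tokens, n_heads, 192]; value: [n_tokens, n_heads, 128]
    head_pad = query.shape[-1] - value.shape[-1]
    # Zero-pad the last dimension of the values up to the query/key size.
    padded_value = F.pad(value, (0, head_pad))
    out = attention_fn(query, key, padded_value)
    # Drop the padded channels from the attention output again.
    return out[..., : value.shape[-1]]
```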
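A sketch of shared experts: a dense MLP that every token passes through unconditionally, with its output added to the routed expert output. The module wiring is an assumption for illustration.

```python
import torch.nn as nn

class MoEWithSharedExperts(nn.Module):
    def __init__(self, routed_moe: nn.Module, shared_mlp: nn.Module):
        super().__init__()
        self.routed_moe = routed_moe  # sparse, grouped top-K routed experts
        self.shared_mlp = shared_mlp  # dense MLP shared by all tokens

    def forward(self, hidden_states):
        # Every token goes through the shared experts; the routed
        # contribution is added on top.
        return self.routed_moe(hidden_states) + self.shared_mlp(hidden_states)
```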
__init__.py
bloom_modeling.py
clip.py
flash_cohere_modeling.py
flash_dbrx_modeling.py
flash_deepseek_v2_modeling.py
flash_gemma2_modeling.py
flash_gemma_modeling.py
flash_gpt2_modeling.py
flash_llama_modeling.py
flash_mistral_modeling.py
flash_mixtral_modeling.py
flash_neox_modeling.py
flash_pali_gemma_modeling.py
flash_phi_modeling.py
flash_qwen2_modeling.py
flash_rw_modeling.py
flash_santacoder_modeling.py
flash_starcoder2_modeling.py
idefics2.py
idefics_config.py
idefics_image_processing.py
idefics_modeling.py
idefics_perceiver.py
idefics_processing.py
idefics_vision.py
llava_next.py
mamba_modeling.py
mpt_modeling.py
neox_modeling.py
opt_modeling.py
phi_modeling.py
siglip.py
t5_modeling.py
vlm.py