mirror of https://github.com/huggingface/text-generation-inference.git
synced 2025-04-21 23:12:07 +00:00
Deepseek V2 is a MoE model from DeepSeek. Relevant differences compared to other models:

- Grouped top-k in expert selection.
- `mscale` in yarn is calculated using the `mscale` and `mscale_all_dim` configuration options.
- `mscale_all_dim` is also used in scaling the attention softmax.
- The query/key representations are permuted before applying rotary embeddings.
- Some projections cannot be sharded (`q_a_proj`, `kv_a_proj_with_mqa`), so we need weight loaders that support quantized weights. To this end, `{Weights,WeightLoader}.get_weight` was added.
- The query/key head dimensionality differs from that of the value, so we need to pad during attention.
- A head size of 192 needs an extension to our paged attention fork, and we need to ensure that the KV cache is allocated with the correct size.
- Shared experts.
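The grouped top-k selection mentioned above can be sketched as follows: experts are partitioned into groups, the highest-scoring groups (ranked by their best expert) are kept, and the final top-k experts are drawn only from those groups. This is a minimal pure-Python sketch; `grouped_topk` and its argument names are hypothetical, not TGI's actual API.

```python
def grouped_topk(scores, n_group, topk_group, top_k):
    """Grouped top-k expert selection (hypothetical sketch, not TGI's API).

    `scores` holds one router score per expert. Experts are split into
    `n_group` contiguous groups; only the `topk_group` groups with the
    highest per-group maximum survive, and the final `top_k` experts are
    chosen globally from the surviving groups.
    """
    n_experts = len(scores)
    group_size = n_experts // n_group
    # Score each group by its best expert.
    group_scores = [
        max(scores[g * group_size : (g + 1) * group_size]) for g in range(n_group)
    ]
    # Keep the topk_group best groups.
    kept = set(
        sorted(range(n_group), key=lambda g: group_scores[g], reverse=True)[:topk_group]
    )
    # Mask experts outside the kept groups, then take the global top_k.
    candidates = [
        (s if i // group_size in kept else float("-inf"), i)
        for i, s in enumerate(scores)
    ]
    return [i for _, i in sorted(candidates, reverse=True)[:top_k]]
```

For example, with 8 experts in 4 groups, `topk_group=2` and `top_k=2`, experts are only picked from the two groups containing the strongest individual experts, even if another group's second-best expert scores higher than the chosen ones.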
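The yarn `mscale` interaction can be illustrated with the formula used by the reference DeepSeek V2 implementation: the magnitude correction for the rotary embedding is the ratio of the `mscale` and `mscale_all_dim` variants, while the attention softmax scale is multiplied by the squared `mscale_all_dim` variant. This is a sketch; the configuration values below are illustrative, not taken from a real checkpoint.

```python
import math


def yarn_get_mscale(scale: float = 1.0, mscale: float = 1.0) -> float:
    # No extra scaling when the context window is not extended.
    if scale <= 1.0:
        return 1.0
    return 0.1 * mscale * math.log(scale) + 1.0


# Illustrative configuration values (not from a real checkpoint).
scaling_factor, mscale_cfg, mscale_all_dim = 40.0, 1.0, 1.0

# Rotary embedding magnitude correction: ratio of the two variants.
rotary_mscale = yarn_get_mscale(scaling_factor, mscale_cfg) / yarn_get_mscale(
    scaling_factor, mscale_all_dim
)

# Attention softmax scale: base 1/sqrt(head_dim), multiplied by the
# squared `mscale_all_dim` variant.
head_dim = 192
softmax_scale = head_dim**-0.5 * yarn_get_mscale(scaling_factor, mscale_all_dim) ** 2
```

Note that when `mscale == mscale_all_dim`, the rotary correction cancels to 1.0 and only the softmax scale is affected.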