| Name | Last commit | Date |
|------|-------------|------|
| `attention/` | Basic flashinfer 0.2 support (#2862) | 2025-01-09 16:25:00 +01:00 |
| `awq/` | fix incorrect output of Qwen2-7B-Instruct-GPTQ-Int4 and Qwen2-7B-Inst… (#2717) | 2024-11-04 16:07:51 +01:00 |
| `compressed_tensors/` | Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758) | 2024-11-20 18:25:23 +01:00 |
| `gptq/` | fix incorrect output of Qwen2-7B-Instruct-GPTQ-Int4 and Qwen2-7B-Inst… (#2717) | 2024-11-04 16:07:51 +01:00 |
| `marlin/` | Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758) | 2024-11-20 18:25:23 +01:00 |
| `moe/` | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00 |
| `__init__.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `bnb.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `conv.py` | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00 |
| `eetq.py` | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00 |
| `exl2.py` | Add support for Deepseek V2 (#2224) | 2024-07-19 17:23:20 +02:00 |
| `fp8.py` | Add initial support for compressed-tensors checkpoints (#2732) | 2024-11-10 13:54:07 +01:00 |
| `layernorm.py` | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00 |
| `linear.py` | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00 |
| `lora.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `medusa.py` | Prefix caching (#2402) | 2024-08-20 11:15:30 +02:00 |
| `mlp.py` | Tied embeddings in MLP speculator. (#2473) | 2024-08-29 17:44:54 +02:00 |
| `rotary.py` | Update vllm kernels for ROCM (#2826) | 2024-12-18 12:44:42 +01:00 |
| `speculative.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |
| `tensor_parallel.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00 |