| Name | Last commit | Date |
| --- | --- | --- |
| `attention` | Use kernels from the kernel hub (#2988) | 2025-02-10 |
| `awq` | fix incorrect output of Qwen2-7B-Instruct-GPTQ-Int4 and Qwen2-7B-Inst… (#2717) | 2024-11-04 |
| `compressed_tensors` | Use kernels from the kernel hub (#2988) | 2025-02-10 |
| `gptq` | Flash Transformers modeling backend support (#2913) | 2025-01-21 |
| `marlin` | Use kernels from the kernel hub (#2988) | 2025-02-10 |
| `moe` | Use kernels from the kernel hub (#2988) | 2025-02-10 |
| `__init__.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 |
| `bnb.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 |
| `conv.py` | Refactor layers. (#1866) | 2024-05-13 |
| `eetq.py` | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 |
| `exl2.py` | Add support for Deepseek V2 (#2224) | 2024-07-19 |
| `fp8.py` | Use kernels from the kernel hub (#2988) | 2025-02-10 |
| `layernorm.py` | Update vllm kernels for ROCM (#2826) | 2024-12-18 |
| `linear.py` | Update vllm kernels for ROCM (#2826) | 2024-12-18 |
| `lora.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 |
| `medusa.py` | Prefix caching (#2402) | 2024-08-20 |
| `mlp.py` | Tied embeddings in MLP speculator. (#2473) | 2024-08-29 |
| `rotary.py` | fix Qwen VL break in intel platform (#3002) | 2025-02-12 |
| `speculative.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 |
| `tensor_parallel.py` | feat: add ruff and resolve issue (#2262) | 2024-07-26 |