Name | Last commit | Date
attention | Add support for FP8 KV cache scales (#2628) | 2024-10-24 16:36:18 +02:00
awq | CI job. Gpt awq 4 (#2665) | 2024-10-18 17:55:53 +02:00
gptq | Fixing rocm gptq by using triton code too (renamed cuda into triton). (#2691) | 2024-10-25 09:17:57 +02:00
marlin | Fp8 e4m3_fnuz support for rocm (#2588) | 2024-10-16 09:54:50 +02:00
moe | add ipex moe implementation to support Mixtral and PhiMoe | 2024-10-29 23:54:42 -07:00
__init__.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
bnb.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
conv.py | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00
eetq.py | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-07-20 19:02:04 +02:00
exl2.py | Add support for Deepseek V2 (#2224) | 2024-07-19 17:23:20 +02:00
fp8.py | Switch from fbgemm-gpu w8a8 scaled matmul to vLLM/marlin-kernels (#2688) | 2024-10-25 16:40:47 +02:00
layernorm.py | Removing IPEX_AVAIL. (#2115) | 2024-06-25 13:20:57 +02:00
linear.py | Update ROCM libs and improvements (#2579) | 2024-09-30 10:54:32 +02:00
lora.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
medusa.py | Prefix caching (#2402) | 2024-08-20 11:15:30 +02:00
mlp.py | Tied embeddings in MLP speculator. (#2473) | 2024-08-29 17:44:54 +02:00
rotary.py | Support qwen2 vl (#2689) | 2024-10-30 12:40:51 -04:00
speculative.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00
tensor_parallel.py | feat: add ruff and resolve issue (#2262) | 2024-07-26 10:29:09 -04:00