| File | Last commit | Date |
| --- | --- | --- |
| `attention` | Fixing rocm. (#2164) | 2024-07-02 12:01:08 +02:00 |
| `awq` | Support AWQ quantization with bias (#2117) | 2024-06-25 21:09:00 +02:00 |
| `gptq` | Use symmetric quantization in the quantize subcommand (#2120) | 2024-07-12 12:20:12 +02:00 |
| `__init__.py` | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| `bnb.py` | [Bug Fix] Update torch import reference in bnb quantization (#1902) | 2024-05-15 21:08:32 +02:00 |
| `conv.py` | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00 |
| `eetq.py` | Refactor layers. (#1866) | 2024-05-13 12:44:30 +02:00 |
| `exl2.py` | Move quantized weight handling out of the Weights class (#2194) | 2024-07-09 20:04:03 +02:00 |
| `fp8.py` | Add support for FP8 on compute capability >=8.0, <8.9 (#2213) | 2024-07-11 16:03:26 +02:00 |
| `layernorm.py` | Removing IPEX_AVAIL. (#2115) | 2024-06-25 13:20:57 +02:00 |
| `linear.py` | Merge branch 'main' into ci_amd3 | 2024-07-16 15:15:17 +02:00 |
| `lora.py` | Enable multiple LoRa adapters (#2010) | 2024-06-25 14:46:27 -04:00 |
| `marlin.py` | Merge branch 'main' into ci_amd3 | 2024-07-16 15:15:17 +02:00 |
| `medusa.py` | fix: use path inside of speculator config (#1935) | 2024-05-22 20:46:29 +02:00 |
| `mlp.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `rotary.py` | [fix] Modifying base in yarn embedding (#2212) | 2024-07-12 10:04:51 +02:00 |
| `speculative.py` | MLPSpeculator. (#1865) | 2024-05-14 12:33:18 +02:00 |
| `tensor_parallel.py` | Move quantized weight handling out of the Weights class (#2194) | 2024-07-09 20:04:03 +02:00 |