Name | Last commit message | Last commit date
awq/quantize | feat: format code (#1070) | 2023-09-27 12:22:09 +02:00
gptq | Exllama v2 (#1211) | 2023-11-25 22:38:38 +01:00
convert.py | fit for baichuan models (#981) | 2023-09-08 16:51:34 +02:00
dist.py | feat: add cuda memory fraction (#659) | 2023-07-24 11:43:58 +02:00
flash_attn.py | Add RoCm support (#1243) | 2023-11-27 14:08:12 +01:00
import_utils.py | Add RoCm support (#1243) | 2023-11-27 14:08:12 +01:00
layers.py | feat: mixtral (#1328) | 2023-12-11 14:43:40 +01:00
medusa.py | Speculative (#1308) | 2023-12-11 12:46:30 +01:00
paged_attention.py | feat: paged attention v2 (#1183) | 2023-10-23 12:29:25 +02:00
speculate.py | Speculative (#1308) | 2023-12-11 12:46:30 +01:00
tokens.py | Speculative (#1308) | 2023-12-11 12:46:30 +01:00
watermark.py | Fixing watermark. (#851) | 2023-08-16 07:17:26 +02:00
weights.py | Exllama v2 (#1211) | 2023-11-25 22:38:38 +01:00