Name               | Last commit message                     | Last commit date
-------------------|-----------------------------------------|---------------------------
awq/quantize       | feat: format code (#1070)               | 2023-09-27 12:22:09 +02:00
gptq               | Exllama v2 (#1211)                      | 2023-11-25 22:38:38 +01:00
convert.py         | fit for baichuan models (#981)          | 2023-09-08 16:51:34 +02:00
dist.py            | feat: add cuda memory fraction (#659)   | 2023-07-24 11:43:58 +02:00
flash_attn.py      | Add RoCm support (#1243)                | 2023-11-27 14:08:12 +01:00
import_utils.py    | Add RoCm support (#1243)                | 2023-11-27 14:08:12 +01:00
layers.py          | Add RoCm support (#1243)                | 2023-11-27 14:08:12 +01:00
medusa.py          | Tmp work for medusa.                    | 2023-11-28 15:32:51 +00:00
paged_attention.py | feat: paged attention v2 (#1183)        | 2023-10-23 12:29:25 +02:00
tokens.py          | fix: type hint typo in tokens.py (#1102)| 2023-10-05 09:33:04 +02:00
watermark.py       | Fixing watermark. (#851)                | 2023-08-16 07:17:26 +02:00
weights.py         | Exllama v2 (#1211)                      | 2023-11-25 22:38:38 +01:00