Name                Last commit                                                      Last updated
attention/          Fixing rocm. (#2164)                                             2024-09-24 03:58:13 +00:00
awq/                Support AWQ quantization with bias (#2117)                       2024-09-24 03:55:04 +00:00
gptq/               Use symmetric quantization in the quantize subcommand (#2120)    2024-09-25 05:27:40 +00:00
__init__.py         Enable multiple LoRa adapters (#2010)                            2024-09-24 03:55:04 +00:00
bnb.py              Update torch import reference in bnb quantization (#1902)        2024-07-17 05:36:58 +00:00
conv.py             Refactor layers. (#1866)                                         2024-07-17 05:36:58 +00:00
eetq.py             Refactor layers. (#1866)                                         2024-07-17 05:36:58 +00:00
exl2.py             Move quantized weight handling out of the Weights class (#2194)  2024-09-25 05:27:40 +00:00
fp8.py              Add support for FP8 on compute capability >=8.0, <8.9 (#2213)    2024-09-25 05:27:40 +00:00
layernorm.py        Removing IPEX_AVAIL. (#2115)                                     2024-09-24 03:52:23 +00:00
linear.py           Add support for FP8 on compute capability >=8.0, <8.9 (#2213)    2024-09-25 05:27:40 +00:00
lora.py             Enable multiple LoRa adapters (#2010)                            2024-09-24 03:55:04 +00:00
marlin.py           Add support for FP8 on compute capability >=8.0, <8.9 (#2213)    2024-09-25 05:27:40 +00:00
medusa.py           fix: use path inside of speculator config (#1935)                2024-07-17 05:36:58 +00:00
mlp.py              MLPSpeculator. (#1865)                                           2024-07-17 05:36:58 +00:00
rotary.py           Modifying base in yarn embedding (#2212)                         2024-09-25 05:27:40 +00:00
speculative.py      MLPSpeculator. (#1865)                                           2024-07-17 05:36:58 +00:00
tensor_parallel.py  Move quantized weight handling out of the Weights class (#2194)  2024-09-25 05:27:40 +00:00
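The three entries pointing at #2213 (fp8.py, linear.py, marlin.py) extend FP8 support to GPUs of compute capability >=8.0 but <8.9, i.e. Ampere-class cards that lack the native FP8 hardware introduced with Ada (8.9) and Hopper (9.0). As a minimal sketch of what such a capability gate can look like, assuming only PyTorch's standard device query; the helper name supports_fp8_fallback is hypothetical and not taken from the repository:

```python
import torch

def supports_fp8_fallback() -> bool:
    """Hypothetical gate mirroring the #2213 range: FP8 via fallback
    kernels on compute capability >= 8.0 and < 8.9."""
    if not torch.cuda.is_available():
        return False
    major, minor = torch.cuda.get_device_capability()
    # Devices at 8.9 (Ada) or 9.0 (Hopper) have native FP8 tensor
    # cores and would take a different path; this covers the older range.
    return (8, 0) <= (major, minor) < (8, 9)
```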