| Name | Last commit message | Last commit date |
| --- | --- | --- |
| custom_kernels/ | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| exllama_kernels/ | MI300 compatibility (#1764) | 2024-07-17 05:36:58 +00:00 |
| exllamav2_kernels/ | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| marlin/ | Add support for repacking AWQ weights for GPTQ-Marlin (#2278) | 2024-09-25 05:31:31 +00:00 |
| tests/ | fix: refactor adapter weight loading and mapping (#2193) | 2024-09-25 05:39:58 +00:00 |
| text_generation_server/ | Some small fixes for the Torch 2.4.0 update (#2304) | 2024-09-25 05:40:25 +00:00 |
| .gitignore | Impl simple mamba model (#1480) | 2024-04-23 11:45:11 +03:00 |
| fbgemm_remove_unused.patch | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| fix_torch90a.sh | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| Makefile | hotfix: update nccl | 2024-09-25 05:39:58 +00:00 |
| Makefile-awq | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| Makefile-eetq | Upgrade EETQ (Fixes the cuda graphs). (#1729) | 2024-04-25 17:58:27 +03:00 |
| Makefile-fbgemm | chore: update to torch 2.4 (#2259) | 2024-09-25 05:39:14 +00:00 |
| Makefile-flash-att | Hotfixing make install. (#2008) | 2024-09-24 03:29:29 +00:00 |
| Makefile-flash-att-v2 | Softcapping for gemma2. (#2273) | 2024-09-25 05:31:08 +00:00 |
| Makefile-lorax-punica | Enable multiple LoRa adapters (#2010) | 2024-09-24 03:55:04 +00:00 |
| Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| Makefile-vllm | Add support for Deepseek V2 (#2224) | 2024-09-25 05:27:40 +00:00 |
| poetry.lock | Some small fixes for the Torch 2.4.0 update (#2304) | 2024-09-25 05:40:25 +00:00 |
| pyproject.toml | chore: update to torch 2.4 (#2259) | 2024-09-25 05:39:14 +00:00 |
| README.md | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| requirements_cuda.txt | hotfix: pin numpy (#2289) | 2024-09-25 05:38:48 +00:00 |
| requirements_intel.txt | hotfix: pin numpy (#2289) | 2024-09-25 05:38:48 +00:00 |
| requirements_rocm.txt | hotfix: pin numpy (#2289) | 2024-09-25 05:38:48 +00:00 |