| Name | Last commit | Date |
| --- | --- | --- |
| custom_kernels | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| exllama_kernels | MI300 compatibility (#1764) | 2024-07-17 05:36:58 +00:00 |
| exllamav2_kernels | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| marlin | Add support for repacking AWQ weights for GPTQ-Marlin (#2278) | 2024-09-25 05:31:31 +00:00 |
| tests | Improve the handling of quantized weights (#2250) | 2024-09-25 05:27:40 +00:00 |
| text_generation_server | Add support for Llama 3 rotary embeddings (#2286) | 2024-09-25 05:38:48 +00:00 |
| .gitignore | Impl simple mamba model (#1480) | 2024-04-23 11:45:11 +03:00 |
| fbgemm_remove_unused.patch | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| fix_torch90a.sh | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| Makefile | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| Makefile-awq | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| Makefile-eetq | Upgrade EETQ (Fixes the cuda graphs). (#1729) | 2024-04-25 17:58:27 +03:00 |
| Makefile-fbgemm | feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248) | 2024-09-25 05:30:41 +00:00 |
| Makefile-flash-att | Hotfixing make install . (#2008) | 2024-09-24 03:29:29 +00:00 |
| Makefile-flash-att-v2 | Softcapping for gemma2. (#2273) | 2024-09-25 05:31:08 +00:00 |
| Makefile-lorax-punica | Enable multiple LoRa adapters (#2010) | 2024-09-24 03:55:04 +00:00 |
| Makefile-selective-scan | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| Makefile-vllm | Add support for Deepseek V2 (#2224) | 2024-09-25 05:27:40 +00:00 |
| poetry.lock | Add support for Llama 3 rotary embeddings (#2286) | 2024-09-25 05:38:48 +00:00 |
| pyproject.toml | Add support for Llama 3 rotary embeddings (#2286) | 2024-09-25 05:38:48 +00:00 |
| README.md | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| requirements_cuda.txt | Add support for Llama 3 rotary embeddings (#2286) | 2024-09-25 05:38:48 +00:00 |
| requirements_intel.txt | Add support for Llama 3 rotary embeddings (#2286) | 2024-09-25 05:38:48 +00:00 |
| requirements_rocm.txt | Add support for Llama 3 rotary embeddings (#2286) | 2024-09-25 05:38:48 +00:00 |