text-generation-inference/docs/source/conceptual
Latest commit: 7664d2e2b3 by drbh — CI (2592): Allow LoRA adapter revision in server launcher (#2602)
allow revision for lora adapters from launcher
Co-authored-by: Sida <sida@kulamind.com>
Co-authored-by: teamclouday <teamclouday@gmail.com>
2024-10-27 04:03:57 +00:00
File                   Last commit                                                                   Date
external.md            Add links to Adyen blogpost (#2500)                                           2024-09-25 06:14:07 +00:00
flash_attention.md     chore: add pre-commit (#1569)                                                 2024-04-24 15:32:02 +03:00
guidance.md            Add support for exl2 quantization                                             2024-09-24 03:19:39 +00:00
lora.md                CI (2592): Allow LoRA adapter revision in server launcher (#2602)             2024-10-27 04:03:57 +00:00
paged_attention.md     Paged Attention Conceptual Guide (#901)                                       2023-09-08 14:18:42 +02:00
quantization.md        Using an enum for flash backens (paged/flashdecoding/flashinfer) (#2385)      2024-09-25 06:04:51 +00:00
safetensors.md         chore: add pre-commit (#1569)                                                 2024-04-24 15:32:02 +03:00
speculation.md         feat: add train medusa head tutorial (#1934)                                  2024-07-17 05:36:58 +00:00
streaming.md           Add links to Adyen blogpost (#2500)                                           2024-09-25 06:14:07 +00:00
tensor_parallelism.md  chore: add pre-commit (#1569)                                                 2024-04-24 15:32:02 +03:00