Llamacpp backend

If llama.cpp and its dependencies are already installed at the system level, running cargo build should be sufficient. However, if you want to experiment with a specific version of llama.cpp, some additional setup is required.
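To check whether a system-wide llama.cpp is visible to the build, you can query pkg-config (llama.cpp installs a pkg-config file when built with an install prefix; the module name "llama" is an assumption based on that file):

```shell
# Prints the installed llama.cpp version if pkg-config can find it;
# fails with a "not found" error otherwise.
pkg-config --modversion llama
```

If this fails, follow the steps below to build and install llama.cpp yourself.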

Install llama.cpp

LLAMACPP_PREFIX=$(pwd)/llama.cpp.out

git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
# Build only the core library: skip common helpers, tests,
# examples and the bundled server.
cmake -B build \
    -DCMAKE_INSTALL_PREFIX="$LLAMACPP_PREFIX" \
    -DLLAMA_BUILD_COMMON=OFF \
    -DLLAMA_BUILD_TESTS=OFF \
    -DLLAMA_BUILD_EXAMPLES=OFF \
    -DLLAMA_BUILD_SERVER=OFF
cmake --build build --config Release -j
cmake --install build

Build TGI

PKG_CONFIG_PATH="$LLAMACPP_PREFIX/lib/pkgconfig" cargo build
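Once built, the backend can be launched against a model from the Hugging Face Hub. The sketch below is a hypothetical invocation: the model ID is only an example, and while --model-id, --model-gguf and --port are options of this backend, you should verify the exact flags with --help before relying on them. --model-gguf is optional; without it, the backend fetches and converts the model itself.

```shell
# Run the backend against the locally built llama.cpp.
# The model ID is an example; substitute your own.
PKG_CONFIG_PATH="$LLAMACPP_PREFIX/lib/pkgconfig" \
    cargo run --release -- \
    --port 8080 \
    --model-id Qwen/Qwen2.5-3B-Instruct
```

If the library was installed to a non-standard prefix, you may also need to point the dynamic linker at it at runtime (e.g. via LD_LIBRARY_PATH on Linux).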