text-generation-inference/backends/llamacpp/csrc
backend.cpp  feat(backend): use new batch API to generate tokens  (2024-11-28 23:57:24 +01:00)
backend.hpp  feat(backend): add missing temperature parameter  (2024-11-28 16:55:17 +01:00)
ffi.hpp  feat(backend): create llama_context_params with default factory  (2024-11-28 23:57:13 +01:00)