text-generation-inference/integration-tests/models/__snapshots__/test_flash_llama_exl2
Daniël de Kok 36dd16017c Add support for exl2 quantization
Mostly straightforward changes to existing code:

* Wrap quantizer parameters in a small wrapper to avoid passing
  around untyped tuples and needing to repack them as a dict
  (sketched below).
* Move scratch-space computation to warmup: sizing the scratch
  buffers requires the maximum input sequence length, and computing
  it there avoids allocating huge buffers that OOM (also sketched
  below).
2024-05-30 11:28:05 +02:00
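
A minimal sketch of the first change, assuming a PyTorch dataclass as
the "small wrapper"; the class and field names here (Exl2Weight,
q_weight, q_scale, q_groups, q_invperm) are illustrative, not
necessarily those used in the commit:

    from dataclasses import dataclass

    import torch

    @dataclass
    class Exl2Weight:
        # Typed container for exl2 quantizer parameters, replacing an
        # untyped tuple that previously had to be repacked as a dict
        # at each call site. (Hypothetical field names.)
        q_weight: torch.Tensor   # packed quantized weights
        q_scale: torch.Tensor    # quantization scales
        q_groups: torch.Tensor   # group metadata
        q_invperm: torch.Tensor  # inverse permutation (act-order)

        @property
        def device(self) -> torch.device:
            return self.q_weight.device

A layer or kernel wrapper can then accept a single Exl2Weight
argument instead of several loose positional tensors.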
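And a sketch of the second change, deferring scratch-buffer
allocation to warmup. All names (warmup, max_input_tokens,
max_scratch_bytes_per_token) are assumptions made for illustration;
the point is only that the buffer is sized from the actual maximum
input sequence length rather than a pessimistic constant:

    import torch

    # Hypothetical module-level scratch buffer, allocated once at warmup.
    _scratch: torch.Tensor | None = None

    def warmup(max_input_tokens: int,
               max_scratch_bytes_per_token: int,
               device: torch.device) -> None:
        # Defer scratch allocation until the server knows the maximum
        # input sequence length; sizing from a worst-case constant at
        # load time can allocate huge buffers that OOM the GPU.
        global _scratch
        nbytes = max_input_tokens * max_scratch_bytes_per_token
        _scratch = torch.empty(nbytes, dtype=torch.uint8, device=device)
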
test_flash_llama_exl2_all_params.json  Add support for exl2 quantization  2024-05-30 11:28:05 +02:00
test_flash_llama_exl2_load.json        Add support for exl2 quantization  2024-05-30 11:28:05 +02:00
test_flash_llama_exl2.json             Add support for exl2 quantization  2024-05-30 11:28:05 +02:00