text-generation-inference/server/text_generation_server/layers/compressed_tensors
File             Last commit                                                            Date
__init__.py      Add initial support for compressed-tensors checkpoints (#2732)        2024-11-10 13:54:07 +01:00
loader.py        Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)  2024-11-20 18:25:23 +01:00
w8a8_int.py      Add support for compressed-tensors w8a8 int checkpoints (#2745)       2024-11-18 17:20:31 +01:00
w8an_fp.py       Do not convert weight scale to e4m3fnuz on CUDA (#2917)               2025-01-16 13:44:32 +01:00
wna16_int_24.py  Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)  2024-11-20 18:25:23 +01:00
wna16_int.py     Add support for wNa16 int 2:4 compressed-tensors checkpoints (#2758)  2024-11-20 18:25:23 +01:00
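
The listing suggests a split between a common loader (loader.py) and one module per quantization scheme (w8a8_int.py, w8an_fp.py, wna16_int.py, wna16_int_24.py), where the loader presumably routes a checkpoint's compressed-tensors config to the matching implementation. Below is a minimal, self-contained sketch of that routing idea only; QuantScheme, pick_loader, W8A8IntLoader, and WNA16IntLoader are all hypothetical names and do not reflect text-generation-inference's actual classes or signatures.

# Illustrative sketch only: the class and function names here are hypothetical
# and do not mirror text-generation-inference's compressed_tensors code.
from dataclasses import dataclass


@dataclass
class QuantScheme:
    """Rough description of one quantization scheme read from a checkpoint config."""
    weight_bits: int
    activation_dtype: str    # e.g. "int8", "fp8", or "none" for weight-only schemes
    sparsity: str = "dense"  # e.g. "dense" or "2:4"


class W8A8IntLoader:
    """Would cover int8 weights with int8 activations (cf. w8a8_int.py)."""
    def load(self, tensor_name: str) -> None:
        print(f"loading {tensor_name} as w8a8 int")


class WNA16IntLoader:
    """Would cover low-bit integer weights with 16-bit activations (cf. wna16_int.py)."""
    def load(self, tensor_name: str) -> None:
        print(f"loading {tensor_name} as wNa16 int")


def pick_loader(scheme: QuantScheme):
    """Route a scheme to a per-scheme loader, loosely analogous to what loader.py might do."""
    if scheme.weight_bits == 8 and scheme.activation_dtype == "int8":
        return W8A8IntLoader()
    if scheme.activation_dtype == "none" and scheme.sparsity == "dense":
        return WNA16IntLoader()
    raise ValueError(f"unsupported scheme: {scheme}")


if __name__ == "__main__":
    # Example: an int8 weight / int8 activation checkpoint goes to the w8a8 loader.
    loader = pick_loader(QuantScheme(weight_bits=8, activation_dtype="int8"))
    loader.load("model.layers.0.self_attn.q_proj.weight")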