text-generation-inference/server/text_generation_server/models
Latest commit: 67d687609b "cleanup" by Felix Marty, 2023-07-12 16:16:58 +00:00
Name                 Last commit                                                          Date
custom_modeling/     cleanup                                                              2023-07-12 16:16:58 +00:00
__init__.py          have a single gptq quantization type                                 2023-07-12 15:43:20 +00:00
bloom.py             feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
causal_lm.py         feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
flash_causal_lm.py   feat(server): use latest flash attention commit (#543)              2023-07-04 20:23:55 +02:00
flash_llama.py       feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
flash_neox.py        feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
flash_rw.py          feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
flash_santacoder.py  add exllama gptq kernel                                              2023-07-05 15:43:42 +00:00
galactica.py         feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
gpt_neox.py          feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
model.py             move exllama buffer init to the top level                            2023-07-12 16:09:26 +00:00
mpt.py               feat(server): use latest flash attention commit (#543)              2023-07-04 20:23:55 +02:00
opt.py               feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
rw.py                feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
santacoder.py        feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
seq2seq_lm.py        feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
t5.py                feat: Add the option to force another dtype than f16. (#513)        2023-06-30 20:30:09 +02:00
types.py             feat(server): support vectorized warpers in flash causal lm (#317)  2023-05-26 12:30:27 +02:00