text-generation-inference/server/text_generation_server/models
Name                 Last commit                                                         Date
custom_modeling/     Merge branch 'main' into gptq-cuda-kernels                          2023-07-06 01:31:05 +09:00
__init__.py          Merge branch 'main' into gptq-cuda-kernels                          2023-07-06 01:31:05 +09:00
bloom.py             feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
causal_lm.py         feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
flash_causal_lm.py   feat(server): use latest flash attention commit (#543)             2023-07-04 20:23:55 +02:00
flash_llama.py       feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
flash_neox.py        feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
flash_rw.py          feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
flash_santacoder.py  add exllama gptq kernel                                             2023-07-05 15:43:42 +00:00
galactica.py         feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
gpt_neox.py          feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
model.py             feat(server): add paged attention to flash models (#516)           2023-06-30 19:09:59 +02:00
mpt.py               feat(server): use latest flash attention commit (#543)             2023-07-04 20:23:55 +02:00
opt.py               feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
rw.py                feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
santacoder.py        feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
seq2seq_lm.py        feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
t5.py                feat: Add the option to force another dtype than f16. (#513)       2023-06-30 20:30:09 +02:00
types.py             feat(server): support vectorized warpers in flash causal lm (#317)  2023-05-26 12:30:27 +02:00