text-generation-inference/server/text_generation_server/models
Name                   Last commit message                                                                   Last commit date
custom_modeling        Add RoCm support (#1243)                                                              2023-11-27 14:08:12 +01:00
__init__.py            Tmp work for medusa.                                                                  2023-11-28 15:32:51 +00:00
bloom.py               Handling bloom prefix. (#1090)                                                        2023-10-03 11:55:10 +02:00
cache_manager.py       feat: add mistral model (#1071)                                                       2023-09-28 09:55:47 +02:00
causal_lm.py           Fix calling cuda() on load_in_8bit (#1153)                                            2023-10-19 10:42:03 +02:00
flash_causal_lm.py     fix: better warmup error                                                              2023-10-25 10:18:58 +02:00
flash_llama.py         Tmp work for medusa.                                                                  2023-11-28 15:32:51 +00:00
flash_mistral.py       feat: add mistral model (#1071)                                                       2023-09-28 09:55:47 +02:00
flash_neox.py          feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671)   2023-07-25 13:00:27 +02:00
flash_rw.py            Fix Falcon weight mapping for H2O.ai checkpoints (#953)                               2023-08-31 21:15:14 +02:00
flash_santacoder.py    feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671)   2023-07-25 13:00:27 +02:00
galactica.py           Fix missing arguments in Galactica's from_pb (#1022)                                  2023-09-21 08:15:59 +02:00
gpt_neox.py            enable bfloat16 for cpu (#1034)                                                       2023-09-19 17:19:28 +02:00
idefics_causal_lm.py   Fix IDEFICS dtype (#1214)                                                             2023-11-23 15:00:09 +01:00
idefics.py             enable bfloat16 for cpu (#1034)                                                       2023-09-19 17:19:28 +02:00
model.py               feat: add mistral model (#1071)                                                       2023-09-28 09:55:47 +02:00
mpt.py                 enable bfloat16 for cpu (#1034)                                                       2023-09-19 17:19:28 +02:00
opt.py                 enable bfloat16 for cpu (#1034)                                                       2023-09-19 17:19:28 +02:00
rw.py                  enable bfloat16 for cpu (#1034)                                                       2023-09-19 17:19:28 +02:00
santacoder.py          enable bfloat16 for cpu (#1034)                                                       2023-09-19 17:19:28 +02:00
seq2seq_lm.py          feat: format code (#1070)                                                             2023-09-27 12:22:09 +02:00
t5.py                  enable bfloat16 for cpu (#1034)                                                       2023-09-19 17:19:28 +02:00
types.py               Rebased #617 (#868)                                                                   2023-08-28 11:43:47 +02:00