text-generation-inference/server/text_generation_server/models (latest commit: 2024-04-19 14:18:05 +03:00)
| File | Last commit | Date |
| --- | --- | --- |
| custom_modeling/ | fix: max_past default value must be -1, not 0 (#1348) | 2024-04-19 14:18:05 +03:00 |
| __init__.py | chore: formatting | 2024-04-18 16:26:00 +03:00 |
| bloom.py | Add Habana copyright header (#122) | 2024-04-08 18:06:21 +02:00 |
| cache_manager.py | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| causal_lm.py | feat: add more latency metrics in forward (#1346) | 2024-04-19 13:41:34 +03:00 |
| flash_causal_lm.py | feat: add more latency metrics in forward (#1346) | 2024-04-19 13:41:34 +03:00 |
| flash_llama.py | fix: fix gpt-q params loading | 2024-04-19 12:12:50 +03:00 |
| flash_mistral.py | fix: fix gpt-q params loading | 2024-04-19 12:12:50 +03:00 |
| flash_mixtral.py | chore: formatting | 2024-04-18 16:26:00 +03:00 |
| flash_neox.py | fix: fix gpt-q params loading | 2024-04-19 12:12:50 +03:00 |
| flash_rw.py | fix: fix gpt-q params loading | 2024-04-19 12:12:50 +03:00 |
| flash_santacoder.py | fix: fix gpt-q params loading | 2024-04-19 12:12:50 +03:00 |
| galactica.py | fix: fix gpt-q params loading | 2024-04-19 12:12:50 +03:00 |
| gpt_neox.py | fix: fix gpt-q params loading | 2024-04-19 12:12:50 +03:00 |
| idefics_causal_lm.py | feat: add more latency metrics in forward (#1346) | 2024-04-19 13:41:34 +03:00 |
| idefics.py | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| model.py | feat: add more latency metrics in forward (#1346) | 2024-04-19 13:41:34 +03:00 |
| mpt.py | fix: fix gpt-q params loading | 2024-04-19 12:12:50 +03:00 |
| opt.py | fix: fix gpt-q params loading | 2024-04-19 12:12:50 +03:00 |
| rw.py | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| santacoder.py | Add changes from Optimum Habana's TGI folder | 2023-12-05 11:12:16 +01:00 |
| seq2seq_lm.py | feat: add more latency metrics in forward (#1346) | 2024-04-19 13:41:34 +03:00 |
| t5.py | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| types.py | chore: formatting | 2024-04-18 16:26:00 +03:00 |