text-generation-inference/server/text_generation_server/models/custom_modeling
Last commit: 2023-07-05 15:43:42 +00:00
| File | Last commit message | Last commit date |
| --- | --- | --- |
| __init__.py | feat(server): flash santacoder (#153) | 2023-04-03 19:06:42 +02:00 |
| bloom_modeling.py | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| flash_llama_modeling.py | feat(server): pre-allocate past key values for flash causal LM (#412) | 2023-06-12 18:30:29 +02:00 |
| flash_neox_modeling.py | feat(server): Add inference support for GPTQ (llama + falcon tested) + Quantization script (#438) | 2023-06-26 12:27:01 +02:00 |
| flash_rw_modeling.py | feat(server): Add inference support for GPTQ (llama + falcon tested) + Quantization script (#438) | 2023-06-26 12:27:01 +02:00 |
| flash_santacoder_modeling.py | add exllama gptq kernel | 2023-07-05 15:43:42 +00:00 |
| neox_modeling.py | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| opt_modeling.py | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| t5_modeling.py | fix(server): Fixing T5 in case the names are mixed up. (#475) | 2023-06-20 18:03:36 +02:00 |