text-generation-inference/server/text_generation_server/utils
Latest commit 732da6942b (Nicolas Patry, 2023-06-14 14:55:45 +02:00): Remove lots of dead code, move triton to hard requirement. Also added option to upload to hub directly after quantizing.
gptq               Remove lots of dead code, move triton to hard requirement         2023-06-14 14:55:45 +02:00
__init__.py        feat(server): Rework model loading (#344)                         2023-06-08 14:51:52 +02:00
convert.py         feat(server): support vectorized warpers in flash causal lm (#317) 2023-05-26 12:30:27 +02:00
dist.py            feat(server): Rework model loading (#344)                         2023-06-08 14:51:52 +02:00
hub.py             feat(server): batch tokenization for flash causal lm (#411)       2023-06-05 16:09:41 +02:00
layers.py          Remove lots of dead code, move triton to hard requirement         2023-06-14 14:55:45 +02:00
logits_process.py  feat(server): support vectorized warpers in flash causal lm (#317) 2023-05-26 12:30:27 +02:00
tokens.py          feat(server): support vectorized warpers in flash causal lm (#317) 2023-05-26 12:30:27 +02:00
watermark.py       fix(server): fix flash-neox scores warping (#137)                 2023-03-24 18:21:41 +01:00
weights.py         Fixing register bias + gptq_bits type.                            2023-06-14 09:42:55 +02:00