text-generation-inference/server/text_generation_server/models
Nicolas Patry 5bd2ab6583
feat(server): Support for env value for GPTQ_BITS and GPTQ_GROUPSIZE. (#580)
# What does this PR do?

Some models are already converted and do not have those values in the
file; this enables users to use them with less friction.

Went for a purely env-based approach because adding flags would (imo) be
very tedious to maintain. There's a lot of sanitization to do: those flags
would be errors if not used in conjunction with `--quantize gptq`.
The flags would then need to exist in both the launcher and the server,
and be passed through all the function calls.

This PR is intended as an easy escape hatch, not the de facto method to
use GPTQ in TGI.

Fixes #500
2023-07-12 10:00:02 +02:00
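The escape hatch described above can be sketched as a small helper that prefers the values stored in the model's quantization config and falls back to the `GPTQ_BITS` / `GPTQ_GROUPSIZE` environment variables. This is an illustrative assumption, not TGI's actual code: the function name `get_gptq_params` and the config keys are hypothetical.

```python
import os


def get_gptq_params(config: dict) -> tuple[int, int]:
    """Resolve GPTQ parameters: config values win, env vars are a fallback.

    Sketch only -- the helper name and config keys are assumptions,
    not the real text-generation-inference implementation.
    """
    try:
        bits = int(config["bits"])
        groupsize = int(config["groupsize"])
    except KeyError:
        # Already-converted models may lack these fields in the file;
        # the escape hatch is to read GPTQ_BITS / GPTQ_GROUPSIZE instead.
        try:
            bits = int(os.environ["GPTQ_BITS"])
            groupsize = int(os.environ["GPTQ_GROUPSIZE"])
        except KeyError as missing:
            raise RuntimeError(
                f"Cannot load GPTQ model: missing {missing} "
                "(set the GPTQ_BITS and GPTQ_GROUPSIZE env vars)"
            )
    return bits, groupsize
```

Because the fallback lives in one place on the server side, no new launcher flags or plumbing through the call chain are needed.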
| Name | Last commit message | Last commit date |
| --- | --- | --- |
| custom_modeling | feat(server): Support for env value for GPTQ_BITS and GPTQ_GROUPSIZE. (#580) | 2023-07-12 10:00:02 +02:00 |
| __init__.py | feat(server): Add Non flash MPT. (#514) | 2023-07-03 13:01:46 +02:00 |
| bloom.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| causal_lm.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| flash_causal_lm.py | feat: better errors for warmup and TP (#575) | 2023-07-10 14:47:15 +02:00 |
| flash_llama.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| flash_neox.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| flash_rw.py | fix(server): Fixing RW code (it's remote code so the Arch checking doesn't work to see which weights to keep). (#579) | 2023-07-12 09:51:34 +02:00 |
| flash_santacoder.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| galactica.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| gpt_neox.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| model.py | feat(server): add paged attention to flash models (#516) | 2023-06-30 19:09:59 +02:00 |
| mpt.py | feat(server): use latest flash attention commit (#543) | 2023-07-04 20:23:55 +02:00 |
| opt.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| rw.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| santacoder.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| seq2seq_lm.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| t5.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| types.py | feat(server): support vectorized warpers in flash causal lm (#317) | 2023-05-26 12:30:27 +02:00 |