text-generation-inference/server/text_generation_server/models
momonga 7402a355dc
Fix calling cuda() on load_in_8bit (#1153)
This PR addresses an issue where calling `model = model.cuda()` would raise a `ValueError` when `quantize` is set to `"bitsandbytes"`.

```
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 147, in serve_inner
    model = get_model(
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/__init__.py", line 295, in get_model
    return CausalLM(
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/causal_lm.py", line 515, in __init__
    model = model.cuda()
  File "/opt/conda/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1998, in cuda
    raise ValueError(
ValueError: Calling `cuda()` is not supported for `4-bit` or `8-bit` quantized models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```
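
The idea behind the fix is to guard the explicit device move so it is skipped for quantized models, which bitsandbytes has already placed on the correct device and dtype at load time. A minimal sketch of that guard (the `maybe_move_to_cuda` helper name is illustrative, not from the PR; the actual change lives in `causal_lm.py`):

```python
import torch

def maybe_move_to_cuda(model, quantize=None):
    # bitsandbytes ("load_in_8bit"/"load_in_4bit") already places weights on
    # the right device and casts them to the right dtype during loading, and
    # transformers rejects a later .cuda() call on such models, so only move
    # unquantized models explicitly.
    if quantize is None and torch.cuda.is_available():
        model = model.cuda()
    return model
```

The same guard covers 4-bit quantization, since transformers rejects `.cuda()` on both 4-bit and 8-bit quantized models for the same reason.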

Co-authored-by: mmnga <mmnga1mmnga@gmail.com>
2023-10-19 10:42:03 +02:00
| File | Last commit | Date |
|------|-------------|------|
| `custom_modeling` | Hotfixing idefics base64 parsing. (#1103) | 2023-10-05 13:35:26 +02:00 |
| `__init__.py` | Fixing eetq dockerfile. (#1081) | 2023-09-29 11:19:06 +02:00 |
| `bloom.py` | Handling bloom prefix. (#1090) | 2023-10-03 11:55:10 +02:00 |
| `cache_manager.py` | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| `causal_lm.py` | Fix calling cuda() on load_in_8bit (#1153) | 2023-10-19 10:42:03 +02:00 |
| `flash_causal_lm.py` | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| `flash_llama.py` | Add AWQ quantization inference support (#1019) (#1054) | 2023-09-25 15:31:27 +02:00 |
| `flash_mistral.py` | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| `flash_neox.py` | feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671) | 2023-07-25 13:00:27 +02:00 |
| `flash_rw.py` | Fix Falcon weight mapping for H2O.ai checkpoints (#953) | 2023-08-31 21:15:14 +02:00 |
| `flash_santacoder.py` | feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671) | 2023-07-25 13:00:27 +02:00 |
| `galactica.py` | Fix missing arguments in Galactica's from_pb (#1022) | 2023-09-21 08:15:59 +02:00 |
| `gpt_neox.py` | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| `idefics_causal_lm.py` | feat: format code (#1070) | 2023-09-27 12:22:09 +02:00 |
| `idefics.py` | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| `model.py` | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| `mpt.py` | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| `opt.py` | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| `rw.py` | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| `santacoder.py` | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| `seq2seq_lm.py` | feat: format code (#1070) | 2023-09-27 12:22:09 +02:00 |
| `t5.py` | enable bfloat16 for cpu (#1034) | 2023-09-19 17:19:28 +02:00 |
| `types.py` | Rebased #617 (#868) | 2023-08-28 11:43:47 +02:00 |