text-generation-inference/server
Latest commit 7402a355dc by momonga: Fix calling cuda() on load_in_8bit (#1153)
This PR addresses an issue where calling `model = model.cuda()` would
throw a ValueError when `quantize` is set to "bitsandbytes".

```
> File "/opt/conda/lib/python3.9/site-packages/text_generation_server/server.py", line 147, in serve_inner
    model = get_model(
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/__init__.py", line 295, in get_model
    return CausalLM(
  File "/opt/conda/lib/python3.9/site-packages/text_generation_server/models/causal_lm.py", line 515, in __init__
    model = model.cuda()
  File "/opt/conda/lib/python3.9/site-packages/transformers/modeling_utils.py", line 1998, in cuda
    raise ValueError(
ValueError: Calling `cuda()` is not supported for `4-bit` or `8-bit` quantized models. Please use the model as it is, since the model has already been set to the correct devices and casted to the correct `dtype`.
```
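
For context, below is a minimal sketch of the kind of guard that avoids this error: skip the explicit `.cuda()` move when the model was loaded with bitsandbytes quantization, since `transformers` has already placed the quantized weights on the correct device and dtype. The helper name `maybe_move_to_cuda` and the string check on `quantize` are illustrative assumptions, not necessarily the code merged in this PR.

```python
import torch


def maybe_move_to_cuda(model, quantize=None):
    """Hypothetical helper (illustration only, not the merged fix).

    Quantized bitsandbytes models must not be moved with .cuda():
    transformers has already dispatched their weights to the correct
    device and dtype, and calling .cuda() on them raises the
    ValueError shown in the traceback above.
    """
    is_quantized = quantize is not None and quantize.startswith("bitsandbytes")
    if torch.cuda.is_available() and not is_quantized:
        model = model.cuda()
    return model
```

In the server, such a check would sit where `CausalLM.__init__` currently calls `model = model.cuda()` (causal_lm.py, line 515 in the traceback above).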

Co-authored-by: mmnga <mmnga1mmnga@gmail.com>
Committed 2023-10-19 10:42:03 +02:00

| Name | Last commit | Date |
| --- | --- | --- |
| custom_kernels | feat(server): Rework model loading (#344) | 2023-06-08 14:51:52 +02:00 |
| exllama_kernels | feat: add cuda memory fraction (#659) | 2023-07-24 11:43:58 +02:00 |
| tests | feat: format code (#1070) | 2023-09-27 12:22:09 +02:00 |
| text_generation_server | Fix calling cuda() on load_in_8bit (#1153) | 2023-10-19 10:42:03 +02:00 |
| .gitignore | Support eetq weight only quantization (#1068) | 2023-09-27 11:42:57 +02:00 |
| Makefile | Support eetq weight only quantization (#1068) | 2023-09-27 11:42:57 +02:00 |
| Makefile-awq | Add AWQ quantization inference support (#1019) (#1054) | 2023-09-25 15:31:27 +02:00 |
| Makefile-eetq | Support eetq weight only quantization (#1068) | 2023-09-27 11:42:57 +02:00 |
| Makefile-flash-att | feat(server): use latest flash attention commit (#543) | 2023-07-04 20:23:55 +02:00 |
| Makefile-flash-att-v2 | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| Makefile-vllm | feat: add mistral model (#1071) | 2023-09-28 09:55:47 +02:00 |
| poetry.lock | Prepare for v1.1.1 (#1100) | 2023-10-05 16:09:49 +02:00 |
| pyproject.toml | Prepare for v1.1.1 (#1100) | 2023-10-05 16:09:49 +02:00 |
| README.md | feat(router): refactor API and add openAPI schemas (#53) | 2023-02-03 12:43:37 +01:00 |
| requirements.txt | Preping 1.1.0 (#1066) | 2023-09-27 10:40:18 +02:00 |

Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

Install

```
make install
```

Run

```
make run-dev
```