mirror of
https://github.com/huggingface/text-generation-inference.git
synced 2025-04-27 04:52:07 +00:00
Fix a runtime error raised when a Qwen2-VL model is prompted with more than one image. The error was:

```
File "text-generation-inference/server/text_generation_server/models/custom_modeling/qwen2_vl.py", line 459, in get_position_ids
    text_pos_ids = torch.arange(text_length, device=d)
RuntimeError: upper bound and larger bound inconsistent with step sign
```

The error was caused by the `text_length` variable going negative when multiple images triggered multiple iterations of the main loop in `get_position_ids`. The bug is a simple logic mistake: `next_image_pos` is initialized as an offset relative to `current_pos`, but was used as if it were an absolute position from zero.
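The failure mode can be sketched in isolation. The snippet below is a simplified illustration, not the actual model code: the variable names `next_image_pos` and `current_pos` mirror `get_position_ids`, but the loop and the helper function are invented for demonstration. It shows how treating a relative offset as an absolute index produces a negative text length on the second image.

```python
def text_segment_lengths(relative_image_offsets):
    """Illustrative helper: compute the length of each text run
    between image tokens.

    Each entry in relative_image_offsets is an image token's offset
    RELATIVE to current_pos (the position just past the previous
    image), matching how next_image_pos is produced in the loop.
    """
    buggy = []
    fixed = []
    current_pos = 0
    for next_image_pos in relative_image_offsets:
        # Buggy version: treats the relative offset as an absolute
        # position, so from the second image onward the difference
        # can go negative.
        buggy.append(next_image_pos - current_pos)
        # Fixed version: the offset is already relative, so it IS
        # the text length.
        fixed.append(next_image_pos)
        # Advance past the text run and the image token itself.
        current_pos += next_image_pos + 1
    return buggy, fixed


buggy, fixed = text_segment_lengths([5, 3])
print(buggy)  # [5, -3]  -> torch.arange(-3) raises the RuntimeError
print(fixed)  # [5, 3]
```

With a single image `current_pos` is still zero, so both versions agree and the bug stays hidden; only a second image makes the buggy difference negative, at which point `torch.arange(text_length)` fails with "upper bound and larger bound inconsistent with step sign".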
Repository contents:

- custom_kernels/
- exllama_kernels/
- exllamav2_kernels/
- tests/
- text_generation_server/
- .gitignore
- bounds-from-nix.py
- Makefile
- Makefile-awq
- Makefile-eetq
- Makefile-exllamav2
- Makefile-flash-att
- Makefile-flash-att-v2
- Makefile-flashinfer
- Makefile-lorax-punica
- Makefile-selective-scan
- Makefile-vllm
- poetry.lock
- pyproject.toml
- README.md
- requirements_cuda.txt
- requirements_intel.txt
- requirements_rocm.txt
# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference

## Install

```shell
make install
```

## Run

```shell
make run-dev
```