text-generation-inference/server/Makefile
Nicolas Patry de19e7e844
Moving to uv instead of poetry. (#2919)
* Moving to `uv` instead of `poetry`.

More standard, faster, and with a seemingly better lockfile.

* Creating venv if not created.

* Create the venv.

* Fix?

* Fixing the test by activating the environment?

* Install system?

* Add the cli entry point.

* Docker install on system.

* Monkeying this...

* `--system` is redundant.

* Trying to force-include this pb folder.

* Trying to check that pb is imported correctly.

* Editable install necessary?

* Non-editable?

* Editable it is.
2025-01-17 12:32:00 +01:00
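
A minimal sketch of the uv flow the commit message describes; the commands and the .venv path are uv defaults, not taken from this repository's CI:

    uv venv                       # create .venv if it does not already exist
    source .venv/bin/activate     # or prefix commands with `uv run`
    uv pip install -e ".[dev]"    # editable install of the server with the dev extras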

include Makefile-flash-att
include Makefile-flash-att-v2
include Makefile-vllm
include Makefile-awq
include Makefile-eetq
include Makefile-selective-scan
include Makefile-lorax-punica
include Makefile-exllamav2
include Makefile-flashinfer
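
# Install the dev extras with uv and run the tests that are not marked private.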
unit-tests:
pip install -U pip uv
uv pip install -e ".[dev]"
pytest -s -vv -m "not private" tests
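
# Generate the Python gRPC/protobuf stubs for ../proto/v3/generate.proto into
# text_generation_server/pb, then rewrite their absolute `import ..._pb2` lines
# into relative imports so the generated modules resolve inside the package.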
gen-server:
	# Compile protos
	pip install -U pip uv
	uv pip install ".[gen]"
	mkdir text_generation_server/pb || true
	python -m grpc_tools.protoc -I../proto/v3 --python_out=text_generation_server/pb \
		--grpc_python_out=text_generation_server/pb --mypy_out=text_generation_server/pb ../proto/v3/generate.proto
	find text_generation_server/pb/ -type f -name "*.py" -print0 -exec sed -i -e 's/^\(import.*pb2\)/from . \1/g' {} \;
	touch text_generation_server/pb/__init__.py
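
# Base install on top of the generated protos, pulling in the runtime extras.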
install-server: gen-server
	uv pip install -e ".[accelerate, compressed-tensors, quantize, peft, outlines]"
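
# The default install builds the CUDA flavour.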
install: install-cuda
	echo "Installed server"
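
# CUDA-specific attention/quantization extras plus a pinned NCCL wheel.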
install-cuda: install-server install-flash-attention-v2-cuda install-flash-attention
	uv pip install -e ".[attention,bnb,marlin,moe]"
	uv pip install nvidia-nccl-cu12==2.22.3
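
# ROCm flavour: the flash-attention v2 and vLLM kernels come from the included ROCm sub-makefiles.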
install-rocm: install-server install-flash-attention-v2-rocm install-vllm-rocm