Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-11-18 23:15:59 +00:00)
Latest commit (squashed):

* Working loading state.
* Preprocessing.
* Working state? (Broke idefics1 temporarily.)
* Cleaner condition.
* Fix idefics.
* Updating config, removing TODO.
* Mllama.
* Upgrade transformers to 4.45.
* Flashing mllama.
* Starting to get there.
* Working state.
* Integration tests for mllama (cutting to 10 tokens because there seems to be instability afterwards, meaning the size of the batch matters).
* Updating model link.
* Earlier assert.
* Fix vlm?
* Remove log.
* Force ignore all images but last.
* Default dtype bfloat16.
* Update integration test after switch to bf16.
* Remove dead code.
* Removed dead code.
* Upgrade the flake to latest transformers/tokenizers.
* Move to hf tgi-nix.
* Upgrade to 0.5.0.
Directory contents:

Directories:

* custom_kernels
* exllama_kernels
* exllamav2_kernels
* tests
* text_generation_server

Files:

* .gitignore
* Makefile
* Makefile-awq
* Makefile-eetq
* Makefile-exllamav2
* Makefile-fbgemm
* Makefile-flash-att
* Makefile-flash-att-v2
* Makefile-flashinfer
* Makefile-lorax-punica
* Makefile-selective-scan
* Makefile-vllm
* poetry.lock
* pyproject.toml
* README.md
* requirements_cuda.txt
* requirements_intel.txt
* requirements_rocm.txt
# Text Generation Inference Python gRPC Server

A Python gRPC server for Text Generation Inference.

## Install

    make install

## Run

    make run-dev
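The `install` and `run-dev` targets are defined in the repository's `Makefile`. As a rough sketch only (the recipe bodies, the CLI entry point, and the model placeholder below are assumptions, not copied from the repository), such targets typically pair an editable install with a local dev launch:

```makefile
# Hypothetical sketch -- the real recipes live in the repo's Makefile.

install:
	# Editable install of the server package so local edits take effect
	pip install -e .

run-dev:
	# Launch the gRPC server through its CLI module (entry point name is
	# an assumption); <model-id> is a placeholder for a Hub model id.
	python -m text_generation_server.cli serve <model-id>
```

The actual `run-dev` target in the repository may additionally set environment variables or sharding flags for multi-GPU development; consult the real `Makefile` for the authoritative recipes.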