Text-generation-inference - Neuron backend for AWS Trainium and Inferentia2

Description

This is the TGI backend for the AWS Trainium and Inferentia family of chips, which are programmed through the AWS Neuron SDK.

This backend is composed of:

  • the AWS Neuron SDK,
  • the legacy v2 TGI launcher and router,
  • a Neuron-specific inference server for text-generation.

Usage

Please refer to the official documentation.

Build your own image

The simplest way to build TGI with the Neuron backend is to use the provided Makefile:

$ make -C backends/neuron image

Alternatively, you can build the image directly from the top-level directory using a command similar to the one defined in the Makefile under the image target.
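As a rough sketch, a direct build might look like the following. The image tag, Dockerfile path, and ulimit value below are assumptions for illustration; check the image target in backends/neuron/Makefile for the exact command. The script only prints the command so you can review it before running it.

```shell
#!/bin/sh
# Assumed tag name -- the Makefile may use a different one.
TAG=text-generation-inference-neuron:latest

# --ulimit raises the open-file limit inside the build, since the Rust
# compilation can otherwise fail with "too many open files" errors.
CMD="docker build --ulimit nofile=100000:100000 -t $TAG -f backends/neuron/Dockerfile ."

# Print the command for review; pipe to sh (or copy it) to execute.
echo "$CMD"
```

Run it from the repository's top-level directory so the build context includes the shared TGI sources that the backend depends on.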