Text-generation-inference - Neuron backend for AWS Trainium and Inferentia2

Description

This is the TGI backend for the AWS Trainium and Inferentia family of chips.

This backend is composed of:

  • the AWS Neuron SDK,
  • the legacy v2 TGI launcher and router,
  • a Neuron-specific inference server for text generation.

Usage

Please refer to the official documentation.

Build your own image

The simplest way to build TGI with the Neuron backend is to use the provided Makefile:

$ make -C backends/neuron image
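Once built, the image can be launched like any other TGI container. The sketch below is illustrative only: the image tag, model ID, port mapping, and device path are assumptions, not values taken from this repository, so adapt them to your environment.

```shell
# Illustrative invocation; tag, model ID, and device path are assumptions.
# Expose the server on port 8080, cache model data in ./data, and pass
# through the first Neuron device on the host.
docker run -p 8080:80 \
  -v "$PWD/data:/data" \
  --device=/dev/neuron0 \
  tgi-neuron \
  --model-id Qwen/Qwen2.5-0.5B
```

Multiple `--device` flags can be passed when the host exposes more than one Neuron device.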

Alternatively, you can build the image directly from the top-level directory using a command similar to the one defined in the Makefile under the image target.
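For reference, such a command might look like the following. The Dockerfile name and tag here are assumptions; check the Makefile's image target for the exact invocation and any required build arguments.

```shell
# Sketch only: confirm the Dockerfile name and build arguments against
# the Makefile's image target before using.
docker build -t tgi-neuron -f Dockerfile.neuron .
```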