
Text-generation-inference - Neuron backend for AWS Trainium and Inferentia2

Description

This is the TGI backend for the AWS Trainium and Inferentia family of chips, which are programmed through the AWS Neuron SDK.

This backend is composed of:

  • the AWS Neuron SDK,
  • the legacy v2 TGI launcher and router,
  • a Neuron-specific inference server for text generation.

Usage

Please refer to the official documentation.
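As a quick illustration only (a sketch, not the authoritative instructions: the image tag, device mapping, and model placeholder below are assumptions to verify against the official documentation), the server is typically started as a Docker container on a Trainium or Inferentia2 instance and queried over HTTP:

```shell
# Start the server (image tag and --device flag are assumptions --
# check the official documentation for the exact invocation).
docker run -p 8080:80 \
    -v $(pwd)/data:/data \
    --device=/dev/neuron0 \
    ghcr.io/huggingface/text-generation-inference:latest-neuron \
    --model-id <model-id>

# Query it through the standard TGI generate endpoint:
curl 127.0.0.1:8080/generate \
    -X POST \
    -d '{"inputs":"What is Deep Learning?","parameters":{"max_new_tokens":20}}' \
    -H 'Content-Type: application/json'
```

Replace <model-id> with a Hub model compatible with the Neuron backend.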

Build your own image

The simplest way to build TGI with the Neuron backend is to use the provided Makefile:

$ make -C backends/neuron image

Alternatively, you can build the image directly from the top-level directory of the repository, using a command similar to the one defined in the Makefile under the image target.
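For reference, such a manual build boils down to a docker build invocation along these lines (a sketch only: the Dockerfile name and image tag are assumptions, and the Makefile's image target remains authoritative):

```shell
# Hedged sketch of a manual build from the repository root; the Dockerfile
# name and tag are assumptions -- consult the Makefile's image target for
# the exact flags.
docker build --rm -f Dockerfile.neuron -t text-generation-inference:latest-neuron .
```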