
Text-generation-inference - Neuron backend for AWS Trainium and Inferentia2

Description

This is the TGI backend for the AWS Trainium and Inferentia family of chips, built on the AWS Neuron SDK.

This backend is composed of:

  • the AWS Neuron SDK,
  • the legacy v2 TGI launcher and router,
  • a Neuron-specific inference server for text generation.

Usage

Please refer to the official documentation.
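
For reference, a Docker invocation on a Neuron-equipped instance typically resembles the sketch below. The image tag, Neuron device path, and model id are illustrative assumptions; check the official documentation for the exact values matching your TGI and Neuron SDK versions.

$ # Sketch only: tag and device path are assumptions, <model-id> is a placeholder
$ docker run -p 8080:80 \
      -v $(pwd)/data:/data \
      --device=/dev/neuron0 \
      ghcr.io/huggingface/text-generation-inference:3.3.5-neuron \
      --model-id <model-id>

Once the container is running, generation requests can be sent to the standard TGI HTTP API on the mapped port.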

Build your own image

The simplest way to build TGI with the Neuron backend is to use the provided Makefile:

$ make -C backends/neuron image

Alternatively, you can build the image directly from the top directory using a command similar to the one defined in the Makefile under the image target.
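
As a sketch, and assuming a Dockerfile.neuron at the repository root and the image tag conventions used by the Makefile, a direct invocation would look roughly like:

$ # Sketch only: file name, VERSION value, and tag are assumptions taken from the Makefile
$ docker build --rm -f Dockerfile.neuron \
      --build-arg VERSION=3.3.5 \
      -t text-generation-inference:3.3.5-neuron .

The authoritative build arguments are those defined under the image target in backends/neuron/Makefile and may differ between versions.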