Text-generation-inference - Neuron backend for AWS Trainium and Inferentia2

Description

This is the TGI backend for the AWS Trainium and Inferentia families of chips.

This backend is composed of:

  • the AWS Neuron SDK,
  • the legacy v2 TGI launcher and router,
  • a Neuron-specific inference server for text generation.

Usage

Please refer to the official documentation.
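The official documentation covers deployment in detail. As a minimal sketch, launching the pre-built Neuron image on an Inferentia2 instance might look like the following; the image tag, model id, mounted cache path, and device path are illustrative assumptions, not verified values:

```shell
# Hypothetical launch sketch: expose the server on port 8080, mount a local
# cache directory, and pass a Neuron device through to the container.
# All names below (tag, model id, device) are assumptions for illustration.
docker run -p 8080:80 \
  -v $(pwd)/data:/data \
  --device=/dev/neuron0 \
  -e HF_TOKEN=${HF_TOKEN} \
  ghcr.io/huggingface/text-generation-inference:latest-neuron \
  --model-id Qwen/Qwen2.5-0.5B-Instruct
```

Once the container is up, the server speaks the standard TGI HTTP API, so requests can be sent to the mapped port as with any other TGI deployment.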

Build your own image

The simplest way to build TGI with the Neuron backend is to use the provided Makefile:

$ make -C backends/neuron image

Alternatively, you can build the image directly from the top directory using a command similar to the one defined in the Makefile under the image target.
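As a sketch of that alternative, assuming the backend's Dockerfile lives at backends/neuron/Dockerfile (the image tag below is an illustrative choice, not the one the Makefile necessarily uses):

```shell
# Build the Neuron image from the repository root, pointing docker at the
# backend's Dockerfile. Path and tag mirror what a Makefile `image` target
# would typically do; treat both as assumptions.
docker build --rm \
  -f backends/neuron/Dockerfile \
  -t text-generation-inference:latest-neuron \
  .
```

Building from the top directory matters because the Dockerfile needs the shared launcher and router sources that live outside backends/neuron.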