Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-04-23 07:52:06 +00:00)
The `GPTQWeightsLoader` was structured like this in pseudocode:

    if marlin:
        Set up tensors in a way that GPTQ-Marlin expects
    else:
        Set up tensors in a way that ExLlama/GPTQ/AWQ expect

However, the GPTQ-Marlin implementation details should really be in the `marlin` module. So move the former part out to a separate `GPTQMarlinWeightsLoader`.
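A minimal sketch of the refactor described above. The class names other than `GPTQMarlinWeightsLoader` are simplified stand-ins, and the tensor handling is stubbed out; this only illustrates moving the Marlin branch into its own loader class, not TGI's actual API.

```python
# Before the refactor: one loader branches on the quantization backend
# inside its load method.
class GPTQWeightsLoaderBefore:
    def __init__(self, use_marlin: bool):
        self.use_marlin = use_marlin

    def load_weights(self, tensors: dict) -> dict:
        if self.use_marlin:
            # Marlin-specific tensor layout lived inline here.
            return {name: f"marlin:{t}" for name, t in tensors.items()}
        # ExLlama/GPTQ/AWQ layout.
        return {name: f"gptq:{t}" for name, t in tensors.items()}


# After the refactor: the Marlin-specific path is its own loader,
# which can live in the `marlin` module.
class GPTQMarlinWeightsLoader:
    def load_weights(self, tensors: dict) -> dict:
        # Set up tensors in the layout GPTQ-Marlin expects (stubbed).
        return {name: f"marlin:{t}" for name, t in tensors.items()}


class GPTQWeightsLoader:
    def load_weights(self, tensors: dict) -> dict:
        # Only the ExLlama/GPTQ/AWQ layout remains here.
        return {name: f"gptq:{t}" for name, t in tensors.items()}


def get_weights_loader(use_marlin: bool):
    # Backend selection now happens once, at loader construction time,
    # instead of on every load call.
    return GPTQMarlinWeightsLoader() if use_marlin else GPTQWeightsLoader()
```

The design benefit is that each loader class owns exactly one tensor layout, so backend-specific details stay in their own module and the selection logic is reduced to picking a class.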