text-generation-inference/server/text_generation_server/layers
Latest commit: 47447ef017 by Daniël de Kok (2024-08-01 17:03:28 +02:00)
Unify attention output handling (#2343)

- Always return the hidden states.
- Create the output tensor inside the `attention` and `paged_attention`
  functions.

This removes the difference in how the output is handled between
attention (output parameter) and paged attention (return value). It
also removes the assumption that the attention implementation can
write to an output tensor (in preparation for FlashInfer).
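The change described above can be sketched roughly as follows. This is a minimal illustration of the two calling conventions, not the actual TGI code: the function names, shapes, and the plain softmax-attention math are assumptions for the example, and numpy stands in for the real CUDA kernels.

```python
import numpy as np


def softmax(x, axis=-1):
    # Numerically stable softmax over the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)


def attention_old(q, k, v, out):
    # Old style: the caller pre-allocates `out` and the attention
    # implementation writes into it (output parameter).
    scores = softmax(q @ k.swapaxes(-2, -1) / np.sqrt(q.shape[-1]))
    np.matmul(scores, v, out=out)


def attention_new(q, k, v):
    # Unified style: the output tensor is created inside the function
    # and returned, so backends that cannot write into a caller-provided
    # buffer (e.g. FlashInfer) fit the same interface.
    scores = softmax(q @ k.swapaxes(-2, -1) / np.sqrt(q.shape[-1]))
    return scores @ v
```

Both conventions compute the same result; unifying on the return-value form means call sites no longer need to know whether the backend can write in place.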
Name                 Last commit                                                          Date
attention            Unify attention output handling (#2343)                              2024-08-01 17:03:28 +02:00
awq                  feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00
gptq                 Handle GPTQ-Marlin loading in GPTQMarlinWeightLoader (#2300)         2024-07-31 13:08:41 +02:00
marlin               Handle GPTQ-Marlin loading in GPTQMarlinWeightLoader (#2300)         2024-07-31 13:08:41 +02:00
__init__.py          feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00
bnb.py               feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00
conv.py              Refactor layers. (#1866)                                             2024-05-13 12:44:30 +02:00
eetq.py              feat(fp8): use fbgemm kernels and load fp8 weights directly (#2248)  2024-07-20 19:02:04 +02:00
exl2.py              Add support for Deepseek V2 (#2224)                                  2024-07-19 17:23:20 +02:00
fp8.py               feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00
layernorm.py         Removing IPEX_AVAIL. (#2115)                                         2024-06-25 13:20:57 +02:00
linear.py            feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00
lora.py              feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00
medusa.py            fix: use path inside of speculator config (#1935)                    2024-05-22 20:46:29 +02:00
mlp.py               MLPSpeculator. (#1865)                                               2024-05-14 12:33:18 +02:00
rotary.py            feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00
speculative.py       feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00
tensor_parallel.py   feat: add ruff and resolve issue (#2262)                             2024-07-26 10:29:09 -04:00