text-generation-inference/server/text_generation_server/models
Latest commit 75aed8aed5 by Daniël de Kok: Fix Phi-2 with tp>1 (#2003)
# What does this PR do?

We were using the wrong parallelism in the up-projection.
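For context, here is a minimal, single-process sketch of the Megatron-style split this relies on; the `ShardedMLP` class below is a hypothetical illustration, not the actual TGI layers. The up-projection should be column-parallel (each rank owns a slice of the intermediate features), while the down-projection is row-parallel (each rank produces a partial output that is summed across ranks).

```python
# Hypothetical sketch only, not TGI's implementation: column-parallel up-projection,
# row-parallel down-projection, shown for a single process.
import torch
import torch.nn as nn
import torch.nn.functional as F


class ShardedMLP(nn.Module):
    def __init__(self, hidden: int, intermediate: int, tp_size: int, rank: int):
        super().__init__()
        shard = intermediate // tp_size
        # Column-parallel up-projection: output features are split across ranks.
        self.up_proj = nn.Linear(hidden, shard)
        # Row-parallel down-projection: input features are split across ranks;
        # only rank 0 adds the bias so it is not duplicated after the all-reduce.
        self.down_proj = nn.Linear(shard, hidden, bias=(rank == 0))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        partial = self.down_proj(F.gelu(self.up_proj(x)))
        # In a real tp>1 run the partial outputs are summed with
        # torch.distributed.all_reduce(partial); omitted here.
        return partial
```

With a single shard the two layouts collapse to the same plain linear layer, which is why sharding the up-projection along the wrong dimension only surfaces when tp>1.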

## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @

@OlivierDehaene OR @Narsil -->
Committed: 2024-09-24 03:27:14 +00:00
| Name | Last commit | Date |
|------|-------------|------|
| `custom_modeling` | Fix Phi-2 with tp>1 (#2003) | 2024-09-24 03:27:14 +00:00 |
| `__init__.py` | Purely refactors paged/attention into layers/attention and makes hardware differences more obvious with 1 file per hardware. (#1986) | 2024-09-24 03:19:39 +00:00 |
| `bloom.py` | Align the source code with main branch 2.0.4 | 2024-09-24 03:06:55 +00:00 |
| `cache_manager.py` | Refactor layers. (#1866) | 2024-07-17 05:36:58 +00:00 |
| `causal_lm.py` | Align the source code with main branch 2.0.4 | 2024-09-24 03:06:55 +00:00 |
| `flash_causal_lm.py` | Remove useless modification | 2024-07-30 10:05:38 +00:00 |
| `flash_cohere.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `flash_dbrx.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `flash_gemma.py` | Fix (flash) Gemma prefix and enable tests | 2024-09-24 03:14:53 +00:00 |
| `flash_gpt2.py` | MI300 compatibility (#1764) | 2024-07-17 05:36:58 +00:00 |
| `flash_llama.py` | Add support for exl2 quantization | 2024-09-24 03:19:39 +00:00 |
| `flash_mistral.py` | Align the source code with main branch 2.0.4 | 2024-09-24 03:06:55 +00:00 |
| `flash_mixtral.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `flash_neox.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `flash_phi.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `flash_qwen2.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `flash_rw.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `flash_santacoder.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `flash_starcoder2.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `galactica.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `globals.py` | Purely refactors paged/attention into layers/attention and makes hardware differences more obvious with 1 file per hardware. (#1986) | 2024-09-24 03:19:39 +00:00 |
| `gpt_neox.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `idefics2.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `idefics_causal_lm.py` | Adding Llava-Next (Llava 1.6) with full support. (#1709) | 2024-04-25 14:30:55 +00:00 |
| `idefics.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `llava_next.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `mamba.py` | Fix TunableOp bug (#1920) | 2024-07-17 05:36:58 +00:00 |
| `model.py` | Align the source code with main branch 2.0.4 | 2024-09-24 03:06:55 +00:00 |
| `mpt.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `opt.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `pali_gemma.py` | Pali gemma modeling (#1895) | 2024-07-17 05:36:58 +00:00 |
| `phi.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `rw.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `santacoder.py` | Align the source code with main branch 2.0.4 | 2024-09-24 03:06:55 +00:00 |
| `seq2seq_lm.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `t5.py` | MLPSpeculator. (#1865) | 2024-07-17 05:36:58 +00:00 |
| `types.py` | chore: add pre-commit (#1569) | 2024-04-24 15:32:02 +03:00 |
| `vlm_causal_lm.py` | Align the source code with main branch 2.0.4 | 2024-09-24 03:06:55 +00:00 |