Commit Graph

874 Commits

Author SHA1 Message Date
Nicholas Broad
c46a223a6d single char ` addition for docs (#1989)
I think this will stop the docs from being weirdly formatted. All the
sections after `MAX_TOP_N_TOKENS` don't show up in the navigation bar on the right
(https://huggingface.co/docs/text-generation-inference/basic_tutorials/launcher#maxtopntokens).

@merveenoyan

Co-authored-by: Nicolas Patry <patry.nicolas@protonmail.com>
2024-09-24 03:26:17 +00:00
Nicolas Patry
d1473fab70 Fixing exl2 scratch buffer. (#1990)
2024-09-24 03:26:17 +00:00
Nicolas Patry
bdc676f65c Purely refactors paged/attention into layers/attention and makes hardware differences more obvious with one file per hardware. (#1986)
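A minimal sketch of the layout (the module names and detection logic here are illustrative assumptions, not the actual TGI source): the `layers/attention` package picks exactly one hardware-specific implementation at import time, so each hardware's quirks live in its own file.

```python
# Hypothetical layers/attention/__init__.py: import one backend, chosen
# by the hardware PyTorch was built for.
import torch

if torch.version.hip is not None:
    # AMD ROCm build: ROCm-specific attention kernels.
    from .rocm import attention, paged_attention
elif torch.version.cuda is not None:
    # NVIDIA CUDA build: CUDA-specific attention kernels.
    from .cuda import attention, paged_attention
else:
    raise ImportError("No supported attention backend for this hardware")
```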
2024-09-24 03:19:39 +00:00
fxmarty
dd2d46d9d1 Update documentation version to 2.0.4 (#1980)
As per title

cc @Narsil
2024-09-24 03:19:39 +00:00
Daniël de Kok
f6c5e078d5 Gemma GPTQ checks: skip logprob checks
This test fails somewhat regularly due to non-determinism; its primary
purpose is to verify that we correctly load a model whose default dtype
is not `float16`.
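A hedged sketch of the idea (the helper and data shapes are illustrative, not the actual test code): keep asserting on the generated text, which is stable, but drop the logprob fields, which drift between runs.

```python
# Compare generation responses while ignoring non-deterministic logprobs.
def strip_logprobs(response: dict) -> dict:
    """Return a copy of a generation response without logprob values."""
    cleaned = dict(response)
    cleaned["tokens"] = [
        {k: v for k, v in tok.items() if k != "logprob"}
        for tok in response.get("tokens", [])
    ]
    return cleaned


def test_gemma_gptq_loads_correctly():
    # Hypothetical snapshot vs. fresh response: texts must match exactly,
    # logprobs are allowed to differ between runs.
    snapshot = {"generated_text": "Hi", "tokens": [{"text": "Hi", "logprob": -0.12}]}
    response = {"generated_text": "Hi", "tokens": [{"text": "Hi", "logprob": -0.13}]}
    assert strip_logprobs(response) == strip_logprobs(snapshot)
```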
2024-09-24 03:19:39 +00:00
Daniël de Kok
628d6a13da Add support for exl2 quantization
Mostly straightforward; changes to existing code (a sketch of both ideas
follows below):

* Wrap quantizer parameters in a small wrapper to avoid passing
  around untyped tuples and needing to repack them as a dict.
* Move scratch-space computation to warmup, because we need the
  maximum input sequence length to avoid allocating huge
  scratch buffers that OOM.
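A minimal sketch of both changes (class and method names are illustrative assumptions, not the actual TGI code):

```python
from dataclasses import dataclass

import torch


@dataclass
class Exl2Weight:
    """Typed wrapper for the exl2 quantizer tensors that previously
    travelled around as an untyped tuple."""

    q_weight: torch.Tensor
    q_scale: torch.Tensor
    q_invperm: torch.Tensor

    @property
    def device(self) -> torch.device:
        return self.q_weight.device


class ScratchSpace:
    """Scratch buffer sized at warmup, once the maximum input length is known."""

    def __init__(self) -> None:
        self.buffer: torch.Tensor | None = None

    def allocate(self, max_input_length: int, hidden_size: int, device: torch.device) -> None:
        # Deferring this to warmup avoids allocating a huge worst-case
        # buffer up front, which could OOM.
        self.buffer = torch.empty(
            (max_input_length, hidden_size), dtype=torch.float16, device=device
        )
```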
2024-09-24 03:19:39 +00:00
drbh
4dca35fc62 feat: adjust attn weight loading logic (#1975)
This PR updates `load_attention` to prefer loading model-type-specific
attention weights. Additionally, there were two cases where
`TensorParallelColumnLinear.load_multi` was called; this reduces them to
a single path.
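A hedged sketch of the dispatch (the model types and method signatures below follow the PR description but are not guaranteed to match the exact TGI source):

```python
# Import path may differ by TGI version.
from text_generation_server.layers import TensorParallelColumnLinear


def load_attention(config, prefix: str, weights):
    # Prefer a model-specific fused projection when the model type has one.
    if config.model_type == "phi3":
        return TensorParallelColumnLinear.load_qkv(
            config, prefix=f"{prefix}.qkv_proj", weights=weights, bias=False
        )
    if config.model_type == "baichuan":
        return TensorParallelColumnLinear.load_qkv(
            config, prefix=f"{prefix}.W_pack", weights=weights, bias=False
        )
    # Otherwise fall through to the single remaining generic path.
    return TensorParallelColumnLinear.load_multi(
        config,
        prefixes=[f"{prefix}.q_proj", f"{prefix}.k_proj", f"{prefix}.v_proj"],
        dim=0,
        weights=weights,
        bias=False,
    )
```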
2024-09-24 03:16:16 +00:00
Nicolas Patry
2b204f0479 Fixing the text part from tokenizer endpoint. (#1967)
2024-09-24 03:16:16 +00:00
Daniël de Kok
9a1475d816 Fix (non-container) pytest stdout buffering-related lock-up
Two issues:

1. When one of the stdout/stderr pipe buffers of a process started
   with `subprocess.Popen` is full, the process can get blocked until
   the buffer is drained.
2. Calling `Popen.wait` can deadlock when called before draining
   the pipe buffers (if they are full).

This avoids the issue altogether by giving the child process a
temporary file to write to.
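A minimal sketch of the fix (the helper is illustrative, not the actual test harness):

```python
import subprocess
import tempfile


def run_child(args: list[str]) -> str:
    # Write child stdout/stderr to a temp file instead of a pipe: the OS
    # drains output into the file, so the child can never block on a full
    # pipe buffer and Popen.wait() cannot deadlock.
    with tempfile.TemporaryFile("w+") as log:
        proc = subprocess.Popen(args, stdout=log, stderr=subprocess.STDOUT)
        proc.wait()
        log.seek(0)
        return log.read()


if __name__ == "__main__":
    print(run_child(["echo", "hello"]))
```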
2024-09-24 03:16:16 +00:00
Nicolas Patry
cbd5d67101 Upgrade to Axum 0.7 and Hyper 1.0 (Breaking change: disabled ngrok tunneling). (#1959)
- Axum upgraded to Hyper 1.0 and most of the ecosystem has switched, so
it's our turn now.
- [ngrok-rust](https://github.com/ngrok/ngrok-rust/pull/137/files)
hasn't yet, and hasn't for several months now, so let's disable the
feature for the time being.

2024-09-24 03:16:16 +00:00
Moritz Laurer
e3d4483f9b fix small typo and broken link (#1958)
Fix a typo, fix a broken link, and add one sentence in the guidance docs
to make the word "grammar" less abstract.

@drbh
2024-09-24 03:14:53 +00:00
drbh
1213b6a817 Processor config chat template (#1954)
This PR loads the `processor_config` similarly to the `tokenizer_config`
and uses the processor config's `chat_template` if the tokenizer config
does not include one. These changes enable chat with idefics2.
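A minimal sketch of the fallback (the helper is illustrative; the file names are the standard Hub artifacts):

```python
import json
from pathlib import Path
from typing import Optional


def resolve_chat_template(model_dir: Path) -> Optional[str]:
    # Prefer the tokenizer config's chat template; fall back to the
    # processor config's, which is where idefics2 stores it.
    for name in ("tokenizer_config.json", "processor_config.json"):
        path = model_dir / name
        if path.exists():
            template = json.loads(path.read_text()).get("chat_template")
            if template is not None:
                return template
    return None
```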
2024-09-24 03:14:53 +00:00
Daniël de Kok
1439b26cd4 Fix GPTQ for models which do not have float16 at the default dtype (simpler) (#1953)
Before this change, GPTQ models would not work if the model's default
data type is not `float16`. For example, Gemma GPTQ models would fail
because the default dtype of Gemma is `bfloat16`. There are two issues:

1. If the default `dtype` is not `float16`, the quantizer's `float16`
   parameters get converted to that dtype, and the kernels cannot deal
   with non-`float16` types.
2. The same applies to inputs of quantized ops.

This is resolved by setting the dtype of gptq/awq-quantized models to
`float16`.

Simpler version of #1951.
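A minimal sketch of the override (the helper is illustrative, not the actual TGI loading code):

```python
import torch


def resolve_dtype(requested: torch.dtype | None, quantize: str | None) -> torch.dtype:
    if quantize in ("gptq", "awq"):
        # The GPTQ/AWQ kernels only support float16 weights and inputs,
        # so force float16 even if the model's default is e.g. bfloat16.
        return torch.float16
    return requested if requested is not None else torch.float16
```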

2024-09-24 03:14:53 +00:00
Daniël de Kok
742ef9b8e5 Fix (flash) Gemma prefix and enable tests 2024-09-24 03:14:53 +00:00
Nicolas Patry
479f1953ba Fix seeded output. (#1949)
2024-09-24 03:14:53 +00:00
yuanwu
92a1e0fbae Align the source code with main branch 2.0.4
Signed-off-by: yuanwu <yuan.wu@intel.com>
2024-09-24 03:06:55 +00:00
regisss
0deebe7012 Update README with Docker image v2.0.5 2024-09-07 17:56:52 +00:00
regisss
bf9865e956
Upgrade to Optimum Habana v1.13.2 (#222) 2024-09-07 19:52:59 +02:00
Thanaji Rao Thakkalapelli
a4f39a1cae
Update README.md with changes related to LLava-next multi card support (#221) 2024-09-07 17:46:21 +02:00
Thanaji Rao Thakkalapelli
ad7c620f0f
Llava-next: Added flash_attention_recompute option (#220) 2024-09-06 22:20:07 +02:00
yuanwu2017
2299b739fe
Only Apply the TP in language_model (#219)
Signed-off-by: yuanwu <yuan.wu@intel.com>
2024-09-06 22:19:24 +02:00
Thanaji Rao Thakkalapelli
73d93bdd93
Downgrade sympy to match SynapseAI 1.18 base image (#215) 2024-08-28 17:45:44 +02:00
Thanaji Rao Thakkalapelli
fde061ccf8
Updated docker image version to 2.0.4 (#212)
Co-authored-by: Thanaji Thakkalapelli <tthakkalapelli@tthakkalapelli-vm-u22.habana-labs.com>
2024-08-27 10:14:27 +02:00
yuanwu2017
2985503900
llava-next Fp8 (#209)
Signed-off-by: yuanwu <yuan.wu@intel.com>
Co-authored-by: Thanaji Rao Thakkalapelli <tthakkalapelli@habana.ai>
Co-authored-by: regisss <15324346+regisss@users.noreply.github.com>
2024-08-26 16:53:08 +02:00
Wang, Chang
55d60a103c
Add qwen2 fp8 support (#210)
Signed-off-by: changwang <changwang@habana.ai>
Co-authored-by: changwang <changwang@habana.ai>
2024-08-26 11:02:58 +02:00
Thanaji Rao Thakkalapelli
e33db1877c
Updated Readme to use flash attention for llama (#200) 2024-08-26 11:01:11 +02:00
Vidya Galli
c925bd2872
Undo disable of hpu graphs for starcoder (#201) 2024-08-26 10:58:01 +02:00
Thanaji Rao Thakkalapelli
0c3239e710
Enable quantization with INC (#203) 2024-08-26 10:55:37 +02:00
Sun Choi
ea48ae169a
Make prefill time of static benchmark correct (#214) 2024-08-26 10:51:28 +02:00
yuanwu2017
a8cead1f92
Upgrade SynapseAI version to 1.17.0 (#208)
Signed-off-by: yuanwu <yuan.wu@intel.com>
Co-authored-by: Thanaji Rao Thakkalapelli <tthakkalapelli@habana.ai>
Co-authored-by: regisss <15324346+regisss@users.noreply.github.com>
2024-08-26 10:49:29 +02:00
yuanwu2017
369e499a66
Simplify the warmup process (#173)
Signed-off-by: yuanwu <yuan.wu@intel.com>
2024-08-15 12:04:14 +02:00
Sun Choi
e3f0f85b70
Pad token handling for Llama3.1 (#199) 2024-08-13 00:00:41 +02:00
regisss
c09f5bc930
Merge pull request #187 from yuanwu2017/v2.0.4 2024-08-12 23:59:03 +02:00
Abhilash Majumder
d403575c43
Make bf16 default for hpu, fix script (#205) 2024-08-11 10:48:35 +02:00
Sun Choi
cf2ff5a1dd
Revert PR#178 (#191) 2024-08-11 09:29:30 +02:00
regisss
a41e974c3b
Merge branch 'habana-main' into v2.0.4 2024-08-10 12:54:00 +02:00
geoffrey papilion
e36a9c57f0
Code expects newer huggingface_hub versions; tested, and this resolves issues with the streaming response format (#190)
Co-authored-by: Geoffrey Papilion <gpapilion@ebay.com>
2024-08-08 13:07:27 +02:00
Jacek Czaja
256a97231b
Removed redundant, crash-causing regions from being subject to Torch compile (#194)
Co-authored-by: Jacek Czaja <jczaja@habana.ai>
2024-08-08 13:06:20 +02:00
Vidya Galli
4dc67e4ef3
Version check, doc fixes (#182) 2024-08-07 22:09:51 +02:00
Abhilash Majumder
9b71343328
Integrate flash attention for starcoder2 tgi through habana and some fixes, enabling (#198) 2024-08-07 22:06:05 +02:00
yuanwu
d34ffc4fe9 Refile the hpu warmup
Signed-off-by: yuanwu <yuan.wu@intel.com>
2024-08-02 04:36:59 +00:00
yuanwu
05c13c89de Remove useless modification
Signed-off-by: yuanwu <yuan.wu@intel.com>
2024-07-30 10:05:38 +00:00
yuanwu
3f0f0e0825 Add the habana profiler
Signed-off-by: yuanwu <yuan.wu@intel.com>
2024-07-30 03:53:46 +00:00
yuanwu
db0b6567e1 Remove log
Signed-off-by: yuanwu <yuan.wu@intel.com>
2024-07-29 22:02:42 +00:00
yuanwu
588a014551 Enable llava-next
Signed-off-by: yuanwu <yuan.wu@intel.com>
2024-07-29 21:55:31 +00:00
yuanwu2017
d3155d6f41
Merge branch 'habana-main' into v2.0.4 2024-07-17 13:45:15 +08:00
yuanwu
b34edc2ee9 Upgrade to 2.0.4
Signed-off-by: yuanwu <yuan.wu@intel.com>
2024-07-17 05:36:58 +00:00
Nicolas Patry
179336888e Modifying the version number. 2024-07-17 05:36:58 +00:00
Nicolas Patry
42b0847a80 Fixing codellama loads by using purely AutoTokenizer. (#1947)
- The need for the slow-tokenizer default stems from back
  when Llama 1 was introduced and not all the flags were
  supported in `tokenizers` (a minimal sketch of the new behavior
  follows below).

- Fixes #1891
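A minimal illustration of the new behavior (the repo id here is just an example):

```python
from transformers import AutoTokenizer

# AutoTokenizer resolves the right (fast) tokenizer for a CodeLlama
# checkpoint from the repo's tokenizer files, instead of forcing the
# slow LlamaTokenizer.
tokenizer = AutoTokenizer.from_pretrained("codellama/CodeLlama-7b-hf")
print(type(tokenizer).__name__)  # a fast tokenizer class, e.g. *TokenizerFast
```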

2024-07-17 05:36:58 +00:00
Nicolas Patry
075092315e Improving the logging system. (#1938)
- Added a debug log for speculated ids (helps to see the quality of a
  speculator in the logs).
- Remove newlines from child-process logs when re-emitting them in
  non-JSON mode.
- Made the standard level closer to what's expected (only our binaries'
  level).
- Propagate that level correctly to the shard (it was forced to INFO);
  a sketch of the propagation idea follows below.
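The launcher itself is Rust; the level-propagation idea is sketched here in Python for brevity, with an assumed `LOG_LEVEL` environment variable (the actual mechanism may differ):

```python
import logging
import os
import subprocess
import sys


def spawn_shard(cmd: list[str], level: str) -> subprocess.Popen:
    # Pass the launcher's log level down to the child shard instead of
    # letting it default to INFO.
    env = dict(os.environ, LOG_LEVEL=level)
    return subprocess.Popen(cmd, env=env)


def configure_shard_logging() -> None:
    # In the shard: honor the propagated level.
    level = os.environ.get("LOG_LEVEL", "INFO").upper()
    logging.basicConfig(level=getattr(logging, level, logging.INFO), stream=sys.stderr)
```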

2024-07-17 05:36:58 +00:00