text-generation-inference/server/text_generation_server/models/custom_modeling
zhangsibo1129 1e3ec3c91f
Complete FastLinear.load parameters in OPTDecoder initialization (#1060)
# What does this PR do?



`FastLinear.load` takes four parameters, but the calls below pass only
three. This PR fixes that.

```python
# server/text_generation_server/models/custom_modeling/opt_modeling.py
        if config.word_embed_proj_dim != config.hidden_size:
            self.project_out = FastLinear.load(
                config, prefix="model.decoder.project_out", bias=False
            )
        else:
            self.project_out = None

        if config.word_embed_proj_dim != config.hidden_size:
            self.project_in = FastLinear.load(
                config, prefix="model.decoder.project_in", bias=False
```
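For illustration, here is a minimal stand-in (a hypothetical stub, not TGI's actual class) that shows why the three-argument call fails and what the completed call looks like, assuming the signature is `FastLinear.load(config, prefix, weights, bias)`:

```python
class FastLinear:
    """Hypothetical stub mirroring the load() signature described above."""

    @classmethod
    def load(cls, config, prefix, weights, bias):
        # The real implementation reads the tensor named by `prefix`
        # from `weights`; this stub only records its arguments.
        return (config, prefix, weights, bias)


# The original call omits `weights`, so Python rejects it outright.
try:
    FastLinear.load({}, prefix="model.decoder.project_out", bias=False)
except TypeError as exc:
    print("three-argument call fails:", exc)

# Completed call: all four parameters are supplied.
layer = FastLinear.load(
    {}, prefix="model.decoder.project_out", weights=object(), bias=False
)
```

The missing argument is caught at call time as a `TypeError`, which is why the incomplete calls in `opt_modeling.py` could not succeed.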

## Who can review?

Anyone in the community is free to review the PR once the tests have
passed. Feel free to tag
members/contributors who may be interested in your PR.

2023-09-27 12:25:59 +02:00
__init__.py feat(server): flash santacoder (#153) 2023-04-03 19:06:42 +02:00
bloom_modeling.py feat: format code (#1070) 2023-09-27 12:22:09 +02:00
flash_llama_modeling.py feat: format code (#1070) 2023-09-27 12:22:09 +02:00
flash_neox_modeling.py Adding Rope scaling. (#741) 2023-07-31 15:38:47 +02:00
flash_rw_modeling.py Fix f180 (#951) 2023-08-30 11:09:46 +02:00
flash_santacoder_modeling.py feat(server): Using quantize_config.json instead of GPTQ_BITS env variables. (#671) 2023-07-25 13:00:27 +02:00
idefics_config.py small fix on idefics (#954) 2023-09-01 18:44:34 +02:00
idefics_image_processing.py feat: format code (#1070) 2023-09-27 12:22:09 +02:00
idefics_modeling.py feat: format code (#1070) 2023-09-27 12:22:09 +02:00
idefics_perceiver.py feat: format code (#1070) 2023-09-27 12:22:09 +02:00
idefics_processing.py feat: format code (#1070) 2023-09-27 12:22:09 +02:00
idefics_vision.py feat: format code (#1070) 2023-09-27 12:22:09 +02:00
mpt_modeling.py chore: fix typo in mpt_modeling.py (#737) 2023-07-31 15:43:44 +02:00
neox_modeling.py feat: format code (#1070) 2023-09-27 12:22:09 +02:00
opt_modeling.py Complete FastLinear.load parameters in OPTDecoder initialization (#1060) 2023-09-27 12:25:59 +02:00
t5_modeling.py Fixing t5 loading. (#1042) 2023-09-25 12:22:28 +02:00