text-generation-inference/server/text_generation_server/models
Yang, Bo 15b3e9ffb0
Directly load GPTBigCode to specified device (#618)

# What does this PR do?
This PR loads GPTBigCode directly onto the specified device, avoiding moving the
model between devices.
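For context, the difference is between materializing the weights on CPU and then copying them to the GPU, versus placing them on the target device while loading. The snippet below is a minimal sketch of that pattern using the standard `transformers` API, not the verbatim diff from `santacoder.py`; the checkpoint name is illustrative, and `device_map` requires the `accelerate` package.

```python
import torch
from transformers import AutoModelForCausalLM

# Illustrative checkpoint name; any GPTBigCode checkpoint behaves the same way.
model_id = "bigcode/gpt_bigcode-santacoder"

# Before: weights are first materialized on CPU, then copied to the GPU.
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, trust_remote_code=True
).to("cuda:0")

# After (the idea behind this PR): ask from_pretrained to place the weights on
# the target device during loading, so no separate host-to-device move is needed.
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.float16,
    trust_remote_code=True,
    device_map={"": "cuda:0"},  # needs `accelerate` installed
)
```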


## Before submitting
- [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case).
- [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section?
- [ ] Was this discussed/approved via a GitHub issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case.
- [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation).
- [ ] Did you write any new necessary tests?


## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.


@OlivierDehaene OR @Narsil
2023-07-21 11:27:31 +02:00
| File | Last commit | Date |
| --- | --- | --- |
| custom_modeling | feat(server): Add exllama GPTQ CUDA kernel support #553 (#666) | 2023-07-21 10:59:00 +02:00 |
| __init__.py | feat(server): flash attention v2 (#624) | 2023-07-18 16:21:18 +02:00 |
| bloom.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| causal_lm.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| flash_causal_lm.py | feat(server): Add exllama GPTQ CUDA kernel support #553 (#666) | 2023-07-21 10:59:00 +02:00 |
| flash_llama.py | fix(server): fix llamav2 config (#635) | 2023-07-18 18:49:42 +02:00 |
| flash_neox.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| flash_rw.py | fix(server): Fixing RW code (it's remote code so the Arch checking doesn't work to see which weights to keep). (#579) | 2023-07-12 09:51:34 +02:00 |
| flash_santacoder.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| galactica.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| gpt_neox.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| model.py | feat(server): Add exllama GPTQ CUDA kernel support #553 (#666) | 2023-07-21 10:59:00 +02:00 |
| mpt.py | feat(server): use latest flash attention commit (#543) | 2023-07-04 20:23:55 +02:00 |
| opt.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| rw.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| santacoder.py | Directly load GPTBigCode to specified device (#618) | 2023-07-21 11:27:31 +02:00 |
| seq2seq_lm.py | feat: Add the option to force another dtype than f16. (#513) | 2023-06-30 20:30:09 +02:00 |
| t5.py | fix(server): T5 weights names. (#582) | 2023-07-12 10:01:42 +02:00 |
| types.py | feat(server): support vectorized warpers in flash causal lm (#317) | 2023-05-26 12:30:27 +02:00 |