text-generation-inference/server/text_generation_server
zspo bd3088748e
add FastLinear import (#750)
# What does this PR do?

Fixes #749 
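
The fix adds a missing `FastLinear` import. Below is a minimal, self-contained sketch of this class of bug, not the actual TGI code: a module body references a name it never imported, so calling into it raises `NameError`; adding the name to the module's namespace (as the real import does) resolves it. All names here are stand-ins.

```python
# Illustration of a missing-import bug: the module source below uses
# FastLinear without importing it, so build_layer() raises NameError.
# FastLinear here is a stand-in class, not the TGI implementation.

MODULE_SOURCE = """
def build_layer():
    # FastLinear was never imported into this module's namespace.
    return FastLinear()
"""

class FastLinear:
    """Stand-in for the real FastLinear layer class."""

def load_module(source, extra_globals=None):
    """Exec a module body in a fresh namespace and return that namespace."""
    ns = dict(extra_globals or {})
    exec(source, ns)
    return ns

# Before the fix: the name is missing, so calling build_layer() fails.
broken = load_module(MODULE_SOURCE)
try:
    broken["build_layer"]()
    raise AssertionError("expected NameError")
except NameError:
    pass

# After the fix: injecting the name (what the added import does) makes it work.
fixed = load_module(MODULE_SOURCE, {"FastLinear": FastLinear})
layer = fixed["build_layer"]()
print(type(layer).__name__)  # FastLinear
```

Note that the error only surfaces when `build_layer()` is actually called, which is why a missing import in a rarely exercised code path can slip past review.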

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR.

<!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @

@OlivierDehaene OR @Narsil
-->

Co-authored-by: p_spozzhang <p_spozzhang@tencent.com>
2023-08-02 20:04:46 +02:00
| Name | Latest commit | Date |
| --- | --- | --- |
| `models` | add FastLinear import (#750) | 2023-08-02 20:04:46 +02:00 |
| `pb` | feat(server): clear cache on error (#143) | 2023-03-28 11:29:35 +02:00 |
| `utils` | Typo fix. (#746) | 2023-07-31 18:57:29 +02:00 |
| `__init__.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |
| `cache.py` | fix(server): decrease memory fragmentation (#557) | 2023-07-06 14:28:33 +02:00 |
| `cli.py` | feat(server): Reworking the quantization script so it's still universal (not llama specific) (#587) | 2023-07-18 12:19:05 +02:00 |
| `interceptor.py` | feat(server): empty cache on errors | 2023-07-12 17:06:19 +02:00 |
| `server.py` | fix(server): fix quantization python requirements (#708) | 2023-07-27 12:28:10 +02:00 |
| `tracing.py` | feat(clients): Python client (#103) | 2023-03-07 18:52:22 +01:00 |