text-generation-inference/server/text_generation_server/utils
Latest commit: 0d879fe66e "Cpu tgi (#1936)" by Wang, Yi, 2024-09-24 03:51:26 +00:00

* add CPU tgi support
* ipex distributed ops support

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
Co-authored-by: Funtowicz Morgan <mfuntowicz@users.noreply.github.com>
File | Last commit | Date
---- | ----------- | ----
__init__.py | Aligin the source code with main branch 2.0.4 | 2024-09-24 03:06:55 +00:00
chunks.py | server: use chunked inputs | 2024-09-24 03:42:29 +00:00
convert.py | Force weights_only (before fully breaking pickle files anyway). (#1710) | 2024-04-25 15:10:53 +03:00
dist.py | Cpu tgi (#1936) | 2024-09-24 03:51:26 +00:00
hub.py | Fixing the download strategy for ibm-fms (#1917) | 2024-07-17 05:36:58 +00:00
import_utils.py | Cpu tgi (#1936) | 2024-09-24 03:51:26 +00:00
log.py | v1.3.4 | 2024-04-22 09:08:34 +03:00
logits_process.py | Aligin the source code with main branch 2.0.4 | 2024-09-24 03:06:55 +00:00
peft.py | fix: fix local loading for .bin models (#1419) | 2024-04-22 09:17:52 +03:00
speculate.py | chore: formatting | 2024-04-18 16:26:00 +03:00
tokens.py | Aligin the source code with main branch 2.0.4 | 2024-09-24 03:06:55 +00:00
watermark.py | Aligin the source code with main branch 2.0.4 | 2024-09-24 03:06:55 +00:00
weights.py | Factor out sharding of packed tensors (#2059) | 2024-09-24 03:46:09 +00:00