Mirror of https://github.com/huggingface/text-generation-inference.git
Fixes CPU affinity when running inference on CPU, and when CPUs are externally managed using, for instance, taskset, numactl, cgroups, the Kubernetes CPU manager, or NRI resource policy plugins.

- Detect external CPU management and trust the external CPU manager completely. The external manager is more likely to have the big picture of all other tasks running on the system, their QoS, hardware characteristics, and so on.

- For instance, do not modify even memory affinity, because the external manager may know better which NUMA node has the fastest memory, or which NUMA nodes have enough free memory for this inference.

Fixes: #3011

Signed-off-by: Antti Kervinen <antti.kervinen@intel.com>
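A minimal sketch of one way such detection can work on Linux: if the process's allowed CPU set is already a strict subset of all online CPUs, some external manager (taskset, numactl, cgroups, Kubernetes, NRI, ...) has constrained it, and the server should leave both CPU and memory affinity alone. This is illustrative only, not the code from this commit; the helper names `all_online_cpus` and `cpus_externally_managed` are invented for the example.

```python
# Sketch (not the actual patch): detect whether something outside this
# process has already restricted which CPUs it may run on.
import os


def all_online_cpus() -> set[int]:
    """Parse /sys/devices/system/cpu/online, e.g. '0-3,8-11', into a set."""
    with open("/sys/devices/system/cpu/online") as f:
        spec = f.read().strip()
    cpus: set[int] = set()
    for part in spec.split(","):
        if "-" in part:
            lo, hi = part.split("-")
            cpus.update(range(int(lo), int(hi) + 1))
        else:
            cpus.add(int(part))
    return cpus


def cpus_externally_managed() -> bool:
    """True if this process's CPU affinity is narrower than all online CPUs."""
    allowed = os.sched_getaffinity(0)  # CPUs this process may run on
    return allowed != all_online_cpus()


if cpus_externally_managed():
    # Trust the external CPU manager completely: do not call
    # os.sched_setaffinity(), and do not bind NUMA memory nodes either.
    pass
else:
    # No external manager detected; safe to apply the server's own
    # CPU and memory affinity policy here.
    pass
```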
..
custom_modeling
__init__.py
bloom.py
causal_lm.py
flash_causal_lm.py
galactica.py
globals.py
idefics_causal_lm.py
mamba.py
metadata_kernels.py
mllama_causal_lm.py
model.py
pali_gemma.py
seq2seq_lm.py
transformers_flash_causal_lm.py
types.py
vlm_causal_lm.py