Mirror of https://github.com/huggingface/text-generation-inference.git, synced 2025-05-21 09:42:09 +00:00
* IPEX support FP8 kvcache
* add kvcache dtype
* add softcap and slidingwindow
* kv scale in pageattn
* remove triton installation, will be installed with torch
* install xelink lib
* softcap default -1.0

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
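The "softcap default -1.0" change follows the common convention that a non-positive softcap disables logit capping, while a positive value squashes attention scores into `(-softcap, softcap)` via `tanh`. A minimal pure-Python sketch of that convention (the function name and list-based form are illustrative, not the actual IPEX kernel interface):

```python
import math

def apply_softcap(logits, softcap=-1.0):
    """Illustrative logit soft-capping.

    softcap <= 0 (the default -1.0 in this change) means "disabled":
    scores pass through unchanged. A positive softcap bounds each
    score to (-softcap, softcap) using tanh, as in Gemma-2-style
    attention soft-capping.
    """
    if softcap <= 0.0:
        # Capping disabled: return scores untouched.
        return list(logits)
    # Squash each score smoothly toward the +/- softcap bounds.
    return [softcap * math.tanh(x / softcap) for x in logits]
```

For example, with `softcap=30.0` a raw score of 1000.0 is capped to just under 30.0, while small scores near zero are left almost unchanged.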
Files in this directory:

* __init__.py
* common.py
* cuda.py
* flash_attn_triton.py
* flashinfer.py
* ipex.py
* kv_cache.py
* rocm.py