Mirror of https://github.com/huggingface/text-generation-inference.git (synced 2025-10-20 12:25:23 +00:00)
IPEX support FP8 kvcache:

* IPEX support FP8 kvcache
* add kvcache dtype
* add softcap and sliding window
* kv scale in pageattn
* remove triton installation, will be installed with torch
* install xelink lib
* softcap default -1.0

Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
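The softcap items above refer to attention logit soft-capping; a minimal sketch of the usual tanh-based formula, assuming (as the default of -1.0 suggests) that a non-positive softcap means the cap is disabled:

```python
import math

def apply_softcap(score: float, softcap: float) -> float:
    """Bound an attention score to (-softcap, softcap) via tanh.

    Assumption: a softcap <= 0 (the -1.0 default above) disables capping.
    This is an illustrative sketch, not the repository's implementation.
    """
    if softcap <= 0.0:
        return score
    return softcap * math.tanh(score / softcap)
```

In practice this is applied elementwise to the query-key scores before softmax, so very large logits saturate smoothly instead of dominating the distribution.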
Files in this directory:

* __init__.py
* common.py
* cuda.py
* flash_attn_triton.py
* flashinfer.py
* ipex.py
* kv_cache.py
* rocm.py