Wang, Yi A
76cc129796
remove block_scales which is not needed anymore
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2025-04-11 01:28:14 -07:00
Wang, Yi A
4cdc34ec4d
match the latest vllm_extension ops
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2025-04-10 19:32:32 -07:00
Wang, Yi A
c55a8caea2
remove torch.where to fix incorrect output in hpu graph model
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2025-03-31 22:51:54 -07:00
Wang, Yi A
f0e5faec1a
fix some issues
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2025-03-28 07:01:06 -07:00
Wang, Yi A
1508ee8de1
remove block_tables and prefill_cache_indices which will lead to dynamic shape
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2025-03-27 23:57:59 -07:00
Wang, Yi A
201dc6294f
clean cuda/rocm code in hpu backend, enable flat_hpu
Signed-off-by: Wang, Yi A <yi.a.wang@intel.com>
2025-03-14 01:25:31 -07:00
Baptiste Colle
683ff53fa3
Add Gaudi Backend (#3055)
* wip(gaudi): import server and dockerfile from tgi-gaudi fork
* feat(gaudi): new gaudi backend working
* fix: fix style
* fix prehooks issues
* fix(gaudi): refactor server and implement requested changes
2025-02-28 12:14:58 +01:00