OlivierDehaene | bb6200503c | fix: max_past default value must be -1, not 0 (#1348) | 2024-04-19 14:18:05 +03:00
OlivierDehaene | 214ec0eb49 | fix: only keep stop sequence buffer if we have some | 2024-04-19 14:18:00 +03:00
OlivierDehaene | 04dbf7a506 | fix: slice stopping criteria buffer | 2024-04-19 14:17:52 +03:00
OlivierDehaene | b3c2d7291e | fix: fix quant linear autotune | 2024-04-19 14:17:39 +03:00
OlivierDehaene | 28fcdcca6d | fix: fix triton OutOfResources import | 2024-04-19 14:17:32 +03:00
OlivierDehaene | 5c9ef069ed | feat: add more latency metrics in forward (#1346) | 2024-04-19 13:41:34 +03:00
OlivierDehaene | c974437ba7 | fix: fix gpt-q params loading | 2024-04-19 12:12:50 +03:00
OlivierDehaene | 05f8c85a8b | v1.3.2 | 2024-04-18 16:33:05 +03:00
OlivierDehaene | f9b58ac7a1 | feat: add quant to mixtral (#1337) | 2024-04-18 16:32:50 +03:00
OlivierDehaene | 09c556dbd7 | v1.3.1 | 2024-04-18 16:32:07 +03:00
OlivierDehaene | db5053fc86 | v1.3.0 | 2024-04-18 16:31:53 +03:00
OlivierDehaene | 79f268f95a | chore: formatting | 2024-04-18 16:26:00 +03:00
OlivierDehaene | 9aef902982 | feat: mixtral (#1328) | 2024-04-18 12:39:52 +00:00
Nicolas Patry | a7f52f3812 | Speculative (#1308) | 2024-04-18 12:39:39 +00:00
Jacek Czaja | ae6215fcea | Enable server UT: test_causal_lm.py::test_batch_from_pb (#121) | 2024-04-10 16:33:56 +02:00
    Co-authored-by: Jacek Czaja <jczaja@habana.ai>
Karol Damaszke | 30cc78773e | Skip server tests of not enabled models (#125) | 2024-04-09 14:15:41 +02:00
    Co-authored-by: Karol Damaszke <kdamaszke@habana.ai>
Karol Damaszke | c6739526c6 | Fix test_watermark (#124) | 2024-04-09 11:29:21 +02:00
    Co-authored-by: Karol Damaszke <kdamaszke@habana.ai>
Sylwester Fraczek | 757c12dbac | Fix test_pass_through_tokenizer (#117) | 2024-04-09 09:30:47 +02:00
    Co-authored-by: Sylwester Fraczek <sfraczek@habana.ai>
Karol Damaszke | d957e32601 | Add Habana copyright header (#122) | 2024-04-08 18:06:21 +02:00
    Co-authored-by: Karol Damaszke <kdamaszke@habana.ai>
Karol Damaszke | b0de25a285 | Don't set rope_scaling for unsupported models (#115) | 2024-04-02 12:12:02 +02:00
    Co-authored-by: Karol Damaszke <kdamaszke@habana.ai>
Karol Damaszke | 7342baa2eb | Add support for rope_scaling and remove is_optimized_for_gaudi (#112) | 2024-03-29 15:07:32 +01:00
    Co-authored-by: Karol Damaszke <kdamaszke@habana.ai>
Karol Damaszke | bf5263b88b | Disable watermark with FP8 quantization (#114) | 2024-03-27 13:32:20 +01:00
    Co-authored-by: Karol Damaszke <kdamaszke@habana.ai>
jkaniecki | 56f00a552b | Adjust warmup to all possible bucket sizes and decode batch size = 1 (#113) | 2024-03-27 11:59:51 +01:00
Karol Damaszke | b45f648483 | Add warmup for logits processors (#107) | 2024-03-18 15:17:47 +01:00
    Co-authored-by: Karol Damaszke <kdamaszke@habana.ai>
yuanwu2017 | a4d5c3f40f | Fix the generate_stream crash in concurrent query (#105) | 2024-03-15 10:54:56 +01:00
    Signed-off-by: yuanwu <yuan.wu@intel.com>
Yao Matrix | 7149ac30e6 | Fix the issue of out of range (#98) | 2024-03-13 10:09:53 +01:00
    Signed-off-by: yuanwu <yuan.wu@intel.com>
    Co-authored-by: yuanwu <yuan.wu@intel.com>
Karol Damaszke | 80ae9ead28 | Set MAX_TOTAL_TOKENS automatically (#91) | 2024-03-01 11:25:15 +01:00
    Co-authored-by: Karol Damaszke <kdamaszke@habana.ai>
Karol Damaszke | a5c788cfe4 | Remove redundant fill op (#83) (#90) | 2024-03-01 01:32:02 +01:00
    Co-authored-by: mswiniarsk <156412439+mswiniarsk@users.noreply.github.com>
Karol Damaszke | 03c2123244 | Use batched index_copy (#73) (#89) | 2024-02-29 15:45:16 +01:00
    Co-authored-by: madamczykhabana <110973826+madamczykhabana@users.noreply.github.com>
Karol Damaszke | 7dbf4bf7a4 | Improve tensor slicing performance (#66) (#87) | 2024-02-29 10:48:54 +01:00
    Co-authored-by: mswiniarsk <156412439+mswiniarsk@users.noreply.github.com>
Karol Damaszke | 3831f1bed5 | Add warmup for shift operation (#59) (#86) | 2024-02-29 09:19:28 +01:00
Karol Damaszke | 022ce1eaaf | Overhead reduction (#58) (#85) | 2024-02-29 09:17:45 +01:00
    Co-authored-by: mrs303 <54661797+mrs303@users.noreply.github.com>
Karol Damaszke | 212136dff8 | Log exceptions to debug.log (#52) (#84) | 2024-02-29 09:14:42 +01:00
    Co-authored-by: madamczykhabana <110973826+madamczykhabana@users.noreply.github.com>
Karol Damaszke | c7ccfb87ff | Grouped pad/shift/move operations (#57) (#82) | 2024-02-29 04:16:44 +01:00
    Co-authored-by: madamczykhabana <110973826+madamczykhabana@users.noreply.github.com>
Karol Damaszke | 2122acc60f | Add warmup for all possible shapes for prefill #49 (#81) | 2024-02-28 10:40:13 +01:00
Karol Damaszke | 31bed905d4 | Update habana profiler (#50) (#80) | 2024-02-28 09:57:40 +01:00
    Co-authored-by: mswiniarsk <156412439+mswiniarsk@users.noreply.github.com>
Karol Damaszke | d31fb62576 | Add more info to high-level profiler events (#46) (#79) | 2024-02-28 09:55:50 +01:00
    Co-authored-by: Karol Damaszke <kdamaszke@habana.ai>
Karol Damaszke | 941d36f3fd | Enable deferred token generation (#44) (#75) | 2024-02-27 15:46:40 +01:00
    Co-authored-by: Krzysztof Laskowski <klaskowski@habana.ai>
jkaniecki | 83b059bd27 | Bulk shifting (#40) (#70) | 2024-02-26 17:29:56 +01:00
    Co-authored-by: madamczykhabana <110973826+madamczykhabana@users.noreply.github.com>
regisss | 8f4aba6ad3 | Update dependencies (#69) | 2024-02-25 13:07:47 +01:00
jkaniecki | c3bd8ef445 | Add Fp8 support (#42) (#71) | 2024-02-23 11:52:28 +01:00
    Co-authored-by: mrs303 <54661797+mrs303@users.noreply.github.com>
    Co-authored-by: Adam Stachowicz <105052242+astachowiczhabana@users.noreply.github.com>
    Co-authored-by: Grzegorz Morys <gmorys@habana.ai>
jkaniecki | a490847702 | Sequence bucketing for prefill (#39) (#67) | 2024-02-23 01:52:14 +01:00
    Co-authored-by: mswiniarsk <156412439+mswiniarsk@users.noreply.github.com>
jkaniecki | 9ad6086250 | Improve habana profile dev experience (#36) (#65) | 2024-02-22 13:57:45 +01:00
    Co-authored-by: Michal Szutenberg <37601244+szutenberg@users.noreply.github.com>
jkaniecki | f7ef414e38 | Remove unused pad_token_id for filter (#35) (#64) | 2024-02-22 11:24:09 +01:00
    Co-authored-by: mswiniarsk <156412439+mswiniarsk@users.noreply.github.com>
jkaniecki | 8f590759e3 | Prefill optimization by allocating space only for the first output token (#34) (#62) | 2024-02-22 04:55:43 +01:00
    Co-authored-by: mswiniarsk <156412439+mswiniarsk@users.noreply.github.com>
    Co-authored-by: Karol Damaszke <karol.damaszke@intel.com>
jkaniecki | 80303b469c | Do not limit hpu graphs by default (#32) (#61) | 2024-02-21 15:38:00 +01:00
    Co-authored-by: mswiniarsk <156412439+mswiniarsk@users.noreply.github.com>
jkaniecki | 6b6dec9ea1 | Transparent tokenizer uses explicit int32 (#31) (#60) | 2024-02-21 14:24:41 +01:00
    Co-authored-by: Adam Stachowicz <105052242+astachowiczhabana@users.noreply.github.com>
regisss | a4d3a00d98 | Fix dependencies (#56) | 2024-02-19 10:19:23 +01:00
regisss | dca9ac6508 | Revert "Solve dependency issue" | 2024-02-19 07:28:04 +00:00
    This reverts commit ea2b93dd75.
regisss | ea2b93dd75 | Solve dependency issue | 2024-02-19 07:26:37 +00:00