Compare commits

1039 commits (listed by SHA1 only, from 097791a380 through 3757bf9e6c; the Author and Date columns are empty)

.codespellignore (new file, 5 lines)
@@ -0,0 +1,5 @@
aks
ec2
eks
gce
gcp

.github/CODEOWNERS (new file, vendored, 1 line)
@@ -0,0 +1 @@
* @longhorn/dev

.github/ISSUE_TEMPLATE/bug.md (new file, vendored, 48 lines)
@@ -0,0 +1,48 @@
---
name: Bug report
about: Create a bug report
title: "[BUG]"
labels: ["kind/bug", "require/qa-review-coverage", "require/backport"]
assignees: ''

---

## Describe the bug (🐛 if you encounter this issue)

<!--A clear and concise description of what the bug is.-->

## To Reproduce

<!--Provide the steps to reproduce the behavior.-->

## Expected behavior

<!--A clear and concise description of what you expected to happen.-->

## Support bundle for troubleshooting

<!--Provide a support bundle when the issue happens. You can generate a support bundle using the link at the footer of the Longhorn UI. Check [here](https://longhorn.io/docs/latest/advanced-resources/support-bundle/).-->

## Environment

<!-- Suggest checking the doc of the best practices of using Longhorn. [here](https://longhorn.io/docs/1.5.1/best-practices)-->
- Longhorn version:
- Installation method (e.g. Rancher Catalog App/Helm/Kubectl):
- Kubernetes distro (e.g. RKE/K3s/EKS/OpenShift) and version:
  - Number of management node in the cluster:
  - Number of worker node in the cluster:
- Node config
  - OS type and version:
  - Kernel version:
  - CPU per node:
  - Memory per node:
  - Disk type(e.g. SSD/NVMe/HDD):
  - Network bandwidth between the nodes:
- Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal):
- Number of Longhorn volumes in the cluster:
- Impacted Longhorn resources:
  - Volume names:

## Additional context

<!--Add any other context about the problem here.-->

.github/ISSUE_TEMPLATE/doc.md (new file, vendored, 16 lines)
@@ -0,0 +1,16 @@
---
name: Document
about: Create or update document
title: "[DOC] "
labels: kind/doc
assignees: ''

---

## What's the document you plan to update? Why? Please describe

<!--A clear and concise description of what the document is.-->

## Additional context

<!--Add any other context or screenshots about the document request here.-->

.github/ISSUE_TEMPLATE/feature.md (new file, vendored, 24 lines)
@@ -0,0 +1,24 @@
---
name: Feature request
about: Suggest an idea/feature
title: "[FEATURE] "
labels: ["kind/enhancement", "require/lep", "require/doc", "require/auto-e2e-test"]
assignees: ''

---

## Is your feature request related to a problem? Please describe (👍 if you like this request)

<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->

## Describe the solution you'd like

<!--A clear and concise description of what you want to happen-->

## Describe alternatives you've considered

<!--A clear and concise description of any alternative solutions or features you've considered.-->

## Additional context

<!--Add any other context or screenshots about the feature request here.-->

.github/ISSUE_TEMPLATE/improvement.md (new file, vendored, 24 lines)
@@ -0,0 +1,24 @@
---
name: Improvement request
about: Suggest an improvement of an existing feature
title: "[IMPROVEMENT] "
labels: ["kind/improvement", "require/doc", "require/auto-e2e-test", "require/backport"]
assignees: ''

---

## Is your improvement request related to a feature? Please describe (👍 if you like this request)

<!--A clear and concise description of what the problem is. Ex. I'm always frustrated when [...]-->

## Describe the solution you'd like

<!--A clear and concise description of what you want to happen.-->

## Describe alternatives you've considered

<!--A clear and concise description of any alternative solutions or features you've considered.-->

## Additional context

<!--Add any other context or screenshots about the feature request here.-->

.github/ISSUE_TEMPLATE/infra.md (new file, vendored, 24 lines)
@@ -0,0 +1,24 @@
---
name: Infra
about: Create an test/dev infra task
title: "[INFRA] "
labels: kind/infra
assignees: ''

---

## What's the test to develop? Please describe

<!--A clear and concise description of what test/dev infra you want to develop.-->

## Describe the items of the test development (DoD, definition of done) you'd like

<!--
Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists

- [ ] `item 1`
-->

## Additional context

<!--Add any other context or screenshots about the test infra request here.-->

.github/ISSUE_TEMPLATE/question.md (new file, vendored, 28 lines)
@@ -0,0 +1,28 @@
---
name: Question
about: Have a question
title: "[QUESTION] "
labels: kind/question
assignees: ''

---
## Question

<!--Suggest to use https://github.com/longhorn/longhorn/discussions to ask questions.-->

## Environment

- Longhorn version:
- Kubernetes version:
- Node config
  - OS type and version
  - Kernel version
  - CPU per node:
  - Memory per node:
  - Disk type
  - Network bandwidth and latency between the nodes:
- Underlying Infrastructure (e.g. on AWS/GCE, EKS/GKE, VMWare/KVM, Baremetal):

## Additional context

<!--Add any other context about the problem here.-->

.github/ISSUE_TEMPLATE/refactor.md (new file, vendored, 24 lines)
@@ -0,0 +1,24 @@
---
name: Refactor request
about: Suggest a refactoring request for an existing implementation
title: "[REFACTOR] "
labels: kind/refactoring
assignees: ''

---

## Is your improvement request related to a feature? Please describe

<!--A clear and concise description of what the problem is.-->

## Describe the solution you'd like

<!--A clear and concise description of what you want to happen.-->

## Describe alternatives you've considered

<!--A clear and concise description of any alternative solutions or features you've considered.-->

## Additional context

<!--Add any other context or screenshots about the refactoring request here.-->

.github/ISSUE_TEMPLATE/release.md (new file, vendored, 35 lines)
@@ -0,0 +1,35 @@
---
name: Release task
about: Create a release task
title: "[RELEASE]"
labels: release/task
assignees: ''

---

**What's the task? Please describe.**
Action items for releasing v<x.y.z>

**Describe the sub-tasks.**
- Pre-Release
  - [ ] Regression test plan (manual) - @khushboo-rancher
  - [ ] Run e2e regression for pre-GA milestones (`install`, `upgrade`) - @yangchiu
  - [ ] Run security testing of container images for pre-GA milestones - @yangchiu
  - [ ] Verify longhorn chart PR to ensure all artifacts are ready for GA (`install`, `upgrade`) @chriscchien
  - [ ] Run core testing (install, upgrade) for the GA build from the previous patch and the last patch of the previous feature release (1.4.2). - @yangchiu
- Release
  - [ ] Release longhorn/chart from the release branch to publish to ArtifactHub
  - [ ] Release note
    - [ ] Deprecation note
    - [ ] Upgrade notes including highlighted notes, deprecation, compatible changes, and others impacting the current users
- Post-Release
  - [ ] Create a new release branch of manager/ui/tests/engine/longhorn instance-manager/share-manager/backing-image-manager when creating the RC1
  - [ ] Update https://github.com/longhorn/longhorn/blob/master/deploy/upgrade_responder_server/chart-values.yaml @PhanLe1010
  - [ ] Add another request for the rancher charts for the next patch release (`1.5.1`) @rebeccazzzz
    - Rancher charts: verify the chart is able to install & upgrade - @khushboo-rancher
  - [ ] rancher/image-mirrors update @weizhe0422 (@PhanLe1010 )
    - https://github.com/rancher/image-mirror/pull/412
  - [ ] rancher/charts 2.7 branches for rancher marketplace @weizhe0422 (@PhanLe1010)
    - `dev-2.7`: https://github.com/rancher/charts/pull/2766

cc @longhorn/qa @longhorn/dev

.github/ISSUE_TEMPLATE/task.md (new file, vendored, 24 lines)
@@ -0,0 +1,24 @@
---
name: Task
about: Create a general task
title: "[TASK] "
labels: kind/task
assignees: ''

---

## What's the task? Please describe

<!--A clear and concise description of what the task is.-->

## Describe the sub-tasks

<!--
Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists

- [ ] `item 1`
-->

## Additional context

<!--Add any other context or screenshots about the task request here.-->

.github/ISSUE_TEMPLATE/test.md (new file, vendored, 24 lines)
@@ -0,0 +1,24 @@
---
name: Test
about: Create or update test
title: "[TEST] "
labels: kind/test
assignees: ''

---

## What's the test to develop? Please describe

<!--A clear and concise description of what test you want to develop.-->

## Describe the tasks for the test

<!--
Please use a task list for items on a separate line with a clickable checkbox https://docs.github.com/en/issues/tracking-your-work-with-issues/about-task-lists

- [ ] `item 1`
-->

## Additional context

<!--Add any other context or screenshots about the test request here.-->

.github/mergify.yml (new file, vendored, 34 lines)
@@ -0,0 +1,34 @@
pull_request_rules:
  - name: automatic merge after review
    conditions:
      - check-success=continuous-integration/drone/pr
      - check-success=DCO
      - check-success=CodeFactor
      - check-success=codespell
      - "#approved-reviews-by>=1"
      - approved-reviews-by=@longhorn/maintainer
      - label=ready-to-merge
    actions:
      merge:
        method: rebase

  - name: ask to resolve conflict
    conditions:
      - conflict
    actions:
      comment:
        message: This pull request is now in conflicts. Could you fix it @{{author}}? 🙏

  # Comment on the PR to trigger backport. ex: @Mergifyio copy stable/3.1 stable/4.0
  - name: backport patches to stable branch
    conditions:
      - base=master
    actions:
      backport:
        title: "[BACKPORT][{{ destination_branch }}] {{ title }}"
        body: |
          This is an automatic backport of pull request #{{number}}.

          {{cherry_pick_error}}
        assignees:
          - "{{ author }}"

.github/workflows/add-to-projects.yml (new file, vendored, 40 lines)
@@ -0,0 +1,40 @@
name: Add-To-Projects
on:
  issues:
    types: [ opened, labeled ]
jobs:
  community:
    runs-on: ubuntu-latest
    steps:
      - name: Is Longhorn Member
        uses: tspascoal/get-user-teams-membership@v1.0.4
        id: is-longhorn-member
        with:
          username: ${{ github.event.issue.user.login }}
          organization: longhorn
          GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
      - name: Add To Community Project
        if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] == null
        uses: actions/add-to-project@v0.3.0
        with:
          project-url: https://github.com/orgs/longhorn/projects/5
          github-token: ${{ secrets.CUSTOM_GITHUB_TOKEN }}

  qa:
    runs-on: ubuntu-latest
    steps:
      - name: Is Longhorn Member
        uses: tspascoal/get-user-teams-membership@v1.0.4
        id: is-longhorn-member
        with:
          username: ${{ github.event.issue.user.login }}
          organization: longhorn
          GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
      - name: Add To QA & DevOps Project
        if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
        uses: actions/add-to-project@v0.3.0
        with:
          project-url: https://github.com/orgs/longhorn/projects/4
          github-token: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
          labeled: kind/test, area/infra
          label-operator: OR

.github/workflows/close-issue.yml (new file, vendored, 50 lines)
@@ -0,0 +1,50 @@
name: Close-Issue
on:
  issues:
    types: [ unlabeled ]
jobs:
  backport:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'backport/')
    steps:
      - name: Get Backport Version
        uses: xom9ikk/split@v1
        id: split
        with:
          string: ${{ github.event.label.name }}
          separator: /
      - name: Check if Backport Issue Exists
        uses: actions-cool/issues-helper@v3
        id: if-backport-issue-exists
        with:
          actions: 'find-issues'
          token: ${{ github.token }}
          title-includes: |
            [BACKPORT][v${{ steps.split.outputs._1 }}]${{ github.event.issue.title }}
      - name: Close Backport Issue
        if: fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] != null
        uses: actions-cool/issues-helper@v3
        with:
          actions: 'close-issue'
          token: ${{ github.token }}
          issue-number: ${{ fromJSON(steps.if-backport-issue-exists.outputs.issues)[0].number }}

  automation:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'require/automation-e2e')
    steps:
      - name: Check if Automation Issue Exists
        uses: actions-cool/issues-helper@v3
        id: if-automation-issue-exists
        with:
          actions: 'find-issues'
          token: ${{ github.token }}
          title-includes: |
            [TEST]${{ github.event.issue.title }}
      - name: Close Automation Test Issue
        if: fromJSON(steps.if-automation-issue-exists.outputs.issues)[0] != null
        uses: actions-cool/issues-helper@v3
        with:
          actions: 'close-issue'
          token: ${{ github.token }}
          issue-number: ${{ fromJSON(steps.if-automation-issue-exists.outputs.issues)[0].number }}
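
The find-issues steps above locate an existing backport issue purely by its title prefix. A rough local equivalent, useful when checking why a lookup matched or not, is a title search with the GitHub CLI (a sketch; it assumes `gh` is installed and authenticated, and the version v1.4.1 is only an illustrative example):

    # Search open and closed issues whose title carries the backport prefix.
    gh issue list --repo longhorn/longhorn --state all \
      --search '"[BACKPORT][v1.4.1]" in:title'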

.github/workflows/codespell.yml (new file, vendored, 23 lines)
@@ -0,0 +1,23 @@
name: Codespell

on:
  push:
  pull_request:
    branches:
      - master
      - "v*.*.*"

jobs:
  codespell:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout
        uses: actions/checkout@v3
        with:
          fetch-depth: 1
      - name: Check code spell
        uses: codespell-project/actions-codespell@v1
        with:
          check_filenames: true
          ignore_words_file: .codespellignore
          skip: "*/**.yaml,*/**.yml,*/**.tpl,./deploy,./dev,./scripts,./uninstall"
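
The same spell check can be reproduced locally before pushing (a sketch assuming the codespell Python package is installed; the flags mirror the action's check_filenames, ignore_words_file, and skip inputs):

    pip install codespell
    # Run from the repository root with the repo's own ignore list and skip globs.
    codespell --check-filenames \
      --ignore-words=.codespellignore \
      --skip="*/**.yaml,*/**.yml,*/**.tpl,./deploy,./dev,./scripts,./uninstall"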

.github/workflows/create-issue.yml (new file, vendored, 114 lines)
@@ -0,0 +1,114 @@
name: Create-Issue
on:
  issues:
    types: [ labeled ]
jobs:
  backport:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'backport/')
    steps:
      - name: Is Longhorn Member
        uses: tspascoal/get-user-teams-membership@v1.0.4
        id: is-longhorn-member
        with:
          username: ${{ github.actor }}
          organization: longhorn
          GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
      - name: Get Backport Version
        if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
        uses: xom9ikk/split@v1
        id: split
        with:
          string: ${{ github.event.label.name }}
          separator: /
      - name: Check if Backport Issue Exists
        if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
        uses: actions-cool/issues-helper@v3
        id: if-backport-issue-exists
        with:
          actions: 'find-issues'
          token: ${{ github.token }}
          issue-state: 'all'
          title-includes: |
            [BACKPORT][v${{ steps.split.outputs._1 }}]${{ github.event.issue.title }}
      - name: Get Milestone Object
        if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
        uses: longhorn/bot/milestone-action@master
        id: milestone
        with:
          token: ${{ github.token }}
          repository: ${{ github.repository }}
          milestone_name: v${{ steps.split.outputs._1 }}
      - name: Get Labels
        if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
        id: labels
        run: |
          RAW_LABELS="${{ join(github.event.issue.labels.*.name, ' ') }}"
          RAW_LABELS="${RAW_LABELS} kind/backport"
          echo "RAW LABELS: $RAW_LABELS"
          LABELS=$(echo "$RAW_LABELS" | sed -r 's/\s*backport\S+//g' | sed -r 's/\s*require\/auto-e2e-test//g' | xargs | sed 's/ /, /g')
          echo "LABELS: $LABELS"
          echo "labels=$LABELS" >> $GITHUB_OUTPUT
      - name: Create Backport Issue
        if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
        uses: dacbd/create-issue-action@v1
        id: new-issue
        with:
          token: ${{ github.token }}
          title: |
            [BACKPORT][v${{ steps.split.outputs._1 }}]${{ github.event.issue.title }}
          body: |
            backport ${{ github.event.issue.html_url }}
          labels: ${{ steps.labels.outputs.labels }}
          milestone: ${{ fromJSON(steps.milestone.outputs.data).number }}
          assignees: ${{ join(github.event.issue.assignees.*.login, ', ') }}
      - name: Get Repo Id
        if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
        uses: octokit/request-action@v2.x
        id: repo
        with:
          route: GET /repos/${{ github.repository }}
        env:
          GITHUB_TOKEN: ${{ github.token }}
      - name: Add Backport Issue To Release
        if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-backport-issue-exists.outputs.issues)[0] == null
        uses: longhorn/bot/add-zenhub-release-action@master
        with:
          zenhub_token: ${{ secrets.ZENHUB_TOKEN }}
          repo_id: ${{ fromJSON(steps.repo.outputs.data).id }}
          issue_number: ${{ steps.new-issue.outputs.number }}
          release_name: ${{ steps.split.outputs._1 }}

  automation:
    runs-on: ubuntu-latest
    if: contains(github.event.label.name, 'require/auto-e2e-test')
    steps:
      - name: Is Longhorn Member
        uses: tspascoal/get-user-teams-membership@v1.0.4
        id: is-longhorn-member
        with:
          username: ${{ github.actor }}
          organization: longhorn
          GITHUB_TOKEN: ${{ secrets.CUSTOM_GITHUB_TOKEN }}
      - name: Check if Automation Issue Exists
        if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null
        uses: actions-cool/issues-helper@v3
        id: if-automation-issue-exists
        with:
          actions: 'find-issues'
          token: ${{ github.token }}
          issue-state: 'all'
          title-includes: |
            [TEST]${{ github.event.issue.title }}
      - name: Create Automation Test Issue
        if: fromJSON(steps.is-longhorn-member.outputs.teams)[0] != null && fromJSON(steps.if-automation-issue-exists.outputs.issues)[0] == null
        uses: dacbd/create-issue-action@v1
        with:
          token: ${{ github.token }}
          title: |
            [TEST]${{ github.event.issue.title }}
          body: |
            adding/updating auto e2e test cases for ${{ github.event.issue.html_url }} if they can be automated

            cc @longhorn/qa
          labels: kind/test
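
The Get Labels step above drops every backport/* label plus require/auto-e2e-test, then joins what remains with commas. The pipeline can be exercised on its own in a POSIX shell with GNU sed (the label set below is a hypothetical example):

    RAW_LABELS="kind/bug backport/1.4.1 require/auto-e2e-test area/volume kind/backport"
    LABELS=$(echo "$RAW_LABELS" \
      | sed -r 's/\s*backport\S+//g' \
      | sed -r 's/\s*require\/auto-e2e-test//g' \
      | xargs | sed 's/ /, /g')
    echo "$LABELS"   # prints: kind/bug, area/volume, kind/backport

Note that kind/backport survives the first sed: its "backport" sits at the end of the word, so \S+ has nothing left to match.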

.github/workflows/stale.yaml (new file, vendored, 28 lines)
@@ -0,0 +1,28 @@
name: 'Close stale issues and PRs'

on:
  workflow_call:
  workflow_dispatch:
  schedule:
    - cron: '30 1 * * *'

jobs:
  stale:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/stale@v4
        with:
          stale-issue-message: 'This issue is stale because it has been open 30 days with no activity. Remove stale label or comment or this will be closed in 5 days.'
          stale-pr-message: 'This PR is stale because it has been open 45 days with no activity. Remove stale label or comment or this will be closed in 10 days.'
          close-issue-message: 'This issue was closed because it has been stalled for 5 days with no activity.'
          close-pr-message: 'This PR was closed because it has been stalled for 10 days with no activity.'
          days-before-stale: 30
          days-before-pr-stale: 45
          days-before-close: 5
          days-before-pr-close: 10
          stale-issue-label: 'stale'
          stale-pr-label: 'stale'
          exempt-all-assignees: true
          exempt-issue-labels: 'kind/bug,kind/doc,kind/enhancement,kind/poc,kind/refactoring,kind/test,kind/task,kind/backport,kind/regression,kind/evaluation'
          exempt-draft-pr: true
          exempt-all-milestones: true
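
Issues this job has already marked can be reviewed via the label it applies (a sketch assuming the GitHub CLI is available):

    # List open issues currently labeled stale in longhorn/longhorn.
    gh issue list --repo longhorn/longhorn --label stale --state open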

.gitignore (new file, vendored, 7 lines)
@@ -0,0 +1,7 @@
# ignores all goland project folders and files
.idea
*.iml
*.ipr

# python venv for dev scripts
.venv
283
CHANGELOG/CHANGELOG-1.4.0.md
Normal file
283
CHANGELOG/CHANGELOG-1.4.0.md
Normal file
@ -0,0 +1,283 @@
|
|||||||
|
## Release Note
|
||||||
|
**v1.4.0 released!** 🎆
|
||||||
|
|
||||||
|
This release introduces many enhancements, improvements, and bug fixes as described below about stability, performance, data integrity, troubleshooting, and so on. Please try it and feedback. Thanks for all the contributions!
|
||||||
|
|
||||||
|
- [Kubernetes 1.25 Support](https://github.com/longhorn/longhorn/issues/4003) [[doc]](https://longhorn.io/docs/1.4.0/deploy/important-notes/#pod-security-policies-disabled--pod-security-admission-introduction)
|
||||||
|
In the previous versions, Longhorn relies on Pod Security Policy (PSP) to authorize Longhorn components for privileged operations. From Kubernetes 1.25, PSP has been removed and replaced with Pod Security Admission (PSA). Longhorn v1.4.0 supports opt-in PSP enablement, so it can support Kubernetes versions with or without PSP.
|
||||||
|
|
||||||
|
- [ARM64 GA](https://github.com/longhorn/longhorn/issues/4206)
|
||||||
|
ARM64 has been experimental from Longhorn v1.1.0. After receiving more user feedback and increasing testing coverage, ARM64 distribution has been stabilized with quality as per our regular regression testing, so it is qualified for general availability.
|
||||||
|
|
||||||
|
- [RWX GA](https://github.com/longhorn/longhorn/issues/2293) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220727-dedicated-recovery-backend-for-rwx-volume-nfs-server.md)[[doc]](https://longhorn.io/docs/1.4.0/advanced-resources/rwx-workloads/)
|
||||||
|
RWX has been experimental from Longhorn v1.1.0, but it lacks availability support when the Longhorn Share Manager component behind becomes unavailable. Longhorn v1.4.0 supports NFS recovery backend based on Kubernetes built-in resource, ConfigMap, for recovering NFS client connection during the fail-over period. Also, the NFS client hard mode introduction will further avoid previous potential data loss. For the detail, please check the issue and enhancement proposal.
|
||||||
|
|
||||||
|
- [Volume Snapshot Checksum](https://github.com/longhorn/longhorn/issues/4210) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220922-snapshot-checksum-and-bit-rot-detection.md)[[doc]](https://longhorn.io/docs/1.4.0/references/settings/#snapshot-data-integrity)
|
||||||
|
Data integrity is a continuous effort for Longhorn. In this version, Snapshot Checksum has been introduced w/ some settings to allow users to enable or disable checksum calculation with different modes.
|
||||||
|
|
||||||
|
- [Volume Bit-rot Protection](https://github.com/longhorn/longhorn/issues/3198) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220922-snapshot-checksum-and-bit-rot-detection.md)[[doc]](https://longhorn.io/docs/1.4.0/references/settings/#snapshot-data-integrity)
|
||||||
|
When enabling the Volume Snapshot Checksum feature, Longhorn will periodically calculate and check the checksums of volume snapshots, find corrupted snapshots, then fix them.
|
||||||
|
|
||||||
|
- [Volume Replica Rebuilding Speedup](https://github.com/longhorn/longhorn/issues/4783)
|
||||||
|
When enabling the Volume Snapshot Checksum feature, Longhorn will use the calculated snapshot checksum to avoid needless snapshot replication between nodes for improving replica rebuilding speed and resource consumption.
|
||||||
|
|
||||||
|
- [Volume Trim](https://github.com/longhorn/longhorn/issues/836) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20221103-filesystem-trim.md)[[doc]](https://longhorn.io/docs/1.4.0/volumes-and-nodes/trim-filesystem/#trim-the-filesystem-in-a-longhorn-volume)
|
||||||
|
Longhorn engine supports UNMAP SCSI command to reclaim space from the block volume.
|
||||||
|
|
||||||
|
- [Online Volume Expansion](https://github.com/longhorn/longhorn/issues/1674) [[doc]](https://longhorn.io/docs/1.4.0/volumes-and-nodes/expansion)
|
||||||
|
Longhorn engine supports optional parameters to pass size expansion requests when updating the volume frontend to support online volume expansion and resize the filesystem via CSI node driver.
|
||||||
|
|
||||||
|
- [Local Volume via Data Locality Strict Mode](https://github.com/longhorn/longhorn/issues/3957) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20200819-keep-a-local-replica-to-engine.md)[[doc]](https://longhorn.io/docs/1.4.0/references/settings/#default-data-locality)
|
||||||
|
Local volume is based on a new Data Locality setting, Strict Local. It will allow users to create one replica volume staying in a consistent location, and the data transfer between the volume frontend and engine will be through a local socket instead of the TCP stack to improve performance and reduce resource consumption.
|
||||||
|
|
||||||
|
- [Volume Recurring Job Backup Restore](https://github.com/longhorn/longhorn/issues/2227) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20201002-allow-recurring-backup-detached-volumes.md)[[doc]](https://longhorn.io/docs/1.4.0/snapshots-and-backups/backup-and-restore/restore-recurring-jobs-from-a-backup/)
|
||||||
|
Recurring jobs binding to a volume can be backed up to the remote backup target together with the volume backup metadata. They can be restored back as well for a better operation experience.
|
||||||
|
|
||||||
|
- [Volume IO Metrics](https://github.com/longhorn/longhorn/issues/2406) [[doc]](https://longhorn.io/docs/1.4.0/monitoring/metrics/#volume)
|
||||||
|
Longhorn enriches Volume metrics by providing real-time IO stats including IOPS, latency, and throughput of R/W IO. Users can set up a monotoning solution like Prometheus to monitor volume performance.
|
||||||
|
|
||||||
|
- [Longhorn System Backup & Restore](https://github.com/longhorn/longhorn/issues/1455) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20220913-longhorn-system-backup-restore.md)[[doc]](https://longhorn.io/docs/1.4.0/advanced-resources/system-backup-restore/)
|
||||||
|
Users can back up the longhorn system to the remote backup target. Afterward, it's able to restore back to an existing cluster in place or a new cluster for specific operational purposes.
|
||||||
|
|
||||||
|
- [Support Bundle Enhancement](https://github.com/longhorn/longhorn/issues/2759) [[lep]](https://github.com/longhorn/longhorn/blob/master/enhancements/20221109-support-bundle-enhancement.md)
|
||||||
|
Longhorn introduces a new support bundle integration based on a general [support bundle kit](https://github.com/rancher/support-bundle-kit) solution. This can help us collect more complete troubleshooting info and simulate the cluster environment.
|
||||||
|
|
||||||
|
- [Tunable Timeout between Engine and Replica](https://github.com/longhorn/longhorn/issues/4491) [[doc]](https://longhorn.io/docs/1.4.0/references/settings/#engine-to-replica-timeout)
|
||||||
|
In the current Longhorn versions, the default timeout between the Longhorn engine and replica is fixed without any exposed user settings. This will potentially bring some challenges for users having a low-spec infra environment. By exporting the setting configurable, it will allow users adaptively tune the stability of volume operations.
|
||||||
|
|
||||||
|
## Installation
|
||||||
|
|
||||||
|
> **Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.4.0.**
|
||||||
|
|
||||||
|
Longhorn supports 3 installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.0/deploy/install/).
|
||||||
|
|
||||||
|
## Upgrade
|
||||||
|
|
||||||
|
> **Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.0 from v1.3.x. Only support upgrading from 1.3.x.**
|
||||||
|
|
||||||
|
Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.0/deploy/upgrade/).
|
||||||
|
|
||||||
|
## Deprecation & Incompatibilities
|
||||||
|
|
||||||
|
- Pod Security Policy is an opt-in setting. If installing Longhorn with PSP support, need to enable it first.
|
||||||
|
- The built-in CSI Snapshotter sidecar is upgraded to v5.0.1. The v1beta1 version of Volume Snapshot custom resource is deprecated but still supported. However, it will be removed after upgrading CSI Snapshotter to 6.1 or later versions in the future, so please start using v1 version instead before the deprecated version is removed.
|
||||||
|
|
||||||
|
## Known Issues after Release
|
||||||
|
|
||||||
|
Please follow up on [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) about any outstanding issues found after this release.
|
||||||
|
|
||||||
|
## Highlights
|
||||||
|
|
||||||
|
- [FEATURE] Reclaim/Shrink space of volume ([836](https://github.com/longhorn/longhorn/issues/836)) - @yangchiu @derekbit @smallteeths @shuo-wu
|
||||||
|
- [FEATURE] Backup/Restore Longhorn System ([1455](https://github.com/longhorn/longhorn/issues/1455)) - @c3y1huang @khushboo-rancher
|
||||||
|
- [FEATURE] Online volume expansion ([1674](https://github.com/longhorn/longhorn/issues/1674)) - @shuo-wu @chriscchien
|
||||||
|
- [FEATURE] Record recurring schedule in the backups and allow user choose to use it for the restored volume ([2227](https://github.com/longhorn/longhorn/issues/2227)) - @yangchiu @mantissahz
|
||||||
|
- [FEATURE] NFS support (RWX) GA ([2293](https://github.com/longhorn/longhorn/issues/2293)) - @derekbit @chriscchien
|
||||||
|
- [FEATURE] Support metrics for Volume IOPS, throughput and latency real time ([2406](https://github.com/longhorn/longhorn/issues/2406)) - @derekbit @roger-ryao
|
||||||
|
- [FEATURE] Support bundle enhancement ([2759](https://github.com/longhorn/longhorn/issues/2759)) - @c3y1huang @chriscchien
|
||||||
|
- [FEATURE] Automatic identifying of corrupted replica (bit rot detection) ([3198](https://github.com/longhorn/longhorn/issues/3198)) - @yangchiu @derekbit
|
||||||
|
- [FEATURE] Local volume for distributed data workloads ([3957](https://github.com/longhorn/longhorn/issues/3957)) - @derekbit @chriscchien
|
||||||
|
- [IMPROVEMENT] Support K8s 1.25 by updating removed deprecated resource versions like PodSecurityPolicy ([4003](https://github.com/longhorn/longhorn/issues/4003)) - @PhanLe1010 @chriscchien
|
||||||
|
- [IMPROVEMENT] Faster resync time for fresh replica rebuilding ([4092](https://github.com/longhorn/longhorn/issues/4092)) - @yangchiu @derekbit
|
||||||
|
- [FEATURE] Introduce checksum for snapshots ([4210](https://github.com/longhorn/longhorn/issues/4210)) - @derekbit @roger-ryao
|
||||||
|
- [FEATURE] Update K8s version support and component/pkg/build dependencies ([4239](https://github.com/longhorn/longhorn/issues/4239)) - @yangchiu @PhanLe1010
|
||||||
|
- [BUG] data corruption due to COW and block size not being aligned during rebuilding replicas ([4354](https://github.com/longhorn/longhorn/issues/4354)) - @PhanLe1010 @chriscchien
|
||||||
|
- [IMPROVEMENT] Adjust the iSCSI timeout and the engine-to-replica timeout settings ([4491](https://github.com/longhorn/longhorn/issues/4491)) - @yangchiu @derekbit
|
||||||
|
- [IMPROVEMENT] Using specific block size in Longhorn volume's filesystem ([4594](https://github.com/longhorn/longhorn/issues/4594)) - @derekbit @roger-ryao
|
||||||
|
- [IMPROVEMENT] Speed up replica rebuilding by the metadata such as ctime of snapshot disk files ([4783](https://github.com/longhorn/longhorn/issues/4783)) - @yangchiu @derekbit
|
||||||
|
|
||||||
|
## Enhancements
|
||||||
|
|
||||||
|
- [FEATURE] Configure successfulJobsHistoryLimit of CronJobs ([1711](https://github.com/longhorn/longhorn/issues/1711)) - @weizhe0422 @chriscchien
|
||||||
|
- [FEATURE] Allow customization of the cipher used by cryptsetup in volume encryption ([3353](https://github.com/longhorn/longhorn/issues/3353)) - @mantissahz @chriscchien
|
||||||
|
- [FEATURE] New setting to limit the concurrent volume restoring from backup ([4558](https://github.com/longhorn/longhorn/issues/4558)) - @c3y1huang @chriscchien
|
||||||
|
- [FEATURE] Make FS format options configurable in storage class ([4642](https://github.com/longhorn/longhorn/issues/4642)) - @weizhe0422 @chriscchien
|
||||||
|
|
||||||
|
## Improvement

- [IMPROVEMENT] Change the script into a docker run command mentioned in the 'recovery from longhorn backup without system installed' doc ([1521](https://github.com/longhorn/longhorn/issues/1521)) - @weizhe0422 @chriscchien
- [IMPROVEMENT] Improve the 'recovery from longhorn backup without system installed' doc ([1522](https://github.com/longhorn/longhorn/issues/1522)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Dump NFS ganesha logs to pod stdout ([2380](https://github.com/longhorn/longhorn/issues/2380)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Support failed/obsolete orphaned backup cleanup ([3898](https://github.com/longhorn/longhorn/issues/3898)) - @mantissahz @chriscchien
- [IMPROVEMENT] Liveness and readiness probes for the longhorn csi plugin daemonset ([3907](https://github.com/longhorn/longhorn/issues/3907)) - @c3y1huang @roger-ryao
- [IMPROVEMENT] Longhorn doesn't reuse a failed replica on a disk with fully allocated space ([3921](https://github.com/longhorn/longhorn/issues/3921)) - @PhanLe1010 @chriscchien
- [IMPROVEMENT] Reduce syscalls while reading and writing requests in longhorn-engine (engine <-> replica) ([4122](https://github.com/longhorn/longhorn/issues/4122)) - @yangchiu @derekbit
- [IMPROVEMENT] Reduce read and write calls in liblonghorn (tgt <-> engine) ([4133](https://github.com/longhorn/longhorn/issues/4133)) - @derekbit
- [IMPROVEMENT] Replace the GCC allocator in liblonghorn with a more efficient memory allocator ([4136](https://github.com/longhorn/longhorn/issues/4136)) - @yangchiu @derekbit
- [DOC] Update Helm readme and document ([4175](https://github.com/longhorn/longhorn/issues/4175)) - @derekbit
- [IMPROVEMENT] Purge a volume before rebuilding starts ([4183](https://github.com/longhorn/longhorn/issues/4183)) - @yangchiu @shuo-wu
- [IMPROVEMENT] Schedule volumes based on available disk space ([4185](https://github.com/longhorn/longhorn/issues/4185)) - @yangchiu @c3y1huang
- [IMPROVEMENT] Recognize default tolerations and node selectors to allow Longhorn to run on an RKE mixed cluster ([4246](https://github.com/longhorn/longhorn/issues/4246)) - @c3y1huang @chriscchien
- [IMPROVEMENT] Support bundle doesn't collect the snapshot yamls ([4285](https://github.com/longhorn/longhorn/issues/4285)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Avoid accidentally deleting engine images that are still in use ([4332](https://github.com/longhorn/longhorn/issues/4332)) - @derekbit @chriscchien
- [IMPROVEMENT] Show non-JSON errors from the backup store ([4336](https://github.com/longhorn/longhorn/issues/4336)) - @c3y1huang
- [IMPROVEMENT] Update nfs-ganesha to v4.0 ([4351](https://github.com/longhorn/longhorn/issues/4351)) - @derekbit
- [IMPROVEMENT] Show an error when the frontend fails to initialize ([4362](https://github.com/longhorn/longhorn/issues/4362)) - @c3y1huang
- [IMPROVEMENT] Too many debug-level log messages in engine instance-manager ([4427](https://github.com/longhorn/longhorn/issues/4427)) - @derekbit @chriscchien
- [IMPROVEMENT] Add prep work for fixing the corrupted filesystem using fsck in KB ([4440](https://github.com/longhorn/longhorn/issues/4440)) - @derekbit
- [IMPROVEMENT] Prevent users from accidentally uninstalling Longhorn ([4509](https://github.com/longhorn/longhorn/issues/4509)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Add the possibility to use nodeSelector on the storageClass ([4574](https://github.com/longhorn/longhorn/issues/4574)) - @weizhe0422 @roger-ryao (see the sketch after this list)
- [IMPROVEMENT] Check if the node schedulable condition is set before trying to read it ([4581](https://github.com/longhorn/longhorn/issues/4581)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Review/consolidate the sectorSize in the replica server, replica volume, and engine ([4599](https://github.com/longhorn/longhorn/issues/4599)) - @yangchiu @derekbit
- [IMPROVEMENT] Reorganize longhorn-manager/k8s/patches and auto-generate the preserveUnknownFields field ([4600](https://github.com/longhorn/longhorn/issues/4600)) - @yangchiu @derekbit
- [IMPROVEMENT] share-manager pod bypasses the kubernetes scheduler ([4789](https://github.com/longhorn/longhorn/issues/4789)) - @joshimoo @chriscchien
- [IMPROVEMENT] Unify the format of returned error messages in longhorn-engine ([4828](https://github.com/longhorn/longhorn/issues/4828)) - @derekbit
- [IMPROVEMENT] Longhorn system backup/restore UI ([4855](https://github.com/longhorn/longhorn/issues/4855)) - @smallteeths
- [IMPROVEMENT] Replace the modTime (mtime) with ctime in snapshot hash ([4934](https://github.com/longhorn/longhorn/issues/4934)) - @derekbit @chriscchien
- [BUG] Volume is stuck in an attaching/detaching loop with error `Failed to init frontend: device...` ([4959](https://github.com/longhorn/longhorn/issues/4959)) - @derekbit @PhanLe1010 @chriscchien
- [IMPROVEMENT] Affinity in the longhorn-ui deployment within the helm chart ([4987](https://github.com/longhorn/longhorn/issues/4987)) - @mantissahz @chriscchien
- [IMPROVEMENT] Allow users to change volume.spec.snapshotDataIntegrity on UI ([4994](https://github.com/longhorn/longhorn/issues/4994)) - @yangchiu @smallteeths
- [IMPROVEMENT] Backup and restore recurring jobs on UI ([5009](https://github.com/longhorn/longhorn/issues/5009)) - @smallteeths @chriscchien
- [IMPROVEMENT] Disable `Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly` for RWX volumes ([5017](https://github.com/longhorn/longhorn/issues/5017)) - @derekbit @chriscchien
- [IMPROVEMENT] Enable fast replica rebuilding by default ([5023](https://github.com/longhorn/longhorn/issues/5023)) - @derekbit @roger-ryao
- [IMPROVEMENT] Upgrade tcmalloc in longhorn-engine ([5050](https://github.com/longhorn/longhorn/issues/5050)) - @derekbit
- [IMPROVEMENT] UI shows an error when the backup target is empty for system backup ([5056](https://github.com/longhorn/longhorn/issues/5056)) - @smallteeths @khushboo-rancher
- [IMPROVEMENT] System restore job names should be Longhorn-prefixed ([5057](https://github.com/longhorn/longhorn/issues/5057)) - @c3y1huang @khushboo-rancher
- [BUG] Error in logs while restoring the system backup ([5061](https://github.com/longhorn/longhorn/issues/5061)) - @c3y1huang @chriscchien
- [IMPROVEMENT] Add a warning message when deleting restoring backups ([5065](https://github.com/longhorn/longhorn/issues/5065)) - @smallteeths @khushboo-rancher @roger-ryao
- [IMPROVEMENT] Inconsistent naming convention across volume backup restore and system backup restore ([5066](https://github.com/longhorn/longhorn/issues/5066)) - @smallteeths @roger-ryao
- [IMPROVEMENT] System restore should proceed to restore other volumes if restoring one volume keeps failing for a certain time ([5086](https://github.com/longhorn/longhorn/issues/5086)) - @c3y1huang @khushboo-rancher @roger-ryao
- [IMPROVEMENT] Support a customized number of replicas for the webhook and recovery-backend ([5087](https://github.com/longhorn/longhorn/issues/5087)) - @derekbit @chriscchien
- [IMPROVEMENT] Simplify the volume creation page by moving some configuration items into the advanced configuration ([5090](https://github.com/longhorn/longhorn/issues/5090)) - @yangchiu @smallteeths
- [IMPROVEMENT] Support a replica sync client timeout setting to stabilize replica rebuilding ([5110](https://github.com/longhorn/longhorn/issues/5110)) - @derekbit @chriscchien
- [IMPROVEMENT] Set a newly created volume's data integrity from UI to `ignored` rather than `Fast-Check` ([5126](https://github.com/longhorn/longhorn/issues/5126)) - @yangchiu @smallteeths

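A sketch of the storageClass `nodeSelector` from [4574](https://github.com/longhorn/longhorn/issues/4574), assuming Longhorn's tag-based node selection (replicas of volumes from this class land only on nodes carrying all the listed tags); the class name and tags are illustrative:

```yaml
# Sketch: restrict replica placement to nodes tagged "storage" and "fast".
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-tagged-nodes
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "2"
  nodeSelector: "storage,fast"   # comma-separated Longhorn node tags
```
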
## Performance

- [BUG] Turn a node down and up, workload takes a longer time to come back online in Longhorn v1.2.0 ([2947](https://github.com/longhorn/longhorn/issues/2947)) - @yangchiu @PhanLe1010
- [TASK] RWX volume performance measurement and investigation ([3665](https://github.com/longhorn/longhorn/issues/3665)) - @derekbit
- [TASK] Verify spinning disk/HDD via the current e2e regression ([4182](https://github.com/longhorn/longhorn/issues/4182)) - @yangchiu
- [BUG] test_csi_snapshot_snap_create_volume_from_snapshot failed when using HDD as Longhorn disks ([4227](https://github.com/longhorn/longhorn/issues/4227)) - @yangchiu @PhanLe1010
- [TASK] Disable tcmalloc in the data path because a newer tcmalloc version leads to a performance drop ([5096](https://github.com/longhorn/longhorn/issues/5096)) - @derekbit @chriscchien

## Stability

- [BUG] Longhorn won't fail all replicas if there is no valid backend during the engine starting stage ([1330](https://github.com/longhorn/longhorn/issues/1330)) - @derekbit @roger-ryao
- [BUG] Every other backup fails and crashes the volume (Segmentation Fault) ([1768](https://github.com/longhorn/longhorn/issues/1768)) - @olljanat @mantissahz
- [BUG] Backend sizes do not match 5368709120 != 10737418240 in the engine initiation phase ([3601](https://github.com/longhorn/longhorn/issues/3601)) - @derekbit @chriscchien
- [BUG] Somehow the Rebuilding field inside volume.meta is set to true, causing the volume to get stuck in an attaching/detaching loop ([4212](https://github.com/longhorn/longhorn/issues/4212)) - @yangchiu @derekbit
- [BUG] Engine binary cannot be recovered after being removed accidentally ([4380](https://github.com/longhorn/longhorn/issues/4380)) - @yangchiu @c3y1huang
- [TASK] Disable tcmalloc in longhorn-engine and longhorn-instance-manager ([5068](https://github.com/longhorn/longhorn/issues/5068)) - @derekbit

## Bugs

- [BUG] Removing old instance records after the new IM pod is launched takes 1 minute ([1363](https://github.com/longhorn/longhorn/issues/1363)) - @mantissahz
- [BUG] Restoring volume stuck forever if the backup is already deleted ([1867](https://github.com/longhorn/longhorn/issues/1867)) - @mantissahz @chriscchien
- [BUG] Duplicated default instance manager prevents the engine/replica from being started ([3000](https://github.com/longhorn/longhorn/issues/3000)) - @PhanLe1010 @roger-ryao
- [BUG] Restore from backup sometimes fails when there is a high-frequency recurring backup job w/ retention ([3055](https://github.com/longhorn/longhorn/issues/3055)) - @mantissahz @roger-ryao
- [BUG] Newly created backup stays in `InProgress` when the volume is deleted before the backup finishes ([3122](https://github.com/longhorn/longhorn/issues/3122)) - @mantissahz @chriscchien
- [BUG] Degraded volume generates a failed replica, making the volume unschedulable ([3220](https://github.com/longhorn/longhorn/issues/3220)) - @derekbit @chriscchien
- [BUG] The default access mode of a restored RWX volume is RWO ([3444](https://github.com/longhorn/longhorn/issues/3444)) - @weizhe0422 @roger-ryao
- [BUG] Replica rebuilding failure with error "Replica must be closed, Can not add in state: open" ([3828](https://github.com/longhorn/longhorn/issues/3828)) - @mantissahz @roger-ryao
- [BUG] Max length of volume name not consistent between frontend and backend ([3917](https://github.com/longhorn/longhorn/issues/3917)) - @weizhe0422 @roger-ryao
- [BUG] Can't delete volumesnapshot if the backup is removed first ([4107](https://github.com/longhorn/longhorn/issues/4107)) - @weizhe0422 @chriscchien
- [BUG] An IM-proxy connection not closed in full regression 1.3 ([4113](https://github.com/longhorn/longhorn/issues/4113)) - @c3y1huang @chriscchien
- [BUG] Scale replica warning ([4120](https://github.com/longhorn/longhorn/issues/4120)) - @c3y1huang @chriscchien
- [BUG] Wrong nodeOrDiskEvicted collected in node monitor ([4143](https://github.com/longhorn/longhorn/issues/4143)) - @yangchiu @derekbit
- [BUG] Misleading log "BUG: replica is running but storage IP is empty" ([4153](https://github.com/longhorn/longhorn/issues/4153)) - @shuo-wu @chriscchien
- [BUG] longhorn-manager cannot start while upgrading if the configmap contains volume-sensitive settings ([4160](https://github.com/longhorn/longhorn/issues/4160)) - @derekbit @chriscchien
- [BUG] Replica stuck in a buggy state with status.currentState error and spec.desireState running ([4197](https://github.com/longhorn/longhorn/issues/4197)) - @yangchiu @PhanLe1010
- [BUG] After updating Longhorn to version 1.3.0, only one node had problems and it can't even be deleted ([4213](https://github.com/longhorn/longhorn/issues/4213)) - @derekbit @c3y1huang @chriscchien
- [BUG] Unable to use a TTY error when running environment_check.sh ([4216](https://github.com/longhorn/longhorn/issues/4216)) - @flkdnt @chriscchien
- [BUG] The last healthy replica may be evicted or removed ([4238](https://github.com/longhorn/longhorn/issues/4238)) - @yangchiu @shuo-wu
- [BUG] Volume detaching and attaching repeatedly while creating multiple snapshots with the same id ([4250](https://github.com/longhorn/longhorn/issues/4250)) - @yangchiu @derekbit
- [BUG] Backing image is not deleted and recreated correctly ([4256](https://github.com/longhorn/longhorn/issues/4256)) - @shuo-wu @chriscchien
- [BUG] longhorn-ui fails to start on RKE2 with the cis-1.6 profile for Longhorn v1.3.0 with helm install ([4266](https://github.com/longhorn/longhorn/issues/4266)) - @yangchiu @mantissahz
- [BUG] Longhorn volume stuck in deleting state ([4278](https://github.com/longhorn/longhorn/issues/4278)) - @yangchiu @PhanLe1010
- [BUG] The IP address is duplicated when using a storage network and the second network is controlled by ovs-cni ([4281](https://github.com/longhorn/longhorn/issues/4281)) - @mantissahz
- [BUG] build longhorn-ui image error ([4283](https://github.com/longhorn/longhorn/issues/4283)) - @smallteeths
- [BUG] Wrong conditions in the Chart default-setting manifest for the Rancher-deployed Windows Cluster feature ([4289](https://github.com/longhorn/longhorn/issues/4289)) - @derekbit @chriscchien
- [BUG] Volume operations/rebuilding error during eviction ([4294](https://github.com/longhorn/longhorn/issues/4294)) - @yangchiu @shuo-wu
- [BUG] longhorn-manager deletes the same pod multiple times when rebooting ([4302](https://github.com/longhorn/longhorn/issues/4302)) - @mantissahz @w13915984028
- [BUG] test_setting_backing_image_auto_cleanup failed because the backing image file isn't deleted on the corresponding node as expected ([4308](https://github.com/longhorn/longhorn/issues/4308)) - @shuo-wu @chriscchien
- [BUG] After automatically force-deleting terminating pods of a deployment on a down node, data is lost and I/O errors occur ([4384](https://github.com/longhorn/longhorn/issues/4384)) - @yangchiu @derekbit @PhanLe1010
- [BUG] Volume cannot attach to a node when engine image DaemonSet pods are not fully deployed ([4386](https://github.com/longhorn/longhorn/issues/4386)) - @PhanLe1010 @chriscchien
- [BUG] Error/warning during uninstallation of Longhorn v1.3.1 via manifest ([4405](https://github.com/longhorn/longhorn/issues/4405)) - @PhanLe1010 @roger-ryao
- [BUG] Can't upgrade the engine if a volume was created in Longhorn v1.0 and volume.spec.dataLocality is `""` ([4412](https://github.com/longhorn/longhorn/issues/4412)) - @derekbit @chriscchien
- [BUG] Confusing description of the label for replica deletion ([4430](https://github.com/longhorn/longhorn/issues/4430)) - @yangchiu @smallteeths
- [BUG] Update the Longhorn document on Using the Environment Check Script ([4450](https://github.com/longhorn/longhorn/issues/4450)) - @weizhe0422 @roger-ryao
- [BUG] Unable to search the 1.3.1 doc via algolia ([4457](https://github.com/longhorn/longhorn/issues/4457)) - @mantissahz @roger-ryao
- [BUG] Misleading message "The volume is in expansion progress from size 20Gi to 10Gi" if the expansion is invalid ([4475](https://github.com/longhorn/longhorn/issues/4475)) - @yangchiu @smallteeths
- [BUG] Flaky case test_autosalvage_with_data_locality_enabled ([4489](https://github.com/longhorn/longhorn/issues/4489)) - @weizhe0422
- [BUG] Continuous rebuilding when auto-balance==least-effort and an existing node becomes unschedulable ([4502](https://github.com/longhorn/longhorn/issues/4502)) - @yangchiu @c3y1huang
- [BUG] Inconsistent system snapshots between replicas after rebuilding ([4513](https://github.com/longhorn/longhorn/issues/4513)) - @derekbit
- [BUG] Prometheus metric for backup state (longhorn_backup_state) returns wrong values ([4521](https://github.com/longhorn/longhorn/issues/4521)) - @mantissahz @roger-ryao
- [BUG] Longhorn accidentally schedules all replicas onto a worker node even though the Replica Node Level Soft Anti-Affinity setting is disabled ([4546](https://github.com/longhorn/longhorn/issues/4546)) - @yangchiu @mantissahz
- [BUG] LH continuously reports `invalid customized default setting taint-toleration` ([4554](https://github.com/longhorn/longhorn/issues/4554)) - @weizhe0422 @roger-ryao
- [BUG] The values.yaml in the longhorn helm chart contains unused values ([4601](https://github.com/longhorn/longhorn/issues/4601)) - @weizhe0422 @roger-ryao
- [BUG] longhorn-engine integration test test_restore_to_file_with_backing_file failed after the upgrade to sles 15.4 ([4632](https://github.com/longhorn/longhorn/issues/4632)) - @mantissahz
- [BUG] Cannot pull a backup created by another Longhorn system from the remote backup target ([4637](https://github.com/longhorn/longhorn/issues/4637)) - @yangchiu @mantissahz @roger-ryao
- [BUG] Fix the share-manager deletion failure if the configmap does not exist ([4648](https://github.com/longhorn/longhorn/issues/4648)) - @derekbit @roger-ryao
- [BUG] Updating volume-scheduling-error failure for RWX volumes and expanding volumes ([4654](https://github.com/longhorn/longhorn/issues/4654)) - @derekbit @chriscchien
- [BUG] charts/longhorn/questions.yaml includes outdated csi-image tags ([4669](https://github.com/longhorn/longhorn/issues/4669)) - @PhanLe1010 @roger-ryao
- [BUG] Rebuilding the replica failed after upgrading from 1.2.4 to 1.3.2-rc2 ([4705](https://github.com/longhorn/longhorn/issues/4705)) - @derekbit @chriscchien
- [BUG] Cannot re-run helm uninstallation if the first one failed, and cannot fetch logs of the failed uninstallation pod ([4711](https://github.com/longhorn/longhorn/issues/4711)) - @yangchiu @PhanLe1010 @roger-ryao
- [BUG] The old instance-manager-r pods are not deleted after upgrade ([4726](https://github.com/longhorn/longhorn/issues/4726)) - @mantissahz @chriscchien
- [BUG] Replica Auto Balance repeatedly deletes the local replica and triggers rebuilding ([4761](https://github.com/longhorn/longhorn/issues/4761)) - @c3y1huang @roger-ryao
- [BUG] A deleted or empty volume metafile results in a detach-attach loop ([4846](https://github.com/longhorn/longhorn/issues/4846)) - @mantissahz @chriscchien
- [BUG] Backing image is stuck at `in-progress` status if the provided checksum is incorrect ([4852](https://github.com/longhorn/longhorn/issues/4852)) - @FrankYang0529 @chriscchien
- [BUG] Duplicate channel close error in the backing image manager related components ([4865](https://github.com/longhorn/longhorn/issues/4865)) - @weizhe0422 @roger-ryao
- [BUG] The node ID of the backing image data source somehow gets changed, leading to file handling failures ([4887](https://github.com/longhorn/longhorn/issues/4887)) - @shuo-wu @chriscchien
- [BUG] Cannot upload a backing image larger than 10G ([4902](https://github.com/longhorn/longhorn/issues/4902)) - @smallteeths @shuo-wu @chriscchien
- [BUG] Failed to build the longhorn-instance-manager master branch ([4946](https://github.com/longhorn/longhorn/issues/4946)) - @derekbit
- [BUG] PVC only works with the plural annotation `volumes.kubernetes.io/storage-provisioner: driver.longhorn.io` ([4951](https://github.com/longhorn/longhorn/issues/4951)) - @weizhe0422
- [BUG] Failed to create a replenished replica process because of a newly added option ([4962](https://github.com/longhorn/longhorn/issues/4962)) - @yangchiu @derekbit
- [BUG] Incorrect log messages in longhorn-engine processRemoveSnapshot() ([4980](https://github.com/longhorn/longhorn/issues/4980)) - @derekbit
- [BUG] System backup shows the wrong age ([5047](https://github.com/longhorn/longhorn/issues/5047)) - @smallteeths @khushboo-rancher
- [BUG] System backup should validate an empty backup target ([5055](https://github.com/longhorn/longhorn/issues/5055)) - @c3y1huang @khushboo-rancher
- [BUG] Missing the `restoreVolumeRecurringJob` parameter in the VolumeGet API ([5062](https://github.com/longhorn/longhorn/issues/5062)) - @mantissahz @roger-ryao
- [BUG] System restore stuck in restoring if a PVC exists with an identical name ([5064](https://github.com/longhorn/longhorn/issues/5064)) - @c3y1huang @roger-ryao
- [BUG] No error shown on UI if the system backup conf is not available ([5072](https://github.com/longhorn/longhorn/issues/5072)) - @c3y1huang @khushboo-rancher
- [BUG] System restore missing services ([5074](https://github.com/longhorn/longhorn/issues/5074)) - @yangchiu @c3y1huang
- [BUG] In a system restore, PV & PVC are not restored if the PVC was created with 'longhorn-static' (created via the Longhorn GUI) ([5091](https://github.com/longhorn/longhorn/issues/5091)) - @c3y1huang @khushboo-rancher
- [BUG][v1.4.0-rc1] image security scan CRITICAL issues ([5107](https://github.com/longhorn/longhorn/issues/5107)) - @yangchiu @mantissahz
- [BUG] Snapshot trim has the wrong label in the volume detail page ([5127](https://github.com/longhorn/longhorn/issues/5127)) - @smallteeths @chriscchien
- [BUG] Filesystem on a volume with a backing image is corrupted after applying the trim operation ([5129](https://github.com/longhorn/longhorn/issues/5129)) - @derekbit @chriscchien
- [BUG] Error in uninstall job ([5132](https://github.com/longhorn/longhorn/issues/5132)) - @c3y1huang @chriscchien
- [BUG] Uninstall job unable to delete the systembackup and systemrestore CRs ([5133](https://github.com/longhorn/longhorn/issues/5133)) - @c3y1huang @chriscchien
- [BUG] Nil pointer dereference error on restoring the system backup ([5134](https://github.com/longhorn/longhorn/issues/5134)) - @yangchiu @c3y1huang
- [BUG] UI option Update Replicas Auto Balance should use a capital letter like the others ([5154](https://github.com/longhorn/longhorn/issues/5154)) - @smallteeths @chriscchien
- [BUG] System restore cannot roll out when the volume name differs from the PV ([5157](https://github.com/longhorn/longhorn/issues/5157)) - @yangchiu @c3y1huang
- [BUG] Online expansion doesn't succeed after a failed expansion ([5169](https://github.com/longhorn/longhorn/issues/5169)) - @derekbit @shuo-wu @khushboo-rancher

## Misc

- [DOC] RWX support for the NVIDIA Jetson Ubuntu 18.04 LTS kernel requires enabling NFSv4.1 ([3157](https://github.com/longhorn/longhorn/issues/3157)) - @yangchiu @derekbit
- [DOC] Add information about the encryption algorithm to the documentation ([3285](https://github.com/longhorn/longhorn/issues/3285)) - @mantissahz
- [DOC] Update the doc on volume size after introducing snapshot prune ([4158](https://github.com/longhorn/longhorn/issues/4158)) - @shuo-wu
- [DOC] Update the outdated "Customizing Default Settings" document ([4174](https://github.com/longhorn/longhorn/issues/4174)) - @derekbit
- [TASK] Refresh distro version support for 1.4 ([4401](https://github.com/longhorn/longhorn/issues/4401)) - @weizhe0422
- [TASK] Update the official Longhorn Networking document ([4478](https://github.com/longhorn/longhorn/issues/4478)) - @derekbit
- [TASK] Update preserveUnknownFields fields in the longhorn-manager CRD manifest ([4505](https://github.com/longhorn/longhorn/issues/4505)) - @derekbit @roger-ryao
- [TASK] Disable doc search for archived versions < 1.1 ([4524](https://github.com/longhorn/longhorn/issues/4524)) - @mantissahz
- [TASK] Update longhorn components with the latest backupstore ([4552](https://github.com/longhorn/longhorn/issues/4552)) - @derekbit
- [TASK] Update the base image of all components from BCI 15.3 to 15.4 ([4617](https://github.com/longhorn/longhorn/issues/4617)) - @yangchiu
- [DOC] Update the Longhorn document on Install with Helm ([4745](https://github.com/longhorn/longhorn/issues/4745)) - @roger-ryao
- [TASK] Create the longhornio support-bundle-kit image ([4911](https://github.com/longhorn/longhorn/issues/4911)) - @yangchiu
- [DOC] Add Recurring * Jobs History Limit to the setting reference ([4912](https://github.com/longhorn/longhorn/issues/4912)) - @weizhe0422 @roger-ryao
- [DOC] Add Failed Backup TTL to the setting reference ([4913](https://github.com/longhorn/longhorn/issues/4913)) - @mantissahz
- [TASK] Create the longhornio liveness probe image ([4945](https://github.com/longhorn/longhorn/issues/4945)) - @yangchiu
- [TASK] Make system-managed components branch-based builds ([5024](https://github.com/longhorn/longhorn/issues/5024)) - @yangchiu
- [TASK] Remove unstable s390x from PR checks for all repos ([5040](https://github.com/longhorn/longhorn/issues/5040))
- [TASK] Update longhorn-share-manager's nfs-ganesha to v4.2.1 ([5083](https://github.com/longhorn/longhorn/issues/5083)) - @derekbit @mantissahz
- [DOC] Update the Longhorn document on Setting up Prometheus and Grafana ([5158](https://github.com/longhorn/longhorn/issues/5158)) - @roger-ryao

## Contributors

- @FrankYang0529
- @PhanLe1010
- @c3y1huang
- @chriscchien
- @derekbit
- @flkdnt
- @innobead
- @joshimoo
- @khushboo-rancher
- @mantissahz
- @olljanat
- @roger-ryao
- @shuo-wu
- @smallteeths
- @w13915984028
- @weizhe0422
- @yangchiu

# CHANGELOG/CHANGELOG-1.4.1.md (new file)

## Release Note

**v1.4.1 released!** 🎆

This release introduces improvements and bug fixes, as described below, covering stability, performance, space efficiency, and resilience. Please try it out and provide feedback. Thanks for all the contributions!

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.4.1.**

Longhorn supports 3 installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.1/deploy/install/).

## Upgrade

> **Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.1 from v1.3.x/v1.4.0, which are the only supported source versions.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.1/deploy/upgrade/).

## Deprecation & Incompatibilities

N/A

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) about any outstanding issues found after this release.

## Highlights

- [IMPROVEMENT] Periodically clean up volume snapshots ([3836](https://github.com/longhorn/longhorn/issues/3836)) - @c3y1huang @chriscchien

## Improvement

- [IMPROVEMENT] Do not count a replica reuse failure caused by disconnection ([1923](https://github.com/longhorn/longhorn/issues/1923)) - @yangchiu @mantissahz
- [IMPROVEMENT] Update uninstallation info to include the 'Deleting Confirmation Flag' in the chart ([5250](https://github.com/longhorn/longhorn/issues/5250)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Fix the Guaranteed Engine Manager CPU recommendation formula in UI ([5338](https://github.com/longhorn/longhorn/issues/5338)) - @c3y1huang @smallteeths @roger-ryao
- [IMPROVEMENT] Update PSP validation in the Longhorn upstream chart ([5339](https://github.com/longhorn/longhorn/issues/5339)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Update ganesha nfs to 4.2.3 ([5356](https://github.com/longhorn/longhorn/issues/5356)) - @derekbit @roger-ryao
- [IMPROVEMENT] Explicitly set the write-cache of the longhorn block device to off ([5382](https://github.com/longhorn/longhorn/issues/5382)) - @derekbit @chriscchien

## Stability

- [BUG] Memory leak in the CSI plugin caused by stuck umount processes when the RWX volume is already gone ([5296](https://github.com/longhorn/longhorn/issues/5296)) - @derekbit @roger-ryao
- [BUG] share-manager pod failed to restart after a kubelet restart ([5507](https://github.com/longhorn/longhorn/issues/5507)) - @yangchiu @derekbit

## Bugs

- [BUG] Longhorn 1.3.2 fails to back up & restore volumes behind an Internet proxy ([5054](https://github.com/longhorn/longhorn/issues/5054)) - @mantissahz @chriscchien
- [BUG] RWX doesn't work with release 1.4.0 due to an end grace update error from the recovery backend ([5183](https://github.com/longhorn/longhorn/issues/5183)) - @derekbit @chriscchien
- [BUG] Incorrect indentation of charts/questions.yaml ([5196](https://github.com/longhorn/longhorn/issues/5196)) - @mantissahz @roger-ryao
- [BUG] Updating the "Allow snapshots removal during trim" option for old volumes failed ([5218](https://github.com/longhorn/longhorn/issues/5218)) - @shuo-wu @roger-ryao
- [BUG] Incorrect router retry mechanism ([5259](https://github.com/longhorn/longhorn/issues/5259)) - @mantissahz @chriscchien
- [BUG] System Backup is stuck at Uploading if there are PVs not provisioned by the CSI driver ([5286](https://github.com/longhorn/longhorn/issues/5286)) - @c3y1huang @chriscchien
- [BUG] Sync up with the backup target during DR volume activation ([5292](https://github.com/longhorn/longhorn/issues/5292)) - @yangchiu @weizhe0422
- [BUG] environment_check.sh does not correctly handle different kernel versions in a cluster ([5304](https://github.com/longhorn/longhorn/issues/5304)) - @achims311 @roger-ryao
- [BUG] instance-manager-r high memory consumption ([5312](https://github.com/longhorn/longhorn/issues/5312)) - @derekbit @roger-ryao
- [BUG] Replica rebuilding caused by rke2/kubelet restart ([5340](https://github.com/longhorn/longhorn/issues/5340)) - @derekbit @chriscchien
- [BUG] Error message not consistent between creating/updating a recurring job when the retain number is greater than 50 ([5434](https://github.com/longhorn/longhorn/issues/5434)) - @c3y1huang @chriscchien
- [BUG] Do not copy the Host header to API requests forwarded to Longhorn Manager ([5438](https://github.com/longhorn/longhorn/issues/5438)) - @yangchiu @smallteeths
- [BUG] RWX volume attachment is getting Failed ([5456](https://github.com/longhorn/longhorn/issues/5456)) - @derekbit
- [BUG] Test case test_backup_lock_deletion_during_restoration failed ([5458](https://github.com/longhorn/longhorn/issues/5458)) - @yangchiu @derekbit
- [BUG] [master] [v1.4.1-rc1] Volume restoration will never complete if the attached node is down ([5464](https://github.com/longhorn/longhorn/issues/5464)) - @derekbit @weizhe0422 @chriscchien
- [BUG] Unable to create a support bundle agent pod in an air-gapped environment ([5467](https://github.com/longhorn/longhorn/issues/5467)) - @yangchiu @c3y1huang
- [BUG] Node disconnection test failed ([5476](https://github.com/longhorn/longhorn/issues/5476)) - @yangchiu @derekbit
- [BUG] Physical node down test failed ([5477](https://github.com/longhorn/longhorn/issues/5477)) - @derekbit @chriscchien
- [BUG] Backing image with sync failure ([5481](https://github.com/longhorn/longhorn/issues/5481)) - @ChanYiLin @roger-ryao
- [BUG] Example of data migration doesn't work for hidden/dot-files ([5484](https://github.com/longhorn/longhorn/issues/5484)) - @hedefalk @shuo-wu @chriscchien
- [BUG] Test case test_dr_volume_with_backup_block_deletion failed ([5489](https://github.com/longhorn/longhorn/issues/5489)) - @yangchiu @derekbit

## Misc

- [TASK][UI] Add new recurring job tasks ([5272](https://github.com/longhorn/longhorn/issues/5272)) - @smallteeths @chriscchien

## Contributors

- @ChanYiLin
- @PhanLe1010
- @achims311
- @c3y1huang
- @chriscchien
- @derekbit
- @hedefalk
- @innobead
- @mantissahz
- @roger-ryao
- @shuo-wu
- @smallteeths
- @weizhe0422
- @yangchiu

# CHANGELOG/CHANGELOG-1.4.2.md (new file)

## Release Note

### **v1.4.2 released!** 🎆

Longhorn v1.4.2 is the latest stable version of Longhorn 1.4.
It introduces improvements and bug fixes in the areas of stability, performance, space efficiency, resilience, and so on. Please try it out and provide feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing v1.4.2.**

Longhorn supports 3 installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.2/deploy/install/).

## Upgrade

> **Please read the [important notes](https://longhorn.io/docs/1.4.2/deploy/important-notes/) first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.2 from v1.3.x/v1.4.x, which are the only supported source versions.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.2/deploy/upgrade/).

## Deprecation & Incompatibilities

N/A

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) about any outstanding issues found after this release.

## Highlights

- [IMPROVEMENT] Use PDB to protect Longhorn components from unexpected drains ([3304](https://github.com/longhorn/longhorn/issues/3304)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Introduce timeout mechanism for the sparse file syncing service ([4305](https://github.com/longhorn/longhorn/issues/4305)) - @yangchiu @ChanYiLin
- [IMPROVEMENT] Recurring jobs create new snapshots while not being able to clean up old ones ([4898](https://github.com/longhorn/longhorn/issues/4898)) - @mantissahz @chriscchien (see the sketch after this list)

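As an illustration of the recurring-job retention behavior tightened by [4898](https://github.com/longhorn/longhorn/issues/4898), a minimal RecurringJob sketch; the field names follow the longhorn.io/v1beta2 CRD, while the job name and schedule are invented for the example:

```yaml
# Sketch: an hourly snapshot job whose old snapshots beyond "retain"
# are cleaned up by the job itself.
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: snapshot-hourly
  namespace: longhorn-system
spec:
  cron: "0 * * * *"   # every hour, on the hour
  task: "snapshot"
  retain: 24          # keep at most 24 snapshots created by this job
  concurrency: 2      # how many volumes are processed in parallel
  groups:
  - default
```
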
## Improvement

- [IMPROVEMENT] Support bundle collects dmesg, syslog, and related information from longhorn nodes ([5073](https://github.com/longhorn/longhorn/issues/5073)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Fix the BackingImage uploading/downloading flow to prevent client timeouts ([5443](https://github.com/longhorn/longhorn/issues/5443)) - @ChanYiLin @chriscchien
- [IMPROVEMENT] Create a new setting so that Longhorn removes the PDB for an instance-manager-r that doesn't have any running instance inside it ([5549](https://github.com/longhorn/longhorn/issues/5549)) - @PhanLe1010 @khushboo-rancher
- [IMPROVEMENT] Deprecate the `allow-node-drain-with-last-healthy-replica` setting and replace it with the `node-drain-policy` setting ([5585](https://github.com/longhorn/longhorn/issues/5585)) - @yangchiu @PhanLe1010
- [IMPROVEMENT][UI] Recurring jobs create new snapshots while not being able to clean up old ones ([5610](https://github.com/longhorn/longhorn/issues/5610)) - @mantissahz @smallteeths @roger-ryao
- [IMPROVEMENT] Only activate a replica if it doesn't have a deletion timestamp during volume engine upgrade ([5632](https://github.com/longhorn/longhorn/issues/5632)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Clean up the backup target if the backup target setting is unset ([5655](https://github.com/longhorn/longhorn/issues/5655)) - @yangchiu @ChanYiLin

## Resilience

- [BUG] Directly mark a replica as failed if the node is deleted ([5542](https://github.com/longhorn/longhorn/issues/5542)) - @weizhe0422 @roger-ryao
- [BUG] RWX volume is stuck at detaching when the attached node is down ([5558](https://github.com/longhorn/longhorn/issues/5558)) - @derekbit @roger-ryao
- [BUG] Backup monitor gets stuck in an infinite loop if the backup isn't found ([5662](https://github.com/longhorn/longhorn/issues/5662)) - @derekbit @chriscchien
- [BUG] Resources such as replicas are somehow not mutated when the network is unstable ([5762](https://github.com/longhorn/longhorn/issues/5762)) - @derekbit @roger-ryao
- [BUG] Instance manager may not update instance status for a minute after starting ([5809](https://github.com/longhorn/longhorn/issues/5809)) - @ejweber @chriscchien

## Bugs

- [BUG] Deleting an uploading backing image does not delete the corresponding LH temp file ([3682](https://github.com/longhorn/longhorn/issues/3682)) - @ChanYiLin @chriscchien
- [BUG] Cannot create a backup in a cluster where the engine image is not fully deployed ([5248](https://github.com/longhorn/longhorn/issues/5248)) - @ChanYiLin @roger-ryao
- [BUG] Upgrade engine --> spec.restoreVolumeRecurringJob and spec.snapshotDataIntegrity Unsupported value ([5485](https://github.com/longhorn/longhorn/issues/5485)) - @yangchiu @derekbit
- [BUG] Bulk backup deletion causes a restoring volume to finish in the attached state ([5506](https://github.com/longhorn/longhorn/issues/5506)) - @ChanYiLin @roger-ryao
- [BUG] Volume expansion starts for no reason and gets stuck on current size > expected size ([5513](https://github.com/longhorn/longhorn/issues/5513)) - @mantissahz @roger-ryao
- [BUG] RWX volume attachment failed if retried enough times ([5537](https://github.com/longhorn/longhorn/issues/5537)) - @yangchiu @derekbit
- [BUG] instance-manager-e emits `Wait for process pvc-xxxx to shutdown` constantly ([5575](https://github.com/longhorn/longhorn/issues/5575)) - @derekbit @roger-ryao
- [BUG] Support bundle kit should respect node selector & taint toleration ([5614](https://github.com/longhorn/longhorn/issues/5614)) - @yangchiu @c3y1huang
- [BUG] Values overlap on the Instance Manager Image page ([5622](https://github.com/longhorn/longhorn/issues/5622)) - @smallteeths @chriscchien
- [BUG] Instance manager PDB created with the wrong selector, blocking the draining of the wrongly selected node forever ([5680](https://github.com/longhorn/longhorn/issues/5680)) - @PhanLe1010 @chriscchien
- [BUG] During volume live engine upgrade, if the replica pod is killed, the volume is stuck in upgrading forever ([5684](https://github.com/longhorn/longhorn/issues/5684)) - @yangchiu @PhanLe1010
- [BUG] Instance manager PDBs cannot be removed if the longhorn-manager pod on its spec node is not available ([5688](https://github.com/longhorn/longhorn/issues/5688)) - @PhanLe1010 @roger-ryao
- [BUG] Replica rebuilding is possibly issued to a wrong replica ([5709](https://github.com/longhorn/longhorn/issues/5709)) - @ejweber @roger-ryao
- [BUG] longhorn upgrade does not upgrade the engineimage ([5740](https://github.com/longhorn/longhorn/issues/5740)) - @shuo-wu @chriscchien
- [BUG] `test_replica_auto_balance_when_replica_on_unschedulable_node`: error in creating a volume with nodeSelector and dataLocality parameters ([5745](https://github.com/longhorn/longhorn/issues/5745)) - @c3y1huang @roger-ryao
- [BUG] Unable to back up a volume after the NFS server IP changes ([5856](https://github.com/longhorn/longhorn/issues/5856)) - @derekbit @roger-ryao

## Misc

- [TASK] Check and update the networking doc & example YAMLs ([5651](https://github.com/longhorn/longhorn/issues/5651)) - @yangchiu @shuo-wu

## Contributors

- @ChanYiLin
- @PhanLe1010
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @innobead
- @khushboo-rancher
- @mantissahz
- @roger-ryao
- @shuo-wu
- @smallteeths
- @weizhe0422
- @yangchiu

# CHANGELOG/CHANGELOG-1.4.3.md (new file)

## Release Note

### **v1.4.3 released!** 🎆

Longhorn v1.4.3 is the latest stable version of Longhorn 1.4.
It introduces improvements and bug fixes in the areas of stability, resilience, and so on. Please try it out and provide feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing v1.4.3.**

Longhorn supports 3 installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.4.3/deploy/install/).

## Upgrade

> **Please read the [important notes](https://longhorn.io/docs/1.4.3/deploy/important-notes/) first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.4.3 from v1.3.x/v1.4.x, which are the only supported source versions.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.4.3/deploy/upgrade/).

## Deprecation & Incompatibilities

N/A

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) about any outstanding issues found after this release.

## Improvement

- [IMPROVEMENT] Assign the pods to the same node where the strict-local volume is present ([5448](https://github.com/longhorn/longhorn/issues/5448)) - @c3y1huang @chriscchien

## Resilience

- [BUG] Filesystem corrupted after deleting instance-manager-r for a locality best-effort volume ([5801](https://github.com/longhorn/longhorn/issues/5801)) - @yangchiu @ChanYiLin @mantissahz

## Bugs

- [BUG] 'Upgrade Engine' still shows up in a specific situation when the engine is already upgraded ([3063](https://github.com/longhorn/longhorn/issues/3063)) - @weizhe0422 @PhanLe1010 @smallteeths
- [BUG] A DR volume remains in standby mode even after activation if there are one or more failed replicas ([3069](https://github.com/longhorn/longhorn/issues/3069)) - @yangchiu @mantissahz
- [BUG] Prevent Longhorn uninstallation from getting stuck due to backups in error ([5868](https://github.com/longhorn/longhorn/issues/5868)) - @ChanYiLin @mantissahz
- [BUG] Unable to create a support bundle if the previous one stayed in the ReadyForDownload phase ([5882](https://github.com/longhorn/longhorn/issues/5882)) - @c3y1huang @roger-ryao
- [BUG] share-manager for a given pvc keeps restarting (other pvcs are working fine) ([5954](https://github.com/longhorn/longhorn/issues/5954)) - @yangchiu @derekbit
- [BUG] Replica auto-rebalance doesn't respect the node selector ([5971](https://github.com/longhorn/longhorn/issues/5971)) - @c3y1huang @roger-ryao
- [BUG] Extra snapshot generated when cloning from a detached volume ([5986](https://github.com/longhorn/longhorn/issues/5986)) - @weizhe0422 @ejweber
- [BUG] User-created snapshot deleted after node drain and uncordon ([5992](https://github.com/longhorn/longhorn/issues/5992)) - @yangchiu @mantissahz
- [BUG] In some specific situations, a system backup is auto-deleted when creating another one ([6045](https://github.com/longhorn/longhorn/issues/6045)) - @c3y1huang @chriscchien
- [BUG] Backing image deletion gets stuck if it is deleted during the uploading process while the backing image data source is in the ready-for-transfer state ([6086](https://github.com/longhorn/longhorn/issues/6086)) - @WebberHuang1118 @chriscchien
- [BUG] Backing image manager fails when SELinux is enabled ([6108](https://github.com/longhorn/longhorn/issues/6108)) - @ejweber @chriscchien
- [BUG] test_dr_volume_with_restore_command_error failed ([6130](https://github.com/longhorn/longhorn/issues/6130)) - @mantissahz @roger-ryao
- [BUG] Longhorn doesn't remove the system backups CRD on uninstallation ([6185](https://github.com/longhorn/longhorn/issues/6185)) - @c3y1huang @khushboo-rancher
- [BUG] Test case test_ha_backup_deletion_recovery failed in rhel or rockylinux arm64 environment ([6213](https://github.com/longhorn/longhorn/issues/6213)) - @yangchiu @ChanYiLin @mantissahz
- [BUG] Engine continues to attempt to rebuild a replica while detaching ([6217](https://github.com/longhorn/longhorn/issues/6217)) - @yangchiu @ejweber
- [BUG] Unable to receive a support bundle from the UI when it's large (400MB+) ([6256](https://github.com/longhorn/longhorn/issues/6256)) - @c3y1huang @chriscchien
- [BUG] Migration test case failed: unable to detach volume; migration is not ready yet ([6238](https://github.com/longhorn/longhorn/issues/6238)) - @yangchiu @PhanLe1010 @khushboo-rancher
- [BUG] Restored volumes stuck in the attaching state ([6239](https://github.com/longhorn/longhorn/issues/6239)) - @derekbit @roger-ryao

## Contributors

- @ChanYiLin
- @PhanLe1010
- @WebberHuang1118
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @innobead
- @khushboo-rancher
- @mantissahz
- @roger-ryao
- @smallteeths
- @weizhe0422
- @yangchiu

# CHANGELOG/CHANGELOG-1.5.0.md (new file)

## Release Note

### **v1.5.0 released!** 🎆

Longhorn v1.5.0 is the latest version of Longhorn 1.5.
It introduces many enhancements, improvements, and bug fixes as described below, covering performance, stability, maintenance, and resilience. Please try it out and provide feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

- [v2 Data Engine based on SPDK - Preview](https://github.com/longhorn/longhorn/issues/5751)

> **Please note that this is a preview feature, so it should not be used in any production environment. A preview feature is disabled by default and may change in subsequent versions until it becomes generally available.**

In addition to the existing iSCSI-based (v1) data engine, we are introducing the v2 data engine based on SPDK (Storage Performance Development Kit). This release includes the introduction of volume lifecycle management, degraded volume handling, offline replica rebuilding, block device management, and orphaned replica management. For the performance benchmark and comparison with v1, check the report [here](https://longhorn.io/docs/1.5.0/spdk/performance-benchmark/).
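A hedged sketch of switching the preview on; the `v2-data-engine` setting name and the `backendStoreDriver` StorageClass parameter follow the v1.5 docs and should be treated as assumptions, and the class name is invented:

```yaml
# Assumed v1.5 names: the "v2-data-engine" global setting and the
# "backendStoreDriver" StorageClass parameter selecting the SPDK engine.
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: v2-data-engine
  namespace: longhorn-system
value: "true"
---
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: longhorn-v2
provisioner: driver.longhorn.io
parameters:
  numberOfReplicas: "3"
  backendStoreDriver: "v2"   # route volumes from this class to the v2 engine
```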

- [Longhorn Volume Attachment](https://github.com/longhorn/longhorn/issues/3715)

Introducing the new Longhorn VolumeAttachment CR, which ensures exclusive attachment and supports automatic volume attachment and detachment for various headless operations such as volume cloning, backing image export, and recurring jobs.

- [Cluster Autoscaler - GA](https://github.com/longhorn/longhorn/issues/5238)

Cluster Autoscaler was initially introduced as an experimental feature in v1.3. After undergoing automatic validation on different public cloud Kubernetes distributions and receiving user feedback, it has now reached general availability.

- [Instance Manager Engine & Replica Consolidation](https://github.com/longhorn/longhorn/issues/5208)

Previously, two separate instance manager pods were responsible for volume engine and replica process management. However, this setup required high resource usage, especially during live upgrades. In this release, these pods have been merged into a single instance manager, reducing the initial resource requirements.

- [Volume Backup Compression Methods](https://github.com/longhorn/longhorn/issues/5189)

Longhorn supports different compression methods for volume backups, including lz4, gzip, or no compression. This allows users to choose the most suitable method based on their data type and usage requirements.
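A sketch of selecting a compression method globally, assuming the setting is named `backup-compression-method` with the values `none`, `lz4`, and `gzip` listed above:

```yaml
# Sketch: choose lz4 (fast) over gzip (smaller backups) cluster-wide.
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: backup-compression-method
  namespace: longhorn-system
value: "lz4"
```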

- [Automatic Volume Trim Recurring Job](https://github.com/longhorn/longhorn/issues/5186)

While volume filesystem trim was introduced in v1.4, users had to perform the operation manually. From this release, users can create a recurring job that automatically runs the trim process, improving space efficiency without requiring human intervention.
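A minimal sketch of such a job, assuming the new task type is named `filesystem-trim`; the job name and schedule are illustrative:

```yaml
# Sketch: trim the filesystems of volumes in the "default" group weekly.
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: trim-weekly
  namespace: longhorn-system
spec:
  cron: "0 3 * * 0"        # Sundays at 03:00
  task: "filesystem-trim"
  retain: 0                # retention does not apply to trim jobs
  concurrency: 1
  groups:
  - default
```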

- [RWX Volume Trim](https://github.com/longhorn/longhorn/issues/5143)

Longhorn supports filesystem trim for RWX (Read-Write-Many) volumes, expanding the trim functionality beyond RWO (Read-Write-Once) volumes.

- [Upgrade Path Enforcement & Downgrade Prevention](https://github.com/longhorn/longhorn/issues/5131)

To ensure compatibility after an upgrade, we have implemented upgrade path enforcement. This prevents unintended downgrades and ensures the system and data remain intact.

- [Backing Image Management via CSI VolumeSnapshot](https://github.com/longhorn/longhorn/issues/5005)

Users can now utilize the unified CSI VolumeSnapshot interface to manage Backing Images similarly to volume snapshots and backups.
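A hedged sketch of the CSI side of this: a VolumeSnapshotClass that targets backing images rather than snapshots. The `type: bi` and `export-type` parameters follow the v1.5 docs and should be verified against your release:

```yaml
# Illustrative only: snapshots taken with this class produce backing images.
apiVersion: snapshot.storage.k8s.io/v1
kind: VolumeSnapshotClass
metadata:
  name: longhorn-backing-image
driver: driver.longhorn.io
deletionPolicy: Delete
parameters:
  type: bi               # manage backing images instead of volume snapshots
  export-type: qcow2     # format used when exporting a volume
```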

- [Snapshot Cleanup & Delete Recurring Job](https://github.com/longhorn/longhorn/issues/3836)

Introducing two new recurring job types specifically designed for snapshot cleanup and deletion. These jobs allow users to remove unnecessary snapshots for better space efficiency.
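A minimal sketch, assuming the two new task names are `snapshot-cleanup` (prunes system-generated snapshots) and `snapshot-delete` (removes snapshots beyond the retain count):

```yaml
# Sketch: nightly cleanup of unnecessary system snapshots.
apiVersion: longhorn.io/v1beta2
kind: RecurringJob
metadata:
  name: snapshot-cleanup-nightly
  namespace: longhorn-system
spec:
  cron: "30 2 * * *"       # every day at 02:30
  task: "snapshot-cleanup" # or "snapshot-delete" with a non-zero retain
  retain: 0
  concurrency: 2
  groups:
  - default
```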

- [CIFS Backup Store](https://github.com/longhorn/longhorn/issues/3599) & [Azure Backup Store](https://github.com/longhorn/longhorn/issues/1309)

To enhance users' backup strategies and align with data governance policies, Longhorn now supports additional backup storage protocols, including CIFS and Azure.
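A sketch of pointing the backup target at one of these stores; the URL schemes below are assumptions based on the v1.5 docs, and credentials (e.g. a CIFS username/password or an Azure account name/key) would go into the secret referenced by the `backup-target-credential-secret` setting rather than here:

```yaml
# Illustrative backup-target values for the new protocols.
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: backup-target
  namespace: longhorn-system
value: "cifs://backup-server/longhorn-share"
# or, for Azure Blob Storage (assumed scheme):
# value: "azblob://my-container@core.windows.net/"
```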

- [Kubernetes Upgrade Node Drain Policy](https://github.com/longhorn/longhorn/issues/3304)

The new Node Drain Policy provides flexible strategies to protect volume data during Kubernetes upgrades or node maintenance operations. This ensures the integrity and availability of your volumes.
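A sketch of the setting, assuming the v1.5 value names `block-if-contains-last-replica` (default), `allow-if-replica-is-stopped`, and `always-allow`:

```yaml
# Sketch: keep the default, most protective drain policy explicit.
apiVersion: longhorn.io/v1beta2
kind: Setting
metadata:
  name: node-drain-policy
  namespace: longhorn-system
value: "block-if-contains-last-replica"
```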
|
||||||
|
|
||||||
|
## Installation
|
||||||
|
|
||||||
|
> **Please ensure your Kubernetes cluster is at least v1.21 before installing Longhorn v1.5.0.**
|
||||||
|
|
||||||
|
Longhorn supports 3 installation ways including Rancher App Marketplace, Kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.5.0/deploy/install/).
|
||||||
|
|
||||||
|
## Upgrade
|
||||||
|
|
||||||
|
> **Please ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.5.0 from v1.4.x. Only support upgrading from 1.4.x.**
|
||||||
|
|
||||||
|
Follow the upgrade instructions [here](https://longhorn.io/docs/1.5.0/deploy/upgrade/).
|
||||||
|
|
||||||
|
## Deprecation & Incompatibilities
|
||||||
|
|
||||||
|
Please check the [important notes](https://longhorn.io/docs/1.5.0/deploy/important-notes/) to know more about deprecated, removed, incompatible features and important changes. If you upgrade indirectly from an older version like v1.3.x, please also check the corresponding important note for each upgrade version path.
|
||||||
|
|
||||||
|
## Known Issues after Release
|
||||||
|
|
||||||
|
Please follow up on [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) about any outstanding issues found after this release.
|
||||||
|
|
||||||
|
## Highlights
|
||||||
|
|
||||||
|
- [DOC] Provide the user guide for Kubernetes upgrade ([494](https://github.com/longhorn/longhorn/issues/494)) - @PhanLe1010
- [FEATURE] Backups to Azure Blob Storage ([1309](https://github.com/longhorn/longhorn/issues/1309)) - @mantissahz @chriscchien
- [IMPROVEMENT] Use PDB to protect Longhorn components from unexpected drains ([3304](https://github.com/longhorn/longhorn/issues/3304)) - @yangchiu @PhanLe1010
- [FEATURE] CIFS Backup Store Support ([3599](https://github.com/longhorn/longhorn/issues/3599)) - @derekbit @chriscchien
- [IMPROVEMENT] Consolidate volume attach/detach implementation ([3715](https://github.com/longhorn/longhorn/issues/3715)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Periodically clean up volume snapshots ([3836](https://github.com/longhorn/longhorn/issues/3836)) - @c3y1huang @chriscchien
- [IMPROVEMENT] Introduce timeout mechanism for the sparse file syncing service ([4305](https://github.com/longhorn/longhorn/issues/4305)) - @yangchiu @ChanYiLin
- [IMPROVEMENT] Recurring jobs create new snapshots while being not able to clean up old ones ([4898](https://github.com/longhorn/longhorn/issues/4898)) - @mantissahz @chriscchien
- [FEATURE] BackingImage Management via VolumeSnapshot ([5005](https://github.com/longhorn/longhorn/issues/5005)) - @ChanYiLin @chriscchien
- [FEATURE] Upgrade path enforcement & downgrade prevention ([5131](https://github.com/longhorn/longhorn/issues/5131)) - @yangchiu @mantissahz
- [FEATURE] Support RWX volume trim ([5143](https://github.com/longhorn/longhorn/issues/5143)) - @derekbit @chriscchien
- [FEATURE] Auto Trim via recurring job ([5186](https://github.com/longhorn/longhorn/issues/5186)) - @c3y1huang @chriscchien
- [FEATURE] Introduce faster compression and multiple threads for volume backup & restore ([5189](https://github.com/longhorn/longhorn/issues/5189)) - @derekbit @roger-ryao
- [FEATURE] Consolidate Instance Manager Engine & Replica for resource consumption reduction ([5208](https://github.com/longhorn/longhorn/issues/5208)) - @yangchiu @c3y1huang
- [FEATURE] Cluster Autoscaler Support GA ([5238](https://github.com/longhorn/longhorn/issues/5238)) - @yangchiu @c3y1huang
- [FEATURE] Update K8s version support and component/pkg/build dependencies for Longhorn 1.5 ([5595](https://github.com/longhorn/longhorn/issues/5595)) - @yangchiu @ejweber
- [FEATURE] Support SPDK Data Engine - Preview ([5751](https://github.com/longhorn/longhorn/issues/5751)) - @derekbit @shuo-wu @DamiaSan

## Enhancements

- [FEATURE] Allow users to directly activate a restoring/DR volume as long as there is one ready replica. ([1512](https://github.com/longhorn/longhorn/issues/1512)) - @mantissahz @weizhe0422
- [REFACTOR] volume controller refactoring/split up, to simplify the control flow ([2527](https://github.com/longhorn/longhorn/issues/2527)) - @PhanLe1010 @chriscchien
- [FEATURE] Import and export SPDK longhorn volumes to longhorn sparse file directory ([4100](https://github.com/longhorn/longhorn/issues/4100)) - @DamiaSan
- [FEATURE] Add a global `storage reserved` setting for newly created longhorn nodes' disks ([4773](https://github.com/longhorn/longhorn/issues/4773)) - @mantissahz @chriscchien
- [FEATURE] Support backup volumes during system backup ([5011](https://github.com/longhorn/longhorn/issues/5011)) - @c3y1huang @chriscchien
- [FEATURE] Support SPDK lvol shallow copy for newly replica creation ([5217](https://github.com/longhorn/longhorn/issues/5217)) - @DamiaSan
- [FEATURE] Introduce longhorn-spdk-engine for SPDK volume management ([5282](https://github.com/longhorn/longhorn/issues/5282)) - @shuo-wu
- [FEATURE] Support replica-zone-soft-anti-affinity setting per volume ([5358](https://github.com/longhorn/longhorn/issues/5358)) - @ChanYiLin @smallteeths @chriscchien
- [FEATURE] Install Opt-In NetworkPolicies ([5403](https://github.com/longhorn/longhorn/issues/5403)) - @yangchiu @ChanYiLin
- [FEATURE] Create Longhorn SPDK Engine component with basic fundamental functions ([5406](https://github.com/longhorn/longhorn/issues/5406)) - @shuo-wu
- [FEATURE] Add status APIs for shallow copy and IO pause/resume ([5647](https://github.com/longhorn/longhorn/issues/5647)) - @DamiaSan
- [FEATURE] Introduce a new disk type, disk management and replica scheduler for SPDK volumes ([5683](https://github.com/longhorn/longhorn/issues/5683)) - @derekbit @roger-ryao
- [FEATURE] Support replica scheduling for SPDK volume ([5711](https://github.com/longhorn/longhorn/issues/5711)) - @derekbit
- [FEATURE] Create SPDK gRPC service for instance manager ([5712](https://github.com/longhorn/longhorn/issues/5712)) - @shuo-wu
- [FEATURE] Environment check script for Longhorn with SPDK ([5738](https://github.com/longhorn/longhorn/issues/5738)) - @derekbit @chriscchien
- [FEATURE] Deployment manifests for helping install SPDK dependencies, utilities and libraries ([5739](https://github.com/longhorn/longhorn/issues/5739)) - @yangchiu @derekbit
- [FEATURE] Implement Disk gRPC Service in Instance Manager for collecting SPDK disk statistics from SPDK gRPC service ([5744](https://github.com/longhorn/longhorn/issues/5744)) - @derekbit @chriscchien
- [FEATURE] Support for SPDK RAID1 by setting the minimum number of base_bdevs to 1 ([5758](https://github.com/longhorn/longhorn/issues/5758)) - @yangchiu @DamiaSan
- [FEATURE] Add a global setting for enabling and disabling SPDK feature ([5778](https://github.com/longhorn/longhorn/issues/5778)) - @yangchiu @derekbit
- [FEATURE] Identify and manage orphaned lvols and raid bdevs if the associated `Volume` resources are not existing ([5827](https://github.com/longhorn/longhorn/issues/5827)) - @yangchiu @derekbit
- [FEATURE] Longhorn UI for SPDK feature ([5846](https://github.com/longhorn/longhorn/issues/5846)) - @smallteeths @chriscchien
- [FEATURE] UI modification to work with new AD mechanism (Longhorn UI -> Longhorn API) ([6004](https://github.com/longhorn/longhorn/issues/6004)) - @yangchiu @smallteeths
- [FEATURE] Replica offline rebuild over SPDK - data engine ([6067](https://github.com/longhorn/longhorn/issues/6067)) - @shuo-wu
- [FEATURE] Support automatic offline replica rebuilding of volumes using SPDK data engine ([6071](https://github.com/longhorn/longhorn/issues/6071)) - @yangchiu @derekbit

## Improvement

- [IMPROVEMENT] Do not count the failure replica reuse failure caused by the disconnection ([1923](https://github.com/longhorn/longhorn/issues/1923)) - @yangchiu @mantissahz
- [IMPROVEMENT] Consider changing the over provisioning default/recommendation to 100% percentage (no over provisioning) ([2694](https://github.com/longhorn/longhorn/issues/2694)) - @c3y1huang @chriscchien
- [BUG] StorageClass of pv and pvc of a recovered pv should not always be default. ([3506](https://github.com/longhorn/longhorn/issues/3506)) - @ChanYiLin @smallteeths @roger-ryao
- [IMPROVEMENT] Auto-attach volume for K8s CSI snapshot ([3726](https://github.com/longhorn/longhorn/issues/3726)) - @weizhe0422 @PhanLe1010
- [IMPROVEMENT] Change Longhorn API to create/delete snapshot CRs instead of calling engine CLI ([3995](https://github.com/longhorn/longhorn/issues/3995)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Add support for crypto parameters for RWX volumes ([4829](https://github.com/longhorn/longhorn/issues/4829)) - @mantissahz @roger-ryao
- [IMPROVEMENT] Remove the global setting `mkfs-ext4-parameters` ([4914](https://github.com/longhorn/longhorn/issues/4914)) - @ejweber @roger-ryao
- [IMPROVEMENT] Move all snapshot related settings at one place. ([4930](https://github.com/longhorn/longhorn/issues/4930)) - @smallteeths @roger-ryao
- [IMPROVEMENT] Remove system managed component image settings ([5028](https://github.com/longhorn/longhorn/issues/5028)) - @mantissahz @chriscchien
- [IMPROVEMENT] Set default `engine-replica-timeout` value for engine controller start command ([5031](https://github.com/longhorn/longhorn/issues/5031)) - @derekbit @chriscchien
- [IMPROVEMENT] Support bundle collects dmesg, syslog and related information of longhorn nodes ([5073](https://github.com/longhorn/longhorn/issues/5073)) - @weizhe0422 @roger-ryao
- [IMPROVEMENT] Collect volume, system, feature info for metrics for better usage awareness ([5235](https://github.com/longhorn/longhorn/issues/5235)) - @c3y1huang @chriscchien @roger-ryao
- [IMPROVEMENT] Update uninstallation info to include the 'Deleting Confirmation Flag' in chart ([5250](https://github.com/longhorn/longhorn/issues/5250)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Disable Revision Counter for Strict-Local dataLocality ([5257](https://github.com/longhorn/longhorn/issues/5257)) - @derekbit @roger-ryao
- [IMPROVEMENT] Fix Guaranteed Engine Manager CPU recommendation formula in UI ([5338](https://github.com/longhorn/longhorn/issues/5338)) - @c3y1huang @smallteeths @roger-ryao
- [IMPROVEMENT] Update PSP validation in the Longhorn upstream chart ([5339](https://github.com/longhorn/longhorn/issues/5339)) - @yangchiu @PhanLe1010
- [IMPROVEMENT] Update ganesha nfs to 4.2.3 ([5356](https://github.com/longhorn/longhorn/issues/5356)) - @derekbit @roger-ryao
- [IMPROVEMENT] Set write-cache of longhorn block device to off explicitly ([5382](https://github.com/longhorn/longhorn/issues/5382)) - @derekbit @chriscchien
- [IMPROVEMENT] Clean up unused backupstore mountpoint ([5391](https://github.com/longhorn/longhorn/issues/5391)) - @derekbit @chriscchien
- [DOC] Update Kubernetes version info to have consistent description from the longhorn documentation in chart ([5399](https://github.com/longhorn/longhorn/issues/5399)) - @ChanYiLin @roger-ryao
- [IMPROVEMENT] Fix BackingImage uploading/downloading flow to prevent client timeout ([5443](https://github.com/longhorn/longhorn/issues/5443)) - @ChanYiLin @chriscchien
- [IMPROVEMENT] Assign the pods to the same node where the strict-local volume is present ([5448](https://github.com/longhorn/longhorn/issues/5448)) - @c3y1huang @chriscchien
- [IMPROVEMENT] Have explicitly message when trying to attach a volume which it's engine and replica were on deleted node ([5545](https://github.com/longhorn/longhorn/issues/5545)) - @ChanYiLin @chriscchien
- [IMPROVEMENT] Create a new setting so that Longhorn removes PDB for instance-manager-r that doesn't have any running instance inside it ([5549](https://github.com/longhorn/longhorn/issues/5549)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Merge conversion/admission webhook and recovery backend services into longhorn-manager ([5590](https://github.com/longhorn/longhorn/issues/5590)) - @ChanYiLin @chriscchien
- [IMPROVEMENT][UI] Recurring jobs create new snapshots while being not able to clean up old one ([5610](https://github.com/longhorn/longhorn/issues/5610)) - @mantissahz @smallteeths @roger-ryao
- [IMPROVEMENT] Only activate replica if it doesn't have deletion timestamp during volume engine upgrade ([5632](https://github.com/longhorn/longhorn/issues/5632)) - @PhanLe1010 @roger-ryao
- [IMPROVEMENT] Clean up backup target if the backup target setting is unset ([5655](https://github.com/longhorn/longhorn/issues/5655)) - @yangchiu @ChanYiLin
- [IMPROVEMENT] Bump CSI sidecar components' version ([5672](https://github.com/longhorn/longhorn/issues/5672)) - @yangchiu @ejweber
- [IMPROVEMENT] Configure log level of Longhorn components ([5888](https://github.com/longhorn/longhorn/issues/5888)) - @ChanYiLin @weizhe0422
- [IMPROVEMENT] Remove development toolchain from Longhorn images ([6022](https://github.com/longhorn/longhorn/issues/6022)) - @ChanYiLin @derekbit
- [IMPROVEMENT] Reduce replica process's number of allocated ports ([6079](https://github.com/longhorn/longhorn/issues/6079)) - @ChanYiLin @derekbit
- [IMPROVEMENT] UI supports automatic replica rebuilding for SPDK volumes ([6107](https://github.com/longhorn/longhorn/issues/6107)) - @smallteeths @roger-ryao
- [IMPROVEMENT] Minor UX changes for Longhorn SPDK ([6126](https://github.com/longhorn/longhorn/issues/6126)) - @derekbit @roger-ryao
- [IMPROVEMENT] Instance manager spdk_tgt resilience due to spdk_tgt crash ([6155](https://github.com/longhorn/longhorn/issues/6155)) - @yangchiu @derekbit
- [IMPROVEMENT] Determine number of replica/engine port count in longhorn-manager (control plane) instead ([6163](https://github.com/longhorn/longhorn/issues/6163)) - @derekbit @chriscchien
- [IMPROVEMENT] SPDK client should functions after encountering decoding error ([6191](https://github.com/longhorn/longhorn/issues/6191)) - @yangchiu @shuo-wu

## Performance

- [REFACTORING] Evaluate the impact of removing the client side compression for backup blocks ([1409](https://github.com/longhorn/longhorn/issues/1409)) - @derekbit

## Resilience

- [BUG] If backing image downloading fails on one node, it doesn't try on other nodes. ([3746](https://github.com/longhorn/longhorn/issues/3746)) - @ChanYiLin
- [BUG] Replica rebuilding caused by rke2/kubelet restart ([5340](https://github.com/longhorn/longhorn/issues/5340)) - @derekbit @chriscchien
- [BUG] Volume restoration will never complete if attached node is down ([5464](https://github.com/longhorn/longhorn/issues/5464)) - @derekbit @weizhe0422 @chriscchien
- [BUG] Node disconnection test failed ([5476](https://github.com/longhorn/longhorn/issues/5476)) - @yangchiu @derekbit
- [BUG] Physical node down test failed ([5477](https://github.com/longhorn/longhorn/issues/5477)) - @derekbit @chriscchien
- [BUG] Backing image with sync failure ([5481](https://github.com/longhorn/longhorn/issues/5481)) - @ChanYiLin @roger-ryao
- [BUG] share-manager pod failed to restart after kubelet restart ([5507](https://github.com/longhorn/longhorn/issues/5507)) - @yangchiu @derekbit
- [BUG] Directly mark replica as failed if the node is deleted ([5542](https://github.com/longhorn/longhorn/issues/5542)) - @weizhe0422 @roger-ryao
- [BUG] RWX volume is stuck at detaching when the attached node is down ([5558](https://github.com/longhorn/longhorn/issues/5558)) - @derekbit @roger-ryao
- [BUG] Unable to export RAID1 bdev in degraded state ([5650](https://github.com/longhorn/longhorn/issues/5650)) - @chriscchien @DamiaSan
- [BUG] Backup monitor gets stuck in an infinite loop if backup isn't found ([5662](https://github.com/longhorn/longhorn/issues/5662)) - @derekbit @chriscchien
- [BUG] Resources such as replicas are somehow not mutated when network is unstable ([5762](https://github.com/longhorn/longhorn/issues/5762)) - @derekbit @roger-ryao
- [BUG] filesystem corrupted after delete instance-manager-r for a locality best-effort volume ([5801](https://github.com/longhorn/longhorn/issues/5801)) - @yangchiu @ChanYiLin @mantissahz

## Stability

- [BUG] nfs backup broken - NFS server: mkdir - file exists ([4626](https://github.com/longhorn/longhorn/issues/4626)) - @yangchiu @derekbit
- [BUG] Memory leak in CSI plugin caused by stuck umount processes if the RWX volume is already gone ([5296](https://github.com/longhorn/longhorn/issues/5296)) - @derekbit @roger-ryao

## Bugs

- [BUG] 'Upgrade Engine' still shows up in a specific situation when engine already upgraded ([3063](https://github.com/longhorn/longhorn/issues/3063)) - @weizhe0422 @PhanLe1010 @smallteeths
- [BUG] DR volume even after activation remains in standby mode if there are one or more failed replicas. ([3069](https://github.com/longhorn/longhorn/issues/3069)) - @yangchiu @mantissahz
- [BUG] volume not able to attach with raw type backing image ([3437](https://github.com/longhorn/longhorn/issues/3437)) - @yangchiu @ChanYiLin
- [BUG] Delete a uploading backing image, the corresponding LH temp file is not deleted ([3682](https://github.com/longhorn/longhorn/issues/3682)) - @ChanYiLin @chriscchien
- [BUG] Cloned PVC from detached volume will stuck at not ready for workload ([3692](https://github.com/longhorn/longhorn/issues/3692)) - @PhanLe1010 @chriscchien
- [BUG] Block device volume failed to unmount when it is detached unexpectedly ([3778](https://github.com/longhorn/longhorn/issues/3778)) - @PhanLe1010 @chriscchien
- [BUG] After migration of Longhorn from Rancher old UI to dashboard, the csi-plugin doesn't update ([4519](https://github.com/longhorn/longhorn/issues/4519)) - @mantissahz @roger-ryao
- [BUG] Volumes Stuck in Attach/Detach Loop when running on OpenShift/OKD ([4988](https://github.com/longhorn/longhorn/issues/4988)) - @ChanYiLin
- [BUG] Longhorn 1.3.2 fails to backup & restore volumes behind Internet proxy ([5054](https://github.com/longhorn/longhorn/issues/5054)) - @mantissahz @chriscchien
- [BUG] Instance manager pod does not respect of node taint? ([5161](https://github.com/longhorn/longhorn/issues/5161)) - @ejweber
- [BUG] RWX doesn't work with release 1.4.0 due to end grace update error from recovery backend ([5183](https://github.com/longhorn/longhorn/issues/5183)) - @derekbit @chriscchien
- [BUG] Incorrect indentation of charts/questions.yaml ([5196](https://github.com/longhorn/longhorn/issues/5196)) - @mantissahz @roger-ryao
- [BUG] Updating option "Allow snapshots removal during trim" for old volumes failed ([5218](https://github.com/longhorn/longhorn/issues/5218)) - @shuo-wu @roger-ryao
- [BUG] Since 1.4.0 RWX volume failing regularly ([5224](https://github.com/longhorn/longhorn/issues/5224)) - @derekbit
- [BUG] Can not create backup in engine image not fully deployed cluster ([5248](https://github.com/longhorn/longhorn/issues/5248)) - @ChanYiLin @roger-ryao
- [BUG] Incorrect router retry mechanism ([5259](https://github.com/longhorn/longhorn/issues/5259)) - @mantissahz @chriscchien
- [BUG] System Backup is stuck at Uploading if there are PVs not provisioned by CSI driver ([5286](https://github.com/longhorn/longhorn/issues/5286)) - @c3y1huang @chriscchien
- [BUG] Sync up with backup target during DR volume activation ([5292](https://github.com/longhorn/longhorn/issues/5292)) - @yangchiu @weizhe0422
- [BUG] environment_check.sh does not handle different kernel versions in cluster correctly ([5304](https://github.com/longhorn/longhorn/issues/5304)) - @achims311 @roger-ryao
- [BUG] instance-manager-r high memory consumption ([5312](https://github.com/longhorn/longhorn/issues/5312)) - @derekbit @roger-ryao
- [BUG] Unable to upgrade longhorn from v1.3.2 to master-head ([5368](https://github.com/longhorn/longhorn/issues/5368)) - @yangchiu @derekbit
- [BUG] Modify engineManagerCPURequest and replicaManagerCPURequest won't raise resource request in instance-manager-e pod ([5419](https://github.com/longhorn/longhorn/issues/5419)) - @c3y1huang
- [BUG] Error message not consistent between create/update recurring job when retain number greater than 50 ([5434](https://github.com/longhorn/longhorn/issues/5434)) - @c3y1huang @chriscchien
- [BUG] Do not copy Host header to API requests forwarded to Longhorn Manager ([5438](https://github.com/longhorn/longhorn/issues/5438)) - @yangchiu @smallteeths
- [BUG] RWX Volume attachment is getting Failed ([5456](https://github.com/longhorn/longhorn/issues/5456)) - @derekbit
- [BUG] test case test_backup_lock_deletion_during_restoration failed ([5458](https://github.com/longhorn/longhorn/issues/5458)) - @yangchiu @derekbit
- [BUG] Unable to create support bundle agent pod in air-gap environment ([5467](https://github.com/longhorn/longhorn/issues/5467)) - @yangchiu @c3y1huang
- [BUG] Example of data migration doesn't work for hidden/./dot-files) ([5484](https://github.com/longhorn/longhorn/issues/5484)) - @hedefalk @shuo-wu @chriscchien
- [BUG] Upgrade engine --> spec.restoreVolumeRecurringJob and spec.snapshotDataIntegrity Unsupported value ([5485](https://github.com/longhorn/longhorn/issues/5485)) - @yangchiu @derekbit
- [BUG] test case test_dr_volume_with_backup_block_deletion failed ([5489](https://github.com/longhorn/longhorn/issues/5489)) - @yangchiu @derekbit
- [BUG] Bulk backup deletion cause restoring volume to finish with attached state. ([5506](https://github.com/longhorn/longhorn/issues/5506)) - @ChanYiLin @roger-ryao
- [BUG] volume expansion starts for no reason, gets stuck on current size > expected size ([5513](https://github.com/longhorn/longhorn/issues/5513)) - @mantissahz @roger-ryao
- [BUG] RWX volume attachment failed if tried more enough times ([5537](https://github.com/longhorn/longhorn/issues/5537)) - @yangchiu @derekbit
- [BUG] instance-manager-e emits `Wait for process pvc-xxxx to shutdown` constantly ([5575](https://github.com/longhorn/longhorn/issues/5575)) - @derekbit @roger-ryao
- [BUG] Support bundle kit should respect node selector & taint toleration ([5614](https://github.com/longhorn/longhorn/issues/5614)) - @yangchiu @c3y1huang
- [BUG] Value overlapped in page Instance Manager Image ([5622](https://github.com/longhorn/longhorn/issues/5622)) - @smallteeths @chriscchien
- [BUG] Updated Rocky 9 (and others) can't attach due to SELinux ([5627](https://github.com/longhorn/longhorn/issues/5627)) - @yangchiu @ejweber
- [BUG] Fix misleading error messages when creating a mount point for a backup store ([5630](https://github.com/longhorn/longhorn/issues/5630)) - @derekbit
- [BUG] Instance manager PDB created with wrong selector thus blocking the draining of the wrongly selected node forever ([5680](https://github.com/longhorn/longhorn/issues/5680)) - @PhanLe1010 @chriscchien
- [BUG] During volume live engine upgrade, if the replica pod is killed, the volume is stuck in upgrading forever ([5684](https://github.com/longhorn/longhorn/issues/5684)) - @yangchiu @PhanLe1010
- [BUG] Instance manager PDBs cannot be removed if the longhorn-manager pod on its spec node is not available ([5688](https://github.com/longhorn/longhorn/issues/5688)) - @PhanLe1010 @roger-ryao
- [BUG] Rebuild rebuilding is possibly issued to a wrong replica ([5709](https://github.com/longhorn/longhorn/issues/5709)) - @ejweber @roger-ryao
- [BUG] Observing replica on new IM-r before upgrading of volume ([5729](https://github.com/longhorn/longhorn/issues/5729)) - @c3y1huang
- [BUG] longhorn upgrade is not upgrading engineimage ([5740](https://github.com/longhorn/longhorn/issues/5740)) - @shuo-wu @chriscchien
- [BUG] `test_replica_auto_balance_when_replica_on_unschedulable_node` Error in creating volume with nodeSelector and dataLocality parameters ([5745](https://github.com/longhorn/longhorn/issues/5745)) - @c3y1huang @roger-ryao
- [BUG] Unable to backup volume after NFS server IP change ([5856](https://github.com/longhorn/longhorn/issues/5856)) - @derekbit @roger-ryao
- [BUG] Prevent Longhorn uninstallation from getting stuck due to backups in error ([5868](https://github.com/longhorn/longhorn/issues/5868)) - @ChanYiLin @mantissahz
- [BUG] Unable to create support bundle if the previous one stayed in ReadyForDownload phase ([5882](https://github.com/longhorn/longhorn/issues/5882)) - @c3y1huang @roger-ryao
- [BUG] share-manager for a given pvc keep restarting (other pvc are working fine) ([5954](https://github.com/longhorn/longhorn/issues/5954)) - @yangchiu @derekbit
- [BUG] Replica auto-rebalance doesn't respect node selector ([5971](https://github.com/longhorn/longhorn/issues/5971)) - @c3y1huang @roger-ryao
- [BUG] Volume detached automatically after upgrade Longhorn ([5983](https://github.com/longhorn/longhorn/issues/5983)) - @yangchiu @PhanLe1010
- [BUG] Extra snapshot generated when clone from a detached volume ([5986](https://github.com/longhorn/longhorn/issues/5986)) - @weizhe0422 @ejweber
- [BUG] User created snapshot deleted after node drain and uncordon ([5992](https://github.com/longhorn/longhorn/issues/5992)) - @yangchiu @mantissahz
- [BUG] Webhook PDBs are not removed after upgrading to master-head ([6026](https://github.com/longhorn/longhorn/issues/6026)) - @weizhe0422 @PhanLe1010
- [BUG] In some specific situation, system backup auto deleted when creating another one ([6045](https://github.com/longhorn/longhorn/issues/6045)) - @c3y1huang @chriscchien
- [BUG] Backing Image deletion stuck if it's deleted during uploading process and bids is ready-for-transfer state ([6086](https://github.com/longhorn/longhorn/issues/6086)) - @WebberHuang1118 @chriscchien
- [BUG] A backup target backed by a Samba server is not recognized ([6100](https://github.com/longhorn/longhorn/issues/6100)) - @derekbit @weizhe0422
- [BUG] Backing image manager fails when SELinux is enabled ([6108](https://github.com/longhorn/longhorn/issues/6108)) - @ejweber @chriscchien
- [BUG] Force delete volume make SPDK disk unschedule ([6110](https://github.com/longhorn/longhorn/issues/6110)) - @derekbit
- [BUG] share-manager terminated during Longhorn upgrading causes rwx volume not working ([6120](https://github.com/longhorn/longhorn/issues/6120)) - @yangchiu @derekbit
- [BUG] SPDK Volume snapshotList API Error ([6123](https://github.com/longhorn/longhorn/issues/6123)) - @derekbit @chriscchien
- [BUG] test_recurring_jobs_allow_detached_volume failed ([6124](https://github.com/longhorn/longhorn/issues/6124)) - @ChanYiLin @roger-ryao
- [BUG] Cron job triggered replica rebuilding keeps repeating itself after corrupting snapshot data ([6129](https://github.com/longhorn/longhorn/issues/6129)) - @yangchiu @mantissahz
- [BUG] test_dr_volume_with_restore_command_error failed ([6130](https://github.com/longhorn/longhorn/issues/6130)) - @mantissahz @roger-ryao
- [BUG] RWX volume remains attached after workload deleted if it's upgraded from v1.4.2 ([6139](https://github.com/longhorn/longhorn/issues/6139)) - @PhanLe1010 @chriscchien
- [BUG] timestamp or checksum not matched in test_snapshot_hash_detect_corruption test case ([6145](https://github.com/longhorn/longhorn/issues/6145)) - @yangchiu @derekbit
- [BUG] When a v2 volume is attached in maintenance mode, removing a replica will lead to volume stuck in attaching-detaching loop ([6166](https://github.com/longhorn/longhorn/issues/6166)) - @derekbit @chriscchien
- [BUG] Misleading offline rebuilding hint if offline rebuilding is not enabled ([6169](https://github.com/longhorn/longhorn/issues/6169)) - @smallteeths @roger-ryao
- [BUG] Longhorn doesn't remove the system backups crd on uninstallation ([6185](https://github.com/longhorn/longhorn/issues/6185)) - @c3y1huang @khushboo-rancher
- [BUG] Volume attachment related error logs in uninstaller pod ([6197](https://github.com/longhorn/longhorn/issues/6197)) - @yangchiu @PhanLe1010
- [BUG] Test case test_ha_backup_deletion_recovery failed in rhel or rockylinux arm64 environment ([6213](https://github.com/longhorn/longhorn/issues/6213)) - @yangchiu @ChanYiLin @mantissahz
- [BUG] migration test cases could fail due to unexpected volume controllers and replicas status ([6215](https://github.com/longhorn/longhorn/issues/6215)) - @yangchiu @PhanLe1010
- [BUG] Engine continues to attempt to rebuild replica while detaching ([6217](https://github.com/longhorn/longhorn/issues/6217)) - @yangchiu @ejweber

## Misc

- [TASK] Remove deprecated volume spec recurringJobs and storageClass recurringJobs field ([2865](https://github.com/longhorn/longhorn/issues/2865)) - @c3y1huang @chriscchien
- [TASK] Remove deprecated fields after CRD API version bump ([3289](https://github.com/longhorn/longhorn/issues/3289)) - @c3y1huang @roger-ryao
- [TASK] Replace jobq lib with an alternative way for listing remote backup volumes and info ([4176](https://github.com/longhorn/longhorn/issues/4176)) - @ChanYiLin @chriscchien
- [DOC] Update the Longhorn document in Uninstalling Longhorn using kubectl ([4841](https://github.com/longhorn/longhorn/issues/4841)) - @roger-ryao
- [TASK] Remove a deprecated feature `disable-replica-rebuild` from longhorn-manager ([4997](https://github.com/longhorn/longhorn/issues/4997)) - @ejweber @chriscchien
- [TASK] Update the distro matrix supports on Longhorn docs for 1.5 ([5177](https://github.com/longhorn/longhorn/issues/5177)) - @yangchiu
- [TASK] Clarify if any upcoming K8s API deprecation/removal will impact Longhorn 1.4 ([5180](https://github.com/longhorn/longhorn/issues/5180)) - @PhanLe1010
- [TASK] Revert affinity for Longhorn user deployed components ([5191](https://github.com/longhorn/longhorn/issues/5191)) - @weizhe0422 @ejweber
- [TASK] Add GitHub action for CI to lib repos for supporting dependency bot ([5239](https://github.com/longhorn/longhorn/issues/5239))
- [DOC] Update the readme of longhorn-spdk-engine about using new Longhorn (RAID1) bdev ([5256](https://github.com/longhorn/longhorn/issues/5256)) - @DamiaSan
- [TASK][UI] add new recurring job tasks ([5272](https://github.com/longhorn/longhorn/issues/5272)) - @smallteeths @chriscchien
- [DOC] Update the node maintenance doc to cover upgrade prerequisites for Rancher ([5278](https://github.com/longhorn/longhorn/issues/5278)) - @PhanLe1010
- [TASK] Run build-engine-test-images automatically when having incompatible engine on master ([5400](https://github.com/longhorn/longhorn/issues/5400)) - @yangchiu
- [TASK] Update k8s.gcr.io to registry.k8s.io in repos ([5432](https://github.com/longhorn/longhorn/issues/5432)) - @yangchiu
- [TASK][UI] add new recurring job task - filesystem trim ([5529](https://github.com/longhorn/longhorn/issues/5529)) - @smallteeths @chriscchien
- doc: update prerequisites in chart readme to make it consistent with documentation v1.3.x ([5531](https://github.com/longhorn/longhorn/pull/5531)) - @ChanYiLin
- [FEATURE] Remove deprecated `allow-node-drain-with-last-healthy-replica` ([5620](https://github.com/longhorn/longhorn/issues/5620)) - @weizhe0422 @PhanLe1010
- [FEATURE] Set recurring jobs to PVCs ([5791](https://github.com/longhorn/longhorn/issues/5791)) - @yangchiu @c3y1huang
- [TASK] Automatically update crds.yaml in longhorn repo from longhorn-manager repo ([5854](https://github.com/longhorn/longhorn/issues/5854)) - @yangchiu
- [IMPROVEMENT] Remove privilege requirement from lifecycle jobs ([5862](https://github.com/longhorn/longhorn/issues/5862)) - @mantissahz @chriscchien
- [TASK][UI] support new aio typed instance managers ([5876](https://github.com/longhorn/longhorn/issues/5876)) - @smallteeths @chriscchien
- [TASK] Remove `Guaranteed Engine Manager CPU`, `Guaranteed Replica Manager CPU`, and `Guaranteed Engine CPU` settings. ([5917](https://github.com/longhorn/longhorn/issues/5917)) - @c3y1huang @roger-ryao
- [TASK][UI] Support volume backup policy ([6028](https://github.com/longhorn/longhorn/issues/6028)) - @smallteeths @chriscchien
- [TASK] Reduce BackupConcurrentLimit and RestoreConcurrentLimit default values ([6135](https://github.com/longhorn/longhorn/issues/6135)) - @derekbit @chriscchien

## Contributors

- @ChanYiLin
- @DamiaSan
- @PhanLe1010
- @WebberHuang1118
- @achims311
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @hedefalk
- @innobead
- @khushboo-rancher
- @mantissahz
- @roger-ryao
- @shuo-wu
- @smallteeths
- @weizhe0422
- @yangchiu

CHANGELOG/CHANGELOG-1.5.1.md (new file, +65 lines)

## Release Note

### **v1.5.1 released!** 🎆

Longhorn v1.5.1 is the latest version of Longhorn 1.5.

This release introduces bug fixes, described below, for v1.5.0 upgrade issues, stability, troubleshooting, and more. Please try it out and share your feedback. Thanks for all the contributions!

> For the definition of stable or latest release, please check [here](https://github.com/longhorn/longhorn#releases).

## Installation

> **Please ensure your Kubernetes cluster is at least v1.21 before installing v1.5.1.**

Longhorn supports three installation methods: Rancher App Marketplace, kubectl, and Helm. Follow the installation instructions [here](https://longhorn.io/docs/1.5.1/deploy/install/).

## Upgrade

> **Please read the [important notes](https://longhorn.io/docs/1.5.1/deploy/important-notes/) first and ensure your Kubernetes cluster is at least v1.21 before upgrading to Longhorn v1.5.1 from v1.4.x/v1.5.0, which are the only supported source versions.**

Follow the upgrade instructions [here](https://longhorn.io/docs/1.5.1/deploy/upgrade/).

## Deprecation & Incompatibilities

N/A

## Known Issues after Release

Please follow up [here](https://github.com/longhorn/longhorn/wiki/Outstanding-Known-Issues-of-Releases) on any outstanding issues found after this release.

## Improvement
- [IMPROVEMENT] Implement/fix the unit tests of Volume Attachment and volume controller ([6005](https://github.com/longhorn/longhorn/issues/6005)) - @PhanLe1010
- [QUESTION] Repetitive warnings and errors in a new longhorn setup ([6257](https://github.com/longhorn/longhorn/issues/6257)) - @derekbit @c3y1huang @roger-ryao

## Resilience

- [BUG] 1.5.0 Upgrade: Longhorn conversion webhook server fails ([6259](https://github.com/longhorn/longhorn/issues/6259)) - @derekbit @roger-ryao
- [BUG] Race leaves snapshot CRs that cannot be deleted ([6298](https://github.com/longhorn/longhorn/issues/6298)) - @yangchiu @PhanLe1010 @ejweber

## Bugs

- [BUG] Engine continues to attempt to rebuild replica while detaching ([6217](https://github.com/longhorn/longhorn/issues/6217)) - @yangchiu @ejweber
- [BUG] Upgrade to 1.5.0 failed: validator.longhorn.io denied the request if having orphan resources ([6246](https://github.com/longhorn/longhorn/issues/6246)) - @derekbit @roger-ryao
- [BUG] Unable to receive support bundle from UI when it's large (400MB+) ([6256](https://github.com/longhorn/longhorn/issues/6256)) - @c3y1huang @chriscchien
- [BUG] Longhorn Manager Pods CrashLoop after upgrade from 1.4.0 to 1.5.0 while backing up volumes ([6264](https://github.com/longhorn/longhorn/issues/6264)) - @ChanYiLin @roger-ryao
- [BUG] Can not delete type=`bi` VolumeSnapshot if related backing image not exist ([6266](https://github.com/longhorn/longhorn/issues/6266)) - @ChanYiLin @chriscchien
- [BUG] 1.5.0: AttachVolume.Attach failed for volume, the volume is currently attached to a different node ([6287](https://github.com/longhorn/longhorn/issues/6287)) - @yangchiu @derekbit
- [BUG] test case test_setting_priority_class failed in master and v1.5.x ([6319](https://github.com/longhorn/longhorn/issues/6319)) - @derekbit @chriscchien
- [BUG] Unused webhook and recovery backend deployment left in helm chart ([6252](https://github.com/longhorn/longhorn/issues/6252)) - @ChanYiLin @chriscchien

## Misc

- [DOC] v1.5.0 additional outgoing firewall ports need to be opened 9501 9502 9503 ([6317](https://github.com/longhorn/longhorn/issues/6317)) - @ChanYiLin @chriscchien

## Contributors

- @ChanYiLin
- @PhanLe1010
- @c3y1huang
- @chriscchien
- @derekbit
- @ejweber
- @innobead
- @roger-ryao
- @yangchiu

CODE_OF_CONDUCT.md (new file, +3 lines)

# Longhorn Community Code of Conduct

Longhorn follows the [Cloud Native Computing Foundation Code of Conduct](https://github.com/cncf/foundation/blob/master/code-of-conduct.md).

CONTRIBUTING.md (new file, +73 lines)

# Contributing Guideline

Welcome, and thank you for contributing to Longhorn!

This guideline applies to all the repositories under Longhorn.

Contributing to Longhorn is not limited to writing code or submitting PRs. We also appreciate it when you file issues, provide feedback, and suggest new features. In fact, many of Longhorn's features are driven by the community's needs. The community plays a big role in the development of Longhorn.

Of course, contributing code is more than welcome. To make things simpler, if you're fixing a small issue (e.g. a typo), go ahead and submit a PR and we will pick it up; but if you're planning a bigger PR to implement a new feature, it's easier to file a new issue to discuss the design with the maintainers first before implementing it.

When you're ready to get involved in contributing code, [this developer guide](https://github.com/longhorn/longhorn/wiki/Getting-started-with-Longhorn-Development) should help you get up to speed. And remember to [sign off your commits](#dco-sign-off)!

Feel free to join the discussion on Longhorn development in the [longhorn-dev](https://rancher-users.slack.com/messages/CMLPKMYDC) Slack channel.

Happy contributing!

## DCO Sign off

All authors to the project retain copyright to their work. However, to ensure that they are only submitting work that they have rights to, we are requiring everyone to acknowledge this by signing their work.

Any copyright notices in this repo should specify the authors as "the Longhorn contributors".

To sign your work, just add a line like this at the end of your commit message:

```
Signed-off-by: Sheng Yang <sheng.yang@rancher.com>
```

This can easily be done with the `--signoff/-s` option to `git commit`.
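For example (the commit message below is only illustrative):

```
# -s/--signoff appends a Signed-off-by line built from your
# configured git user.name and user.email
git commit -s -m "Fix typo in installation docs"
```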

By doing this you state that you can certify the following (from https://developercertificate.org/):

```
Developer Certificate of Origin
Version 1.1

Copyright (C) 2004, 2006 The Linux Foundation and its contributors.
1 Letterman Drive
Suite D4700
San Francisco, CA, 94129

Everyone is permitted to copy and distribute verbatim copies of this
license document, but changing it is not allowed.


Developer's Certificate of Origin 1.1

By making a contribution to this project, I certify that:

(a) The contribution was created in whole or in part by me and I
    have the right to submit it under the open source license
    indicated in the file; or

(b) The contribution is based upon previous work that, to the best
    of my knowledge, is covered under an appropriate open source
    license and I have the right under that license to submit that
    work with modifications, whether created in whole or in part
    by me, under the same open source license (unless I am
    permitted to submit under a different license), as indicated
    in the file; or

(c) The contribution was provided directly to me by some other
    person who certified (a), (b) or (c) and I have not modified
    it.

(d) I understand and agree that this project and the contribution
    are public and that a record of the contribution (including all
    personal information I submit with it, including my sign-off) is
    maintained indefinitely and may be redistributed consistent with
    this project or the open source license(s) involved.
```

MAINTAINERS (new file, +8 lines)

The list of current Longhorn maintainers:

Name, <Email>, @GitHubHandle
Sheng Yang, <sheng@yasker.org>, @yasker
Shuo Wu, <shuo.wu@suse.com>, @shuo-wu
David Ko, <dko@suse.com>, @innobead
Derek Su, <derek.su@suse.com>, @derekbit
Phan Le, <phan.le@suse.com>, @PhanLe1010

README.md (changed, 244 → 144 lines)

<h1 align="center" style="border-bottom: none">
<a href="https://longhorn.io/" target="_blank"><img alt="Longhorn" width="120px" src="https://github.com/longhorn/website/blob/master/static/img/icon-longhorn.svg"></a><br>Longhorn
</h1>

<p align="center">A CNCF Incubating Project. Visit <a href="https://longhorn.io/" target="_blank">longhorn.io</a> for the full documentation.</p>

<div align="center">

[Releases](https://github.com/longhorn/longhorn/releases)
[License](https://github.com/longhorn/longhorn/blob/master/LICENSE)
[Documentation](https://longhorn.io/docs/latest/)

</div>

Longhorn is a distributed block storage system for Kubernetes. Longhorn is cloud-native storage built using Kubernetes and container primitives.

Longhorn is lightweight, reliable, and powerful. You can install Longhorn on an existing Kubernetes cluster with one `kubectl apply` command or by using Helm charts. Once Longhorn is installed, it adds persistent volume support to the Kubernetes cluster.
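As a tiny illustration of that persistent volume support, once Longhorn is installed you can request a volume with an ordinary PVC (a minimal sketch, assuming the default `longhorn` StorageClass created by the installer; the claim name is illustrative):

```
# Create a PVC backed by Longhorn, then reference it from any pod spec
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
EOF
```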

Longhorn implements distributed block storage using containers and microservices. Longhorn creates a dedicated storage controller for each block device volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. Here are some notable features of Longhorn:

1. Enterprise-grade distributed storage with no single point of failure
2. Incremental snapshots of block storage
3. Backup to secondary storage (NFSv4 or S3-compatible object storage) built on efficient change block detection
4. Recurring snapshots and backups
5. Automated, non-disruptive upgrades. You can upgrade the entire Longhorn software stack without disrupting running volumes!
6. An intuitive GUI dashboard

You can read more technical details of Longhorn [here](https://longhorn.io/).

# Releases

> **NOTE**:
> - __\<version\>*__ means the release branch is under active support and will have periodic follow-up patch releases.
> - __Latest__ release means the version is the latest release of the newest release branch.
> - __Stable__ release means the version is stable and has been widely adopted by users.

https://github.com/longhorn/longhorn/releases

| Release  | Version | Type   | Release Note (Changelog)                                       | Important Note                                              |
|----------|---------|--------|----------------------------------------------------------------|-------------------------------------------------------------|
| **1.5*** | 1.5.1   | Latest | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.5.1) | [🔗](https://longhorn.io/docs/1.5.1/deploy/important-notes) |
| **1.4*** | 1.4.4   | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.4.4) | [🔗](https://longhorn.io/docs/1.4.4/deploy/important-notes) |
| 1.3      | 1.3.3   | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.3.3) | [🔗](https://longhorn.io/docs/1.3.3/deploy/important-notes) |
| 1.2      | 1.2.6   | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.2.6) | [🔗](https://longhorn.io/docs/1.2.6/deploy/important-notes) |
| 1.1      | 1.1.3   | Stable | [🔗](https://github.com/longhorn/longhorn/releases/tag/v1.1.3) |                                                             |

# Roadmap

https://github.com/longhorn/longhorn/wiki/Roadmap

# Components

Longhorn is 100% open source software. Project source code is spread across a number of repos:

* Engine: [build](https://drone-publish.longhorn.io/longhorn/longhorn-engine) · [report card](https://goreportcard.com/report/github.com/longhorn/longhorn-engine) · [license scan](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-engine?ref=badge_shield)
* Manager: [build](https://drone-publish.longhorn.io/longhorn/longhorn-manager) · [report card](https://goreportcard.com/report/github.com/longhorn/longhorn-manager) · [license scan](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-manager?ref=badge_shield)
* Instance Manager: [build](http://drone-publish.longhorn.io/longhorn/longhorn-instance-manager) · [report card](https://goreportcard.com/report/github.com/longhorn/longhorn-instance-manager) · [license scan](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-instance-manager?ref=badge_shield)
* Share Manager: [build](http://drone-publish.longhorn.io/longhorn/longhorn-share-manager) · [report card](https://goreportcard.com/report/github.com/longhorn/longhorn-share-manager) · [license scan](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-share-manager?ref=badge_shield)
* Backing Image Manager: [build](http://drone-publish.longhorn.io/longhorn/backing-image-manager) · [report card](https://goreportcard.com/report/github.com/longhorn/backing-image-manager) · [license scan](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Fbacking-image-manager?ref=badge_shield)
* UI: [build](https://drone-publish.longhorn.io/longhorn/longhorn-ui) · [license scan](https://app.fossa.com/projects/custom%2B25850%2Fgithub.com%2Flonghorn%2Flonghorn-ui?ref=badge_shield)

| Component | What it does | GitHub repo |
| :----------------------------- | :--------------------------------------------------------------------- | :------------------------------------------------------------------------------------------ |
| Longhorn Backing Image Manager | Backing image download, sync, and deletion in a disk | [longhorn/backing-image-manager](https://github.com/longhorn/backing-image-manager) |
| Longhorn Engine | Core controller/replica logic | [longhorn/longhorn-engine](https://github.com/longhorn/longhorn-engine) |
| Longhorn Instance Manager | Controller/replica instance lifecycle management | [longhorn/longhorn-instance-manager](https://github.com/longhorn/longhorn-instance-manager) |
| Longhorn Manager | Longhorn orchestration, includes CSI driver for Kubernetes | [longhorn/longhorn-manager](https://github.com/longhorn/longhorn-manager) |
| Longhorn Share Manager | NFS provisioner that exposes Longhorn volumes as ReadWriteMany volumes | [longhorn/longhorn-share-manager](https://github.com/longhorn/longhorn-share-manager) |
| Longhorn UI | The Longhorn dashboard | [longhorn/longhorn-ui](https://github.com/longhorn/longhorn-ui) |

[Demo](https://asciinema.org/a/172720?autoplay=1&loop=1&speed=2)

# Get Started

## Requirements

For the installation requirements, refer to the [Longhorn documentation](https://longhorn.io/docs/latest/deploy/install/#installation-requirements).

## Installation

> **NOTE**:
> Please note that the master branch is for upcoming feature release development.
> For an official release installation or upgrade, please use one of the ways below.

Longhorn can be installed on a Kubernetes cluster in several ways:

- [Rancher App Marketplace](https://longhorn.io/docs/latest/deploy/install/install-with-rancher/)
- [kubectl](https://longhorn.io/docs/latest/deploy/install/install-with-kubectl/)
- [Helm](https://longhorn.io/docs/latest/deploy/install/install-with-helm/)
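As a quick illustration, the kubectl and Helm paths typically look like this (the version tag below is illustrative; pick the release you actually want from the releases page):

```
# kubectl: apply the manifest of a specific release
kubectl apply -f https://raw.githubusercontent.com/longhorn/longhorn/v1.5.1/deploy/longhorn.yaml

# Helm: install the chart into the longhorn-system namespace
helm repo add longhorn https://charts.longhorn.io
helm repo update
helm install longhorn longhorn/longhorn --namespace longhorn-system --create-namespace
```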

## Documentation

The official Longhorn documentation is [here](https://longhorn.io/docs).

# Get Involved

## Discussion, Feedback

For discussions or feedback, feel free to [file a discussion](https://github.com/longhorn/longhorn/discussions).

## Feature Requests, Bug Reporting

If you run into any issues, feel free to [file an issue](https://github.com/longhorn/longhorn/issues/new/choose).
We have a weekly community issue review meeting to review all reported issues and enhancement requests.

When creating a bug issue, please help upload the support bundle to the issue or send it to
[longhorn-support-bundle](mailto:longhorn-support-bundle@suse.com).

## Report Vulnerabilities

If you find any vulnerabilities, please report them to [longhorn-security](mailto:longhorn-security@suse.com).

# Community

Longhorn is open source software, so contributions are greatly welcome.
Please read the [Code of Conduct](./CODE_OF_CONDUCT.md) and the [Contributing Guideline](./CONTRIBUTING.md) before contributing.

Contributing code is not the only way of contributing. We value feedback very much, and many of Longhorn's features originated from users' feedback.
If you have any feedback, feel free to [file an issue](https://github.com/longhorn/longhorn/issues/new/choose) and talk to the developers at the [CNCF](https://slack.cncf.io/) [#longhorn](https://cloud-native.slack.com/messages/longhorn) Slack channel.

For any discussions, feedback, requests, issues, or security reports, please follow the ways below. We also have a [CNCF Slack channel: longhorn](https://cloud-native.slack.com/messages/longhorn) for discussion.

## Community Meeting and Office Hours
|
||||||
```
|
Hosted by the core maintainers of Longhorn: 4th Friday of the every month at 09:00 (CET) or 16:00 (CST) at https://community.cncf.io/longhorn-community/.
|
||||||
apiVersion: v1
|
|
||||||
kind: PersistentVolumeClaim
|
|
||||||
metadata:
|
|
||||||
name: longhorn-volv-pvc
|
|
||||||
spec:
|
|
||||||
accessModes:
|
|
||||||
- ReadWriteOnce
|
|
||||||
storageClassName: longhorn
|
|
||||||
resources:
|
|
||||||
requests:
|
|
||||||
storage: 2Gi
|
|
||||||
```
|
|
||||||
|
|
||||||
Then use it in the pod:
```
apiVersion: v1
kind: Pod
metadata:
  name: volume-test
  namespace: default
spec:
  containers:
  - name: volume-test
    image: nginx:stable-alpine
    imagePullPolicy: IfNotPresent
    volumeMounts:
    - name: volv
      mountPath: /data
    ports:
    - containerPort: 80
  volumes:
  - name: volv
    persistentVolumeClaim:
      claimName: longhorn-volv-pvc
```

## Set up a TESTING ONLY NFS server for storing backups
Longhorn supports a backup mechanism to export user data out of the Longhorn system. Currently Longhorn supports backing up to an NFS server. In order to use this feature, you need to have an NFS server running and accessible in the Kubernetes cluster. Here we provide a simple way to set up a testing NFS server.

WARNING: This NFS server won't save any data after you delete it. It's for TESTING ONLY.

```
kubectl create -f deploy/example-backupstore.yaml
```

It will create a simple NFS server in the `default` namespace, which can be addressed as `longhorn-test-nfs-svc.default` for other pods in the cluster.

After this script completes, use the following URL as the Backup Target in the Longhorn setting:

```
nfs://longhorn-test-nfs-svc.default:/opt/backupstore
```

Open the Longhorn UI, go to Setting, fill the Backup Target field with the URL above, and click Save. Now you should be able to use the backup feature of Longhorn.
## Google Kubernetes Engine

The configuration YAML will be slightly different for Google Kubernetes Engine (GKE):

1. GKE requires the user to manually claim themselves as cluster admin to enable RBAC. The user needs to execute the following command before creating the Longhorn system using the YAML files:

```
kubectl create clusterrolebinding cluster-admin-binding --clusterrole=cluster-admin --user=<name@example.com>
```

In which `name@example.com` is the user's account name in GCE, and it's case sensitive. See [here](https://cloud.google.com/kubernetes-engine/docs/how-to/role-based-access-control) for details.

2. The default FlexVolume plugin directory is different with GKE 1.8+, which is at `/home/kubernetes/flexvolume`. The user needs to use the following command instead:

```
kubectl create -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn-gke.yaml
```

The user can also customize the FlexVolume directory in the last part of the Longhorn system deployment YAML file, e.g.:

```
- name: FLEXVOLUME_DIR
  value: "/home/kubernetes/flexvolume/"
```

See [Troubleshooting](#troubleshooting) for details.
## Uninstall Longhorn

In order to uninstall Longhorn, the user needs to remove all the volumes first:

```
kubectl -n longhorn-system delete lhv --all
```

After confirming all the volumes are removed, Longhorn can be uninstalled using:

```
kubectl delete -f https://raw.githubusercontent.com/rancher/longhorn/master/deploy/longhorn.yaml
```
## Troubleshooting

### Volume can be attached/detached from UI, but Kubernetes Pod/StatefulSet etc. cannot use it

Check if the volume plugin directory has been set correctly.

By default, Kubernetes uses `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` as the directory for volume plugin drivers, as stated in the [official document](https://github.com/kubernetes/community/blob/master/contributors/devel/flexvolume.md#prerequisites).

But some vendors may choose to change the directory for various reasons. For example, GKE uses `/home/kubernetes/flexvolume`, and RKE uses `/var/lib/kubelet/volumeplugins`.

The user can find the correct directory by running `ps aux | grep kubelet` on the host and checking the `--volume-plugin-dir` parameter. If there is none, the default `/usr/libexec/kubernetes/kubelet-plugins/volume/exec/` will be used.
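For example, the check can be scripted like this (a minimal sketch; the exact kubelet invocation varies by distribution):

```
# Print the kubelet's volume plugin directory flag, if any;
# no output means the default directory is in use.
ps aux | grep '[k]ubelet' | tr ' ' '\n' | grep -- '--volume-plugin-dir'
```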
## License

Copyright (c) 2014-2018 [Rancher Labs, Inc.](http://rancher.com)

Licensed under the Apache License, Version 2.0 (the "License");
you may not use this file except in compliance with the License.
You may obtain a copy of the License at

[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)

Unless required by applicable law or agreed to in writing, software
distributed under the License is distributed on an "AS IS" BASIS,
WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
See the License for the specific language governing permissions and
limitations under the License.

# Community

Longhorn is open source software, so contributions are greatly welcome.

Please read [Code of Conduct](./CODE_OF_CONDUCT.md) and [Contributing Guideline](./CONTRIBUTING.md) before contributing.

Contributing code is not the only way of contributing. We value feedback very much, and many of the Longhorn features originated from users' feedback.

If you have any feedback, feel free to [file an issue](https://github.com/longhorn/longhorn/issues/new/choose) and talk to the developers at the [CNCF](https://slack.cncf.io/) [#longhorn](https://cloud-native.slack.com/messages/longhorn) Slack channel.

For any discussion, feedback, requests, issues, or security reports, please follow the channels below. We also have a [CNCF Slack channel: longhorn](https://cloud-native.slack.com/messages/longhorn) for discussion.

## Community Meeting and Office Hours

Hosted by the core maintainers of Longhorn: 4th Friday of every month at 09:00 (CET) or 16:00 (CST) at https://community.cncf.io/longhorn-community/.

## Longhorn Mailing List

Stay up to date on the latest news and events: https://lists.cncf.io/g/cncf-longhorn

You can read more about the community and its events here: https://github.com/longhorn/community

# License

Copyright (c) 2014-2022 The Longhorn Authors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

[http://www.apache.org/licenses/LICENSE-2.0](http://www.apache.org/licenses/LICENSE-2.0)

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

## Longhorn is a [CNCF Incubating Project](https://www.cncf.io/projects/)


21	chart/.helmignore	Normal file
@@ -0,0 +1,21 @@
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*~
# Various IDEs
.project
.idea/
*.tmproj
28	chart/Chart.yaml	Normal file
@@ -0,0 +1,28 @@
apiVersion: v1
name: longhorn
version: 1.6.0-dev
appVersion: v1.6.0-dev
kubeVersion: ">=1.21.0-0"
description: Longhorn is a distributed block storage system for Kubernetes.
keywords:
- longhorn
- storage
- distributed
- block
- device
- iscsi
- nfs
home: https://github.com/longhorn/longhorn
sources:
- https://github.com/longhorn/longhorn
- https://github.com/longhorn/longhorn-engine
- https://github.com/longhorn/longhorn-instance-manager
- https://github.com/longhorn/longhorn-share-manager
- https://github.com/longhorn/longhorn-manager
- https://github.com/longhorn/longhorn-ui
- https://github.com/longhorn/longhorn-tests
- https://github.com/longhorn/backing-image-manager
maintainers:
- name: Longhorn maintainers
  email: maintainers@longhorn.io
icon: https://raw.githubusercontent.com/cncf/artwork/master/projects/longhorn/icon/color/longhorn-icon-color.png
326	chart/README.md	Normal file
@@ -0,0 +1,326 @@
# Longhorn Chart

> **Important**: Please install the Longhorn chart in the `longhorn-system` namespace only.

> **Warning**: Longhorn doesn't support downgrading from a higher version to a lower version.

## Source Code

Longhorn is 100% open source software. Project source code is spread across a number of repos:

1. Longhorn Engine -- Core controller/replica logic https://github.com/longhorn/longhorn-engine
2. Longhorn Instance Manager -- Controller/replica instance lifecycle management https://github.com/longhorn/longhorn-instance-manager
3. Longhorn Share Manager -- NFS provisioner that exposes Longhorn volumes as ReadWriteMany volumes https://github.com/longhorn/longhorn-share-manager
4. Backing Image Manager -- Backing image file lifecycle management https://github.com/longhorn/backing-image-manager
5. Longhorn Manager -- Longhorn orchestration, includes CSI driver for Kubernetes https://github.com/longhorn/longhorn-manager
6. Longhorn UI -- Dashboard https://github.com/longhorn/longhorn-ui

## Prerequisites

1. A container runtime compatible with Kubernetes (Docker v1.13+, containerd v1.3.7+, etc.)
2. Kubernetes >= v1.21
3. Make sure `bash`, `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
4. Make sure `open-iscsi` has been installed, and the `iscsid` daemon is running on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`.

## Upgrading to Kubernetes v1.25+

Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.

As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `enablePSP` set to `false` if it has been previously set to `true`.

> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, you may have to clean up your Helm release secrets.

Upon setting `enablePSP` to false, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.

As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Longhorn docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.

## Installation

1. Add the Longhorn chart repository.
```
helm repo add longhorn https://charts.longhorn.io
```

2. Update local Longhorn chart information from the chart repository.
```
helm repo update
```

3. Install the Longhorn chart.
- With Helm 2, the following command will create the `longhorn-system` namespace and install the Longhorn chart together.
```
helm install longhorn/longhorn --name longhorn --namespace longhorn-system
```
- With Helm 3, the following commands will create the `longhorn-system` namespace first, then install the Longhorn chart.

```
kubectl create namespace longhorn-system
helm install longhorn longhorn/longhorn --namespace longhorn-system
```
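After installation, a typical sanity check (standard kubectl usage rather than part of the chart itself) is to wait for all Longhorn pods to become ready:

```
kubectl -n longhorn-system get pods
kubectl -n longhorn-system wait --for=condition=Ready pods --all --timeout=300s
```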
## Uninstallation

To uninstall Longhorn with Helm 2:
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm delete longhorn --purge
```

To uninstall Longhorn with Helm 3:
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm uninstall longhorn -n longhorn-system
kubectl delete namespace longhorn-system
```

## Values

The `values.yaml` file contains items used to tweak a deployment of this chart.
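As an illustration (the values here are arbitrary examples, not recommendations), individual keys documented below can be overridden with `--set`:

```
helm upgrade --install longhorn longhorn/longhorn \
  --namespace longhorn-system \
  --set persistence.defaultClassReplicaCount=2 \
  --set ingress.enabled=true
```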
### Cattle Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| global.cattle.systemDefaultRegistry | string | `""` | System default registry |
| global.cattle.windowsCluster.defaultSetting.systemManagedComponentsNodeSelector | string | `"kubernetes.io/os:linux"` | Node selector for Longhorn system managed components |
| global.cattle.windowsCluster.defaultSetting.taintToleration | string | `"cattle.io/os=linux:NoSchedule"` | Toleration for Longhorn system managed components |
| global.cattle.windowsCluster.enabled | bool | `false` | Enable this to allow Longhorn to run on a Rancher deployed Windows cluster |
| global.cattle.windowsCluster.nodeSelector | object | `{"kubernetes.io/os":"linux"}` | Select Linux nodes to run Longhorn user deployed components |
| global.cattle.windowsCluster.tolerations | list | `[{"effect":"NoSchedule","key":"cattle.io/os","operator":"Equal","value":"linux"}]` | Tolerations for Longhorn user deployed components to run on Linux nodes |

### Network Policies

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| networkPolicies.enabled | bool | `false` | Enable NetworkPolicies to limit access to the Longhorn pods |
| networkPolicies.type | string | `"k3s"` | Create the policy based on your distribution to allow access for the ingress. Options: `k3s`, `rke2`, `rke1` |

### Image Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| image.csi.attacher.repository | string | `"longhornio/csi-attacher"` | Specify CSI attacher image repository. Leave blank to autodetect |
| image.csi.attacher.tag | string | `"v4.2.0"` | Specify CSI attacher image tag. Leave blank to autodetect |
| image.csi.livenessProbe.repository | string | `"longhornio/livenessprobe"` | Specify CSI liveness probe image repository. Leave blank to autodetect |
| image.csi.livenessProbe.tag | string | `"v2.9.0"` | Specify CSI liveness probe image tag. Leave blank to autodetect |
| image.csi.nodeDriverRegistrar.repository | string | `"longhornio/csi-node-driver-registrar"` | Specify CSI node driver registrar image repository. Leave blank to autodetect |
| image.csi.nodeDriverRegistrar.tag | string | `"v2.7.0"` | Specify CSI node driver registrar image tag. Leave blank to autodetect |
| image.csi.provisioner.repository | string | `"longhornio/csi-provisioner"` | Specify CSI provisioner image repository. Leave blank to autodetect |
| image.csi.provisioner.tag | string | `"v3.4.1"` | Specify CSI provisioner image tag. Leave blank to autodetect |
| image.csi.resizer.repository | string | `"longhornio/csi-resizer"` | Specify CSI driver resizer image repository. Leave blank to autodetect |
| image.csi.resizer.tag | string | `"v1.7.0"` | Specify CSI driver resizer image tag. Leave blank to autodetect |
| image.csi.snapshotter.repository | string | `"longhornio/csi-snapshotter"` | Specify CSI driver snapshotter image repository. Leave blank to autodetect |
| image.csi.snapshotter.tag | string | `"v6.2.1"` | Specify CSI driver snapshotter image tag. Leave blank to autodetect |
| image.longhorn.backingImageManager.repository | string | `"longhornio/backing-image-manager"` | Specify Longhorn backing image manager image repository |
| image.longhorn.backingImageManager.tag | string | `"master-head"` | Specify Longhorn backing image manager image tag |
| image.longhorn.engine.repository | string | `"longhornio/longhorn-engine"` | Specify Longhorn engine image repository |
| image.longhorn.engine.tag | string | `"master-head"` | Specify Longhorn engine image tag |
| image.longhorn.instanceManager.repository | string | `"longhornio/longhorn-instance-manager"` | Specify Longhorn instance manager image repository |
| image.longhorn.instanceManager.tag | string | `"master-head"` | Specify Longhorn instance manager image tag |
| image.longhorn.manager.repository | string | `"longhornio/longhorn-manager"` | Specify Longhorn manager image repository |
| image.longhorn.manager.tag | string | `"master-head"` | Specify Longhorn manager image tag |
| image.longhorn.shareManager.repository | string | `"longhornio/longhorn-share-manager"` | Specify Longhorn share manager image repository |
| image.longhorn.shareManager.tag | string | `"master-head"` | Specify Longhorn share manager image tag |
| image.longhorn.supportBundleKit.repository | string | `"longhornio/support-bundle-kit"` | Specify Longhorn support bundle manager image repository |
| image.longhorn.supportBundleKit.tag | string | `"v0.0.27"` | Specify Longhorn support bundle manager image tag |
| image.longhorn.ui.repository | string | `"longhornio/longhorn-ui"` | Specify Longhorn UI image repository |
| image.longhorn.ui.tag | string | `"master-head"` | Specify Longhorn UI image tag |
| image.openshift.oauthProxy.repository | string | `"quay.io/openshift/origin-oauth-proxy"` | For OpenShift users. Specify oauth proxy image repository |
| image.openshift.oauthProxy.tag | float | `4.13` | For OpenShift users. Specify oauth proxy image tag. Note: use your OCP/OKD 4.X version; current stable is 4.13 |
| image.pullPolicy | string | `"IfNotPresent"` | Image pull policy, which applies to all user deployed Longhorn components, e.g., Longhorn manager, Longhorn driver, Longhorn UI |

### Service Settings

| Key | Description |
|-----|-------------|
| service.manager.nodePort | NodePort port number (to set explicitly, choose a port between 30000-32767) |
| service.manager.type | Define Longhorn manager service type |
| service.ui.nodePort | NodePort port number (to set explicitly, choose a port between 30000-32767) |
| service.ui.type | Define Longhorn UI service type. Options: `ClusterIP`, `NodePort`, `LoadBalancer`, `Rancher-Proxy` |

### StorageClass Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| persistence.backingImage.dataSourceParameters | string | `nil` | Specify the data source parameters for the backing image used in the Longhorn StorageClass. This option accepts a JSON string of a map, e.g., `'{\"url\":\"https://backing-image-example.s3-region.amazonaws.com/test-backing-image\"}'` |
| persistence.backingImage.dataSourceType | string | `nil` | Specify the data source type for the backing image used in the Longhorn StorageClass. If the backing image does not exist, Longhorn will use this field to create a backing image. Otherwise, Longhorn will use it to verify the selected backing image |
| persistence.backingImage.enable | bool | `false` | Set a backing image for the Longhorn StorageClass |
| persistence.backingImage.expectedChecksum | string | `nil` | Specify the expected SHA512 checksum of the selected backing image in the Longhorn StorageClass |
| persistence.backingImage.name | string | `nil` | Specify a backing image that will be used by Longhorn volumes in the Longhorn StorageClass. If it does not exist, the backing image data source type and data source parameters should be specified so that Longhorn can create the backing image before using it |
| persistence.defaultClass | bool | `true` | Set the Longhorn StorageClass as default |
| persistence.defaultClassReplicaCount | int | `3` | Set the replica count for the Longhorn StorageClass |
| persistence.defaultDataLocality | string | `"disabled"` | Set data locality for the Longhorn StorageClass. Options: `disabled`, `best-effort` |
| persistence.defaultFsType | string | `"ext4"` | Set the filesystem type for the Longhorn StorageClass |
| persistence.defaultMkfsParams | string | `""` | Set mkfs options for the Longhorn StorageClass |
| persistence.defaultNodeSelector.enable | bool | `false` | Enable the node selector for the Longhorn StorageClass |
| persistence.defaultNodeSelector.selector | string | `""` | This selector enables only certain nodes having these tags to be used for the volume, e.g. `"storage,fast"` |
| persistence.migratable | bool | `false` | Set volume migratable for the Longhorn StorageClass |
| persistence.reclaimPolicy | string | `"Delete"` | Define the reclaim policy. Options: `Retain`, `Delete` |
| persistence.recurringJobSelector.enable | bool | `false` | Enable the recurring job selector for the Longhorn StorageClass |
| persistence.recurringJobSelector.jobList | list | `[]` | Recurring job selector list for the Longhorn StorageClass. Please be careful with the quoting of the input, e.g., `[{"name":"backup", "isGroup":true}]` |
| persistence.removeSnapshotsDuringFilesystemTrim | string | `"ignored"` | Allow automatically removing snapshots during filesystem trim for the Longhorn StorageClass. Options: `ignored`, `enabled`, `disabled` |
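For instance, a `values.yaml` fragment overriding a few of these StorageClass keys might look like the following sketch (values are illustrative, not recommendations):

```
persistence:
  defaultClass: true
  defaultClassReplicaCount: 2
  defaultDataLocality: best-effort
  reclaimPolicy: Retain
```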
### CSI Settings

| Key | Description |
|-----|-------------|
| csi.attacherReplicaCount | Specify the replica count of the CSI Attacher. Leave blank to use the default count: 3 |
| csi.kubeletRootDir | Specify the kubelet root-dir. Leave blank to autodetect |
| csi.provisionerReplicaCount | Specify the replica count of the CSI Provisioner. Leave blank to use the default count: 3 |
| csi.resizerReplicaCount | Specify the replica count of the CSI Resizer. Leave blank to use the default count: 3 |
| csi.snapshotterReplicaCount | Specify the replica count of the CSI Snapshotter. Leave blank to use the default count: 3 |

### Longhorn Manager Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn manager component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| longhornManager.log.format | string | `"plain"` | Options: `plain`, `json` |
| longhornManager.nodeSelector | object | `{}` | Select nodes to run Longhorn manager |
| longhornManager.priorityClass | string | `nil` | Priority class for Longhorn manager |
| longhornManager.serviceAnnotations | object | `{}` | Annotations used in the Longhorn manager service |
| longhornManager.tolerations | list | `[]` | Tolerations for nodes to run Longhorn manager |

### Longhorn Driver Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn driver component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| longhornDriver.nodeSelector | object | `{}` | Select nodes to run Longhorn driver |
| longhornDriver.priorityClass | string | `nil` | Priority class for Longhorn driver |
| longhornDriver.tolerations | list | `[]` | Tolerations for nodes to run Longhorn driver |

### Longhorn UI Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn UI component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| longhornUI.nodeSelector | object | `{}` | Select nodes to run Longhorn UI |
| longhornUI.priorityClass | string | `nil` | Priority class for Longhorn UI |
| longhornUI.replicas | int | `2` | Replica count for Longhorn UI |
| longhornUI.tolerations | list | `[]` | Tolerations for nodes to run Longhorn UI |

### Ingress Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| ingress.annotations | string | `nil` | Ingress annotations, given as key:value pairs |
| ingress.enabled | bool | `false` | Set to true to enable ingress record generation |
| ingress.host | string | `"sslip.io"` | Layer 7 load balancer hostname |
| ingress.ingressClassName | string | `nil` | Add ingressClassName to the Ingress. Can replace the kubernetes.io/ingress.class annotation on v1.18+ |
| ingress.path | string | `"/"` | If ingress is enabled, you can set the default ingress path; you can then access the UI using the full path {{host}}+{{path}} |
| ingress.secrets | string | `nil` | If you're providing your own certificates, please use this to add the certificates as secrets |
| ingress.secureBackends | bool | `false` | Enable this so that the backend service is connected at port 443 |
| ingress.tls | bool | `false` | Set this to true in order to enable TLS on the ingress record |
| ingress.tlsSecret | string | `"longhorn.local-tls"` | If TLS is set to true, you must declare what secret will store the key/certificate for TLS |
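As a sketch, enabling a TLS-protected ingress with these keys could look like this (`longhorn.example.com` is a placeholder hostname):

```
ingress:
  enabled: true
  host: longhorn.example.com
  tls: true
  tlsSecret: longhorn.local-tls
```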
### Private Registry Settings

Longhorn can be installed in an air gapped environment with private registry settings. Please refer to **Air Gap Installation** on our official site [link](https://longhorn.io/docs).

| Key | Description |
|-----|-------------|
| privateRegistry.createSecret | Set `true` to create a new private registry secret |
| privateRegistry.registryPasswd | Password used to authenticate to the private registry |
| privateRegistry.registrySecret | If creating a new private registry secret is enabled, create a Kubernetes secret with this name; otherwise, use the existing secret of this name. Used to pull images from your private registry |
| privateRegistry.registryUrl | URL of the private registry. Leave blank to apply the system default registry |
| privateRegistry.registryUser | User used to authenticate to the private registry |
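A hypothetical air gapped configuration using these keys might look like this (registry URL and credentials are placeholders):

```
privateRegistry:
  createSecret: true
  registryUrl: registry.example.com
  registryUser: admin
  registryPasswd: changeme
  registrySecret: longhorn-registry-secret
```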
### OS/Kubernetes Distro Settings

#### OpenShift Settings

Please also refer to this document [ocp-readme](https://github.com/longhorn/longhorn/blob/master/chart/ocp-readme.md) for more details.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
| openshift.enabled | bool | `false` | Enable when using OpenShift |
| openshift.ui.port | int | `443` | UI port in the OpenShift environment |
| openshift.ui.proxy | int | `8443` | UI proxy port in the OpenShift environment |
| openshift.ui.route | string | `"longhorn-ui"` | UI route in the OpenShift environment |

### Other Settings

| Key | Default | Description |
|-----|---------|-------------|
| annotations | `{}` | Annotations to add to the Longhorn Manager DaemonSet Pods. Optional. |
| enablePSP | `false` | For Kubernetes < v1.25, if your cluster enables the Pod Security Policy admission controller, set this to `true` to ship `longhorn-psp`, which allows privileged Longhorn pods to start |

### System Default Settings

For system default settings, you can first leave them blank to use the default values, which will be applied when installing Longhorn.
You can then change them through the UI after installation.
For more details like types or options, you can refer to **Settings Reference** on our official site [link](https://longhorn.io/docs).
| Key | Description |
|-----|-------------|
| defaultSettings.allowEmptyDiskSelectorVolume | Allow Scheduling Empty Disk Selector Volumes To Any Disk |
| defaultSettings.allowEmptyNodeSelectorVolume | Allow Scheduling Empty Node Selector Volumes To Any Node |
| defaultSettings.allowRecurringJobWhileVolumeDetached | If this setting is enabled, Longhorn will automatically attach the volume and take a snapshot/backup when it is time to do a recurring snapshot/backup. |
| defaultSettings.allowVolumeCreationWithDegradedAvailability | This setting allows the user to create and attach a volume that doesn't have all of its replicas scheduled at the time of creation. |
| defaultSettings.autoCleanupSystemGeneratedSnapshot | This setting enables Longhorn to automatically clean up the system generated snapshot after a replica rebuild is done. |
| defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly | If enabled, Longhorn will automatically delete the workload pod that is managed by a controller (e.g. deployment, statefulset, daemonset, etc.) when the Longhorn volume is detached unexpectedly (e.g. during a Kubernetes upgrade, Docker reboot, or network disconnect). By deleting the pod, its controller restarts the pod and Kubernetes handles volume reattachment and remount. |
| defaultSettings.autoSalvage | If enabled, volumes will be automatically salvaged when all the replicas become faulty, e.g. due to network disconnection. Longhorn will try to figure out which replica(s) are usable, then use them for the volume. By default true. |
| defaultSettings.backingImageCleanupWaitInterval | This interval in minutes determines how long Longhorn will wait before cleaning up the backing image file when no replica in the disk is using it. |
| defaultSettings.backingImageRecoveryWaitInterval | This interval in seconds determines how long Longhorn will wait before re-downloading the backing image file when all disk files of this backing image become failed or unknown. |
| defaultSettings.backupCompressionMethod | This setting allows users to specify the backup compression method. |
| defaultSettings.backupConcurrentLimit | This setting controls how many worker threads run concurrently per backup. |
| defaultSettings.backupTarget | The endpoint used to access the backupstore. Available: NFS, CIFS, AWS, GCP, AZURE. |
| defaultSettings.backupTargetCredentialSecret | The name of the Kubernetes secret associated with the backup target. |
| defaultSettings.backupstorePollInterval | In seconds. The backupstore poll interval determines how often Longhorn checks the backupstore for new backups. Set to 0 to disable polling. By default 300. |
| defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit | This setting controls how Longhorn automatically upgrades volumes' engines to the new default engine image after upgrading Longhorn manager. The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time. If the value is 0, Longhorn will not automatically upgrade volumes' engines to the default version. |
| defaultSettings.concurrentReplicaRebuildPerNodeLimit | This setting controls how many replicas on a node can be rebuilt simultaneously. |
| defaultSettings.concurrentVolumeBackupRestorePerNodeLimit | This setting controls how many volumes on a node can restore a backup concurrently. Set the value to **0** to disable backup restore. |
| defaultSettings.createDefaultDiskLabeledNodes | Create the default disk automatically only on nodes with the label "node.longhorn.io/create-default-disk=true", if no other disks exist. If disabled, the default disk will be created on all new nodes when each node is first added. |
| defaultSettings.defaultDataLocality | A Longhorn volume has data locality if there is a local replica of the volume on the same node as the pod which is using the volume. |
| defaultSettings.defaultDataPath | Default path to use for storing data on a host. By default "/var/lib/longhorn/". |
| defaultSettings.defaultLonghornStaticStorageClass | The 'storageClassName' is given to PVs and PVCs that are created for an existing Longhorn volume. The StorageClass name can also be used as a label, so it is possible to use a Longhorn StorageClass to bind a workload to an existing PV without creating a Kubernetes StorageClass object. By default 'longhorn-static'. |
| defaultSettings.defaultReplicaCount | The default number of replicas when a volume is created from the Longhorn UI. For Kubernetes configuration, update the `numberOfReplicas` in the StorageClass. By default 3. |
| defaultSettings.deletingConfirmationFlag | This flag is designed to prevent Longhorn from being accidentally uninstalled, which would lead to data loss. |
| defaultSettings.disableRevisionCounter | This setting is only for volumes created by the UI. By default this is false, meaning there will be a revision counter file to track every write to the volume. During salvage recovery Longhorn will pick the replica with the largest revision counter as the candidate to recover the whole volume. If the revision counter is disabled, Longhorn will not track every write to the volume. During salvage recovery, Longhorn will use the 'volume-head-xxx.img' file's last modification time and file size to pick the replica candidate to recover the whole volume. |
| defaultSettings.disableSchedulingOnCordonedNode | Disable the Longhorn manager from scheduling replicas on Kubernetes cordoned nodes. By default true. |
| defaultSettings.engineReplicaTimeout | In seconds. The setting specifies the timeout between the engine and replica(s); the value should be between 8 and 30 seconds. The default value is 8 seconds. |
| defaultSettings.failedBackupTTL | In minutes. This setting determines how long Longhorn will keep a backup resource that has failed. Set to 0 to disable auto-deletion. |
| defaultSettings.fastReplicaRebuildEnabled | This feature supports fast replica rebuilding. It relies on the checksums of snapshot disk files, so setting the snapshot-data-integrity to **enable** or **fast-check** is a prerequisite. |
| defaultSettings.guaranteedInstanceManagerCPU | This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each instance manager Pod. You can leave it at the default value, which is 12%. |
| defaultSettings.kubernetesClusterAutoscalerEnabled | Enabling this setting will notify Longhorn that the cluster is using the Kubernetes Cluster Autoscaler. |
| defaultSettings.logLevel | The log level (Panic, Fatal, Error, Warn, Info, Debug, Trace) used in Longhorn manager. Defaults to Info. |
| defaultSettings.nodeDownPodDeletionPolicy | Defines the Longhorn action when a volume is stuck with a StatefulSet/Deployment pod on a node that is down. |
| defaultSettings.nodeDrainPolicy | Define the policy to use when a node with the last healthy replica of a volume is drained. |
| defaultSettings.offlineReplicaRebuilding | This setting allows users to enable offline replica rebuilding for volumes using the v2 data engine. |
| defaultSettings.orphanAutoDeletion | This setting allows Longhorn to automatically delete an orphan resource and its corresponding orphaned data, like stale replicas. Orphan resources on down or unknown nodes will not be cleaned up automatically. |
| defaultSettings.priorityClass | priorityClass for Longhorn system components |
| defaultSettings.recurringFailedJobsHistoryLimit | This setting specifies how many failed backup or snapshot job histories should be retained. History will not be retained if the value is 0. |
| defaultSettings.recurringSuccessfulJobsHistoryLimit | This setting specifies how many successful backup or snapshot job histories should be retained. History will not be retained if the value is 0. |
| defaultSettings.removeSnapshotsDuringFilesystemTrim | This setting allows the Longhorn filesystem trim feature to automatically mark the latest snapshot and its ancestors as removed, stopping at the snapshot containing multiple children. |
| defaultSettings.replicaAutoBalance | Enabling this setting automatically rebalances replicas when an available node is discovered. |
| defaultSettings.replicaDiskSoftAntiAffinity | Allow scheduling on disks with existing healthy replicas of the same volume. By default true. |
| defaultSettings.replicaFileSyncHttpClientTimeout | In seconds. The setting specifies the HTTP client timeout to the file sync server. |
| defaultSettings.replicaReplenishmentWaitInterval | In seconds. The interval determines how long Longhorn will wait, at a minimum, in order to reuse the existing data on a failed replica rather than directly creating a new replica for a degraded volume. |
| defaultSettings.replicaSoftAntiAffinity | Allow scheduling on nodes with existing healthy replicas of the same volume. By default false. |
| defaultSettings.replicaZoneSoftAntiAffinity | Allow scheduling new replicas of a volume to nodes in the same zone as existing healthy replicas. Nodes that don't belong to any zone will be treated as being in the same zone. Notice that Longhorn relies on the label `topology.kubernetes.io/zone=<Zone name of the node>` in the Kubernetes node object to identify the zone. By default true. |
| defaultSettings.restoreConcurrentLimit | This setting controls how many worker threads run concurrently per restore. |
| defaultSettings.restoreVolumeRecurringJobs | Restore recurring jobs from the backup volume on the backup target, and create recurring jobs if they do not exist, during a backup restoration. |
| defaultSettings.snapshotDataIntegrity | This setting allows users to enable or disable snapshot hashing and data integrity checking. |
| defaultSettings.snapshotDataIntegrityCronjob | Unix-cron string format. The setting specifies when Longhorn checks the data integrity of snapshot disk files. |
| defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation | Hashing snapshot disk files impacts the performance of the system. Immediate snapshot hashing and checking can be disabled to minimize the impact after creating a snapshot. |
| defaultSettings.storageMinimalAvailablePercentage | If the minimum available disk capacity exceeds the actual percentage of available disk capacity, the disk becomes unschedulable until more space is freed up. By default 25. |
| defaultSettings.storageNetwork | Longhorn uses the storage network for in-cluster data traffic. Leave this blank to use the Kubernetes cluster network. |
| defaultSettings.storageOverProvisioningPercentage | The over-provisioning percentage defines how much storage can be allocated relative to the hard drive's capacity. By default 200. |
| defaultSettings.storageReservedPercentageForDefaultDisk | The reserved percentage specifies the percentage of disk space that will not be allocated to the default disk on each new Longhorn node. |
| defaultSettings.supportBundleFailedHistoryLimit | This setting specifies how many failed support bundles can exist in the cluster. Set this value to **0** to have Longhorn automatically purge all failed support bundles. |
| defaultSettings.systemManagedComponentsNodeSelector | nodeSelector for Longhorn system components |
| defaultSettings.systemManagedPodsImagePullPolicy | This setting defines the image pull policy of Longhorn system managed pods, e.g. instance manager, engine image, CSI driver, etc. The new image pull policy will only apply after the system managed pods restart. |
| defaultSettings.taintToleration | taintToleration for Longhorn system components |
| defaultSettings.upgradeChecker | The Upgrade Checker will check for a new Longhorn version periodically. When there is a new version available, a notification will appear in the UI. By default true. |
| defaultSettings.v2DataEngine | This allows users to activate the v2 data engine based on SPDK. Currently it is in the preview phase and should not be used in a production environment. |
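For example, a minimal `values.yaml` fragment pre-seeding a few of these settings might look like this (the backup target reuses the TESTING ONLY NFS example from above; all values are illustrative):

```
defaultSettings:
  backupTarget: nfs://longhorn-test-nfs-svc.default:/opt/backupstore
  backupstorePollInterval: 300
  defaultReplicaCount: 3
  logLevel: Info
```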
---

Please see [link](https://github.com/longhorn/longhorn) for more information.
253	chart/README.md.gotmpl	Normal file
@@ -0,0 +1,253 @@
# Longhorn Chart

> **Important**: Please install the Longhorn chart in the `longhorn-system` namespace only.

> **Warning**: Longhorn doesn't support downgrading from a higher version to a lower version.

## Source Code

Longhorn is 100% open source software. Project source code is spread across a number of repos:

1. Longhorn Engine -- Core controller/replica logic https://github.com/longhorn/longhorn-engine
2. Longhorn Instance Manager -- Controller/replica instance lifecycle management https://github.com/longhorn/longhorn-instance-manager
3. Longhorn Share Manager -- NFS provisioner that exposes Longhorn volumes as ReadWriteMany volumes https://github.com/longhorn/longhorn-share-manager
4. Backing Image Manager -- Backing image file lifecycle management https://github.com/longhorn/backing-image-manager
5. Longhorn Manager -- Longhorn orchestration, includes CSI driver for Kubernetes https://github.com/longhorn/longhorn-manager
6. Longhorn UI -- Dashboard https://github.com/longhorn/longhorn-ui

## Prerequisites

1. A container runtime compatible with Kubernetes (Docker v1.13+, containerd v1.3.7+, etc.)
2. Kubernetes >= v1.21
3. Make sure `bash`, `curl`, `findmnt`, `grep`, `awk` and `blkid` have been installed on all nodes of the Kubernetes cluster.
4. Make sure `open-iscsi` has been installed, and the `iscsid` daemon is running on all nodes of the Kubernetes cluster. For GKE, Ubuntu is recommended as the guest OS image since it already contains `open-iscsi`.

## Upgrading to Kubernetes v1.25+

Starting in Kubernetes v1.25, [Pod Security Policies](https://kubernetes.io/docs/concepts/security/pod-security-policy/) have been removed from the Kubernetes API.

As a result, **before upgrading to Kubernetes v1.25** (or on a fresh install in a Kubernetes v1.25+ cluster), users are expected to perform an in-place upgrade of this chart with `enablePSP` set to `false` if it has been previously set to `true`.

> **Note:**
> If you upgrade your cluster to Kubernetes v1.25+ before removing PSPs via a `helm upgrade` (even if you manually clean up resources), **it will leave the Helm release in a broken state within the cluster such that further Helm operations will not work (`helm uninstall`, `helm upgrade`, etc.).**
>
> If your charts get stuck in this state, you may have to clean up your Helm release secrets.

Upon setting `enablePSP` to false, the chart will remove any PSP resources deployed on its behalf from the cluster. This is the default setting for this chart.

As a replacement for PSPs, [Pod Security Admission](https://kubernetes.io/docs/concepts/security/pod-security-admission/) should be used. Please consult the Longhorn docs for more details on how to configure your chart release namespaces to work with the new Pod Security Admission and apply Pod Security Standards.

## Installation

1. Add the Longhorn chart repository.
```
helm repo add longhorn https://charts.longhorn.io
```

2. Update local Longhorn chart information from the chart repository.
```
helm repo update
```

3. Install the Longhorn chart.
- With Helm 2, the following command will create the `longhorn-system` namespace and install the Longhorn chart together.
```
helm install longhorn/longhorn --name longhorn --namespace longhorn-system
```
- With Helm 3, the following commands will create the `longhorn-system` namespace first, then install the Longhorn chart.

```
kubectl create namespace longhorn-system
helm install longhorn longhorn/longhorn --namespace longhorn-system
```

## Uninstallation

To uninstall Longhorn with Helm 2:
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm delete longhorn --purge
```

To uninstall Longhorn with Helm 3:
```
kubectl -n longhorn-system patch -p '{"value": "true"}' --type=merge lhs deleting-confirmation-flag
helm uninstall longhorn -n longhorn-system
kubectl delete namespace longhorn-system
```

## Values

The `values.yaml` file contains items used to tweak a deployment of this chart.

### Cattle Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "global" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Network Policies

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "networkPolicies" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Image Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "image" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Service Settings

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if (and (hasPrefix "service" .Key) (not (contains "Account" .Key))) }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### StorageClass Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "persistence" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### CSI Settings

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if hasPrefix "csi" .Key }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Longhorn Manager Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn manager component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "longhornManager" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Longhorn Driver Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn driver component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "longhornDriver" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Longhorn UI Settings

The Longhorn system contains user deployed components (e.g., Longhorn manager, Longhorn driver, Longhorn UI) and system managed components (e.g., instance manager, engine image, CSI driver, etc.).
These settings only apply to the Longhorn UI component.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "longhornUI" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Ingress Settings

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "ingress" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Private Registry Settings

Longhorn can be installed in an air gapped environment with private registry settings. Please refer to **Air Gap Installation** on our official site [link](https://longhorn.io/docs).

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if hasPrefix "privateRegistry" .Key }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### OS/Kubernetes Distro Settings

#### OpenShift Settings

Please also refer to this document [ocp-readme](https://github.com/longhorn/longhorn/blob/master/chart/ocp-readme.md) for more details.

| Key | Type | Default | Description |
|-----|------|---------|-------------|
{{- range .Values }}
{{- if hasPrefix "openshift" .Key }}
| {{ .Key }} | {{ .Type }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### Other Settings

| Key | Default | Description |
|-----|---------|-------------|
{{- range .Values }}
{{- if not (or (hasPrefix "defaultSettings" .Key)
               (hasPrefix "networkPolicies" .Key)
               (hasPrefix "image" .Key)
               (hasPrefix "service" .Key)
               (hasPrefix "persistence" .Key)
               (hasPrefix "csi" .Key)
               (hasPrefix "longhornManager" .Key)
               (hasPrefix "longhornDriver" .Key)
               (hasPrefix "longhornUI" .Key)
               (hasPrefix "privateRegistry" .Key)
               (hasPrefix "ingress" .Key)
               (hasPrefix "openshift" .Key)
               (hasPrefix "global" .Key)) }}
| {{ .Key }} | {{ if .Default }}{{ .Default }}{{ else }}{{ .AutoDefault }}{{ end }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

### System Default Settings

For system default settings, you can first leave them blank to use the default values, which will be applied when installing Longhorn.
You can then change them through the UI after installation.
For more details like types or options, you can refer to **Settings Reference** on our official site [link](https://longhorn.io/docs).

| Key | Description |
|-----|-------------|
{{- range .Values }}
{{- if hasPrefix "defaultSettings" .Key }}
| {{ .Key }} | {{ if .Description }}{{ .Description }}{{ else }}{{ .AutoDescription }}{{ end }} |
{{- end }}
{{- end }}

---

Please see [link](https://github.com/longhorn/longhorn) for more information.
11	chart/app-readme.md	Normal file
@@ -0,0 +1,11 @@
# Longhorn

Longhorn is a lightweight, reliable and easy to use distributed block storage system for Kubernetes. Once deployed, users can leverage persistent volumes provided by Longhorn.

Longhorn creates a dedicated storage controller for each volume and synchronously replicates the volume across multiple replicas stored on multiple nodes. The storage controller and replicas are themselves orchestrated using Kubernetes. Longhorn supports snapshots, backups and even allows you to schedule recurring snapshots and backups!

**Important**: Please install the Longhorn chart in the `longhorn-system` namespace only.

**Warning**: Longhorn doesn't support downgrading from a higher version to a lower version.

[Chart Documentation](https://github.com/longhorn/longhorn/blob/master/chart/README.md)
177	chart/ocp-readme.md	Normal file
@@ -0,0 +1,177 @@
# OpenShift / OKD Extra Configuration Steps

- [OpenShift / OKD Extra Configuration Steps](#openshift--okd-extra-configuration-steps)
  - [Notes](#notes)
  - [Known Issues](#known-issues)
  - [Preparing Nodes (Optional)](#preparing-nodes-optional)
    - [Default /var/lib/longhorn setup](#default-varliblonghorn-setup)
    - [Separate /var/mnt/longhorn setup](#separate-varmntlonghorn-setup)
      - [Create Filesystem](#create-filesystem)
      - [Mounting Disk On Boot](#mounting-disk-on-boot)
      - [Label and Annotate Nodes](#label-and-annotate-nodes)
  - [Example values.yaml](#example-valuesyaml)
  - [Installation](#installation)
  - [Refs](#refs)

## Notes

Main changes and tasks for OCP are:

- On OCP / OKD, the operating system is managed by the cluster
- OCP imposes [Security Context Constraints](https://docs.openshift.com/container-platform/4.11/authentication/managing-security-context-constraints.html)
  - This requires everything to run with the least privilege possible. For the moment, every component has been given access to run with higher privilege.
  - Something to circle back on is network policies and which components can have their privileges reduced without impacting functionality.
    - The UI, for example, probably can be.
- openshift/oauth-proxy for authentication to the Longhorn UI
  - **⚠️** Currently scoped to authenticated users that can delete a Longhorn settings object.
  - **⚠️** Since the UI itself is not protected, network policies will need to be created to prevent namespace <--> namespace communication against the pod or service object directly; a sketch follows this list.
  - Anyone with access to the UI deployment can remove the route restriction. (Namespace-scoped admin)
- Option to use a separate disk in /var/mnt/longhorn & a MachineConfig file to mount /var/mnt/longhorn
- Adding finalizers for mount propagation
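
A minimal NetworkPolicy sketch for the point above, assuming the chart's UI pods carry the label `app: longhorn-ui` (verify the label on your UI deployment before applying); it admits ingress only from pods in the same namespace:

```yaml
# Sketch: restrict direct ingress to the Longhorn UI pods to clients inside
# the longhorn-system namespace. The app: longhorn-ui pod label is an
# assumption; adjust it to match your UI deployment.
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: longhorn-ui-same-namespace-only
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      app: longhorn-ui
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector: {}  # empty selector matches all pods in this namespace
```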

## Known Issues

- General Feature/Issue Thread
  - [[FEATURE] Deploying Longhorn on OKD/Openshift](https://github.com/longhorn/longhorn/issues/1831)
- 4.10 / 1.23:
  - 4.10.0-0.okd-2022-03-07-131213 to 4.10.0-0.okd-2022-07-09-073606:
    - Tested, No Known Issues
- 4.11 / 1.24:
  - 4.11.0-0.okd-2022-07-27-052000 to 4.11.0-0.okd-2022-11-19-050030:
    - Tested, No Known Issues
  - 4.11.0-0.okd-2022-12-02-145640, 4.11.0-0.okd-2023-01-14-152430:
    - Workaround: [[BUG] Volumes Stuck in Attach/Detach Loop](https://github.com/longhorn/longhorn/issues/4988)
    - [MachineConfig Patch](https://github.com/longhorn/longhorn/issues/4988#issuecomment-1345676772)
- 4.12 / 1.25:
  - 4.12.0-0.okd-2022-12-05-210624 to 4.12.0-0.okd-2023-01-20-101927:
    - Tested, No Known Issues
  - 4.12.0-0.okd-2023-01-21-055900 to 4.12.0-0.okd-2023-02-18-033438:
    - Workaround: [[BUG] Volumes Stuck in Attach/Detach Loop](https://github.com/longhorn/longhorn/issues/4988)
    - [MachineConfig Patch](https://github.com/longhorn/longhorn/issues/4988#issuecomment-1345676772)
  - 4.12.0-0.okd-2023-03-05-022504 to 4.12.0-0.okd-2023-04-16-041331:
    - Tested, No Known Issues
- 4.13 / 1.26:
  - 4.13.0-0.okd-2023-05-03-001308 to 4.13.0-0.okd-2023-08-18-135805:
    - Tested, No Known Issues
- 4.14 / 1.27:
  - 4.14.0-0.okd-2023-08-12-022330 to 4.14.0-0.okd-2023-10-28-073550:
    - Tested, No Known Issues

## Preparing Nodes (Optional)

Only required if you need additional customizations, such as storage-less nodes or secondary disks.

### Default /var/lib/longhorn setup

Label each node for storage with:

```bash
oc get nodes --no-headers | awk '{print $1}'

export NODE="worker-0"
oc label node "${NODE}" node.longhorn.io/create-default-disk=true
```

### Separate /var/mnt/longhorn setup

#### Create Filesystem

On the storage nodes, create a filesystem with the label `longhorn`:

```bash
oc get nodes --no-headers | awk '{print $1}'

export NODE="worker-0"
oc debug node/${NODE} -t -- chroot /host bash

# Validate that the target drive is present
lsblk

export DRIVE="sdb" # vdb
sudo mkfs.ext4 -L longhorn /dev/${DRIVE}
```

> ⚠️ Note: If you add new nodes after the MachineConfig below is applied, you will also need to reboot those nodes.

#### Mounting Disk On Boot

The secondary drive needs to be mounted on every boot. Save the contents below and apply the MachineConfig with `oc apply -f`:

> ⚠️ This will trigger a machine config profile update and reboot all worker nodes on the cluster

```yaml
apiVersion: machineconfiguration.openshift.io/v1
kind: MachineConfig
metadata:
  labels:
    machineconfiguration.openshift.io/role: worker
  name: 71-mount-storage-worker
spec:
  config:
    ignition:
      version: 3.2.0
    systemd:
      units:
        - name: var-mnt-longhorn.mount
          enabled: true
          contents: |
            [Unit]
            Before=local-fs.target
            [Mount]
            Where=/var/mnt/longhorn
            What=/dev/disk/by-label/longhorn
            Options=rw,relatime,discard
            [Install]
            WantedBy=local-fs.target
```

#### Label and Annotate Nodes

Label and annotate storage nodes like this:

```bash
oc get nodes --no-headers | awk '{print $1}'

export NODE="worker-0"
oc annotate node ${NODE} --overwrite node.longhorn.io/default-disks-config='[{"path":"/var/mnt/longhorn","allowScheduling":true}]'
oc label node ${NODE} node.longhorn.io/create-default-disk=config
```
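
For readability, the single-line JSON packed into the `default-disks-config` annotation above expands to the following document (JSON is a subset of YAML); each array entry describes one disk Longhorn should create by default:

```yaml
# Expanded value of node.longhorn.io/default-disks-config from the command above.
- path: /var/mnt/longhorn
  allowScheduling: true
```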

## Example values.yaml

Minimum Adjustments Required

```yaml
openshift:
  enabled: true
  oauthProxy:
    repository: quay.io/openshift/origin-oauth-proxy
    tag: 4.14 # Use Your OCP/OKD 4.X Version, Current Stable is 4.14
  ui:
    route: "longhorn-ui"
    port: 443
    proxy: 8443

# defaultSettings: # Preparing nodes (Optional)
#   createDefaultDiskLabeledNodes: true
```

## Installation

```bash
# helm template ./chart/ --namespace longhorn-system --values ./chart/values.yaml --no-hooks > longhorn.yaml # Local Testing
helm template longhorn --namespace longhorn-system --values values.yaml --no-hooks > longhorn.yaml
oc create namespace longhorn-system -o yaml --dry-run=client | oc apply -f -
oc apply -f longhorn.yaml -n longhorn-system
```

## Refs

- <https://docs.openshift.com/container-platform/4.11/storage/persistent_storage/persistent-storage-iscsi.html>
- <https://docs.okd.io/4.11/storage/persistent_storage/persistent-storage-iscsi.html>
- okd 4.5: <https://github.com/longhorn/longhorn/issues/1831#issuecomment-702690613>
- okd 4.6: <https://github.com/longhorn/longhorn/issues/1831#issuecomment-765884631>
- oauth-proxy: <https://github.com/openshift/oauth-proxy/blob/master/contrib/sidecar.yaml>
- <https://github.com/longhorn/longhorn/issues/1831>

chart/questions.yaml (new file, 825 lines)
@@ -0,0 +1,825 @@
categories:
- storage
namespace: longhorn-system
questions:
- variable: image.defaultImage
  default: "true"
  description: "Use default Longhorn images"
  label: Use Default Images
  type: boolean
  show_subquestion_if: false
  group: "Longhorn Images"
  subquestions:
  - variable: image.longhorn.manager.repository
    default: longhornio/longhorn-manager
    description: "Specify Longhorn Manager Image Repository"
    type: string
    label: Longhorn Manager Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.manager.tag
    default: master-head
    description: "Specify Longhorn Manager Image Tag"
    type: string
    label: Longhorn Manager Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.engine.repository
    default: longhornio/longhorn-engine
    description: "Specify Longhorn Engine Image Repository"
    type: string
    label: Longhorn Engine Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.engine.tag
    default: master-head
    description: "Specify Longhorn Engine Image Tag"
    type: string
    label: Longhorn Engine Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.ui.repository
    default: longhornio/longhorn-ui
    description: "Specify Longhorn UI Image Repository"
    type: string
    label: Longhorn UI Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.ui.tag
    default: master-head
    description: "Specify Longhorn UI Image Tag"
    type: string
    label: Longhorn UI Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.instanceManager.repository
    default: longhornio/longhorn-instance-manager
    description: "Specify Longhorn Instance Manager Image Repository"
    type: string
    label: Longhorn Instance Manager Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.instanceManager.tag
    default: v2_20221123
    description: "Specify Longhorn Instance Manager Image Tag"
    type: string
    label: Longhorn Instance Manager Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.shareManager.repository
    default: longhornio/longhorn-share-manager
    description: "Specify Longhorn Share Manager Image Repository"
    type: string
    label: Longhorn Share Manager Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.shareManager.tag
    default: v1_20220914
    description: "Specify Longhorn Share Manager Image Tag"
    type: string
    label: Longhorn Share Manager Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.backingImageManager.repository
    default: longhornio/backing-image-manager
    description: "Specify Longhorn Backing Image Manager Image Repository"
    type: string
    label: Longhorn Backing Image Manager Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.backingImageManager.tag
    default: v3_20220808
    description: "Specify Longhorn Backing Image Manager Image Tag"
    type: string
    label: Longhorn Backing Image Manager Image Tag
    group: "Longhorn Images Settings"
  - variable: image.longhorn.supportBundleKit.repository
    default: longhornio/support-bundle-kit
    description: "Specify Longhorn Support Bundle Manager Image Repository"
    type: string
    label: Longhorn Support Bundle Kit Image Repository
    group: "Longhorn Images Settings"
  - variable: image.longhorn.supportBundleKit.tag
    default: v0.0.27
    description: "Specify Longhorn Support Bundle Manager Image Tag"
    type: string
    label: Longhorn Support Bundle Kit Image Tag
    group: "Longhorn Images Settings"
  - variable: image.csi.attacher.repository
    default: longhornio/csi-attacher
    description: "Specify CSI attacher image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Attacher Image Repository
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.attacher.tag
    default: v4.2.0
    description: "Specify CSI attacher image tag. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Attacher Image Tag
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.provisioner.repository
    default: longhornio/csi-provisioner
    description: "Specify CSI provisioner image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Provisioner Image Repository
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.provisioner.tag
    default: v3.4.1
    description: "Specify CSI provisioner image tag. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Provisioner Image Tag
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.nodeDriverRegistrar.repository
    default: longhornio/csi-node-driver-registrar
    description: "Specify CSI Node Driver Registrar image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Node Driver Registrar Image Repository
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.nodeDriverRegistrar.tag
    default: v2.7.0
    description: "Specify CSI Node Driver Registrar image tag. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Node Driver Registrar Image Tag
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.resizer.repository
    default: longhornio/csi-resizer
    description: "Specify CSI Driver Resizer image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Driver Resizer Image Repository
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.resizer.tag
    default: v1.7.0
    description: "Specify CSI Driver Resizer image tag. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Driver Resizer Image Tag
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.snapshotter.repository
    default: longhornio/csi-snapshotter
    description: "Specify CSI Driver Snapshotter image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Driver Snapshotter Image Repository
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.snapshotter.tag
    default: v6.2.1
    description: "Specify CSI Driver Snapshotter image tag. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Driver Snapshotter Image Tag
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.livenessProbe.repository
    default: longhornio/livenessprobe
    description: "Specify CSI liveness probe image repository. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Liveness Probe Image Repository
    group: "Longhorn CSI Driver Images"
  - variable: image.csi.livenessProbe.tag
    default: v2.9.0
    description: "Specify CSI liveness probe image tag. Leave blank to autodetect."
    type: string
    label: Longhorn CSI Liveness Probe Image Tag
    group: "Longhorn CSI Driver Images"
- variable: privateRegistry.registryUrl
  label: Private registry URL
  description: "URL of private registry. Leave blank to apply system default registry."
  group: "Private Registry Settings"
  type: string
  default: ""
- variable: privateRegistry.registrySecret
  label: Private registry secret name
  description: "If 'Create a new private registry secret' is true, create a Kubernetes secret with this name; otherwise, use the existing secret with this name. Use it to pull images from your private registry."
  group: "Private Registry Settings"
  type: string
  default: ""
- variable: privateRegistry.createSecret
  default: "true"
  description: "Create a new private registry secret"
  type: boolean
  group: "Private Registry Settings"
  label: Create Secret for Private Registry Settings
  show_subquestion_if: true
  subquestions:
  - variable: privateRegistry.registryUser
    label: Private registry user
    description: "User used to authenticate to private registry."
    type: string
    default: ""
  - variable: privateRegistry.registryPasswd
    label: Private registry password
    description: "Password used to authenticate to private registry."
    type: password
    default: ""
- variable: longhorn.default_setting
  default: "false"
  description: "Customize the default settings before installing Longhorn for the first time. This option will only work if the cluster hasn't installed Longhorn."
  label: "Customize Default Settings"
  type: boolean
  show_subquestion_if: true
  group: "Longhorn Default Settings"
  subquestions:
  - variable: csi.kubeletRootDir
    default:
    description: "Specify kubelet root-dir. Leave blank to autodetect."
    type: string
    label: Kubelet Root Directory
    group: "Longhorn CSI Driver Settings"
  - variable: csi.attacherReplicaCount
    type: int
    default: 3
    min: 1
    max: 10
    description: "Specify replica count of CSI Attacher. By default 3."
    label: Longhorn CSI Attacher replica count
    group: "Longhorn CSI Driver Settings"
  - variable: csi.provisionerReplicaCount
    type: int
    default: 3
    min: 1
    max: 10
    description: "Specify replica count of CSI Provisioner. By default 3."
    label: Longhorn CSI Provisioner replica count
    group: "Longhorn CSI Driver Settings"
  - variable: csi.resizerReplicaCount
    type: int
    default: 3
    min: 1
    max: 10
    description: "Specify replica count of CSI Resizer. By default 3."
    label: Longhorn CSI Resizer replica count
    group: "Longhorn CSI Driver Settings"
  - variable: csi.snapshotterReplicaCount
    type: int
    default: 3
    min: 1
    max: 10
    description: "Specify replica count of CSI Snapshotter. By default 3."
    label: Longhorn CSI Snapshotter replica count
    group: "Longhorn CSI Driver Settings"
  - variable: defaultSettings.backupTarget
    label: Backup Target
    description: "The endpoint used to access the backupstore. Available: NFS, CIFS, AWS, GCP, AZURE"
    group: "Longhorn Default Settings"
    type: string
    default:
  - variable: defaultSettings.backupTargetCredentialSecret
    label: Backup Target Credential Secret
    description: "The name of the Kubernetes secret associated with the backup target."
    group: "Longhorn Default Settings"
    type: string
    default:
  - variable: defaultSettings.allowRecurringJobWhileVolumeDetached
    label: Allow Recurring Job While Volume Is Detached
    description: 'If this setting is enabled, Longhorn will automatically attach the volume and take a snapshot/backup when it is time for a recurring snapshot/backup.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.createDefaultDiskLabeledNodes
    label: Create Default Disk on Labeled Nodes
    description: 'Create default Disk automatically only on Nodes with the label "node.longhorn.io/create-default-disk=true" if no other disks exist. If disabled, the default disk will be created on all new nodes when each node is first added.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.defaultDataPath
    label: Default Data Path
    description: 'Default path to use for storing data on a host. By default "/var/lib/longhorn/"'
    group: "Longhorn Default Settings"
    type: string
    default: "/var/lib/longhorn/"
  - variable: defaultSettings.defaultDataLocality
    label: Default Data Locality
    description: 'A Longhorn volume has data locality if there is a local replica of the volume on the same node as the pod which is using the volume.'
    group: "Longhorn Default Settings"
    type: enum
    options:
    - "disabled"
    - "best-effort"
    default: "disabled"
  - variable: defaultSettings.replicaSoftAntiAffinity
    label: Replica Node Level Soft Anti-Affinity
    description: 'Allow scheduling on nodes with existing healthy replicas of the same volume. By default false.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.replicaAutoBalance
    label: Replica Auto Balance
    description: 'Enabling this setting automatically rebalances replicas when an available node is discovered.'
    group: "Longhorn Default Settings"
    type: enum
    options:
    - "disabled"
    - "least-effort"
    - "best-effort"
    default: "disabled"
  - variable: defaultSettings.storageOverProvisioningPercentage
    label: Storage Over Provisioning Percentage
    description: "The over-provisioning percentage defines how much storage can be allocated relative to the hard drive's capacity. By default 200."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 200
  - variable: defaultSettings.storageMinimalAvailablePercentage
    label: Storage Minimal Available Percentage
    description: "If the percentage of available disk capacity falls below this minimum, the disk becomes unschedulable until more space is freed up. By default 25."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    max: 100
    default: 25
  - variable: defaultSettings.storageReservedPercentageForDefaultDisk
    label: Storage Reserved Percentage For Default Disk
    description: "The reserved percentage specifies the percentage of disk space that will not be allocated to the default disk on each new Longhorn node."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    max: 100
    default: 30
  - variable: defaultSettings.upgradeChecker
    label: Enable Upgrade Checker
    description: 'The Upgrade Checker will check for a new Longhorn version periodically. When there is a new version available, a notification will appear in the UI. By default true.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.defaultReplicaCount
    label: Default Replica Count
    description: "The default number of replicas when a volume is created from the Longhorn UI. For Kubernetes configuration, update the `numberOfReplicas` in the StorageClass. By default 3."
    group: "Longhorn Default Settings"
    type: int
    min: 1
    max: 20
    default: 3
  - variable: defaultSettings.defaultLonghornStaticStorageClass
    label: Default Longhorn Static StorageClass Name
    description: "The 'storageClassName' is given to PVs and PVCs that are created for an existing Longhorn volume. The StorageClass name can also be used as a label, so it is possible to use a Longhorn StorageClass to bind a workload to an existing PV without creating a Kubernetes StorageClass object. By default 'longhorn-static'."
    group: "Longhorn Default Settings"
    type: string
    default: "longhorn-static"
  - variable: defaultSettings.backupstorePollInterval
    label: Backupstore Poll Interval
    description: "In seconds. The backupstore poll interval determines how often Longhorn checks the backupstore for new backups. Set to 0 to disable the polling. By default 300."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 300
  - variable: defaultSettings.failedBackupTTL
    label: Failed Backup Time to Live
    description: "In minutes. This setting determines how long Longhorn will keep a backup resource that has failed. Set to 0 to disable the auto-deletion."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 1440
  - variable: defaultSettings.restoreVolumeRecurringJobs
    label: Restore Volume Recurring Jobs
    description: "Restore recurring jobs from the backup volume on the backup target, and create recurring jobs if they do not exist during a backup restoration."
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.recurringSuccessfulJobsHistoryLimit
    label: Cronjob Successful Jobs History Limit
    description: "This setting specifies how many successful backup or snapshot job histories should be retained. History will not be retained if the value is 0."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 1
  - variable: defaultSettings.recurringFailedJobsHistoryLimit
    label: Cronjob Failed Jobs History Limit
    description: "This setting specifies how many failed backup or snapshot job histories should be retained. History will not be retained if the value is 0."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 1
  - variable: defaultSettings.supportBundleFailedHistoryLimit
    label: SupportBundle Failed History Limit
    description: "This setting specifies how many failed support bundles can exist in the cluster. Set this value to **0** to have Longhorn automatically purge all failed support bundles."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 1
  - variable: defaultSettings.autoSalvage
    label: Automatic salvage
    description: "If enabled, volumes will be automatically salvaged when all the replicas become faulty, e.g. due to network disconnection. Longhorn will try to figure out which replica(s) are usable, then use them for the volume. By default true."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly
    label: Automatically Delete Workload Pod when The Volume Is Detached Unexpectedly
    description: 'If enabled, Longhorn will automatically delete the workload pod that is managed by a controller (e.g. deployment, statefulset, daemonset, etc...) when the Longhorn volume is detached unexpectedly (e.g. during a Kubernetes upgrade, Docker reboot, or network disconnect). By deleting the pod, its controller restarts the pod and Kubernetes handles volume reattachment and remount.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.disableSchedulingOnCordonedNode
    label: Disable Scheduling On Cordoned Node
    description: "Prevent the Longhorn manager from scheduling replicas on a cordoned Kubernetes node. By default true."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.replicaZoneSoftAntiAffinity
    label: Replica Zone Level Soft Anti-Affinity
    description: "Allow scheduling new replicas of a volume to nodes in the same zone as existing healthy replicas. Nodes that don't belong to any zone will be treated as being in the same zone. Notice that Longhorn relies on the label `topology.kubernetes.io/zone=<Zone name of the node>` in the Kubernetes node object to identify the zone. By default true."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.replicaDiskSoftAntiAffinity
    label: Replica Disk Level Soft Anti-Affinity
    description: 'Allow scheduling on disks with existing healthy replicas of the same volume. By default true.'
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.allowEmptyNodeSelectorVolume
    label: Allow Empty Node Selector Volume
    description: "Allow scheduling volumes with an empty node selector to any node."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.allowEmptyDiskSelectorVolume
    label: Allow Empty Disk Selector Volume
    description: "Allow scheduling volumes with an empty disk selector to any disk."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.nodeDownPodDeletionPolicy
    label: Pod Deletion Policy When Node is Down
    description: "Defines the Longhorn action when a volume is stuck with a StatefulSet/Deployment pod on a node that is down."
    group: "Longhorn Default Settings"
    type: enum
    options:
    - "do-nothing"
    - "delete-statefulset-pod"
    - "delete-deployment-pod"
    - "delete-both-statefulset-and-deployment-pod"
    default: "do-nothing"
  - variable: defaultSettings.nodeDrainPolicy
    label: Node Drain Policy
    description: "Define the policy to use when a node with the last healthy replica of a volume is drained."
    group: "Longhorn Default Settings"
    type: enum
    options:
    - "block-if-contains-last-replica"
    - "allow-if-replica-is-stopped"
    - "always-allow"
    default: "block-if-contains-last-replica"
  - variable: defaultSettings.replicaReplenishmentWaitInterval
    label: Replica Replenishment Wait Interval
    description: "In seconds. The interval determines at least how long Longhorn will wait to reuse the existing data on a failed replica rather than directly creating a new replica for a degraded volume."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 600
  - variable: defaultSettings.concurrentReplicaRebuildPerNodeLimit
    label: Concurrent Replica Rebuild Per Node Limit
    description: "This setting controls how many replicas on a node can be rebuilt simultaneously."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 5
  - variable: defaultSettings.concurrentVolumeBackupRestorePerNodeLimit
    label: Concurrent Volume Backup Restore Per Node Limit
    description: "This setting controls how many volumes on a node can restore a backup concurrently. Set the value to **0** to disable backup restore."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 5
  - variable: defaultSettings.disableRevisionCounter
    label: Disable Revision Counter
    description: "This setting is only for volumes created by the UI. By default, this is false, meaning there will be a revision counter file to track every write to the volume. During salvage recovery, Longhorn will pick the replica with the largest revision counter as the candidate to recover the whole volume. If the revision counter is disabled, Longhorn will not track every write to the volume. During salvage recovery, Longhorn will use the 'volume-head-xxx.img' file's last modification time and file size to pick the replica candidate to recover the whole volume."
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.systemManagedPodsImagePullPolicy
    label: System Managed Pod Image Pull Policy
    description: "This setting defines the Image Pull Policy of Longhorn system managed pods, e.g. instance manager, engine image, CSI driver, etc. The new Image Pull Policy will only apply after the system managed pods restart."
    group: "Longhorn Default Settings"
    type: enum
    options:
    - "if-not-present"
    - "always"
    - "never"
    default: "if-not-present"
  - variable: defaultSettings.allowVolumeCreationWithDegradedAvailability
    label: Allow Volume Creation with Degraded Availability
    description: "This setting allows the user to create and attach a volume that doesn't have all of its replicas scheduled at the time of creation."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.autoCleanupSystemGeneratedSnapshot
    label: Automatically Cleanup System Generated Snapshot
    description: "This setting enables Longhorn to automatically clean up the system-generated snapshot after a replica rebuild is done."
    group: "Longhorn Default Settings"
    type: boolean
    default: "true"
  - variable: defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit
    label: Concurrent Automatic Engine Upgrade Per Node Limit
    description: "This setting controls how Longhorn automatically upgrades volumes' engines to the new default engine image after upgrading Longhorn manager. The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time. If the value is 0, Longhorn will not automatically upgrade volumes' engines to the default version."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 0
  - variable: defaultSettings.backingImageCleanupWaitInterval
    label: Backing Image Cleanup Wait Interval
    description: "This interval in minutes determines how long Longhorn will wait before cleaning up the backing image file when no replica in the disk is using it."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 60
  - variable: defaultSettings.backingImageRecoveryWaitInterval
    label: Backing Image Recovery Wait Interval
    description: "This interval in seconds determines how long Longhorn will wait before re-downloading the backing image file when all disk files of this backing image become failed or unknown."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    default: 300
  - variable: defaultSettings.guaranteedInstanceManagerCPU
    label: Guaranteed Instance Manager CPU
    description: "This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each instance manager pod. You can leave it with the default value, which is 12%."
    group: "Longhorn Default Settings"
    type: int
    min: 0
    max: 40
    default: 12
  - variable: defaultSettings.logLevel
    label: Log Level
    description: "The log level (Panic, Fatal, Error, Warn, Info, Debug, or Trace) used in the Longhorn manager. Defaults to Info."
    group: "Longhorn Default Settings"
    type: string
    default: "Info"
  - variable: defaultSettings.kubernetesClusterAutoscalerEnabled
    label: Kubernetes Cluster Autoscaler Enabled (Experimental)
    description: "Enabling this setting will notify Longhorn that the cluster is using the Kubernetes Cluster Autoscaler."
    group: "Longhorn Default Settings"
    type: boolean
    default: false
  - variable: defaultSettings.orphanAutoDeletion
    label: Orphaned Data Cleanup
    description: "This setting allows Longhorn to automatically delete orphan resources and their corresponding orphaned data, such as stale replicas. Orphan resources on down or unknown nodes will not be cleaned up automatically."
    group: "Longhorn Default Settings"
    type: boolean
    default: false
  - variable: defaultSettings.storageNetwork
    label: Storage Network
    description: "Longhorn uses the storage network for in-cluster data traffic. Leave this blank to use the Kubernetes cluster network."
    group: "Longhorn Default Settings"
    type: string
    default:
  - variable: defaultSettings.deletingConfirmationFlag
    label: Deleting Confirmation Flag
    description: "This flag is designed to prevent Longhorn from being accidentally uninstalled, which would lead to data loss."
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.engineReplicaTimeout
    label: Timeout between Engine and Replica
    description: "In seconds. The setting specifies the timeout between the engine and replica(s); the value should be between 8 and 30 seconds. The default value is 8 seconds."
    group: "Longhorn Default Settings"
    type: int
    default: "8"
  - variable: defaultSettings.snapshotDataIntegrity
    label: Snapshot Data Integrity
    description: "This setting allows users to enable or disable snapshot hashing and data integrity checking."
    group: "Longhorn Default Settings"
    type: string
    default: "disabled"
  - variable: defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation
    label: Immediate Snapshot Data Integrity Check After Creating a Snapshot
    description: "Hashing snapshot disk files impacts the performance of the system. The immediate snapshot hashing and checking can be disabled to minimize the impact after creating a snapshot."
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.snapshotDataIntegrityCronjob
    label: Snapshot Data Integrity Check CronJob
    description: "Unix-cron string format. The setting specifies when Longhorn checks the data integrity of snapshot disk files."
    group: "Longhorn Default Settings"
    type: string
    default: "0 0 */7 * *"
  - variable: defaultSettings.removeSnapshotsDuringFilesystemTrim
    label: Remove Snapshots During Filesystem Trim
    description: "This setting allows the Longhorn filesystem trim feature to automatically mark the latest snapshot and its ancestors as removed, stopping at the snapshot containing multiple children."
    group: "Longhorn Default Settings"
    type: boolean
    default: "false"
  - variable: defaultSettings.fastReplicaRebuildEnabled
    label: Fast Replica Rebuild Enabled
    description: "This feature supports fast replica rebuilding. It relies on the checksums of snapshot disk files, so setting the snapshot-data-integrity to **enable** or **fast-check** is a prerequisite."
    group: "Longhorn Default Settings"
    type: boolean
    default: false
  - variable: defaultSettings.replicaFileSyncHttpClientTimeout
    label: Timeout of HTTP Client to Replica File Sync Server
    description: "In seconds. The setting specifies the HTTP client timeout to the file sync server."
    group: "Longhorn Default Settings"
    type: int
    default: "30"
  - variable: defaultSettings.backupCompressionMethod
    label: Backup Compression Method
    description: "This setting allows users to specify the backup compression method."
    group: "Longhorn Default Settings"
    type: string
    default: "lz4"
  - variable: defaultSettings.backupConcurrentLimit
    label: Backup Concurrent Limit Per Backup
    description: "This setting controls how many worker threads run concurrently per backup."
    group: "Longhorn Default Settings"
    type: int
    min: 1
    default: 2
  - variable: defaultSettings.restoreConcurrentLimit
    label: Restore Concurrent Limit Per Backup
    description: "This setting controls how many worker threads run concurrently per restore."
    group: "Longhorn Default Settings"
    type: int
    min: 1
    default: 2
  - variable: defaultSettings.v2DataEngine
    label: V2 Data Engine
    description: "This allows users to activate the v2 data engine based on SPDK. Currently, it is in the preview phase and should not be used in a production environment."
    group: "Longhorn V2 Data Engine (Preview Feature) Settings"
    type: boolean
    default: false
  - variable: defaultSettings.offlineReplicaRebuilding
    label: Offline Replica Rebuilding
    description: "This setting allows users to enable offline replica rebuilding for volumes using the v2 data engine."
    group: "Longhorn V2 Data Engine (Preview Feature) Settings"
    required: true
    type: enum
    options:
    - "enabled"
    - "disabled"
    default: "enabled"
- variable: persistence.defaultClass
  default: "true"
  description: "Set as the default StorageClass for Longhorn"
  label: Default Storage Class
  group: "Longhorn Storage Class Settings"
  required: true
  type: boolean
- variable: persistence.reclaimPolicy
  label: Storage Class Retain Policy
  description: "Define the reclaim policy. Options: `Retain`, `Delete`"
  group: "Longhorn Storage Class Settings"
  required: true
  type: enum
  options:
  - "Delete"
  - "Retain"
  default: "Delete"
- variable: persistence.defaultClassReplicaCount
  description: "Set the replica count for the Longhorn StorageClass"
  label: Default Storage Class Replica Count
  group: "Longhorn Storage Class Settings"
  type: int
  min: 1
  max: 10
  default: 3
- variable: persistence.defaultDataLocality
  description: "Set data locality for the Longhorn StorageClass. Options: `disabled`, `best-effort`"
  label: Default Storage Class Data Locality
  group: "Longhorn Storage Class Settings"
  type: enum
  options:
  - "disabled"
  - "best-effort"
  default: "disabled"
- variable: persistence.recurringJobSelector.enable
  description: "Enable the recurring job selector for the Longhorn StorageClass"
  group: "Longhorn Storage Class Settings"
  label: Enable Storage Class Recurring Job Selector
  type: boolean
  default: false
  show_subquestion_if: true
  subquestions:
  - variable: persistence.recurringJobSelector.jobList
    description: 'Recurring job selector list for the Longhorn StorageClass. Please be careful with the quoting of the input. e.g., [{"name":"backup", "isGroup":true}]'
    label: Storage Class Recurring Job Selector List
    group: "Longhorn Storage Class Settings"
    type: string
    default:
- variable: persistence.defaultNodeSelector.enable
  description: "Enable the node selector for the Longhorn StorageClass"
  group: "Longhorn Storage Class Settings"
  label: Enable Storage Class Node Selector
  type: boolean
  default: false
  show_subquestion_if: true
  subquestions:
  - variable: persistence.defaultNodeSelector.selector
    label: Storage Class Node Selector
    description: 'This selector enables only certain nodes having these tags to be used for the volume. e.g. `"storage,fast"`'
    group: "Longhorn Storage Class Settings"
    type: string
    default:
- variable: persistence.backingImage.enable
  description: "Set a backing image for the Longhorn StorageClass"
  group: "Longhorn Storage Class Settings"
  label: Default Storage Class Backing Image
  type: boolean
  default: false
  show_subquestion_if: true
  subquestions:
  - variable: persistence.backingImage.name
    description: 'Specify a backing image that will be used by Longhorn volumes in the Longhorn StorageClass. If it does not exist, the backing image data source type and backing image data source parameters should be specified so that Longhorn will create the backing image before using it.'
    label: Storage Class Backing Image Name
    group: "Longhorn Storage Class Settings"
    type: string
    default:
  - variable: persistence.backingImage.expectedChecksum
    description: 'Specify the expected SHA512 checksum of the selected backing image in the Longhorn StorageClass.
      WARNING:
      - If the backing image name is not specified, setting this field is meaningless.
      - It is not recommended to set this field if the data source type is \"export-from-volume\".'
    label: Storage Class Backing Image Expected SHA512 Checksum
    group: "Longhorn Storage Class Settings"
    type: string
    default:
  - variable: persistence.backingImage.dataSourceType
    description: 'Specify the data source type for the backing image used in the Longhorn StorageClass.
      If the backing image does not exist, Longhorn will use this field to create a backing image. Otherwise, Longhorn will use it to verify the selected backing image.
      WARNING:
      - If the backing image name is not specified, setting this field is meaningless.
      - As for backing image creation with data source type \"upload\", it is recommended to do it via the UI rather than the StorageClass here. Uploading requires file data to be sent to the Longhorn backend after the object creation, which is complicated if you want to handle it manually.'
    label: Storage Class Backing Image Data Source Type
    group: "Longhorn Storage Class Settings"
    type: enum
    options:
    - ""
    - "download"
    - "upload"
    - "export-from-volume"
    default: ""
  - variable: persistence.backingImage.dataSourceParameters
    description: "Specify the data source parameters for the backing image used in the Longhorn StorageClass.
      If the backing image does not exist, Longhorn will use this field to create a backing image. Otherwise, Longhorn will use it to verify the selected backing image.
      This option accepts a JSON string of a map. e.g., '{\"url\":\"https://backing-image-example.s3-region.amazonaws.com/test-backing-image\"}'.
      WARNING:
      - If the backing image name is not specified, setting this field is meaningless.
      - Be careful of the quotes here."
    label: Storage Class Backing Image Data Source Parameters
    group: "Longhorn Storage Class Settings"
    type: string
    default:
- variable: persistence.removeSnapshotsDuringFilesystemTrim
  description: "Allow automatically removing snapshots during filesystem trim for the Longhorn StorageClass. Options: `ignored`, `enabled`, `disabled`"
  label: Default Storage Class Remove Snapshots During Filesystem Trim
  group: "Longhorn Storage Class Settings"
  type: enum
  options:
  - "ignored"
  - "enabled"
  - "disabled"
  default: "ignored"
- variable: ingress.enabled
  default: "false"
  description: "Expose the app using a Layer 7 Load Balancer - ingress"
  type: boolean
  group: "Services and Load Balancing"
  label: Expose app using Layer 7 Load Balancer
  show_subquestion_if: true
  subquestions:
  - variable: ingress.host
    default: "xip.io"
    description: "Layer 7 Load Balancer hostname"
    type: hostname
    required: true
    label: Layer 7 Load Balancer Hostname
  - variable: ingress.path
    default: "/"
    description: "If ingress is enabled, you can set the default ingress path"
    type: string
    required: true
    label: Ingress Path
- variable: service.ui.type
  default: "Rancher-Proxy"
  description: "Define the Longhorn UI service type. Options: `ClusterIP`, `NodePort`, `LoadBalancer`, `Rancher-Proxy`"
  type: enum
  options:
  - "ClusterIP"
  - "NodePort"
  - "LoadBalancer"
  - "Rancher-Proxy"
  label: Longhorn UI Service
  show_if: "ingress.enabled=false"
  group: "Services and Load Balancing"
  show_subquestion_if: "NodePort"
  subquestions:
  - variable: service.ui.nodePort
    default: ""
    description: "NodePort port number (to set explicitly, choose a port between 30000-32767)"
    type: int
    min: 30000
    max: 32767
    show_if: "service.ui.type=NodePort||service.ui.type=LoadBalancer"
    label: UI Service NodePort number
- variable: enablePSP
  default: "false"
  description: "Set up a pod security policy for Longhorn workloads."
  label: Pod Security Policy
  type: boolean
  group: "Other Settings"
- variable: global.cattle.windowsCluster.enabled
  default: "false"
  description: "Enable this to allow Longhorn to run on a Rancher-deployed Windows cluster."
  label: Rancher Windows Cluster
  type: boolean
  group: "Other Settings"
- variable: networkPolicies.enabled
  description: "Enable NetworkPolicies to limit access to the Longhorn pods.
    Warning: The Rancher Proxy will not work if this feature is enabled and a custom NetworkPolicy must be added."
  group: "Other Settings"
  label: Network Policies
  default: "false"
  type: boolean
  subquestions:
  - variable: networkPolicies.type
    label: Network Policies for Ingress
    description: "Create the policy based on your distribution to allow access for the ingress. Options: `k3s`, `rke2`, `rke1`"
    show_if: "networkPolicies.enabled=true&&ingress.enabled=true"
    type: enum
    default: "rke2"
    options:
    - "rke1"
    - "rke2"
    - "k3s"
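
Answers entered through the Rancher questions above are ordinary chart values; for orientation, answering the Private Registry questions is equivalent to installing the chart with a values fragment like the following (registry URL, secret name, and credentials are illustrative placeholders, not defaults):

```yaml
# Sketch of the values produced by the Private Registry questions above;
# every concrete value here is a placeholder.
privateRegistry:
  createSecret: true
  registryUrl: "registry.example.com"
  registrySecret: "longhorn-registry-secret"
  registryUser: "admin"
  registryPasswd: "changeme"
```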

chart/templates/NOTES.txt (new file, 5 lines)
@@ -0,0 +1,5 @@
Longhorn is now installed on the cluster!

Please wait a few minutes for other Longhorn components such as CSI deployments, Engine Images, and Instance Managers to be initialized.

Visit our documentation at https://longhorn.io/docs/

chart/templates/_helpers.tpl (new file, 66 lines)
@@ -0,0 +1,66 @@
{{/* vim: set filetype=mustache: */}}
{{/*
Expand the name of the chart.
*/}}
{{- define "longhorn.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" -}}
{{- end -}}

{{/*
Create a default fully qualified app name.
We truncate at 63 chars because some Kubernetes name fields are limited to this (by the DNS naming spec).
*/}}
{{- define "longhorn.fullname" -}}
{{- $name := default .Chart.Name .Values.nameOverride -}}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" -}}
{{- end -}}


{{- define "longhorn.managerIP" -}}
{{- $fullname := (include "longhorn.fullname" .) -}}
{{- printf "http://%s-backend:9500" $fullname | trunc 63 | trimSuffix "-" -}}
{{- end -}}


{{- define "secret" }}
{{- printf "{\"auths\": {\"%s\": {\"auth\": \"%s\"}}}" .Values.privateRegistry.registryUrl (printf "%s:%s" .Values.privateRegistry.registryUser .Values.privateRegistry.registryPasswd | b64enc) | b64enc }}
{{- end }}

{{- /*
longhorn.labels generates the standard Helm labels.
*/ -}}
{{- define "longhorn.labels" -}}
app.kubernetes.io/name: {{ template "longhorn.name" . }}
helm.sh/chart: {{ .Chart.Name }}-{{ .Chart.Version | replace "+" "_" }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
app.kubernetes.io/instance: {{ .Release.Name }}
app.kubernetes.io/version: {{ .Chart.AppVersion }}
{{- end -}}


{{- define "system_default_registry" -}}
{{- if .Values.global.cattle.systemDefaultRegistry -}}
{{- printf "%s/" .Values.global.cattle.systemDefaultRegistry -}}
{{- else -}}
{{- "" -}}
{{- end -}}
{{- end -}}

{{- define "registry_url" -}}
{{- if .Values.privateRegistry.registryUrl -}}
{{- printf "%s/" .Values.privateRegistry.registryUrl -}}
{{- else -}}
{{ include "system_default_registry" . }}
{{- end -}}
{{- end -}}

{{- /*
define the longhorn release namespace
*/ -}}
{{- define "release_namespace" -}}
{{- if .Values.namespaceOverride -}}
{{- .Values.namespaceOverride -}}
{{- else -}}
{{- .Release.Namespace -}}
{{- end -}}
{{- end -}}
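
For reference, the `secret` template above emits the standard dockerconfigjson payload, base64-encoded twice overall: once for the `user:password` pair and once for the whole JSON document. Decoded, and using the placeholder inputs `registry.example.com` / `admin` / `changeme`, the inner document looks like this (shown as YAML, of which JSON is a subset):

```yaml
# Decoded payload of the "secret" helper for placeholder credentials;
# "YWRtaW46Y2hhbmdlbWU=" is base64("admin:changeme").
auths:
  registry.example.com:
    auth: "YWRtaW46Y2hhbmdlbWU="
```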

chart/templates/clusterrole.yaml (new file, 77 lines)
@@ -0,0 +1,77 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: longhorn-role
  labels: {{- include "longhorn.labels" . | nindent 4 }}
rules:
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - "*"
- apiGroups: [""]
  resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims","persistentvolumeclaims/status", "nodes", "proxy/nodes", "pods/log", "secrets", "services", "endpoints", "configmaps", "serviceaccounts"]
  verbs: ["*"]
- apiGroups: [""]
  resources: ["namespaces"]
  verbs: ["get", "list"]
- apiGroups: ["apps"]
  resources: ["daemonsets", "statefulsets", "deployments"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["*"]
- apiGroups: ["policy"]
  resources: ["poddisruptionbudgets", "podsecuritypolicies"]
  verbs: ["*"]
- apiGroups: ["scheduling.k8s.io"]
  resources: ["priorityclasses"]
  verbs: ["watch", "list"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses", "volumeattachments", "volumeattachments/status", "csinodes", "csidrivers"]
  verbs: ["*"]
- apiGroups: ["snapshot.storage.k8s.io"]
  resources: ["volumesnapshotclasses", "volumesnapshots", "volumesnapshotcontents", "volumesnapshotcontents/status"]
  verbs: ["*"]
- apiGroups: ["longhorn.io"]
  resources: ["volumes", "volumes/status", "engines", "engines/status", "replicas", "replicas/status", "settings",
              "engineimages", "engineimages/status", "nodes", "nodes/status", "instancemanagers", "instancemanagers/status",
{{- if .Values.openshift.enabled }}
              "engineimages/finalizers", "nodes/finalizers", "instancemanagers/finalizers",
{{- end }}
              "sharemanagers", "sharemanagers/status", "backingimages", "backingimages/status",
              "backingimagemanagers", "backingimagemanagers/status", "backingimagedatasources", "backingimagedatasources/status",
              "backuptargets", "backuptargets/status", "backupvolumes", "backupvolumes/status", "backups", "backups/status",
              "recurringjobs", "recurringjobs/status", "orphans", "orphans/status", "snapshots", "snapshots/status",
              "supportbundles", "supportbundles/status", "systembackups", "systembackups/status", "systemrestores", "systemrestores/status",
              "volumeattachments", "volumeattachments/status"]
  verbs: ["*"]
- apiGroups: ["coordination.k8s.io"]
  resources: ["leases"]
  verbs: ["*"]
- apiGroups: ["metrics.k8s.io"]
  resources: ["pods", "nodes"]
  verbs: ["get", "list"]
- apiGroups: ["apiregistration.k8s.io"]
  resources: ["apiservices"]
  verbs: ["list", "watch"]
- apiGroups: ["admissionregistration.k8s.io"]
  resources: ["mutatingwebhookconfigurations", "validatingwebhookconfigurations"]
  verbs: ["get", "list", "create", "patch", "delete"]
- apiGroups: ["rbac.authorization.k8s.io"]
  resources: ["roles", "rolebindings", "clusterrolebindings", "clusterroles"]
  verbs: ["*"]
{{- if .Values.openshift.enabled }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRole
metadata:
  name: longhorn-ocp-privileged-role
  labels: {{- include "longhorn.labels" . | nindent 4 }}
rules:
- apiGroups: ["security.openshift.io"]
  resources: ["securitycontextconstraints"]
  resourceNames: ["anyuid", "privileged"]
  verbs: ["use"]
{{- end }}
49 chart/templates/clusterrolebinding.yaml Normal file
@@ -0,0 +1,49 @@
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: longhorn-bind
  labels: {{- include "longhorn.labels" . | nindent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: longhorn-role
subjects:
- kind: ServiceAccount
  name: longhorn-service-account
  namespace: {{ include "release_namespace" . }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: longhorn-support-bundle
  labels: {{- include "longhorn.labels" . | nindent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: cluster-admin
subjects:
- kind: ServiceAccount
  name: longhorn-support-bundle
  namespace: {{ include "release_namespace" . }}
{{- if .Values.openshift.enabled }}
---
apiVersion: rbac.authorization.k8s.io/v1
kind: ClusterRoleBinding
metadata:
  name: longhorn-ocp-privileged-bind
  labels: {{- include "longhorn.labels" . | nindent 4 }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: longhorn-ocp-privileged-role
subjects:
- kind: ServiceAccount
  name: longhorn-service-account
  namespace: {{ include "release_namespace" . }}
- kind: ServiceAccount
  name: longhorn-ui-service-account
  namespace: {{ include "release_namespace" . }}
- kind: ServiceAccount
  name: default # supportbundle-agent-support-bundle uses default sa
  namespace: {{ include "release_namespace" . }}
{{- end }}
3688 chart/templates/crds.yaml Normal file
File diff suppressed because it is too large
151 chart/templates/daemonset-sa.yaml Normal file
@@ -0,0 +1,151 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-manager
  name: longhorn-manager
  namespace: {{ include "release_namespace" . }}
spec:
  selector:
    matchLabels:
      app: longhorn-manager
  template:
    metadata:
      labels: {{- include "longhorn.labels" . | nindent 8 }}
        app: longhorn-manager
      {{- with .Values.annotations }}
      annotations:
        {{- toYaml . | nindent 8 }}
      {{- end }}
    spec:
      containers:
      - name: longhorn-manager
        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        securityContext:
          privileged: true
        command:
        - longhorn-manager
        - -d
        {{- if eq .Values.longhornManager.log.format "json" }}
        - -j
        {{- end }}
        - daemon
        - --engine-image
        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.engine.repository }}:{{ .Values.image.longhorn.engine.tag }}"
        - --instance-manager-image
        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.instanceManager.repository }}:{{ .Values.image.longhorn.instanceManager.tag }}"
        - --share-manager-image
        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.shareManager.repository }}:{{ .Values.image.longhorn.shareManager.tag }}"
        - --backing-image-manager-image
        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.backingImageManager.repository }}:{{ .Values.image.longhorn.backingImageManager.tag }}"
        - --support-bundle-manager-image
        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.supportBundleKit.repository }}:{{ .Values.image.longhorn.supportBundleKit.tag }}"
        - --manager-image
        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}"
        - --service-account
        - longhorn-service-account
        ports:
        - containerPort: 9500
          name: manager
        - containerPort: 9501
          name: conversion-wh
        - containerPort: 9502
          name: admission-wh
        - containerPort: 9503
          name: recov-backend
        readinessProbe:
          httpGet:
            path: /v1/healthz
            port: 9501
            scheme: HTTPS
        volumeMounts:
        - name: dev
          mountPath: /host/dev/
        - name: proc
          mountPath: /host/proc/
        - name: longhorn
          mountPath: /var/lib/longhorn/
          mountPropagation: Bidirectional
        - name: longhorn-grpc-tls
          mountPath: /tls-files/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumes:
      - name: dev
        hostPath:
          path: /dev/
      - name: proc
        hostPath:
          path: /proc/
      - name: longhorn
        hostPath:
          path: /var/lib/longhorn/
      - name: longhorn-grpc-tls
        secret:
          secretName: longhorn-grpc-tls
          optional: true
      {{- if .Values.privateRegistry.registrySecret }}
      imagePullSecrets:
      - name: {{ .Values.privateRegistry.registrySecret }}
      {{- end }}
      {{- if .Values.longhornManager.priorityClass }}
      priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
      {{- end }}
      {{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
      tolerations:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
        {{- end }}
        {{- if .Values.longhornManager.tolerations }}
{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
        {{- end }}
      {{- end }}
      {{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
      nodeSelector:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
        {{- end }}
        {{- if .Values.longhornManager.nodeSelector }}
{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
        {{- end }}
      {{- end }}
      serviceAccountName: longhorn-service-account
  updateStrategy:
    rollingUpdate:
      maxUnavailable: "100%"
---
apiVersion: v1
kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-manager
  name: longhorn-backend
  namespace: {{ include "release_namespace" . }}
  {{- if .Values.longhornManager.serviceAnnotations }}
  annotations:
{{ toYaml .Values.longhornManager.serviceAnnotations | indent 4 }}
  {{- end }}
spec:
  type: {{ .Values.service.manager.type }}
  sessionAffinity: ClientIP
  selector:
    app: longhorn-manager
  ports:
  - name: manager
    port: 9500
    targetPort: manager
    {{- if .Values.service.manager.nodePort }}
    nodePort: {{ .Values.service.manager.nodePort }}
    {{- end }}
86 chart/templates/default-setting.yaml Normal file
@@ -0,0 +1,86 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: longhorn-default-setting
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
data:
  default-setting.yaml: |-
    {{ if not (kindIs "invalid" .Values.defaultSettings.backupTarget) }}backup-target: {{ .Values.defaultSettings.backupTarget }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.backupTargetCredentialSecret) }}backup-target-credential-secret: {{ .Values.defaultSettings.backupTargetCredentialSecret }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.allowRecurringJobWhileVolumeDetached) }}allow-recurring-job-while-volume-detached: {{ .Values.defaultSettings.allowRecurringJobWhileVolumeDetached }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.createDefaultDiskLabeledNodes) }}create-default-disk-labeled-nodes: {{ .Values.defaultSettings.createDefaultDiskLabeledNodes }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.defaultDataPath) }}default-data-path: {{ .Values.defaultSettings.defaultDataPath }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.replicaSoftAntiAffinity) }}replica-soft-anti-affinity: {{ .Values.defaultSettings.replicaSoftAntiAffinity }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.replicaAutoBalance) }}replica-auto-balance: {{ .Values.defaultSettings.replicaAutoBalance }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.storageOverProvisioningPercentage) }}storage-over-provisioning-percentage: {{ .Values.defaultSettings.storageOverProvisioningPercentage }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.storageMinimalAvailablePercentage) }}storage-minimal-available-percentage: {{ .Values.defaultSettings.storageMinimalAvailablePercentage }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.storageReservedPercentageForDefaultDisk) }}storage-reserved-percentage-for-default-disk: {{ .Values.defaultSettings.storageReservedPercentageForDefaultDisk }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.upgradeChecker) }}upgrade-checker: {{ .Values.defaultSettings.upgradeChecker }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.defaultReplicaCount) }}default-replica-count: {{ .Values.defaultSettings.defaultReplicaCount }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.defaultDataLocality) }}default-data-locality: {{ .Values.defaultSettings.defaultDataLocality }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.defaultLonghornStaticStorageClass) }}default-longhorn-static-storage-class: {{ .Values.defaultSettings.defaultLonghornStaticStorageClass }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.backupstorePollInterval) }}backupstore-poll-interval: {{ .Values.defaultSettings.backupstorePollInterval }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.failedBackupTTL) }}failed-backup-ttl: {{ .Values.defaultSettings.failedBackupTTL }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.restoreVolumeRecurringJobs) }}restore-volume-recurring-jobs: {{ .Values.defaultSettings.restoreVolumeRecurringJobs }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.recurringSuccessfulJobsHistoryLimit) }}recurring-successful-jobs-history-limit: {{ .Values.defaultSettings.recurringSuccessfulJobsHistoryLimit }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.recurringFailedJobsHistoryLimit) }}recurring-failed-jobs-history-limit: {{ .Values.defaultSettings.recurringFailedJobsHistoryLimit }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.supportBundleFailedHistoryLimit) }}support-bundle-failed-history-limit: {{ .Values.defaultSettings.supportBundleFailedHistoryLimit }}{{ end }}
    {{- if or (not (kindIs "invalid" .Values.defaultSettings.taintToleration)) (.Values.global.cattle.windowsCluster.enabled) }}
    taint-toleration: {{ $windowsDefaultSettingTaintToleration := list }}{{ $defaultSettingTaintToleration := list -}}
      {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.defaultSetting.taintToleration -}}
        {{- $windowsDefaultSettingTaintToleration = .Values.global.cattle.windowsCluster.defaultSetting.taintToleration -}}
      {{- end -}}
      {{- if not (kindIs "invalid" .Values.defaultSettings.taintToleration) -}}
        {{- $defaultSettingTaintToleration = .Values.defaultSettings.taintToleration -}}
      {{- end -}}
      {{- $taintToleration := list $windowsDefaultSettingTaintToleration $defaultSettingTaintToleration }}{{ join ";" (compact $taintToleration) -}}
    {{- end }}
    {{- if or (not (kindIs "invalid" .Values.defaultSettings.systemManagedComponentsNodeSelector)) (.Values.global.cattle.windowsCluster.enabled) }}
    system-managed-components-node-selector: {{ $windowsDefaultSettingNodeSelector := list }}{{ $defaultSettingNodeSelector := list -}}
      {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.defaultSetting.systemManagedComponentsNodeSelector -}}
        {{ $windowsDefaultSettingNodeSelector = .Values.global.cattle.windowsCluster.defaultSetting.systemManagedComponentsNodeSelector -}}
      {{- end -}}
      {{- if not (kindIs "invalid" .Values.defaultSettings.systemManagedComponentsNodeSelector) -}}
        {{- $defaultSettingNodeSelector = .Values.defaultSettings.systemManagedComponentsNodeSelector -}}
      {{- end -}}
      {{- $nodeSelector := list $windowsDefaultSettingNodeSelector $defaultSettingNodeSelector }}{{ join ";" (compact $nodeSelector) -}}
    {{- end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.priorityClass) }}priority-class: {{ .Values.defaultSettings.priorityClass }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.autoSalvage) }}auto-salvage: {{ .Values.defaultSettings.autoSalvage }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly) }}auto-delete-pod-when-volume-detached-unexpectedly: {{ .Values.defaultSettings.autoDeletePodWhenVolumeDetachedUnexpectedly }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.disableSchedulingOnCordonedNode) }}disable-scheduling-on-cordoned-node: {{ .Values.defaultSettings.disableSchedulingOnCordonedNode }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.replicaZoneSoftAntiAffinity) }}replica-zone-soft-anti-affinity: {{ .Values.defaultSettings.replicaZoneSoftAntiAffinity }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.replicaDiskSoftAntiAffinity) }}replica-disk-soft-anti-affinity: {{ .Values.defaultSettings.replicaDiskSoftAntiAffinity }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.nodeDownPodDeletionPolicy) }}node-down-pod-deletion-policy: {{ .Values.defaultSettings.nodeDownPodDeletionPolicy }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.nodeDrainPolicy) }}node-drain-policy: {{ .Values.defaultSettings.nodeDrainPolicy }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.replicaReplenishmentWaitInterval) }}replica-replenishment-wait-interval: {{ .Values.defaultSettings.replicaReplenishmentWaitInterval }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.concurrentReplicaRebuildPerNodeLimit) }}concurrent-replica-rebuild-per-node-limit: {{ .Values.defaultSettings.concurrentReplicaRebuildPerNodeLimit }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.concurrentVolumeBackupRestorePerNodeLimit) }}concurrent-volume-backup-restore-per-node-limit: {{ .Values.defaultSettings.concurrentVolumeBackupRestorePerNodeLimit }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.disableRevisionCounter) }}disable-revision-counter: {{ .Values.defaultSettings.disableRevisionCounter }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.systemManagedPodsImagePullPolicy) }}system-managed-pods-image-pull-policy: {{ .Values.defaultSettings.systemManagedPodsImagePullPolicy }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.allowVolumeCreationWithDegradedAvailability) }}allow-volume-creation-with-degraded-availability: {{ .Values.defaultSettings.allowVolumeCreationWithDegradedAvailability }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.autoCleanupSystemGeneratedSnapshot) }}auto-cleanup-system-generated-snapshot: {{ .Values.defaultSettings.autoCleanupSystemGeneratedSnapshot }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit) }}concurrent-automatic-engine-upgrade-per-node-limit: {{ .Values.defaultSettings.concurrentAutomaticEngineUpgradePerNodeLimit }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.backingImageCleanupWaitInterval) }}backing-image-cleanup-wait-interval: {{ .Values.defaultSettings.backingImageCleanupWaitInterval }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.backingImageRecoveryWaitInterval) }}backing-image-recovery-wait-interval: {{ .Values.defaultSettings.backingImageRecoveryWaitInterval }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.guaranteedInstanceManagerCPU) }}guaranteed-instance-manager-cpu: {{ .Values.defaultSettings.guaranteedInstanceManagerCPU }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.kubernetesClusterAutoscalerEnabled) }}kubernetes-cluster-autoscaler-enabled: {{ .Values.defaultSettings.kubernetesClusterAutoscalerEnabled }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.orphanAutoDeletion) }}orphan-auto-deletion: {{ .Values.defaultSettings.orphanAutoDeletion }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.storageNetwork) }}storage-network: {{ .Values.defaultSettings.storageNetwork }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.deletingConfirmationFlag) }}deleting-confirmation-flag: {{ .Values.defaultSettings.deletingConfirmationFlag }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.engineReplicaTimeout) }}engine-replica-timeout: {{ .Values.defaultSettings.engineReplicaTimeout }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.snapshotDataIntegrity) }}snapshot-data-integrity: {{ .Values.defaultSettings.snapshotDataIntegrity }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation) }}snapshot-data-integrity-immediate-check-after-snapshot-creation: {{ .Values.defaultSettings.snapshotDataIntegrityImmediateCheckAfterSnapshotCreation }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.snapshotDataIntegrityCronjob) }}snapshot-data-integrity-cronjob: {{ .Values.defaultSettings.snapshotDataIntegrityCronjob }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.removeSnapshotsDuringFilesystemTrim) }}remove-snapshots-during-filesystem-trim: {{ .Values.defaultSettings.removeSnapshotsDuringFilesystemTrim }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.fastReplicaRebuildEnabled) }}fast-replica-rebuild-enabled: {{ .Values.defaultSettings.fastReplicaRebuildEnabled }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.replicaFileSyncHttpClientTimeout) }}replica-file-sync-http-client-timeout: {{ .Values.defaultSettings.replicaFileSyncHttpClientTimeout }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.logLevel) }}log-level: {{ .Values.defaultSettings.logLevel }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.backupCompressionMethod) }}backup-compression-method: {{ .Values.defaultSettings.backupCompressionMethod }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.backupConcurrentLimit) }}backup-concurrent-limit: {{ .Values.defaultSettings.backupConcurrentLimit }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.restoreConcurrentLimit) }}restore-concurrent-limit: {{ .Values.defaultSettings.restoreConcurrentLimit }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.v2DataEngine) }}v2-data-engine: {{ .Values.defaultSettings.v2DataEngine }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.offlineReplicaRebuilding) }}offline-replica-rebuilding: {{ .Values.defaultSettings.offlineReplicaRebuilding }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.allowEmptyNodeSelectorVolume) }}allow-empty-node-selector-volume: {{ .Values.defaultSettings.allowEmptyNodeSelectorVolume }}{{ end }}
    {{ if not (kindIs "invalid" .Values.defaultSettings.allowEmptyDiskSelectorVolume) }}allow-empty-disk-selector-volume: {{ .Values.defaultSettings.allowEmptyDiskSelectorVolume }}{{ end }}
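To make the taint-toleration join above concrete, a minimal sketch assuming hypothetical tolerations on a Windows cluster (both values are illustrative):

# Hypothetical values.yaml fragment (illustrative only)
global:
  cattle:
    windowsCluster:
      enabled: true
      defaultSetting:
        taintToleration: cattle.io/os=linux:NoSchedule   # assumed Windows-cluster default
defaultSettings:
  taintToleration: storage=longhorn:NoSchedule           # assumed user setting

Both entries are then non-empty, so compact drops nothing and join ";" renders taint-toleration: cattle.io/os=linux:NoSchedule;storage=longhorn:NoSchedule. If either value were unset, compact would remove the empty entry first, so no stray ";" separator is emitted. The system-managed-components-node-selector block in the same file follows the same pattern.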
118 chart/templates/deployment-driver.yaml Normal file
@@ -0,0 +1,118 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: longhorn-driver-deployer
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
spec:
  replicas: 1
  selector:
    matchLabels:
      app: longhorn-driver-deployer
  template:
    metadata:
      labels: {{- include "longhorn.labels" . | nindent 8 }}
        app: longhorn-driver-deployer
    spec:
      initContainers:
      - name: wait-longhorn-manager
        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
        command: ['sh', '-c', 'while [ $(curl -m 1 -s -o /dev/null -w "%{http_code}" http://longhorn-backend:9500/v1) != "200" ]; do echo waiting; sleep 2; done']
      containers:
      - name: longhorn-driver-deployer
        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command:
        - longhorn-manager
        - -d
        - deploy-driver
        - --manager-image
        - "{{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}"
        - --manager-url
        - http://longhorn-backend:9500/v1
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: SERVICE_ACCOUNT
          valueFrom:
            fieldRef:
              fieldPath: spec.serviceAccountName
        {{- if .Values.csi.kubeletRootDir }}
        - name: KUBELET_ROOT_DIR
          value: {{ .Values.csi.kubeletRootDir }}
        {{- end }}
        {{- if and .Values.image.csi.attacher.repository .Values.image.csi.attacher.tag }}
        - name: CSI_ATTACHER_IMAGE
          value: "{{ template "registry_url" . }}{{ .Values.image.csi.attacher.repository }}:{{ .Values.image.csi.attacher.tag }}"
        {{- end }}
        {{- if and .Values.image.csi.provisioner.repository .Values.image.csi.provisioner.tag }}
        - name: CSI_PROVISIONER_IMAGE
          value: "{{ template "registry_url" . }}{{ .Values.image.csi.provisioner.repository }}:{{ .Values.image.csi.provisioner.tag }}"
        {{- end }}
        {{- if and .Values.image.csi.nodeDriverRegistrar.repository .Values.image.csi.nodeDriverRegistrar.tag }}
        - name: CSI_NODE_DRIVER_REGISTRAR_IMAGE
          value: "{{ template "registry_url" . }}{{ .Values.image.csi.nodeDriverRegistrar.repository }}:{{ .Values.image.csi.nodeDriverRegistrar.tag }}"
        {{- end }}
        {{- if and .Values.image.csi.resizer.repository .Values.image.csi.resizer.tag }}
        - name: CSI_RESIZER_IMAGE
          value: "{{ template "registry_url" . }}{{ .Values.image.csi.resizer.repository }}:{{ .Values.image.csi.resizer.tag }}"
        {{- end }}
        {{- if and .Values.image.csi.snapshotter.repository .Values.image.csi.snapshotter.tag }}
        - name: CSI_SNAPSHOTTER_IMAGE
          value: "{{ template "registry_url" . }}{{ .Values.image.csi.snapshotter.repository }}:{{ .Values.image.csi.snapshotter.tag }}"
        {{- end }}
        {{- if and .Values.image.csi.livenessProbe.repository .Values.image.csi.livenessProbe.tag }}
        - name: CSI_LIVENESS_PROBE_IMAGE
          value: "{{ template "registry_url" . }}{{ .Values.image.csi.livenessProbe.repository }}:{{ .Values.image.csi.livenessProbe.tag }}"
        {{- end }}
        {{- if .Values.csi.attacherReplicaCount }}
        - name: CSI_ATTACHER_REPLICA_COUNT
          value: {{ .Values.csi.attacherReplicaCount | quote }}
        {{- end }}
        {{- if .Values.csi.provisionerReplicaCount }}
        - name: CSI_PROVISIONER_REPLICA_COUNT
          value: {{ .Values.csi.provisionerReplicaCount | quote }}
        {{- end }}
        {{- if .Values.csi.resizerReplicaCount }}
        - name: CSI_RESIZER_REPLICA_COUNT
          value: {{ .Values.csi.resizerReplicaCount | quote }}
        {{- end }}
        {{- if .Values.csi.snapshotterReplicaCount }}
        - name: CSI_SNAPSHOTTER_REPLICA_COUNT
          value: {{ .Values.csi.snapshotterReplicaCount | quote }}
        {{- end }}

      {{- if .Values.privateRegistry.registrySecret }}
      imagePullSecrets:
      - name: {{ .Values.privateRegistry.registrySecret }}
      {{- end }}
      {{- if .Values.longhornDriver.priorityClass }}
      priorityClassName: {{ .Values.longhornDriver.priorityClass | quote }}
      {{- end }}
      {{- if or .Values.longhornDriver.tolerations .Values.global.cattle.windowsCluster.enabled }}
      tolerations:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
        {{- end }}
        {{- if .Values.longhornDriver.tolerations }}
{{ toYaml .Values.longhornDriver.tolerations | indent 6 }}
        {{- end }}
      {{- end }}
      {{- if or .Values.longhornDriver.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
      nodeSelector:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
        {{- end }}
        {{- if .Values.longhornDriver.nodeSelector }}
{{ toYaml .Values.longhornDriver.nodeSelector | indent 8 }}
        {{- end }}
      {{- end }}
      serviceAccountName: longhorn-service-account
      securityContext:
        runAsUser: 0
182 chart/templates/deployment-ui.yaml Normal file
@@ -0,0 +1,182 @@
{{- if .Values.openshift.enabled }}
{{- if .Values.openshift.ui.route }}
# https://github.com/openshift/oauth-proxy/blob/master/contrib/sidecar.yaml
# Create a proxy service account and ensure it will use the route "proxy"
# Create a secure connection to the proxy via a route
apiVersion: route.openshift.io/v1
kind: Route
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-ui
  name: {{ .Values.openshift.ui.route }}
  namespace: {{ include "release_namespace" . }}
spec:
  to:
    kind: Service
    name: longhorn-ui
  tls:
    termination: reencrypt
---
apiVersion: v1
kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-ui
  name: longhorn-ui
  namespace: {{ include "release_namespace" . }}
  annotations:
    service.alpha.openshift.io/serving-cert-secret-name: longhorn-ui-tls
spec:
  ports:
  - name: longhorn-ui
    port: {{ .Values.openshift.ui.port | default 443 }}
    targetPort: {{ .Values.openshift.ui.proxy | default 8443 }}
  selector:
    app: longhorn-ui
---
{{- end }}
{{- end }}
apiVersion: apps/v1
kind: Deployment
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-ui
  name: longhorn-ui
  namespace: {{ include "release_namespace" . }}
spec:
  replicas: {{ .Values.longhornUI.replicas }}
  selector:
    matchLabels:
      app: longhorn-ui
  template:
    metadata:
      labels: {{- include "longhorn.labels" . | nindent 8 }}
        app: longhorn-ui
    spec:
      serviceAccountName: longhorn-ui-service-account
      affinity:
        podAntiAffinity:
          preferredDuringSchedulingIgnoredDuringExecution:
          - weight: 1
            podAffinityTerm:
              labelSelector:
                matchExpressions:
                - key: app
                  operator: In
                  values:
                  - longhorn-ui
              topologyKey: kubernetes.io/hostname
      containers:
      {{- if .Values.openshift.enabled }}
      {{- if .Values.openshift.ui.route }}
      - name: oauth-proxy
        image: {{ template "registry_url" . }}{{ .Values.image.openshift.oauthProxy.repository }}:{{ .Values.image.openshift.oauthProxy.tag }}
        imagePullPolicy: IfNotPresent
        ports:
        - containerPort: {{ .Values.openshift.ui.proxy | default 8443 }}
          name: public
        args:
        - --https-address=:{{ .Values.openshift.ui.proxy | default 8443 }}
        - --provider=openshift
        - --openshift-service-account=longhorn-ui-service-account
        - --upstream=http://localhost:8000
        - --tls-cert=/etc/tls/private/tls.crt
        - --tls-key=/etc/tls/private/tls.key
        - --cookie-secret=SECRET
        - --openshift-sar={"namespace":"{{ include "release_namespace" . }}","group":"longhorn.io","resource":"setting","verb":"delete"}
        volumeMounts:
        - mountPath: /etc/tls/private
          name: longhorn-ui-tls
      {{- end }}
      {{- end }}
      - name: longhorn-ui
        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.ui.repository }}:{{ .Values.image.longhorn.ui.tag }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        volumeMounts:
        - name: nginx-cache
          mountPath: /var/cache/nginx/
        - name: nginx-config
          mountPath: /var/config/nginx/
        - name: var-run
          mountPath: /var/run/
        ports:
        - containerPort: 8000
          name: http
        env:
        - name: LONGHORN_MANAGER_IP
          value: "http://longhorn-backend:9500"
        - name: LONGHORN_UI_PORT
          value: "8000"
      volumes:
      {{- if .Values.openshift.enabled }}
      {{- if .Values.openshift.ui.route }}
      - name: longhorn-ui-tls
        secret:
          secretName: longhorn-ui-tls
      {{- end }}
      {{- end }}
      - emptyDir: {}
        name: nginx-cache
      - emptyDir: {}
        name: nginx-config
      - emptyDir: {}
        name: var-run
      {{- if .Values.privateRegistry.registrySecret }}
      imagePullSecrets:
      - name: {{ .Values.privateRegistry.registrySecret }}
      {{- end }}
      {{- if .Values.longhornUI.priorityClass }}
      priorityClassName: {{ .Values.longhornUI.priorityClass | quote }}
      {{- end }}
      {{- if or .Values.longhornUI.tolerations .Values.global.cattle.windowsCluster.enabled }}
      tolerations:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
        {{- end }}
        {{- if .Values.longhornUI.tolerations }}
{{ toYaml .Values.longhornUI.tolerations | indent 6 }}
        {{- end }}
      {{- end }}
      {{- if or .Values.longhornUI.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
      nodeSelector:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
        {{- end }}
        {{- if .Values.longhornUI.nodeSelector }}
{{ toYaml .Values.longhornUI.nodeSelector | indent 8 }}
        {{- end }}
      {{- end }}
---
kind: Service
apiVersion: v1
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-ui
    {{- if eq .Values.service.ui.type "Rancher-Proxy" }}
    kubernetes.io/cluster-service: "true"
    {{- end }}
  name: longhorn-frontend
  namespace: {{ include "release_namespace" . }}
spec:
  {{- if eq .Values.service.ui.type "Rancher-Proxy" }}
  type: ClusterIP
  {{- else }}
  type: {{ .Values.service.ui.type }}
  {{- end }}
  {{- if and .Values.service.ui.loadBalancerIP (eq .Values.service.ui.type "LoadBalancer") }}
  loadBalancerIP: {{ .Values.service.ui.loadBalancerIP }}
  {{- end }}
  {{- if and (eq .Values.service.ui.type "LoadBalancer") .Values.service.ui.loadBalancerSourceRanges }}
  loadBalancerSourceRanges: {{- toYaml .Values.service.ui.loadBalancerSourceRanges | nindent 4 }}
  {{- end }}
  selector:
    app: longhorn-ui
  ports:
  - name: http
    port: 80
    targetPort: http
    {{- if .Values.service.ui.nodePort }}
    nodePort: {{ .Values.service.ui.nodePort }}
    {{- else }}
    nodePort: null
    {{- end }}
48 chart/templates/ingress.yaml Normal file
@@ -0,0 +1,48 @@
{{- if .Values.ingress.enabled }}
{{- if semverCompare ">=1.19-0" .Capabilities.KubeVersion.GitVersion -}}
apiVersion: networking.k8s.io/v1
{{- else -}}
apiVersion: networking.k8s.io/v1beta1
{{- end }}
kind: Ingress
metadata:
  name: longhorn-ingress
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-ingress
  annotations:
  {{- if .Values.ingress.secureBackends }}
    ingress.kubernetes.io/secure-backends: "true"
  {{- end }}
  {{- range $key, $value := .Values.ingress.annotations }}
    {{ $key }}: {{ $value | quote }}
  {{- end }}
spec:
  {{- if and .Values.ingress.ingressClassName (semverCompare ">=1.18-0" .Capabilities.KubeVersion.GitVersion) }}
  ingressClassName: {{ .Values.ingress.ingressClassName }}
  {{- end }}
  rules:
  - host: {{ .Values.ingress.host }}
    http:
      paths:
      - path: {{ default "" .Values.ingress.path }}
        {{- if (semverCompare ">=1.18-0" $.Capabilities.KubeVersion.GitVersion) }}
        pathType: ImplementationSpecific
        {{- end }}
        backend:
          {{- if semverCompare ">=1.19-0" $.Capabilities.KubeVersion.GitVersion }}
          service:
            name: longhorn-frontend
            port:
              number: 80
          {{- else }}
          serviceName: longhorn-frontend
          servicePort: 80
          {{- end }}
  {{- if .Values.ingress.tls }}
  tls:
  - hosts:
    - {{ .Values.ingress.host }}
    secretName: {{ .Values.ingress.tlsSecret }}
  {{- end }}
{{- end }}
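A minimal sketch of the values that drive the ingress template above, assuming a hypothetical host and a pre-created TLS secret (all names are illustrative):

# Hypothetical values.yaml fragment (illustrative only)
ingress:
  enabled: true
  ingressClassName: nginx          # assumed ingress class
  host: longhorn.example.com       # assumed host
  tls: true
  tlsSecret: longhorn-tls          # assumed pre-created TLS secret
  annotations: {}

On Kubernetes 1.19 and later, the semverCompare branches select networking.k8s.io/v1 with the service/port backend form; older clusters fall back to v1beta1 with serviceName/servicePort.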
@@ -0,0 +1,27 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backing-image-data-source
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      longhorn.io/component: backing-image-data-source
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: longhorn-manager
    - podSelector:
        matchLabels:
          longhorn.io/component: instance-manager
    - podSelector:
        matchLabels:
          longhorn.io/component: backing-image-manager
    - podSelector:
        matchLabels:
          longhorn.io/component: backing-image-data-source
{{- end }}
@@ -0,0 +1,27 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: backing-image-manager
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      longhorn.io/component: backing-image-manager
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: longhorn-manager
    - podSelector:
        matchLabels:
          longhorn.io/component: instance-manager
    - podSelector:
        matchLabels:
          longhorn.io/component: backing-image-manager
    - podSelector:
        matchLabels:
          longhorn.io/component: backing-image-data-source
{{- end }}
@@ -0,0 +1,27 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: instance-manager
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      longhorn.io/component: instance-manager
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: longhorn-manager
    - podSelector:
        matchLabels:
          longhorn.io/component: instance-manager
    - podSelector:
        matchLabels:
          longhorn.io/component: backing-image-manager
    - podSelector:
        matchLabels:
          longhorn.io/component: backing-image-data-source
{{- end }}
35 chart/templates/network-policies/manager-network-policy.yaml Normal file
@@ -0,0 +1,35 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: longhorn-manager
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      app: longhorn-manager
  policyTypes:
  - Ingress
  ingress:
  - from:
    - podSelector:
        matchLabels:
          app: longhorn-manager
    - podSelector:
        matchLabels:
          app: longhorn-ui
    - podSelector:
        matchLabels:
          app: longhorn-csi-plugin
    - podSelector:
        matchLabels:
          longhorn.io/managed-by: longhorn-manager
        matchExpressions:
        - { key: recurring-job.longhorn.io, operator: Exists }
    - podSelector:
        matchExpressions:
        - { key: longhorn.io/job-task, operator: Exists }
    - podSelector:
        matchLabels:
          app: longhorn-driver-deployer
{{- end }}
@@ -0,0 +1,17 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: longhorn-recovery-backend
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      app: longhorn-manager
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 9503
{{- end }}
@@ -0,0 +1,46 @@
{{- if and .Values.networkPolicies.enabled .Values.ingress.enabled (not (eq .Values.networkPolicies.type "")) }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: longhorn-ui-frontend
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      app: longhorn-ui
  policyTypes:
  - Ingress
  ingress:
  - from:
    {{- if eq .Values.networkPolicies.type "rke1"}}
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: ingress-nginx
      podSelector:
        matchLabels:
          app.kubernetes.io/component: controller
          app.kubernetes.io/instance: ingress-nginx
          app.kubernetes.io/name: ingress-nginx
    {{- else if eq .Values.networkPolicies.type "rke2" }}
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          app.kubernetes.io/component: controller
          app.kubernetes.io/instance: rke2-ingress-nginx
          app.kubernetes.io/name: rke2-ingress-nginx
    {{- else if eq .Values.networkPolicies.type "k3s" }}
    - namespaceSelector:
        matchLabels:
          kubernetes.io/metadata.name: kube-system
      podSelector:
        matchLabels:
          app.kubernetes.io/name: traefik
    ports:
    - port: 8000
      protocol: TCP
    - port: 80
      protocol: TCP
    {{- end }}
{{- end }}
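The policies in this directory hang off two values; a minimal sketch for an RKE2 cluster (type must be one of the strings the template above matches: rke1, rke2, or k3s):

# values.yaml fragment (keys taken from the templates above)
networkPolicies:
  enabled: true
  type: rke2

With type left as an empty string, the longhorn-ui-frontend policy is skipped entirely: its guard requires a non-empty type in addition to networkPolicies.enabled and ingress.enabled, since the ingress controller to allow differs per distribution.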
33 chart/templates/network-policies/webhook-network-policy.yaml Normal file
@@ -0,0 +1,33 @@
{{- if .Values.networkPolicies.enabled }}
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: longhorn-conversion-webhook
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      app: longhorn-manager
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 9501
---
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: longhorn-admission-webhook
  namespace: longhorn-system
spec:
  podSelector:
    matchLabels:
      app: longhorn-manager
  policyTypes:
  - Ingress
  ingress:
  - ports:
    - protocol: TCP
      port: 9502
{{- end }}
56 chart/templates/postupgrade-job.yaml Normal file
@@ -0,0 +1,56 @@
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    "helm.sh/hook": post-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation
  name: longhorn-post-upgrade
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
spec:
  activeDeadlineSeconds: 900
  backoffLimit: 1
  template:
    metadata:
      name: longhorn-post-upgrade
      labels: {{- include "longhorn.labels" . | nindent 8 }}
    spec:
      containers:
      - name: longhorn-post-upgrade
        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command:
        - longhorn-manager
        - post-upgrade
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      restartPolicy: OnFailure
      {{- if .Values.privateRegistry.registrySecret }}
      imagePullSecrets:
      - name: {{ .Values.privateRegistry.registrySecret }}
      {{- end }}
      {{- if .Values.longhornManager.priorityClass }}
      priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
      {{- end }}
      serviceAccountName: longhorn-service-account
      {{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
      tolerations:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
        {{- end }}
        {{- if .Values.longhornManager.tolerations }}
{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
        {{- end }}
      {{- end }}
      {{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
      nodeSelector:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
        {{- end }}
        {{- if .Values.longhornManager.nodeSelector }}
{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
        {{- end }}
      {{- end }}
58 chart/templates/preupgrade-job.yaml Normal file
@@ -0,0 +1,58 @@
{{- if .Values.helmPreUpgradeCheckerJob.enabled }}
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-delete-policy": hook-succeeded,before-hook-creation,hook-failed
  name: longhorn-pre-upgrade
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
spec:
  activeDeadlineSeconds: 900
  backoffLimit: 1
  template:
    metadata:
      name: longhorn-pre-upgrade
      labels: {{- include "longhorn.labels" . | nindent 8 }}
    spec:
      containers:
      - name: longhorn-pre-upgrade
        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command:
        - longhorn-manager
        - pre-upgrade
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      restartPolicy: OnFailure
      {{- if .Values.privateRegistry.registrySecret }}
      imagePullSecrets:
      - name: {{ .Values.privateRegistry.registrySecret }}
      {{- end }}
      {{- if .Values.longhornManager.priorityClass }}
      priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
      {{- end }}
      serviceAccountName: longhorn-service-account
      {{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
      tolerations:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
        {{- end }}
        {{- if .Values.longhornManager.tolerations }}
{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
        {{- end }}
      {{- end }}
      {{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
      nodeSelector:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
        {{- end }}
        {{- if .Values.longhornManager.nodeSelector }}
{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
        {{- end }}
      {{- end }}
{{- end }}
66 chart/templates/psp.yaml Normal file
@@ -0,0 +1,66 @@
{{- if .Values.enablePSP }}
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: longhorn-psp
  labels: {{- include "longhorn.labels" . | nindent 4 }}
spec:
  privileged: true
  allowPrivilegeEscalation: true
  requiredDropCapabilities:
  - NET_RAW
  allowedCapabilities:
  - SYS_ADMIN
  hostNetwork: false
  hostIPC: false
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - secret
  - projected
  - hostPath
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: longhorn-psp-role
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  namespace: {{ include "release_namespace" . }}
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  verbs:
  - use
  resourceNames:
  - longhorn-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: longhorn-psp-binding
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  namespace: {{ include "release_namespace" . }}
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: longhorn-psp-role
subjects:
- kind: ServiceAccount
  name: longhorn-service-account
  namespace: {{ include "release_namespace" . }}
- kind: ServiceAccount
  name: default
  namespace: {{ include "release_namespace" . }}
{{- end }}
13 chart/templates/registry-secret.yaml Normal file
@@ -0,0 +1,13 @@
{{- if .Values.privateRegistry.createSecret }}
{{- if .Values.privateRegistry.registrySecret }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .Values.privateRegistry.registrySecret }}
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
type: kubernetes.io/dockerconfigjson
data:
  .dockerconfigjson: {{ template "secret" . }}
{{- end }}
{{- end }}
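Tying the pieces together, a minimal sketch of the values that make this Secret render and get referenced by the workloads above (secret name, registry host, and credentials are illustrative):

# Hypothetical values.yaml fragment (illustrative only)
privateRegistry:
  createSecret: true                        # render the Secret above
  registrySecret: longhorn-registry-secret  # assumed secret name
  registryUrl: registry.example.com         # assumed registry host
  registryUser: admin                       # assumed user
  registryPasswd: changeme                  # assumed password

createSecret controls whether the Secret object is rendered at all, while registrySecret is the name the DaemonSet, Deployment, and Job templates plug into imagePullSecrets, so the two are set together when images are pulled from a private registry.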
chart/templates/serviceaccount.yaml (Normal file, 40 lines)
@@ -0,0 +1,40 @@
apiVersion: v1
kind: ServiceAccount
metadata:
  name: longhorn-service-account
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: longhorn-ui-service-account
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
  {{- if .Values.openshift.enabled }}
  {{- if .Values.openshift.ui.route }}
  {{- if not .Values.serviceAccount.annotations }}
  annotations:
  {{- end }}
    serviceaccounts.openshift.io/oauth-redirectreference.primary: '{"kind":"OAuthRedirectReference","apiVersion":"v1","reference":{"kind":"Route","name":"longhorn-ui"}}'
  {{- end }}
  {{- end }}
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: longhorn-support-bundle
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  {{- with .Values.serviceAccount.annotations }}
  annotations:
    {{- toYaml . | nindent 4 }}
  {{- end }}
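All three service accounts share `serviceAccount.annotations`. A minimal sketch of an override that annotates them; the annotation key/value is illustrative:

serviceAccount:
  annotations:
    example.com/owner: storage-team   # hypothetical annotation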
chart/templates/services.yaml (Normal file, 74 lines)
@@ -0,0 +1,74 @@
apiVersion: v1
kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-conversion-webhook
  name: longhorn-conversion-webhook
  namespace: {{ include "release_namespace" . }}
spec:
  type: ClusterIP
  sessionAffinity: ClientIP
  selector:
    app: longhorn-manager
  ports:
    - name: conversion-webhook
      port: 9501
      targetPort: conversion-wh
---
apiVersion: v1
kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-admission-webhook
  name: longhorn-admission-webhook
  namespace: {{ include "release_namespace" . }}
spec:
  type: ClusterIP
  sessionAffinity: ClientIP
  selector:
    app: longhorn-manager
  ports:
    - name: admission-webhook
      port: 9502
      targetPort: admission-wh
---
apiVersion: v1
kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
    app: longhorn-recovery-backend
  name: longhorn-recovery-backend
  namespace: {{ include "release_namespace" . }}
spec:
  type: ClusterIP
  sessionAffinity: ClientIP
  selector:
    app: longhorn-manager
  ports:
    - name: recovery-backend
      port: 9503
      targetPort: recov-backend
---
apiVersion: v1
kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  name: longhorn-engine-manager
  namespace: {{ include "release_namespace" . }}
spec:
  clusterIP: None
  selector:
    longhorn.io/component: instance-manager
    longhorn.io/instance-manager-type: engine
---
apiVersion: v1
kind: Service
metadata:
  labels: {{- include "longhorn.labels" . | nindent 4 }}
  name: longhorn-replica-manager
  namespace: {{ include "release_namespace" . }}
spec:
  clusterIP: None
  selector:
    longhorn.io/component: instance-manager
    longhorn.io/instance-manager-type: replica
chart/templates/storageclass.yaml (Normal file, 44 lines)
@@ -0,0 +1,44 @@
apiVersion: v1
kind: ConfigMap
metadata:
  name: longhorn-storageclass
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
data:
  storageclass.yaml: |
    kind: StorageClass
    apiVersion: storage.k8s.io/v1
    metadata:
      name: longhorn
      annotations:
        storageclass.kubernetes.io/is-default-class: {{ .Values.persistence.defaultClass | quote }}
    provisioner: driver.longhorn.io
    allowVolumeExpansion: true
    reclaimPolicy: "{{ .Values.persistence.reclaimPolicy }}"
    volumeBindingMode: Immediate
    parameters:
      numberOfReplicas: "{{ .Values.persistence.defaultClassReplicaCount }}"
      staleReplicaTimeout: "30"
      fromBackup: ""
      {{- if .Values.persistence.defaultFsType }}
      fsType: "{{ .Values.persistence.defaultFsType }}"
      {{- end }}
      {{- if .Values.persistence.defaultMkfsParams }}
      mkfsParams: "{{ .Values.persistence.defaultMkfsParams }}"
      {{- end }}
      {{- if .Values.persistence.migratable }}
      migratable: "{{ .Values.persistence.migratable }}"
      {{- end }}
      {{- if .Values.persistence.backingImage.enable }}
      backingImage: {{ .Values.persistence.backingImage.name }}
      backingImageDataSourceType: {{ .Values.persistence.backingImage.dataSourceType }}
      backingImageDataSourceParameters: {{ .Values.persistence.backingImage.dataSourceParameters }}
      backingImageChecksum: {{ .Values.persistence.backingImage.expectedChecksum }}
      {{- end }}
      {{- if .Values.persistence.recurringJobSelector.enable }}
      recurringJobSelector: '{{ .Values.persistence.recurringJobSelector.jobList }}'
      {{- end }}
      dataLocality: {{ .Values.persistence.defaultDataLocality | quote }}
      {{- if .Values.persistence.defaultNodeSelector.enable }}
      nodeSelector: "{{ .Values.persistence.defaultNodeSelector.selector }}"
      {{- end }}
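Once this ConfigMap is applied, Longhorn creates the `longhorn` StorageClass from it. A minimal sketch of a PVC consuming that class; the claim name and size are illustrative:

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: example-longhorn-pvc   # hypothetical claim
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn   # the class rendered above
  resources:
    requests:
      storage: 2Gi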
chart/templates/tls-secrets.yaml (Normal file, 16 lines)
@@ -0,0 +1,16 @@
{{- if .Values.ingress.enabled }}
{{- range .Values.ingress.secrets }}
apiVersion: v1
kind: Secret
metadata:
  name: {{ .name }}
  namespace: {{ include "release_namespace" $ }}
  labels: {{- include "longhorn.labels" $ | nindent 4 }}
    app: longhorn
type: kubernetes.io/tls
data:
  tls.crt: {{ .certificate | b64enc }}
  tls.key: {{ .key | b64enc }}
---
{{- end }}
{{- end }}
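A sketch of the `ingress.secrets` values this template iterates over; the PEM bodies below are placeholders, not real key material:

ingress:
  enabled: true
  tls: true
  tlsSecret: longhorn.local-tls
  secrets:
    - name: longhorn.local-tls
      certificate: |
        -----BEGIN CERTIFICATE-----
        ...placeholder, not real certificate data...
        -----END CERTIFICATE-----
      key: |
        -----BEGIN RSA PRIVATE KEY-----
        ...placeholder, not real key data...
        -----END RSA PRIVATE KEY-----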
chart/templates/uninstall-job.yaml (Normal file, 57 lines)
@@ -0,0 +1,57 @@
apiVersion: batch/v1
kind: Job
metadata:
  annotations:
    "helm.sh/hook": pre-delete
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
  name: longhorn-uninstall
  namespace: {{ include "release_namespace" . }}
  labels: {{- include "longhorn.labels" . | nindent 4 }}
spec:
  activeDeadlineSeconds: 900
  backoffLimit: 1
  template:
    metadata:
      name: longhorn-uninstall
      labels: {{- include "longhorn.labels" . | nindent 8 }}
    spec:
      containers:
      - name: longhorn-uninstall
        image: {{ template "registry_url" . }}{{ .Values.image.longhorn.manager.repository }}:{{ .Values.image.longhorn.manager.tag }}
        imagePullPolicy: {{ .Values.image.pullPolicy }}
        command:
        - longhorn-manager
        - uninstall
        - --force
        env:
        - name: LONGHORN_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
      restartPolicy: Never
      {{- if .Values.privateRegistry.registrySecret }}
      imagePullSecrets:
      - name: {{ .Values.privateRegistry.registrySecret }}
      {{- end }}
      {{- if .Values.longhornManager.priorityClass }}
      priorityClassName: {{ .Values.longhornManager.priorityClass | quote }}
      {{- end }}
      serviceAccountName: longhorn-service-account
      {{- if or .Values.longhornManager.tolerations .Values.global.cattle.windowsCluster.enabled }}
      tolerations:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.tolerations }}
{{ toYaml .Values.global.cattle.windowsCluster.tolerations | indent 6 }}
        {{- end }}
        {{- if .Values.longhornManager.tolerations }}
{{ toYaml .Values.longhornManager.tolerations | indent 6 }}
        {{- end }}
      {{- end }}
      {{- if or .Values.longhornManager.nodeSelector .Values.global.cattle.windowsCluster.enabled }}
      nodeSelector:
        {{- if and .Values.global.cattle.windowsCluster.enabled .Values.global.cattle.windowsCluster.nodeSelector }}
{{ toYaml .Values.global.cattle.windowsCluster.nodeSelector | indent 8 }}
        {{- end }}
        {{- if or .Values.longhornManager.nodeSelector }}
{{ toYaml .Values.longhornManager.nodeSelector | indent 8 }}
        {{- end }}
      {{- end }}
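The tolerations, priorityClass, and nodeSelector branches above are fed from `longhornManager` values. A sketch of values they would consume; the toleration key and class name are illustrative, not defaults:

longhornManager:
  priorityClass: system-cluster-critical            # hypothetical class
  tolerations:
    - key: "node-role.kubernetes.io/control-plane"  # hypothetical toleration
      operator: "Exists"
      effect: "NoSchedule"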
chart/templates/validate-psp-install.yaml (Normal file, 7 lines)
@@ -0,0 +1,7 @@
#{{- if gt (len (lookup "rbac.authorization.k8s.io/v1" "ClusterRole" "" "")) 0 -}}
#{{- if .Values.enablePSP }}
#{{- if not (.Capabilities.APIVersions.Has "policy/v1beta1/PodSecurityPolicy") }}
#{{- fail "The target cluster does not have the PodSecurityPolicy API resource. Please disable PSPs in this chart before proceeding." -}}
#{{- end }}
#{{- end }}
#{{- end }}
chart/values.yaml (Normal file, 480 lines)
@@ -0,0 +1,480 @@
# Default values for longhorn.
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
global:
  cattle:
    # -- System default registry
    systemDefaultRegistry: ""
    windowsCluster:
      # -- Enable this to allow Longhorn to run on the Rancher deployed Windows cluster
      enabled: false
      # -- Tolerate Linux nodes to run Longhorn user deployed components
      tolerations:
        - key: "cattle.io/os"
          value: "linux"
          effect: "NoSchedule"
          operator: "Equal"
      # -- Select Linux nodes to run Longhorn user deployed components
      nodeSelector:
        kubernetes.io/os: "linux"
      defaultSetting:
        # -- Toleration for Longhorn system managed components
        taintToleration: cattle.io/os=linux:NoSchedule
        # -- Node selector for Longhorn system managed components
        systemManagedComponentsNodeSelector: kubernetes.io/os:linux

networkPolicies:
  # -- Enable NetworkPolicies to limit access to the Longhorn pods
  enabled: false
  # -- Create the policy based on your distribution to allow access for the ingress. Options: `k3s`, `rke2`, `rke1`
  type: "k3s"

image:
  longhorn:
    engine:
      # -- Specify Longhorn engine image repository
      repository: longhornio/longhorn-engine
      # -- Specify Longhorn engine image tag
      tag: master-head
    manager:
      # -- Specify Longhorn manager image repository
      repository: longhornio/longhorn-manager
      # -- Specify Longhorn manager image tag
      tag: master-head
    ui:
      # -- Specify Longhorn ui image repository
      repository: longhornio/longhorn-ui
      # -- Specify Longhorn ui image tag
      tag: master-head
    instanceManager:
      # -- Specify Longhorn instance manager image repository
      repository: longhornio/longhorn-instance-manager
      # -- Specify Longhorn instance manager image tag
      tag: master-head
    shareManager:
      # -- Specify Longhorn share manager image repository
      repository: longhornio/longhorn-share-manager
      # -- Specify Longhorn share manager image tag
      tag: master-head
    backingImageManager:
      # -- Specify Longhorn backing image manager image repository
      repository: longhornio/backing-image-manager
      # -- Specify Longhorn backing image manager image tag
      tag: master-head
    supportBundleKit:
      # -- Specify Longhorn support bundle manager image repository
      repository: longhornio/support-bundle-kit
      # -- Specify Longhorn support bundle manager image tag
      tag: v0.0.27
  csi:
    attacher:
      # -- Specify CSI attacher image repository. Leave blank to autodetect
      repository: longhornio/csi-attacher
      # -- Specify CSI attacher image tag. Leave blank to autodetect
      tag: v4.2.0
    provisioner:
      # -- Specify CSI provisioner image repository. Leave blank to autodetect
      repository: longhornio/csi-provisioner
      # -- Specify CSI provisioner image tag. Leave blank to autodetect
      tag: v3.4.1
    nodeDriverRegistrar:
      # -- Specify CSI node driver registrar image repository. Leave blank to autodetect
      repository: longhornio/csi-node-driver-registrar
      # -- Specify CSI node driver registrar image tag. Leave blank to autodetect
      tag: v2.7.0
    resizer:
      # -- Specify CSI driver resizer image repository. Leave blank to autodetect
      repository: longhornio/csi-resizer
      # -- Specify CSI driver resizer image tag. Leave blank to autodetect
      tag: v1.7.0
    snapshotter:
      # -- Specify CSI driver snapshotter image repository. Leave blank to autodetect
      repository: longhornio/csi-snapshotter
      # -- Specify CSI driver snapshotter image tag. Leave blank to autodetect.
      tag: v6.2.1
    livenessProbe:
      # -- Specify CSI liveness probe image repository. Leave blank to autodetect
      repository: longhornio/livenessprobe
      # -- Specify CSI liveness probe image tag. Leave blank to autodetect
      tag: v2.9.0
  openshift:
    oauthProxy:
      # -- For OpenShift users. Specify the oauth proxy image repository
      repository: quay.io/openshift/origin-oauth-proxy
      # -- For OpenShift users. Specify the oauth proxy image tag. Note: use your OCP/OKD 4.X version; the current stable is 4.14
      tag: 4.14
  # -- Image pull policy which applies to all user deployed Longhorn components, e.g. Longhorn manager, Longhorn driver, Longhorn UI
  pullPolicy: IfNotPresent

service:
  ui:
    # -- Define Longhorn UI service type. Options: `ClusterIP`, `NodePort`, `LoadBalancer`, `Rancher-Proxy`
    type: ClusterIP
    # -- NodePort port number (to set explicitly, choose port between 30000-32767)
    nodePort: null
  manager:
    # -- Define Longhorn manager service type.
    type: ClusterIP
    # -- NodePort port number (to set explicitly, choose port between 30000-32767)
    nodePort: ""

persistence:
  # -- Set Longhorn StorageClass as default
  defaultClass: true
  # -- Set filesystem type for Longhorn StorageClass
  defaultFsType: ext4
  # -- Set mkfs options for Longhorn StorageClass
  defaultMkfsParams: ""
  # -- Set replica count for Longhorn StorageClass
  defaultClassReplicaCount: 3
  # -- Set data locality for Longhorn StorageClass. Options: `disabled`, `best-effort`
  defaultDataLocality: disabled
  # -- Define reclaim policy. Options: `Retain`, `Delete`
  reclaimPolicy: Delete
  # -- Set volume migratable for Longhorn StorageClass
  migratable: false
  recurringJobSelector:
    # -- Enable recurring job selector for Longhorn StorageClass
    enable: false
    # -- Recurring job selector list for Longhorn StorageClass. Please be careful with the quoting of the input. e.g., `[{"name":"backup", "isGroup":true}]`
    jobList: []
  backingImage:
    # -- Set backing image for Longhorn StorageClass
    enable: false
    # -- Specify a backing image that will be used by Longhorn volumes in Longhorn StorageClass. If it does not exist, the backing image data source type and data source parameters should be specified so that Longhorn can create the backing image before using it
    name: ~
    # -- Specify the data source type for the backing image used in Longhorn StorageClass.
    # If the backing image does not exist, Longhorn uses this field to create a backing image. Otherwise, Longhorn uses it to verify the selected backing image.
    dataSourceType: ~
    # -- Specify the data source parameters for the backing image used in Longhorn StorageClass. This option accepts a json string of a map. e.g., `'{\"url\":\"https://backing-image-example.s3-region.amazonaws.com/test-backing-image\"}'`.
    dataSourceParameters: ~
    # -- Specify the expected SHA512 checksum of the selected backing image in Longhorn StorageClass
    expectedChecksum: ~
  defaultNodeSelector:
    # -- Enable Node selector for Longhorn StorageClass
    enable: false
    # -- This selector allows only nodes carrying these tags to be used for the volume. e.g. `"storage,fast"`
    selector: ""
  # -- Allow automatically removing snapshots during filesystem trim for Longhorn StorageClass. Options: `ignored`, `enabled`, `disabled`
  removeSnapshotsDuringFilesystemTrim: ignored

helmPreUpgradeCheckerJob:
  enabled: true

csi:
  # -- Specify kubelet root-dir. Leave blank to autodetect
  kubeletRootDir: ~
  # -- Specify replica count of CSI Attacher. Leave blank to use default count: 3
  attacherReplicaCount: ~
  # -- Specify replica count of CSI Provisioner. Leave blank to use default count: 3
  provisionerReplicaCount: ~
  # -- Specify replica count of CSI Resizer. Leave blank to use default count: 3
  resizerReplicaCount: ~
  # -- Specify replica count of CSI Snapshotter. Leave blank to use default count: 3
  snapshotterReplicaCount: ~

defaultSettings:
  # -- The endpoint used to access the backupstore. Available: NFS, CIFS, AWS, GCP, AZURE.
  backupTarget: ~
  # -- The name of the Kubernetes secret associated with the backup target.
  backupTargetCredentialSecret: ~
  # -- If this setting is enabled, Longhorn automatically attaches the volume and takes a snapshot/backup
  # when it is time for a recurring snapshot/backup.
  allowRecurringJobWhileVolumeDetached: ~
  # -- Create default Disk automatically only on Nodes with the label "node.longhorn.io/create-default-disk=true" if no other disks exist.
  # If disabled, the default disk will be created on all new nodes when each node is first added.
  createDefaultDiskLabeledNodes: ~
  # -- Default path to use for storing data on a host. By default "/var/lib/longhorn/"
  defaultDataPath: ~
  # -- Longhorn volume has data locality if there is a local replica of the volume on the same node as the pod which is using the volume.
  defaultDataLocality: ~
  # -- Allow scheduling on nodes with existing healthy replicas of the same volume. By default false.
  replicaSoftAntiAffinity: ~
  # -- Enable this setting to automatically rebalance replicas when a new available node is discovered.
  replicaAutoBalance: ~
  # -- The over-provisioning percentage defines how much storage can be allocated relative to the hard drive's capacity. By default 200.
  storageOverProvisioningPercentage: ~
  # -- If the minimum available disk capacity exceeds the actual percentage of available disk capacity,
  # the disk becomes unschedulable until more space is freed up. By default 25.
  storageMinimalAvailablePercentage: ~
  # -- The reserved percentage specifies the percentage of disk space that will not be allocated to the default disk on each new Longhorn node.
  storageReservedPercentageForDefaultDisk: ~
  # -- Upgrade Checker will check for a new Longhorn version periodically.
  # When there is a new version available, a notification will appear in the UI. By default true.
  upgradeChecker: ~
  # -- The default number of replicas when a volume is created from the Longhorn UI.
  # For Kubernetes configuration, update the `numberOfReplicas` in the StorageClass. By default 3.
  defaultReplicaCount: ~
  # -- The 'storageClassName' is given to PVs and PVCs that are created for an existing Longhorn volume. The StorageClass name can also be used as a label,
  # so it is possible to use a Longhorn StorageClass to bind a workload to an existing PV without creating a Kubernetes StorageClass object.
  # By default 'longhorn-static'.
  defaultLonghornStaticStorageClass: ~
  # -- In seconds. The backupstore poll interval determines how often Longhorn checks the backupstore for new backups.
  # Set to 0 to disable the polling. By default 300.
  backupstorePollInterval: ~
  # -- In minutes. This setting determines how long Longhorn keeps a failed backup resource. Set to 0 to disable the auto-deletion.
  failedBackupTTL: ~
  # -- Restore recurring jobs from the backup volume on the backup target, and create recurring jobs if they do not exist, during a backup restoration.
  restoreVolumeRecurringJobs: ~
  # -- This setting specifies how many successful backup or snapshot job histories should be retained. History will not be retained if the value is 0.
  recurringSuccessfulJobsHistoryLimit: ~
  # -- This setting specifies how many failed backup or snapshot job histories should be retained. History will not be retained if the value is 0.
  recurringFailedJobsHistoryLimit: ~
  # -- This setting specifies how many failed support bundles can exist in the cluster.
  # Set this value to **0** to have Longhorn automatically purge all failed support bundles.
  supportBundleFailedHistoryLimit: ~
  # -- taintToleration for longhorn system components
  taintToleration: ~
  # -- nodeSelector for longhorn system components
  systemManagedComponentsNodeSelector: ~
  # -- priorityClass for longhorn system components
  priorityClass: ~
  # -- If enabled, volumes will be automatically salvaged when all the replicas become faulty, e.g. due to network disconnection.
  # Longhorn will try to figure out which replica(s) are usable, then use them for the volume. By default true.
  autoSalvage: ~
  # -- If enabled, Longhorn will automatically delete the workload pod that is managed by a controller (e.g. deployment, statefulset, daemonset, etc...)
  # when the Longhorn volume is detached unexpectedly (e.g. during Kubernetes upgrade, Docker reboot, or network disconnect).
  # By deleting the pod, its controller restarts the pod and Kubernetes handles volume reattachment and remount.
  autoDeletePodWhenVolumeDetachedUnexpectedly: ~
  # -- Disable the Longhorn manager from scheduling replicas on Kubernetes cordoned nodes. By default true.
  disableSchedulingOnCordonedNode: ~
  # -- Allow scheduling new Replicas of a Volume to Nodes in the same Zone as existing healthy Replicas.
  # Nodes that don't belong to any Zone are treated as being in the same Zone.
  # Notice that Longhorn relies on the label `topology.kubernetes.io/zone=<Zone name of the node>` in the Kubernetes node object to identify the zone.
  # By default true.
  replicaZoneSoftAntiAffinity: ~
  # -- Allow scheduling on disks with existing healthy replicas of the same volume. By default true.
  replicaDiskSoftAntiAffinity: ~
  # -- Defines the Longhorn action when a Volume is stuck with a StatefulSet/Deployment Pod on a node that is down.
  nodeDownPodDeletionPolicy: ~
  # -- Define the policy to use when a node with the last healthy replica of a volume is drained.
  nodeDrainPolicy: ~
  # -- In seconds. The interval determines how long Longhorn waits, at minimum, before reusing the existing data on a failed replica
  # rather than directly creating a new replica for a degraded volume.
  replicaReplenishmentWaitInterval: ~
  # -- This setting controls how many replicas on a node can be rebuilt simultaneously.
  concurrentReplicaRebuildPerNodeLimit: ~
  # -- This setting controls how many volumes on a node can restore a backup concurrently. Set the value to **0** to disable backup restore.
  concurrentVolumeBackupRestorePerNodeLimit: ~
  # -- This setting is only for volumes created by the UI.
  # By default, this is false, meaning there will be a revision counter file to track every write to the volume.
  # During salvage recovery, Longhorn will pick the replica with the largest revision counter as the candidate to recover the whole volume.
  # If the revision counter is disabled, Longhorn will not track every write to the volume.
  # During salvage recovery, Longhorn will use the 'volume-head-xxx.img' file's last modification time and
  # file size to pick the replica candidate to recover the whole volume.
  disableRevisionCounter: ~
  # -- This setting defines the Image Pull Policy of Longhorn system managed pods,
  # e.g. instance manager, engine image, CSI driver, etc.
  # The new Image Pull Policy will only apply after the system managed pods restart.
  systemManagedPodsImagePullPolicy: ~
  # -- This setting allows users to create and attach a volume that doesn't have all the replicas scheduled at the time of creation.
  allowVolumeCreationWithDegradedAvailability: ~
  # -- This setting enables Longhorn to automatically clean up the system-generated snapshot after a replica rebuild is done.
  autoCleanupSystemGeneratedSnapshot: ~
  # -- This setting controls how Longhorn automatically upgrades volumes' engines to the new default engine image after upgrading Longhorn manager.
  # The value of this setting specifies the maximum number of engines per node that are allowed to upgrade to the default engine image at the same time.
  # If the value is 0, Longhorn will not automatically upgrade volumes' engines to the default version.
  concurrentAutomaticEngineUpgradePerNodeLimit: ~
  # -- This interval in minutes determines how long Longhorn will wait before cleaning up the backing image file when there is no replica in the disk using it.
  backingImageCleanupWaitInterval: ~
  # -- This interval in seconds determines how long Longhorn will wait before re-downloading the backing image file
  # when all disk files of this backing image become failed or unknown.
  backingImageRecoveryWaitInterval: ~
  # -- This integer value indicates what percentage of the total allocatable CPU on each node will be reserved for each instance manager Pod.
  # You can leave it with the default value, which is 12%.
  guaranteedInstanceManagerCPU: ~
  # -- Enabling this setting will notify Longhorn that the cluster is using the Kubernetes Cluster Autoscaler.
  kubernetesClusterAutoscalerEnabled: ~
  # -- This setting allows Longhorn to automatically delete orphan resources and their corresponding orphaned data, such as stale replicas.
  # Orphan resources on down or unknown nodes will not be cleaned up automatically.
  orphanAutoDeletion: ~
  # -- Longhorn uses the storage network for in-cluster data traffic. Leave this blank to use the Kubernetes cluster network.
  storageNetwork: ~
  # -- This flag is designed to prevent Longhorn from being accidentally uninstalled, which would lead to data loss.
  deletingConfirmationFlag: ~
  # -- In seconds. The setting specifies the timeout between the engine and replica(s); the value should be between 8 and 30 seconds.
  # The default value is 8 seconds.
  engineReplicaTimeout: ~
  # -- This setting allows users to enable or disable snapshot hashing and data integrity checking.
  snapshotDataIntegrity: ~
  # -- Hashing snapshot disk files impacts the performance of the system.
  # The immediate snapshot hashing and checking can be disabled to minimize the impact after creating a snapshot.
  snapshotDataIntegrityImmediateCheckAfterSnapshotCreation: ~
  # -- Unix-cron string format. The setting specifies when Longhorn checks the data integrity of snapshot disk files.
  snapshotDataIntegrityCronjob: ~
  # -- This setting allows the Longhorn filesystem trim feature to automatically mark the latest snapshot and
  # its ancestors as removed, stopping at the snapshot containing multiple children.
  removeSnapshotsDuringFilesystemTrim: ~
  # -- This feature supports fast replica rebuilding.
  # It relies on the checksum of snapshot disk files, so setting snapshot-data-integrity to **enable** or **fast-check** is a prerequisite.
  fastReplicaRebuildEnabled: ~
  # -- In seconds. The setting specifies the HTTP client timeout to the file sync server.
  replicaFileSyncHttpClientTimeout: ~
  # -- The log level (Panic, Fatal, Error, Warn, Info, Debug, Trace) used in the longhorn manager. Defaults to Info.
  logLevel: ~
  # -- This setting allows users to specify the backup compression method.
  backupCompressionMethod: ~
  # -- This setting controls how many worker threads run concurrently per backup.
  backupConcurrentLimit: ~
  # -- This setting controls how many worker threads run concurrently per restore.
  restoreConcurrentLimit: ~
  # -- This allows users to activate the v2 data engine based on SPDK.
  # Currently, it is in the preview phase and should not be utilized in a production environment.
  v2DataEngine: ~
  # -- This setting allows users to enable offline replica rebuilding for volumes using the v2 data engine.
  offlineReplicaRebuilding: ~
  # -- Allow scheduling volumes with an empty node selector to any node
  allowEmptyNodeSelectorVolume: ~
  # -- Allow scheduling volumes with an empty disk selector to any disk
  allowEmptyDiskSelectorVolume: ~

privateRegistry:
  # -- Set `true` to create a new private registry secret
  createSecret: ~
  # -- URL of private registry. Leave blank to apply the system default registry
  registryUrl: ~
  # -- User used to authenticate to the private registry
  registryUser: ~
  # -- Password used to authenticate to the private registry
  registryPasswd: ~
  # -- If createSecret is true, create a Kubernetes secret with this name; otherwise use the existing secret of this name. Use it to pull images from your private registry
  registrySecret: ~

longhornManager:
  log:
    # -- Options: `plain`, `json`
    format: plain
  # -- Priority class for longhorn manager
  priorityClass: ~
  # -- Tolerate nodes to run Longhorn manager
  tolerations: []
  ## If you want to set tolerations for Longhorn Manager DaemonSet, delete the `[]` in the line above
  ## and uncomment this example block
  # - key: "key"
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"
  # -- Select nodes to run Longhorn manager
  nodeSelector: {}
  ## If you want to set node selector for Longhorn Manager DaemonSet, delete the `{}` in the line above
  ## and uncomment this example block
  #   label-key1: "label-value1"
  #   label-key2: "label-value2"
  # -- Annotations used in the Longhorn manager service
  serviceAnnotations: {}
  ## If you want to set annotations for the Longhorn Manager service, delete the `{}` in the line above
  ## and uncomment this example block
  #   annotation-key1: "annotation-value1"
  #   annotation-key2: "annotation-value2"

longhornDriver:
  # -- Priority class for longhorn driver
  priorityClass: ~
  # -- Tolerate nodes to run Longhorn driver
  tolerations: []
  ## If you want to set tolerations for Longhorn Driver Deployer Deployment, delete the `[]` in the line above
  ## and uncomment this example block
  # - key: "key"
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"
  # -- Select nodes to run Longhorn driver
  nodeSelector: {}
  ## If you want to set node selector for Longhorn Driver Deployer Deployment, delete the `{}` in the line above
  ## and uncomment this example block
  #   label-key1: "label-value1"
  #   label-key2: "label-value2"

longhornUI:
  # -- Replica count for longhorn ui
  replicas: 2
  # -- Priority class for longhorn ui
  priorityClass: ~
  # -- Tolerate nodes to run Longhorn UI
  tolerations: []
  ## If you want to set tolerations for Longhorn UI Deployment, delete the `[]` in the line above
  ## and uncomment this example block
  # - key: "key"
  #   operator: "Equal"
  #   value: "value"
  #   effect: "NoSchedule"
  # -- Select nodes to run Longhorn UI
  nodeSelector: {}
  ## If you want to set node selector for Longhorn UI Deployment, delete the `{}` in the line above
  ## and uncomment this example block
  #   label-key1: "label-value1"
  #   label-key2: "label-value2"

ingress:
  # -- Set to true to enable ingress record generation
  enabled: false

  # -- Add ingressClassName to the Ingress
  # Can replace the kubernetes.io/ingress.class annotation on v1.18+
  ingressClassName: ~

  # -- Layer 7 Load Balancer hostname
  host: sslip.io

  # -- Set this to true in order to enable TLS on the ingress record
  tls: false

  # -- Enable this so that the backend service is connected at port 443
  secureBackends: false

  # -- If TLS is set to true, you must declare which secret stores the key/certificate for TLS
  tlsSecret: longhorn.local-tls

  # -- If ingress is enabled you can set the default ingress path;
  # you can then access the UI by using the following full path {{host}}+{{path}}
  path: /

  ## If you're using kube-lego, you will want to add:
  ## kubernetes.io/tls-acme: true
  ##
  ## For a full list of possible ingress annotations, please see
  ## ref: https://github.com/kubernetes/ingress-nginx/blob/master/docs/annotations.md
  ##
  ## If tls is set to true, annotation ingress.kubernetes.io/secure-backends: "true" will automatically be set
  # -- Ingress annotations done as key:value pairs
  annotations:
  # kubernetes.io/ingress.class: nginx
  # kubernetes.io/tls-acme: true

  # -- If you're providing your own certificates, please use this to add the certificates as secrets
  secrets:
  ## If you're providing your own certificates, please use this to add the certificates as secrets
  ## key and certificate should start with -----BEGIN CERTIFICATE----- or
  ## -----BEGIN RSA PRIVATE KEY-----
  ##
  ## name should line up with a tlsSecret set further up
  ## If you're using kube-lego, this is unneeded, as it will create the secret for you if it is not set
  ##
  ## It is also possible to create and manage the certificates outside of this helm chart
  ## Please see README.md for more information
  # - name: longhorn.local-tls
  #   key:
  #   certificate:

# -- For Kubernetes < v1.25, if your cluster enables the Pod Security Policy admission controller,
# set this to `true` to ship longhorn-psp, which allows privileged Longhorn pods to start
enablePSP: false

# -- Annotations to add to the Longhorn Manager DaemonSet Pods. Optional.
annotations: {}

serviceAccount:
  # -- Annotations to add to the service account
  annotations: {}

## openshift settings
openshift:
  # -- Enable when using openshift
  enabled: false
  ui:
    # -- UI route in openshift environment
    route: "longhorn-ui"
    # -- UI port in openshift environment
    port: 443
    # -- UI proxy in openshift environment
    proxy: 8443
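A compact override sketch combining a few of the knobs defined above; the values shown are examples, not recommendations:

# my-values.yaml (hypothetical override file)
persistence:
  defaultClassReplicaCount: 2   # fewer replicas than the default 3
  reclaimPolicy: Retain         # keep volumes after PVC deletion
defaultSettings:
  backupstorePollInterval: 60   # poll the backupstore every minute
  logLevel: Debug
longhornUI:
  replicas: 1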
deploy/backupstores/azurite-backupstore.yaml (Normal file, 48 lines)
@@ -0,0 +1,48 @@
# same secret for longhorn-system namespace
apiVersion: v1
kind: Secret
metadata:
  name: azblob-secret
  namespace: longhorn-system
type: Opaque
data:
  AZBLOB_ACCOUNT_NAME: ZGV2c3RvcmVhY2NvdW50MQ==
  AZBLOB_ACCOUNT_KEY: RWJ5OHZkTTAyeE5PY3FGbHFVd0pQTGxtRXRsQ0RYSjFPVXpGVDUwdVNSWjZJRnN1RnEyVVZFckN6NEk2dHEvSzFTWkZQVE90ci9LQkhCZWtzb0dNR3c9PQ==
  AZBLOB_ENDPOINT: aHR0cDovL2F6YmxvYi1zZXJ2aWNlLmRlZmF1bHQ6MTAwMDAv
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: longhorn-test-azblob
  namespace: default
  labels:
    app: longhorn-test-azblob
spec:
  replicas: 1
  selector:
    matchLabels:
      app: longhorn-test-azblob
  template:
    metadata:
      labels:
        app: longhorn-test-azblob
    spec:
      containers:
      - name: azurite
        image: mcr.microsoft.com/azure-storage/azurite:3.23.0
        ports:
        - containerPort: 10000
---
apiVersion: v1
kind: Service
metadata:
  name: azblob-service
  namespace: default
spec:
  selector:
    app: longhorn-test-azblob
  ports:
    - port: 10000
      targetPort: 10000
      protocol: TCP
  sessionAffinity: ClientIP
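A sketch of Longhorn settings that would point at this Azurite store. The `azblob://` URL form and container name are assumptions here; the in-cluster endpoint (http://azblob-service.default:10000/) is carried by `azblob-secret` via its AZBLOB_ENDPOINT field:

defaultSettings:
  backupTarget: azblob://backupbucket@core.windows.net/   # URL format and container assumed
  backupTargetCredentialSecret: azblob-secret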
deploy/backupstores/cifs-backupstore.yaml (Normal file, 87 lines)
@@ -0,0 +1,87 @@
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: longhorn-system
type: Opaque
data:
  CIFS_USERNAME: bG9uZ2hvcm4tY2lmcy11c2VybmFtZQ== # longhorn-cifs-username
  CIFS_PASSWORD: bG9uZ2hvcm4tY2lmcy1wYXNzd29yZA== # longhorn-cifs-password
---
apiVersion: v1
kind: Secret
metadata:
  name: cifs-secret
  namespace: default
type: Opaque
data:
  CIFS_USERNAME: bG9uZ2hvcm4tY2lmcy11c2VybmFtZQ== # longhorn-cifs-username
  CIFS_PASSWORD: bG9uZ2hvcm4tY2lmcy1wYXNzd29yZA== # longhorn-cifs-password
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: longhorn-test-cifs
  namespace: default
  labels:
    app: longhorn-test-cifs
spec:
  replicas: 1
  selector:
    matchLabels:
      app: longhorn-test-cifs
  template:
    metadata:
      labels:
        app: longhorn-test-cifs
    spec:
      volumes:
      - name: cifs-volume
        emptyDir: {}
      containers:
      - name: longhorn-test-cifs-container
        image: derekbit/samba:latest
        ports:
        - containerPort: 139
        - containerPort: 445
        imagePullPolicy: Always
        env:
        - name: EXPORT_PATH
          value: /opt/backupstore
        - name: CIFS_DISK_IMAGE_SIZE_MB
          value: "4096"
        - name: CIFS_USERNAME
          valueFrom:
            secretKeyRef:
              name: cifs-secret
              key: CIFS_USERNAME
        - name: CIFS_PASSWORD
          valueFrom:
            secretKeyRef:
              name: cifs-secret
              key: CIFS_PASSWORD
        securityContext:
          privileged: true
          capabilities:
            add: ["SYS_ADMIN", "DAC_READ_SEARCH"]
        volumeMounts:
        - name: cifs-volume
          mountPath: "/opt/backupstore"
        args: ["-u", "$(CIFS_USERNAME);$(CIFS_PASSWORD)", "-s", "backupstore;$(EXPORT_PATH);yes;no;no;all;none"]
---
kind: Service
apiVersion: v1
metadata:
  name: longhorn-test-cifs-svc
  namespace: default
spec:
  selector:
    app: longhorn-test-cifs
  clusterIP: None
  ports:
    - name: netbios-port
      port: 139
      targetPort: 139
    - name: microsoft-port
      port: 445
      targetPort: 445
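A sketch of settings targeting the `backupstore` share exported by the samba container above; the exact `cifs://` path form is an assumption:

defaultSettings:
  backupTarget: cifs://longhorn-test-cifs-svc.default/backupstore   # path format assumed
  backupTargetCredentialSecret: cifs-secret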
deploy/backupstores/minio-backupstore.yaml (Normal file, 91 lines)
@@ -0,0 +1,91 @@
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
  namespace: default
type: Opaque
data:
  AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
  AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
  AWS_ENDPOINTS: aHR0cHM6Ly9taW5pby1zZXJ2aWNlLmRlZmF1bHQ6OTAwMA== # https://minio-service.default:9000
  AWS_CERT: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMRENDQWhTZ0F3SUJBZ0lSQU1kbzQycGhUZXlrMTcvYkxyWjVZRHN3RFFZSktvWklodmNOQVFFTEJRQXcKR2pFWU1CWUdBMVVFQ2hNUFRHOXVaMmh2Y200Z0xTQlVaWE4wTUNBWERUSXdNRFF5TnpJek1EQXhNVm9ZRHpJeApNakF3TkRBek1qTXdNREV4V2pBYU1SZ3dGZ1lEVlFRS0V3OU1iMjVuYUc5eWJpQXRJRlJsYzNRd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEWHpVdXJnUFpEZ3pUM0RZdWFlYmdld3Fvd2RlQUQKODRWWWF6ZlN1USs3K21Oa2lpUVBvelVVMmZvUWFGL1BxekJiUW1lZ29hT3l5NVhqM1VFeG1GcmV0eDBaRjVOVgpKTi85ZWFJNWRXRk9teHhpMElPUGI2T0RpbE1qcXVEbUVPSXljdjRTaCsvSWo5Zk1nS0tXUDdJZGxDNUJPeThkCncwOVdkckxxaE9WY3BKamNxYjN6K3hISHd5Q05YeGhoRm9tb2xQVnpJbnlUUEJTZkRuSDBuS0lHUXl2bGhCMGsKVHBHSzYxc2prZnFTK3hpNTlJeHVrbHZIRXNQcjFXblRzYU9oaVh6N3lQSlorcTNBMWZoVzBVa1JaRFlnWnNFbQovZ05KM3JwOFhZdURna2kzZ0UrOElXQWRBWHExeWhqRDdSSkI4VFNJYTV0SGpKUUtqZ0NlSG5HekFnTUJBQUdqCmF6QnBNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFQQmdOVkhSTUIKQWY4RUJUQURBUUgvTURFR0ExVWRFUVFxTUNpQ0NXeHZZMkZzYUc5emRJSVZiV2x1YVc4dGMyVnlkbWxqWlM1awpaV1poZFd4MGh3Ui9BQUFCTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDbUZMMzlNSHVZMzFhMTFEajRwMjVjCnFQRUM0RHZJUWozTk9kU0dWMmQrZjZzZ3pGejFXTDhWcnF2QjFCMVM2cjRKYjJQRXVJQkQ4NFlwVXJIT1JNU2MKd3ViTEppSEtEa0Jmb2U5QWI1cC9VakpyS0tuajM0RGx2c1cvR3AwWTZYc1BWaVdpVWorb1JLbUdWSTI0Q0JIdgpnK0JtVzNDeU5RR1RLajk0eE02czNBV2xHRW95YXFXUGU1eHllVWUzZjFBWkY5N3RDaklKUmVWbENtaENGK0JtCmFUY1RSUWN3cVdvQ3AwYmJZcHlERFlwUmxxOEdQbElFOW8yWjZBc05mTHJVcGFtZ3FYMmtYa2gxa3lzSlEralAKelFadHJSMG1tdHVyM0RuRW0yYmk0TktIQVFIcFc5TXUxNkdRakUxTmJYcVF0VEI4OGpLNzZjdEg5MzRDYWw2VgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t
  AWS_CERT_KEY: LS0tLS1CRUdJTiBQUklWQVRFIEtFWS0tLS0tCk1JSUV2UUlCQURBTkJna3Foa2lHOXcwQkFRRUZBQVNDQktjd2dnU2pBZ0VBQW9JQkFRRFh6VXVyZ1BaRGd6VDMKRFl1YWViZ2V3cW93ZGVBRDg0VllhemZTdVErNyttTmtpaVFQb3pVVTJmb1FhRi9QcXpCYlFtZWdvYU95eTVYagozVUV4bUZyZXR4MFpGNU5WSk4vOWVhSTVkV0ZPbXh4aTBJT1BiNk9EaWxNanF1RG1FT0l5Y3Y0U2grL0lqOWZNCmdLS1dQN0lkbEM1Qk95OGR3MDlXZHJMcWhPVmNwSmpjcWIzeit4SEh3eUNOWHhoaEZvbW9sUFZ6SW55VFBCU2YKRG5IMG5LSUdReXZsaEIwa1RwR0s2MXNqa2ZxUyt4aTU5SXh1a2x2SEVzUHIxV25Uc2FPaGlYejd5UEpaK3EzQQoxZmhXMFVrUlpEWWdac0VtL2dOSjNycDhYWXVEZ2tpM2dFKzhJV0FkQVhxMXloakQ3UkpCOFRTSWE1dEhqSlFLCmpnQ2VIbkd6QWdNQkFBRUNnZ0VBZlVyQ1hrYTN0Q2JmZjNpcnp2cFFmZnVEbURNMzV0TmlYaDJTQVpSVW9FMFYKbSsvZ1UvdnIrN2s2eUgvdzhMOXhpZXFhQTljVkZkL0JuTlIrMzI2WGc2dEpCNko2ZGZxODJZdmZOZ0VDaUFMaQpqalNGemFlQmhnT3ZsWXZHbTR5OTU1Q0FGdjQ1cDNac1VsMTFDRXJlL1BGbGtaWHRHeGlrWFl6NC85UTgzblhZCnM2eDdPYTgyUjdwT2lraWh3Q0FvVTU3Rjc4ZWFKOG1xTmkwRlF2bHlxSk9QMTFCbVp4dm54ZU11S2poQjlPTnAKTFNwMWpzZXk5bDZNR2pVbjBGTG53RHZkVWRiK0ZlUEkxTjdWYUNBd3hJK3JHa3JTWkhnekhWWE92VUpON2t2QQpqNUZPNW9uNGgvK3hXbkYzM3lxZ0VvWWZ0MFFJL2pXS2NOV1d1a2pCd1FLQmdRRGVFNlJGRUpsT2Q1aVcxeW1qCm45RENnczVFbXFtRXN3WU95bkN3U2RhK1lNNnZVYmlac1k4WW9wMVRmVWN4cUh2NkFQWGpVd2NBUG1QVE9KRW8KMlJtS0xTYkhsTnc4bFNOMWJsWDBEL3Mzamc1R3VlVW9nbW5TVnhMa0h1OFhKR0o3VzFReEUzZG9IUHRrcTNpagpoa09QTnJpZFM0UmxqNTJwYkhscjUvQzRjUUtCZ1FENHhFYmpuck1heFV2b0xxVTRvT2xiOVc5UytSUllTc0cxCmxJUmgzNzZTV0ZuTTlSdGoyMTI0M1hkaE4zUFBtSTNNeiswYjdyMnZSUi9LMS9Cc1JUQnlrTi9kbkVuNVUxQkEKYm90cGZIS1Jvc1FUR1hIQkEvM0JrNC9qOWplU3RmVXgzZ2x3eUI0L2hORy9KM1ZVV2FXeURTRm5qZFEvcGJsRwp6VWlsSVBmK1l3S0JnUUNwMkdYYmVJMTN5TnBJQ3psS2JqRlFncEJWUWVDQ29CVHkvUHRncUtoM3BEeVBNN1kyCnZla09VMWgyQVN1UkhDWHRtQXgzRndvVXNxTFFhY1FEZEw4bXdjK1Y5eERWdU02TXdwMDBjNENVQmE1L2d5OXoKWXdLaUgzeFFRaVJrRTZ6S1laZ3JqSkxYYXNzT1BHS2cxbEFYV1NlckRaV3R3MEEyMHNLdXQ0NlEwUUtCZ0hGZQpxZHZVR0ZXcjhvTDJ0dzlPcmVyZHVJVTh4RnZVZmVFdHRRTVJ2N3pjRE5qT0gxUnJ4Wk9aUW0ySW92dkp6MTIyCnFKMWhPUXJtV3EzTHFXTCtTU3o4L3pqMG4vWERWVUIzNElzTFR2ODJDVnVXN2ZPRHlTSnVDRlpnZ0VVWkxZd3oKWDJRSm4xZGRSV1Z6S3hKczVJbDNXSERqL3dXZWxnaEJSOGtSZEZOM0FvR0FJNldDdjJQQ1lUS1ZZNjAwOFYwbgpyTDQ3YTlPanZ0Yy81S2ZxSjFpMkpKTUgyQi9jbU1WRSs4M2dpODFIU1FqMWErNnBjektmQVppZWcwRk9nL015ClB6VlZRYmpKTnY0QzM5KzdxSDg1WGdZTXZhcTJ0aDFEZWUvQ3NsMlM4QlV0cW5mc0VuMUYwcWhlWUJZb2RibHAKV3RUaE5oRi9oRVhzbkJROURyWkJKT1U9Ci0tLS0tRU5EIFBSSVZBVEUgS0VZLS0tLS0K
---
# same secret for longhorn-system namespace
apiVersion: v1
kind: Secret
metadata:
  name: minio-secret
  namespace: longhorn-system
type: Opaque
data:
  AWS_ACCESS_KEY_ID: bG9uZ2hvcm4tdGVzdC1hY2Nlc3Mta2V5 # longhorn-test-access-key
  AWS_SECRET_ACCESS_KEY: bG9uZ2hvcm4tdGVzdC1zZWNyZXQta2V5 # longhorn-test-secret-key
  AWS_ENDPOINTS: aHR0cHM6Ly9taW5pby1zZXJ2aWNlLmRlZmF1bHQ6OTAwMA== # https://minio-service.default:9000
  AWS_CERT: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSURMRENDQWhTZ0F3SUJBZ0lSQU1kbzQycGhUZXlrMTcvYkxyWjVZRHN3RFFZSktvWklodmNOQVFFTEJRQXcKR2pFWU1CWUdBMVVFQ2hNUFRHOXVaMmh2Y200Z0xTQlVaWE4wTUNBWERUSXdNRFF5TnpJek1EQXhNVm9ZRHpJeApNakF3TkRBek1qTXdNREV4V2pBYU1SZ3dGZ1lEVlFRS0V3OU1iMjVuYUc5eWJpQXRJRlJsYzNRd2dnRWlNQTBHCkNTcUdTSWIzRFFFQkFRVUFBNElCRHdBd2dnRUtBb0lCQVFEWHpVdXJnUFpEZ3pUM0RZdWFlYmdld3Fvd2RlQUQKODRWWWF6ZlN1USs3K21Oa2lpUVBvelVVMmZvUWFGL1BxekJiUW1lZ29hT3l5NVhqM1VFeG1GcmV0eDBaRjVOVgpKTi85ZWFJNWRXRk9teHhpMElPUGI2T0RpbE1qcXVEbUVPSXljdjRTaCsvSWo5Zk1nS0tXUDdJZGxDNUJPeThkCncwOVdkckxxaE9WY3BKamNxYjN6K3hISHd5Q05YeGhoRm9tb2xQVnpJbnlUUEJTZkRuSDBuS0lHUXl2bGhCMGsKVHBHSzYxc2prZnFTK3hpNTlJeHVrbHZIRXNQcjFXblRzYU9oaVh6N3lQSlorcTNBMWZoVzBVa1JaRFlnWnNFbQovZ05KM3JwOFhZdURna2kzZ0UrOElXQWRBWHExeWhqRDdSSkI4VFNJYTV0SGpKUUtqZ0NlSG5HekFnTUJBQUdqCmF6QnBNQTRHQTFVZER3RUIvd1FFQXdJQ3BEQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBVEFQQmdOVkhSTUIKQWY4RUJUQURBUUgvTURFR0ExVWRFUVFxTUNpQ0NXeHZZMkZzYUc5emRJSVZiV2x1YVc4dGMyVnlkbWxqWlM1awpaV1poZFd4MGh3Ui9BQUFCTUEwR0NTcUdTSWIzRFFFQkN3VUFBNElCQVFDbUZMMzlNSHVZMzFhMTFEajRwMjVjCnFQRUM0RHZJUWozTk9kU0dWMmQrZjZzZ3pGejFXTDhWcnF2QjFCMVM2cjRKYjJQRXVJQkQ4NFlwVXJIT1JNU2MKd3ViTEppSEtEa0Jmb2U5QWI1cC9VakpyS0tuajM0RGx2c1cvR3AwWTZYc1BWaVdpVWorb1JLbUdWSTI0Q0JIdgpnK0JtVzNDeU5RR1RLajk0eE02czNBV2xHRW95YXFXUGU1eHllVWUzZjFBWkY5N3RDaklKUmVWbENtaENGK0JtCmFUY1RSUWN3cVdvQ3AwYmJZcHlERFlwUmxxOEdQbElFOW8yWjZBc05mTHJVcGFtZ3FYMmtYa2gxa3lzSlEralAKelFadHJSMG1tdHVyM0RuRW0yYmk0TktIQVFIcFc5TXUxNkdRakUxTmJYcVF0VEI4OGpLNzZjdEg5MzRDYWw2VgotLS0tLUVORCBDRVJUSUZJQ0FURS0tLS0t
---
apiVersion: apps/v1
kind: Deployment
metadata:
  name: longhorn-test-minio
  namespace: default
  labels:
    app: longhorn-test-minio
spec:
  replicas: 1
  selector:
    matchLabels:
      app: longhorn-test-minio
  template:
    metadata:
      labels:
        app: longhorn-test-minio
    spec:
      volumes:
      - name: minio-volume
        emptyDir: {}
      - name: minio-certificates
        secret:
          secretName: minio-secret
          items:
          - key: AWS_CERT
            path: public.crt
          - key: AWS_CERT_KEY
            path: private.key
      containers:
      - name: minio
        image: minio/minio:RELEASE.2022-02-01T18-00-14Z
        command: ["sh", "-c", "mkdir -p /storage/backupbucket && mkdir -p /root/.minio/certs && ln -s /root/certs/private.key /root/.minio/certs/private.key && ln -s /root/certs/public.crt /root/.minio/certs/public.crt && exec minio server /storage"]
        env:
        - name: MINIO_ROOT_USER
          valueFrom:
            secretKeyRef:
              name: minio-secret
              key: AWS_ACCESS_KEY_ID
        - name: MINIO_ROOT_PASSWORD
          valueFrom:
            secretKeyRef:
              name: minio-secret
              key: AWS_SECRET_ACCESS_KEY
        ports:
        - containerPort: 9000
        volumeMounts:
        - name: minio-volume
          mountPath: "/storage"
        - name: minio-certificates
          mountPath: "/root/certs"
          readOnly: true
---
apiVersion: v1
kind: Service
metadata:
  name: minio-service
  namespace: default
spec:
  selector:
    app: longhorn-test-minio
  ports:
    - port: 9000
      targetPort: 9000
      protocol: TCP
  sessionAffinity: ClientIP
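A sketch of settings targeting this MinIO store; the `s3://<bucket>@<region>/` form and the region are illustrative, while the actual endpoint (https://minio-service.default:9000) is read from AWS_ENDPOINTS in `minio-secret`. The `backupbucket` bucket is pre-created by the container command above:

defaultSettings:
  backupTarget: s3://backupbucket@us-east-1/   # region value is illustrative
  backupTargetCredentialSecret: minio-secret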
deploy/backupstores/nfs-backupstore.yaml (Normal file, 60 lines)
@@ -0,0 +1,60 @@
apiVersion: apps/v1
kind: Deployment
metadata:
  name: longhorn-test-nfs
  namespace: default
  labels:
    app: longhorn-test-nfs
spec:
  selector:
    matchLabels:
      app: longhorn-test-nfs
  template:
    metadata:
      labels:
        app: longhorn-test-nfs
    spec:
      volumes:
      - name: nfs-volume
        emptyDir: {}
      containers:
      - name: longhorn-test-nfs-container
        image: longhornio/nfs-ganesha:latest
        imagePullPolicy: Always
        env:
        - name: EXPORT_ID
          value: "14"
        - name: EXPORT_PATH
          value: /opt/backupstore
        - name: PSEUDO_PATH
          value: /opt/backupstore
        - name: NFS_DISK_IMAGE_SIZE_MB
          value: "4096"
        command: ["bash", "-c", "chmod 700 /opt/backupstore && /opt/start_nfs.sh | tee /var/log/ganesha.log"]
        securityContext:
          privileged: true
          capabilities:
            add: ["SYS_ADMIN", "DAC_READ_SEARCH"]
        volumeMounts:
        - name: nfs-volume
          mountPath: "/opt/backupstore"
        livenessProbe:
          exec:
            command: ["bash", "-c", "grep \"No export entries found\" /var/log/ganesha.log > /dev/null 2>&1 ; [ $? -ne 0 ]"]
          initialDelaySeconds: 5
          periodSeconds: 5
          timeoutSeconds: 4
---
kind: Service
apiVersion: v1
metadata:
  name: longhorn-test-nfs-svc
  namespace: default
spec:
  selector:
    app: longhorn-test-nfs
  clusterIP: None
  ports:
    - name: notnecessary
      port: 1234
      targetPort: 1234
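A sketch of settings targeting this NFS export; the `nfs://` target form is an assumption, and NFS needs no credential secret:

defaultSettings:
  backupTarget: nfs://longhorn-test-nfs-svc.default:/opt/backupstore   # format assumed
  backupTargetCredentialSecret: ""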
@@ -1,35 +0,0 @@
apiVersion: v1
kind: Pod
metadata:
  name: longhorn-test-nfs
  labels:
    app: longhorn-test-nfs
spec:
  containers:
  - name: longhorn-test-nfs-container
    image: janeczku/nfs-ganesha:latest
    imagePullPolicy: Always
    env:
    - name: EXPORT_ID
      value: "14"
    - name: EXPORT_PATH
      value: /opt/backupstore
    - name: PSEUDO_PATH
      value: /opt/backupstore
    command: ["bash", "-c", "mkdir -p /opt/backupstore && /opt/start_nfs.sh"]
    securityContext:
      capabilities:
        add: ["SYS_ADMIN", "DAC_READ_SEARCH"]
---
kind: Service
apiVersion: v1
metadata:
  name: longhorn-test-nfs-svc
spec:
  selector:
    app: longhorn-test-nfs
  clusterIP: None
  ports:
    - name: notnecessary
      port: 1234
      targetPort: 1234
@@ -1,9 +0,0 @@
kind: StorageClass
apiVersion: storage.k8s.io/v1
metadata:
  name: longhorn
provisioner: rancher.io/longhorn
parameters:
  numberOfReplicas: "3"
  staleReplicaTimeout: "30"
  fromBackup: ""
@@ -1,302 +0,0 @@
apiVersion: v1
kind: Namespace
metadata:
  name: longhorn-system
---
apiVersion: v1
kind: ServiceAccount
metadata:
  name: longhorn-service-account
  namespace: longhorn-system
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRole
metadata:
  name: longhorn-role
rules:
- apiGroups:
  - apiextensions.k8s.io
  resources:
  - customresourcedefinitions
  verbs:
  - "*"
- apiGroups: [""]
  resources: ["pods", "events", "persistentvolumes", "persistentvolumeclaims", "nodes", "proxy/nodes"]
  verbs: ["*"]
- apiGroups: ["extensions"]
  resources: ["daemonsets"]
  verbs: ["*"]
- apiGroups: ["batch"]
  resources: ["jobs", "cronjobs"]
  verbs: ["*"]
- apiGroups: ["storage.k8s.io"]
  resources: ["storageclasses"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["nodes"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["volumes"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["engines"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["replicas"]
  verbs: ["*"]
- apiGroups: ["longhorn.rancher.io"]
  resources: ["settings"]
  verbs: ["*"]
---
apiVersion: rbac.authorization.k8s.io/v1beta1
kind: ClusterRoleBinding
metadata:
  name: longhorn-bind
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: ClusterRole
  name: longhorn-role
subjects:
- kind: ServiceAccount
  name: longhorn-service-account
  namespace: longhorn-system
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    longhorn-manager: Engine
  name: engines.longhorn.rancher.io
spec:
  group: longhorn.rancher.io
  names:
    kind: Engine
    listKind: EngineList
    plural: engines
    shortNames:
    - lhe
    singular: engine
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    longhorn-manager: Replica
  name: replicas.longhorn.rancher.io
spec:
  group: longhorn.rancher.io
  names:
    kind: Replica
    listKind: ReplicaList
    plural: replicas
    shortNames:
    - lhr
    singular: replica
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    longhorn-manager: Setting
  name: settings.longhorn.rancher.io
spec:
  group: longhorn.rancher.io
  names:
    kind: Setting
    listKind: SettingList
    plural: settings
    shortNames:
    - lhs
    singular: setting
  scope: Namespaced
  version: v1alpha1
---
apiVersion: apiextensions.k8s.io/v1beta1
kind: CustomResourceDefinition
metadata:
  labels:
    longhorn-manager: Volume
  name: volumes.longhorn.rancher.io
spec:
  group: longhorn.rancher.io
  names:
    kind: Volume
    listKind: VolumeList
    plural: volumes
    shortNames:
    - lhv
    singular: volume
  scope: Namespaced
  version: v1alpha1
---
apiVersion: extensions/v1beta1
kind: DaemonSet
metadata:
  labels:
    app: longhorn-manager
  name: longhorn-manager
  namespace: longhorn-system
spec:
  template:
    metadata:
      labels:
        app: longhorn-manager
    spec:
      initContainers:
      - name: init-container
        image: rancher/longhorn-engine:de88734
        command: ['sh', '-c', 'cp /usr/local/bin/* /data/']
        volumeMounts:
        - name: execbin
          mountPath: /data/
      containers:
      - name: longhorn-manager
        image: rancher/longhorn-manager:010fe60
        imagePullPolicy: Always
        securityContext:
          privileged: true
        command:
        - longhorn-manager
        - -d
        - daemon
        - --engine-image
        - rancher/longhorn-engine:de88734
        - --manager-image
        - rancher/longhorn-manager:010fe60
        - --service-account
        - longhorn-service-account
        ports:
        - containerPort: 9500
        volumeMounts:
        - name: dev
          mountPath: /host/dev/
        - name: proc
          mountPath: /host/proc/
        - name: varrun
          mountPath: /var/run/
        - name: longhorn
          mountPath: /var/lib/rancher/longhorn/
        - name: execbin
          mountPath: /usr/local/bin/
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: POD_IP
          valueFrom:
            fieldRef:
              fieldPath: status.podIP
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
      volumes:
      - name: dev
        hostPath:
          path: /dev/
      - name: proc
        hostPath:
          path: /proc/
      - name: varrun
        hostPath:
          path: /var/run/
      - name: longhorn
        hostPath:
          path: /var/lib/rancher/longhorn/
      - name: execbin
        emptyDir: {}
      serviceAccountName: longhorn-service-account
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: longhorn-manager
  name: longhorn-backend
  namespace: longhorn-system
spec:
  selector:
    app: longhorn-manager
  ports:
  - port: 9500
    targetPort: 9500
  sessionAffinity: ClientIP
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  labels:
    app: longhorn-ui
  name: longhorn-ui
  namespace: longhorn-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: longhorn-ui
    spec:
      containers:
      - name: longhorn-ui
        image: rancher/longhorn-ui:1455f4f
        ports:
        - containerPort: 8000
        env:
        - name: LONGHORN_MANAGER_IP
          value: "http://longhorn-backend:9500"
---
kind: Service
apiVersion: v1
metadata:
  labels:
    app: longhorn-ui
  name: longhorn-frontend
  namespace: longhorn-system
spec:
  selector:
    app: longhorn-ui
  ports:
  - port: 80
    targetPort: 8000
  type: LoadBalancer
---
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: longhorn-flexvolume-driver-deployer
  namespace: longhorn-system
spec:
  replicas: 1
  template:
    metadata:
      labels:
        app: longhorn-flexvolume-driver-deployer
    spec:
      containers:
      - name: longhorn-flexvolume-driver-deployer
        image: rancher/longhorn-manager:010fe60
        imagePullPolicy: Always
        command:
        - longhorn-manager
        - -d
        - deploy-flexvolume-driver
        - --manager-image
        - rancher/longhorn-manager:010fe60
        env:
        - name: POD_NAMESPACE
          valueFrom:
            fieldRef:
              fieldPath: metadata.namespace
        - name: NODE_NAME
          valueFrom:
            fieldRef:
              fieldPath: spec.nodeName
        - name: FLEXVOLUME_DIR
          value: "/home/kubernetes/flexvolume/"
      serviceAccountName: longhorn-service-account
---
13 deploy/longhorn-images.txt (Normal file)
@@ -0,0 +1,13 @@
longhornio/csi-attacher:v4.2.0
longhornio/csi-provisioner:v3.4.1
longhornio/csi-resizer:v1.7.0
longhornio/csi-snapshotter:v6.2.1
longhornio/csi-node-driver-registrar:v2.7.0
longhornio/livenessprobe:v2.9.0
longhornio/backing-image-manager:master-head
longhornio/longhorn-engine:master-head
longhornio/longhorn-instance-manager:master-head
longhornio/longhorn-manager:master-head
longhornio/longhorn-share-manager:master-head
longhornio/longhorn-ui:master-head
longhornio/support-bundle-kit:v0.0.27
4346 deploy/longhorn.yaml
File diff suppressed because it is too large
61 deploy/podsecuritypolicy.yaml (Normal file)
@@ -0,0 +1,61 @@
apiVersion: policy/v1beta1
kind: PodSecurityPolicy
metadata:
  name: longhorn-psp
spec:
  privileged: true
  allowPrivilegeEscalation: true
  requiredDropCapabilities:
  - NET_RAW
  allowedCapabilities:
  - SYS_ADMIN
  hostNetwork: false
  hostIPC: false
  hostPID: true
  runAsUser:
    rule: RunAsAny
  seLinux:
    rule: RunAsAny
  fsGroup:
    rule: RunAsAny
  supplementalGroups:
    rule: RunAsAny
  volumes:
  - configMap
  - downwardAPI
  - emptyDir
  - secret
  - projected
  - hostPath
---
apiVersion: rbac.authorization.k8s.io/v1
kind: Role
metadata:
  name: longhorn-psp-role
  namespace: longhorn-system
rules:
- apiGroups:
  - policy
  resources:
  - podsecuritypolicies
  verbs:
  - use
  resourceNames:
  - longhorn-psp
---
apiVersion: rbac.authorization.k8s.io/v1
kind: RoleBinding
metadata:
  name: longhorn-psp-binding
  namespace: longhorn-system
roleRef:
  apiGroup: rbac.authorization.k8s.io
  kind: Role
  name: longhorn-psp-role
subjects:
- kind: ServiceAccount
  name: longhorn-service-account
  namespace: longhorn-system
- kind: ServiceAccount
  name: default
  namespace: longhorn-system
36 deploy/prerequisite/longhorn-cifs-installation.yaml (Normal file)
@@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-cifs-installation
  labels:
    app: longhorn-cifs-installation
  annotations:
    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y cifs-utils; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y cifs-utils; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y cifs-utils; fi && if [ $? -eq 0 ]; then echo "cifs install successfully"; else echo "cifs utilities install failed error code $?"; fi
spec:
  selector:
    matchLabels:
      app: longhorn-cifs-installation
  template:
    metadata:
      labels:
        app: longhorn-cifs-installation
    spec:
      hostNetwork: true
      hostPID: true
      initContainers:
      - name: cifs-installation
        command:
        - nsenter
        - --mount=/proc/1/ns/mnt
        - --
        - bash
        - -c
        - *cmd
        image: alpine:3.12
        securityContext:
          privileged: true
      containers:
      - name: sleep
        image: registry.k8s.io/pause:3.1
  updateStrategy:
    type: RollingUpdate
36 deploy/prerequisite/longhorn-iscsi-installation.yaml (Normal file)
@@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-iscsi-installation
  labels:
    app: longhorn-iscsi-installation
  annotations:
    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y open-iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid && sudo modprobe iscsi_tcp; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y open-iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid && sudo modprobe iscsi_tcp; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y iscsi-initiator-utils && echo "InitiatorName=$(/sbin/iscsi-iname)" > /etc/iscsi/initiatorname.iscsi && sudo systemctl -q enable iscsid && sudo systemctl start iscsid && sudo modprobe iscsi_tcp; fi && if [ $? -eq 0 ]; then echo "iscsi install successfully"; else echo "iscsi install failed error code $?"; fi
spec:
  selector:
    matchLabels:
      app: longhorn-iscsi-installation
  template:
    metadata:
      labels:
        app: longhorn-iscsi-installation
    spec:
      hostNetwork: true
      hostPID: true
      initContainers:
      - name: iscsi-installation
        command:
        - nsenter
        - --mount=/proc/1/ns/mnt
        - --
        - bash
        - -c
        - *cmd
        image: alpine:3.17
        securityContext:
          privileged: true
      containers:
      - name: sleep
        image: registry.k8s.io/pause:3.1
  updateStrategy:
    type: RollingUpdate
35 deploy/prerequisite/longhorn-iscsi-selinux-workaround.yaml (Normal file)
@@ -0,0 +1,35 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-iscsi-selinux-workaround
  labels:
    app: longhorn-iscsi-selinux-workaround
  annotations:
    command: &cmd if ! rpm -q policycoreutils > /dev/null 2>&1; then echo "failed to apply workaround; only applicable in Fedora based distros with SELinux enabled"; exit; elif cd /tmp && echo '(allow iscsid_t self (capability (dac_override)))' > local_longhorn.cil && semodule -vi local_longhorn.cil && rm -f local_longhorn.cil; then echo "applied workaround successfully"; else echo "failed to apply workaround; error code $?"; fi
spec:
  selector:
    matchLabels:
      app: longhorn-iscsi-selinux-workaround
  template:
    metadata:
      labels:
        app: longhorn-iscsi-selinux-workaround
    spec:
      hostPID: true
      initContainers:
      - name: iscsi-selinux-workaround
        command:
        - nsenter
        - --mount=/proc/1/ns/mnt
        - --
        - bash
        - -c
        - *cmd
        image: alpine:3.17
        securityContext:
          privileged: true
      containers:
      - name: sleep
        image: registry.k8s.io/pause:3.1
  updateStrategy:
    type: RollingUpdate
36 deploy/prerequisite/longhorn-nfs-installation.yaml (Normal file)
@@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-nfs-installation
  labels:
    app: longhorn-nfs-installation
  annotations:
    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y nfs-common && sudo modprobe nfs; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y nfs-client && sudo modprobe nfs; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y nfs-utils && sudo modprobe nfs; fi && if [ $? -eq 0 ]; then echo "nfs install successfully"; else echo "nfs install failed error code $?"; fi
spec:
  selector:
    matchLabels:
      app: longhorn-nfs-installation
  template:
    metadata:
      labels:
        app: longhorn-nfs-installation
    spec:
      hostNetwork: true
      hostPID: true
      initContainers:
      - name: nfs-installation
        command:
        - nsenter
        - --mount=/proc/1/ns/mnt
        - --
        - bash
        - -c
        - *cmd
        image: alpine:3.12
        securityContext:
          privileged: true
      containers:
      - name: sleep
        image: registry.k8s.io/pause:3.1
  updateStrategy:
    type: RollingUpdate
36 deploy/prerequisite/longhorn-nvme-cli-installation.yaml (Normal file)
@@ -0,0 +1,36 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-nvme-cli-installation
  labels:
    app: longhorn-nvme-cli-installation
  annotations:
    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y nvme-cli && sudo modprobe nvme-tcp; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y nvme-cli && sudo modprobe nvme-tcp; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y nvme-cli && sudo modprobe nvme-tcp; fi && if [ $? -eq 0 ]; then echo "nvme-cli install successfully"; else echo "nvme-cli install failed error code $?"; fi
spec:
  selector:
    matchLabels:
      app: longhorn-nvme-cli-installation
  template:
    metadata:
      labels:
        app: longhorn-nvme-cli-installation
    spec:
      hostNetwork: true
      hostPID: true
      initContainers:
      - name: nvme-cli-installation
        command:
        - nsenter
        - --mount=/proc/1/ns/mnt
        - --
        - bash
        - -c
        - *cmd
        image: alpine:3.12
        securityContext:
          privileged: true
      containers:
      - name: sleep
        image: registry.k8s.io/pause:3.1
  updateStrategy:
    type: RollingUpdate
47 deploy/prerequisite/longhorn-spdk-setup.yaml (Normal file)
@@ -0,0 +1,47 @@
apiVersion: apps/v1
kind: DaemonSet
metadata:
  name: longhorn-spdk-setup
  labels:
    app: longhorn-spdk-setup
  annotations:
    command: &cmd OS=$(grep -E "^ID_LIKE=" /etc/os-release | cut -d '=' -f 2); if [[ -z "${OS}" ]]; then OS=$(grep -E "^ID=" /etc/os-release | cut -d '=' -f 2); fi; if [[ "${OS}" == *"debian"* ]]; then sudo apt-get update -q -y && sudo apt-get install -q -y git; elif [[ "${OS}" == *"suse"* ]]; then sudo zypper --gpg-auto-import-keys -q refresh && sudo zypper --gpg-auto-import-keys -q install -y git; else sudo yum makecache -q -y && sudo yum --setopt=tsflags=noscripts install -q -y git; fi && if [ $? -eq 0 ]; then echo "git install successfully"; else echo "git install failed error code $?"; fi && rm -rf ${SPDK_DIR}; git clone -b longhorn https://github.com/longhorn/spdk.git ${SPDK_DIR} && bash ${SPDK_DIR}/scripts/setup.sh ${SPDK_OPTION}; if [ $? -eq 0 ]; then echo "vm.nr_hugepages=$((HUGEMEM/2))" >> /etc/sysctl.conf; echo "SPDK environment is configured successfully"; else echo "Failed to configure SPDK environment error code $?"; fi; rm -rf ${SPDK_DIR}
spec:
  selector:
    matchLabels:
      app: longhorn-spdk-setup
  template:
    metadata:
      labels:
        app: longhorn-spdk-setup
    spec:
      hostNetwork: true
      hostPID: true
      initContainers:
      - name: longhorn-spdk-setup
        command:
        - nsenter
        - --mount=/proc/1/ns/mnt
        - --
        - bash
        - -c
        - *cmd
        image: alpine:3.12
        env:
        - name: SPDK_DIR
          value: "/tmp/spdk"
        - name: SPDK_OPTION
          value: ""
        - name: HUGEMEM
          value: "1024"
        - name: PCI_ALLOWED
          value: "none"
        - name: DRIVER_OVERRIDE
          value: "uio_pci_generic"
        securityContext:
          privileged: true
      containers:
      - name: sleep
        image: registry.k8s.io/pause:3.1
  updateStrategy:
    type: RollingUpdate
7 deploy/upgrade_responder_server/README.md (Normal file)
@@ -0,0 +1,7 @@
# Upgrade Responder Helm Chart

This directory contains the helm values for the Longhorn upgrade responder server.
The values are in the file `./chart-values.yaml`.
When you update the content of `./chart-values.yaml`, the automation pipeline will update the Longhorn upgrade responder.
Information about the source chart is in `chart.yaml`.
See [dev/upgrade-responder](../../dev/upgrade-responder/README.md) for manual deployment steps.
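For reference, a rough manual equivalent of what that pipeline does might look like the sketch below. It uses the release name, namespace, and pinned commit from `chart.yaml`; the `./chart` path inside the upgrade-responder repository is an assumption, so check the repo layout before running:

```bash
# Hypothetical manual update of the upgrade responder; release/namespace/commit
# come from chart.yaml, the chart directory name is assumed.
git clone https://github.com/longhorn/upgrade-responder.git
cd upgrade-responder
git checkout 116f807836c29185038cfb005708f0a8d41f4d35
helm upgrade --install longhorn-upgrade-responder ./chart \
  --namespace longhorn-upgrade-responder --create-namespace \
  --values /path/to/longhorn/deploy/upgrade_responder_server/chart-values.yaml
```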
372 deploy/upgrade_responder_server/chart-values.yaml (Normal file)
@@ -0,0 +1,372 @@
# Specify the name of the application that is using this Upgrade Responder server
# This will be used to create a database named <application-name>_upgrade_responder
# in the InfluxDB to store all data for this Upgrade Responder
# The name must be in snake case format
applicationName: longhorn

image:
  repository: longhornio/upgrade-responder
  tag: longhorn-head
  pullPolicy: Always

secret:
  name: upgrade-responder-secret
  # Set this to false if you don't want to manage these secrets with helm
  managed: false

resources:
  limits:
    cpu: 400m
    memory: 512Mi
  requests:
    cpu: 200m
    memory: 256Mi

# This configmap contains information about the latest release
# of the application that is using this Upgrade Responder
configMap:
  responseConfig: |-
    {
      "versions": [
        { "name": "v1.3.3", "releaseDate": "2023-04-19T00:00:00Z", "tags": ["stable"] },
        { "name": "v1.4.3", "releaseDate": "2023-07-14T00:00:00Z", "tags": ["latest", "stable"] },
        { "name": "v1.5.1", "releaseDate": "2023-07-19T00:00:00Z", "tags": ["latest"] }
      ]
    }
  requestSchema: |-
    {
      "appVersionSchema": { "dataType": "string", "maxLen": 200 },
      "extraTagInfoSchema": {
        "hostKernelRelease": { "dataType": "string", "maxLen": 200 },
        "hostOsDistro": { "dataType": "string", "maxLen": 200 },
        "kubernetesNodeProvider": { "dataType": "string", "maxLen": 200 },
        "kubernetesVersion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAllowRecurringJobWhileVolumeDetached": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAllowVolumeCreationWithDegradedAvailability": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoCleanupSystemGeneratedSnapshot": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoDeletePodWhenVolumeDetachedUnexpectedly": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoSalvage": { "dataType": "string", "maxLen": 200 },
        "longhornSettingBackupCompressionMethod": { "dataType": "string", "maxLen": 200 },
        "longhornSettingBackupTarget": { "dataType": "string", "maxLen": 200 },
        "longhornSettingCrdApiVersion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingCreateDefaultDiskLabeledNodes": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDefaultDataLocality": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDisableRevisionCounter": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDisableSchedulingOnCordonedNode": { "dataType": "string", "maxLen": 200 },
        "longhornSettingFastReplicaRebuildEnabled": { "dataType": "string", "maxLen": 200 },
        "longhornSettingKubernetesClusterAutoscalerEnabled": { "dataType": "string", "maxLen": 200 },
        "longhornSettingNodeDownPodDeletionPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingNodeDrainPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingOfflineReplicaRebuilding": { "dataType": "string", "maxLen": 200 },
        "longhornSettingOrphanAutoDeletion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingPriorityClass": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRegistrySecret": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRemoveSnapshotsDuringFilesystemTrim": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaAutoBalance": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaZoneSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaDiskSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRestoreVolumeRecurringJobs": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrityCronjob": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrityImmediateCheckAfterSnapshotCreation": { "dataType": "string", "maxLen": 200 },
        "longhornSettingStorageNetwork": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSystemManagedComponentsNodeSelector": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSystemManagedPodsImagePullPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingTaintToleration": { "dataType": "string", "maxLen": 200 },
        "longhornSettingV2DataEngine": { "dataType": "string", "maxLen": 200 }
      },
      "extraFieldInfoSchema": {
        "longhornInstanceManagerAverageCpuUsageMilliCores": { "dataType": "float" },
        "longhornInstanceManagerAverageMemoryUsageBytes": { "dataType": "float" },
        "longhornManagerAverageCpuUsageMilliCores": { "dataType": "float" },
        "longhornManagerAverageMemoryUsageBytes": { "dataType": "float" },
        "longhornNamespaceUid": { "dataType": "string", "maxLen": 200 },
        "longhornNodeCount": { "dataType": "float" },
        "longhornNodeDiskHDDCount": { "dataType": "float" },
        "longhornNodeDiskNVMeCount": { "dataType": "float" },
        "longhornNodeDiskSSDCount": { "dataType": "float" },
        "longhornSettingBackingImageCleanupWaitInterval": { "dataType": "float" },
        "longhornSettingBackingImageRecoveryWaitInterval": { "dataType": "float" },
        "longhornSettingBackupConcurrentLimit": { "dataType": "float" },
        "longhornSettingBackupstorePollInterval": { "dataType": "float" },
        "longhornSettingConcurrentAutomaticEngineUpgradePerNodeLimit": { "dataType": "float" },
        "longhornSettingConcurrentReplicaRebuildPerNodeLimit": { "dataType": "float" },
        "longhornSettingConcurrentVolumeBackupRestorePerNodeLimit": { "dataType": "float" },
        "longhornSettingDefaultReplicaCount": { "dataType": "float" },
        "longhornSettingEngineReplicaTimeout": { "dataType": "float" },
        "longhornSettingFailedBackupTtl": { "dataType": "float" },
        "longhornSettingGuaranteedInstanceManagerCpu": { "dataType": "float" },
        "longhornSettingRecurringFailedJobsHistoryLimit": { "dataType": "float" },
        "longhornSettingRecurringSuccessfulJobsHistoryLimit": { "dataType": "float" },
        "longhornSettingReplicaFileSyncHttpClientTimeout": { "dataType": "float" },
        "longhornSettingReplicaReplenishmentWaitInterval": { "dataType": "float" },
        "longhornSettingRestoreConcurrentLimit": { "dataType": "float" },
        "longhornSettingStorageMinimalAvailablePercentage": { "dataType": "float" },
        "longhornSettingStorageOverProvisioningPercentage": { "dataType": "float" },
        "longhornSettingStorageReservedPercentageForDefaultDisk": { "dataType": "float" },
        "longhornSettingSupportBundleFailedHistoryLimit": { "dataType": "float" },
        "longhornVolumeAccessModeRwoCount": { "dataType": "float" },
        "longhornVolumeAccessModeRwxCount": { "dataType": "float" },
        "longhornVolumeAccessModeUnknownCount": { "dataType": "float" },
        "longhornVolumeAverageActualSizeBytes": { "dataType": "float" },
        "longhornVolumeAverageNumberOfReplicas": { "dataType": "float" },
        "longhornVolumeAverageSizeBytes": { "dataType": "float" },
        "longhornVolumeAverageSnapshotCount": { "dataType": "float" },
        "longhornVolumeDataLocalityBestEffortCount": { "dataType": "float" },
        "longhornVolumeDataLocalityDisabledCount": { "dataType": "float" },
        "longhornVolumeDataLocalityStrictLocalCount": { "dataType": "float" },
        "longhornVolumeFrontendBlockdevCount": { "dataType": "float" },
        "longhornVolumeFrontendIscsiCount": { "dataType": "float" },
        "longhornVolumeOfflineReplicaRebuildingDisabledCount": { "dataType": "float" },
        "longhornVolumeOfflineReplicaRebuildingEnabledCount": { "dataType": "float" },
        "longhornVolumeReplicaAutoBalanceDisabledCount": { "dataType": "float" },
        "longhornVolumeReplicaSoftAntiAffinityFalseCount": { "dataType": "float" },
        "longhornVolumeReplicaZoneSoftAntiAffinityTrueCount": { "dataType": "float" },
        "longhornVolumeReplicaDiskSoftAntiAffinityTrueCount": { "dataType": "float" },
        "longhornVolumeRestoreVolumeRecurringJobFalseCount": { "dataType": "float" },
        "longhornVolumeSnapshotDataIntegrityDisabledCount": { "dataType": "float" },
        "longhornVolumeSnapshotDataIntegrityFastCheckCount": { "dataType": "float" },
        "longhornVolumeUnmapMarkSnapChainRemovedFalseCount": { "dataType": "float" }
      }
    }
5 deploy/upgrade_responder_server/chart.yaml (Normal file)
@@ -0,0 +1,5 @@
url: https://github.com/longhorn/upgrade-responder.git
commit: 116f807836c29185038cfb005708f0a8d41f4d35
releaseName: longhorn-upgrade-responder
namespace: longhorn-upgrade-responder
12 dev/scale-test/.gitignore (vendored, Normal file)
@@ -0,0 +1,12 @@
# ignores all goland project folders and files
.idea
*.iml
*.ipr

# ignore output folder
out
tmp
results

# ignore kubeconfig
kubeconfig
27 dev/scale-test/README.md (Normal file)
@@ -0,0 +1,27 @@
## Overview
scale-test is a collection of developer scripts used for scaling a cluster to a given number of volumes
while monitoring the time required to complete these actions.
`sample.sh` can be used to quickly see how long it takes for the requested number of volumes to be up and usable.
`scale-test.py` can be used to create the requested number of statefulsets based on the `statefulset.yaml` template,
as well as to retrieve detailed timing information per volume.


### scale-test.py
scale-test.py watches `pod`, `pvc`, `va` events (ADDED, MODIFIED, DELETED).
Based on that information we can calculate the time of actions for each individual pod.

In addition, scale-test.py can be used to create a set of statefulset deployment files
based on the `statefulset.yaml` template, with the following variables substituted based on the current sts index:
`@NODE_NAME@` - schedule each sts on a dedicated node
`@STS_NAME@` - also used for the volume-name

Make sure to set the correct constant values in scale-test.py before running.


### sample.sh
sample.sh can be used to scale to a requested number of volumes based on the existing statefulsets
and node count for the current cluster.

Pass the requested number of volumes as well as the node count of the current cluster.
Example for 1000 volumes and 100 nodes: `./sample.sh 1000 100`
This expects there to be a statefulset deployment for each node.
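Putting the pieces together, a minimal end-to-end run could look like the sketch below. It assumes the constants in scale-test.py (`NODE_PREFIX`, `NODE_COUNT`, kubeconfig settings) have already been edited for your cluster; the `out/` directory is the one `create_sts_yaml` writes to:

```bash
# Generate out/sts1.yaml ... out/stsN.yaml and start watching pod/pvc/va events.
python3 scale-test.py &

# Create one statefulset per node; each starts at replicas: 0
# (see the statefulset.yaml template below).
kubectl apply -f out/

# Scale to 1000 volumes across 100 nodes while the watcher logs events.
./sample.sh 1000 100
```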
19 dev/scale-test/sample.sh (Executable file)
@@ -0,0 +1,19 @@
#!/bin/bash

requested=${1:-0}
node_count=${2:-1}
required_scale=$((requested / node_count))

now=$(date)
ready=$(kubectl get pods -o custom-columns=NAMESPACE:metadata.namespace,POD:metadata.name,PodIP:status.podIP,READY:status.containerStatuses[*].ready | grep -c true)
echo "$ready -- $now - start state"

cmd=$(kubectl scale --replicas="$required_scale" statefulset --all)
echo "$cmd"
while [ "$ready" -ne "$requested" ]; do
    sleep 60
    now=$(date)
    ready=$(kubectl get pods -o custom-columns=NAMESPACE:metadata.namespace,POD:metadata.name,PodIP:status.podIP,READY:status.containerStatuses[*].ready | grep -c true)
    echo "$ready -- $now - delta:"
done
echo "$requested -- $now - done state"
124 dev/scale-test/scale-test.py (Normal file)
@@ -0,0 +1,124 @@
import sys
import asyncio
import logging
from pathlib import Path
from kubernetes import client, config, watch

NAMESPACE = "default"
NODE_PREFIX = "jmoody-work"
NODE_COUNT = 100
TEMPLATE_FILE = "statefulset.yaml"
KUBE_CONFIG = None
KUBE_CONTEXT = None
# KUBE_CONFIG = "kubeconfig"
# KUBE_CONTEXT = "jmoody-test-jmoody-control2"


def create_sts_deployment(count):
    # @NODE_NAME@ - schedule each sts on a dedicated node
    # @STS_NAME@ - also used for the volume-name
    # create 100 stateful-sets
    for i in range(count):
        create_sts_yaml(i + 1)


def create_sts_yaml(index):
    content = Path(TEMPLATE_FILE).read_text()
    content = content.replace("@NODE_NAME@", NODE_PREFIX + str(index))
    content = content.replace("@STS_NAME@", "sts" + str(index))
    file = Path("out/sts" + str(index) + ".yaml")
    file.parent.mkdir(parents=True, exist_ok=True)
    file.write_text(content)


async def watch_pods_async():
    log = logging.getLogger('pod_events')
    log.setLevel(logging.INFO)
    v1 = client.CoreV1Api()
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_pod, namespace=NAMESPACE):
        process_pod_event(log, event)
        await asyncio.sleep(0)


def process_pod_event(log, event):
    log.info("Event: %s %s %s" % (event['type'], event['object'].kind, event['object'].metadata.name))
    if 'ADDED' in event['type']:
        pass
    elif 'DELETED' in event['type']:
        pass
    else:
        pass


async def watch_pvc_async():
    log = logging.getLogger('pvc_events')
    log.setLevel(logging.INFO)
    v1 = client.CoreV1Api()
    w = watch.Watch()
    for event in w.stream(v1.list_namespaced_persistent_volume_claim, namespace=NAMESPACE):
        process_pvc_event(log, event)
        await asyncio.sleep(0)


def process_pvc_event(log, event):
    log.info("Event: %s %s %s" % (event['type'], event['object'].kind, event['object'].metadata.name))
    if 'ADDED' in event['type']:
        pass
    elif 'DELETED' in event['type']:
        pass
    else:
        pass


async def watch_va_async():
    log = logging.getLogger('va_events')
    log.setLevel(logging.INFO)
    storage = client.StorageV1Api()
    w = watch.Watch()
    for event in w.stream(storage.list_volume_attachment):
        process_va_event(log, event)
        await asyncio.sleep(0)


def process_va_event(log, event):
    log.info("Event: %s %s %s" % (event['type'], event['object'].kind, event['object'].metadata.name))
    if 'ADDED' in event['type']:
        pass
    elif 'DELETED' in event['type']:
        pass
    else:
        pass


if __name__ == '__main__':
    # create the sts deployment files
    create_sts_deployment(NODE_COUNT)

    # setup the monitor
    log_format = '%(asctime)s - %(name)s - %(levelname)s - %(message)s'
    logging.basicConfig(stream=sys.stdout,
                        level=logging.INFO,
                        format=log_format)
    config.load_kube_config(config_file=KUBE_CONFIG,
                            context=KUBE_CONTEXT)
    logging.info("scale-test started")

    # datastructures to keep track of the timings
    # TODO: process events and keep track of the results
    #   results should be per pod/volume
    #   information to keep track: pod index per sts
    #   volume-creation time per pod
    #   volume-attach time per pod
    #   volume-detach time per pod
    pvc_to_va_map = dict()
    pvc_to_pod_map = dict()
    results = dict()

    # start async event_loop
    event_loop = asyncio.get_event_loop()
    event_loop.create_task(watch_pods_async())
    event_loop.create_task(watch_pvc_async())
    event_loop.create_task(watch_va_async())
    event_loop.run_forever()
    logging.info("scale-test-finished")
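The TODO above (turning the raw watch events into per-volume timings) is left open in the script. As a rough, hypothetical stopgap until that bookkeeping exists, coarse attach latency can be eyeballed from the API server's stored timestamps instead of live events; this sketch assumes `jq` is installed and that PV names can be matched back to the `sts<N>` PVCs:

```bash
# List each VolumeAttachment with its creation timestamp; comparing these
# against the matching PVC's creationTimestamp gives a coarse attach latency.
kubectl get volumeattachments -o json | jq -r \
  '.items[] | "\(.spec.source.persistentVolumeName) \(.metadata.creationTimestamp)"'
kubectl get pvc -n default -o json | jq -r \
  '.items[] | "\(.spec.volumeName) \(.metadata.creationTimestamp)"'
```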
41 dev/scale-test/statefulset.yaml (Normal file)
@@ -0,0 +1,41 @@
apiVersion: apps/v1
kind: StatefulSet
metadata:
  name: @STS_NAME@
spec:
  replicas: 0
  serviceName: @STS_NAME@
  selector:
    matchLabels:
      app: @STS_NAME@
  template:
    metadata:
      labels:
        app: @STS_NAME@
    spec:
      nodeName: @NODE_NAME@
      restartPolicy: Always
      terminationGracePeriodSeconds: 10
      containers:
      - name: '@STS_NAME@'
        image: 'busybox:latest'
        command: ["/bin/sh", "-ec", "while :; do echo '.'; sleep 5 ; done"]
        livenessProbe:
          exec:
            command:
            - ls
            - /mnt/@STS_NAME@
          initialDelaySeconds: 5
          periodSeconds: 5
        volumeMounts:
        - name: @STS_NAME@
          mountPath: /mnt/@STS_NAME@
  volumeClaimTemplates:
  - metadata:
      name: @STS_NAME@
    spec:
      accessModes: [ "ReadWriteOnce" ]
      storageClassName: "longhorn"
      resources:
        requests:
          storage: 1Gi
45 dev/scripts/lm-update.sh (Executable file)
@@ -0,0 +1,45 @@
#!/bin/bash

#set -x
set -e

username=$1

if [ "$username" == "" ]
then
    echo DockerHub username is required
    exit 1
fi

update=$2

project="longhorn-manager"
base="${GOPATH}/src/github.com/longhorn/longhorn-manager"
yaml=${base}"/deploy/install/02-components/01-manager.yaml"
driver_yaml=${base}"/deploy/install/02-components/04-driver.yaml"

latest=`cat ${base}/bin/latest_image`
private=`sed "s/longhornio/${username}/g" ${base}/bin/latest_image`

echo Latest image ${latest}
echo Latest private image ${private}
docker tag ${latest} ${private}
docker push ${private}

escaped_private=${private//\//\\\/}
sed -i "s/image\:\ .*\/${project}:.*/image\:\ ${escaped_private}/g" $yaml
sed -i "s/-\ .*\/${project}:.*/-\ ${escaped_private}/g" $yaml
sed -i "s/imagePullPolicy\:\ .*/imagePullPolicy\:\ Always/g" $yaml
sed -i "s/image\:\ .*\/${project}:.*/image\:\ ${escaped_private}/g" $driver_yaml
sed -i "s/-\ .*\/${project}:.*/-\ ${escaped_private}/g" $driver_yaml
sed -i "s/imagePullPolicy\:\ .*/imagePullPolicy\:\ Always/g" $driver_yaml

set +e

if [ "$update" == "" ]
then
    kubectl delete -f $yaml
    kubectl create -f $yaml
    kubectl delete -f $driver_yaml
    kubectl create -f $driver_yaml
fi
24 dev/scripts/update-image-pull-policy.sh (Executable file)
@@ -0,0 +1,24 @@
#!/bin/bash

NS=longhorn-system
KINDS="daemonset deployments"

function patch_kind {
    kind=$1
    list=$(kubectl -n $NS get $kind -o name)
    for obj in $list
    do
        echo Updating $obj to imagePullPolicy: Always
        name=${obj##*/}
        kubectl -n $NS patch $obj -p '{"spec": {"template": {"spec":{"containers":[{"name":"'$name'","imagePullPolicy":"Always"}]}}}}'
    done
}

for kind in $KINDS
do
    patch_kind $kind
done

echo "Warning: Make sure check and wait for all pods running again!"
echo "Current status: (CTRL-C to exit)"
kubectl get pods -w -n longhorn-system
55 dev/upgrade-responder/README.md (Normal file)
@@ -0,0 +1,55 @@
## Overview

### Install

1. Install Longhorn.
1. Install Longhorn [upgrade-responder](https://github.com/longhorn/upgrade-responder) stack.
   ```bash
   ./install.sh
   ```
   Sample output:
   ```shell
   secret/influxdb-creds created
   persistentvolumeclaim/influxdb created
   deployment.apps/influxdb created
   service/influxdb created
   Deployment influxdb is running.
   Cloning into 'upgrade-responder'...
   remote: Enumerating objects: 1077, done.
   remote: Counting objects: 100% (1076/1076), done.
   remote: Compressing objects: 100% (454/454), done.
   remote: Total 1077 (delta 573), reused 1049 (delta 565), pack-reused 1
   Receiving objects: 100% (1077/1077), 55.01 MiB | 18.10 MiB/s, done.
   Resolving deltas: 100% (573/573), done.
   Release "longhorn-upgrade-responder" does not exist. Installing it now.
   NAME: longhorn-upgrade-responder
   LAST DEPLOYED: Thu May 11 00:42:44 2023
   NAMESPACE: default
   STATUS: deployed
   REVISION: 1
   TEST SUITE: None
   NOTES:
   1. Get the Upgrade Responder server URL by running these commands:
        export POD_NAME=$(kubectl get pods --namespace default -l "app.kubernetes.io/name=upgrade-responder,app.kubernetes.io/instance=longhorn-upgrade-responder" -o jsonpath="{.items[0].metadata.name}")
        kubectl port-forward $POD_NAME 8080:8314 --namespace default
        echo "Upgrade Responder server URL is http://127.0.0.1:8080"
   Deployment longhorn-upgrade-responder is running.
   persistentvolumeclaim/grafana-pvc created
   deployment.apps/grafana created
   service/grafana created
   Deployment grafana is running.

   [Upgrade Checker]
   URL : http://longhorn-upgrade-responder.default.svc.cluster.local:8314/v1/checkupgrade

   [InfluxDB]
   URL      : http://influxdb.default.svc.cluster.local:8086
   Database : longhorn_upgrade_responder
   Username : root
   Password : root

   [Grafana]
   Dashboard : http://1.2.3.4:30864
   Username  : admin
   Password  : admin
   ```
424 dev/upgrade-responder/install.sh (Executable file)
@@ -0,0 +1,424 @@
#!/bin/bash

UPGRADE_RESPONDER_REPO="https://github.com/longhorn/upgrade-responder.git"
UPGRADE_RESPONDER_REPO_BRANCH="master"
UPGRADE_RESPONDER_VALUE_YAML="upgrade-responder-value.yaml"
UPGRADE_RESPONDER_IMAGE_REPO="longhornio/upgrade-responder"
UPGRADE_RESPONDER_IMAGE_TAG="master-head"

INFLUXDB_URL="http://influxdb.default.svc.cluster.local:8086"

APP_NAME="longhorn"

DEPLOYMENT_TIMEOUT_SEC=300
DEPLOYMENT_WAIT_INTERVAL_SEC=5

temp_dir=$(mktemp -d)
trap 'rm -rf "${temp_dir}"' EXIT # -f because packed Git files (.pack, .idx) are write protected.

cp -a ./* ${temp_dir}
cd ${temp_dir}

wait_for_deployment() {
    local deployment_name="$1"
    local start_time=$(date +%s)

    while true; do
        status=$(kubectl rollout status deployment/${deployment_name})
        if [[ ${status} == *"successfully rolled out"* ]]; then
            echo "Deployment ${deployment_name} is running."
            break
        fi

        elapsed_secs=$(($(date +%s) - ${start_time}))
        if [[ ${elapsed_secs} -ge ${DEPLOYMENT_TIMEOUT_SEC} ]]; then
            echo "Timed out waiting for deployment ${deployment_name} to be running."
            exit 1
        fi

        echo "Deployment ${deployment_name} is not running yet. Waiting..."
        sleep ${DEPLOYMENT_WAIT_INTERVAL_SEC}
    done
}

install_influxdb() {
    kubectl apply -f ./manifests/influxdb.yaml
    wait_for_deployment "influxdb"
}

install_grafana() {
    kubectl apply -f ./manifests/grafana.yaml
    wait_for_deployment "grafana"
}

install_upgrade_responder() {
    cat << EOF > ${UPGRADE_RESPONDER_VALUE_YAML}
applicationName: ${APP_NAME}
secret:
  name: upgrade-responder-secrets
  managed: true
  influxDBUrl: "${INFLUXDB_URL}"
  influxDBUser: "root"
  influxDBPassword: "root"
configMap:
  responseConfig: |-
    {
      "versions": [{
        "name": "v1.0.0",
        "releaseDate": "2020-05-18T12:30:00Z",
        "tags": ["latest"]
      }]
    }
  requestSchema: |-
    {
      "appVersionSchema": { "dataType": "string", "maxLen": 200 },
      "extraTagInfoSchema": {
        "hostKernelRelease": { "dataType": "string", "maxLen": 200 },
        "hostOsDistro": { "dataType": "string", "maxLen": 200 },
        "kubernetesNodeProvider": { "dataType": "string", "maxLen": 200 },
        "kubernetesVersion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAllowRecurringJobWhileVolumeDetached": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAllowVolumeCreationWithDegradedAvailability": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoCleanupSystemGeneratedSnapshot": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoDeletePodWhenVolumeDetachedUnexpectedly": { "dataType": "string", "maxLen": 200 },
        "longhornSettingAutoSalvage": { "dataType": "string", "maxLen": 200 },
        "longhornSettingBackupCompressionMethod": { "dataType": "string", "maxLen": 200 },
        "longhornSettingBackupTarget": { "dataType": "string", "maxLen": 200 },
        "longhornSettingCrdApiVersion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingCreateDefaultDiskLabeledNodes": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDefaultDataLocality": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDisableRevisionCounter": { "dataType": "string", "maxLen": 200 },
        "longhornSettingDisableSchedulingOnCordonedNode": { "dataType": "string", "maxLen": 200 },
        "longhornSettingFastReplicaRebuildEnabled": { "dataType": "string", "maxLen": 200 },
        "longhornSettingKubernetesClusterAutoscalerEnabled": { "dataType": "string", "maxLen": 200 },
        "longhornSettingNodeDownPodDeletionPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingNodeDrainPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingOfflineReplicaRebuilding": { "dataType": "string", "maxLen": 200 },
        "longhornSettingOrphanAutoDeletion": { "dataType": "string", "maxLen": 200 },
        "longhornSettingPriorityClass": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRegistrySecret": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRemoveSnapshotsDuringFilesystemTrim": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaAutoBalance": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaZoneSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingReplicaDiskSoftAntiAffinity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingRestoreVolumeRecurringJobs": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrity": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrityCronjob": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSnapshotDataIntegrityImmediateCheckAfterSnapshotCreation": { "dataType": "string", "maxLen": 200 },
        "longhornSettingStorageNetwork": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSystemManagedComponentsNodeSelector": { "dataType": "string", "maxLen": 200 },
        "longhornSettingSystemManagedPodsImagePullPolicy": { "dataType": "string", "maxLen": 200 },
        "longhornSettingTaintToleration": { "dataType": "string", "maxLen": 200 },
        "longhornSettingV2DataEngine": { "dataType": "string", "maxLen": 200 }
      },
      "extraFieldInfoSchema": {
        "longhornInstanceManagerAverageCpuUsageMilliCores": { "dataType": "float" },
        "longhornInstanceManagerAverageMemoryUsageBytes": { "dataType": "float" },
        "longhornManagerAverageCpuUsageMilliCores": { "dataType": "float" },
        "longhornManagerAverageMemoryUsageBytes": { "dataType": "float" },
        "longhornNamespaceUid": { "dataType": "string", "maxLen": 200 },
        "longhornNodeCount": { "dataType": "float" },
        "longhornNodeDiskHDDCount": { "dataType": "float" },
        "longhornNodeDiskNVMeCount": { "dataType": "float" },
        "longhornNodeDiskSSDCount": { "dataType": "float" },
        "longhornSettingBackingImageCleanupWaitInterval": { "dataType": "float" },
        "longhornSettingBackingImageRecoveryWaitInterval": { "dataType": "float" },
        "longhornSettingBackupConcurrentLimit": { "dataType": "float" },
        "longhornSettingBackupstorePollInterval": { "dataType": "float" },
        "longhornSettingConcurrentAutomaticEngineUpgradePerNodeLimit": { "dataType": "float" },
        "longhornSettingConcurrentReplicaRebuildPerNodeLimit": { "dataType": "float" },
        "longhornSettingConcurrentVolumeBackupRestorePerNodeLimit": { "dataType": "float" },
        "longhornSettingDefaultReplicaCount": { "dataType": "float" },
        "longhornSettingEngineReplicaTimeout": { "dataType": "float" },
        "longhornSettingFailedBackupTtl": { "dataType": "float" },
        "longhornSettingGuaranteedInstanceManagerCpu": { "dataType": "float" },
        "longhornSettingRecurringFailedJobsHistoryLimit": { "dataType": "float" },
        "longhornSettingRecurringSuccessfulJobsHistoryLimit": { "dataType": "float" },
        "longhornSettingReplicaFileSyncHttpClientTimeout": { "dataType": "float" },
        "longhornSettingReplicaReplenishmentWaitInterval": { "dataType": "float" },
        "longhornSettingRestoreConcurrentLimit": { "dataType": "float" },
        "longhornSettingStorageMinimalAvailablePercentage": { "dataType": "float" },
        "longhornSettingStorageOverProvisioningPercentage": { "dataType": "float" },
        "longhornSettingStorageReservedPercentageForDefaultDisk": { "dataType": "float" },
        "longhornSettingSupportBundleFailedHistoryLimit": { "dataType": "float" },
        "longhornVolumeAccessModeRwoCount": { "dataType": "float" },
        "longhornVolumeAccessModeRwxCount": { "dataType": "float" },
        "longhornVolumeAccessModeUnknownCount": { "dataType": "float" },
        "longhornVolumeAverageActualSizeBytes": { "dataType": "float" },
        "longhornVolumeAverageNumberOfReplicas": { "dataType": "float" },
        "longhornVolumeAverageSizeBytes": { "dataType": "float" },
        "longhornVolumeAverageSnapshotCount": { "dataType": "float" },
        "longhornVolumeDataLocalityBestEffortCount": { "dataType": "float" },
        "longhornVolumeDataLocalityDisabledCount": { "dataType": "float" },
        "longhornVolumeDataLocalityStrictLocalCount": { "dataType": "float" },
        "longhornVolumeFrontendBlockdevCount": { "dataType": "float" },
        "longhornVolumeFrontendIscsiCount": { "dataType": "float" },
        "longhornVolumeOfflineReplicaRebuildingDisabledCount": { "dataType": "float" },
        "longhornVolumeOfflineReplicaRebuildingEnabledCount": { "dataType": "float" },
        "longhornVolumeReplicaAutoBalanceDisabledCount": { "dataType": "float" },
        "longhornVolumeReplicaSoftAntiAffinityFalseCount": { "dataType": "float" },
        "longhornVolumeReplicaZoneSoftAntiAffinityTrueCount": { "dataType": "float" },
        "longhornVolumeReplicaDiskSoftAntiAffinityTrueCount": {
|
||||||
|
"dataType": "float"
|
||||||
|
},
|
||||||
|
"longhornVolumeRestoreVolumeRecurringJobFalseCount": {
|
||||||
|
"dataType": "float"
|
||||||
|
},
|
||||||
|
"longhornVolumeSnapshotDataIntegrityDisabledCount": {
|
||||||
|
"dataType": "float"
|
||||||
|
},
|
||||||
|
"longhornVolumeSnapshotDataIntegrityFastCheckCount": {
|
||||||
|
"dataType": "float"
|
||||||
|
},
|
||||||
|
"longhornVolumeUnmapMarkSnapChainRemovedFalseCount": {
|
||||||
|
"dataType": "float"
|
||||||
|
}
|
||||||
|
}
|
||||||
|
}
|
||||||
|
image:
|
||||||
|
repository: ${UPGRADE_RESPONDER_IMAGE_REPO}
|
||||||
|
tag: ${UPGRADE_RESPONDER_IMAGE_TAG}
|
||||||
|
EOF
|
||||||
|
|
||||||
|
git clone -b ${UPGRADE_RESPONDER_REPO_BRANCH} ${UPGRADE_RESPONDER_REPO}
|
||||||
|
helm upgrade --install ${APP_NAME}-upgrade-responder upgrade-responder/chart -f ${UPGRADE_RESPONDER_VALUE_YAML}
|
||||||
|
wait_for_deployment "${APP_NAME}-upgrade-responder"
|
||||||
|
}
|
||||||
|
|
||||||
|
output() {
|
||||||
|
local upgrade_responder_service_info=$(kubectl get svc/${APP_NAME}-upgrade-responder --no-headers)
|
||||||
|
local upgrade_responder_service_port=$(echo "${upgrade_responder_service_info}" | awk '{print $5}' | cut -d'/' -f1)
|
||||||
|
echo # a blank line to separate the installation outputs for better readability.
|
||||||
|
printf "[Upgrade Checker]\n"
|
||||||
|
printf "%-10s: http://${APP_NAME}-upgrade-responder.default.svc.cluster.local:${upgrade_responder_service_port}/v1/checkupgrade\n\n" "URL"
|
||||||
|
|
||||||
|
printf "[InfluxDB]\n"
|
||||||
|
printf "%-10s: ${INFLUXDB_URL}\n" "URL"
|
||||||
|
printf "%-10s: ${APP_NAME}_upgrade_responder\n" "Database"
|
||||||
|
printf "%-10s: root\n" "Username"
|
||||||
|
printf "%-10s: root\n\n" "Password"
|
||||||
|
|
||||||
|
local public_ip=$(curl -s https://ifconfig.me/ip)
|
||||||
|
local grafana_service_info=$(kubectl get svc/grafana --no-headers)
|
||||||
|
local grafana_service_port=$(echo "${grafana_service_info}" | awk '{print $5}' | cut -d':' -f2 | cut -d'/' -f1)
|
||||||
|
printf "[Grafana]\n"
|
||||||
|
printf "%-10s: http://${public_ip}:${grafana_service_port}\n" "Dashboard"
|
||||||
|
printf "%-10s: admin\n" "Username"
|
||||||
|
printf "%-10s: admin\n" "Password"
|
||||||
|
}
|
||||||
|
|
||||||
|
install_influxdb
|
||||||
|
install_upgrade_responder
|
||||||
|
install_grafana
|
||||||
|
output
|
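Once the script finishes, the responder can be smoke-tested against the URL printed by `output()`. The sketch below is illustrative only: `appVersion` is the standard upgrade-responder request field, the extra fields are examples taken from the schema configured above, and the port must be replaced by the one the service actually exposes.

```bash
# Smoke test for the upgrade checker endpoint (run from a pod inside the
# cluster). Replace the app name and port with the values printed by output().
curl -s -X POST "http://${APP_NAME}-upgrade-responder.default.svc.cluster.local:8314/v1/checkupgrade" \
  -d '{
        "appVersion": "v1.5.0",
        "extraTagInfo": {"longhornSettingReplicaAutoBalance": "disabled"},
        "extraFieldInfo": {"longhornNodeCount": 3}
      }'
```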
86
dev/upgrade-responder/manifests/grafana.yaml
Normal file
@@ -0,0 +1,86 @@
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: grafana-pvc
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: grafana
  name: grafana
spec:
  selector:
    matchLabels:
      app: grafana
  template:
    metadata:
      labels:
        app: grafana
    spec:
      securityContext:
        fsGroup: 472
        supplementalGroups:
          - 0
      containers:
        - name: grafana
          image: grafana/grafana:7.1.0
          imagePullPolicy: IfNotPresent
          env:
            - name: GF_INSTALL_PLUGINS
              value: "grafana-worldmap-panel"
          ports:
            - containerPort: 3000
              name: http-grafana
              protocol: TCP
          readinessProbe:
            failureThreshold: 3
            httpGet:
              path: /robots.txt
              port: 3000
              scheme: HTTP
            initialDelaySeconds: 10
            periodSeconds: 30
            successThreshold: 1
            timeoutSeconds: 2
          livenessProbe:
            failureThreshold: 3
            initialDelaySeconds: 30
            periodSeconds: 10
            successThreshold: 1
            tcpSocket:
              port: 3000
            timeoutSeconds: 1
          resources:
            requests:
              cpu: 250m
              memory: 750Mi
          volumeMounts:
            - mountPath: /var/lib/grafana
              name: grafana-pv
      volumes:
        - name: grafana-pv
          persistentVolumeClaim:
            claimName: grafana-pvc
---
apiVersion: v1
kind: Service
metadata:
  name: grafana
spec:
  ports:
    - port: 3000
      protocol: TCP
      targetPort: http-grafana
  selector:
    app: grafana
  sessionAffinity: None
  type: LoadBalancer
90
dev/upgrade-responder/manifests/influxdb.yaml
Normal file
@@ -0,0 +1,90 @@
apiVersion: v1
kind: Secret
metadata:
  name: influxdb-creds
  namespace: default
type: Opaque
data:
  INFLUXDB_HOST: aW5mbHV4ZGI= # influxdb
  INFLUXDB_PASSWORD: cm9vdA== # root
  INFLUXDB_USERNAME: cm9vdA== # root
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: influxdb
  namespace: default
  labels:
    app: influxdb
spec:
  accessModes:
    - ReadWriteOnce
  storageClassName: longhorn
  resources:
    requests:
      storage: 2Gi
---
apiVersion: apps/v1
kind: Deployment
metadata:
  labels:
    app: influxdb
  name: influxdb
  namespace: default
spec:
  progressDeadlineSeconds: 600
  replicas: 1
  revisionHistoryLimit: 10
  selector:
    matchLabels:
      app: influxdb
  strategy:
    rollingUpdate:
      maxSurge: 25%
      maxUnavailable: 25%
    type: RollingUpdate
  template:
    metadata:
      creationTimestamp: null
      labels:
        app: influxdb
    spec:
      containers:
        - image: docker.io/influxdb:1.8.10
          imagePullPolicy: IfNotPresent
          name: influxdb
          resources: {}
          terminationMessagePath: /dev/termination-log
          terminationMessagePolicy: File
          envFrom:
            - secretRef:
                name: influxdb-creds
          volumeMounts:
            - mountPath: /var/lib/influxdb
              name: var-lib-influxdb
      volumes:
        - name: var-lib-influxdb
          persistentVolumeClaim:
            claimName: influxdb
      dnsPolicy: ClusterFirst
      restartPolicy: Always
      schedulerName: default-scheduler
      securityContext: {}
      terminationGracePeriodSeconds: 30
---
apiVersion: v1
kind: Service
metadata:
  labels:
    app: influxdb
  name: influxdb
  namespace: default
spec:
  ports:
    - port: 8086
      protocol: TCP
      targetPort: 8086
  selector:
    app: influxdb
  sessionAffinity: None
  type: ClusterIP
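The credential strings in the `influxdb-creds` Secret are plain base64, matching the inline comments; a quick sanity check:

```bash
# Reproduce the Secret values above.
echo -n influxdb | base64   # aW5mbHV4ZGI=
echo -n root | base64       # cm9vdA==
# Decode to double-check:
echo aW5mbHV4ZGI= | base64 -d   # influxdb
```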
165
enhancements/20200319-default-disks-and-node-configuration.md
Normal file
@@ -0,0 +1,165 @@
# Default disks and node configuration

## Summary

This enhancement allows the user to customize the default disks and node configuration in Longhorn for newly added nodes using Kubernetes labels and annotations, instead of using the Longhorn API or UI.

### Related Issues

https://github.com/longhorn/longhorn/issues/1053

https://github.com/longhorn/longhorn/issues/991

## Motivation

### Goals

1. Allow users to customize the disk and node configuration for new nodes without using the Longhorn API or UI. This makes it much easier to scale the cluster, since it eliminates the need to configure Longhorn manually for each newly added node when the node contains more than one disk or the disk configuration differs between nodes.
2. Allow users to define node tags for newly added nodes without using the Longhorn API or UI.

### Non-goals

This enhancement will not keep the node label/annotation in sync with the Longhorn node/disk configuration.

## Proposal

1. Longhorn directly uses the node annotation to set the node tags when the node contains no tags.
2. Longhorn uses the setting `Create Default Disk on Labeled Nodes` to decide whether to enable default disk customization.
   If the setting is enabled, Longhorn waits for the default disk customization to be set, instead of directly creating the Longhorn default disk for a node without disks (new nodes included).
   Longhorn then relies on the value of the node label `node.longhorn.io/create-default-disk` to decide how to customize the default disks:
   If the value is `config`, the annotation will be parsed and used as the default disk customization.
   If the value is the boolean `true`, the data path setting will be used for the default disk.
   Any other value is treated as `false`, and no default disk is created.

### User Stories

#### Scale up the cluster and add tags to new nodes

Before the enhancement, when the users want to scale up the Kubernetes cluster and add tags on the nodes, they need access to the Longhorn API/UI to do that.

After the enhancement, the users can add a specified annotation to the new nodes to define the tags. In this way, the users don't need to work with the Longhorn API/UI directly during the process of scaling up a cluster.

#### Scale up the cluster and add disks to new nodes

Before the enhancement, when the users want to scale up the Kubernetes cluster and customize the disks on the nodes, they need to:

1. Enable the Longhorn setting `Create Default Disk on Labeled Nodes` to prevent the default disk from being created automatically on the node.
2. Add new nodes to the Kubernetes cluster, e.g. by using Rancher or Terraform.
3. After the new node is recognized by Longhorn, edit the node to add disks using either the Longhorn UI or API.

The third step needs to be done for every node separately, which makes the operation inconvenient.

After the enhancement, the steps the user takes are:

1. Enable the Longhorn setting `Create Default Disk on Labeled Nodes`.
2. Add new nodes to the Kubernetes cluster, e.g. by using Rancher or Terraform.
3. Add the label and annotations to the node to define the default disk(s) for the new node. Longhorn will pick them up automatically and add the disk(s) for the new node.

In this way, the user doesn't need to work with the Longhorn API/UI directly during the process of scaling up a cluster.

### User experience description

#### Scenario 1 - Set up the default node tags:

1. The user adds the default node tags annotation `node.longhorn.io/default-node-tags=<node tag list>` to a Kubernetes node (see the example below).
2. If the Longhorn node tag list was empty before step 1, the user should see the tag list for that node updated according to the annotation. Otherwise, the user should see no change to the tag list.
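For illustration, a minimal way to apply the scenario-1 annotation with `kubectl`; the node name is a placeholder, while the annotation key and value format are the ones defined in this proposal:

```bash
# Hypothetical node name "worker-1"; the tag list is a JSON array passed
# as a string, as described above.
kubectl annotate node worker-1 \
    node.longhorn.io/default-node-tags='["fast","storage"]'
```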
#### Scenario 2 - Set up and use the default disks for a new node:

1. The users enable the setting `Create Default Disk on Labeled Nodes`.
2. The users add a new node; they will get a node without any disk.
   1. The users can get the same result by deleting all disks on an existing node.
3. After patching the label `node.longhorn.io/create-default-disk=config` and the annotation `node.longhorn.io/default-disks-config=<customized default disks>` for the Kubernetes node, the node disks should be updated according to the annotation (see the example after this list).
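A sketch of the scenario-2 patching with `kubectl`; the node name and disk path are placeholders, while the label and annotation keys come from this proposal:

```bash
# Hypothetical node name and disk path; the annotation value reuses the
# disk config format shown in the design section below.
kubectl label node worker-1 \
    node.longhorn.io/create-default-disk=config
kubectl annotate node worker-1 \
    node.longhorn.io/default-disks-config='[{"path":"/mnt/disk1","allowScheduling":false}]'
```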
## Design

### Implementation Overview

##### For Node Tags:

If:

1. The Longhorn node contains no tags.
2. The Kubernetes node object of the same name contains an annotation `node.longhorn.io/default-node-tags`, for example:
   ```
   node.longhorn.io/default-node-tags: '["fast","storage"]'
   ```
3. The annotation can be parsed successfully.

Then Longhorn will update the Longhorn node object with the new tags specified by the annotation.

The process will be done as a part of the node controller reconciliation logic in the Longhorn manager.

##### For Default Disks:

If:

1. The Longhorn node contains no disks.
2. The setting `Create Default Disk on Labeled Nodes` is enabled.
3. The Kubernetes node object of the same name contains the label `node.longhorn.io/create-default-disk: 'config'` and an annotation `node.longhorn.io/default-disks-config`, for example:
   ```
   node.longhorn.io/default-disks-config:
     '[{"path":"/mnt/disk1","allowScheduling":false},
       {"path":"/mnt/disk2","allowScheduling":false,"storageReserved":1024,"tags":["ssd","fast"]}]'
   ```
4. The annotation can be parsed successfully.

Then Longhorn will create the customized default disk(s) specified by the annotation.

The process will be done as a part of the node controller reconciliation logic in the Longhorn manager.

##### Notice

If the label/annotations fail validation, no partial configuration is applied and the whole annotation is ignored. No change is made to the node tags/disks.

The validation failure can be caused by:

1. The annotation format is invalid and cannot be parsed into a tag/disk configuration.
2. The format is valid but there is an unqualified tag in the tag list.
3. The format is valid but there is an invalid disk parameter in the disk list,
   e.g., a duplicate disk path, a non-existing disk path, multiple disks with the same file system, the reserved storage size being out of range...

### Test plan

1. The users deploy the Longhorn system.
2. The users enable the setting `Create Default Disk on Labeled Nodes`.
3. The users scale the cluster. The newly introduced nodes should contain no disks and no tags.
4. The users pick a new node, create 2 random data paths in the container, then patch the following valid node label and annotations:
   ```
   labels:
     node.longhorn.io/create-default-disk: "config"
   annotations:
     node.longhorn.io/default-disks-config:
       '[{"path":"<random data path 1>","allowScheduling":false},
         {"path":"<random data path 2>","allowScheduling":true,"storageReserved":1024,"tags":["ssd","fast"]}]'
     node.longhorn.io/default-node-tags: '["fast","storage"]'
   ```
   After the patching, the node disks and tags will be created and match the annotations.

5. The users use the Longhorn UI to modify the node configuration. They will find that the node annotations remain unchanged and no longer match the current node tag/disk configuration.
6. The users delete all node tags and disks via the UI. The node tags/disks will then be recreated immediately and match the annotations.
7. The users pick another new node and directly patch the following invalid node label and annotations:
   ```
   labels:
     node.longhorn.io/create-default-disk: "config"
   annotations:
     node.longhorn.io/default-disks-config:
       '[{"path":"<non-existing data path>","allowScheduling":false},
     node.longhorn.io/default-node-tags: '["slow",".*invalid-tag"]'
   ```
   They should then find that the tag and disk lists are still empty.

8. The users create a random data path and then correct the annotations for the node:
   ```
   annotations:
     node.longhorn.io/default-disks-config:
       '[{"path":"<random data path>","allowScheduling":false}]'
     node.longhorn.io/default-node-tags: '["slow","storage"]'
   ```
   Now they will see that the node tags and disks are created correctly and match the annotations.

### Upgrade strategy

N/A.

@@ -0,0 +1,99 @@
# Replace Filesystem ID key in Disk map

## Summary

This enhancement will remove the dependency on the filesystem ID in the DiskStatus, because we found there is no guarantee that the filesystem ID won't change after the node reboots, e.g. for XFS.

### Related Issues

https://github.com/longhorn/longhorn/issues/972

## Motivation

### Goals

1. Previously Longhorn used the filesystem ID as the key to the map of disks on the node. But we found there is no guarantee that the filesystem ID won't change after the node reboots for certain filesystems, e.g. XFS.
1. We want to enable the ability to configure the CRD directly, to prepare for CRD-based API access in the future.
1. We also need to make sure previously implemented safeguards are not impacted by this change:
    1. If a disk was accidentally unmounted on the node, we should detect that and stop replicas from being scheduled onto it.
    1. We shouldn't allow the user to add two disks pointing to the same filesystem.

### Non-goals

For this enhancement, we will not proactively stop a replica from starting if the disk it resides on is NotReady. The lack of a `replicas` directory should stop the replica from starting automatically.

## Proposal

We will generate a UUID for each disk, called `diskUUID`, and store it as a file `longhorn-disk.cfg` in the filesystem on the disk.

If the filesystem already has the `diskUUID` stored, we will retrieve and verify the `diskUUID` and make sure it doesn't change when we scan the disks.

The disk name can be customized by the user as well.

### Background

Filesystem ID was a good identifier for the disk:

1. Different filesystems on the same node will have different filesystem IDs.
1. It's built into the filesystem. Only one command (`stat`) is needed to retrieve it, as sketched below.
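A minimal illustration of retrieving the filesystem ID with GNU `stat`; the mount path is a placeholder:

```bash
# Print the filesystem ID (in hex) of the filesystem backing a disk path;
# /mnt/disk1 is a hypothetical Longhorn disk path.
stat -f -c '%i' /mnt/disk1
```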
But there is another assumption we had which turned out not to be true. We assumed the filesystem ID won't change during the lifecycle of the filesystem. But we found that some filesystem IDs can change after a remount. It caused an issue on XFS.

Besides that, there is another problem we want to address: currently the API server forwards the updateDisks request to the node owning the disks, since only that node has access to the filesystem and can fill in the filesystem ID (`fsid`). As long as we're using the `fsid` as the key of the disk map, we cannot create new disks without letting that node handle the request. This becomes an issue when we want to allow direct editing of CRDs as an API.

### User Experience In Detail

Before the enhancement, if the users add more disks to the node, the API gateway forwards the request to the responsible node, which validates the input on the fly for cases like two disks pointing to the same filesystem.

After the enhancement, when the users add more disks to the node, the API gateway only validates the basic input. Other error cases will be reflected in the disk's Condition field.

1. If different disks point to the same directory, then:
    1. If all the disks are newly added, both disks will get the condition `ready = false`, with a message indicating that they're pointing to the same filesystem.
    1. If one of the disks already exists, the other disks will get the condition `ready = false`, with a message indicating that they're pointing to the same filesystem as one of the existing disks.
    1. If more than one disk exists and points to the same filesystem, Longhorn will identify which disk is the valid one using `diskUUID` and set the condition of the other disks to `ready = false`.

### API changes

1. The API input for the diskUpdate call will be a map[string]DiskSpec instead of []DiskSpec.
1. The API no longer validates duplicate filesystem IDs.

### UI changes

The UI can let the user customize the disk name. By default, the UI can generate names like `disk-<random>` for the disks.

## Design

### Implementation Overview

The validation will be done in the node controller's `syncDiskStatus`.

The syncDiskStatus process:

1. Scan through the disks, and record the disks in an FSID-to-disks map.
1. Check each FSID after the scanning is done.
    1. If there is only one disk for an FSID:
        1. If the disk already has `status.diskUUID`:
            1. Check for the file `longhorn-disk.cfg`.
                1. If the file exists: parse the value. If it doesn't match `status.diskUUID`, mark the disk as NotReady.
                    1. Case: mounted the wrong disk.
                1. If the file doesn't exist: mark the disk as NotReady.
                    1. Case: reboot and forget to mount.
        1. If the disk has an empty `status.diskUUID`:
            1. Check for the file `longhorn-disk.cfg`.
                1. If it exists, parse the UUID.
                    1. If there is no duplicate UUID in the disk list, record the UUID.
                    1. Otherwise mark the disk as NotReady with `duplicate UUID`.
                1. If it doesn't exist, generate the UUID, record it in the file, then fill in `status.diskUUID`.
                    1. Case: creating a new disk.
    1. If there is more than one disk with the same FSID:
        1. If the disk has `status.diskUUID`:
            1. Follow 2.i.a.
        1. If the disk doesn't have `status.diskUUID`:
            1. Mark the disk as NotReady due to the duplicate FSID.

#### Note on the disk naming

The default disks of the node will be called `default-disk-<fsid>`. That includes the default disks created using node labels/annotations.

### Test plan

Updating the existing node test plan will be enough for the first step, since it already covers the case of a changing filesystem.

### Upgrade strategy

No change is needed for previous disks since they all used the FSID, which is at least unique on the node.
The node controller will fill the `diskUUID` field and create `longhorn-disk.cfg` automatically on the disk once it processes it.
78
enhancements/20200625-volume-deletion-flows.md
Normal file
@@ -0,0 +1,78 @@
# Volume Deletion Flows

## Summary

This enhancement modifies the flow a user would follow for handling deletion of `Volumes` that are `Attached` or otherwise have resources such as a `Persistent Volume` associated with them. Specifically, this adds warnings in the `longhorn-ui` when deleting a `Volume` in these specific cases and provides a means for the `longhorn-manager` to clean up any leftover resources in `Kubernetes` associated with a deleted `Volume`.

### Related Issues

https://github.com/longhorn/longhorn/issues/520

## Motivation

### Goals

The goal of this enhancement is to either address or warn users about situations in which deleting a `Volume` could cause potential problems. In handling the case of cleaning up an associated `Persistent Volume` (and possibly `Persistent Volume Claim`), we can prevent there being leftover unusable `Volume`-related resources in `Kubernetes`. In warning about deletion when the `Volume` is attached, we can inform the user about possible consequences the deletion would have on existing workloads so the user can handle this accordingly.

### Non-Goals

This enhancement is not intended to completely block a user from pursuing any dangerous operations. For example, if a user insists on deleting a currently attached `Volume`, they should not be forbidden from doing so in case the user is absolutely sure that they want to follow through.

## Proposal

When a user wishes to delete a `Volume` from the `Longhorn UI`, the system should check to see if the `Volume` has a resource tied to it or is currently `Attached`:
- If the `Volume` is `Attached`, the user should be warned about the potential consequences of deleting the `Volume` (namely that any applications currently using the `Volume` will no longer have access to it and likely error out) before they can confirm the deletion or cancel it.
- If the `Volume` is tied to a `Persistent Volume` (and possibly a `Persistent Volume Claim`), the user should be informed of this and the fact that we will clean up those resources if the `Volume` is deleted. If the `Volume` is tied to a `Persistent Volume Claim`, the user should also be warned that there may be `Deployments` or `Stateful Sets` that depend on this `Volume` that could no longer work should the user delete the `Volume` (we cannot explicitly see this without having to monitor all `Deployments` and `Stateful Sets` to check if they use a `Longhorn`-backed `Persistent Volume Claim`). Afterwards, the user can confirm the deletion if they wish, which will lead to cleanup of the associated resources and deletion of the `Volume`.

### User Stories

#### Deletion of Volumes with Associated Resources

Before, a user deleting a `Volume` through the `longhorn-ui` would only face the default confirmation message. The user would see the related `Persistent Volume` (and possibly `Persistent Volume Claim`) from the `Volume` listing, but this information would not be displayed in the confirmation message, and on deletion, these resources would still exist, which could raise problems if a user attempted to use them in a workload since they would refer to a nonexistent `Volume`.

After this enhancement, the user would be alerted about the existence of these resources and the fact that deletion of the `Volume` would lead to cleanup of these resources. The user can decide as normal whether to follow through with deletion of the `Volume` from the `longhorn-ui` or not.

#### Deletion of an Attached Volume

Before, a user deleting a `Volume` that was `Attached` would only face the default confirmation message in the `longhorn-ui`. The fact that the `Volume` was `Attached` would not be indicated in the confirmation message, and the user could potentially cause errors in applications using the `Volume` without any warnings.

After this enhancement, a user would be alerted about the `Volume` being `Attached` and would be able to decide on a course of action for `Volume` deletion and handling of any applications using the `Volume` accordingly.

### User Experience In Detail

#### Deletion of Volumes with Associated Resources

1. The user attempts to delete a `Volume` that has a `Persistent Volume` (and potentially a `Persistent Volume Claim`) associated with it.
2. The confirmation message will appear, asking the user to confirm the operation. Additionally, the message will tell the user that the `longhorn-manager` will delete the `Kubernetes` resources associated with the `Volume`. If the `Volume` is additionally tied to a `Persistent Volume Claim`, the user will also be warned about possible adverse effects for any `Deployments` or `Stateful Sets` that may be using that `Volume`.
3. The user can now follow through with one of two options:
   - They can press `Cancel`, which will do nothing and take them back to the `Volume` listing.
   - They can press `Confirm` to follow through with the operation. The `longhorn-manager` will process deletion of the `Volume` and automatically clean up any associated `Persistent Volume` or `Persistent Volume Claim`.

#### Deletion of an Attached Volume

1. The user attempts to delete a `Volume` from the `longhorn-ui` that is currently `Attached`.
2. The confirmation message will appear, telling the user that the `Volume` is `Attached` and that deleting the `Volume` can lead to errors in any applications using the `Volume`.
   - If the `Volume` is also attached to a `Kubernetes` workload (we can determine this from the `Kubernetes Status`), the message should indicate this as well.
3. The user can now follow through with one of two options:
   - They can press `Cancel`, which will do nothing and take them back to the `Volume` listing.
   - They can press `Confirm` to follow through with the operation. The `longhorn-manager` will process deletion of the `Volume`. The user will be responsible for handling any errored applications that depend on the now-deleted `Volume`.

### API Changes

From an API perspective, the call made to delete the `Volume` should look the same. The logic for handling deletion of any `Persistent Volume` or `Persistent Volume Claim` should go into the `Volume Controller`.

## Design

### Implementation Overview

1. `longhorn-ui` changes:
   - When a user attempts to delete a `Volume`:
     - If the `Volume` has an associated `Persistent Volume` and possibly `Persistent Volume Claim`, add an additional warning to the confirmation dialog regarding cleanup of these resources.
     - If the `Volume` is `Attached`, add an additional warning to the confirmation dialog regarding possible errors that may occur that the user should account for.
2. `longhorn-manager` changes (see the sketch after this list):
   - In the `Volume Controller`, if a `Volume` has a `Deletion Timestamp`, check the `Kubernetes Status` of the `Volume`:
     - If there is a `Persistent Volume`, delete it.
     - If there is a `Persistent Volume Claim`, delete it.
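For context, a quick way to spot the kind of leftover resources this change cleans up; this is an illustrative check, assuming the `Longhorn` CSI driver name `driver.longhorn.io`:

```bash
# List PVs backed by Longhorn. After a Volume is deleted without cleanup,
# entries here could reference a nonexistent Longhorn volume.
kubectl get pv -o json \
  | jq -r '.items[] | select(.spec.csi.driver == "driver.longhorn.io") | .metadata.name'
```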
### Test Plan

A number of integration tests will need to be added for the `longhorn-manager` in order to test the changes in this proposal:
1. From the API, create a `Volume` and then create a `Persistent Volume` and `Persistent Volume Claim`. Wait for the `Kubernetes Status` to be populated. Attempt to delete the `Volume`. Both the `Persistent Volume` and `Persistent Volume Claim` should be deleted as well.
2. Create a `Storage Class` for `Longhorn` and use that to provision a new `Volume` for a `Persistent Volume Claim`. Attempt to delete the `Volume`. Both the `Persistent Volume` and `Persistent Volume Claim` should be deleted as well.

Additionally, some manual testing will need to be performed against the `longhorn-ui` changes for this proposal:
1. From the `longhorn-ui`, create a new `Volume` and then create a `Persistent Volume` for that `Volume`. Attempt to delete the `Volume`. The dialog box should indicate to the user that there are `Kubernetes` resources that will be deleted as a result.
2. From the `longhorn-ui`, create a new `Volume` and then `Attach` it. Attempt to delete the `Volume`. The dialog box should indicate that the `Volume` is in use and warn about potential errors.
3. Use `Kubernetes` to create a `Volume` and use it in a `Pod`. Attempt to delete the `Volume` from the `longhorn-ui`. Multiple warnings should show up in the dialog box, with one indicating removal of the `Kubernetes` resources and the other warning about the `Volume` being in use.

### Upgrade strategy

No special upgrade strategy is necessary. Once the user upgrades to the new version of `Longhorn`, these new capabilities will be accessible from the `longhorn-ui` without any special work.

### Notes

- There is interest in allowing the user to decide whether or not to retain the `Persistent Volume` (and possibly `Persistent Volume Claim`) for certain use cases such as restoring from a `Backup`. However, this would require changes to the way `go-rancher` generates the `Go` client that we use so that `Delete` requests against resources are able to take inputs.
- In the case that a `Volume` is provisioned from a `Storage Class` (and set to be `Deleted` once the `Persistent Volume Claim` utilizing that `Volume` has been deleted), the `Volume` should still be deleted properly regardless of how the deletion was initiated. If the `Volume` is deleted from the UI, the call that the `Volume Controller` makes to delete the `Persistent Volume` would only trigger one more deletion call from the `CSI` server to delete the `Volume`, which would return successfully and allow the `Persistent Volume` to be deleted and the `Volume` to be deleted as well. If the `Volume` is deleted because of the `Persistent Volume Claim`, the `CSI` server would be able to successfully make a `Volume` deletion call before deleting the `Persistent Volume`. The `Volume Controller` would have no additional resources to delete and would be able to finish deletion of the `Volume`.
141
enhancements/20200701-backupstore-file-locks.md
Normal file
@@ -0,0 +1,141 @@
# Backupstore File Locks

## Summary

This enhancement will address backup issues that are the result of concurrently running backup operations, by implementing a synchronisation solution that utilizes files on the backupstore as locks.

### Related Issues

https://github.com/longhorn/longhorn/issues/612
https://github.com/longhorn/longhorn/issues/1393
https://github.com/longhorn/longhorn/issues/1392
https://github.com/longhorn/backupstore/pull/37

## Motivation

### Goals

Identify and prevent backup issues caused as a result of concurrent backup operations. Since it should be safe to do backup creation & backup restoration at the same time, we should allow these concurrent operations.

## Proposal

The idea is to implement a locking mechanism that utilizes the backupstore, to prevent the following dangerous cases of concurrent operations:
1. prevent backup deletion during backup restoration
2. prevent backup deletion while a backup is in progress
3. prevent backup creation during backup deletion
4. prevent backup restoration during backup deletion

The locking solution shouldn't unnecessarily block operations, so the following cases should be allowed:
1. allow backup creation during restoration
2. allow backup restoration during creation

The locking solution should have a maximum wait time for lock acquisition, after which it fails the backup operation so that the user does not have to wait forever.

The locking solution should be self-expiring, so that when a process dies unexpectedly, future processes are able to acquire the lock.

The locking solution should guarantee that only a single type of lock is active at a time.

The locking solution should allow a lock to be passed down into asynchronously running goroutines.

### User Experience In Detail

Before this enhancement, it is possible to delete a backup while a backup restoration is in progress. This would lead to an unhealthy restoration volume.

After this enhancement, a backup deletion can only happen after the restoration has been completed. This way the backupstore continues to contain all the necessary blocks that are required for the restoration.

After this enhancement, creation & restoration operations are mutually exclusive with backup deletion operations.

### API changes

## Design

### Implementation Overview

Conceptually the lock can be thought of as a **RW** lock; it includes a `Type` specifier where different types are mutually exclusive.

To allow the lock to be passed into asynchronously running goroutines, we add a `count` field that keeps track of the current references to this lock.

```go
type FileLock struct {
	Name         string
	Type         LockType
	Acquired     bool
	driver       BackupStoreDriver
	volume       string
	count        int32
	serverTime   time.Time
	refreshTimer *time.Ticker
}
```

To make the lock self-expiring, we rely on `serverTime` updates, which need to be refreshed by a timer. We chose a `LOCK_REFRESH_INTERVAL` of **60** seconds; each refresh cycle a lock's `serverTime` will be updated. A lock is considered expired once the current time is after the lock's `serverTime` + `LOCK_MAX_WAIT_TIME` of **150** seconds. Once a lock is expired, any currently active attempts to acquire that lock will time out.

```go
const (
	LOCKS_DIRECTORY       = "locks"
	LOCK_PREFIX           = "lock"
	LOCK_SUFFIX           = ".lck"
	LOCK_REFRESH_INTERVAL = time.Second * 60
	LOCK_MAX_WAIT_TIME    = time.Second * 150
	LOCK_CHECK_INTERVAL   = time.Second * 10
	LOCK_CHECK_WAIT_TIME  = time.Second * 2
)
```

Lock Usage:
1. create a new lock instance via `lock := lock.New()`
2. call `lock.Lock()`, which will block till the lock has been acquired, and increment the lock reference count.
3. defer `lock.Unlock()`, which will decrement the lock reference count and remove the lock once unreferenced.

To make sure the locks are **mutually exclusive**, we use the following process to acquire a lock:
1. create a lock file on the backupstore with a unique `Name`.
2. retrieve all lock files from the backupstore, order them by `Acquired`, then by `serverTime`, followed by `Name`.
3. check if we can acquire the lock; we can only acquire if there is no unexpired (i) lock of a different type (ii) that has priority (iii).
    1. Locks are self-expiring: once the current time is after `lock.serverTime + LOCK_MAX_WAIT_TIME`, we no longer need to consider this lock as valid.
    2. Backup & Restore locks are mapped to compatible types, while Delete locks are mapped to a different type so they are mutually exclusive with the others.
    3. Priority is based on the comparison order, where locks are compared by `lock.Acquired`, then by `lock.serverTime`, followed by `lock.Name`. Acquired locks are always sorted before non-acquired locks.
4. if lock acquisition times out, return an error, which will fail the backup operation.
5. once the lock is acquired, continuously refresh the lock (updates `lock.serverTime`).
6. once the lock is acquired, it can be passed around by calling `lock.Lock()`.
7. once the lock is no longer referenced, it will be removed from the backupstore.

It's very unlikely to run into lock collisions, since we use a uniquely generated name for the lock filename. In cases where two locks have the same `lock.serverTime`, we can rely on the `lock.Name` as a differentiator between the two locks.
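For illustration, this is roughly how lock files could be inspected on a filesystem-backed (e.g. NFS) backupstore; the mount point, volume path layout, and exact file naming are assumptions based on the backupstore conventions and the constants above:

```bash
# Hypothetical NFS backupstore mount and volume name. Per the constants
# above, lock files live in a per-volume "locks" directory and combine
# LOCK_PREFIX, a unique name, and LOCK_SUFFIX, e.g. lock-c1966ff0.lck.
ls "/mnt/nfs/backupstore/volumes/f1/2a/demo-volume/locks/"
```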
### Test plan

A number of integration tests will need to be added for the `longhorn-engine` in order to test the changes in this proposal:
1. place an expired lock file into a backupstore, then verify that a new lock can be acquired.
2. place an active lock file of type `Delete` into a backupstore, then verify that backup/restore operations will trigger a lock acquisition timeout.
3. place an active lock file of type `Delete` into a backupstore, then verify that a new `Delete` operation can acquire a lock.
4. place an active lock file of type `Backup/Restore` into a backupstore, then verify that delete operations will trigger a lock acquisition timeout.
5. place an active lock file of type `Backup/Restore` into a backupstore, then verify that a new `Backup/Restore` operation can acquire a lock.

### Upgrade strategy

No special upgrade strategy is necessary.
199
enhancements/20200721-refactor-restore-for-rebuild-enabling.md
Normal file
@@ -0,0 +1,199 @@
# Refactor restore for rebuild enabling

## Summary

This enhancement will refactor the restore implementation and enable rebuild for restore/DR volumes.

### Related Issues

https://github.com/longhorn/longhorn/issues/1279

## Motivation

### Goals

The goal of this enhancement is to simplify the restore flow so that it can work for rebuilding replicas of restore/DR volumes without breaking the live upgrade feature.

### Non-goals

This enhancement won't guarantee that the restore/DR volume activation won't be blocked by replica rebuilding.

## Proposal

- When there are replicas crashing among restore/DR volumes, new rebuilding replicas will be created as usual. But instead of following the normal replica rebuilding workflow (syncing data/files from other running replicas), the rebuilding replicas of restore/DR volumes will directly restore data from the backup.
  - The normal rebuilding (file syncing) workflow implicitly assumes that all existing snapshots won't change and that newer data written during the rebuilding goes into the volume head. But for restore/DR volumes, new data writing is directly handled by the replica (sync agent server) and is written to underlying snapshots rather than volume heads. As a result, the normal rebuilding logic doesn't fit restore/DR volumes.
  - In order to skip the file syncing and snapshotting and directly do the restore, the rebuilding related API should be updated, which will lead to API version bumps.
- As long as there is a replica that has not restored the newest/latest backup, longhorn manager will directly call the restore command. Then rebuilding replicas will be able to start the restore even if all other replicas are up-to-date.
  - Previously, in order to maintain the consistency of DR volume replicas, Longhorn manager guaranteed that all replicas had restored the same backup before starting the next backup restore. But considering the case that newly rebuilt replicas are empty whereas the existing replicas have restored some backups, this restriction makes replica rebuilding impossible in some cases. Hence we need to break the restriction.
  - Breaking this restriction degrades the consistency of DR volume replicas. But it's acceptable as long as all replicas can finish the latest backup restore and the DR volume can be activated in the end.
- This modification means engines and replicas should be intelligent enough to decide if they need to do a restore and which kind of restore they need to launch.
  - Actually, replica processes have all the information about the restore status and can decide if they need an incremental restore or a full restore by themselves. Specifying the last backup in the restore command is redundant.
  - Longhorn manager only needs to tell the replicas the latest backup they should restore.
  - Longhorn manager still needs to know the last restored backup of all replicas, since it relies on it to determine if the restore/DR volume is available/can be activated.
- Longhorn should wait for the rebuild to complete and check the restore status before auto detachment.
  - Otherwise, the restore volume would be automatically detached while the rebuild is in progress, and the rebuild would be meaningless in this case.

### User Stories

#### Replica crashes when a restore/DR volume is in restore progress

Before, the restore volume keeps state `Degraded` if there is a replica crashing. And the volume will finally become `Faulted` if all replicas are crashed one by one during restoring.

After, the restore volume will start replica rebuilding automatically and be back to state `Healthy` if there is a replica crashing. The volume is available as long as all replicas are not crashed at the same time. And the volume will finish activation/auto-detachment after the rebuild is done.

### User Experience In Detail

#### Replica crash on restore volume

1. Users create a restore volume and wait for the restore to complete.
2. When the restore is in progress, some replicas somehow get crashed. Then the volume rebuilds new replicas immediately, and it will become `Healthy` once the new replicas start rebuilding.
3. The volume will be detached automatically once the restore and the rebuild complete.

#### Replica crash on DR volume

1. Users create a DR volume.
2. Some replicas get crashed. Then the DR volume automatically rebuilds new replicas and restores the latest backup for the rebuilt replicas.
3. Users try to activate the DR volume. The DR volume will wait for the rebuild of all replicas and successful restoration of the latest backup before detachment.

### API changes

#### CLI API

- Add a new flag `--restore` for command `add-replica`, which indicates skipping file syncing and snapshotting. An illustrative invocation is sketched below.
- Deprecate the arg `lastRestoreBackup` and the flag `--incrementally` for command `backup restore`.
- Add a new command `verify-rebuild-replica`, which can mark the rebuilding replicas as available (mode `RW`) for restore/DR volumes after the initial restore is done.
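For illustration only, a sketch of how the new flag and command might be invoked against the engine CLI; the controller/replica addresses are placeholders, and the exact CLI shape is whatever `add-replica` already accepts:

```bash
# Hypothetical addresses; --restore is the new flag from this proposal,
# telling add-replica to skip file syncing and snapshotting.
longhorn --url tcp://longhorn-controller:9501 \
    add-replica tcp://rebuilding-replica:9502 --restore

# Mark the rebuilt replica as mode RW once its initial restore has completed.
longhorn --url tcp://longhorn-controller:9501 \
    verify-rebuild-replica tcp://rebuilding-replica:9502
```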
#### Controller gRPC API
|
||||||
|
- Create a separate message/struct for `ReplicaCreate` request then add the two new fields `Mode` and `SnapshotRequired` to the request.
|
||||||
|
|
||||||
|
## Design
|
||||||
|
|
||||||
|
### Implementation Overview
|
||||||
|
|
||||||
|
#### Engine Part:
|
||||||
|
1. Modify command `add-replica` related APIs:
|
||||||
|
1. Use a new flag `--restore` in command `add-replica` to indicate that file syncing and snapshotting should be skipped for restore/DR volumes.
|
||||||
|
2. The current controller gRPC call `ReplicaCreate` used in the command will directly create a snapshot before the rebuilding. But considering the (snapshot) consistency of of restore/DR volumes, snapshots creation/deletion is fully controlled by the restore command (and the expansion command during the restore). Hence, the snapshotting here needs to be skipped by updating the gRPC call `ReplicaCreate`.
|
||||||
|
2. Add command `verify-rebuild-replica`:
|
||||||
|
1. It just calls the existing controller gRPC function `ReplicaVerifyRebuild`.
|
||||||
|
2. It's mainly used to mark the rebuilding replica of restore/DR volumes as mode `RW` with some verifications and a replica reload.
|
||||||
|
3. Modify command `backup restore`:
|
||||||
|
1. Deprecate/Ignore the arg `lastRestoreBackup` in the restore command and the following sync agent gRPC function. Instead, the sync agent server will directly do a full restore or a incremental restore based on its current restore status.
|
||||||
|
2. Deprecate/Ignore the flag `--incrementally` for command `backup restore`. By checking the disk list of all existing replicas, the command function knows if it needs to generate a new snapshot name.
|
||||||
|
3. The caller of the gRPC call `BackupRestore` only needs to tell the name of the final snapshot file that stores restored data.
|
||||||
|
1. For new restore volume, there is no existing snapshot among all replicas hence we will generate a random snapshot name.
|
||||||
|
2. For replicas of DR volumes or rebuilding replicas of restore volumes, the caller will find the replica containing the most snapshots then use the latest snapshot of the replica in the following restore.
|
||||||
|
3. As for the delta file used in the incremental restore, it will be generated by the sync agent server rather than by the caller. Since the caller has no idea about the last restored backup now and the delta file naming format is `volume-delta-<last restored backup name>.img`.
|
||||||
|
4. To avoid disk/snapshot chain inconsistency between rebuilt replicas and old replicas of a DR volume, snapshot purge is required if there are more than 1 snapshots in one replica. And the (incremental) restore will be blocked before the snapshot purge complete.
|
||||||
|
3. Make the sync agent gRPC call `BackupRestore` more “intelligent”: The function will check the restore status first. If there is no restore record in the sync agent server or the last restored backup is invalid, a full restore will be applied. This means we can remove the gRPC call `BackupRestoreIncrementally`.
|
||||||
|
4. Remove the expansion before the restore call. The expansion of DR volumes should be guaranteed by longhorn manager.
|
||||||
|
5. Coalesce the incremental restore related functions to normal restore functions if possible.
|
||||||
|
|
||||||
|
#### Manager Part:

1. Allow replica replenishment for restore/DR volumes.
2. Add the new flag `--restore` to the command `add-replica` for rebuilding replicas of restore/DR volumes.
3. Modify the pre-restore check and the restore status sync logic:
    1. Previously, the restore command was invoked only if there was no restoring replica. Now the command will be called as long as there is a replica that has not restored the latest backup.
    2. Do not apply the consensual check as a prerequisite of the restore command invocation. The consensual check will be used only for updating `engine.Status.LastRestoredBackup`.
    3. Invoke `verify-rebuild-replica` when a rebuilding replica (mode `WO`) completes a restore.
4. Modify the way the restore command is invoked:
    1. Retain the old implementation for compatibility.
    2. For engines using the new engine image, call the restore command directly as long as the pre-restore check passes.
    3. Some errors need to be ignored, e.g.: replicas are still restoring, the requested backup restore is the same as the last restored backup, or replicas need to complete the snapshot purge before the restore (see the sketch after this list).
5. Mark the rebuilding replicas as mode `ERR` and disable the replica replenishment during the expansion.
6. Modify the prerequisites of restore volume auto detachment or DR volume activation:
    1. Wait for the rebuild to complete and the volume to become `Healthy`.
    2. Check and wait for the snapshot purge.
    3. This prerequisite check applies only to new restore/DR volumes.
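A minimal sketch of the benign-error filtering described in item 4 above; the error substrings are hypothetical placeholders, not Longhorn's actual error messages:

```go
package main

import (
	"errors"
	"fmt"
	"strings"
)

// benignRestoreErrors lists the error conditions the manager should tolerate
// when invoking the restore command; these substrings are placeholders.
var benignRestoreErrors = []string{
	"replicas are restoring",        // a restore is already in progress
	"already restored this backup",  // requested backup equals the last restored one
	"snapshot purge is in progress", // purge must finish before the next restore
}

// shouldIgnoreRestoreError reports whether a restore invocation error can be
// safely ignored so the manager simply retries on the next reconcile.
func shouldIgnoreRestoreError(err error) bool {
	if err == nil {
		return true
	}
	for _, benign := range benignRestoreErrors {
		if strings.Contains(err.Error(), benign) {
			return true
		}
	}
	return false
}

func main() {
	fmt.Println(shouldIgnoreRestoreError(errors.New("replicas are restoring")))   // true
	fmt.Println(shouldIgnoreRestoreError(errors.New("failed to connect engine"))) // false
}
```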
### Test plan

#### Engine integration tests:

##### Restore volume simple rebuild:
1. Create a restore volume with 2 replicas.
2. Run command `backup restore` for the restore volume.
3. Delete one replica of the restore volume.
4. Initialize a new replica, and add the replica to the restore volume.
5. Run command `backup restore`.
6. Verify the restored data is correct and all replicas work fine.
##### DR volume rebuild after expansion:

1. Create a DR volume with 2 replicas.
2. Run command `backup restore` for the DR volume.
3. Wait for the restore to complete.
4. Expand the DR volume and wait for the expansion to complete.
5. Delete one replica of the DR volume.
6. Initialize a new replica, and add the replica to the DR volume.
7. Run command `backup restore`. The old replica should start the snapshot purge while the restore is not actually launched.
8. Wait for the snapshot purge to complete.
9. Re-run command `backup restore`, then wait for the restore to complete.
10. Check that the restored data is correct and all replicas work fine, and verify all replicas contain only 1 snapshot.
#### Manager integration tests:

##### Restore volume rebuild:

1. Launch a pod with a Longhorn volume.
2. Write data to the volume and take a backup.
3. Create a restore volume from the backup and wait for the restore to start.
4. Crash one random replica, then check that the replica is rebuilt and the restore volume becomes `Healthy` after the rebuilding.
5. Wait for the restore to complete and the volume to be auto detached.
6. Launch a pod for the restored volume.
7. Verify all replicas work fine with the correct data.
##### DR volume rebuild during the restore:

1. Launch a pod with a Longhorn volume.
2. Write data to the volume and take the 1st backup.
3. Wait for the 1st backup creation to complete, then write more data to the volume (this will be the data of the 2nd backup).
4. Create a DR volume from the 1st backup and wait for the restore to start.
5. Crash one random replica.
6. Take the 2nd backup of the original volume, then trigger the DR volume last backup update immediately after the 2nd backup creation completes (by calling the backup list API).
7. Check that the replica is rebuilt and the restore volume becomes `Healthy` after the rebuilding.
8. Wait for the restore to complete, then activate the volume.
9. Launch a pod for the activated DR volume.
10. Verify all replicas work fine with the correct data.
##### DR volume rebuild with expansion:

1. Launch a pod with a Longhorn volume.
2. Write data to the volume and take the 1st backup.
3. Create a DR volume from the 1st backup.
4. Shut down the pod and wait for the original volume to be detached.
5. Expand the original volume and wait for the expansion to complete.
6. Re-launch a pod for the original volume.
7. Write data to the original volume and take the 2nd backup. (Make sure the total data size is larger than the original volume size so that there is data written to the expanded part.)
8. Wait for the 2nd backup creation to complete.
9. Trigger the DR volume last backup update, then crash one random replica of the DR volume.
10. Check that the replica is rebuilt and the restore volume becomes `Healthy` after the rebuilding.
11. Wait for the expansion, restore, and rebuild to complete.
12. Verify the DR volume size and snapshot count after the restore.
13. Write data to the original volume and take the 3rd backup.
14. Wait for the 3rd backup creation to complete, then trigger the incremental restore for the DR volume.
15. Activate the DR volume and wait for it to be activated.
16. Launch a pod for the activated DR volume.
17. Verify the restored data of the activated DR volume.
18. Write more data to the activated DR volume, then verify all replicas are still running.
19. Crash one random replica of the activated DR volume.
20. Wait for the rebuild to complete, then verify the activated volume still works fine.
### Manual test

1. Launch Longhorn v1.0.1.
2. Launch a pod with a Longhorn volume.
3. Write data to the volume and take the 1st backup.
4. Create 2 DR volumes from the 1st backup.
5. Shut down the pod and wait for the original volume to be detached.
6. Expand the original volume and wait for the expansion to complete.
7. Write data to the original volume and take the 2nd backup. (Make sure the total data size is larger than the original volume size so that there is data written to the expanded part.)
8. Trigger the incremental restore for the DR volumes by listing the backup volumes, and wait for the restore to complete.
9. Upgrade Longhorn to the latest version.
10. Crash one random replica of the 1st DR volume.
11. Verify the 1st DR volume won't rebuild replicas and keeps state `Degraded`.
12. Write data to the original volume and take the 3rd backup.
13. Trigger the incremental restore for the DR volumes, and wait for the restore to complete.
14. Do a live upgrade for the 1st DR volume. This live upgrade call should fail and nothing should change.
15. Activate the 1st DR volume.
16. Launch a pod for the 1st activated volume, and verify the restored data is correct.
17. Do a live upgrade for the original volume and the 2nd DR volume.
18. Crash one random replica of the 2nd DR volume.
19. Wait for the restore & rebuild to complete.
20. Delete one replica of the 2nd DR volume, then activate the DR volume before the rebuild completes.
21. Verify the DR volume will be auto detached after the rebuild completes.
22. Launch a pod for the 2nd activated volume, and verify the restored data is correct.
23. Crash one replica of the 2nd activated volume.
24. Wait for the rebuild to complete, then verify the volume still works fine by reading/writing more data.
### Upgrade strategy

Live upgrade is supported.

## Note

It's possible that the restore/DR volume rebuilding somehow gets stuck, or that users have no time to wait for the rebuilding to finish. We need to provide a way for users to use the volume as soon as possible. This enhancement is tracked in https://github.com/longhorn/longhorn/issues/1512.
@@ -0,0 +1,83 @@
# Replica Eviction Support for Disks and Nodes

## Summary

This enhancement simplifies replica eviction: per the user's request, the replicas on selected disabled disks or nodes are automatically evicted to other suitable disks and nodes, while the same level of fault tolerance is kept during the eviction.
### Related Issues

https://github.com/longhorn/longhorn/issues/292

https://github.com/longhorn/longhorn/issues/298
## Motivation

### Goals

1. Allow users to easily evict the replicas on the selected disks or nodes to other disks or nodes without impacting the user-defined `Volume.Spec.numberOfReplicas`, while keeping the same level of fault tolerance. This means the user-defined replica number is not changed.
2. Report any error to the user during the eviction.
3. Allow the user to cancel the eviction at any time.
## Proposal

1. Add an `Eviction Requested` setting with `true` and `false` selection buttons for disks and nodes. This lets the user request or cancel the eviction of the disks or the nodes.
2. Add a new `evictionRequested` field to `Node.Spec`, the `Node.Spec.disks` spec, and `Replica.Status` (see the sketch after this list). These fields track the user's request and trigger the replica controller to update `Replica.Status` and the volume controller to do the eviction, reconciling with the `scheduledReplica` of the selected disks on the nodes.
3. Display the `fail to evict` error message on the `Dashboard` and any other eviction errors in the `Event log`.
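A minimal sketch of where the new `evictionRequested` field would live; the surrounding fields and type names are simplified placeholders, not the actual Longhorn CRD definitions:

```go
package main

// DiskSpec is a simplified stand-in for the per-disk spec on a node.
type DiskSpec struct {
	AllowScheduling   bool `json:"allowScheduling"`
	EvictionRequested bool `json:"evictionRequested"` // new: per-disk eviction request
}

// NodeSpec is a simplified stand-in for the node spec.
type NodeSpec struct {
	AllowScheduling   bool                `json:"allowScheduling"`
	EvictionRequested bool                `json:"evictionRequested"` // new: per-node eviction request
	Disks             map[string]DiskSpec `json:"disks"`
}

// ReplicaStatus is a simplified stand-in for the replica status.
type ReplicaStatus struct {
	EvictionRequested bool `json:"evictionRequested"` // new: set by the replica controller
}

func main() {}
```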
### User Stories

### Disks and Nodes Eviction

For disk replacement or node replacement, the eviction needs to be done successfully in order to guarantee that Longhorn volumes keep functioning properly.

Previously, when a user wanted to evict a disk or a node, they needed to do the following steps:

1. Disable the disk or the node.
2. Scale up the replica count for each volume that has a replica on the disabled disks or nodes, wait for the rebuild to complete, scale down the replica count, then delete the replicas on this disk or node.

After this enhancement, the user can set `Eviction Requested` to `true` on scheduling-disabled disks or nodes, or select `Disable` for scheduling and set `Eviction Requested` to `true` at the same time, then save the change. The backend will take care of the eviction for the disks or nodes and clean up all the replicas on them.
### User Experience In Detail

#### Disks and Nodes Eviction

1. The user can set `Eviction Requested` to `true` for disks or nodes from the `Longhorn UI`. The user has to make sure the selected disks or nodes have been disabled, or select `Disable` for scheduling at the same time as setting `Eviction Requested` to `true`.
2. Once `Eviction Requested` has been set to `true` on the disks or nodes, they cannot be enabled for `Scheduling`.
3. If the disks or nodes haven't been disabled for `Scheduling`, an error message will be shown on the `Dashboard` immediately to indicate that the user needs to disable the disk or node for the eviction.
4. The user then waits for the replica number on the disks or nodes to become 0.
5. If there is any error, e.g. no space left or no other schedulable disk found, the error message will be logged in the `Event log`, and the eviction will be suspended until the user either sets `Eviction Requested` to `false` or frees up more disk space for the new replicas.
6. If the user cancels the eviction by setting `Eviction Requested` to `false`, the remaining replicas on the selected disks or nodes will stay where they are.
### API changes

From an API perspective, the calls to set `Eviction Requested` to `true` or `false` for a `Node` or a `Disk` should look the same. The logic for handling the new `evictionRequested` field should live in the `Node Controller` and the `Volume Controller`. A minimal validation sketch follows.
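The sketch below illustrates the check implied by the user experience section: requesting an eviction while scheduling is still enabled should be rejected immediately. The function name and signature are hypothetical, not Longhorn's actual API:

```go
package main

import (
	"errors"
	"fmt"
)

// validateEvictionRequest rejects an eviction request when scheduling on the
// disk or node is still enabled, mirroring the Dashboard error described above.
func validateEvictionRequest(allowScheduling, evictionRequested bool) error {
	if evictionRequested && allowScheduling {
		return errors.New("disable scheduling on the disk or node before requesting eviction")
	}
	return nil
}

func main() {
	fmt.Println(validateEvictionRequest(true, true))  // error: scheduling still enabled
	fmt.Println(validateEvictionRequest(false, true)) // <nil>: eviction may proceed
}
```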
## Design

### Implementation Overview

1. On the `Longhorn UI` `Node` page, for node eviction, add `Eviction Requested` `true` and `false` options in the `Edit Node` sub-selection, next to `Node Scheduling`. For disk eviction, add `Eviction Requested` `true` and `false` options in the `Edit node and disks` sub-selection under the `Operation` column, next to each disk's `Scheduling` options. This lets the user request or cancel the eviction of the disks or the nodes.
2. Add a new `evictionRequested` field to `Node.Spec`, the `Node.Spec.disks` spec, and `Replica.Status`. These fields track the user's request and trigger the replica controller to update `Replica.Status` and the volume controller to do the eviction, reconciling with the `scheduledReplica` of the selected disks on the nodes.
3. Add an informer in the `Replica Controller` to get this information and update the `evictionRequested` field in `Replica.Status`.
4. Once `Eviction Requested` has been set to `true` for disks or nodes, the `evictionRequested` fields for those disks and nodes will be set to `true` (the default is `false`).
5. The `Replica Controller` will update the `evictionRequested` field in `Replica.Status`, and the `Volume Controller` will get this information from its replicas.
6. When reconciling the engine and replicas, use `Replica.Status.EvictionRequested` of the volume's replicas to trigger rebuilds for the affected volumes' replicas, then remove one replica with `evictionRequested` set to `true` (see the sketch after this list).
7. Log any errors to the `Event log` during the reconcile process.
8. In the end, from the `Longhorn UI`, the replica number on the evicted disks or nodes should be 0, which means the eviction succeeded.
9. If the volume is `Detached`, Longhorn will automatically attach the volume, do the eviction, and automatically detach the volume once the eviction succeeds. If there is any error during the eviction, the eviction will be suspended until the user solves the problem, and the auto detach will be triggered at the end.
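A minimal sketch of the removal decision in item 6, using simplified stand-in types rather than Longhorn's actual controller code: an evicting replica is only removed once a healthy replacement exists, so fault tolerance is preserved throughout the eviction.

```go
package main

import "fmt"

// Replica is a simplified stand-in for Longhorn's replica object.
type Replica struct {
	Name              string
	EvictionRequested bool
	Healthy           bool // true once the replica is fully rebuilt and running
}

// replicaToEvict picks one evicting replica that is safe to remove: at least
// one healthy non-evicting replica must exist, so removing the evicting one
// never drops the volume below its rebuilt replacement.
func replicaToEvict(replicas []Replica) (string, bool) {
	healthyNonEvicting := 0
	for _, r := range replicas {
		if r.Healthy && !r.EvictionRequested {
			healthyNonEvicting++
		}
	}
	if healthyNonEvicting == 0 {
		return "", false // replacement not rebuilt yet; keep waiting
	}
	for _, r := range replicas {
		if r.EvictionRequested {
			return r.Name, true
		}
	}
	return "", false
}

func main() {
	replicas := []Replica{
		{Name: "replica-a", EvictionRequested: true, Healthy: true},
		{Name: "replica-b", Healthy: true}, // freshly rebuilt replacement
	}
	if name, ok := replicaToEvict(replicas); ok {
		fmt.Println("remove evicting replica:", name)
	}
}
```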
### Test plan

#### Manual Test Plan For Disks and Nodes Eviction

Positive Case:

Test with `Replica Node Level Soft Anti-Affinity` both enabled and disabled, and with the volume either `Attached` or `Detached`.

1. The user can select one or more disks or nodes for eviction. After setting `Eviction Requested` to `true` on the disabled disks or nodes, Longhorn should start rebuilding replicas for the volumes that have replicas on the evicted disks or nodes, and after the rebuild succeeds, the replica number on the evicted disks or nodes should be 0. E.g.: with 3 nodes in the cluster and `Replica Node Level Soft Anti-Affinity` set to `false`, disable one node and create a volume with replica count 2. Then evict one of the replicas' nodes; the eviction should get stuck. Setting `Replica Node Level Soft Anti-Affinity` to `true` should let the eviction go through.

Negative Cases:

1. If the user selects disks or nodes that have not had scheduling disabled, Longhorn should display the error message on the `Dashboard` immediately. Also, during the eviction, the disabled disk or node cannot be re-enabled.
2. If there is not enough disk space or there are not enough nodes for the eviction, Longhorn should log the error message in the `Event Log`, and once enough disk space or node resources are available, the eviction should continue. If the user sets `Eviction Requested` to `false`, Longhorn should stop the eviction and clear the `evictionRequested` fields for the node, disk, and volume CRD objects. E.g.: with 3 nodes in the cluster and a volume replica count of 3, the eviction should get stuck when `Replica Node Level Soft Anti-Affinity` is `false`.
#### Integration Test Plan

With `Replica Node Level Soft Anti-Affinity` enabled, create 2 replicas on the same disk or node, then evict this disk or node; the 2 replicas should go to another disk or node.

With `Replica Node Level Soft Anti-Affinity` disabled, create 1 replica on a disk, then evict this disk or node; the replica should go to another disk or node.

For node eviction, Longhorn processes the eviction based on the disks of the node, similar to disk eviction. After the eviction succeeds, the replica number on the evicted node should be 0.
#### Error Indication

During the eviction, the user can click the `Replicas Number` on the `Node` page to see which replicas are left from the eviction, and clicking the `Replica Name` will redirect the user to the `Volume` page to see if there is any error for this volume. If there is any error during the rebuild, Longhorn should display the error message in the UI. The error could be `failed to schedule a replica` due to insufficient disk space, or because no valid disk can be found under the scheduling policy.
### Upgrade strategy

No special upgrade strategy is necessary. Once the user upgrades to the new version of `Longhorn`, these new capabilities will be accessible from the `longhorn-ui` without any special work.